Sample records for model validation purposes

  1. Modeling Piezoelectric Stack Actuators for Control of Micromanipulation

    NASA Technical Reports Server (NTRS)

    Goldfarb, Michael; Celanovic, Nikola

    1997-01-01

    A nonlinear lumped-parameter model of a piezoelectric stack actuator has been developed to describe actuator behavior for purposes of control system analysis and design, and, in particular, for microrobotic applications requiring accurate position and/or force control. In formulating this model, the authors propose a generalized Maxwell resistive capacitor as a lumped-parameter causal representation of rate-independent hysteresis. Model formulation is validated by comparing results of numerical simulations to experimental data. Validation is followed by a discussion of model implications for purposes of actuator control.

  2. U.S. 75 Dallas, Texas, Model Validation and Calibration Report

    DOT National Transportation Integrated Search

    2010-02-01

    This report presents the model validation and calibration results of the Integrated Corridor Management (ICM) analysis, modeling, and simulation (AMS) for the U.S. 75 Corridor in Dallas, Texas. The purpose of the project was to estimate the benefits ...

  3. Adolescent Personality: A Five-Factor Model Construct Validation

    ERIC Educational Resources Information Center

    Baker, Spencer T.; Victor, James B.; Chambers, Anthony L.; Halverson, Jr., Charles F.

    2004-01-01

    The purpose of this study was to investigate convergent and discriminant validity of the five-factor model of adolescent personality in a school setting using three different raters (methods): self-ratings, peer ratings, and teacher ratings. The authors investigated validity through a multitrait-multimethod matrix and a confirmatory factor…

  4. Rethinking Validation in Complex High-Stakes Assessment Contexts

    ERIC Educational Resources Information Center

    Koch, Martha J.; DeLuca, Christopher

    2012-01-01

    In this article we rethink validation within the complex contexts of high-stakes assessment. We begin by considering the utility of existing models for validation and argue that these models tend to overlook some of the complexities inherent to assessment use, including the multiple interpretations of assessment purposes and the potential…

  5. Addendum to validation of FHWA's Traffic Noise Model (TNM): Phase 1

    DOT National Transportation Integrated Search

    2004-07-01

    The Federal Highway Administration (FHWA) is conducting a multiple-phase study to assess the accuracy and make recommendations on the use of FHWA's Traffic Noise Model (TNM). The TNM Validation Study involves highway noise data collection and TNM modeling for the purpose of data com...

  6. A Conceptual Model of Career Development to Enhance Academic Motivation

    ERIC Educational Resources Information Center

    Collins, Nancy Creighton

    2010-01-01

    The purpose of this study was to develop, refine, and validate a conceptual model of career development to enhance the academic motivation of community college students. To achieve this end, a straw model was built from the theoretical and empirical research literature. The model was then refined and validated through three rounds of a Delphi…

  7. Validation of catchment models for predicting land-use and climate change impacts. 1. Method

    NASA Astrophysics Data System (ADS)

    Ewen, J.; Parkin, G.

    1996-02-01

    Computer simulation models are increasingly being proposed as tools capable of giving water resource managers accurate predictions of the impact of changes in land-use and climate. Previous validation testing of catchment models is reviewed, and it is concluded that the methods used do not clearly test a model's fitness for such a purpose. A new generally applicable method is proposed. This involves the direct testing of fitness for purpose, uses established scientific techniques, and may be implemented within a quality assured programme of work. The new method is applied in Part 2 of this study (Parkin et al., J. Hydrol., 175:595-613, 1996).

  8. A Model-Based Method for Content Validation of Automatically Generated Test Items

    ERIC Educational Resources Information Center

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  9. Team Psychological Safety and Team Learning: A Cultural Perspective

    ERIC Educational Resources Information Center

    Cauwelier, Peter; Ribière, Vincent M.; Bennet, Alex

    2016-01-01

    Purpose: The purpose of this paper was to evaluate if the concept of team psychological safety, a key driver of team learning and originally studied in the West, can be applied in teams from different national cultures. The model originally validated for teams in the West is applied to teams in Thailand to evaluate its validity, and the views team…

  10. Construct Validation of the Louisiana School Analysis Model (SAM) Instructional Staff Questionnaire

    ERIC Educational Resources Information Center

    Bray-Clark, Nikki; Bates, Reid

    2005-01-01

    The purpose of this study was to validate the Louisiana SAM Instructional Staff Questionnaire, a key component of the Louisiana School Analysis Model. The model was designed as a comprehensive evaluation tool for schools. Principal axis factoring with oblique rotation was used to uncover the underlying structure of the SISQ. (Contains 1 table.)

  11. Combined Use of Tissue Morphology, Neural Network Analysis of Chromatin Texture and Clinical Variables to Predict Prostate Cancer Aggressiveness from Biopsy Water

    DTIC Science & Technology

    2000-10-01

    Purpose: To combine clinical, serum, pathologic and computer derived information into an artificial neural network to develop/validate a model to... Development of an artificial neural network (year 02). Prospective validation of this model (projected year 03). All models will be tested and

  12. Development and Validation of Personality Disorder Spectra Scales for the MMPI-2-RF.

    PubMed

    Sellbom, Martin; Waugh, Mark H; Hopwood, Christopher J

    2018-01-01

    The purpose of this study was to develop and validate a set of MMPI-2-RF (Ben-Porath & Tellegen, 2008/2011) personality disorder (PD) spectra scales. These scales could serve the purpose of assisting with DSM-5 PD diagnosis and help link categorical and dimensional conceptions of personality pathology within the MMPI-2-RF. We developed and provided initial validity results for scales corresponding to the 10 PD constructs listed in the DSM-5 using data from student, community, clinical, and correctional samples. Initial validation efforts indicated good support for criterion validity with an external PD measure as well as with dimensional personality traits included in the DSM-5 alternative model for PDs. Construct validity results using psychosocial history and therapists' ratings in a large clinical sample were generally supportive as well. Overall, these brief scales provide clinicians using MMPI-2-RF data with estimates of DSM-5 PD constructs that can support cross-model connections between categorical and dimensional assessment approaches.

  13. A New Ethic for Health Promotion: Reflections on a Philosophy of Health Education for the 21st Century

    ERIC Educational Resources Information Center

    Buchanan, David R.

    2006-01-01

    This article describes two models for thinking about the purposes of health education--a medical model and an education model--and traces how concerns about the validity of research have driven preference for the medical model. In the medical model, the purpose of health education is to develop effective interventions that will prevent people from…

  14. The Theory of Planned Behavior (TPB) and Pre-Service Teachers' Technology Acceptance: A Validation Study Using Structural Equation Modeling

    ERIC Educational Resources Information Center

    Teo, Timothy; Tan, Lynde

    2012-01-01

    This study applies the theory of planned behavior (TPB), a theory that is commonly used in commercial settings, to the educational context to explain pre-service teachers' technology acceptance. It is also interested in examining its validity when used for this purpose. It has found evidence that the TPB is a valid model to explain pre-service…

  15. The Effects of Cognitive Style on Edmodo Users' Behaviour: A Structural Equation Modeling-Based Multi-Group Analysis

    ERIC Educational Resources Information Center

    Ursavas, Omer Faruk; Reisoglu, Ilknur

    2017-01-01

    Purpose: The purpose of this paper is to explore the validity of extended technology acceptance model (TAM) in explaining pre-service teachers' Edmodo acceptance and the variation of variables related to TAM among pre-service teachers having different cognitive styles. Design/methodology/approach: Structural equation modeling approach was used to…

  16. Working Memory Structure in 10- and 15-Year-Old Children with Mild to Borderline Intellectual Disabilities

    ERIC Educational Resources Information Center

    van der Molen, Mariet J.

    2010-01-01

    The validity of Baddeley's working memory model within the typically developing population was tested. However, it is not clear if this model also holds in children and adolescents with mild to borderline intellectual disabilities (ID; IQ score 55-85). The main purpose of this study was therefore to explore the model's validity in this…

  17. Culturally Adapted Skill Use as a Therapeutic Alliance Catalyst

    ERIC Educational Resources Information Center

    Lewicki, Todd

    2015-01-01

    Purpose: In this article, I explore how the therapeutic alliance, along with culturally competent and adapted skill use can be positively correlated with treatment outcome when using the ecological validity model as the frame. The ecological validity model refers to the degree to which there is consistency between the environment as experienced by…

  18. Validation and Use of the Multidimensional Wellness Inventory in Collegiate Student-Athletes and First-Generation Students

    ERIC Educational Resources Information Center

    Mayol, Mindy Hartman; Scott, Brianna M.; Schreiber, James B.

    2017-01-01

    Background: In some professions, "wellness" has become shorthand for physical fitness and nutrition but dimensions outside the physical are equally important. As wellness models continue to materialize, a validated instrument is needed to substantiate the characteristics of a multidimensional wellness model. Purpose: This 2-pronged study…

  19. Validating Work Discrimination and Coping Strategy Models for Sexual Minorities

    ERIC Educational Resources Information Center

    Chung, Y. Barry; Williams, Wendi; Dispenza, Franco

    2009-01-01

    The purpose of this study was to validate and expand on Y. B. Chung's (2001) models of work discrimination and coping strategies among lesbian, gay, and bisexual persons. In semistructured individual interviews, 17 lesbians and gay men reported 35 discrimination incidents and their related coping strategies. Responses were coded based on Chung's…

  20. Validating Cognitive Models of Task Performance in Algebra on the SAT®. Research Report No. 2009-3

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Leighton, Jacqueline P.; Wang, Changjiang; Zhou, Jiawen; Gokiert, Rebecca; Tan, Adele

    2009-01-01

    The purpose of the study is to present research focused on validating the four algebra cognitive models in Gierl, Wang, et al., using student response data collected with protocol analysis methods to evaluate the knowledge structures and processing skills used by a sample of SAT test takers.

  1. Validation of 2D flood models with insurance claims

    NASA Astrophysics Data System (ADS)

    Zischg, Andreas Paul; Mosimann, Markus; Bernet, Daniel Benjamin; Röthlisberger, Veronika

    2018-02-01

    Flood impact modelling requires reliable models for the simulation of flood processes. In recent years, flood inundation models have been remarkably improved and widely used for flood hazard simulation, flood exposure and loss analyses. In this study, we validate a 2D inundation model for the purpose of flood exposure analysis at the river reach scale. We validate the BASEMENT simulation model with insurance claims using conventional validation metrics. The flood model is established on the basis of available topographic data at high spatial resolution for four test cases. The validation metrics were calculated with two different datasets: a dataset of event documentation reporting flooded areas and a dataset of insurance claims. In three out of four test cases, the model fit relating to insurance claims is slightly lower than the model fit computed on the basis of the observed inundation areas. This comparison between two independent validation datasets suggests that validation metrics using insurance claims can be compared to conventional validation data, such as the flooded area. However, a validation on the basis of insurance claims might be more conservative in cases where model errors are more pronounced in areas with a high density of values at risk.
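The conventional validation metrics referred to in this record are typically contingency-table scores computed over binary flood masks. As an illustration only (the function name and sample masks below are invented for this sketch, and the metric definitions are the standard hit rate, false alarm ratio, and critical success index, not necessarily the exact scores used in the study), such a comparison might look like:

```python
import numpy as np

def flood_validation_metrics(simulated, observed):
    """Contingency-based fit metrics for two binary inundation masks.

    simulated, observed: boolean arrays of equal shape (True = flooded cell).
    """
    sim = np.asarray(simulated, dtype=bool)
    obs = np.asarray(observed, dtype=bool)
    hits = np.sum(sim & obs)           # flooded in both model and observation
    false_alarms = np.sum(sim & ~obs)  # flooded only in the model
    misses = np.sum(~sim & obs)        # flooded only in the observation
    return {
        "hit_rate": hits / (hits + misses),
        "false_alarm_ratio": false_alarms / (hits + false_alarms),
        "critical_success_index": hits / (hits + misses + false_alarms),
    }

# Tiny hypothetical 2x3 grids: model output vs. an observation-derived mask.
sim = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
obs = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
m = flood_validation_metrics(sim, obs)
```

The same scores can be computed once against documented flooded areas and once against a mask derived from geocoded insurance claims, which is the comparison the study reports.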

  2. Concurrent Validity of Hill's Educational Cognitive Style Model as a Prototype for Successful Academic Programs Among Lower-Class Students.

    ERIC Educational Resources Information Center

    London, David T.

    Data from the stepwise multiple regression of four educational cognitive style predictor sets on each of six academic competence criteria were used to define the concurrent validity of Hill's educational cognitive style model. The purpose was to determine how appropriate it may be to use this model as a prototype for successful academic programs…

  3. Evaluation of animal models of neurobehavioral disorders

    PubMed Central

    van der Staay, F Josef; Arndt, Saskia S; Nordquist, Rebecca E

    2009-01-01

    Animal models play a central role in all areas of biomedical research. The process of animal model building, development and evaluation has rarely been addressed systematically, despite the long history of using animal models in the investigation of neuropsychiatric disorders and behavioral dysfunctions. An iterative, multi-stage trajectory for developing animal models and assessing their quality is proposed. The process starts with defining the purpose(s) of the model, preferably based on hypotheses about brain-behavior relationships. Then, the model is developed and tested. The evaluation of the model takes scientific and ethical criteria into consideration. Model development requires a multidisciplinary approach. Preclinical and clinical experts should establish a set of scientific criteria which a model must meet. The scientific evaluation consists of assessing the replicability/reliability, predictive, construct and external validity/generalizability, and relevance of the model. We emphasize the role of (systematic and extended) replications in the course of the validation process. One may apply a multiple-tiered 'replication battery' to estimate the reliability/replicability, validity, and generalizability of results. Compromised welfare is inherent in many deficiency models in animals. Unfortunately, 'animal welfare' is a vaguely defined concept, making it difficult to establish exact evaluation criteria. Weighing the animal's welfare and considering whether action is indicated to reduce discomfort must accompany the scientific evaluation at every stage of the model building and evaluation process. Animal model building should be discontinued if the model does not meet the preset scientific criteria, or when animal welfare is severely compromised. The application of the evaluation procedure is exemplified using the rat with neonatal hippocampal lesion as a proposed model of schizophrenia.
    Just as animal models themselves can be improved by following the procedure expounded in this paper, the development and evaluation procedure itself may be improved by carefully defining the purpose(s) of a model and by defining better evaluation criteria based on the proposed use of the model. PMID:19243583

  4. The Student-Customer Orientation Questionnaire (SCOQ): Application of Customer Metaphor to Higher Education

    ERIC Educational Resources Information Center

    Koris, Riina; Nokelainen, Petri

    2015-01-01

    Purpose: The purpose of this paper is to apply Bayesian dependency modelling (BDM) to validate the model of educational experiences and the student-customer orientation questionnaire (SCOQ), and to identify the categories of educational experience in which students expect a higher education institution (HEI) to be student-customer oriented.…

  5. Friendship Quality Scale: Conceptualization, Development and Validation

    ERIC Educational Resources Information Center

    Thien, Lei Mee; Razak, Nordin Abd; Jamil, Hazri

    2012-01-01

    The purpose of this study is twofold: (1) to initialize a new conceptualization of positive feature based Friendship Quality (FQUA) scale on the basis of four dimensions: Closeness, Help, Acceptance, and Safety; and (2) to develop and validate FQUA scale in the form of reflective measurement model. The scale development and validation procedures…

  6. Description of a Website Resource for Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Smith, Brian R.; Huang, George P.

    2010-01-01

    The activities of the Turbulence Model Benchmarking Working Group - which is a subcommittee of the American Institute of Aeronautics and Astronautics (AIAA) Fluid Dynamics Technical Committee - are described. The group's main purpose is to establish a web-based repository for Reynolds-averaged Navier-Stokes turbulence model documentation, including verification and validation cases. This turbulence modeling resource has been established based on feedback from a survey on what is needed to achieve consistency and repeatability in turbulence model implementation and usage, and to document and disseminate information on new turbulence models or improvements to existing models. The various components of the website are described in detail: description of turbulence models, turbulence model readiness rating system, verification cases, validation cases, validation databases, and turbulence manufactured solutions. An outline of future plans of the working group is also provided.

  7. Development of Learning Models Based on Problem Solving and Meaningful Learning Standards by Expert Validity for Animal Development Course

    NASA Astrophysics Data System (ADS)

    Lufri, L.; Fitri, R.; Yogica, R.

    2018-04-01

    The purpose of this study is to produce a learning model based on problem solving and meaningful learning standards, validated by expert assessment, for the Animal Development course. This is development research that produces a product in the form of a learning model, consisting of two sub-products: the syntax of the learning model and student worksheets. All of these products were standardized through expert validation. The research data are the validity levels of the sub-products, obtained using questionnaires completed by validators from various fields of expertise (subject matter, learning strategy, and language). Data were analysed using descriptive statistics. The results show that the problem-solving and meaningful-learning model has been produced, and the sub-products declared appropriate by the experts include the syntax of the learning model and the student worksheets.

  8. A Comprehensive Validation Methodology for Sparse Experimental Data

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
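The two kinds of metric described in this record can be illustrated with a small sketch. The definitions below (a cumulative relative difference for overall accuracy and a median relative difference for robust analysis of subsets) are plausible stand-ins, not the paper's exact formulations; the function name and sample numbers are invented:

```python
import numpy as np

def uncertainty_metrics(model, experiment):
    """Cumulative and median relative-difference metrics vs. sparse data."""
    model = np.asarray(model, dtype=float)
    experiment = np.asarray(experiment, dtype=float)
    rel = np.abs(model - experiment) / np.abs(experiment)  # per-point relative error
    # Cumulative metric: one number summarizing overall accuracy on the database.
    cumulative = np.sum(np.abs(model - experiment)) / np.sum(np.abs(experiment))
    # Median metric: robust to outliers, useful on subsets of the parameter space.
    return {"cumulative": cumulative, "median": np.median(rel)}

# Hypothetical predicted vs. measured cross sections (arbitrary units).
model_xs = [110.0, 90.0, 100.0]
exp_xs = [100.0, 100.0, 100.0]
u = uncertainty_metrics(model_xs, exp_xs)
```

Recomputing such metrics automatically after each model revision is what lets comparisons "readily be made as models are improved" under configuration control.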

  9. Evolution of Quality Assurance for Clinical Immunohistochemistry in the Era of Precision Medicine. Part 3: Technical Validation of Immunohistochemistry (IHC) Assays in Clinical IHC Laboratories.

    PubMed

    Torlakovic, Emina E; Cheung, Carol C; D'Arrigo, Corrado; Dietel, Manfred; Francis, Glenn D; Gilks, C Blake; Hall, Jacqueline A; Hornick, Jason L; Ibrahim, Merdol; Marchetti, Antonio; Miller, Keith; van Krieken, J Han; Nielsen, Soren; Swanson, Paul E; Vyberg, Mogens; Zhou, Xiaoge; Taylor, Clive R

    2017-03-01

    Validation of immunohistochemistry (IHC) assays is a subject that is of great importance to clinical practice as well as basic research and clinical trials. When applied to clinical practice and focused on patient safety, validation of IHC assays creates objective evidence that IHC assays used for patient care are "fit-for-purpose." Validation of IHC assays needs to be properly informed by and modeled to assess the purpose of the IHC assay, which will further determine what sphere of validation is required, as well as the scope, type, and tier of technical validation. These concepts will be defined in this review, part 3 of the 4-part series "Evolution of Quality Assurance for Clinical Immunohistochemistry in the Era of Precision Medicine."

  10. Contextual Differential Item Functioning: Examining the Validity of Teaching Self-Efficacy Instruments Using Hierarchical Generalized Linear Modeling

    ERIC Educational Resources Information Center

    Zhao, Jing

    2012-01-01

    The purpose of the study is to further investigate the validity of instruments used for collecting preservice teachers' perceptions of self-efficacy adapting the three-level IRT model described in Cheong's study (2006). The focus of the present study is to investigate whether the polytomously-scored items on the preservice teachers' self-efficacy…

  11. Computer simulation of Cerebral Arteriovenous Malformation-validation analysis of hemodynamics parameters.

    PubMed

    Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath

    2017-01-01

    The purpose of this work is to provide validation methods for evaluating the hemodynamic assessment of Cerebral Arteriovenous Malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models for complex vessel structures. The validation of the hemodynamic assessment is based on invasive clinical measurements and on cross-validation against Philips' proprietary validated software packages Qflow and 2D Perfusion. The modeling results are validated for 30 CAVM patients at 150 vessel locations. Mean flow, diameter, and pressure were compared between modeling results and clinical/cross-validation measurements using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analyses of the relationships among vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow were performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations in cerebral regions, and the model was cross-validated against Qflow and 2D Perfusion. Our results show that the modeling results closely match the clinical results, with only small deviations. In this article, we have validated our modeling results against clinical measurements and proposed a new approach for cross-validation by demonstrating the accuracy of our results against a validated product in a clinical environment.
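The independent two-tailed Student t test used for such comparisons can be sketched directly from its pooled-variance formula. This is a generic illustration, not the authors' analysis code; the function name and the sample values are invented:

```python
import numpy as np

def independent_t(a, b):
    """Two-sample Student t statistic (pooled variance) and degrees of freedom."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    # Pooled variance assumes the two groups share a common variance.
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    t = (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return t, na + nb - 2

# Hypothetical modeled vs. measured mean flows at four vessel locations.
t, df = independent_t([4.1, 3.8, 5.0, 4.4], [4.0, 3.9, 4.8, 4.5])
```

The statistic `t` is then compared against the Student t distribution with `df` degrees of freedom to obtain the two-tailed p value; a non-significant result supports agreement between model and measurement.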

  12. Mathematical modeling in realistic mathematics education

    NASA Astrophysics Data System (ADS)

    Riyanto, B.; Zulkardi; Putri, R. I. I.; Darmawijoyo

    2017-12-01

    The purpose of this paper is to produce mathematical modelling tasks in Realistic Mathematics Education for junior high school. This study used development research consisting of three stages: analysis, design, and evaluation. The success criteria of this study were a local instruction theory for school mathematical modelling learning that was valid and practical for students. The data were analyzed using descriptive methods as follows: (1) walk-through analysis based on expert comments in the expert review, to obtain a Hypothetical Learning Trajectory for valid mathematical modelling learning; (2) analysis of the results of the one-to-one and small-group reviews, to establish practicality. Based on the expert validation and the students' opinions and answers, the resulting mathematical modelling problems in Realistic Mathematics Education were valid and practical.

  13. Development and Validity Testing of Belief Measurement Model in Buddhism for Junior High School Students at Chiang Rai Buddhist Scripture School: An Application for Multitrait-Multimethod Analysis

    ERIC Educational Resources Information Center

    Chaidi, Thirachai; Damrongpanich, Sunthorapot

    2016-01-01

    The purposes of this study were to develop a model to measure the belief in Buddhism of junior high school students at Chiang Rai Buddhist Scripture School, and to determine construct validity of the model for measuring the belief in Buddhism by using Multitrait-Multimethod analysis. The samples were 590 junior high school students at Buddhist…

  14. The Model Human Processor and the Older Adult: Parameter Estimation and Validation Within a Mobile Phone Task

    PubMed Central

    Jastrzembski, Tiffany S.; Charness, Neil

    2009-01-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
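The keystroke-level structure of such a model can be sketched as a sum of operator times along the task sequence, with separate parameter sets for younger and older users. The operator values below are illustrative placeholders, not the weighted means estimated in the paper:

```python
# Keystroke-level sketch: predicted task time is the sum of operator times.
# Parameter values are illustrative placeholders, not the paper's estimates.
YOUNG = {"keystroke": 0.17, "pointing": 1.10, "mental": 1.35}  # seconds
OLDER = {"keystroke": 0.26, "pointing": 1.50, "mental": 1.80}  # seconds

def task_time(operators, params):
    """Predicted execution time for a sequence of model operators."""
    return sum(params[op] for op in operators)

# Dialing one digit on a phone: locate the key, move to it, press it.
dial_digit = ["mental", "pointing", "keystroke"]
young_t = task_time(dial_digit, YOUNG)
older_t = task_time(dial_digit, OLDER)
```

Comparing predicted against observed times per operator sequence is what yields the aggregate fit (the R = 0.99 reported above), and inflated per-operator parameters are how age-related slowing enters the prediction.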

  15. The Model Human Processor and the older adult: parameter estimation and validation within a mobile phone task.

    PubMed

    Jastrzembski, Tiffany S; Charness, Neil

    2007-12-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies.

  16. Agent-Based vs. Equation-based Epidemiological Models:A Model Selection Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R; Nutaro, James J

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu, and we leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk in choosing one modeling paradigm over another. We conclude with a discussion of our experience and document future ideas for a model validation framework.
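An equation-based model of the kind contrasted in this record is typically a compartmental (SIR) system integrated over time. The sketch below uses forward-Euler integration with illustrative parameters, not values fitted to 1918 Spanish flu data:

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the SIR equations (population fractions):
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I."""
    new_infections = beta * s * i
    recoveries = gamma * i
    return (s - new_infections * dt,
            i + (new_infections - recoveries) * dt,
            r + recoveries * dt)

# Illustrative parameters: basic reproduction number beta/gamma = 2.
s, i, r = 0.99, 0.01, 0.0
for _ in range(1000):  # integrate to t = 100 with dt = 0.1
    s, i, r = sir_step(s, i, r, beta=0.5, gamma=0.25, dt=0.1)
```

An agent-based counterpart replaces these aggregate rates with per-individual stochastic transitions and contact structure, which is what makes direct validation comparisons between the two paradigms non-trivial.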

  17. A Derivation of the Analytical Relationship between the Projected Albedo-Area Product of a Space Object and its Aggregate Photometric Measurements

    DTIC Science & Technology

    2013-09-01

    model, they are, for all intents and purposes, simply unit-less linear weights. Although this equation is technically valid for a Lambertian... modeled as a single flat facet, the same model cannot be assumed equally valid for the body. The body, after all, is a complex, three-dimensional...facet (termed the “body”) and the solar tracking parts of the object as another facet (termed the solar panels). This comprises the two-facet model

  18. SAMICS Validation. SAMICS Support Study, Phase 3

    NASA Technical Reports Server (NTRS)

    1979-01-01

    SAMICS provides a consistent basis for estimating array costs and compares production technology costs. A review and a validation of the SAMICS model are reported. The review had the following purposes: (1) to test the computational validity of the computer model by comparison with preliminary hand calculations based on conventional cost estimating techniques; (2) to review and improve the accuracy of the cost relationships being used by the model; and (3) to provide an independent verification to users of the model's value in decision making for allocation of research and development funds and for investment in manufacturing capacity. It is concluded that the SAMICS model is a flexible, accurate, and useful tool for managerial decision making.

  19. Factor Validity of the Motivated Strategies for Learning Questionnaire (MSLQ) in Asynchronous Online Learning Environments (AOLE)

    ERIC Educational Resources Information Center

    Cho, Moon-Heum; Summers, Jessica

    2012-01-01

    The purpose of this study was to investigate the factor validity of the Motivated Strategies for Learning Questionnaire (MSLQ) in asynchronous online learning environments. In order to check the factor validity, confirmatory factor analysis (CFA) was conducted with 193 cases. Using CFA, it was found that the original measurement model fit for…

  20. Development and Validation of Scores from an Instrument Measuring Student Test-Taking Motivation

    ERIC Educational Resources Information Center

    Eklof, Hanna

    2006-01-01

    Using the expectancy-value model of achievement motivation as a basis, this study's purpose is to develop, apply, and validate scores from a self-report instrument measuring student test-taking motivation. Sampled evidence of construct validity for the present sample indicates that a number of the items in the instrument could be used as an…

  1. Development and Validation of the Meaning of Work Inventory among French Workers

    ERIC Educational Resources Information Center

    Arnoux-Nicolas, Caroline; Sovet, Laurent; Lhotellier, Lin; Bernaud, Jean-Luc

    2017-01-01

    The purpose of this study was to validate a psychometric instrument among French workers for assessing the meaning of work. Following an empirical framework, a two-step procedure consisted of exploring and then validating the scale among distinctive samples. The consequent Meaning of Work Inventory is a 15-item scale based on a four-factor model,…

  2. The Development and Validation of an End-User Satisfaction Measure in a Student Laptop Environment

    ERIC Educational Resources Information Center

    Kim, Sung; Meng, Juan; Kalinowski, Jon; Shin, Dooyoung

    2014-01-01

    The purpose of this paper is to present the development and validation of a measurement model for student user satisfaction in a laptop environment. Using a "quasi Delphi" method in addition to contributions from prior research we used EFA and CFA (LISREL) to identify a five factor (14 item) measurement model that best fit the data. The…

  3. Using Structural Equation Modeling to Validate the Theory of Planned Behavior as a Model for Predicting Student Cheating

    ERIC Educational Resources Information Center

    Mayhew, Matthew J.; Hubbard, Steven M.; Finelli, Cynthia J.; Harding, Trevor S.; Carpenter, Donald D.

    2009-01-01

    The purpose of this paper is to validate the use of a modified Theory of Planned Behavior (TPB) for predicting undergraduate student cheating. Specifically, we administered a survey assessing how the TPB relates to cheating along with a measure of moral reasoning (DIT- 2) to 527 undergraduate students across three institutions; and analyzed the…

  4. Validation of Alternative In Vitro Methods to Animal Testing: Concepts, Challenges, Processes and Tools.

    PubMed

    Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie

    This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, and the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems, and software, up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools, and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods, non-testing approaches such as predictive computer models, and entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. the EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised at the international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities.
Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, reliability and relevance of the test method for a given purpose. Relevance encapsulates the scientific basis of the test method, its capacity to predict adverse effects in the "target system" (i.e. human health or the environment) as well as its applicability for the intended purpose. In this chapter we focus on the validation of non-animal in vitro alternative testing methods and review the concepts, challenges, processes and tools fundamental to the validation of in vitro methods intended for hazard testing of chemicals. We explore major challenges and peculiarities of validation in this area. Based on the notion that validation per se is a scientific endeavour that needs to adhere to key scientific principles, namely objectivity and appropriate choice of methodology, we examine basic aspects of study design and management, and provide illustrations of statistical approaches to describe predictive performance of validated test methods as well as their reliability.

  5. Modelling of polymer photodegradation for solar cell modules

    NASA Technical Reports Server (NTRS)

    Somersall, A. C.; Guillet, J. E.

    1981-01-01

    A computer program developed to model and calculate, by numerical integration, the time-varying concentrations of chemical species formed during photooxidation of a polymeric material is evaluated; its inputs are a chosen set of elementary reactions, the corresponding rate constants, and a convenient set of starting conditions. Attempts were made to validate the proposed mechanism by experimentally monitoring the photooxidation products of small liquid alkanes, which are useful starting models for the ethylene segments of polymers such as EVA. The model system proved inappropriate for the intended purposes. Another validation model is recommended.

  6. Validation of the Transient Structural Response of a Threaded Assembly: Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott W.; Hemez, Francois M.; Robertson, Amy N.

    2004-04-01

    This report explores the application of model validation techniques in structural dynamics. The problem of interest is the propagation of an explosive-driven mechanical shock through a complex threaded joint. The study serves the purpose of assessing whether validating a large-size computational model is feasible, which unit experiments are required, and where the main sources of uncertainty reside. The results documented here are preliminary, and the analyses are exploratory in nature. The results obtained to date reveal several deficiencies of the analysis, to be rectified in future work.

  7. Construct Validity Evidence for Single-Response Items to Estimate Physical Activity Levels in Large Sample Studies

    ERIC Educational Resources Information Center

    Jackson, Allen W.; Morrow, James R., Jr.; Bowles, Heather R.; FitzGerald, Shannon J.; Blair, Steven N.

    2007-01-01

    Valid measurement of physical activity is important for studying the risks for morbidity and mortality. The purpose of this study was to examine evidence of construct validity of two similar single-response items assessing physical activity via self-report. Both items are based on the stages of change model. The sample was 687 participants (men =…

  8. Validating the cross-cultural factor structure and invariance property of the Insomnia Severity Index: evidence based on ordinal EFA and CFA.

    PubMed

    Chen, Po-Yi; Yang, Chien-Ming; Morin, Charles M

    2015-05-01

    The purpose of this study is to examine the factor structure of the Insomnia Severity Index (ISI) across samples recruited from different countries. We tried to identify the most appropriate factor model for the ISI and further examined the measurement invariance property of the ISI across samples from different countries. Our analyses included one data set collected from a Taiwanese sample and two data sets obtained from samples in Hong Kong and Canada. The data set collected in Taiwan was analyzed with ordinal exploratory factor analysis (EFA) to obtain the appropriate factor model for the ISI. After that, we conducted a series of confirmatory factor analyses (CFAs), a special case of the structural equation model (SEM) that concerns the parameters in the measurement model, on the data sets collected in Canada and Hong Kong. The purpose of these CFAs was to cross-validate the result obtained from the EFA and further examine the cross-cultural measurement invariance of the ISI. The three-factor model outperforms other models in terms of global fit indices in Taiwan's population. Its external validity is also supported by the confirmatory factor analyses. Furthermore, the measurement invariance analyses show that the strong invariance property between the samples from different cultures holds, providing evidence that ISI results obtained in different cultures are comparable. The factorial validity of the ISI is stable in different populations. More importantly, its invariance property across cultures suggests that the ISI is a valid measure of the insomnia severity construct across countries. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Validation Experiences and Persistence among Community College Students

    ERIC Educational Resources Information Center

    Barnett, Elisabeth A.

    2011-01-01

    The purpose of this correlational research was to examine the extent to which community college students' experiences with validation by faculty (Rendon, 1994, 2002) predicted: (a) their sense of integration, and (b) their intent to persist. The research was designed as an elaboration of constructs within Tinto's (1993) Longitudinal Model of…

  10. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test

    PubMed Central

    Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon

    2017-01-01

    [Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765
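
    The adjusted R² and standard error of estimation (SEE) used above to judge the regression model can be computed directly from the residuals. A minimal pure-Python sketch with invented HRT → HRLT pairs (not the study's measurements):

```python
import math

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def adjusted_r2_and_see(x, y, a, b):
    n, p = len(x), 1                       # p = number of predictors (HRT only)
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    ss_res = sum(r ** 2 for r in resid)
    my = sum(y) / n
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    see = math.sqrt(ss_res / (n - p - 1))  # standard error of estimation, in bpm
    return adj_r2, see

# Illustrative (hypothetical) heart-rate pairs in bpm:
hrt  = [130, 135, 140, 145, 150, 155, 160, 165]
hrlt = [128, 136, 139, 147, 149, 158, 159, 168]
a, b = fit_line(hrt, hrlt)
adj_r2, see = adjusted_r2_and_see(hrt, hrlt, a, b)
```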

  11. USE OF PHARMACOKINETIC MODELING TO DESIGN STUDIES FOR PATHWAY-SPECIFIC EXPOSURE MODEL EVALUATION

    EPA Science Inventory

    Validating an exposure pathway model is difficult because the biomarker, which is often used to evaluate the model prediction, is an integrated measure for exposures from all the exposure routes/pathways. The purpose of this paper is to demonstrate a method to use pharmacokeneti...

  12. An Assessment of the Validity of the ECERS-R with Implications for Measures of Child Care Quality and Relations to Child Development

    ERIC Educational Resources Information Center

    Gordon, Rachel A.; Fujimoto, Ken; Kaestner, Robert; Korenman, Sanders; Abner, Kristin

    2013-01-01

    The Early Childhood Environment Rating Scale-Revised (ECERS-R) is widely used to associate child care quality with child development, but its validity for this purpose is not well established. We examined the validity of the ECERS-R using the multidimensional Rasch partial credit model (PCM), factor analyses, and regression analyses with data from…

  13. Sample Size and Power Estimates for a Confirmatory Factor Analytic Model in Exercise and Sport: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying

    2011-01-01

    Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…

  14. Study of Bias in 2012-Placement Test through Rasch Model in Terms of Gender Variable

    ERIC Educational Resources Information Center

    Turkan, Azmi; Cetin, Bayram

    2017-01-01

    Validity and reliability are among the most crucial characteristics of a test. One of the steps to make sure that a test is valid and reliable is to examine the bias in test items. The purpose of this study was to examine the bias in 2012 Placement Test items in terms of gender variable using Rasch Model in Turkey. The sample of this study was…

  15. Illuminating the Black Box of Entrepreneurship Education Programmes: Part 2

    ERIC Educational Resources Information Center

    Maritz, Alex

    2017-01-01

    Purpose: The purpose of this paper is to provide a justified, legitimate and validated model on entrepreneurship education programmes (EEPs), by combining recent research and scholarship in leading edge entrepreneurship education (EE). Design/methodology/approach: A systematic literature review of recent EE research and scholarship is followed by…

  16. Cultural Validation of the Maslach Burnout Inventory for Korean Students

    ERIC Educational Resources Information Center

    Shin, Hyojung; Puig, Ana; Lee, Jayoung; Lee, Ji Hee; Lee, Sang Min

    2011-01-01

    The purpose of this study was to examine the factorial validity of the MBI-SS in Korean students. Specifically, we investigated whether the original three-factor structure of the MBI-SS was appropriate for use with Korean students. In addition, by running multi-group structural equation model analyses with factorial invariance tests simultaneously…

  17. VEEP - Vehicle Economy, Emissions, and Performance program

    NASA Technical Reports Server (NTRS)

    Heimburger, D. A.; Metcalfe, M. A.

    1977-01-01

    VEEP is a general-purpose discrete event simulation program being developed to study the performance, fuel economy, and exhaust emissions of a vehicle modeled as a collection of its separate components. It is written in SIMSCRIPT II.5. The purpose of this paper is to present the design methodology, describe the simulation model and its components, and summarize the preliminary results. Topics include chief programmer team concepts, the SDDL design language, program portability, user-oriented design, the program's user command syntax, the simulation procedure, and model validation.

  18. Simulation of Flywheel Energy Storage System

    DTIC Science & Technology

    2006-01-01

    Presented is a comprehensive power model for the Flywheel Attitude Control, Energy Transmission, and Storage... flywheel units and the Agile Multi-Purpose Satellite Simulator (AMPSS). The purpose of FACETS is to demonstrate integrated attitude control maneuvers

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, X; Wang, J; Hu, W

    Purpose: The Varian RapidPlan™ is a commercial knowledge-based optimization process that uses a set of clinically used treatment plans to train a model that can predict individualized dose-volume objectives. The purpose of this study is to evaluate the performance of RapidPlan in generating intensity-modulated radiation therapy (IMRT) plans for cervical cancer. Methods: A total of 70 IMRT plans for cervical cancer with varying clinical and physiological indications were enrolled in this study. All of these patients were previously treated in our institution. Two prescription levels were usually used in our institution: 45 Gy/25 fractions and 50.4 Gy/28 fractions. Fifty of these plans were selected to train the RapidPlan model for predicting dose-volume constraints. After model training, the model was validated with 10 plans from the training pool (internal validation) and an additional 20 new plans (external validation). All plans used for the validation were re-optimized with the original beam configuration, and the priorities generated by RapidPlan were manually adjusted to ensure that the re-optimized DVH lay within the range of the model prediction. Quantitative DVH analysis was performed to compare the RapidPlan-generated and the original manually optimized plans. Results: For all the validation cases, RapidPlan-based plans showed similar or superior results compared to the manually optimized ones. RapidPlan increased D98% and homogeneity in both validations. For organs at risk, RapidPlan decreased the mean dose of the bladder by 1.25 Gy/1.13 Gy (internal/external validation) on average, with p=0.12/p<0.01. The mean doses of the rectum and bowel were also decreased, by an average of 2.64 Gy/0.83 Gy and 0.66 Gy/1.05 Gy, with p<0.01/p<0.01 and p=0.04/p<0.01 for the internal/external validation, respectively. Conclusion: The RapidPlan-based cervical cancer plans show an ability to systematically improve IMRT plan quality. This suggests that RapidPlan has great potential to make the treatment planning process more efficient.

  20. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry

    PubMed Central

    Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen; Atkinson, E. Neely; Cody, Dianna D.

    2011-01-01

    Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49–33.03 mm Al on a computed tomography (CT) scanner, 0.09–1.93 mm Al on two mammography systems, and 0.1–0.45 mm Cu and 0.49–14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry). PMID:21928626
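
    The two traditional two-point interpolation schemes this abstract compares are easy to write out. Below is a minimal sketch on a synthetic monoenergetic beam (not the paper's measured data): for pure exponential attenuation the semilogarithmic form recovers the HVL exactly, while linear interpolation is biased by the curvature of the transmission curve.

```python
import math

def hvl_semilog(x1, t1, x2, t2, level=0.5):
    """Semilogarithmic (exponential) interpolation: assumes ln T is linear in x."""
    return x1 + (x2 - x1) * (math.log(level) - math.log(t1)) \
                          / (math.log(t2) - math.log(t1))

def hvl_linear(x1, t1, x2, t2, level=0.5):
    """Linear interpolation of transmission between the same two points."""
    return x1 + (x2 - x1) * (level - t1) / (t2 - t1)

# Synthetic monoenergetic beam: T(x) = exp(-mu * x), mu in 1/mm Al
mu = 0.2
x1, x2 = 2.0, 5.0
t1, t2 = math.exp(-mu * x1), math.exp(-mu * x2)

true_hvl = math.log(2) / mu            # thickness halving the transmission
semilog = hvl_semilog(x1, t1, x2, t2)  # exact for exponential attenuation
linear  = hvl_linear(x1, t1, x2, t2)   # overestimates: T(x) curves below its chord
```

For real polyenergetic beams neither simple scheme is exact (beam hardening makes ln T nonlinear in x), which is the gap the paper's Lambert W model addresses.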

  2. Validation Evidence for the Elementary School Version of the MUSIC® Model of Academic Motivation Inventory (Pruebas de validación para el Modelo MUSIC® de Inventario de Motivación Educativa para Escuela Primaria)

    ERIC Educational Resources Information Center

    Jones, Brett D.; Sigmon, Miranda L.

    2016-01-01

    Introduction: The purpose of our study was to assess whether the Elementary School version of the MUSIC® Model of Academic Motivation Inventory was valid for use with elementary students in classrooms with regular classroom teachers and student teachers enrolled in a university teacher preparation program. Method: The participants included 535…

  3. [The isolated perfused porcine kidney model for investigations concerning surgical therapy procedures].

    PubMed

    Peters, Kristina; Michel, Maurice Stephan; Matis, Ulrike; Häcker, Axel

    2006-01-01

    Experiments to develop innovative surgical therapy procedures are conventionally conducted on animals, as crucial aspects such as tissue removal and bleeding disposition cannot be investigated in vitro. Extracorporeal organ models, however, reflect these aspects and could thus fundamentally reduce the use of animals for this purpose in the future. The aim of this work was to validate the isolated perfused porcine kidney model with regard to its use for surgical purposes on the basis of histological and radiological procedures. The results show that neither storage nor artificial perfusion led to any structural or functional damage that would affect the quality of the organ. The kidney model is highly suitable for simulating the main aspects of renal physiology and allows constant calibration of perfusion pressure and tissue temperature. Thus, with only a moderate amount of work involved, the kidney model provides a cheap and readily available alternative to conventional animal experiments; it allows standardised experimental settings and provides valid results.

  4. NREL and IBM Improve Solar Forecasting with Big Data | Energy Systems

    Science.gov Websites

    ...forecasting model using deep-machine-learning technology. The multi-scale, multi-model tool, named Watt-sun... the first standard suite of metrics for this purpose. Validating Watt-sun at multiple sites across the...

  5. Developing Enhanced Blood–Brain Barrier Permeability Models: Integrating External Bio-Assay Data in QSAR Modeling

    PubMed Central

    Wang, Wenyi; Kim, Marlene T.; Sedykh, Alexander

    2015-01-01

    Purpose Experimental Blood–Brain Barrier (BBB) permeability models for drug molecules are expensive and time-consuming. As alternative methods, several traditional Quantitative Structure-Activity Relationship (QSAR) models have been developed previously. In this study, we aimed to improve the predictivity of traditional QSAR BBB permeability models by employing relevant public bio-assay data in the modeling process. Methods We compiled a BBB permeability database consisting of 439 unique compounds from various resources. The database was split into a modeling set of 341 compounds and a validation set of 98 compounds. Consensus QSAR modeling workflow was employed on the modeling set to develop various QSAR models. A five-fold cross-validation approach was used to validate the developed models, and the resulting models were used to predict the external validation set compounds. Furthermore, we used previously published membrane transporter models to generate relevant transporter profiles for target compounds. The transporter profiles were used as additional biological descriptors to develop hybrid QSAR BBB models. Results The consensus QSAR models have R2=0.638 for fivefold cross-validation and R2=0.504 for external validation. The consensus model developed by pooling chemical and transporter descriptors showed better predictivity (R2=0.646 for five-fold cross-validation and R2=0.526 for external validation). Moreover, several external bio-assays that correlate with BBB permeability were identified using our automatic profiling tool. Conclusions The BBB permeability models developed in this study can be useful for early evaluation of new compounds (e.g., new drug candidates). The combination of chemical and biological descriptors shows a promising direction to improve the current traditional QSAR models. PMID:25862462
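
    The consensus-plus-cross-validation workflow described above can be sketched generically: average the predictions of several member models and score the pooled out-of-fold predictions with R². The member models and data below are toy stand-ins, not the study's QSAR models or descriptors:

```python
import math, random

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(x, y))
         / sum((u - mx) ** 2 for u in x))
    return my - b * mx, b

def five_fold_consensus_r2(x, y, k=5, seed=0):
    """Pooled out-of-fold R^2 of a two-member consensus model."""
    idx = list(range(len(x)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    preds, truth = [], []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        xt, yt = [x[i] for i in train], [y[i] for i in train]
        # "consensus": average two simple member models,
        # a line in x and a line in x**2
        a1, b1 = fit_line(xt, yt)
        a2, b2 = fit_line([v * v for v in xt], yt)
        for i in fold:
            preds.append(((a1 + b1 * x[i]) + (a2 + b2 * x[i] ** 2)) / 2)
            truth.append(y[i])
    my = sum(truth) / len(truth)
    ss_res = sum((t - p) ** 2 for t, p in zip(truth, preds))
    ss_tot = sum((t - my) ** 2 for t in truth)
    return 1 - ss_res / ss_tot

rng = random.Random(1)
xs = [i / 10 for i in range(50)]
ys = [2 * v + 1 + rng.gauss(0, 0.2) for v in xs]
cv_r2 = five_fold_consensus_r2(xs, ys)  # high for this nearly linear toy data
```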

  6. Development and Validation of Accident Models for FeCrAl Cladding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamble, Kyle Allan Lawrence; Hales, Jason Dean

    2016-08-01

    The purpose of this milestone report is to present the work completed with regard to material model development for FeCrAl cladding and to highlight the results of applying these models to Loss of Coolant Accidents (LOCA) and Station Blackouts (SBO). With the limited experimental data available (essentially only the data used to create the models), true validation is not possible. In the absence of an alternative, qualitative comparisons during postulated accident scenarios between FeCrAl- and Zircaloy-4-cladded rods have been completed, demonstrating the superior performance of FeCrAl.

  7. Collective Trust: A Social Indicator of Instructional Capacity

    ERIC Educational Resources Information Center

    Adams, Curt M.

    2013-01-01

    Purpose: The purpose of this study is to test the validity of using collective trust as a social indicator of instructional capacity. Design/methodology/approach: A hypothesized model was advanced for the empirical investigation. Collective trust was specified as a latent construct with observable indicators being principal trust in faculty (PTF),…

  8. Three-Step Validation of Exercise Behavior Processes of Change in an Adolescent Sample

    ERIC Educational Resources Information Center

    Rhodes, Ryan E.; Berry, Tanya; Naylor, Patti-Jean; Higgins, S. Joan Wharf

    2004-01-01

    Though the processes of change are conceived as the core constructs of the transtheoretical model (TTM), few researchers have examined their construct validity in the physical activity domain. Further, only 1 study was designed to investigate the processes of change in an adolescent sample. The purpose of this study was to examine the exercise…

  9. Open and Distance Education Accreditation Standards Scale: Validity and Reliability Studies

    ERIC Educational Resources Information Center

    Can, Ertug

    2016-01-01

    The purpose of this study is to develop, and test the validity and reliability of a scale for the use of researchers to determine the accreditation standards of open and distance education based on the views of administrators, teachers, staff and students. This research was designed according to the general descriptive survey model since it aims…

  10. Water Awareness Scale for Pre-Service Science Teachers: Validity and Reliability Study

    ERIC Educational Resources Information Center

    Filik Iscen, Cansu

    2015-01-01

    The role of teachers in the formation of environmentally sensitive behaviors in students is quite high. Thus, the water awareness of teachers, who represent role models for students, is rather important. The main purpose of this study is to identify the reliability and validity study outcomes of the Water Awareness Scale, which was developed to…

  11. Growth of Finiteness in the Third Year of Life: Replication and Predictive Validity

    ERIC Educational Resources Information Center

    Hadley, Pamela A.; Rispoli, Matthew; Holt, Janet K.; Fitzgerald, Colleen; Bahnsen, Alison

    2014-01-01

    Purpose: The authors of this study investigated the validity of tense and agreement productivity (TAP) scoring in diverse sentence frames obtained during conversational language sampling as an alternative measure of finiteness for use with young children. Method: Longitudinal language samples were used to model TAP growth from 21 to 30 months of…

  12. Validation of a Tool Evaluating Educational Apps for Smart Education

    ERIC Educational Resources Information Center

    Lee, Jeong-Sook; Kim, Sung-Wan

    2015-01-01

    The purpose of this study is to develop and validate an evaluation tool of educational apps for smart education. Based on literature reviews, a potential model for evaluating educational apps was suggested. An evaluation tool consisting of 57 survey items was delivered to 156 students in middle and high schools. An exploratory factor analysis was…

  13. Validation of Virtual Learning Team Competencies for Individual Students in a Distance Education Setting

    ERIC Educational Resources Information Center

    Topchyan, Ruzanna; Zhang, Jie

    2014-01-01

    The purpose of this study was twofold. First, the study aimed to validate the scale of the Virtual Team Competency Inventory in distance education, which had initially been designed for a corporate setting. Second, the methodological advantages of Exploratory Structural Equation Modeling (ESEM) framework over Confirmatory Factor Analysis (CFA)…

  14. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is evoked only to construct and validate a simplified, input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
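
The construct-and-validate surrogate workflow described in this abstract can be sketched in a few lines: an expensive simulation is sampled on a coarse grid, a cheap interpolant stands in for it, and fresh simulation calls check the surrogate against a tolerance before it is used for optimization. The stand-in "simulation" and the tolerance below are hypothetical; this is an illustrative sketch, not the paper's Bayesian-validated framework.

```python
import math, random

def expensive_sim(x):
    # stand-in for a costly simulation call (hypothetical test function)
    return math.sin(2 * x) + 0.5 * x

# construct: sample the simulation on a coarse grid over [0, 1]
grid = [i / 10 for i in range(11)]
samples = [expensive_sim(x) for x in grid]

def surrogate(x):
    # cheap piecewise-linear interpolant between the grid samples
    i = min(int(x * 10), 9)
    t = x * 10 - i
    return (1 - t) * samples[i] + t * samples[i + 1]

# validate: compare the surrogate against fresh simulation calls
random.seed(1)
test_pts = [random.random() for _ in range(50)]
max_err = max(abs(surrogate(x) - expensive_sim(x)) for x in test_pts)
print(max_err < 0.01)  # accept the surrogate only if it meets tolerance
```

Only after this validation step would the surrogate replace the simulation inside an optimization loop, which is the essential economy the abstract describes.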

  15. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
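
The abstract's central point, that resubstitution accuracy overstates true predictive accuracy while cross-validation is closer to the truth, can be illustrated with a deliberately extreme case: a 1-nearest-neighbour classifier scores perfectly on the data it memorized, while leave-one-out cross-validation reveals the real error rate. The synthetic data below are hypothetical and unrelated to the lichen study.

```python
import random

def nn_predict(train, x):
    # 1-nearest-neighbour: return the label of the closest training point
    return min(train, key=lambda p: abs(p[0] - x))[1]

random.seed(0)
# synthetic 1-D data: two overlapping classes (values are hypothetical)
data = [(random.gauss(0, 1), 0) for _ in range(30)] + \
       [(random.gauss(1, 1), 1) for _ in range(30)]

# resubstitution: evaluate on the same points the model memorized
resub = sum(nn_predict(data, x) == y for x, y in data) / len(data)

# leave-one-out cross-validation: hold each point out in turn
loo = sum(nn_predict(data[:i] + data[i + 1:], x) == y
          for i, (x, y) in enumerate(data)) / len(data)

print(resub, loo)  # resubstitution is optimistic relative to LOOCV
```

Resubstitution is 100% by construction here (every point is its own nearest neighbour), which is exactly why the authors argue for reporting cross-validated rates instead.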

  16. Modeling of resistive sheets in finite element solutions

    NASA Technical Reports Server (NTRS)

    Jin, J. M.; Volakis, John L.; Yu, C. L.; Woo, A. C.

    1992-01-01

    A formulation is presented for modeling a resistive card in the context of the finite element method. The appropriate variational function is derived and for validation purposes, results are presented for the scattering by a metal-backed cavity loaded with a resistive card.

  17. European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30: factorial models to Brazilian cancer patients

    PubMed Central

    Campos, Juliana Alvares Duarte Bonini; Spexoto, Maria Cláudia Bernardes; da Silva, Wanderson Roberto; Serrano, Sergio Vicente; Marôco, João

    2018-01-01

    Objective To evaluate the psychometric properties of the seven theoretical models proposed in the literature for the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 (EORTC QLQ-C30), when applied to a sample of Brazilian cancer patients. Methods Content and construct validity (factorial, convergent, discriminant) were estimated. Confirmatory factor analysis was performed. Convergent validity was analyzed using the average variance extracted. Discriminant validity was analyzed using correlational analysis. Internal consistency and composite reliability were used to assess the reliability of the instrument. Results A total of 1,020 cancer patients participated. The mean age was 53.3±13.0 years, and 62% were female. All models showed adequate factorial validity for the study sample. Convergent and discriminant validities and the reliability were compromised in all of the models for all of the single items referring to symptoms, as well as for the “physical function” and “cognitive function” factors. Conclusion All theoretical models assessed in this study presented adequate factorial validity when applied to Brazilian cancer patients. The choice of the best model for use in research and/or clinical protocols should be centered on the purpose and underlying theory of each model. PMID:29694609

  18. CFD Modeling Needs and What Makes a Good Supersonic Combustion Validation Experiment

    NASA Technical Reports Server (NTRS)

    Gaffney, Richard L., Jr.; Cutler, Andrew D.

    2005-01-01

    If a CFD code/model developer is asked what experimental data he wants to validate his code or numerical model, his answer will be: "Everything, everywhere, at all times." Since this is not possible, practical, or even reasonable, the developer must understand what can be measured within the limits imposed by the test article, the test location, the test environment and the available diagnostic equipment. At the same time, it is important for the experimentalist/diagnostician to understand what the CFD developer needs (as opposed to wants) in order to conduct a useful CFD validation experiment. If these needs are not known, it is possible to neglect easily measured quantities at locations needed by the developer, rendering the data set useless for validation purposes. It is also important for the experimentalist/diagnostician to understand what the developer is trying to validate so that the experiment can be designed to isolate (as much as possible) the effects of a particular physical phenomenon that is associated with the model to be validated. The probability of a successful validation experiment can be greatly increased if the two groups work together, each understanding the needs and limitations of the other.

  19. CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave/Boundary-Layer Interaction

    NASA Technical Reports Server (NTRS)

    Davis, David O.

    2015-01-01

    Experimental investigations of specific flow phenomena, e.g., Shock-Wave/Boundary-Layer Interactions (SWBLI), provide great insight into the flow behavior but often lack the necessary details to be useful as CFD validation experiments. Reasons include: (1) undefined boundary conditions, (2) inconsistent results, (3) undocumented 3D effects (CL-only measurements), and (4) lack of uncertainty analysis. While there are a number of good subsonic experimental investigations that are sufficiently documented to be considered test cases for CFD and turbulence model validation, the number of supersonic and hypersonic cases is much smaller. This was highlighted by Settles and Dodson's [1] comprehensive review of available supersonic and hypersonic experimental studies. In all, several hundred studies were considered for their database. Of these, over a hundred were subjected to rigorous acceptance criteria. Based on their criteria, only 19 (12 supersonic, 7 hypersonic) were considered of sufficient quality to be used for validation purposes. Aeschliman and Oberkampf [2] recognized the need to develop a specific methodology for experimental studies intended specifically for validation purposes.

  20. Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2. (l’Hypersonique experimentale et de calcul - capacite, ameliorafion et validation)

    DTIC Science & Technology

    1998-12-01

    Soft Sphere Molecular Model for Inverse-Power-Law or Lennard Jones Potentials , Physics of Fluids A, Vol. 3, No. 10, pp. 2459-2465. 42. Legge, H...information; — Providing assistance to member nations for the purpose of increasing their scientific and technical potential ; — Rendering scientific and...nal, 34:756-763, 1996. [22] W. Jones and B. Launder. The Prediction of Laminarization with a Two-Equation Model of Turbulence. Int. Journal of Heat

  1. DEVELOPMENT AND VALIDATION OF A MULTIFIELD MODEL OF CHURN-TURBULENT GAS/LIQUID FLOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elena A. Tselishcheva; Steven P. Antal; Michael Z. Podowski

    The accuracy of numerical predictions for gas/liquid two-phase flows using Computational Multiphase Fluid Dynamics (CMFD) methods strongly depends on the formulation of models governing the interaction between the continuous liquid field and bubbles of different sizes. The purpose of this paper is to develop, test and validate a multifield model of adiabatic gas/liquid flows at intermediate gas concentrations (e.g., churn-turbulent flow regime), in which multiple-size bubbles are divided into a specified number of groups, each representing a prescribed range of sizes. The proposed modeling concept uses transport equations for the continuous liquid field and for each bubble field. The overall model has been implemented in the NPHASE-CMFD computer code. The results of NPHASE-CMFD simulations have been validated against the experimental data from the TOPFLOW test facility. Also, a parametric analysis on the effect of various modeling assumptions has been performed.

  2. Psychometric Properties of the Vocational Rehabilitation Engagement Scale When Used with People with Mental Illness in Clubhouse Settings

    ERIC Educational Resources Information Center

    Fitzgerald, Sandra; Deiches, Jonathan; Umucu, Emre; Brooks, Jessica; Muller, Veronica; Wu, Jia-Rung; Chan, Fong

    2016-01-01

    Purpose: The purpose of this study was to validate the Vocational Rehabilitation Engagement Scale (VRES) for use in the Clubhouse Model of Psychosocial Rehabilitation. Method: There were 124 individuals with serious mental illness recruited from 8 Clubhouse programs in Hawaii. Measurement structure of the VRES was evaluated using exploratory…

  3. Exploring the Effects of Authentic Leadership on Academic Optimism and Teacher Engagement in Thailand

    ERIC Educational Resources Information Center

    Kulophas, Dhirapat; Hallinger, Philip; Ruengtrakul, Auyporn; Wongwanich, Suwimon

    2018-01-01

    Purpose: In the context of Thailand's progress towards education reform, scholars have identified a lack of effective school-level leadership as an impeding factor. The purpose of this paper is to develop and validate a theoretical model of authentic leadership effects on teacher academic optimism and work engagement. Authentic leadership was…

  4. Structural Equation Modeling of Writing Proficiency Using Can-Do Questionnaires

    ERIC Educational Resources Information Center

    Kobayashi, Wakako

    2017-01-01

    The first purpose of this study was to validate the writing section of the Eiken Can-Do Questionnaires used in this study; the second was to determine the effects of ten affective orientations (i.e., Desire to Write English, Attitude Toward Learning to Write English, Motivational Intensity, Instrumental Orientation for Writing in English,…

  5. How to Measure the Efficacy of VET Workplace Learning: The FET-WL Model

    ERIC Educational Resources Information Center

    Pineda-Herrero, Pilar; Quesada-Pallarès, Carla; Espona-Barcons, Berta; Mas-Torelló, Óscar

    2015-01-01

    Purpose: Workplace learning (WL) is a key part of vocational education and training (VET) because it allows students to develop their skills in a work environment, and provides important information about how well VET studies prepare skilled workers. Therefore, the purpose of this paper is to develop and validate an instrument to evaluate WL…

  6. Validating the Mexican American Intergenerational Caregiving Model

    ERIC Educational Resources Information Center

    Escandon, Socorro

    2011-01-01

    The purpose of this study was to substantiate and further develop a previously formulated conceptual model of Role Acceptance in Mexican American family caregivers by exploring the theoretical strengths of the model. The sample consisted of women older than 21 years of age who self-identified as Hispanic, were related through consanguinal or…

  7. A Model for the Education of Gifted Learners in Lebanon

    ERIC Educational Resources Information Center

    Sarouphim, Ketty M.

    2010-01-01

    The purpose of this paper is to present a model for developing a comprehensive system of education for gifted learners in Lebanon. The model consists of three phases and includes key elements for establishing gifted education in the country, such as raising community awareness, adopting valid identification measures, and developing effective…

  8. A Framework for Text Mining in Scientometric Study: A Case Study in Biomedicine Publications

    NASA Astrophysics Data System (ADS)

    Silalahi, V. M. M.; Hardiyati, R.; Nadhiroh, I. M.; Handayani, T.; Rahmaida, R.; Amelia, M.

    2018-04-01

    Data on Indonesian research publications in the domain of biomedicine were collected to be text mined for the purposes of a scientometric study. The goal is to build a predictive model that classifies research publications by their potency for downstreaming. The model is based on drug development processes adapted from the literature. An effort is described to build the conceptual model and to develop a corpus of research publications in the domain of Indonesian biomedicine. Then an investigation is conducted into the problems associated with building a corpus and validating the model. Based on our experience, a framework is proposed to manage a scientometric study based on text mining. Our method shows the effectiveness of conducting a scientometric study based on text mining in order to obtain a valid classification model. This valid model is mainly supported by iterative and close interactions with the domain experts, starting from identifying the issues and building a conceptual model through to labelling, validation, and interpretation of results.

  9. Possibilities and Limitations of Validating Spatially Modelled Nitrate Inputs into Groundwater Using the N2/Ar Method (Möglichkeiten und Grenzen der Validierung flächenhaft modellierter Nitrateinträge ins Grundwasser mit der N2/Ar-Methode)

    NASA Astrophysics Data System (ADS)

    Eschenbach, Wolfram; Budziak, Dörte; Elbracht, Jörg; Höper, Heinrich; Krienen, Lisa; Kunkel, Ralf; Meyer, Knut; Well, Reinhard; Wendland, Frank

    2018-06-01

    Valid models for estimating nitrate emissions from agriculture to groundwater are an indispensable forecasting tool. A major challenge for model validation is the spatial and temporal inconsistency between data from groundwater monitoring points and modelled nitrate inputs into groundwater, and the fact that many existing groundwater monitoring wells cannot be used for validation. With the help of the N2/Ar-method, groundwater monitoring wells in areas with reduced groundwater can now be used for model validation. For this purpose, 484 groundwater monitoring wells were sampled in Lower Saxony. For the first time, modelled potential nitrate concentrations in groundwater recharge (from the DENUZ model) were compared with nitrate input concentrations, which were calculated using the N2/Ar method. The results show a good agreement between both methods for glacial outwash plains and moraine deposits. Although the nitrate degradation processes in groundwater and soil merge seamlessly in areas with a shallow groundwater table, the DENUZ model only calculates denitrification in the soil zone. The DENUZ model thus predicts 27% higher nitrate emissions into the groundwater than the N2/Ar method in such areas. To account for high temporal and spatial variability of nitrate emissions into groundwater, a large number of groundwater monitoring points must be investigated for model validation.

  10. Sources of Self-Efficacy in Mathematics: A Validation Study

    ERIC Educational Resources Information Center

    Usher, Ellen L.; Pajares, Frank

    2009-01-01

    The purpose of this study was to develop and validate items with which to assess A. Bandura's (1997) theorized sources of self-efficacy among middle school mathematics students. Results from Phase 1 (N=1111) were used to develop and refine items for subsequent use. In Phase 2 of the study (N=824), a 39-item, four-factor exploratory model fit best.…

  11. Professional Learning Communities Assessment: Adaptation, Internal Validity, and Multidimensional Model Testing in Turkish Context

    ERIC Educational Resources Information Center

    Dogan, Selçuk; Tatik, R. Samil; Yurtseven, Nihal

    2017-01-01

    The main purpose of this study is to adapt and validate the Professional Learning Communities Assessment Revised (PLCA-R) by Olivier, Hipp, and Huffman within the context of Turkish schools. The instrument was translated and adapted to administer to teachers in Turkey. Internal structure of the Turkish version of PLCA-R was investigated by using…

  12. Investigation of the accuracy of breast tissue segmentation methods for the purpose of developing breast deformation models for use in adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Juneja, P.; Harris, E. J.; Evans, P. M.

    2014-03-01

    Realistic modelling of breast deformation requires the breast tissue to be segmented into fibroglandular and fatty tissue and assigned suitable material properties. There are a number of breast tissue segmentation methods proposed and used in the literature. The purpose of this study was to validate and compare the accuracy of various segmentation methods and to investigate the effect of the tissue distribution on the segmentation accuracy. Computed tomography (CT) data for 24 patients, both in supine and prone positions, were segmented into fibroglandular and fatty tissue. The segmentation methods explored were: physical density thresholding; interactive thresholding; fuzzy c-means clustering (FCM) with three classes (FCM3) and four classes (FCM4); and k-means clustering. Validation was done in two stages: first, a new approach, supine-prone validation, based on the assumption that the breast composition should appear the same in the supine and prone scans, was used. Second, outlines from three experts were used for validation. This study found that FCM3 gave the most accurate segmentation of breast tissue from CT data and that the segmentation accuracy is adversely affected by the sparseness of the fibroglandular tissue distribution.
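
The clustering-based segmentation methods the study compares can be illustrated with a tiny 1-D k-means on voxel intensities: the two cluster centroids imply a threshold that separates fatty from fibroglandular tissue. The intensity values below are hypothetical, and this sketch uses hard k-means rather than the fuzzy c-means variant (FCM3) the study found most accurate.

```python
def kmeans_1d(values, k=2, iters=20):
    # initialize centroids spread evenly across the intensity range
    lo, hi = min(values), max(values)
    cent = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        # assign each value to its nearest centroid
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - cent[i]))].append(v)
        # recompute centroids as group means (keep old centroid if empty)
        cent = [sum(g) / len(g) if g else cent[i] for i, g in enumerate(groups)]
    return sorted(cent)

# hypothetical CT intensities (HU): fatty tissue near -100, fibroglandular near 40
voxels = [-110, -105, -95, -90, -100, 35, 45, 40, 50, 38]
centroids = kmeans_1d(voxels)
threshold = sum(centroids) / 2  # midpoint separates the two tissue classes
print(centroids, threshold)
```

Fuzzy c-means differs only in replacing the hard assignment step with graded membership weights, which is what allows partial-volume voxels to contribute to both tissue classes.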

  13. Validation of US3D for Capsule Aerodynamics using 05-CA Wind Tunnel Test Data

    NASA Technical Reports Server (NTRS)

    Schwing, Alan

    2012-01-01

    Several comparisons of computational fluid dynamics to wind tunnel test data are shown for the purpose of code validation. The wind tunnel test, 05-CA, uses a 7.66% model of NASA's Multi-Purpose Crew Vehicle in the 11-foot test section of the Ames Unitary Plan Wind tunnel. A variety of freestream conditions over four Mach numbers and three angles of attack are considered. Test data comparisons include time-averaged integrated forces and moments, time-averaged static pressure ports on the surface, and Strouhal Number. The applicability of the US3D code to subsonic and transonic flow over a bluff body is assessed on a comprehensive data set. With close comparison, this work validates US3D for highly separated flows similar to those examined here.

  14. Temperature and heat flux datasets of a complex object in a fire plume for the validation of fire and thermal response codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jernigan, Dann A.; Blanchat, Thomas K.

    It is necessary to improve understanding and develop temporally- and spatially-resolved integral scale validation data of the heat flux incident to a complex object in addition to measuring the thermal response of said object located within the fire plume for the validation of the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparison between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.

  15. Innovative learning model for improving students’ argumentation skill and concept understanding on science

    NASA Astrophysics Data System (ADS)

    Nafsiati Astuti, Rini

    2018-04-01

    Argumentation skill is the ability to compose and defend arguments consisting of claims, supporting evidence, and strengthened reasons. Argumentation is an important skill students need to face the challenges of globalization in the 21st century. It is not an ability that develops by itself along with physical maturation; it must be deliberately cultivated, with stimuli that require a person to argue. Therefore, teachers should develop students' argumentation skill in classroom science learning. The purpose of this study is to obtain an innovative learning model that is valid in terms of content and construct for improving the argumentation skills and concept understanding of junior high school students. Content and construct validity were assessed through a Focus Group Discussion (FGD), using the content and construct validation sheets, the model book, a learning video, and a set of learning aids for one meeting. Assessment results from three experts showed that the developed learning model was valid. The validity assessment indicates that the model meets the content requirements (student needs, state of the art, and strong theoretical and empirical foundations) and shows construct validity, with coherent connections among the syntax stages and components of the learning model, so that it can be applied in classroom activities.

  16. Developing and upgrading of solar system thermal energy storage simulation models. Technical progress report, March 1, 1979-February 29, 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhn, J K; von Fuchs, G F; Zob, A P

    1980-05-01

    Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both water tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.

  17. Parametric model of servo-hydraulic actuator coupled with a nonlinear system: Experimental validation

    NASA Astrophysics Data System (ADS)

    Maghareh, Amin; Silva, Christian E.; Dyke, Shirley J.

    2018-05-01

    Hydraulic actuators play a key role in experimental structural dynamics. In a previous study, a physics-based model for a servo-hydraulic actuator coupled with a nonlinear physical system was developed. Later, this dynamical model was transformed into controllable canonical form for position tracking control purposes. For this study, a nonlinear device is designed and fabricated to exhibit various nonlinear force-displacement profiles depending on the initial condition and the type of materials used as replaceable coupons. Using this nonlinear system, the controllable canonical dynamical model is experimentally validated for a servo-hydraulic actuator coupled with a nonlinear physical system.
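
The controllable canonical form mentioned in this abstract has a standard construction: for a monic nth-order denominator, the state matrix is a companion matrix whose last row holds the negated denominator coefficients, with the input entering only through the last state. The sketch below uses a hypothetical third-order transfer function standing in for a linearized servo-hydraulic actuator; it is not the authors' model.

```python
def controllable_canonical(den, num_gain):
    # den: coefficients [a0, a1, ..., a_{n-1}] of the monic denominator
    #      s^n + a_{n-1} s^{n-1} + ... + a1 s + a0
    # num_gain: scalar numerator b0 (strictly proper, no numerator dynamics)
    n = len(den)
    # companion structure: superdiagonal of ones...
    A = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n - 1)]
    # ...and the negated denominator coefficients in the last row
    A.append([-a for a in den])
    B = [0.0] * (n - 1) + [1.0]   # input drives only the last state
    C = [num_gain] + [0.0] * (n - 1)
    return A, B, C

# hypothetical linearization: G(s) = 500 / (s^3 + 40 s^2 + 300 s + 1000)
A, B, C = controllable_canonical([1000.0, 300.0, 40.0], 500.0)
print(A[-1])  # last row carries the negated denominator coefficients
```

This form is convenient for position-tracking control design precisely because every denominator coefficient appears in a single row, so state feedback can place the closed-loop poles directly.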

  18. Early Detection of Increased Intracranial Pressure Episodes in Traumatic Brain Injury: External Validation in an Adult and in a Pediatric Cohort.

    PubMed

    Güiza, Fabian; Depreitere, Bart; Piper, Ian; Citerio, Giuseppe; Jorens, Philippe G; Maas, Andrew; Schuhmann, Martin U; Lo, Tsz-Yan Milly; Donald, Rob; Jones, Patricia; Maier, Gottlieb; Van den Berghe, Greet; Meyfroidt, Geert

    2017-03-01

    A model for early detection of episodes of increased intracranial pressure in traumatic brain injury patients has been previously developed and validated based on retrospective adult patient data from the multicenter Brain-IT database. The purpose of the present study is to validate this early detection model in different cohorts of recently treated adult and pediatric traumatic brain injury patients. Prognostic modeling. Noninterventional, observational, retrospective study. The adult validation cohort comprised recent traumatic brain injury patients from San Gerardo Hospital in Monza (n = 50), Leuven University Hospital (n = 26), Antwerp University Hospital (n = 19), Tübingen University Hospital (n = 18), and Southern General Hospital in Glasgow (n = 8). The pediatric validation cohort comprised patients from neurosurgical and intensive care centers in Edinburgh and Newcastle (n = 79). No interventions were performed. The model's performance was evaluated with respect to discrimination, calibration, overall performance, and clinical usefulness. In the recent adult validation cohort, the model retained excellent performance as in the original study. In the pediatric validation cohort, the model retained good discrimination and a positive net benefit, albeit with a performance drop in the remaining criteria. The obtained external validation results confirm the robustness of the model to predict future increased intracranial pressure events 30 minutes in advance, in adult and pediatric traumatic brain injury patients. These results are a large step toward an early warning system for increased intracranial pressure that can be generally applied. Furthermore, the sparseness of this model that uses only two routinely monitored signals as inputs (intracranial pressure and mean arterial blood pressure) is an additional asset.
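
Discrimination, one of the performance criteria evaluated above, is commonly summarized by the area under the ROC curve, which equals the probability that a randomly chosen event case is scored above a randomly chosen non-event case (the Mann-Whitney formulation). The risk scores below are hypothetical, purely to show the computation.

```python
def auc(pos, neg):
    # probability that a random positive outscores a random negative;
    # ties count as half a win (Mann-Whitney formulation of ROC area)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical risk scores 30 min before an event (pos) vs. no event (neg)
pos = [0.9, 0.8, 0.75, 0.6, 0.55]
neg = [0.5, 0.4, 0.35, 0.3, 0.6, 0.2]
print(auc(pos, neg))  # → 0.95
```

An AUC of 0.5 would mean no discrimination and 1.0 perfect separation; calibration and net benefit, the other criteria named in the abstract, must be assessed separately since a well-discriminating model can still be miscalibrated.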

  19. Purpose-in-Life Test: Comparison of the Main Models in Patients with Mental Disorders.

    PubMed

    García-Alandete, Joaquín; Marco, José H; Pérez, Sandra

    2017-06-27

    The aim of this study was to compare the main proposed models for the Purpose-In-Life Test, a scale for assessing meaning in life, in 229 Spanish patients with mental disorders (195 females and 34 males, aged 13-68, M = 34.43, SD = 12.19). Confirmatory factor-analytic procedures showed that the original model of the Purpose-In-Life Test, a 20-item unidimensional scale, obtained a better fit than the other analyzed models, SBχ2(df) = 326.27(170), SBχ2/df = 1.92, TLI = .93, CFI = .94, IFI = .94, RMSEA = .063 (90% CI [.053, .074]), CAIC = -767.46, as well as high internal consistency (α = .90). The main conclusion is that the original version of the Purpose-In-Life shows robust construct validity in a clinical population. However, the authors recommend an in-depth psychometric analysis of the Purpose-In-Life Test in clinical populations. Likewise, the importance of assessing meaning in life in order to enhance psychotherapeutic treatment is noted.
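
The reported RMSEA can be reproduced directly from the reported chi-square, degrees of freedom, and sample size using the standard formula RMSEA = sqrt(max(χ² − df, 0) / (df · (N − 1))). Note that some software divides by N rather than N − 1; the N − 1 convention reproduces the value in this abstract.

```python
import math

def rmsea(chi2, df, n):
    # RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# values reported in the abstract: SBχ² = 326.27, df = 170, N = 229
value = rmsea(326.27, 170, 229)
print(round(value, 3))  # → 0.063, matching the reported RMSEA
```

A model whose chi-square does not exceed its degrees of freedom yields RMSEA = 0, which is why the max(…, 0) clamp appears in the formula.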

  20. Verification and Validation of a Three-Dimensional Generalized Composite Material Model

    NASA Technical Reports Server (NTRS)

    Hoffarth, Canio; Harrington, Joseph; Subramaniam, D. Rajan; Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Blankenhorn, Gunther

    2014-01-01

    A general purpose orthotropic elasto-plastic computational constitutive material model has been developed to improve predictions of the response of composites subjected to high velocity impact. The three-dimensional orthotropic elasto-plastic composite material model is being implemented initially for solid elements in LS-DYNA as MAT213. In order to accurately represent the response of a composite, experimental stress-strain curves are utilized as input, allowing for a more general material model that can be used on a variety of composite applications. The theoretical details are discussed in a companion paper. This paper documents the implementation, verification and qualitative validation of the material model using the T800-F3900 fiber/resin composite material.

  1. Verification and Validation of a Three-Dimensional Generalized Composite Material Model

    NASA Technical Reports Server (NTRS)

    Hoffarth, Canio; Harrington, Joseph; Rajan, Subramaniam D.; Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Blankenhorn, Gunther

    2015-01-01

    A general purpose orthotropic elasto-plastic computational constitutive material model has been developed to improve predictions of the response of composites subjected to high velocity impact. The three-dimensional orthotropic elasto-plastic composite material model is being implemented initially for solid elements in LS-DYNA as MAT213. In order to accurately represent the response of a composite, experimental stress-strain curves are utilized as input, allowing for a more general material model that can be used on a variety of composite applications. The theoretical details are discussed in a companion paper. This paper documents the implementation, verification and qualitative validation of the material model using the T800-F3900 fiber/resin composite material.

  2. Use of Latent Class Analysis to define groups based on validity, cognition, and emotional functioning.

    PubMed

    Morin, Ruth T; Axelrod, Bradley N

    Latent Class Analysis (LCA) was used to classify a heterogeneous sample of neuropsychology data. In particular, we used measures of performance validity, symptom validity, cognition, and emotional functioning to assess and describe latent groups of functioning in these areas. A data-set of 680 neuropsychological evaluation protocols was analyzed using a LCA. Data were collected from evaluations performed for clinical purposes at an urban medical center. A four-class model emerged as the best fitting model of latent classes. The resulting classes were distinct based on measures of performance validity and symptom validity. Class A performed poorly on both performance and symptom validity measures. Class B had intact performance validity and heightened symptom reporting. The remaining two Classes performed adequately on both performance and symptom validity measures, differing only in cognitive and emotional functioning. In general, performance invalidity was associated with worse cognitive performance, while symptom invalidity was associated with elevated emotional distress. LCA appears useful in identifying groups within a heterogeneous sample with distinct performance patterns. Further, the orthogonal nature of performance and symptom validities is supported.
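
Choosing the number of latent classes, the step that yielded the four-class model above, is typically done by fitting models with increasing class counts and comparing an information criterion such as BIC, which penalizes log-likelihood by model complexity. The log-likelihoods and parameter counts below are hypothetical, chosen only to illustrate the selection mechanics, not taken from the study.

```python
import math

def bic(log_lik, n_params, n_obs):
    # Bayesian Information Criterion: -2 logL + p ln(n); lower is better
    return -2 * log_lik + n_params * math.log(n_obs)

# hypothetical fit results for 2-5 latent classes on n = 680 protocols:
# class count -> (maximized log-likelihood, number of free parameters)
n = 680
fits = {2: (-5310.0, 17), 3: (-5220.0, 26), 4: (-5150.0, 35), 5: (-5145.0, 44)}

scores = {k: bic(ll, p, n) for k, (ll, p) in fits.items()}
best = min(scores, key=scores.get)
print(best)  # the 4-class model minimizes BIC under these assumed values
```

The pattern shown is the typical one: log-likelihood always improves with more classes, but past some point the improvement no longer justifies the added parameters, and BIC turns back up.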

  3. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.
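
The Woodcock tracking mentioned in the abstract can be illustrated in a few lines. This is a generic sketch of delta tracking through a heterogeneous medium, not code from gPENELOPE; `mu_of_x` (position-dependent attenuation) and `mu_max` (its majorant) are assumed inputs:

```python
import math
import random

def woodcock_distance(mu_of_x, mu_max, x0=0.0, rng=None):
    """Sample the distance to the next real interaction using Woodcock
    (delta) tracking: take exponential steps governed by a constant
    majorant attenuation mu_max, then accept each tentative collision as
    real with probability mu(x)/mu_max; rejected collisions are virtual
    and the particle simply keeps flying."""
    rng = rng or random.Random(42)
    x = x0
    while True:
        x += -math.log(1.0 - rng.random()) / mu_max   # exponential step at mu_max
        if rng.random() < mu_of_x(x) / mu_max:        # real, not virtual, collision
            return x
```

The appeal on GPU hardware is that the sampling never has to intersect voxel boundaries, so all threads execute the same short loop regardless of the geometry they traverse.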

  4. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    PubMed Central

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H. Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H. Harold

    2016-01-01

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems. PMID:27370123

  5. Dependence and physical exercise: Spanish validation of the Exercise Dependence Scale-Revised (EDS-R).

    PubMed

    Sicilia, Alvaro; González-Cutre, David

    2011-05-01

    The purpose of this study was to validate the Spanish version of the Exercise Dependence Scale-Revised (EDS-R). To achieve this goal, a sample of 531 sport center users was used and the psychometric properties of the EDS-R were examined through different analyses. The results supported both the first-order seven-factor model and the higher-order model (seven first-order factors and one second-order factor). The structure of both models was invariant across age. Correlations among the subscales indicated a related factor model, supporting construct validity of the scale. Alpha values over .70 (except for Reduction in Other Activities) and suitable levels of temporal stability were obtained. Users practicing more than three days per week had higher scores in all subscales than the group practicing with a frequency of three days or fewer. The findings of this study provided reliability and validity for the EDS-R in a Spanish context.
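
The alpha values reported above (the .70 threshold) can be reproduced from item-level data with a short routine. A plain-Python sketch of Cronbach's alpha; the function name and toy data are ours, not the study's:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of respondent scores
    per item: alpha = k/(k-1) * (1 - sum of item variances / variance of
    the total score), with k the number of items."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

Alpha reaches 1.0 only when every item carries the same information, which is why a subscale with few, heterogeneous items (such as Reduction in Other Activities here) can dip below the conventional .70 cutoff.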

  6. Variety and Drift in the Functions and Purposes of Assessment in K-12 Education

    ERIC Educational Resources Information Center

    Ho, Andrew D.

    2014-01-01

    Background/Context: The target of assessment validation is not an assessment but the use of an assessment for a purpose. Although the validation literature often provides examples of assessment purposes, comprehensive reviews of these purposes are rare. Additionally, assessment purposes posed for validation are generally described as discrete and…

  7. Validation of the Continuum of Care Conceptual Model for Athletic Therapy

    PubMed Central

    Lafave, Mark R.; Butterwick, Dale; Eubank, Breda

    2015-01-01

    Utilization of conceptual models in field-based emergency care currently borrows from existing standards of medical and paramedical professions. The purpose of this study was to develop and validate a comprehensive conceptual model that could account for injuries ranging from nonurgent to catastrophic events, including events that do not follow traditional medical or prehospital care protocols. The conceptual model should represent the continuum of care from the time of initial injury to an athlete's return to participation in their sport. Finally, the conceptual model should accommodate both novices and experts in the athletic therapy (AT) profession. This paper chronicles the content validation steps of the Continuum of Care Conceptual Model for Athletic Therapy (CCCM-AT). The stages of model development were domain and item generation, content expert validation using a three-stage modified Ebel procedure, and pilot testing. Only the final stage of the modified Ebel procedure reached a priori 80% consensus on three domains of interest: (1) heading descriptors; (2) the order of the model; (3) the conceptual model as a whole. Future research is required to test the use of the CCCM-AT in order to understand its efficacy in teaching and practice within the AT discipline. PMID:26464897

  8. [A Methodological Quality Assessment of South Korean Nursing Research using Structural Equation Modeling in South Korea].

    PubMed

    Kim, Jung-Hee; Shin, Sujin; Park, Jin-Hwa

    2015-04-01

    The purpose of this study was to evaluate the methodological quality of nursing studies using structural equation modeling in Korea. Databases of KISS, DBPIA, and the National Assembly Library up to March 2014 were searched using the MeSH terms 'nursing', 'structure', and 'model'. A total of 152 studies were screened. After removal of duplicates and non-relevant titles, 61 papers were read in full. Of the sixty-one articles retrieved, 14 studies were published between 1992 and 2000; 27 between 2001 and 2010; and 20 between 2011 and March 2014. The methodological quality of the studies reviewed varied considerably. The findings suggest that more rigorous attention is needed to theoretical identification, the two-indicator rule, sample distribution, treatment of missing values, mediator effects, discriminant validity, convergent validity, post hoc model modification, equivalent models, and alternative models. Further research with robust, consistent methodological designs, from model identification to model respecification, is needed to improve the validity of the research.

  9. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

    ERIC Educational Resources Information Center

    Liu, Boquan; Polce, Evan; Sprott, Julien C.; Jiang, Jack J.

    2018-01-01

    Purpose: The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Study Design: Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100…

  10. Effects of a Teen Pregnancy Prevention Program on Teens' Attitudes toward Sexuality: A Latent Trait Modeling Approach

    ERIC Educational Resources Information Center

    Thomas, Charles L.; Dimitrov, Dimiter M.

    2007-01-01

    The purpose of this study was to examine the effects of program interventions in a school-based teen pregnancy program on hypothesized constructs underlying teens' attitudes toward sexuality. An important task related to this purpose was the validation of the constructs and their stability from pre- to postintervention measures. Data from 1,136…

  11. Evaluation of the whole body physiologically based pharmacokinetic (WB-PBPK) modeling of drugs.

    PubMed

    Munir, Anum; Azam, Shumaila; Fazal, Sahar; Bhatti, A I

    2018-08-14

    Physiologically based pharmacokinetic (PBPK) modeling is a supporting tool in drug discovery and development. Simulations produced by these models help to save time and aid in examining the effects of different variables on the pharmacokinetics of drugs. For this purpose, Sheila and Peters suggested a PBPK model capable of performing simulations to study the absorption of a given drug. There is a need to extend this model to the whole body, entailing the other processes of distribution, metabolism, and elimination besides absorption. The aim of this study is to hypothesize a WB-PBPK model by integrating absorption, distribution, metabolism, and elimination processes with the existing PBPK model. Absorption, distribution, metabolism, and elimination models were designed, integrated with the PBPK model, and validated. For validation purposes, clinical records of a few drugs were collected from the literature. The developed WB-PBPK model was affirmed by comparing the simulations produced by the model against the collected clinical data. It is proposed that the WB-PBPK model may be used in the pharmaceutical industry to create pharmacokinetic profiles of drug candidates for better outcomes, as it advances the PBPK model and creates comprehensive PK profiles for drug ADME in concentration-time plots. Copyright © 2018 Elsevier Ltd. All rights reserved.
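
A whole-body PBPK model couples many organ compartments by blood flow, but the building block of each compartment is a mass-balance ODE. A deliberately minimal one-compartment sketch with first-order absorption and elimination; all names and parameter values are illustrative, not the authors' model:

```python
def simulate_pk(dose, ka, ke, vd, t_end=24.0, dt=0.01):
    """Toy one-compartment model with first-order absorption (rate ka)
    and elimination (rate ke):
        dA_gut/dt = -ka * A_gut
        dA_c/dt   =  ka * A_gut - ke * A_c
    Integrated by forward Euler; returns (times, plasma concentration
    A_c / vd). A WB-PBPK model chains many such compartments, one per
    organ, each with the same kind of mass balance."""
    a_gut, a_c = dose, 0.0
    times, conc = [], []
    t = 0.0
    while t <= t_end:
        times.append(t)
        conc.append(a_c / vd)
        absorbed = ka * a_gut * dt
        a_gut -= absorbed
        a_c += absorbed - ke * a_c * dt
        t += dt
    return times, conc
```

Validation of the kind described above amounts to overlaying the simulated concentration-time curve on observed clinical concentrations for the same dose.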

  12. Modeling Relationships among Learning, Attitude, Self-Perception, and Science Achievement for Grade 8 Saudi Students

    ERIC Educational Resources Information Center

    Tighezza, M'Hamed

    2014-01-01

    The purpose of the present study was to examine the validity of modeling science achievement in terms of 3 social psychological variables (school connectedness, science attitude, and active learning) and 2 self-perception variables (self-confidence and science value). Two models were tested: full mediation and partial mediation. In the…

  13. Combined Use of Tissue Morphology, Neural Network Analysis of Chromatin Texture & Clinical Variables to Predict Prostate Cancer Agressiveness from Biopsy Material

    DTIC Science & Technology

    1999-10-01

    The purpose of this report is to combine clinical, serum, pathological, and computer-derived information into an artificial neural network to develop… (year 01). Development of an artificial neural network model (year 02). Prospective validation of this model (projected year 03). All models will be tested

  14. Improved Characters and Student Learning Outcomes through Development of Character Education Based General Physics Learning Model

    ERIC Educational Resources Information Center

    Derlina; Sabani; Mihardi, Satria

    2015-01-01

    Education research in Indonesia has begun to lead to the development of character education and is no longer fixated on cognitive learning outcomes. This study aimed to produce a character education based general physics learning model (CEBGP Learning Model), together with valid, effective, and practical peripheral devices, to improve character…

  15. Modeling and validating the cost and clinical pathway of colorectal cancer.

    PubMed

    Joranger, Paal; Nesbakken, Arild; Hoff, Geir; Sorbye, Halfdan; Oshaug, Arne; Aas, Eline

    2015-02-01

    Cancer is a major cause of morbidity and mortality, and colorectal cancer (CRC) is the third most common cancer in the world. The estimated costs of CRC treatment vary considerably, and if CRC costs in a model are based on empirically estimated total costs of stage I, II, III, or IV treatments, then they lack some flexibility to capture future changes in CRC treatment. The purpose was 1) to describe how to model CRC costs and survival and 2) to validate the model in a transparent and reproducible way. We applied a semi-Markov model with 70 health states and tracked age and time since specific health states (using tunnels and a 3-dimensional data matrix). The model parameters are based on an observational study at Oslo University Hospital (2049 CRC patients), the National Patient Register, literature, and expert opinion. The target population was patients diagnosed with CRC. The model followed the patients diagnosed with CRC from the age of 70 until death or age 100. The study took the perspective of health care payers. The model was validated for face validity, internal and external validity, and cross-validity. The validation showed a satisfactory match with other models and empirical estimates for both cost and survival time, without any preceding calibration of the model. The model can be used to 1) address a range of CRC-related themes (general model) like survival and evaluation of the cost of treatment and prevention measures; 2) make predictions from intermediate to final outcomes; 3) estimate changes in resource use and costs due to changing guidelines; and 4) adjust for future changes in treatment and trends over time. The model is adaptable to other populations. © The Author(s) 2014.
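
The tunnel-state device mentioned above (tracking time since a health state) can be sketched with an ordinary cohort trace. The states and probabilities below are invented for illustration, not the paper's 70-state model:

```python
def cohort_trace(p_matrix, start, n_cycles):
    """Propagate a cohort through a Markov model. p_matrix maps each
    state to its outgoing transition probabilities. Tunnel states such
    as rec_yr1 -> rec_yr2 make the probabilities depend on time since an
    event, which emulates semi-Markov behaviour with ordinary states."""
    states = list(p_matrix)
    dist = dict(start)
    trace = [dict(dist)]
    for _ in range(n_cycles):
        new = {s: 0.0 for s in states}
        for s, mass in dist.items():
            for target, p in p_matrix[s].items():
                new[target] += mass * p
        dist = new
        trace.append(dict(dist))
    return trace

# Hypothetical toy model: mortality risk depends on years since recurrence.
P = {
    "well":    {"well": 0.90, "rec_yr1": 0.08, "dead": 0.02},
    "rec_yr1": {"rec_yr2": 0.70, "dead": 0.30},   # first year after recurrence
    "rec_yr2": {"well": 0.50, "dead": 0.50},      # second year: different risks
    "dead":    {"dead": 1.0},
}
```

Costs and survival then follow by weighting each cycle's state occupancy with per-state costs and utilities.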

  16. Evaluation of migration models that might be used in support of regulations for food-contact plastics.

    PubMed

    Begley, T; Castle, L; Feigenbaum, A; Franz, R; Hinrichs, K; Lickly, T; Mercea, P; Milana, M; O'Brien, A; Rebre, S; Rijk, R; Piringer, O

    2005-01-01

    Materials and articles intended to come into contact with food must be shown to be safe because they might interact with food during processing, storage and the transportation of foodstuffs. Framework Directive 89/109/EEC and its related specific Directives provide this safety basis for the protection of the consumer against inadmissible chemical contamination from food-contact materials. Recently, the European Commission charged an international group of experts to demonstrate that migration modelling can be regarded as a valid and reliable tool to calculate 'reasonable worst-case' migration rates from the most important food-contact plastics into the European Union official food simulants. The paper summarizes the main steps followed to build up and validate a migration estimation model that can be used, for a series of plastic food-contact materials and migrants, for regulatory purposes. Analytical solutions of the diffusion equation in conjunction with an 'upper limit' equation for the migrant diffusion coefficient, D(P), and the use of 'worst case' partitioning coefficients K(P,F) were used in the migration model. The results obtained were then validated, at a confidence level of 95%, by comparison with the available experimental evidence. The successful accomplishment of the goals of this project is reflected by the fact that in Directive 2002/72/EC, the European Commission included the mathematical modelling as an alternative tool to determine migration rates for compliance purposes.
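
The analytic solution referred to above follows from Fick's second law. A sketch of the short-time 'worst case' estimate with a cap at total migrant content; the function name and units are our assumptions, not the Directive's notation:

```python
import math

def worst_case_migration(c_p0, rho_p, d_p, big_d_p, t):
    """'Worst case' migration per contact area from a polymer slab into
    food, using the short-time analytic solution of Fick's second law
    under total-immersion, favourable-partitioning assumptions:
        m/A = 2 * c_p0 * rho_p * sqrt(D_p * t / pi)
    capped at the total migrant content of the slab, c_p0 * rho_p * d_p
    (initial concentration c_p0, polymer density rho_p, thickness d_p,
    diffusion coefficient big_d_p). Units are the caller's responsibility."""
    m_per_area = 2.0 * c_p0 * rho_p * math.sqrt(big_d_p * t / math.pi)
    return min(m_per_area, c_p0 * rho_p * d_p)
```

The regulatory use described in the abstract then compares this upper-bound estimate against the specific migration limit; only if the bound is exceeded is experimental testing needed.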

  17. Assessment of family functioning in Caucasian and Hispanic Americans: reliability, validity, and factor structure of the Family Assessment Device.

    PubMed

    Aarons, Gregory A; McDonald, Elizabeth J; Connelly, Cynthia D; Newton, Rae R

    2007-12-01

    The purpose of this study was to examine the factor structure, reliability, and validity of the Family Assessment Device (FAD) among a national sample of Caucasian and Hispanic American families receiving public sector mental health services. A confirmatory factor analysis conducted to test model fit yielded equivocal findings. With few exceptions, indices of model fit, reliability, and validity were poorer for Hispanic Americans compared with Caucasian Americans. Contrary to our expectation, an exploratory factor analysis did not result in a better fitting model of family functioning. Without stronger evidence supporting a reformulation of the FAD, we recommend against such a course of action. Findings highlight the need for additional research on the role of culture in measurement of family functioning.

  18. Accuracy and generalizability of using automated methods for identifying adverse events from electronic health record data: a validation study protocol.

    PubMed

    Rochefort, Christian M; Buckeridge, David L; Tanguay, Andréanne; Biron, Alain; D'Aragon, Frédérick; Wang, Shengrui; Gallix, Benoit; Valiquette, Louis; Audet, Li-Anne; Lee, Todd C; Jayaraman, Dev; Petrucci, Bruno; Lefebvre, Patricia

    2017-02-16

    Adverse events (AEs) in acute care hospitals are frequent and associated with significant morbidity, mortality, and costs. Measuring AEs is necessary for quality improvement and benchmarking purposes, but current detection methods are lacking in accuracy, efficiency, and generalizability. The growing availability of electronic health records (EHR) and the development of natural language processing techniques for encoding narrative data offer an opportunity to develop potentially better methods. The purpose of this study is to determine the accuracy and generalizability of using automated methods for detecting three high-incidence and high-impact AEs from EHR data: a) hospital-acquired pneumonia, b) ventilator-associated event, and c) central line-associated bloodstream infection. This validation study will be conducted among medical, surgical and ICU patients admitted between 2013 and 2016 to the Centre hospitalier universitaire de Sherbrooke (CHUS) and the McGill University Health Centre (MUHC), which has both French and English sites. A random 60% sample of CHUS patients will be used for model development purposes (cohort 1, development set). Using a random sample of these patients, a reference standard assessment of their medical chart will be performed. Multivariate logistic regression and the area under the curve (AUC) will be employed to iteratively develop and optimize three automated AE detection models (i.e., one per AE of interest) using EHR data from the CHUS. These models will then be validated on a random sample of the remaining 40% of CHUS patients (cohort 1, internal validation set) using chart review to assess accuracy. The most accurate models developed and validated at the CHUS will then be applied to EHR data from a random sample of patients admitted to the MUHC French site (cohort 2) and English site (cohort 3), a critical requirement given the use of narrative data, and accuracy will be assessed using chart review. Generalizability will be determined by comparing AUCs from cohorts 2 and 3 to those from cohort 1. This study will likely produce more accurate and efficient measures of AEs. These measures could be used to assess the incidence rates of AEs, evaluate the success of preventive interventions, or benchmark performance across hospitals.
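
The AUC used to compare cohorts in this protocol has a simple rank interpretation. A generic sketch (the Mann-Whitney formulation, not the study's software):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via its rank interpretation: the
    probability that a randomly chosen positive case scores above a
    randomly chosen negative case, counting ties as one half
    (the Mann-Whitney U statistic divided by n_pos * n_neg)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Because it depends only on ranks, the AUC transfers cleanly between cohorts whose risk-score scales may differ, which is what makes it a reasonable generalizability metric here.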

  19. The Bland-Altman Method Should Not Be Used in Regression Cross-Validation Studies

    ERIC Educational Resources Information Center

    O'Connor, Daniel P.; Mahar, Matthew T.; Laughlin, Mitzi S.; Jackson, Andrew S.

    2011-01-01

    The purpose of this study was to demonstrate the bias in the Bland-Altman (BA) limits of agreement method when it is used to validate regression models. Data from 1,158 men were used to develop three regression equations to estimate maximum oxygen uptake (R[superscript 2] = 0.40, 0.61, and 0.82, respectively). The equations were evaluated in a…
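
For reference, the Bland-Altman limits of agreement criticized in this study are computed as follows; the function name and toy data are ours. The bias the article documents arises in part because, for regression predictions, the differences are correlated with the underlying values, violating the assumptions behind these limits:

```python
import math

def bland_altman(measured, predicted):
    """Bland-Altman statistics for paired measurement series: the mean
    difference (bias) and the 95% limits of agreement, computed as
    bias +/- 1.96 * SD of the differences."""
    diffs = [m - p for m, p in zip(measured, predicted)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```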

  20. The Predictive Validity of Interim Assessment Scores Based on the Full-Information Bifactor Model for the Prediction of End-of-Grade Test Performance

    ERIC Educational Resources Information Center

    Immekus, Jason C.; Atitya, Ben

    2016-01-01

    Interim tests are a central component of district-wide assessment systems, yet their technical quality to guide decisions (e.g., instructional) has been repeatedly questioned. In response, the study purpose was to investigate the validity of a series of English Language Arts (ELA) interim assessments in terms of dimensionality and prediction of…

  1. A Synthesis Model of Sustainable Market Orientation: Conceptualization, Measurement, and Influence on Academic Accreditation--A Case Study of Egyptian-Accredited Faculties

    ERIC Educational Resources Information Center

    Abou-Warda, Sherein H.

    2014-01-01

    Higher education institutions are increasingly concerned about accreditation. Although sustainable market orientation (SMO) bears on academic accreditation, to date, no study has developed a valid scale of SMO or assessed its influence on accreditation. The purpose of this paper is to construct and validate an SMO scale that was developed in…

  2. Rasch Modeling of Revised Token Test Performance: Validity and Sensitivity to Change

    ERIC Educational Resources Information Center

    Hula, William; Doyle, Patrick J.; McNeil, Malcolm R.; Mikolic, Joseph M.

    2006-01-01

    The purpose of this research was to examine the validity of the 55-item Revised Token Test (RTT) and to compare traditional and Rasch-based scores in their ability to detect group differences and change over time. The 55-item RTT was administered to 108 left- and right-hemisphere stroke survivors, and the data were submitted to Rasch analysis.…

  3. The Development of a Secondary-Level Solo Wind Instrument Performance Rubric Using the Multifaceted Rasch Partial Credit Measurement Model

    ERIC Educational Resources Information Center

    Wesolowski, Brian C.; Amend, Ross M.; Barnstead, Thomas S.; Edwards, Andrew S.; Everhart, Matthew; Goins, Quentin R.; Grogan, Robert J., III; Herceg, Amanda M.; Jenkins, S. Ira; Johns, Paul M.; McCarver, Christopher J.; Schaps, Robin E.; Sorrell, Gary W.; Williams, Jonathan D.

    2017-01-01

    The purpose of this study was to describe the development of a valid and reliable rubric to assess secondary-level solo instrumental music performance based on principles of invariant measurement. The research questions that guided this study included (1) What is the psychometric quality (i.e., validity, reliability, and precision) of a scale…

  4. Testing of the SEE and OEE post-hip fracture.

    PubMed

    Resnick, Barbara; Orwig, Denise; Zimmerman, Sheryl; Hawkes, William; Golden, Justine; Werner-Bronzert, Michelle; Magaziner, Jay

    2006-08-01

    The purpose of this study was to test the reliability and validity of the Self-Efficacy for Exercise (SEE) and the Outcome Expectations for Exercise (OEE) scales in a sample of 166 older women post-hip fracture. There was some evidence of validity of the SEE and OEE based on confirmatory factor analysis and Rasch model testing, criterion-based and convergent validity, and evidence of internal consistency based on alpha coefficients and separation indices, and reliability based on R2 estimates. Rasch model testing demonstrated that some items had high variability. Based on these findings, suggestions are made for how items could be revised and the scales improved for future use.

  5. Goals and Status of the NASA Juncture Flow Experiment

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Morrison, Joseph H.

    2016-01-01

    The NASA Juncture Flow experiment is a new effort whose focus is attaining validation data in the juncture region of a wing-body configuration. The experiment is designed specifically for the purpose of CFD validation. Current turbulence models routinely employed by Reynolds-averaged Navier-Stokes CFD are inconsistent in their prediction of corner flow separation in aircraft juncture regions, so experimental data in the near-wall region of such a configuration will be useful both for assessment as well as for turbulence model improvement. This paper summarizes the Juncture Flow effort to date, including preliminary risk-reduction experiments already conducted and planned future experiments. The requirements and challenges associated with conducting a quality validation test are discussed.

  6. Validity of the Eating Attitude Test among Exercisers.

    PubMed

    Lane, Helen J; Lane, Andrew M; Matheson, Hilary

    2004-12-01

    Theory testing and construct measurement are inextricably linked. To date, no published research has looked at the factorial validity of an existing eating attitude inventory for use with exercisers. The Eating Attitude Test (EAT) is a 26-item measure that yields a single index of disordered eating attitudes. The original factor analysis showed three interrelated factors: dieting behavior (13 items), oral control (7 items), and bulimia nervosa-food preoccupation (6 items). The primary purpose of the study was to examine the factorial validity of the EAT among a sample of exercisers. The second purpose was to investigate relationships between eating attitudes scores and selected psychological constructs. In stage one, 598 regular exercisers completed the EAT. Confirmatory factor analysis (CFA) was used to test the single-factor model, a three-factor model, and a four-factor model, which distinguished bulimia from food preoccupation. CFA of the single-factor model (RCFI = 0.66, RMSEA = 0.10) and the three-factor model (RCFI = 0.74, RMSEA = 0.09) showed poor model fit. There was marginal fit for the four-factor model (RCFI = 0.91, RMSEA = 0.06). Results indicated that five items showed poor factor loadings. After these five items were discarded, the three models were re-analyzed. CFA results indicated that the single-factor model (RCFI = 0.76, RMSEA = 0.10) and three-factor model (RCFI = 0.82, RMSEA = 0.08) showed poor fit. CFA results for the four-factor model showed acceptable fit indices (RCFI = 0.98, RMSEA = 0.06). Stage two explored relationships between EAT scores, mood, self-esteem, and motivational indices toward exercise in terms of self-determination, enjoyment and competence. Correlation results indicated that depressed mood scores positively correlated with bulimia and dieting scores. Further, dieting was inversely related with self-determination toward exercising. Collectively, findings suggest that a 21-item four-factor model shows promising validity coefficients among exercise participants, and that future research is needed to investigate eating attitudes among samples of exercisers. Key points: Validity of psychometric measures should be thoroughly investigated; researchers should not assume that a scale validated on one sample will show the same validity coefficients in a different population. The Eating Attitude Test is a commonly used scale, and the present study shows a revised 21-item scale is suitable for exercisers. Researchers using the Eating Attitude Test should use the subscales of dieting, oral control, food preoccupation, and bulimia. Future research should involve qualitative techniques and interview exercise participants to explore the nature of eating attitudes.
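
The RMSEA values quoted throughout this abstract come from the model chi-square. A minimal sketch of the usual point estimate (the function is generic; the RCFI reported alongside it is a robust comparative fit index not reproduced here):

```python
import math

def rmsea(chi_sq, df, n):
    """Point estimate of the root mean square error of approximation
    from a model chi-square, its degrees of freedom, and sample size:
        RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    Values near or below 0.06 are conventionally read as acceptable fit,
    which matches how the four-factor model is judged above."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))
```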

  7. TH-AB-BRA-07: PENELOPE-Based GPU-Accelerated Dose Calculation System Applied to MRI-Guided Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Y; Mazur, T; Green, O

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: We first translated PENELOPE from FORTRAN to C++ and validated that the translation produced equivalent results. Then we adapted the C++ code to CUDA in a workflow optimized for GPU architecture. We expanded upon the original code to include voxelized transport boosted by Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, we incorporated the vendor-provided MRIdian head model into the code. We performed a set of experimental measurements on MRIdian to examine the accuracy of both the head model and gPENELOPE, and then applied gPENELOPE toward independent validation of patient doses calculated by MRIdian’s KMC. Results: We achieve an average acceleration factor of 152 compared to the original single-thread FORTRAN implementation with the original accuracy preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1) and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: We developed a Monte Carlo simulation platform based on a GPU-accelerated version of PENELOPE. We validated that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.

  8. Modeling and experimental validation of a Hybridized Energy Storage System for automotive applications

    NASA Astrophysics Data System (ADS)

    Fiorenti, Simone; Guanetti, Jacopo; Guezennec, Yann; Onori, Simona

    2013-11-01

    This paper presents the development and experimental validation of a dynamic model of a Hybridized Energy Storage System (HESS) consisting of a parallel connection of a lead acid (PbA) battery and double layer capacitors (DLCs), for automotive applications. The dynamic modeling of both the PbA battery and the DLC has been tackled via the equivalent-electric-circuit approach. Experimental tests are designed for identification purposes. Parameters of the PbA battery model are identified as a function of state of charge and current direction, whereas parameters of the DLC model are identified for different temperatures. A physical HESS has been assembled at the Center for Automotive Research at The Ohio State University and used as a test-bench to validate the model against a typical current profile generated for Start&Stop applications. The HESS model is then integrated into a vehicle simulator to assess the effects of the battery hybridization on the vehicle fuel economy and mitigation of the battery stress.
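
An equivalent-circuit model of the kind identified here reduces to a handful of ODEs. A sketch of a first-order branch (series resistance plus one RC pair) driven by a current profile; the parameter names and the constant open-circuit voltage are simplifying assumptions, not the paper's identified values:

```python
def simulate_rc_branch(current, dt, r0, r1, c1, ocv=2.5):
    """Terminal voltage of a first-order equivalent circuit: a constant
    open-circuit voltage behind a series resistance r0 and one RC pair
    (r1 in parallel with c1). Positive current means discharge. A full
    model would make ocv, r0, r1, and c1 depend on state of charge,
    current direction, and temperature, as in the identification above."""
    v_rc = 0.0                                     # voltage across the RC pair
    out = []
    for i in current:
        v_rc += dt * (i / c1 - v_rc / (r1 * c1))   # forward-Euler RC dynamics
        out.append(ocv - i * r0 - v_rc)
    return out
```

Identification then amounts to fitting r0, r1, and c1 so that this simulated terminal voltage matches the measured response to pulse-current tests.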

  9. A meta-model for computer executable dynamic clinical safety checklists.

    PubMed

    Nan, Shan; Van Gorp, Pieter; Lu, Xudong; Kaymak, Uzay; Korsten, Hendrikus; Vdovjak, Richard; Duan, Huilong

    2017-12-12

    A safety checklist is a cognitive tool that supports the short-term memory of medical workers, with the purpose of reducing medical errors caused by oversight and omission. To facilitate the daily use of safety checklists, computerized systems embedded in the clinical workflow and adapted to patient context are increasingly developed. However, the current hard-coded approach of implementing checklists in these systems increases the cognitive effort of clinical experts and the coding effort of informaticists. This is due to the lack of a formal representation format that is both understandable by clinical experts and executable by computer programs. We developed a dynamic checklist meta-model with a three-step approach. Dynamic checklist modeling requirements were extracted by performing a domain analysis. Then, existing modeling approaches and tools were investigated with the purpose of reusing these languages. Finally, the meta-model was developed by eliciting domain concepts and their hierarchies. The feasibility of using the meta-model was validated by two case studies. The meta-model was mapped to specific modeling languages according to the requirements of hospitals. Using the proposed meta-model, a comprehensive coronary artery bypass graft peri-operative checklist set and a percutaneous coronary intervention peri-operative checklist set have been developed in a Dutch hospital and a Chinese hospital, respectively. The results show that it is feasible to use the meta-model to facilitate the modeling and execution of dynamic checklists. We propose a novel meta-model for dynamic checklists with the purpose of facilitating their creation. The meta-model is a framework for reusing existing modeling languages and tools to model dynamic checklists. The feasibility of using the meta-model was validated by implementing a use case in the system.

  10. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicles (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models in accurately predicting the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  11. Numerical modeling and preliminary validation of drag-based vertical axis wind turbine

    NASA Astrophysics Data System (ADS)

    Krysiński, Tomasz; Buliński, Zbigniew; Nowak, Andrzej J.

    2015-03-01

    The main purpose of this article is to verify and validate the mathematical description of the airflow around a wind turbine with vertical axis of rotation, which could be considered as representative for this type of devices. Mathematical modeling of the airflow around wind turbines in particular those with the vertical axis is a problematic matter due to the complex nature of this highly swirled flow. Moreover, it is turbulent flow accompanied by a rotation of the rotor and the dynamic boundary layer separation. In such conditions, the key aspects of the mathematical model are accurate turbulence description, definition of circular motion as well as accompanying effects like centrifugal force or the Coriolis force and parameters of spatial and temporal discretization. The paper presents the impact of the different simulation parameters on the obtained results of the wind turbine simulation. Analysed models have been validated against experimental data published in the literature.

  12. Injector Design Tool Improvements: User's manual for FDNS V.4.5

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Wei, Hong; Liu, Jiwen

    1998-01-01

    The major emphasis of the current effort is the development and validation of an efficient parallel-machine computational model, based on the FDNS code, to analyze the fluid dynamics of a wide variety of liquid jet configurations for general liquid rocket engine injection system applications. This model includes physical models for droplet atomization, breakup/coalescence, evaporation, turbulence mixing and gas-phase combustion. Benchmark validation cases for liquid rocket engine chamber combustion conditions will be performed for model validation purposes. Test cases may include shear coaxial, swirl coaxial and impinging injection systems with combinations of LOX/H2 or LOX/RP-1 propellant injector elements used in rocket engine designs. As the final goal of this project, a well-tested parallel CFD performance methodology, together with a user's operation description, will be reported in a final technical report at the end of the proposed research effort.

  13. Small-signal model for the series resonant converter

    NASA Technical Reports Server (NTRS)

    King, R. J.; Stuart, T. A.

    1985-01-01

    The results of a previous discrete-time model of the series resonant dc-dc converter are reviewed and from these a small signal dynamic model is derived. This model is valid for low frequencies and is based on the modulation of the diode conduction angle for control. The basic converter is modeled separately from its output filter to facilitate the use of these results for design purposes. Experimental results are presented.

  14. Assessment of MARMOT Grain Growth Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fromm, B.; Zhang, Y.; Schwen, D.

    2015-12-01

    This report assesses the MARMOT grain growth model by comparing modeling predictions with experimental results from thermal annealing. The purpose here is threefold: (1) to demonstrate the validation approach of using thermal annealing experiments with non-destructive characterization, (2) to test the reconstruction capability and computation efficiency in MOOSE, and (3) to validate the grain growth model and the associated parameters that are implemented in MARMOT for UO 2. To assure a rigorous comparison, the 2D and 3D initial experimental microstructures of UO 2 samples were characterized using non-destructive synchrotron X-ray techniques. The same samples were then annealed at 2273 K for grain growth, and their initial microstructures were used as initial conditions for simulated annealing at the same temperature using MARMOT. After annealing, the final experimental microstructures were characterized again to compare with the results from simulations. So far, comparison between modeling and experiments has been done for 2D microstructures, and 3D comparison is underway. The preliminary results demonstrated the usefulness of the non-destructive characterization method for MARMOT grain growth model validation. A detailed analysis of the 3D microstructures is in progress to fully validate the current model in MARMOT.

  15. Forward Bay Cover Separation Modeling and Testing for the Orion Multi-Purpose Crew Vehicle

    NASA Technical Reports Server (NTRS)

    Ali, Yasmin; Radke, Tara; Chuhta, Jesse; Hughes, Michael

    2014-01-01

    Spacecraft multi-body separation events during atmospheric descent require complex testing and analysis to validate the flight separation dynamics model and to verify no recontact. NASA Orion Multi-Purpose Crew Vehicle (MPCV) teams examined key model parameters and risk areas to develop a robust but affordable test campaign in order to validate and verify the Forward Bay Cover (FBC) separation event for Exploration Flight Test-1 (EFT-1). The FBC jettison simulation model is highly complex, consisting of dozens of parameters varied simultaneously, with numerous multi-parameter interactions (coupling and feedback) among the various model elements, and encompassing distinct near-field, mid-field, and far-field regimes. The test campaign was composed of component-level testing (for example gas-piston thrusters and parachute mortars), ground FBC jettison tests, and FBC jettison air-drop tests that were accomplished by a highly multi-disciplinary team. Three ground jettison tests isolated the testing of mechanisms and structures to anchor the simulation models excluding aerodynamic effects. Subsequently, two air-drop tests added aerodynamic and parachute parameters, and served as integrated system demonstrations, which had been preliminarily explored during the Orion Pad Abort-1 (PA-1) flight test in May 2010. Both ground and drop tests provided extensive data to validate analytical models and to verify the FBC jettison event for EFT-1, but more testing is required to support human certification, for which NASA and Lockheed Martin are applying knowledge from Apollo and EFT-1 testing and modeling to develop a robust but affordable human spacecraft capability.

  16. Concurrent and convergent validity of the mobility- and multidimensional-hierarchical disability categorization models with physical performance in community older adults.

    PubMed

    Hu, Ming-Hsia; Yeh, Chih-Jun; Chen, Tou-Rong; Wang, Ching-Yi

    2014-01-01

    A valid, time-efficient and easy-to-use instrument is important for busy clinical settings, large-scale surveys, and community screening. The purpose of this study was to validate the mobility hierarchical disability categorization model (an abbreviated model) by investigating its concurrent validity with the multidimensional hierarchical disability categorization model (a comprehensive model) and triangulating both models with physical performance measures in older adults. 604 community-dwelling older adults aged at least 60 years volunteered to participate. Self-reported function in the mobility, instrumental activities of daily living (IADL) and activities of daily living (ADL) domains was recorded, and disability status was then determined based on both the multidimensional and the mobility hierarchical categorization models. The physical performance measures, consisting of grip strength and usual and fastest gait speeds (UGS, FGS), were collected on the same day. The two categorization models showed high correlation (ρ = 0.92, p < 0.001) and agreement (kappa = 0.61, p < 0.0001). Physical performance measures demonstrated significantly different group means among the disability subgroups based on both categorization models. The results of multiple regression analysis indicated that the two models individually explain a similar amount of variance in all physical performance measures, after adjustment for age, sex, and number of comorbidities. Our results indicate that the mobility hierarchical disability categorization model is a valid and time-efficient tool for large surveys or screening use.

  17. Calibration and validation of rockfall models

    NASA Astrophysics Data System (ADS)

    Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.

    2013-04-01

    Calibrating and validating landslide models is extremely difficult due to particular characteristics of landslides: limited recurrence in time, relatively low frequency of events, short durability of post-event traces, and poor availability of continuous monitoring data, especially for small landslides and rockfalls. For this reason, most of the rockfall models presented in the literature completely lack calibration and validation of their results. In this contribution, we explore different strategies for rockfall model calibration and validation, starting from both a historical event and a full-scale field test. The event occurred in 2012 in Courmayeur (Western Alps, Italy) and caused serious damage to quarrying facilities. This event was studied soon after its occurrence through a field campaign aimed at mapping the blocks arrested along the slope, the shape and location of the detachment area, and the traces of scars associated with impacts of blocks on the slope. The full-scale field test was performed by Geovert Ltd in the Christchurch area (New Zealand) after the 2011 earthquake. During the test, a number of large blocks were mobilized from the upper part of the slope and filmed with high-speed cameras from different viewpoints. The movies of each released block were analysed to identify the block shape, the propagation path, the location of impacts, the height of the trajectory and the velocity of the block along the path. Both calibration and validation of rockfall models should be based on optimizing the agreement between the actual trajectories or locations of arrested blocks and the simulated ones. A measure that describes this agreement is therefore needed. For calibration purposes, this measure should be simple enough to allow trial-and-error repetitions of the model for parameter optimization.
In this contribution we explore different calibration/validation measures: (1) the percentage of simulated blocks arresting within a buffer of the actual blocks; (2) the percentage of trajectories passing through the buffer of the actual rockfall path; (3) the mean distance between the arrest location of each simulated block and the location of the nearest actual block; (4) the mean distance between the detachment location of each simulated block and the detachment location of the actual block closest to the arrest position. By applying the four measures to the case studies, we observed that all measures are able to represent model performance for validation purposes. However, the third measure is simpler and more reliable than the others, and seems optimal for model calibration, especially when using parameter estimation and optimization software for automated calibration.
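
Measure (3) above, the mean distance from each simulated arrest position to the nearest observed block, can be sketched as follows (a minimal illustration with hypothetical 2-D coordinates; the studies themselves work with field-mapped block positions):

```python
import numpy as np

def mean_nearest_arrest_distance(simulated_xy, observed_xy):
    """Mean Euclidean distance from each simulated arrest position
    to its nearest actually observed block (calibration measure 3)."""
    sim = np.asarray(simulated_xy, dtype=float)
    obs = np.asarray(observed_xy, dtype=float)
    # pairwise distance matrix, shape (n_simulated, n_observed)
    d = np.linalg.norm(sim[:, None, :] - obs[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

In an automated calibration loop, a parameter-estimation tool would minimize this single scalar over the restitution and friction parameters of the rockfall model, which is what makes the measure convenient for trial-and-error optimization.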

  18. Reliability and Model Fit

    ERIC Educational Resources Information Center

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  19. Modelling Question Difficulty in an A Level Physics Examination

    ERIC Educational Resources Information Center

    Crisp, Victoria; Grayson, Rebecca

    2013-01-01

    "Item difficulty modelling" is a technique used for a number of purposes such as to support future item development, to explore validity in relation to the constructs that influence difficulty and to predict the difficulty of items. This research attempted to explore the factors influencing question difficulty in a general qualification…

  20. The Co-Creation-Wheel: A Four-Dimensional Model of Collaborative Interorganistional Innovation

    ERIC Educational Resources Information Center

    Ehlen, Corry; van der Klink, Marcel; Stoffers, Jol; Boshuizen, Henny

    2017-01-01

    Purpose: This study aims to design and validate a conceptual and practical model of co-creation. Co-creation, to design collaborative new products, services and processes in contact with users, has become more and more important because organisations increasingly require multidisciplinary collaboration inside and outside the organisation to…

  1. An Assessment of the Quantitative Literacy of Undergraduate Students

    ERIC Educational Resources Information Center

    Wilkins, Jesse L. M.

    2016-01-01

    Quantitative literacy (QLT) represents an underlying higher-order construct that accounts for a person's willingness to engage in quantitative situations in everyday life. The purpose of this study is to retest the construct validity of a model of quantitative literacy (Wilkins, 2010). In this model, QLT represents a second-order factor that…

  2. Judgment Research and the Dimensional Model of Personality

    ERIC Educational Resources Information Center

    Garb, Howard N.

    2008-01-01

    Comments on the original article "Plate tectonics in the classification of personality disorder: Shifting to a dimensional model," by T. A. Widiger and T. J. Trull. The purpose of this comment is to address (a) whether psychologists know how personality traits are currently assessed by clinicians and (b) the reliability and validity of those…

  3. Validation of a computational knee joint model using an alignment method for the knee laxity test and computed tomography.

    PubMed

    Kang, Kyoung-Tak; Kim, Sung-Hwan; Son, Juhyun; Lee, Young Han; Koh, Yong-Gon

    2017-01-01

    Computational models have been identified as efficient techniques in the clinical decision-making process. However, in most previous studies computational models were validated using published data, and the kinematic validation of such models remains a challenge. Recently, studies using medical imaging have provided a more accurate visualization of knee joint kinematics. The purpose of the present study was to perform kinematic validation of a subject-specific computational knee joint model by comparison with the subject's medical imaging under identical laxity conditions. The laxity test was applied to the anterior-posterior drawer under 90° flexion and the varus-valgus under 20° flexion with a series of stress radiographs, a Telos device, and computed tomography. The loading condition in the computational subject-specific knee joint model was identical to the laxity test condition in the medical image. Our computational model showed knee laxity kinematic trends that were consistent with the computed tomography images, except for negligible differences due to the indirect application of the subject's in vivo material properties. Medical imaging based on computed tomography with the laxity test allowed us to measure not only the precise translation but also the rotation of the knee joint. This methodology will be beneficial in the validation of laxity tests for subject- or patient-specific computational models.

  4. Circulation Control Model Experimental Database for CFD Validation

    NASA Technical Reports Server (NTRS)

    Paschal, Keith B.; Neuhart, Danny H.; Beeler, George B.; Allan, Brian G.

    2012-01-01

    A 2D circulation control wing was tested in the Basic Aerodynamic Research Tunnel at the NASA Langley Research Center. A traditional circulation control wing employs tangential blowing along the span over a trailing-edge Coanda surface for the purpose of lift augmentation. This model has been tested extensively at the Georgia Tech Research Institute for the purpose of performance documentation at various blowing rates. The current study seeks to expand on the previous work by documenting additional flow-field data needed for validation of computational fluid dynamics. Two jet momentum coefficients were tested during this entry: 0.047 and 0.114. Boundary-layer transition was investigated and turbulent boundary layers were established on both the upper and lower surfaces of the model. Chordwise and spanwise pressure measurements were made, and tunnel sidewall pressure footprints were documented. Laser Doppler Velocimetry measurements were made on both the upper and lower surface of the model at two chordwise locations (x/c = 0.8 and 0.9) to document the state of the boundary layers near the spanwise blowing slot.

  5. Modelling the pre-assessment learning effects of assessment: evidence in the validity chain

    PubMed Central

    Cilliers, Francois J; Schuwirth, Lambert W T; van der Vleuten, Cees P M

    2012-01-01

    OBJECTIVES We previously developed a model of the pre-assessment learning effects of consequential assessment and started to validate it. The model comprises assessment factors, mechanism factors and learning effects. The purpose of this study was to continue the validation process. For stringency, we focused on a subset of assessment factor–learning effect associations that featured least commonly in a baseline qualitative study. Our aims were to determine whether these uncommon associations were operational in a broader but similar population to that in which the model was initially derived. METHODS A cross-sectional survey of 361 senior medical students at one medical school was undertaken using a purpose-made questionnaire based on a grounded theory and comprising pairs of written situational tests. In each pair, the manifestation of an assessment factor was varied. The frequencies at which learning effects were selected were compared for each item pair, using an adjusted alpha to assign significance. The frequencies at which mechanism factors were selected were calculated. RESULTS There were significant differences in the learning effect selected between the two scenarios of an item pair for 13 of this subset of 21 uncommon associations, even when a p-value of < 0.00625 was considered to indicate significance. Three mechanism factors were operational in most scenarios: agency; response efficacy, and response value. CONCLUSIONS For a subset of uncommon associations in the model, the role of most assessment factor–learning effect associations and the mechanism factors involved were supported in a broader but similar population to that in which the model was derived. Although model validation is an ongoing process, these results move the model one step closer to the stage of usefully informing interventions. 
Results illustrate how factors not typically included in studies of the learning effects of assessment could confound the results of interventions aimed at using assessment to influence learning. PMID:23078685

  6. Modelling the pre-assessment learning effects of assessment: evidence in the validity chain.

    PubMed

    Cilliers, Francois J; Schuwirth, Lambert W T; van der Vleuten, Cees P M

    2012-11-01

    We previously developed a model of the pre-assessment learning effects of consequential assessment and started to validate it. The model comprises assessment factors, mechanism factors and learning effects. The purpose of this study was to continue the validation process. For stringency, we focused on a subset of assessment factor-learning effect associations that featured least commonly in a baseline qualitative study. Our aims were to determine whether these uncommon associations were operational in a broader but similar population to that in which the model was initially derived. A cross-sectional survey of 361 senior medical students at one medical school was undertaken using a purpose-made questionnaire based on a grounded theory and comprising pairs of written situational tests. In each pair, the manifestation of an assessment factor was varied. The frequencies at which learning effects were selected were compared for each item pair, using an adjusted alpha to assign significance. The frequencies at which mechanism factors were selected were calculated. There were significant differences in the learning effect selected between the two scenarios of an item pair for 13 of this subset of 21 uncommon associations, even when a p-value of < 0.00625 was considered to indicate significance. Three mechanism factors were operational in most scenarios: agency; response efficacy, and response value. For a subset of uncommon associations in the model, the role of most assessment factor-learning effect associations and the mechanism factors involved were supported in a broader but similar population to that in which the model was derived. Although model validation is an ongoing process, these results move the model one step closer to the stage of usefully informing interventions. 
Results illustrate how factors not typically included in studies of the learning effects of assessment could confound the results of interventions aimed at using assessment to influence learning. © Blackwell Publishing Ltd 2012.

  7. Development of Chemistry Game Card as an Instructional Media in the Subject of Naming Chemical Compound in Grade X

    NASA Astrophysics Data System (ADS)

    Bayharti; Iswendi, I.; Arifin, M. N.

    2018-04-01

    The purpose of this research was to produce a chemistry game card as an instructional medium for the subject of naming chemical compounds and to determine the degree of validity and practicality of the instructional media produced. This was a Research and Development (R&D) study that produced a product. The development model used was the 4-D model, which comprises four stages: (1) define, (2) design, (3) develop, and (4) disseminate. The research was restricted to the development stage. The chemistry game card developed was validated by seven validators, and its practicality was tested with class X6 students of SMAN 5 Padang. The research instrument was a questionnaire consisting of a validity sheet and a practicality sheet. Data were collected by distributing the questionnaire to the validators, chemistry teachers, and students. The data were analyzed using Cohen's Kappa formula. Based on the data analysis, the validity of the chemistry game card was 0.87 (category: highly valid) and its practicality was 0.91 (category: highly practical).
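
The Cohen's Kappa statistic used in the analysis above corrects observed agreement for agreement expected by chance; a minimal sketch (our own helper, not the authors' worksheet):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement between two raters and p_e is chance agreement derived
    from each rater's marginal category frequencies."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

On the conventional interpretation scale, values above roughly 0.81 indicate almost perfect agreement, consistent with the "highly valid" and "highly practical" categories assigned to 0.87 and 0.91 above.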

  8. Method validation for chemical composition determination by electron microprobe with wavelength dispersive spectrometer

    NASA Astrophysics Data System (ADS)

    Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.

    2016-07-01

    The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One of the advantages of analytical method validation is translated into a level of confidence about the measurement results reported to satisfy a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis has been used over extremely wide areas, mainly in the field of materials science, impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house method validation for elemental composition determination by WDS are shown. SRM 482, a binary alloy Cu-Au of different compositions, was used during the validation protocol following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters more frequently requested to get the accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.

  9. Prediction of functional aerobic capacity without exercise testing

    NASA Technical Reports Server (NTRS)

    Jackson, A. S.; Blair, S. N.; Mahar, M. T.; Wier, L. T.; Ross, R. M.; Stuteville, J. E.

    1990-01-01

    The purpose of this study was to develop functional aerobic capacity prediction models that do not require exercise tests (N-Ex) and to compare their accuracy with Astrand single-stage submaximal prediction methods. The data of 2,009 subjects (9.7% female) were randomly divided into validation (N = 1,543) and cross-validation (N = 466) samples. The validation sample was used to develop two N-Ex models to estimate VO2peak. Gender, age, body composition, and self-reported activity were used to develop the two N-Ex prediction models. One model estimated percent fat from skinfolds (N-Ex %fat) and the other used body mass index (N-Ex BMI) to represent body composition. The multiple correlations for the developed models were R = 0.81 (SE = 5.3 ml.kg-1.min-1) and R = 0.78 (SE = 5.6 ml.kg-1.min-1). This accuracy was confirmed when the models were applied to the cross-validation sample. The N-Ex models were more accurate than VO2peak estimates obtained from the Astrand prediction models, whose SEs ranged from 5.5-9.7 ml.kg-1.min-1. The N-Ex models were further cross-validated on 59 men on hypertensive medication and 71 men who were found to have a positive exercise ECG; with these subjects, the SEs of the N-Ex models ranged from 4.6-5.4 ml.kg-1.min-1. (ABSTRACT TRUNCATED AT 250 WORDS).
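
The N-Ex models are multiple linear regressions of VO2peak on non-exercise predictors. Fitting such a model and reporting the multiple R and standard error of estimate can be sketched as follows (the data and names are ours; the published coefficients are not reproduced here):

```python
import numpy as np

def fit_nex_model(predictors, vo2peak):
    """Ordinary least squares fit of VO2peak on non-exercise
    predictors; returns coefficients (intercept first), the
    multiple R, and the standard error of estimate (SEE)."""
    X = np.column_stack([np.ones(len(predictors)), predictors])  # intercept
    coef, *_ = np.linalg.lstsq(X, vo2peak, rcond=None)
    pred = X @ coef
    resid = vo2peak - pred
    dof = len(vo2peak) - X.shape[1]          # residual degrees of freedom
    see = np.sqrt(np.sum(resid ** 2) / dof)
    r = np.corrcoef(pred, vo2peak)[0, 1]     # multiple correlation
    return coef, r, see
```

The SEE plays the role of the SE values quoted above: it is the typical error, in ml.kg-1.min-1, of a VO2peak prediction on the validation sample.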

  10. Facultative Stabilization Pond: Measuring Biological Oxygen Demand using Mathematical Approaches

    NASA Astrophysics Data System (ADS)

    Wira S, Ihsan; Sunarsih, Sunarsih

    2018-02-01

    Pollution is a man-made phenomenon. Some pollutants discharged directly to the environment can create serious pollution problems, and untreated wastewater will cause contamination and even pollution of the water body. Biological Oxygen Demand (BOD) is the amount of oxygen required for oxidation by bacteria: the higher the BOD concentration, the greater the organic matter content. The purpose of this study was to predict the value of BOD contained in wastewater. Mathematical modeling methods were chosen to depict and predict the BOD values in facultative wastewater stabilization ponds. Measurements of sampled data were carried out to validate the model. The results indicated that a mathematical approach can be applied to predict the BOD contained in facultative wastewater stabilization ponds. The model was validated using the Absolute Mean Error (AME) with a 10% tolerance limit; the AME for the model was 7.38% (< 10%), so the model is valid. Furthermore, a mathematical approach can also be applied to illustrate and predict the contents of wastewater.
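
The validity check above compares simulated and observed BOD levels. Under the mean-comparison definition of AME assumed here (the abstract does not spell out the formula), the statistic is the absolute difference of means relative to the observed mean:

```python
def absolute_mean_error(observed, simulated):
    """AME (%) = |mean(simulated) - mean(observed)| / mean(observed) * 100.
    The model is accepted when AME falls below the tolerance (10% here)."""
    obs_mean = sum(observed) / len(observed)
    sim_mean = sum(simulated) / len(simulated)
    return abs(sim_mean - obs_mean) / obs_mean * 100.0
```

An AME of 7.38% against the measured BOD samples is below the 10% limit, so the model would be accepted under this criterion.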

  11. Validity and Reliability of the TGMD-2 in 7-10-Year-Old Flemish Children with Intellectual Disability

    ERIC Educational Resources Information Center

    Simons, Johan; Daly, Daniel; Theodorou, Fani; Caron, Cindy; Simons, Joke; Andoniadou, Elena

    2008-01-01

    The purpose of this study was to assess validity and reliability of the TGMD-2 on Flemish children with intellectual disability. The total sample consisted of 99 children aged 7-10 years of which 67 were boys and 32 were girls. A factor analysis supported a two factor model of the TGMD-2. A low significant age effect was also found for the object…

  12. Development and validation of instrument for ergonomic evaluation of tablet arm chairs

    PubMed Central

    Tirloni, Adriana Seára; dos Reis, Diogo Cunha; Bornia, Antonio Cezar; de Andrade, Dalton Francisco; Borgatto, Adriano Ferreti; Moro, Antônio Renato Pereira

    2016-01-01

    The purpose of this study was to develop and validate an evaluation instrument for tablet arm chairs based on ergonomic requirements, focused on user perceptions and using Item Response Theory (IRT). This exploratory study involved 1,633 participants (university students and professors) in four steps: a pilot study (n=26), semantic validation (n=430), content validation (n=11) and construct validation (n=1,166). Samejima's graded response model was applied to validate the instrument. The results showed that all the steps (theoretical and practical) of the instrument's development and validation processes were successful and that the group of remaining items (n=45) had a high consistency (0.95). This instrument can be used in the furniture industry by engineers and product designers and in the purchasing process of tablet arm chairs for schools, universities and auditoriums. PMID:28337099

  13. Quantitative impedance measurements for eddy current model validation

    NASA Astrophysics Data System (ADS)

    Khan, T. A.; Nakagawa, N.

    2000-05-01

    This paper reports on a series of laboratory-based impedance measurement data, collected by the use of a quantitatively accurate, mechanically controlled measurement station. The purpose of the measurement is to validate a BEM-based eddy current model against experiment. We have therefore selected two "validation probes," which are both split-D differential probes. Their internal structures and dimensions are extracted from x-ray CT scan data, and thus known within the measurement tolerance. A series of measurements was carried out using the validation probes and two Ti-6Al-4V block specimens, one containing two 1-mm long fatigue cracks, and the other containing six EDM notches of a range of sizes. A motor-controlled XY scanner performed raster scans over the cracks, with the probe riding on the surface with a spring-loaded mechanism to maintain the lift-off. Both an impedance analyzer and a commercial EC instrument were used in the measurement. The probes were driven in both differential and single-coil modes for the specific purpose of model validation. The differential measurements were done exclusively by the eddyscope, while the single-coil data were taken with both the impedance analyzer and the eddyscope. From the single-coil measurements, we obtained the transfer function to translate the voltage output of the eddyscope into impedance values, and then used it to translate the differential measurement data into impedance results. The presentation will highlight the schematics of the measurement procedure, representative raw data, an explanation of the post-processing procedure, and a series of resulting 2D flaw impedance results. A noise estimation will also be given, in order to quantify the accuracy of these measurements and for use in probability-of-detection estimation. This work was supported by the NSF Industry/University Cooperative Research Program.

  14. Supersonic Combustion Research at NASA

    NASA Technical Reports Server (NTRS)

    Drummond, J. P.; Danehy, Paul M.; Gaffney, Richard L., Jr.; Tedder, Sarah A.; Cutler, Andrew D.; Bivolaru, Daniel

    2007-01-01

    This paper discusses the progress of work to model high-speed supersonic reacting flow. The purpose of the work is to improve the state of the art of CFD capabilities for predicting the flow in high-speed propulsion systems, particularly combustor flowpaths. The program has several components including the development of advanced algorithms and models for simulating engine flowpaths as well as a fundamental experimental and diagnostic development effort to support the formulation and validation of the mathematical models. The paper will provide details of current work on experiments that will provide data for the modeling efforts along with the associated nonintrusive diagnostics used to collect the data from the experimental flowfield. Simulation of a recent experiment to partially validate the accuracy of a combustion code is also described.

  15. Content validation of the nursing diagnosis acute pain in the Czech Republic and Slovakia.

    PubMed

    Zeleníková, Renáta; Žiaková, Katarína; Čáp, Juraj; Jarošová, Darja

    2014-10-01

    The main purpose of the study was to validate the defining characteristics of the nursing diagnosis acute pain in the Czech Republic and Slovakia. This is a descriptive study. The validation process was based on Fehring's diagnostic content validity model. Four defining characteristics were classified as major by Slovak nurses and eight were classified as major by Czech nurses. Validation of the nursing diagnosis acute pain in the Czech and Slovak sociocultural context has shown that nurses prioritize characteristics that are behavioral in nature as well as patients' verbal reports of pain. Verbal reports of pain and behavioral indicators are important for arriving at the nursing diagnosis acute pain. © 2014 NANDA International, Inc.
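    Fehring's diagnostic content validity model scores each defining characteristic by a weighted average of expert ratings; a minimal sketch is below. The weighting scheme and cutoffs (>= 0.80 major, 0.50-0.79 minor, below 0.50 discarded) follow the usual Fehring formulation, and the ratings are hypothetical, not the study's data.

```python
# Fehring-style weighting of expert ratings (1-5 scale).
WEIGHT = {5: 1.0, 4: 0.75, 3: 0.5, 2: 0.25, 1: 0.0}
ratings = {  # hypothetical expert ratings per defining characteristic
    "verbal report of pain": [5, 5, 4, 5, 4],
    "guarding behavior":     [4, 4, 5, 3, 4],
    "change in appetite":    [2, 3, 2, 3, 1],
}

def dcv_score(rs):
    return sum(WEIGHT[r] for r in rs) / len(rs)

classified = {c: ("major" if dcv_score(r) >= 0.80
                  else "minor" if dcv_score(r) >= 0.50
                  else "discard")
              for c, r in ratings.items()}
```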

  16. Comparison of 3D dynamic virtual model to link segment model for estimation of net L4/L5 reaction moments during lifting.

    PubMed

    Abdoli-Eramaki, Mohammad; Stevenson, Joan M; Agnew, Michael J; Kamalzadeh, Amin

    2009-04-01

    The purpose of this study was to validate a 3D dynamic virtual model for lifting tasks against a validated link segment model (LSM). A face validation study was conducted by collecting x, y, z coordinate data and using them in both the virtual and LSM models. An upper body virtual model was needed to calculate the 3D torques about human joints for use in simulated lifting styles and to estimate the effect of external mechanical devices on the human body. First, the model had to be validated to ensure it provided accurate estimates of 3D moments in comparison to a previously validated LSM. Three synchronised Fastrak units with nine sensors were used to record data from one male subject who completed dynamic box lifting under 27 different load conditions (box weights (3), lifting techniques (3) and rotations (3)). The external moments about the three axes of L4/L5 were compared for both models. A pressure switch on the box was used to denote the start and end of the lift. An excellent agreement [image omitted] was found between the two models for dynamic lifting tasks, especially for larger moments in flexion and extension. This virtual model was considered valid for use in a complete simulation of the upper body skeletal system. This biomechanical virtual model of the musculoskeletal system gives researchers and practitioners a better tool to study the causes of low back pain (LBP) and the effects of intervention strategies, by permitting the researcher to see and control a virtual subject's motions.

  17. Critical evaluation of mechanistic two-phase flow pipeline and well simulation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhulesia, H.; Lopez, D.

    1996-12-31

    Mechanistic steady-state simulation models, rather than empirical correlations, are used for the design of multiphase production systems including wells, pipelines and downstream installations. Among the available models, PEPITE, WELLSIM, OLGA, TACITE and TUFFP are widely used for this purpose, and consequently a critical evaluation of these models is needed. An extensive validation methodology is proposed which consists of two distinct steps: first validating the hydrodynamic point model using test loop data, and then validating the overall simulation model using data from real pipelines and wells. The test loop databank used in this analysis contains about 5,952 data sets originating from four different test loops, and a majority of these data were obtained at high pressures (up to 90 bars) with real hydrocarbon fluids. Before performing the model evaluation, physical analysis of the test loop data is required to eliminate non-coherent data. The evaluation of these point models demonstrates that the TACITE and OLGA models can be applied to any configuration of pipes. The TACITE model performs better than the OLGA model because it uses the most appropriate closure laws from the literature, validated on a large number of data. The comparison of predicted and measured pressure drop for various real pipelines and wells demonstrates that the TACITE model is a reliable tool.

  18. Propeller aircraft interior noise model utilization study and validation

    NASA Technical Reports Server (NTRS)

    Pope, L. D.

    1984-01-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  19. Factors Affecting the Effectiveness and Use of Moodle: Students' Perception

    ERIC Educational Resources Information Center

    Damnjanovic, Vesna; Jednak, Sandra; Mijatovic, Ivana

    2015-01-01

    The purpose of this research paper is to identify the factors affecting the effectiveness of Moodle from the students' perspective. The research hypotheses derived from the suggested extended Seddon model have been empirically validated using the responses to a survey on e-learning usage among 255 users. We tested the model across higher education…

  20. Developing Contextual Mathematical Thinking Learning Model to Enhance Higher-Order Thinking Ability for Middle School Students

    ERIC Educational Resources Information Center

    Samo, Damianus D.; Darhim; Kartasasmita, Bana

    2017-01-01

    The purpose of this research is to develop contextual mathematical thinking learning model which is valid, practical and effective based on the theoretical reviews and its support to enhance higher-order thinking ability. This study is a research and development (R & D) with three main phases: investigation, development, and implementation.…

  1. Identification of Reading Problems in First Grade within a Response-to-Intervention Framework

    ERIC Educational Resources Information Center

    Speece, Deborah L.; Schatschneider, Christopher; Silverman, Rebecca; Case, Lisa Pericola; Cooper, David H.; Jacobs, Dawn M.

    2011-01-01

    Models of Response to Intervention (RTI) include parameters of assessment and instruction. This study focuses on assessment with the purpose of developing a screening battery that validly and efficiently identifies first-grade children at risk for reading problems. In an RTI model, these children would be candidates for early intervention. We…

  2. Assessing Model Fit: Caveats and Recommendations for Confirmatory Factor Analysis and Exploratory Structural Equation Modeling

    ERIC Educational Resources Information Center

    Perry, John L.; Nicholls, Adam R.; Clough, Peter J.; Crust, Lee

    2015-01-01

    Despite the limitations of overgeneralizing cutoff values for confirmatory factor analysis (CFA; e.g., Marsh, Hau, & Wen, 2004), they are still often employed as golden rules for assessing factorial validity in sport and exercise psychology. The purpose of this study was to investigate the appropriateness of using the CFA approach with these…

  3. Tidal simulation using regional ocean modeling systems (ROMS)

    NASA Technical Reports Server (NTRS)

    Wang, Xiaochun; Chao, Yi; Li, Zhijin; Dong, Changming; Farrara, John; McWilliams, James C.; Shum, C. K.; Wang, Yu; Matsumoto, Koji; Rosenfeld, Leslie K.

    2006-01-01

    The purpose of our research is to test the capability of ROMS in simulating tides. The research also serves as a necessary exercise to implement tides in an operational ocean forecasting system. In this paper, we emphasize the validation of the model tide simulation. The characteristics and energetics of tides of the region will be reported in separate publications.

  4. Developmental Spelling and Word Recognition: A Validation of Ehri's Model of Word Recognition Development

    ERIC Educational Resources Information Center

    Ebert, Ashlee A.

    2009-01-01

    Ehri's developmental model of word recognition outlines early reading development that spans from the use of logos to advanced knowledge of oral and written language to read words. Henderson's developmental spelling theory presents stages of word knowledge that progress in a similar manner to Ehri's phases. The purpose of this research study was…

  5. Linking Outcomes from Peabody Picture Vocabulary Test Forms Using Item Response Models

    ERIC Educational Resources Information Center

    Hoffman, Lesa; Templin, Jonathan; Rice, Mabel L.

    2012-01-01

    Purpose: The present work describes how vocabulary ability as assessed by 3 different forms of the Peabody Picture Vocabulary Test (PPVT; Dunn & Dunn, 1997) can be placed on a common latent metric through item response theory (IRT) modeling, by which valid comparisons of ability between samples or over time can then be made. Method: Responses…

  6. Landslide susceptibility mapping using GIS-based statistical models and Remote sensing data in tropical environment

    PubMed Central

    Shahabi, Himan; Hashim, Mazlan

    2015-01-01

    This research presents the results of GIS-based statistical models for generating landslide susceptibility maps using geographic information system (GIS) and remote-sensing data for the Cameron Highlands area in Malaysia. Ten factors including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road were extracted from SAR data, SPOT 5 and WorldView-1 images. The relationships between the detected landslide locations and these ten related factors were identified by using GIS-based statistical models including analytical hierarchy process (AHP), weighted linear combination (WLC) and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map, which has a total of 92 landslide locations, was created based on numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% was used for validation purposes. The validation results using the Relative landslide density index (R-index) and Receiver operating characteristic (ROC) demonstrated that the SMCE model (accuracy 96%) is better in prediction than the AHP (accuracy 91%) and WLC (accuracy 89%) models. These landslide susceptibility maps would be useful for hazard mitigation purposes and regional planning. PMID:25898919
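    The ROC-based validation described above can be sketched in a few lines: with a held-out set of mapped cells labelled landslide/stable and the model's susceptibility scores, the area under the ROC curve equals the probability that a random positive cell outranks a random negative one. The labels and scores below are hypothetical.

```python
# y = 1 where a landslide was inventoried, 0 otherwise (the 20% held-out set);
# s = the model's susceptibility score for each cell. All values hypothetical.
y = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
s = [0.9, 0.8, 0.75, 0.7, 0.6, 0.55, 0.5, 0.45, 0.3, 0.2]

pos = [si for si, yi in zip(s, y) if yi == 1]
neg = [si for si, yi in zip(s, y) if yi == 0]
# AUC = P(random positive outranks random negative), ties counted as 0.5;
# this equals the area under the ROC curve.
auc = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg) / (len(pos) * len(neg))
```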

  7. Landslide susceptibility mapping using GIS-based statistical models and Remote sensing data in tropical environment.

    PubMed

    Shahabi, Himan; Hashim, Mazlan

    2015-04-22

    This research presents the results of GIS-based statistical models for generating landslide susceptibility maps using geographic information system (GIS) and remote-sensing data for the Cameron Highlands area in Malaysia. Ten factors including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road were extracted from SAR data, SPOT 5 and WorldView-1 images. The relationships between the detected landslide locations and these ten related factors were identified by using GIS-based statistical models including analytical hierarchy process (AHP), weighted linear combination (WLC) and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map, which has a total of 92 landslide locations, was created based on numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% was used for validation purposes. The validation results using the Relative landslide density index (R-index) and Receiver operating characteristic (ROC) demonstrated that the SMCE model (accuracy 96%) is better in prediction than the AHP (accuracy 91%) and WLC (accuracy 89%) models. These landslide susceptibility maps would be useful for hazard mitigation purposes and regional planning.

  8. Validation of the Work-Life Balance Culture Scale (WLBCS).

    PubMed

    Nitzsche, Anika; Jung, Julia; Kowalski, Christoph; Pfaff, Holger

    2014-01-01

    The purpose of this paper is to describe the theoretical development and initial validation of the newly developed Work-Life Balance Culture Scale (WLBCS), an instrument for measuring an organizational culture that promotes the work-life balance of employees. In Study 1 (N=498), the scale was developed and its factorial validity tested through exploratory factor analyses. In Study 2 (N=513), confirmatory factor analysis (CFA) was performed to examine model fit and retest the dimensional structure of the instrument. To assess construct validity, a priori hypotheses were formulated and subsequently tested using correlation analyses. Exploratory and confirmatory factor analyses revealed a one-factor model. Results of the bivariate correlation analyses may be interpreted as preliminary evidence of the scale's construct validity. The five-item WLBCS is a new and efficient instrument with good overall quality. Its conciseness makes it particularly suitable for use in employee surveys to gain initial insight into a company's perceived work-life balance culture.

  9. Modeling Combustion in Supersonic Flows

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip; Danehy, Paul M.; Bivolaru, Daniel; Gaffney, Richard L.; Tedder, Sarah A.; Cutler, Andrew D.

    2007-01-01

    This paper discusses the progress of work to model high-speed supersonic reacting flow. The purpose of the work is to improve the state of the art of CFD capabilities for predicting the flow in high-speed propulsion systems, particularly combustor flowpaths. The program has several components including the development of advanced algorithms and models for simulating engine flowpaths as well as a fundamental experimental and diagnostic development effort to support the formulation and validation of the mathematical models. The paper will provide details of current work on experiments that will provide data for the modeling efforts along with the associated nonintrusive diagnostics used to collect the data from the experimental flowfield. Simulation of a recent experiment to partially validate the accuracy of a combustion code is also described.

  10. Developing rural palliative care: validating a conceptual model.

    PubMed

    Kelley, Mary Lou; Williams, Allison; DeMiglio, Lily; Mettam, Hilary

    2011-01-01

    The purpose of this research was to validate a conceptual model for developing palliative care in rural communities. This model articulates how local rural healthcare providers develop palliative care services according to four sequential phases. The model has roots in concepts of community capacity development, evolves from collaborative, generalist rural practice, and utilizes existing health services infrastructure. It addresses how rural providers manage challenges, specifically those related to: lack of resources, minimal community understanding of palliative care, health professionals' resistance, the bureaucracy of the health system, and the obstacles of providing services in rural environments. Seven semi-structured focus groups were conducted with interdisciplinary health providers in 7 rural communities in two Canadian provinces. Using a constant comparative analysis approach, focus group data were analyzed by examining participants' statements in relation to the model and comparing emerging themes in the development of rural palliative care to the elements of the model. The data validated the conceptual model as the model was able to theoretically predict and explain the experiences of the 7 rural communities that participated in the study. New emerging themes from the data elaborated existing elements in the model and informed the requirement for minor revisions. The model was validated and slightly revised, as suggested by the data. The model was confirmed as being a useful theoretical tool for conceptualizing the development of rural palliative care that is applicable in diverse rural communities.

  11. Using the split Hopkinson pressure bar to validate material models.

    PubMed

    Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian

    2014-08-28

    This paper discusses the use of the split-Hopkinson bar with particular reference to the requirements of materials modelling at QinetiQ: the aim is to deploy validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The split Hopkinson pressure bar (SHPB) is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock-wave loading. This requires understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals, comparison with an output stress-strain curve is not sufficient, as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from the deployed instrumentation, including the raw strain gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer-Chree oscillations and their effect on the specimen, and recognize that this is itself also a valuable validation test of the material model. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
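    For context, the classical one-wave SHPB data reduction the abstract alludes to can be sketched as follows; it assumes specimen stress equilibrium, the very assumption the paper notes is often violated for non-metals. All numerical values are hypothetical (steel bars, small cylindrical specimen), not QinetiQ's setup.

```python
# Classical one-wave SHPB reduction: specimen stress from the transmitted
# strain signal, specimen strain rate from the reflected strain signal.
E_BAR = 200e9        # bar Young's modulus (Pa), hypothetical steel bars
C0 = 5000.0          # elastic wave speed in the bar (m/s)
A_BAR, A_SPEC = 2.85e-4, 0.8e-4   # bar / specimen cross-sections (m^2)
L_SPEC = 5e-3        # specimen gauge length (m)

def classical_shpb(eps_reflected, eps_transmitted):
    """Return (specimen stress, specimen strain rate) from bar strain signals."""
    stress = E_BAR * A_BAR / A_SPEC * eps_transmitted
    strain_rate = -2.0 * C0 / L_SPEC * eps_reflected
    return stress, strain_rate

stress, rate = classical_shpb(eps_reflected=-1e-3, eps_transmitted=5e-4)
```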

  12. Forward Bay Cover Separation Modeling and Testing for the Orion Multi-Purpose Crew Vehicle

    NASA Technical Reports Server (NTRS)

    Ali, Yasmin; Chuhta, Jesse D.; Hughes, Michael P.; Radke, Tara S.

    2015-01-01

    Spacecraft multi-body separation events during atmospheric descent require complex testing and analysis to validate the flight separation dynamics models used to verify no re-contact. The NASA Orion Multi-Purpose Crew Vehicle (MPCV) architecture includes a highly-integrated Forward Bay Cover (FBC) jettison assembly design that combines parachutes and piston thrusters to separate the FBC from the Crew Module (CM) and avoid re-contact. A multi-disciplinary team across numerous organizations examined key model parameters and risk areas to develop a robust but affordable test campaign in order to validate and verify the FBC separation event for Exploration Flight Test-1 (EFT-1). The FBC jettison simulation model is highly complex, consisting of dozens of parameters varied simultaneously, with numerous multi-parameter interactions (coupling and feedback) among the various model elements, and encompassing distinct near-field, mid-field, and far-field regimes. The test campaign was composed of component-level testing (for example gas-piston thrusters and parachute mortars), ground FBC jettison tests, and FBC jettison air-drop tests that were accomplished by a highly multi-disciplinary team. Three ground jettison tests isolated the testing of mechanisms and structures to anchor the simulation models excluding aerodynamic effects. Subsequently, two air-drop tests added aerodynamic and parachute elements, and served as integrated system demonstrations, which had been preliminarily explored during the Orion Pad Abort-1 (PA-1) flight test in May 2010. Both ground and drop tests provided extensive data to validate analytical models and to verify the FBC jettison event for EFT-1. Additional testing will be required to support human certification of this separation event, for which NASA and Lockheed Martin are applying knowledge from Apollo and EFT-1 testing and modeling to develop a robust human-rated FBC separation event.

  13. Use of a Computer-Mediated Delphi Process to Validate a Mass Casualty Conceptual Model

    PubMed Central

    CULLEY, JOAN M.

    2012-01-01

    Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters. PMID:21076283

  14. Use of a computer-mediated Delphi process to validate a mass casualty conceptual model.

    PubMed

    Culley, Joan M

    2011-05-01

    Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters.
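    The consensus and stability criteria of a Delphi round, as described in these two records, can be sketched numerically. Both the 75% consensus level and the 0.1 stability band below are assumed illustrative thresholds, not the study's published criteria, and the expert ratings are hypothetical.

```python
# Hypothetical 7-point expert ratings of one model construct over two rounds.
round1 = [6, 7, 5, 6, 4, 7, 6]
round2 = [6, 7, 6, 6, 4, 7, 6]

def agreement(ratings, threshold=5):
    # fraction of experts rating the construct at or above the threshold
    return sum(r >= threshold for r in ratings) / len(ratings)

consensus = agreement(round2) >= 0.75                     # assumed criterion
stability = abs(agreement(round2) - agreement(round1)) <= 0.10  # assumed band
```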

  15. General model and control of an n rotor helicopter

    NASA Astrophysics Data System (ADS)

    Sidea, A. G.; Yding Brogaard, R.; Andersen, N. A.; Ravn, O.

    2014-12-01

    The purpose of this study was to create a dynamic, nonlinear mathematical model of a multirotor that would be valid for different numbers of rotors. Furthermore, a set of Single Input Single Output (SISO) controllers were implemented for attitude control. Both model and controllers were tested experimentally on a quadcopter. Using the combined model and controllers, simple system simulation and control is possible, by replacing the physical values for the individual systems.
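    A minimal sketch of the kind of n-rotor generalization described above: mapping individual rotor thrusts to a total force/torque (wrench) for n rotors equally spaced on a circle, with alternating spin directions. The geometry, arm length and drag coefficient below are illustrative assumptions, not the paper's model.

```python
import math

def rotor_wrench(forces, L=0.25, k_drag=0.02):
    """Thrust and body torques for n rotors equally spaced at radius L (m).

    k_drag maps rotor thrust to yaw torque; alternating signs model
    alternating spin directions. All parameters are hypothetical.
    """
    n = len(forces)
    thrust = sum(forces)
    roll  = sum(f * L * math.sin(2 * math.pi * i / n) for i, f in enumerate(forces))
    pitch = sum(-f * L * math.cos(2 * math.pi * i / n) for i, f in enumerate(forces))
    yaw   = sum(((-1) ** i) * k_drag * f for i, f in enumerate(forces))
    return thrust, roll, pitch, yaw

# Equal thrust on a quadrotor hovers: no net roll, pitch or yaw.
T, r, p, yw = rotor_wrench([1.0, 1.0, 1.0, 1.0])
```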

  16. The SCALE Verified, Archived Library of Inputs and Data - VALID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

    The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional experiments from the IHECSBE, to include experiments from areas beyond criticality safety, such as reactor physics and shielding, and to include application models. In the future, external SCALE users may also obtain qualification under the VALID procedure and be involved in expanding the library. The VALID library provides a pathway for the criticality safety community to leverage modeling and analysis expertise at ORNL.

  17. Validating and determining the weight of items used for evaluating clinical governance implementation based on analytic hierarchy process model.

    PubMed

    Hooshmand, Elaheh; Tourani, Sogand; Ravaghi, Hamid; Vafaee Najar, Ali; Meraji, Marziye; Ebrahimipour, Hossein

    2015-04-08

    The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professional toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in more dire need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytical Hierarchy Process (AHP) model. The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients' non-medical needs, complaints and patients' participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients' non-medical needs, patients' participation in the treatment process and research and development. The fundamental requirements of CG implementation included having an effective policy at national level, avoiding perfectionism, using the expertise and potentials of the entire country and the coordination of this model with other models of quality improvement such as accreditation and patient safety. © 2015 by Kerman University of Medical Sciences.
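    The AHP ranking step described above can be sketched as follows: experts fill a pairwise comparison matrix on Saaty's 1-9 scale, item weights come from (an approximation of) its principal eigenvector, and a consistency ratio checks the judgments. The 3x3 matrix below is hypothetical, covering only three of the study's items.

```python
import math

# Hypothetical pairwise comparisons (Saaty 1-9 scale) for, e.g., training,
# performance evaluation and risk management; not the study's actual data.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
n = len(A)

# Geometric-mean approximation of the principal eigenvector gives the weights.
gm = [math.prod(row) ** (1 / n) for row in A]
weights = [g / sum(gm) for g in gm]

# Consistency ratio: CR < 0.1 is Saaty's usual acceptability threshold.
Aw = [sum(A[i][j] * weights[j] for j in range(n)) for i in range(n)]
lam_max = sum(Aw[i] / weights[i] for i in range(n)) / n
CI = (lam_max - n) / (n - 1)
CR = CI / 0.58            # 0.58 = Saaty's random index for n = 3
```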

  18. Cross Validation of Selection of Variables in Multiple Regression.

    DTIC Science & Technology

    1979-12-01

[Garbled OCR excerpt] The fragment spans the report's front matter and introduction ("Cross Validation of Selection of Variables in Multiple Regression — I. Introduction, Background: long-term DoD planning goals…"), followed by unrecoverable regression-coefficient tables and an F111D equipment index; the only legible finding is that one model "had adequate predictive capabilities" before the excerpt breaks off.

  19. Validation of the openEHR archetype library by using OWL reasoning.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2011-01-01

Electronic Health Record architectures based on the dual model architecture use archetypes for representing clinical knowledge. Therefore, ensuring their correctness and consistency is a fundamental research goal. In this work, we explore how an approach based on OWL technologies can be used for this purpose. The method has been applied to the openEHR archetype repository, currently the largest such repository available. The results of this validation are also reported in this study.

  20. Investigating Mechanisms of Chronic Kidney Disease in Mouse Models

    PubMed Central

    Eddy, Allison A.; Okamura, Daryl M.; Yamaguchi, Ikuyo; López-Guisa, Jesús M.

    2011-01-01

    Animal models of chronic kidney disease (CKD) are important experimental tools that are used to investigate novel mechanistic pathways and to validate potential new therapeutic interventions prior to pre-clinical testing in humans. Over the past several years, mouse CKD models have been extensively used for these purposes. Despite significant limitations, the model of unilateral ureteral obstruction (UUO) has essentially become the high throughput in vivo model, as it recapitulates the fundamental pathogenetic mechanisms that typify all forms of CKD in a relatively short time span. In addition, several alternative mouse models are available that can be used to validate new mechanistic paradigms and/or novel therapies. Several models are reviewed – both genetic and experimentally induced – that provide investigators with an opportunity to include renal functional study end-points together with quantitative measures of fibrosis severity, something that is not possible with the UUO model. PMID:21695449

  1. Similarity Metrics for Closed Loop Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Yang, Lee C.; Bedrossian, Naz; Hall, Robert A.

    2008-01-01

To what extent and in what ways can two closed-loop dynamic systems be said to be "similar?" This question arises in a wide range of dynamic systems modeling and control system design applications. For example, bounds on error models are fundamental to controller optimization with modern control design methods. Metrics such as the structured singular value are direct measures of the degree to which properties such as stability or performance are maintained in the presence of specified uncertainties or variations in the plant model. Similarly, controls-related areas such as system identification, model reduction, and experimental model validation employ measures of similarity between multiple realizations of a dynamic system. Each area has its tools and approaches, with each tool more or less suited for one application or the other. Similarity in the context of closed-loop model validation via flight test is subtly different from error measures in the typical controls-oriented application. Whereas similarity in a robust control context relates to plant variation and its attendant effect on stability and performance, in this context similarity metrics are sought that assess the relevance of a dynamic system test for the purpose of validating the stability and performance of a "similar" dynamic system. Similarity in the context of system identification is much more relevant than robust control analogies, in that errors between one dynamic system (the test article) and another (the nominal "design" model) are sought for the purpose of bounding the validity of a model for control design and analysis. Yet system identification typically involves open-loop plant models which are independent of the control system (with the exception of limited developments in closed-loop system identification, which is nonetheless focused on obtaining open-loop plant models from closed-loop data). Moreover, the objectives of system identification are not the same as those of a flight test, and hence system identification error metrics are not directly relevant. In applications such as launch vehicles, where the open-loop plant is unstable, it is the similarity of the closed-loop system dynamics of a flight test that is relevant.

  2. On vital aid: the why, what and how of validation

    PubMed Central

    Kleywegt, Gerard J.

    2009-01-01

Limitations to the data and subjectivity in the structure-determination process may cause errors in macromolecular crystal structures. Appropriate validation techniques may be used to reveal problems in structures, ideally before they are analysed, published or deposited. Additionally, such techniques may be used a posteriori to assess the (relative) merits of a model by potential users. Weak validation methods and statistics assess how well a model reproduces the information that was used in its construction (i.e. experimental data and prior knowledge). Strong methods and statistics, on the other hand, test how well a model predicts data or information that were not used in the structure-determination process. These may be data that were excluded from the process on purpose, general knowledge about macromolecular structure, information about the biological role and biochemical activity of the molecule under study or its mutants or complexes and predictions that are based on the model and that can be tested experimentally. PMID:19171968

  3. Validation of finite element model of transcranial electrical stimulation using scalp potentials: implications for clinical dose

    NASA Astrophysics Data System (ADS)

    Datta, Abhishek; Zhou, Xiang; Su, Yuzhou; Parra, Lucas C.; Bikson, Marom

    2013-06-01

    Objective. During transcranial electrical stimulation, current passage across the scalp generates voltage across the scalp surface. The goal was to characterize these scalp voltages for the purpose of validating subject-specific finite element method (FEM) models of current flow. Approach. Using a recording electrode array, we mapped skin voltages resulting from low-intensity transcranial electrical stimulation. These voltage recordings were used to compare the predictions obtained from the high-resolution model based on the subject undergoing transcranial stimulation. Main results. Each of the four stimulation electrode configurations tested resulted in a distinct distribution of scalp voltages; these spatial maps were linear with applied current amplitude (0.1 to 1 mA) over low frequencies (1 to 10 Hz). The FEM model accurately predicted the distinct voltage distributions and correlated the induced scalp voltages with current flow through cortex. Significance. Our results provide the first direct model validation for these subject-specific modeling approaches. In addition, the monitoring of scalp voltages may be used to verify electrode placement to increase transcranial electrical stimulation safety and reproducibility.

  4. Validity of worksheet-based guided inquiry and mind mapping for training students’ creative thinking skills

    NASA Astrophysics Data System (ADS)

    Susanti, L. B.; Poedjiastoeti, S.; Taufikurohmah, T.

    2018-04-01

The purpose of this study is to explain the validity of the guided inquiry and mind mapping-based worksheet that has been developed in this study. The worksheet implemented the phases of guided inquiry teaching models in order to train students' creative thinking skills. The creative thinking skills trained in this study included fluency, flexibility, originality and elaboration. The types of validity used in this study were content and construct validity. This study is development research, conducted using the Research and Development (R&D) method. The data of this study were collected using review and validation sheets; the data sources were a chemistry lecturer and a chemistry teacher. The data were then analyzed descriptively. The results showed that the worksheet is very valid and could be used as a learning medium, with the percentage of validity ranging from 82.5% to 92.5%.

  5. Enhanced data validation strategy of air quality monitoring network.

    PubMed

    Harkat, Mohamed-Faouzi; Mansouri, Majdi; Nounou, Mohamed; Nounou, Hazem

    2018-01-01

Quick validation and detection of faults in measured air quality data is a crucial step towards achieving the objectives of air quality networks. Therefore, the objectives of this paper are threefold: (i) to develop a modeling technique that can be used to predict the normal behavior of air quality variables and help provide an accurate reference for monitoring purposes; (ii) to develop a fault detection method that can effectively and quickly detect any anomalies in measured air quality data. For this purpose, a new fault detection method based on the combination of the generalized likelihood ratio test (GLRT) and the exponentially weighted moving average (EWMA) will be developed. GLRT is a well-known statistical fault detection method that relies on maximizing the detection probability for a given false alarm rate. In this paper, we propose to develop a GLRT-based EWMA fault detection method that will be able to detect changes in the values of certain air quality variables; (iii) to develop a fault isolation and identification method that allows defining the fault source(s) in order to apply appropriate corrective actions. In this paper, a reconstruction approach based on the Midpoint-Radii Principal Component Analysis (MRPCA) model will be developed to handle the types of data and models associated with air quality monitoring networks. All air quality modeling, fault detection, fault isolation and reconstruction methods developed in this paper will be validated using real air quality data (such as particulate matter, ozone, and nitrogen and carbon oxide measurements). Copyright © 2017 Elsevier Inc. All rights reserved.
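As a rough sketch of the EWMA half of such a detector (the GLRT component is omitted), the chart below flags samples whose EWMA statistic leaves its time-varying control band. The signal values, shift magnitude and parameters (λ, L) are all hypothetical.

```python
def ewma_alarms(x, lam=0.2, L=3.0, mu0=0.0, sigma=1.0):
    """Return indices where the EWMA statistic exits the +/- L-sigma band."""
    z, alarms = mu0, []
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1 - lam) * z
        # exact time-varying EWMA variance (converges to lam/(2-lam)*sigma^2)
        var = sigma**2 * (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * t))
        if abs(z - mu0) > L * var ** 0.5:
            alarms.append(t - 1)          # 0-based sample index
    return alarms

clean = [0.1, -0.2, 0.05, -0.1, 0.15, -0.05]   # in-control readings
faulty = clean + [1.5] * 10                     # sustained sensor bias after sample 6
print(ewma_alarms(faulty))
```

The EWMA's memory (controlled by λ) makes it sensitive to small sustained shifts, which is why it is paired with GLRT-style hypothesis testing in the paper's framework.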

  6. SU-F-R-24: Identifying Prognostic Imaging Biomarkers in Early Stage Lung Cancer Using Radiomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, X; Wu, J; Cui, Y

    2016-06-15

Purpose: Patients diagnosed with early stage lung cancer have favorable outcomes when treated with surgery or stereotactic radiotherapy. However, a significant proportion (∼20%) of patients will develop metastatic disease and eventually die of the disease. The purpose of this work is to identify quantitative imaging biomarkers from CT for predicting overall survival in early stage lung cancer. Methods: In this institutional review board-approved, HIPAA-compliant retrospective study, we analyzed the diagnostic CT scans of 110 patients with early stage lung cancer. Data from 70 patients were used for training/discovery purposes, while those of the remaining 40 patients were used for independent validation. We extracted 191 radiomic features, including statistical, histogram, morphological, and texture features. A Cox proportional hazard regression model, coupled with the least absolute shrinkage and selection operator (LASSO), was used to predict overall survival based on the radiomic features. Results: The optimal prognostic model included three image features from the Laws and wavelet texture families. In the discovery cohort, this model achieved a concordance index (CI) of 0.67, and it separated the low-risk from high-risk groups in predicting overall survival (hazard ratio=2.72, log-rank p=0.007). In the independent validation cohort, this radiomic signature achieved a CI of 0.62, and significantly stratified the low-risk and high-risk groups in terms of overall survival (hazard ratio=2.20, log-rank p=0.042). Conclusion: We identified CT imaging characteristics associated with overall survival in early stage lung cancer. If prospectively validated, this could potentially help identify high-risk patients who might benefit from adjuvant systemic therapy.
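The concordance index reported above can be computed directly from survival times, event indicators and model risk scores. The following sketch implements Harrell's C on an invented four-patient toy cohort (a censored observation can only serve as the later member of a comparable pair).

```python
def concordance_index(times, events, risk):
    """Harrell's C: fraction of usable pairs in which the higher-risk
    subject experiences the event earlier. Ties in risk count 0.5."""
    conc, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair is usable if subject i has an observed event before time j
            if events[i] and times[i] < times[j]:
                usable += 1
                if risk[i] > risk[j]:
                    conc += 1.0
                elif risk[i] == risk[j]:
                    conc += 0.5
    return conc / usable

# toy cohort: survival months, event indicator (1 = death), model risk score
t = [5, 8, 12, 20]
e = [1, 1, 0, 1]
r = [0.9, 0.7, 0.2, 0.1]
print(concordance_index(t, e, r))
```

A CI of 0.5 corresponds to random ranking and 1.0 to perfect risk ordering, which puts the paper's 0.62-0.67 values in context.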

  7. Validation of a Model of Extramusical Influences on Solo and Small-Ensemble Festival Ratings

    ERIC Educational Resources Information Center

    Bergee, Martin J.

    2006-01-01

    This is the fourth in a series of studies whose purpose has been to develop a theoretical model of selected extramusical variables' ability to explain solo and small-ensemble festival ratings. Authors of the second and third of these (Bergee & McWhirter, 2005; Bergee & Westfall, 2005) used logistic regression as the basis for their…

  8. A Latent Variable Analysis of Continuing Professional Development Constructs Using PLS-SEM Modeling

    ERIC Educational Resources Information Center

    Yazdi, Mona Tabatabaee; Motallebzadeh, Khalil; Ashraf, Hamid; Baghaei, Purya

    2017-01-01

    Continuing Professional Development (CPD), in the area of teacher education, refers to the procedures, programs or strategies that help teachers encounter the challenges of their work and accomplish their own and their learning center's goals. To this aim, the purpose of this study is to propose and validate an appropriate model of EFL teachers'…

  9. Revision and Validation of a Culturally-Adapted Online Instructional Module Using Edmundson's CAP Model: A DBR Study

    ERIC Educational Resources Information Center

    Tapanes, Marie A.

    2011-01-01

    In the present study, the Cultural Adaptation Process Model was applied to an online module to include adaptations responsive to the online students' culturally-influenced learning styles and preferences. The purpose was to provide the online learners with a variety of course material presentations, where the e-learners had the opportunity to…

  10. Factors Affecting Higher Order Thinking Skills of Students: A Meta-Analytic Structural Equation Modeling Study

    ERIC Educational Resources Information Center

    Budsankom, Prayoonsri; Sawangboon, Tatsirin; Damrongpanit, Suntorapot; Chuensirimongkol, Jariya

    2015-01-01

    The purpose of the research is to develop and identify the validity of factors affecting higher order thinking skills (HOTS) of students. The thinking skills can be divided into three types: analytical, critical, and creative thinking. This analysis is done by applying the meta-analytic structural equation modeling (MASEM) based on a database of…

  11. Psychometric Properties of an Abbreviated Instrument of the Five-Factor Model

    ERIC Educational Resources Information Center

    Mullins-Sweatt, Stephanie N.; Jamerson, Janetta E.; Samuel, Douglas B.; Olson, David R.; Widiger, Thomas A.

    2006-01-01

    Brief measures of the five-factor model (FFM) have been developed but none include an assessment of facets within each domain. The purpose of this study was to examine the validity of a simple, one-page, facet-level description of the FFM. Five data collections were completed to assess the reliability and the convergent and discriminant validity…

  12. Teaching Play Skills to Children with Autism through Video Modeling: Small Group Arrangement and Observational Learning

    ERIC Educational Resources Information Center

    Ozen, Arzu; Batu, Sema; Birkan, Binyamin

    2012-01-01

    The purpose of the present study was to examine if video modeling was an effective way of teaching sociodramatic play skills to individuals with autism in a small group arrangement. Besides maintenance, observational learning and social validation data were collected. Three 9 year old boys with autism participated in the study. Multiple probe…

  13. WISC-IV and Clinical Validation of the Four- and Five-Factor Interpretative Approaches

    ERIC Educational Resources Information Center

    Weiss, Lawrence G.; Keith, Timothy Z.; Zhu, Jianjun; Chen, Hsinyi

    2013-01-01

    The purpose of this study was to determine the constructs measured by the WISC-IV and the consistency of measurement across large normative and clinical samples. Competing higher order four- and five-factor models were analyzed using the WISC-IV normative sample and clinical subjects. The four-factor solution is the model published with the test…

  14. Development of the PRO-SDLS: A Measure of Self-Direction in Learning Based on the Personal Responsibility Orientation Model

    ERIC Educational Resources Information Center

    Stockdale, Susan L.; Brockett, Ralph G.

    2011-01-01

    The purpose of this study was to develop a reliable and valid instrument to measure self-directedness in learning among college students based on an operationalization of the personal responsibility orientation (PRO) model of self-direction in learning. The resultant 25-item Personal Responsibility Orientation to Self-Direction in Learning Scale…

  15. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    Research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
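The flavor of such a data-flow execution model can be conveyed with a toy firing-rule simulation: a node fires once every input edge carries a token, consuming its inputs and producing tokens downstream. The graph and node names below are invented and do not reproduce the ATAMM specification.

```python
from collections import deque

# Toy data-flow graph: node -> downstream nodes (names are illustrative).
edges = {
    "read": ["filter", "fft"],
    "filter": ["combine"],
    "fft": ["combine"],
    "combine": [],
}
indeg = {n: 0 for n in edges}
for outs in edges.values():
    for n in outs:
        indeg[n] += 1

tokens = {n: 0 for n in edges}
order = []
ready = deque(n for n, d in indeg.items() if d == 0)   # source nodes
while ready:
    n = ready.popleft()
    order.append(n)                    # node fires
    for m in edges[n]:
        tokens[m] += 1
        if tokens[m] == indeg[m]:      # all inputs present -> enabled
            ready.append(m)
print(order)
```

In a multiprocessor setting, every node in `ready` at the same instant could fire concurrently; this sequential sketch only shows the enabling rule.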

  16. A general-purpose framework to simulate musculoskeletal system of human body: using a motion tracking approach.

    PubMed

    Ehsani, Hossein; Rostami, Mostafa; Gudarzi, Mohammad

    2016-02-01

Computation of the muscle force patterns that produce specified movements of muscle-actuated dynamic models is an important and challenging problem. The problem is underdetermined, so a proper optimization is required to calculate the muscle forces. The purpose of this paper is to develop a general model for calculating all muscle activation and force patterns in an arbitrary human body movement. To this end, the forward-dynamics equations of the multibody system representing the skeletal system of the human body are derived using the Lagrange-Euler formulation. Next, muscle contraction dynamics is added to this model, yielding the forward dynamics of an arbitrary musculoskeletal system. For optimization purposes, the obtained model is used in the computed muscle control algorithm, and a closed-loop system for tracking desired motions is derived. Finally, a popular sport exercise, the biceps curl, is simulated using this algorithm, and the validity of the obtained results is evaluated via EMG signals.
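The closed-loop tracking idea behind computed muscle control can be sketched, under drastic simplification, with a single-joint computed-torque controller on a pendulum "arm": cancel the gravity torque, then PD-servo the tracking error. There is no muscle model here, and all parameters (mass, length, gains) are illustrative only.

```python
import math

m, l, g, dt = 1.0, 0.3, 9.81, 0.001    # point mass, arm length, step size
I = m * l * l                           # inertia about the joint

def simulate(theta_ref, steps=4000, kp=400.0, kd=40.0):
    """Explicit-Euler simulation of computed-torque tracking of theta_ref."""
    theta, omega = 0.0, 0.0
    for _ in range(steps):
        # computed-torque law: cancel gravity, then PD on the error
        tau = m * g * l * math.sin(theta) + I * (
            kp * (theta_ref - theta) - kd * omega)
        alpha = (tau - m * g * l * math.sin(theta)) / I   # plant dynamics
        theta += omega * dt
        omega += alpha * dt
    return theta

print(simulate(math.pi / 4))   # converges toward the 45-degree target
```

With kd = 2·sqrt(kp) the closed-loop error dynamics are critically damped, so the joint settles on the reference without overshoot; the paper's algorithm plays this role at the level of muscle excitations rather than joint torques.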

  17. Development and Initial Validation of the Five-Factor Model Adolescent Personality Questionnaire (FFM-APQ).

    PubMed

    Rogers, Mary E; Glendon, A Ian

    2018-01-01

    This research reports on the 4-phase development of the 25-item Five-Factor Model Adolescent Personality Questionnaire (FFM-APQ). The purpose was to develop and determine initial evidence for validity of a brief adolescent personality inventory using a vocabulary that could be understood by adolescents up to 18 years old. Phase 1 (N = 48) consisted of item generation and expert (N = 5) review of items; Phase 2 (N = 179) involved item analyses; in Phase 3 (N = 496) exploratory factor analysis assessed the underlying structure; in Phase 4 (N = 405) confirmatory factor analyses resulted in a 25-item inventory with 5 subscales.

  18. Consumer preference models: fuzzy theory approach

    NASA Astrophysics Data System (ADS)

    Turksen, I. B.; Wilson, I. A.

    1993-12-01

    Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
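Linguistic variables of the kind such fuzzy preference models use are typically encoded with membership functions; below is a minimal sketch with triangular memberships for an invented "price" variable (all breakpoints are made up for illustration).

```python
def triangular(a, b, c):
    """Membership function of a triangular fuzzy set with support [a, c]
    and peak at b (a <= b <= c)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# linguistic terms for a hypothetical "price" variable (currency units)
cheap = triangular(0, 10, 40)
moderate = triangular(30, 50, 70)
expensive = triangular(60, 90, 120)

price = 45.0
print({name: fn(price) for name, fn in
       [("cheap", cheap), ("moderate", moderate), ("expensive", expensive)]})
```

A price of 45 is partially "moderate" and not at all "cheap" or "expensive"; an individual-level preference model would combine such memberships across attributes via fuzzy rules.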

  19. Validating Pseudo-dynamic Source Models against Observed Ground Motion Data at the SCEC Broadband Platform, Ver 16.5

    NASA Astrophysics Data System (ADS)

    Song, S. G.

    2016-12-01

Simulation-based ground motion prediction approaches have several benefits over empirical ground motion prediction equations (GMPEs). For instance, full 3-component waveforms can be produced, and site-specific hazard analysis is also possible. However, it is important to validate such approaches against observed ground motion data to confirm their efficiency and validity before practical use. Community efforts toward these ends are supported by the Broadband Platform (BBP) project at the Southern California Earthquake Center (SCEC). In simulation-based ground motion prediction, preparing a plausible range of scenario rupture models is a critical element. I developed a pseudo-dynamic source model for Mw 6.5-7.0 by analyzing a number of dynamic rupture models, based on 1-point and 2-point statistics of earthquake source parameters (Song et al. 2014; Song 2016). In this study, the developed pseudo-dynamic source models were tested against observed ground motion data at the SCEC BBP, Ver 16.5. The validation was performed in two stages. In the first stage, simulated ground motions were validated against observed ground motion data for past events such as the 1992 Landers and 1994 Northridge, California, earthquakes. In the second stage, they were validated against the latest version of the empirical GMPEs, i.e., NGA-West2. The validation results show that the simulated ground motions produce ground motion intensities compatible with observed ground motion data at both stages. The compatibility of the pseudo-dynamic source models with the omega-square spectral decay and the standard deviation of the simulated ground motion intensities are also discussed in the study.

  20. Development and validation of a predictive model for excessive postpartum blood loss: A retrospective, cohort study.

    PubMed

    Rubio-Álvarez, Ana; Molina-Alarcón, Milagros; Arias-Arias, Ángel; Hernández-Martínez, Antonio

    2018-03-01

Postpartum haemorrhage is one of the leading causes of maternal morbidity and mortality worldwide. Despite the use of uterotonic agents as a preventive measure, it remains a challenge to identify those women who are at increased risk of postpartum bleeding. Objective: to develop and validate a predictive model to assess the risk of excessive bleeding in women with vaginal birth. Design: retrospective cohort study. Setting: "Mancha-Centro Hospital" (Spain). Participants: the predictive model was elaborated on a derivation cohort of 2336 women between 2009 and 2011; for validation purposes, a prospective cohort of 953 women between 2013 and 2014 was employed. Women with antenatal fetal demise, multiple pregnancies and gestations under 35 weeks were excluded. Methods: we used multivariate analysis with binary logistic regression, ridge regression and areas under the Receiver Operating Characteristic curves to determine the predictive ability of the proposed model. Findings: there were 197 (8.43%) women with excessive bleeding in the derivation cohort and 63 (6.61%) in the validation cohort. Predictive factors in the final model were: maternal age, primiparity, duration of the first and second stages of labour, neonatal birth weight and antepartum haemoglobin levels. Accordingly, the predictive ability of this model in the derivation cohort was 0.90 (95% CI: 0.85-0.93), while it remained 0.83 (95% CI: 0.74-0.92) in the validation cohort. Conclusions: this predictive model showed excellent predictive ability in the derivation cohort, and its validation in a later population equally showed good predictive ability. The model can be employed to identify women with a higher risk of postpartum haemorrhage. Copyright © 2017 Elsevier Ltd. All rights reserved.
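The areas under the ROC curve reported above can be computed from predicted risk scores via the Mann-Whitney rank identity (the probability that a randomly chosen positive case outranks a randomly chosen negative one). A self-contained sketch on invented data:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney identity; ties in score count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy data: 1 = excessive bleeding, scores from a hypothetical model
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.6, 0.7, 0.2, 0.1]
print(roc_auc(labels, scores))
```

This pairwise formulation makes the 0.90 vs. 0.83 figures directly interpretable as discrimination probabilities in the derivation and validation cohorts.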

  1. Simulation-based training for prostate surgery.

    PubMed

    Khan, Raheej; Aydin, Abdullatif; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2015-10-01

    To identify and review the currently available simulators for prostate surgery and to explore the evidence supporting their validity for training purposes. A review of the literature between 1999 and 2014 was performed. The search terms included a combination of urology, prostate surgery, robotic prostatectomy, laparoscopic prostatectomy, transurethral resection of the prostate (TURP), simulation, virtual reality, animal model, human cadavers, training, assessment, technical skills, validation and learning curves. Furthermore, relevant abstracts from the American Urological Association, European Association of Urology, British Association of Urological Surgeons and World Congress of Endourology meetings, between 1999 and 2013, were included. Only studies related to prostate surgery simulators were included; studies regarding other urological simulators were excluded. A total of 22 studies that carried out a validation study were identified. Five validated models and/or simulators were identified for TURP, one for photoselective vaporisation of the prostate, two for holmium enucleation of the prostate, three for laparoscopic radical prostatectomy (LRP) and four for robot-assisted surgery. Of the TURP simulators, all five have demonstrated content validity, three face validity and four construct validity. The GreenLight laser simulator has demonstrated face, content and construct validities. The Kansai HoLEP Simulator has demonstrated face and content validity whilst the UroSim HoLEP Simulator has demonstrated face, content and construct validity. All three animal models for LRP have been shown to have construct validity whilst the chicken skin model was also content valid. Only two robotic simulators were identified with relevance to robot-assisted laparoscopic prostatectomy, both of which demonstrated construct validity. 
A wide range of different simulators are available for prostate surgery, including synthetic bench models, virtual-reality platforms, animal models, human cadavers, distributed simulation and advanced training programmes and modules. The currently validated simulators can be used by healthcare organisations to provide supplementary training sessions for trainee surgeons. Further research should be conducted to validate simulated environments, to determine which simulators have greater efficacy than others and to assess the cost-effectiveness of the simulators and the transferability of skills learnt. With surgeons investigating new possibilities for easily reproducible and valid methods of training, simulation offers great scope for implementation alongside traditional methods of training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.

  2. A Comparison of Career-Related Assessment Tools/Models. Final [Report].

    ERIC Educational Resources Information Center

    WestEd, San Francisco, CA.

    This document contains charts that evaluate career related assessment items. Chart categories include: Purpose/Current Uses/Format; Intended Population; Oregon Career Related Learning Standards Addressed; Relationship to the Standards; Relationship to Endorsement Area Frameworks; Evidence of Validity; Evidence of Reliability; Evidence of Fairness…

  3. Alternative methods to determine headwater benefits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Y.S.; Perlack, R.D.; Sale, M.J.

    1997-11-10

In 1992, the Federal Energy Regulatory Commission (FERC) began using a Flow Duration Analysis (FDA) methodology to assess headwater benefits in river basins where use of the Headwater Benefits Energy Gains (HWBEG) model may not result in significant improvements in modeling accuracy. The purpose of this study is to validate the accuracy and appropriateness of the FDA method for determining energy gains in less complex basins. This report presents the results of Oak Ridge National Laboratory's (ORNL's) validation of the FDA method. The validation is based on a comparison of energy gains using the FDA method with energy gains calculated using the HWBEG model. Comparisons of energy gains are made on a daily and monthly basis for a complex river basin (the Alabama River Basin) and a basin that is considered relatively simple hydrologically (the Stanislaus River Basin). In addition to validating the FDA method, ORNL was asked to suggest refinements and improvements to the FDA method, which were carried out using the James River Basin as a test case.
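A flow-duration analysis is built on the flow-duration curve, i.e. flow plotted against the probability that it is equaled or exceeded. A minimal sketch using Weibull plotting positions and invented daily flows:

```python
def flow_duration_curve(flows):
    """Return (exceedance %, flow) pairs: flows sorted descending, with
    Weibull plotting position p_i = i / (n + 1)."""
    ranked = sorted(flows, reverse=True)
    n = len(ranked)
    return [(100.0 * (i + 1) / (n + 1), q) for i, q in enumerate(ranked)]

# hypothetical daily flows (e.g. m^3/s) - not data from either basin
fdc = flow_duration_curve([10, 40, 20, 30])
print(fdc)
```

Energy-gain estimates then follow from integrating plant output over the portion of the curve usable by the downstream project; that step depends on plant characteristics and is omitted here.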

  4. Predicting Overall Survival After Stereotactic Ablative Radiation Therapy in Early-Stage Lung Cancer: Development and External Validation of the Amsterdam Prognostic Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louie, Alexander V., E-mail: Dr.alexlouie@gmail.com; Department of Radiation Oncology, London Regional Cancer Program, University of Western Ontario, London, Ontario; Department of Epidemiology, Harvard School of Public Health, Harvard University, Boston, Massachusetts

Purpose: A prognostic model for 5-year overall survival (OS), consisting of recursive partitioning analysis (RPA) and a nomogram, was developed for patients with early-stage non-small cell lung cancer (ES-NSCLC) treated with stereotactic ablative radiation therapy (SABR). Methods and Materials: A primary dataset of 703 ES-NSCLC SABR patients was randomly divided into a training (67%) and an internal validation (33%) dataset. In the former group, 21 unique parameters consisting of patient, treatment, and tumor factors were entered into an RPA model to predict OS. Univariate and multivariate models were constructed for RPA-selected factors to evaluate their relationship with OS. A nomogram for OS was constructed based on factors significant in multivariate modeling and validated with calibration plots. Both the RPA and the nomogram were externally validated in independent surgical (n=193) and SABR (n=543) datasets. Results: RPA identified 2 distinct risk classes based on tumor diameter, age, World Health Organization performance status (PS) and Charlson comorbidity index. This RPA had moderate discrimination in SABR datasets (c-index range: 0.52-0.60) but was of limited value in the surgical validation cohort. The nomogram predicting OS included smoking history in addition to RPA-identified factors. In contrast to RPA, the nomogram performed well in internal validation (r² = 0.97) and in the external SABR (r² = 0.79) and surgical cohorts (r² = 0.91). Conclusions: The Amsterdam prognostic model is the first externally validated prognostication tool for OS in ES-NSCLC treated with SABR available to individualize patient decision making. The nomogram retained strong performance across surgical and SABR external validation datasets. RPA performance was poor in surgical patients, suggesting that 2 distinct patient populations are being treated with these 2 effective modalities.

  5. Servo-hydraulic actuator in controllable canonical form: Identification and experimental validation

    NASA Astrophysics Data System (ADS)

    Maghareh, Amin; Silva, Christian E.; Dyke, Shirley J.

    2018-02-01

    Hydraulic actuators have been widely used to experimentally examine structural behavior at multiple scales. Real-time hybrid simulation (RTHS) is one innovative testing method that relies largely on such servo-hydraulic actuators. In RTHS, interface conditions must be enforced in real time, and controllers are often used to achieve tracking of the desired displacements. Thus, neglecting the dynamics of the hydraulic transfer system may result in either system instability or sub-optimal performance. Herein, we propose a nonlinear dynamical model for a servo-hydraulic actuator (a.k.a. hydraulic transfer system) coupled with a nonlinear physical specimen. The nonlinear dynamical model is transformed into controllable canonical form for further tracking control design purposes. Through a number of experiments, the controllable canonical model is validated.
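The target of the transformation mentioned above is the controllable canonical (companion) form. As an illustration of what that form looks like, the sketch below builds (A, B, C) for a hypothetical second-order transfer function and checks controllability; it is not the authors' actuator model, and the coefficients are invented.

```python
def controllable_canonical(den, num):
    """(A, B, C) in controllable canonical form for a SISO transfer
    function num(s)/den(s), with monic denominator coefficients in
    descending powers of s. Matrices are plain lists of lists."""
    n = len(den) - 1
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0                 # superdiagonal of ones
    A[-1] = [-c for c in den[:0:-1]]      # last row: -a_0 ... -a_{n-1}
    B = [[0.0] for _ in range(n - 1)] + [[1.0]]
    C = [0.0] * n
    for i, b in enumerate(num[::-1]):     # b_0, b_1, ... ascending
        C[i] = b
    return A, B, C

# Hypothetical 2nd-order model G(s) = 1 / (s^2 + 3s + 2)
A, B, C = controllable_canonical([1.0, 3.0, 2.0], [1.0])
# Controllability matrix [B, AB] for the 2nd-order case:
AB = [sum(A[i][k] * B[k][0] for k in range(2)) for i in range(2)]
ctrb = [[B[0][0], AB[0]], [B[1][0], AB[1]]]
det = ctrb[0][0] * ctrb[1][1] - ctrb[0][1] * ctrb[1][0]
print(det != 0)   # nonzero determinant -> controllable
```

In this form the state feedback gains map directly onto the closed-loop characteristic polynomial, which is what makes the form convenient for tracking control design.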

  6. An ocean scatter propagation model for aeronautical satellite communication applications

    NASA Technical Reports Server (NTRS)

    Moreland, K. W.

    1990-01-01

    In this paper an ocean scattering propagation model, developed for aircraft-to-satellite (aeronautical) applications, is described. The purpose of the propagation model is to characterize the behavior of sea reflected multipath as a function of physical propagation path parameters. An accurate validation against the theoretical far field solution for a perfectly conducting sinusoidal surface is provided. Simulation results for typical L band aeronautical applications with low complexity antennas are presented.

  7. Chronic Pain: Content Validation of Nursing Diagnosis in Slovakia and the Czech Republic.

    PubMed

    Zeleníková, Renáta; Maniaková, Lenka

    2015-10-01

    The main purpose of the study was to validate the defining characteristics and related factors of the nursing diagnosis "chronic pain" in Slovakia and the Czech Republic. This is a descriptive study. The validation process was based on Fehring's Diagnostic Content Validity Model. Three defining characteristics (reports pain, altered ability to continue previous activities, and depression) were classified as major by Slovak nurses, and one defining characteristic (reports pain) was classified as major by Czech nurses. The results of the study provide guidance in devising strategies of pain assessment and can aid in the formulation of accurate nursing diagnoses. The defining characteristic "reports pain" is important for arriving at the nursing diagnosis "chronic pain." © 2014 NANDA International, Inc.

  8. Methods Data Qualification Interim Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Sam Alessi; Tami Grimmett; Leng Vang

    The overall goal of the Next Generation Nuclear Plant (NGNP) Data Management and Analysis System (NDMAS) is to maintain data provenance for all NGNP data, including the Methods component of NGNP data. Multiple means are available to access data stored in NDMAS. A web portal environment allows users to access data, view the results of qualification tests, and view graphs and charts of various attributes of the data. NDMAS also has methods for the management of the data output from VHTR simulation models and data generated from experiments designed to verify and validate the simulation codes. These simulation models represent the outcome of mathematical representation of VHTR components and systems. The methods data management approaches described herein will handle data that arise from experiment, simulation, and external sources for the main purpose of facilitating parameter estimation and model verification and validation (V&V). A model integration environment entitled ModelCenter is used to automate the storing of data from simulation model runs to the NDMAS repository. This approach does not adversely change the way computational scientists conduct their work. The method is to be used mainly to store the results of model runs that need to be preserved for auditing purposes or for display to the NDMAS web portal. This interim report demonstrates the current development of NDMAS for Methods data and discusses the data, and its qualification, that are currently part of NDMAS.

  9. SU-F-J-41: Experimental Validation of a Cascaded Linear System Model for MVCBCT with a Multi-Layer EPID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Y; Rottmann, J; Myronakis, M

    2016-06-15

    Purpose: The purpose of this study was to validate the use of a cascaded linear system model for MV cone-beam CT (CBCT) using a multi-layer (MLI) electronic portal imaging device (EPID) and provide experimental insight into image formation. A validated 3D model provides insight into salient factors affecting reconstructed image quality, allowing potential for optimizing detector design for CBCT applications. Methods: A cascaded linear system model was developed to investigate the potential improvement in reconstructed image quality for MV CBCT using an MLI EPID. Inputs to the three-dimensional (3D) model include projection space MTF and NPS. Experimental validation was performed on a prototype MLI detector installed on the portal imaging arm of a Varian TrueBeam radiotherapy system. CBCT scans of up to 898 projections over 360 degrees were acquired at exposures of 16 and 64 MU. Image volumes were reconstructed using a Feldkamp-type (FDK) filtered backprojection (FBP) algorithm. Flat field images and scans of a Catphan model 604 phantom were acquired. The effect of 2×2 and 4×4 detector binning was also examined. Results: Using projection flat fields as an input, the modeled and measured NPS in the axial plane exhibit good agreement. Binning projection images was shown to improve axial slice SDNR by a factor of approximately 1.4, an improvement largely driven by a decrease in image noise of roughly 20%. However, this effect is accompanied by a loss in image resolution. Conclusion: The measured axial NPS shows good agreement with the theoretical calculation using a linear system model. Binning of projection images improves the SNR of large objects on the Catphan phantom by decreasing noise. Specific imaging tasks will dictate whether binning is applied to the two-dimensional projection images. The project was partially supported by a grant from Varian Medical Systems, Inc. and grant No. R01CA188446-01 from the National Cancer Institute.
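The roughly 20% noise reduction from 2×2 binning reported above is notably smaller than the factor-of-2 reduction expected for spatially uncorrelated noise, which points to noise correlation in the projections. The following toy sketch shows the white-noise baseline using synthetic flat-field data (not EPID images).

```python
import random
import statistics

def bin2x2(img):
    """Average non-overlapping 2x2 blocks of a 2-D list of pixels."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)] for y in range(0, h, 2)]

random.seed(0)
# Synthetic flat field: constant signal plus uncorrelated Gaussian noise.
flat = [[100.0 + random.gauss(0.0, 5.0) for _ in range(64)] for _ in range(64)]
binned = bin2x2(flat)

noise_full = statistics.pstdev(v for row in flat for v in row)
noise_bin = statistics.pstdev(v for row in binned for v in row)
print(round(noise_full / noise_bin, 1))  # ~2.0 for white noise
```

A measured ratio well below 2 on real projections would be consistent with the correlated-noise interpretation above.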

  10. Validating MDS Data about Risk Factors for Perineal Dermatitis by Comparing With Nursing Home Records

    PubMed Central

    Toth, Anna M.; Bliss, Donna Z.; Savik, Kay; Wyman, Jean F.

    2011-01-01

    Perineal dermatitis is one of the main complications of incontinence and increases the cost of health care. The Minimum Data Set (MDS) contains data about factors associated with perineal dermatitis identified in a published conceptual model of perineal dermatitis. The purpose of this study was to determine the validity of MDS data related to perineal dermatitis risk factors by comparing them with data in nursing home chart records. Findings indicate that MDS items defining factors associated with perineal dermatitis were valid and supported use of the MDS in further investigation of a significant, costly, and understudied health problem of nursing home residents. PMID:18512629

  11. Cross validation issues in multiobjective clustering

    PubMed Central

    Brusco, Michael J.; Steinley, Douglas

    2018-01-01

    The implementation of multiobjective programming methods in combinatorial data analysis is an emergent area of study with a variety of pragmatic applications in the behavioural sciences. Most notably, multiobjective programming provides a tool for analysts to model trade-offs among competing criteria in clustering, seriation, and unidimensional scaling tasks. Although multiobjective programming has considerable promise, the technique can produce numerically appealing results that lack empirical validity. With this issue in mind, the purpose of this paper is to briefly review viable areas of application for multiobjective programming and, more importantly, to outline the importance of cross-validation when using this method in cluster analysis. PMID:19055857

  12. Development and validation of a mass casualty conceptual model.

    PubMed

    Culley, Joan M; Effken, Judith A

    2010-03-01

    To develop and validate a conceptual model that provides a framework for the development and evaluation of information systems for mass casualty events. The model was designed based on extant literature and existing theoretical models. A purposeful sample of 18 experts validated the model. Open-ended questions, as well as a 7-point Likert scale, were used to measure expert consensus on the importance of each construct and its relationship in the model and the usefulness of the model to future research. Computer-mediated applications were used to facilitate a modified Delphi technique through which a panel of experts provided validation for the conceptual model. Rounds of questions continued until consensus was reached, as measured by an interquartile range (no more than 1 scale point for each item); stability (change in the distribution of responses less than 15% between rounds); and percent agreement (70% or greater) for indicator questions. Two rounds of the Delphi process were needed to satisfy the criteria for consensus or stability related to the constructs, relationships, and indicators in the model. The panel reached consensus or sufficient stability to retain all 10 constructs, 9 relationships, and 39 of 44 indicators. Experts viewed the model as useful (mean of 5.3 on a 7-point scale). Validation of the model provides the first step in understanding the context in which mass casualty events take place and identifying variables that impact outcomes of care. This study provides a foundation for understanding the complexity of mass casualty care, the roles that nurses play in mass casualty events, and factors that must be considered in designing and evaluating information-communication systems to support effective triage under these conditions.
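The consensus criteria described above (interquartile range of no more than 1 scale point and percent agreement of 70% or greater) are straightforward to compute. Here is a minimal sketch for a single 7-point Likert item, using an invented panel and a hypothetical cut-off of 6 or higher counting as agreement; the between-round stability criterion is omitted.

```python
import statistics

def consensus(ratings, agree_threshold=6):
    """Check two Delphi consensus criteria for one 7-point item:
    interquartile range <= 1 scale point, and >= 70% of ratings at
    or above the agreement threshold (here 6 or 7, a hypothetical
    cut-off chosen for illustration)."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    iqr = q3 - q1
    pct_agree = sum(r >= agree_threshold for r in ratings) / len(ratings)
    return iqr <= 1 and pct_agree >= 0.70

# Invented ratings from an 18-member expert panel:
panel = [7, 6, 7, 6, 6, 7, 5, 7, 6, 7, 6, 6, 7, 6, 7, 6, 6, 7]
print(consensus(panel))
```

In a full Delphi implementation this check would be run per indicator each round, with items failing the criteria returned to the panel for another round.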

  13. Automated finite element meshing of the lumbar spine: Verification and validation with 18 specimen-specific models.

    PubMed

    Campbell, J Q; Coombs, D J; Rao, M; Rullkoetter, P J; Petrella, A J

    2016-09-06

    The purpose of this study was to seek broad verification and validation of human lumbar spine finite element models created using a previously published automated algorithm. The automated algorithm takes segmented CT scans of lumbar vertebrae, automatically identifies important landmarks and contact surfaces, and creates a finite element model. Mesh convergence was evaluated by examining changes in key output variables in response to mesh density. Semi-direct validation was performed by comparing experimental results for a single specimen to the automated finite element model results for that specimen with calibrated material properties from a prior study. Indirect validation was based on a comparison of results from automated finite element models of 18 individual specimens, all using one set of generalized material properties, to a range of data from the literature. A total of 216 simulations were run and compared to 186 experimental data ranges in all six primary bending modes up to 7.8 N·m with follower loads up to 1000 N. Mesh convergence results showed less than a 5% difference in key variables when the original mesh density was doubled. The semi-direct validation results showed that the automated method produced results comparable to manual finite element modeling methods. The indirect validation results showed a wide range of outcomes due to variations in the geometry alone. The studies showed that the automated models can be used to reliably evaluate lumbar spine biomechanics, specifically within our intended context of use: in pure bending modes, under relatively low non-injurious simulated in vivo loads, to predict torque rotation response, disc pressures, and facet forces. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Modelling Attempts to Predict Fretting-Fatigue Life on Turbine Components

    DTIC Science & Technology

    2004-06-01

    For validation purposes, life prediction is compared with experimental results. ... Furthermore, unlike real engine conditions, there are no additional vibrational loads exerted on the dummy because the test is run...

  15. Core Competencies for Training Effective School Consultants

    ERIC Educational Resources Information Center

    Burkhouse, Katie Lynn Sutton

    2012-01-01

    The purpose of this research was to develop and validate a set of core competencies of effective school-based consultants for preservice school psychology consultation training. With recent changes in service delivery models, psychologists are challenged to engage in more indirect, preventative practices (Reschly, 2008). Consultation emerges as…

  16. Strategy and the Internationalisation of Universities

    ERIC Educational Resources Information Center

    Elkin, Graham; Farnsworth, John; Templer, Andrew

    2008-01-01

    Purpose: The paper's aim is to explore the relationship between having a complete strategic focus and the extent of internationalisation of university business schools, the level of desire for future internationalisation, and to further validate the model of internationalisation. Design/methodology/approach: Data were collected for…

  17. Confirmatory factorial analysis of the Children's Attraction to Physical Activity scale (CAPA).

    PubMed

    Seabra, A C; Maia, J A; Parker, M; Seabra, A; Brustad, R; Fonseca, A M

    2015-03-27

    Attraction to physical activity (PA) is an important contributor to children's intrinsic motivation to engage in games and sports. Previous studies have supported the utility of the Children's Attraction to PA scale (CAPA) (Brustad, 1996), but the validity of this measure for use in Portugal has not been established. The purpose of this study was to cross-validate the shorter version of the CAPA scale in the Portuguese cultural context. A sample of 342 children (8-10 years of age) was used. Confirmatory factor analyses using EQS software (version 6.1) tested three competing measurement models: a single-factor model, a five-factor model, and a second-order factor model. The single-factor model and the second-order model showed a poor fit to the data. A five-factor model similar to the original one revealed good fit to the data (S-B χ²(67) = 94.27, p = 0.02; NNFI = 0.93; CFI = 0.95; RMSEA = 0.04; 90% CI: 0.02, 0.05). The results indicated that the CAPA scale is valid and appropriate for use in the Portuguese cultural context. The availability of a valid scale to evaluate attraction to PA at schools should provide improved opportunities for better assessment and understanding of children's involvement in PA.

  18. Institutional Communication Dynamics in Instructional Effectiveness: Development of a Student Self-Report Measure of FVP, LMX, and TMX in a Pedagogical Context

    ERIC Educational Resources Information Center

    Lucas, Aaron D.; Voss, Roger Alan; Krumwiede, Dennis W.

    2015-01-01

    Fractal vertical polarization (FVP) has joined leader-member exchange (LMX) and team member exchange (TMX) as one of the available models of communication dynamics based on complexity theory, which now all benefit from valid scales for use in organizational settings. The purpose of these models is to assess the quality of interpersonal information…

  19. Assessing Validity of Measurement in Learning Disabilities Using Hierarchical Generalized Linear Modeling: The Roles of Anxiety and Motivation

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.

    2016-01-01

    The purpose of the present studies was to test the hypothesis that the psychometric characteristics of ability scales may be significantly distorted if one accounts for emotional factors during test taking. Specifically, the present studies evaluate the effects of anxiety and motivation on the item difficulties of the Rasch model. In Study 1, the…

  20. A National Survey on the Taxonomy of Community Living Skills. Working Paper 87-4. COMPETE: Community-Based Model for Public-School Exit and Transition to Employment.

    ERIC Educational Resources Information Center

    Dever, Richard B.

    This paper is a product of Project COMPETE, a service demonstration project undertaken for the purpose of developing and validating a model and training sequence to improve transition services for moderately, severely, and profoundly retarded youth. The paper describes the Taxonomy of Community Living Skills, an organized statement of…

  1. Assessing Music Students' Motivation Using the Music Model of Academic Motivation Inventory

    ERIC Educational Resources Information Center

    Parkes, Kelly A.; Jones, Brett D.; Wilkins, Jesse L. M.

    2017-01-01

    The purpose of this study was to investigate the reliability and validity of using a motivation inventory with music students in upper-elementary, middle, and high school. We used the middle/high school version of the MUSIC Model of Academic Motivation Inventory to survey 93 students in the 5th to 12th grades in one school. Our analysis revealed…

  2. Verification and Validation Studies for the LAVA CFD Solver

    NASA Technical Reports Server (NTRS)

    Moini-Yekta, Shayan; Barad, Michael F; Sozer, Emre; Brehm, Christoph; Housman, Jeffrey A.; Kiris, Cetin C.

    2013-01-01

    The verification and validation of the Launch Ascent and Vehicle Aerodynamics (LAVA) computational fluid dynamics (CFD) solver is presented. A modern strategy for verification and validation is described incorporating verification tests, validation benchmarks, continuous integration and version control methods for automated testing in a collaborative development environment. The purpose of the approach is to integrate the verification and validation process into the development of the solver and improve productivity. This paper uses the Method of Manufactured Solutions (MMS) for the verification of 2D Euler equations, 3D Navier-Stokes equations as well as turbulence models. A method for systematic refinement of unstructured grids is also presented. Verification using inviscid vortex propagation and flow over a flat plate is highlighted. Simulation results using laminar and turbulent flow past a NACA 0012 airfoil and ONERA M6 wing are validated against experimental and numerical data.
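The Method of Manufactured Solutions verifies a discretization by choosing an exact solution, deriving the source terms it implies, and checking that the numerical error shrinks at the scheme's formal order as the grid is refined. Here is a minimal 1-D sketch using a centred second difference and the manufactured solution u = sin(πx); this is an illustration of the idea, not one of LAVA's test cases.

```python
import math

def u(x):
    """Manufactured solution."""
    return math.sin(math.pi * x)

def laplacian_error(n):
    """Max error of the centred second difference applied to u,
    compared with the manufactured exact value u'' = -pi^2 sin(pi x),
    on n interior intervals of [0, 1]."""
    h = 1.0 / n
    worst = 0.0
    for i in range(1, n):
        x = i * h
        approx = (u(x - h) - 2.0 * u(x) + u(x + h)) / h ** 2
        exact = -math.pi ** 2 * math.sin(math.pi * x)
        worst = max(worst, abs(approx - exact))
    return worst

e1, e2 = laplacian_error(32), laplacian_error(64)
order = math.log(e1 / e2, 2)   # observed order of accuracy
print(round(order, 1))         # close to 2 for a second-order scheme
```

Halving the grid spacing should cut the error by about a factor of four for a second-order operator, so an observed order near 2 verifies the implementation.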

  3. Partial validation of the Dutch model for emission and transport of nutrients (STONE).

    PubMed

    Overbeek, G B; Tiktak, A; Beusen, A H; van Puijenbroek, P J

    2001-11-17

    The Netherlands has to cope with large losses of N and P to groundwater and surface water. Agriculture is the dominant source of these nutrients, particularly with reference to nutrient excretion due to intensive animal husbandry in combination with fertilizer use. The Dutch government has recently launched a stricter eutrophication abatement policy to comply with the EC nitrate directive. The Dutch consensus model for N and P emission to groundwater and surface water (STONE) has been developed to evaluate the environmental benefits of abatement plans. Due to the possibly severe socioeconomic consequences of eutrophication abatement plans, it is of utmost importance that the model is thoroughly validated. Because STONE is applied on a nationwide scale, the model validation has also been carried out on this scale. For this purpose the model outputs were compared with lumped results from monitoring networks in the upper groundwater and in surface waters. About 13,000 recent point source observations of nitrate in the upper groundwater were available, along with several hundreds of observations showing N and P in local surface water systems. Comparison of observations from the different spatial scales available showed the issue of scale to be important. Scale issues will be addressed in the next stages of the validation study.

  4. Statistical validity of using ratio variables in human kinetics research.

    PubMed

    Liu, Yuanlong; Schutz, Robert W

    2003-09-01

    The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and to examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It was recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
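The dependence of ratio reliability on the component variables can be explored with a small simulation. The sketch below uses invented force and body-mass test-retest data (not the study's data sets) and estimates the component and ratio reliabilities as simple between-trial Pearson correlations, a cruder stand-in for the intraclass correlations used in the study.

```python
import random

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

random.seed(1)
# Hypothetical true scores: force (numerator) and body mass (denominator).
true_f = [random.gauss(1500, 200) for _ in range(50)]
true_m = [random.gauss(70, 10) for _ in range(50)]
trial = lambda vals, sd: [v + random.gauss(0, sd) for v in vals]
f1, f2 = trial(true_f, 60), trial(true_f, 60)   # two noisy trials
m1, m2 = trial(true_m, 3), trial(true_m, 3)

r_ratio = pearson([a / b for a, b in zip(f1, m1)],
                  [a / b for a, b in zip(f2, m2)])
print(round(pearson(f1, f2), 2), round(pearson(m1, m2), 2), round(r_ratio, 2))
```

Varying the noise standard deviations (and hence the coefficients of variation) shows how the ratio's test-retest correlation need not track the component reliabilities, which is the study's caution.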

  5. Parametric models of reflectance spectra for dyed fabrics

    NASA Astrophysics Data System (ADS)

    Aiken, Daniel C.; Ramsey, Scott; Mayo, Troy; Lambrakos, Samuel G.; Peak, Joseph

    2016-05-01

    This study examines parametric modeling of NIR reflectivity spectra for dyed fabrics, which provides for both their inverse and direct modeling. The dye considered for prototype analysis is triarylamine dye. The fabrics considered are camouflage textiles characterized by color variations. The results of this study provide validation of the constructed parametric models, within reasonable error tolerances for practical applications, including NIR spectral characteristics in camouflage textiles, for purposes of simulating NIR spectra corresponding to various dye concentrations in host fabrics, and potentially to mixtures of dyes.

  6. Validation of the SETOC Instrument--Student Evaluation of Teaching in Outpatient Clinics

    ERIC Educational Resources Information Center

    Zuberi, Rukhsana W.; Bordage, Georges; Norman, Geoffrey R.

    2007-01-01

    Purpose: There is a paucity of evaluation forms specifically developed and validated for outpatient settings. The purpose of this study was to develop and validate an instrument specifically for evaluating outpatient teaching, to provide reliable and valid ratings for individual and group feedback to faculty, and to identify outstanding teachers…

  7. Rater reliability and construct validity of a mobile application for posture analysis

    PubMed Central

    Szucs, Kimberly A.; Brown, Elena V. Donoso

    2018-01-01

    [Purpose] Measurement of posture is important for those with a clinical diagnosis as well as researchers aiming to understand the impact of faulty postures on the development of musculoskeletal disorders. A reliable, cost-effective and low tech posture measure may be beneficial for research and clinical applications. The purpose of this study was to determine rater reliability and construct validity of a posture screening mobile application in healthy young adults. [Subjects and Methods] Pictures of subjects were taken in three standing positions. Two raters independently digitized the static standing posture image twice. The app calculated posture variables, including sagittal and coronal plane translations and angulations. Intra- and inter-rater reliability were calculated using the appropriate ICC models for complete agreement. Construct validity was determined through comparison of known groups using repeated measures ANOVA. [Results] Intra-rater reliability ranged from 0.71 to 0.99. Inter-rater reliability was good to excellent for all translations. ICCs were stronger for translations versus angulations. The construct validity analysis found that the app was able to detect the change in the four variables selected. [Conclusion] The posture mobile application has demonstrated strong rater reliability and preliminary evidence of construct validity. This application may have utility in clinical and research settings. PMID:29410561
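The "ICC models for complete agreement" mentioned in the abstract can be illustrated with one common variant, ICC(2,1): two-way random effects, absolute agreement, single measurement. The sketch below is self-contained and runs on invented posture angles, not the study's data.

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement. `data` is a subjects x raters grid of scores."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_m = [sum(row) / k for row in data]
    col_m = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_m) / (n - 1)   # rows (subjects)
    msc = n * sum((m - grand) ** 2 for m in col_m) / (k - 1)   # columns (raters)
    sse = sum((data[i][j] - row_m[i] - col_m[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                            # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical posture angles (degrees) digitized twice by one rater:
ratings = [[48.2, 48.5], [51.0, 50.6], [46.8, 47.1], [53.4, 53.0]]
print(round(icc_2_1(ratings), 2))
```

With repeat digitizations this close together relative to the between-subject spread, the ICC lands near 1, matching the strong intra-rater reliability the study reports.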

  9. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph-theoretical model for describing data and control flow associated with the execution of large-grained algorithms in a spatially distributed computer environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
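A data-flow model of the kind described above exposes concurrency by the rule that a node may fire once all of its input data are available. As an illustrative sketch (not the ATAMM specification itself), the following groups the nodes of a small hypothetical algorithm graph into waves that can execute concurrently.

```python
from collections import defaultdict

def concurrency_levels(edges):
    """Group the nodes of an algorithm DAG into 'waves' that may run
    concurrently: a node is ready once every incoming data
    dependency has been satisfied (Kahn-style level scheduling)."""
    indeg = defaultdict(int)
    succ = defaultdict(list)
    nodes = set()
    for u, v in edges:
        nodes |= {u, v}
        succ[u].append(v)
        indeg[v] += 1
    ready = sorted(n for n in nodes if indeg[n] == 0)
    levels = []
    while ready:
        levels.append(ready)
        nxt = []
        for u in ready:
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        ready = sorted(nxt)
    return levels

# Hypothetical large-grained graph: A feeds B and C, which feed D.
print(concurrency_levels([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]))
# -> [['A'], ['B', 'C'], ['D']]
```

The wave structure gives an upper bound on concurrency (here, two processors are usable in the middle wave), which is the kind of quantity a performance analysis of such an architecture would start from.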

  10. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  11. Reduced-order modeling for hyperthermia control.

    PubMed

    Potocki, J K; Tharp, H S

    1992-12-01

    This paper analyzes the feasibility of using reduced-order modeling techniques in the design of multiple-input, multiple-output (MIMO) hyperthermia temperature controllers. State space thermal models are created based upon a finite difference expansion of the bioheat transfer equation model of a scanned focused ultrasound system (SFUS). These thermal state space models are reduced using the balanced realization technique, and an order reduction criterion is tabulated. Results show that a drastic reduction in model dimension can be achieved using the balanced realization. The reduced-order model is then used to design a reduced-order optimal servomechanism controller for a two-scan input, two thermocouple output tissue model. In addition, a full-order optimal servomechanism controller is designed for comparison and validation purposes. These two controllers are applied to a variety of perturbed tissue thermal models to test the robust nature of the reduced-order controller. A comparison of the two controllers validates the use of open-loop balanced reduced-order models in the design of MIMO hyperthermia controllers.

  12. Communication guidelines as a learning tool: an exploration of user preferences in general practice.

    PubMed

    Veldhuijzen, Wemke; Ram, Paul M; van der Weijden, Trudy; van der Vleuten, Cees P M

    2013-02-01

    To explore characteristics of written communication guidelines that enhance the success of training aimed at the application of the recommendations in the guidelines. Seven mixed focus groups were held, consisting of communication skill teachers and communication skill learners, and three groups with only learners. Analysis was done in line with principles of grounded theory. Five key attributes of guidelines for communication skill training were identified: complexity, level of detail, format and organization, type of information, and trustworthiness/validity. The desired use of these attributes is related to specific educational purposes and learners' expertise. The low complexity of current communication guidelines is appreciated, but seems at odds with the wish for more valid communication guidelines. Which guideline characteristics are preferred by users depends on the expertise of the learners and the educational purpose of the guideline. Communication guidelines can be improved by modifying the key attributes in line with specific educational functions and learner expertise. For example, the communication guidelines used in GP training in the Netherlands seem to offer an oversimplified model of doctor-patient communication. This model may be suited for undergraduate learning, but does not meet the validity demands of physicians in training. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  13. Scaling the Information Processing Demands of Occupations

    ERIC Educational Resources Information Center

    Haase, Richard F.; Jome, LaRae M.; Ferreira, Joaquim Armando; Santos, Eduardo J. R.; Connacher, Christopher C.; Sendrowitz, Kerrin

    2011-01-01

    The purpose of this study was to provide additional validity evidence for a model of person-environment fit based on polychronicity, stimulus load, and information processing capacities. In this line of research the confluence of polychronicity and information processing (e.g., the ability of individuals to process stimuli from the environment…

  14. Development and Evaluation of a Confidence-Weighting Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Yen, Yung-Chin; Ho, Rong-Guey; Chen, Li-Ju; Chou, Kun-Yi; Chen, Yan-Lin

    2010-01-01

    The purpose of this study was to examine whether the efficiency, precision, and validity of computerized adaptive testing (CAT) could be improved by assessing confidence differences in knowledge that examinees possessed. We proposed a novel polytomous CAT model called the confidence-weighting computerized adaptive testing (CWCAT), which combined a…

  15. Technical Data for Five Learning Style Instruments with Instructional Applications.

    ERIC Educational Resources Information Center

    Lemire, David

    This manual presents five learning style instruments along with data on their validity, reliability, and descriptive statistics. The manual also discusses the implications for learning presented by each of these learning models. For purposes of this discussion, "learning style," "cognitive style," and "personal style" are used synonymously.…

  16. Influences of Tone on Vowel Articulation in Mandarin Chinese

    ERIC Educational Resources Information Center

    Shaw, Jason A.; Chen, Wei-rong; Proctor, Michael I.; Derrick, Donald

    2016-01-01

    Purpose: Models of speech production often abstract away from shared physiology in pitch control and lingual articulation, positing independent control of tone and vowel units. We assess the validity of this assumption in Mandarin Chinese by evaluating the stability of lingual articulation for vowels across variation in tone. Method:…

  17. Objective Measure of Nasal Air Emission Using Nasal Accelerometry

    ERIC Educational Resources Information Center

    Cler, Meredith J.; Lien, Yu-An S.; Braden, Maia N.; Mittleman, Talia; Downing, Kerri; Stepp, Cara E.

    2016-01-01

    Purpose: This article describes the development and initial validation of an objective measure of nasal air emission (NAE) using nasal accelerometry. Method: Nasal acceleration and nasal airflow signals were simultaneously recorded while an expert speech language pathologist modeled NAEs at a variety of severity levels. In addition, microphone and…

  18. Outdoor Leader Career Development: Exploration of a Career Path

    ERIC Educational Resources Information Center

    Wagstaff, Mark

    2016-01-01

    The purpose of this study was to assess the efficacy of the proposed Outdoor Leader Career Development Model (OLCDM) through the development of the Outdoor Leader Career Development Inventory (OLCDI). I assessed the reliability and validity of the OLCDI through exploratory factor analysis, principal component analysis, and varimax rotation, based…

  19. Measuring Therapeutic Alliance with Children in Residential Treatment and Therapeutic Day Care

    ERIC Educational Resources Information Center

    Roest, Jesse; van der Helm, Peer; Strijbosch, Eefje; van Brandenburg, Mariëtte; Stams, Geert Jan

    2016-01-01

    Purpose: This study examined the construct validity and reliability of a therapeutic alliance measure (Children's Alliance Questionnaire [CAQ]) for children with psychosocial and/or behavioral problems, receiving therapeutic residential care or day care in the Netherlands. Methods: Confirmatory factor analysis of a one-factor model ''therapeutic…

  20. Dynamic CFD Simulations of the Supersonic Inflatable Aerodynamic Decelerator (SIAD) Ballistic Range Tests

    NASA Technical Reports Server (NTRS)

    Brock, Joseph M; Stern, Eric

    2016-01-01

    Dynamic CFD simulations of the SIAD ballistic test model were performed using the US3D flow solver. The motivation for these simulations is to validate and verify the US3D flow solver as a viable computational tool for predicting dynamic coefficients.

  1. The ALADIN System and its canonical model configurations AROME CY41T1 and ALARO CY40T1

    NASA Astrophysics Data System (ADS)

    Termonia, Piet; Fischer, Claude; Bazile, Eric; Bouyssel, François; Brožková, Radmila; Bénard, Pierre; Bochenek, Bogdan; Degrauwe, Daan; Derková, Mariá; El Khatib, Ryad; Hamdi, Rafiq; Mašek, Ján; Pottier, Patricia; Pristov, Neva; Seity, Yann; Smolíková, Petra; Španiel, Oldřich; Tudor, Martina; Wang, Yong; Wittmann, Christoph; Joly, Alain

    2018-01-01

    The ALADIN System is a numerical weather prediction (NWP) system developed by the international ALADIN consortium for operational weather forecasting and research purposes. It is based on a code that is shared with the global model IFS of the ECMWF and the ARPEGE model of Météo-France. Today, this system can be used to provide a multitude of high-resolution limited-area model (LAM) configurations. A few configurations are thoroughly validated and prepared to be used for the operational weather forecasting in the 16 partner institutes of this consortium. These configurations are called the ALADIN canonical model configurations (CMCs). There are currently three CMCs: the ALADIN baseline CMC, the AROME CMC and the ALARO CMC. Other configurations are possible for research, such as process studies and climate simulations. The purpose of this paper is (i) to define the ALADIN System in relation to the global counterparts IFS and ARPEGE, (ii) to explain the notion of the CMCs, (iii) to document their most recent versions, and (iv) to illustrate the process of the validation and the porting of these configurations to the operational forecast suites of the partner institutes of the ALADIN consortium. This paper is restricted to the forecast model only; data assimilation techniques and postprocessing techniques are part of the ALADIN System but they are not discussed here.

  2. Multi-gene genetic programming based predictive models for municipal solid waste gasification in a fluidized bed gasifier.

    PubMed

    Pandey, Daya Shankar; Pan, Indranil; Das, Saptarshi; Leahy, James J; Kwapinski, Witold

    2015-03-01

    A multi-gene genetic programming technique is proposed as a new method to predict syngas yield production and the lower heating value for municipal solid waste gasification in a fluidized bed gasifier. The study shows that the predicted outputs of the municipal solid waste gasification process are in good agreement with the experimental dataset and also generalise well to validation (untrained) data. Published experimental datasets are used for model training and validation purposes. The results show the effectiveness of the genetic programming technique for solving complex nonlinear regression problems. The multi-gene genetic programming model is also compared with a single-gene genetic programming model to show the relative merits and demerits of the technique. This study demonstrates that the genetic programming based data-driven modelling strategy can be a good candidate for developing models for other types of fuels as well. Copyright © 2014 Elsevier Ltd. All rights reserved.
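
    The training/validation discipline described here is generic and easy to sketch. The snippet below uses a synthetic one-variable dataset standing in for gasifier operating conditions and syngas yield (purely illustrative, not the paper's data): a simple model is fitted on a training split and its generalisation judged on held-out validation data:

```python
import random
random.seed(0)

# Synthetic stand-in data: one operating variable x, noisy linear "yield" y
# (purely illustrative -- not the gasifier dataset from the paper).
xs = [random.uniform(0.0, 1.0) for _ in range(60)]
ys = [2.0 + 3.0 * x + random.gauss(0.0, 0.1) for x in xs]

train_x, train_y = xs[:40], ys[:40]   # training split
val_x, val_y = xs[40:], ys[40:]       # held-out (untrained) validation split

# Fit simple linear regression on the training split only (closed form).
n = len(train_x)
mx, my = sum(train_x) / n, sum(train_y) / n
b = (sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
     / sum((x - mx) ** 2 for x in train_x))
a = my - b * mx

# Judge generalisation on the validation split via R^2.
mv = sum(val_y) / len(val_y)
ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(val_x, val_y))
ss_tot = sum((y - mv) ** 2 for y in val_y)
print(f"validation R^2 = {1 - ss_res / ss_tot:.3f}")
```

    A high validation R^2 here indicates the fitted model generalises beyond the data it was trained on, which is the property the abstract claims for the genetic programming models.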

  3. Psychometric properties of the Perceived Stress Scale (PSS): measurement invariance between athletes and non-athletes and construct validity

    PubMed Central

    Lin, Ju-Han; Nien, Chiao-Lin; Hsu, Ya-Wen; Liu, Hong-Yu

    2016-01-01

    Background Although Perceived Stress Scale (PSS, Cohen, Kamarack & Mermelstein, 1983) has been validated and widely used in many domains, there is still no validation in sports by comparing athletes and non-athletes and examining related psychometric indices. Purpose The purpose of this study was to examine the measurement invariance of PSS between athletes and non-athletes, and examine construct validity and reliability in the sports contexts. Methods Study 1 sampled 359 college student-athletes (males = 233; females = 126) and 242 non-athletes (males = 124; females = 118) and examined factorial structure, measurement invariance and internal consistency. Study 2 sampled 196 student-athletes (males = 139, females = 57, Mage = 19.88 yrs, SD = 1.35) and examined discriminant validity and convergent validity of PSS. Study 3 sampled 37 student-athletes to assess test-retest reliability of PSS. Results Results found that 2-factor PSS-10 fitted the model the best and had appropriate reliability. Also, there was a measurement invariance between athletes and non-athletes; and PSS positively correlated with athletic burnout and life stress but negatively correlated with coping efficacy provided evidence of discriminant validity and convergent validity. Further, the test-retest reliability for PSS subscales was significant (r = .66 and r = .50). Discussion It is suggested that 2-factor PSS-10 can be a useful tool in assessing perceived stress either in sports or non-sports settings. We suggest future study may use 2-factor PSS-10 in examining the effects of stress on the athletic injury, burnout, and psychiatry disorders. PMID:27994983
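
    The internal-consistency checks reported in Study 1 typically rest on Cronbach's alpha, which is straightforward to compute. The sketch below uses hypothetical responses from six respondents to a three-item subscale (invented numbers, not the PSS data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per scale item (same respondents in order)."""
    k = len(items)
    item_var = sum(pvariance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical responses from 6 respondents to a 3-item subscale.
items = [
    [3, 4, 2, 5, 4, 3],
    [3, 5, 2, 4, 4, 3],
    [2, 4, 3, 5, 5, 3],
]
print(round(cronbach_alpha(items), 2))  # -> 0.89
```

    Values around 0.7 or above are conventionally read as adequate internal consistency for a subscale.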

  4. Development and field validation of a regional, management-scale habitat model: A koala Phascolarctos cinereus case study.

    PubMed

    Law, Bradley; Caccamo, Gabriele; Roe, Paul; Truskinger, Anthony; Brassil, Traecey; Gonsalves, Leroy; McConville, Anna; Stanton, Matthew

    2017-09-01

    Species distribution models have great potential to efficiently guide management for threatened species, especially for those that are rare or cryptic. We used MaxEnt to develop a regional-scale model for the koala Phascolarctos cinereus at a resolution (250 m) that could be used to guide management. To ensure the model was fit for purpose, we placed emphasis on validating the model using independently-collected field data. We reduced substantial spatial clustering of records in coastal urban areas using a 2-km spatial filter and by modeling separately two subregions separated by the 500-m elevational contour. A bias file was prepared that accounted for variable survey effort. Frequency of wildfire, soil type, floristics and elevation had the highest relative contribution to the model, while a number of other variables made minor contributions. The model was effective in discriminating different habitat suitability classes when compared with koala records not used in modeling. We validated the MaxEnt model at 65 ground-truth sites using independent data on koala occupancy (acoustic sampling) and habitat quality (browse tree availability). Koala bellows (n = 276) were analyzed in an occupancy modeling framework, while site habitat quality was indexed based on browse trees. Field validation demonstrated a linear increase in koala occupancy with higher modeled habitat suitability at ground-truth sites. Similarly, a site habitat quality index at ground-truth sites was correlated positively with modeled habitat suitability. The MaxEnt model provided a better fit to estimated koala occupancy than the site-based habitat quality index, probably because many variables were considered simultaneously by the model rather than just browse species. The positive relationship of the model with both site occupancy and habitat quality indicates that the model is fit for application at relevant management scales.
Field-validated models of similar resolution would assist in guiding management of conservation-dependent species.
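
    The field-validation logic here, comparing observed occupancy against modeled habitat suitability at ground-truth sites, can be sketched with synthetic data. The sites, suitability values, and detection probabilities below are invented (not the koala data); the check simply asks whether occupancy rises with modeled suitability:

```python
import random
random.seed(1)

# Hypothetical ground-truth sites: modeled suitability in [0, 1], plus a
# detection flag drawn so occupancy rises linearly with suitability
# (invented numbers -- not the koala field data).
suit = [random.random() for _ in range(200)]
sites = [(s, random.random() < 0.1 + 0.7 * s) for s in suit]

# Compare occupancy rates between low- and high-suitability sites.
lo = [det for s, det in sites if s < 0.5]
hi = [det for s, det in sites if s >= 0.5]
rate = lambda group: sum(group) / len(group)
print(f"occupancy: low-suitability {rate(lo):.2f}, high-suitability {rate(hi):.2f}")
```

    A real analysis would model detection probability explicitly (occupancy modeling, as the abstract notes) rather than compare raw rates, but the direction of the comparison is the essence of the field validation.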

  5. Computational fluid dynamics modeling of laboratory flames and an industrial flare.

    PubMed

    Singh, Kanwar Devesh; Gangadharan, Preeti; Chen, Daniel H; Lou, Helen H; Li, Xianchang; Richmond, Peyton

    2014-11-01

    A computational fluid dynamics (CFD) methodology for simulating the combustion process has been validated with experimental results. Three different experimental setups were used to validate the CFD model: an industrial-scale flare and two lab-scale flames. The CFD study also involved three different fuels: C3H6/CH/Air/N2, C2H4/O2/Ar and CH4/Air. In the first setup, flare efficiency data from the Texas Commission on Environmental Quality (TCEQ) 2010 field tests were used to validate the CFD model. In the second setup, a McKenna burner with flat flames was simulated. Temperature and mass fractions of important species were compared with the experimental data. Finally, results of an experimental study done at Sandia National Laboratories to generate a lifted jet flame were used for the purpose of validation. The reduced 50-species mechanism LU 1.1, the realizable k-epsilon turbulence model, and the EDC turbulence-chemistry interaction model were used for this work. Flare efficiency, axial profiles of temperature, and mass fractions of various intermediate species obtained in the simulation were compared with experimental data, and good agreement between the profiles was clearly observed. In particular, the simulation match with the TCEQ 2010 flare tests has been significantly improved (within 5% of the data) compared to the results reported by Singh et al. in 2012. Validation of the speciated flat flame data supports the view that flares can be a primary source of formaldehyde emission.

  6. Developing calculus textbook model that supported with GeoGebra to enhancing students’ mathematical problem solving and mathematical representation

    NASA Astrophysics Data System (ADS)

    Dewi, N. R.; Arini, F. Y.

    2018-03-01

    The main purpose of this research was to develop and produce a Calculus textbook model supported with GeoGebra. The book was designed to enhance students' mathematical problem solving and mathematical representation. There were three stages in this research, i.e. define, design, and develop. The textbook consisted of 6 chapters, each of which contains an introduction and core materials and includes examples and exercises. The development phase began with the initial design of the book (draft 1), which was then validated by experts. Revision of draft 1 produced draft 2. The data were analyzed with descriptive statistics. The analysis showed that the Calculus textbook model supported with GeoGebra is valid and fulfills the criteria of practicality.

  7. Factorial validity of an abbreviated neighborhood environment walkability scale for seniors in the Nurses' Health Study.

    PubMed

    Starnes, Heather A; McDonough, Meghan H; Tamura, Kosuke; James, Peter; Laden, Francine; Troped, Philip J

    2014-10-10

    Using validated measures of individuals' perceptions of their neighborhood built environment is important for accurately estimating effects on physical activity. However, no studies to date have examined the factorial validity of a measure of perceived neighborhood environment among older adults in the United States. The purpose of this measurement study was to test the factorial validity of a version of the Abbreviated Neighborhood Environment Walkability Scale (NEWS-A) modified for seniors in the Nurses' Health Study (NHS). A random sample of 2,920 female nurses (mean age = 73 ± 7 years) in the NHS cohort from California, Massachusetts, and Pennsylvania completed a 36-item modified NEWS-A for seniors. Confirmatory factor analyses were conducted to test measurement models for both the modified NEWS-A for seniors and the original NEWS-A. Internal consistency within factors was examined using Cronbach's alpha. The hypothesized 7-factor measurement model was a poor fit for the modified NEWS-A for seniors. Overall, the best-fitting measurement model was the original 6-factor solution to the NEWS-A. Factors were correlated and internally consistent. This study provided support for the construct validity of the original NEWS-A for assessing perceptions of neighborhood environments in older women in the United States.

  8. An RL10A-3-3A rocket engine model using the rocket engine transient simulator (ROCETS) software

    NASA Technical Reports Server (NTRS)

    Binder, Michael

    1993-01-01

    Steady-state and transient computer models of the RL10A-3-3A rocket engine have been created using the Rocket Engine Transient Simulation (ROCETS) code. These models were created for several purposes. The RL10 engine is a critical component of past, present, and future space missions; the model will give NASA an in-house capability to simulate the performance of the engine under various operating conditions and mission profiles. The RL10 simulation activity is also an opportunity to further validate the ROCETS program. The ROCETS code is an important tool for modeling rocket engine systems at NASA Lewis. ROCETS provides a modular and general framework for simulating the steady-state and transient behavior of any desired propulsion system. Although the ROCETS code is being used in a number of different analysis and design projects within NASA, it has not been extensively validated for any system using actual test data. The RL10A-3-3A has a ten year history of test and flight applications; it should provide sufficient data to validate the ROCETS program capability. The ROCETS models of the RL10 system were created using design information provided by Pratt & Whitney, the engine manufacturer. These models are in the process of being validated using test-stand and flight data. This paper includes a brief description of the models and comparison of preliminary simulation output against flight and test-stand data.

  9. Measurement of Online Student Engagement: Utilization of Continuous Online Student Behavior Indicators as Items in a Partial Credit Rasch Model

    ERIC Educational Resources Information Center

    Anderson, Elizabeth

    2017-01-01

    Student engagement has been shown to be essential to the development of research-based best practices for K-12 education. It has been defined and measured in numerous ways. The purpose of this research study was to develop a measure of online student engagement for grades 3 through 8 using a partial credit Rasch model and validate the measure…

  10. A Taxonomy of Instructional Objectives for Developmentally Disabled Persons: Vocational Domain. Working Paper 85-1. COMPETE: Community-Based Model for Public-School Exit and Transition to Employment.

    ERIC Educational Resources Information Center

    Dever, Richard B.

    The purpose of Project COMPETE is to use previous research and exemplary practices to develop and validate a model and training sequence to assist retarded youth to make the transition from school to employment in the most competitive environment possible. This project working paper lists vocational goals and objectives that individuals with…

  11. A Cross-Cultural Validation of the MUSIC® Model of Academic Motivation Inventory: Evidence from Chinese- and Spanish-Speaking University Students

    ERIC Educational Resources Information Center

    Jones, Brett D.; Li, Ming; Cruz, Juan M.

    2017-01-01

    The purpose of this study was to examine the extent to which Chinese and Spanish translations of the College Student version of the MUSIC® Model of Academic Motivation Inventory (MUSIC Inventory; Jones, 2012) demonstrate acceptable psychometric properties. We surveyed 300 students at a university in China and 201 students at a university in…

  12. Nomogram Prediction of Overall Survival After Curative Irradiation for Uterine Cervical Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seo, YoungSeok; Yoo, Seong Yul; Kim, Mi-Sook

    Purpose: The purpose of this study was to develop a nomogram capable of predicting the probability of 5-year survival after radical radiotherapy (RT) without chemotherapy for uterine cervical cancer. Methods and Materials: We retrospectively analyzed 549 patients that underwent radical RT for uterine cervical cancer between March 1994 and April 2002 at our institution. Multivariate analysis using Cox proportional hazards regression was performed, and this Cox model was used as the basis for the devised nomogram. The model was internally validated for discrimination and calibration by bootstrap resampling. Results: By multivariate regression analysis, the model showed that age, hemoglobin level before RT, Federation Internationale de Gynecologie Obstetrique (FIGO) stage, maximal tumor diameter, lymph node status, and RT dose at Point A significantly predicted overall survival. The survival prediction model demonstrated good calibration and discrimination. The bootstrap-corrected concordance index was 0.67. The predictive ability of the nomogram proved to be superior to FIGO stage (p = 0.01). Conclusions: The devised nomogram offers a significantly better level of discrimination than the FIGO staging system. In particular, it improves predictions of survival probability and could be useful for counseling patients, choosing treatment modalities and schedules, and designing clinical trials. However, before this nomogram is used clinically, it should be externally validated.
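
    The concordance index reported here (0.67 after bootstrap correction) measures how well predicted risk orders survival times. A minimal sketch of Harrell's C for uncensored data, with hypothetical risk scores and survival times, is:

```python
from itertools import combinations

def concordance_index(risk, time):
    """Harrell's C for uncensored data: fraction of comparable pairs in
    which the subject with higher predicted risk dies earlier."""
    conc = usable = 0
    for i, j in combinations(range(len(time)), 2):
        if time[i] == time[j]:
            continue                     # tied times carry no ordering info
        usable += 1
        shorter, longer = (i, j) if time[i] < time[j] else (j, i)
        if risk[shorter] > risk[longer]:
            conc += 1                    # concordant pair
        elif risk[shorter] == risk[longer]:
            conc += 0.5                  # tied risks count half
    return conc / usable

# Hypothetical linear-predictor values and survival times (months).
risk = [2.1, 0.4, 1.5, 0.9, 1.8]
time = [11, 60, 24, 41, 18]
print(concordance_index(risk, time))  # 1.0 -- risks perfectly rank the times
```

    A C of 0.5 means the model ranks no better than chance and 1.0 means perfect ranking; the study's 0.67 sits between. Censored data, as in the actual cohort, requires restricting the comparable pairs, which this sketch omits.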

  13. Nursing Job Rotation Stress Scale development and psychometric evaluation.

    PubMed

    Huang, Shan; Lin, Yu-Hua; Kao, Chia-Chan; Yang, Hsing-Yu; Anne, Ya-Li; Wang, Cheng-Hua

    2016-01-01

    The aim of this study was to develop and assess the reliability and validity of the Nurse Job Rotation Stress Scale (NJRS). A convenience sampling method was utilized to recruit two groups of nurses (n = 150 and 253) from a 2751-bed medical center in southern Taiwan. The NJRS was developed and its psychometric properties evaluated with these samples. Explorative factor analysis revealed that three factors accounted for 74.11% of the explained variance. Confirmatory factor analysis supported the three-factor structure and the construct validity. Cronbach's alpha for the 10-item model was 0.87, and the scale showed high linearity. The NJRS can be considered a reliable and valid scale for the measurement of nurse job rotation stress for nursing management and research purposes. © 2015 Japan Academy of Nursing Science.

  14. Earth as an Extrasolar Planet: Earth Model Validation Using EPOXI Earth Observations

    NASA Astrophysics Data System (ADS)

    Robinson, Tyler D.; Meadows, Victoria S.; Crisp, David; Deming, Drake; A'Hearn, Michael F.; Charbonneau, David; Livengood, Timothy A.; Seager, Sara; Barry, Richard K.; Hearty, Thomas; Hewagama, Tilak; Lisse, Carey M.; McFadden, Lucy A.; Wellnitz, Dennis D.

    2011-06-01

    The EPOXI Discovery Mission of Opportunity reused the Deep Impact flyby spacecraft to obtain spatially and temporally resolved visible photometric and moderate resolution near-infrared (NIR) spectroscopic observations of Earth. These remote observations provide a rigorous validation of whole-disk Earth model simulations used to better understand remotely detectable extrasolar planet characteristics. We have used these data to upgrade, correct, and validate the NASA Astrobiology Institute's Virtual Planetary Laboratory three-dimensional line-by-line, multiple-scattering spectral Earth model. This comprehensive model now includes specular reflectance from the ocean and explicitly includes atmospheric effects such as Rayleigh scattering, gas absorption, and temperature structure. We have used this model to generate spatially and temporally resolved synthetic spectra and images of Earth for the dates of EPOXI observation. Model parameters were varied to yield an optimum fit to the data. We found that a minimum spatial resolution of ∼100 pixels on the visible disk, and four categories of water clouds, which were defined by using observed cloud positions and optical thicknesses, were needed to yield acceptable fits. The validated model provides a simultaneous fit to Earth's lightcurve, absolute brightness, and spectral data, with a root-mean-square (RMS) error of typically less than 3% for the multiwavelength lightcurves and residuals of ∼10% for the absolute brightness throughout the visible and NIR spectral range. We have extended our validation into the mid-infrared by comparing the model to high spectral resolution observations of Earth from the Atmospheric Infrared Sounder, obtaining a fit with residuals of ∼7% and brightness temperature errors of less than 1 K in the atmospheric window. 
For the purpose of understanding the observable characteristics of the distant Earth at arbitrary viewing geometry and observing cadence, our validated forward model can be used to simulate Earth's time-dependent brightness and spectral properties for wavelengths from the far ultraviolet to the far infrared.
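
    The quoted fit quality (RMS residuals of typically less than 3% for the lightcurves) corresponds to a simple percent-RMS statistic. The sketch below computes it for a handful of hypothetical lightcurve points (invented values, not the EPOXI data):

```python
from math import sqrt

def pct_rms_residual(model, observed):
    """Root-mean-square of the relative residuals, expressed in percent."""
    n = len(model)
    return 100.0 * sqrt(sum(((m - o) / o) ** 2
                            for m, o in zip(model, observed)) / n)

# Hypothetical lightcurve samples (relative flux; not the EPOXI data).
obs   = [1.00, 1.04, 0.97, 1.02, 0.99]
model = [1.01, 1.02, 0.98, 1.03, 0.97]
print(f"RMS residual = {pct_rms_residual(model, obs):.1f}%")
```

    The same statistic, computed per wavelength band over the full observation window, is what the validation thresholds in the abstract refer to.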

  15. Earth as an extrasolar planet: Earth model validation using EPOXI earth observations.

    PubMed

    Robinson, Tyler D; Meadows, Victoria S; Crisp, David; Deming, Drake; A'hearn, Michael F; Charbonneau, David; Livengood, Timothy A; Seager, Sara; Barry, Richard K; Hearty, Thomas; Hewagama, Tilak; Lisse, Carey M; McFadden, Lucy A; Wellnitz, Dennis D

    2011-06-01

    The EPOXI Discovery Mission of Opportunity reused the Deep Impact flyby spacecraft to obtain spatially and temporally resolved visible photometric and moderate resolution near-infrared (NIR) spectroscopic observations of Earth. These remote observations provide a rigorous validation of whole-disk Earth model simulations used to better understand remotely detectable extrasolar planet characteristics. We have used these data to upgrade, correct, and validate the NASA Astrobiology Institute's Virtual Planetary Laboratory three-dimensional line-by-line, multiple-scattering spectral Earth model. This comprehensive model now includes specular reflectance from the ocean and explicitly includes atmospheric effects such as Rayleigh scattering, gas absorption, and temperature structure. We have used this model to generate spatially and temporally resolved synthetic spectra and images of Earth for the dates of EPOXI observation. Model parameters were varied to yield an optimum fit to the data. We found that a minimum spatial resolution of ∼100 pixels on the visible disk, and four categories of water clouds, which were defined by using observed cloud positions and optical thicknesses, were needed to yield acceptable fits. The validated model provides a simultaneous fit to Earth's lightcurve, absolute brightness, and spectral data, with a root-mean-square (RMS) error of typically less than 3% for the multiwavelength lightcurves and residuals of ∼10% for the absolute brightness throughout the visible and NIR spectral range. We have extended our validation into the mid-infrared by comparing the model to high spectral resolution observations of Earth from the Atmospheric Infrared Sounder, obtaining a fit with residuals of ∼7% and brightness temperature errors of less than 1 K in the atmospheric window. 
For the purpose of understanding the observable characteristics of the distant Earth at arbitrary viewing geometry and observing cadence, our validated forward model can be used to simulate Earth's time-dependent brightness and spectral properties for wavelengths from the far ultraviolet to the far infrared. Key Words: Astrobiology-Extrasolar terrestrial planets-Habitability-Planetary science-Radiative transfer. Astrobiology 11, 393-408.

  16. Earth as an Extrasolar Planet: Earth Model Validation Using EPOXI Earth Observations

    NASA Technical Reports Server (NTRS)

    Robinson, Tyler D.; Meadows, Victoria S.; Crisp, David; Deming, Drake; A'Hearn, Michael F.; Charbonneau, David; Livengood, Timothy A.; Seager, Sara; Barry, Richard; Hearty, Thomas; hide

    2011-01-01

    The EPOXI Discovery Mission of Opportunity reused the Deep Impact flyby spacecraft to obtain spatially and temporally resolved visible photometric and moderate resolution near-infrared (NIR) spectroscopic observations of Earth. These remote observations provide a rigorous validation of whole-disk Earth model simulations used to better understand remotely detectable extrasolar planet characteristics. We have used these data to upgrade, correct, and validate the NASA Astrobiology Institute's Virtual Planetary Laboratory three-dimensional line-by-line, multiple-scattering spectral Earth model (Tinetti et al., 2006a,b). This comprehensive model now includes specular reflectance from the ocean and explicitly includes atmospheric effects such as Rayleigh scattering, gas absorption, and temperature structure. We have used this model to generate spatially and temporally resolved synthetic spectra and images of Earth for the dates of EPOXI observation. Model parameters were varied to yield an optimum fit to the data. We found that a minimum spatial resolution of approx. 100 pixels on the visible disk, and four categories of water clouds, which were defined using observed cloud positions and optical thicknesses, were needed to yield acceptable fits. The validated model provides a simultaneous fit to the Earth's lightcurve, absolute brightness, and spectral data, with a root-mean-square error of typically less than 3% for the multiwavelength lightcurves, and residuals of approx. 10% for the absolute brightness throughout the visible and NIR spectral range. We extend our validation into the mid-infrared by comparing the model to high spectral resolution observations of Earth from the Atmospheric Infrared Sounder, obtaining a fit with residuals of approx. 7%, and temperature errors of less than 1 K in the atmospheric window. 
For the purpose of understanding the observable characteristics of the distant Earth at arbitrary viewing geometry and observing cadence, our validated forward model can be used to simulate Earth's time-dependent brightness and spectral properties for wavelengths from the far ultraviolet to the far infrared.

  17. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role for the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For such purpose, we have implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for making serious efforts in improving archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. System Engineering Strategy for Distributed Multi-Purpose Simulation Architectures

    NASA Technical Reports Server (NTRS)

    Bhula, Dlilpkumar; Kurt, Cindy Marie; Luty, Roger

    2007-01-01

This paper describes the system engineering approach used to develop distributed multi-purpose simulations. The multi-purpose simulation architecture focuses on user needs, operations, flexibility, cost and maintenance. This approach was used to develop an International Space Station (ISS) simulator, which is called the International Space Station Integrated Simulation (ISIS). The ISIS runs unmodified ISS flight software, system models, and the astronaut command and control interface in an open system design that allows for rapid integration of multiple ISS models. The initial intent of ISIS was to provide a distributed system that allows access to ISS flight software and models for the creation, testing, and validation of crew and ground controller procedures. This capability reduces the cost and scheduling issues associated with utilizing standalone simulators in fixed locations, and facilitates discovering unknowns and errors earlier in the development lifecycle. Since its inception, the flexible architecture of the ISIS has allowed its purpose to evolve to include ground operator system and display training, flight software modification testing, and use as a realistic test bed for Exploration automation technology research and development.

  19. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.

  20. Applying the Health Belief Model to college students' health behavior

    PubMed Central

    Kim, Hak-Seon; Ahn, Joo

    2012-01-01

The purpose of this research was to investigate how university students' nutrition beliefs influence their health behavioral intention. This study used an online survey engine (Qualtrics.com) to collect data from college students. Out of 253 questionnaires collected, 251 (99.2%) were used for the statistical analysis. Confirmatory Factor Analysis (CFA) revealed that the dimensions "Nutrition Confidence," "Susceptibility," "Severity," "Barrier," "Benefit," "Behavioral Intention to Eat Healthy Food," and "Behavioral Intention to do Physical Activity" had construct validity; Cronbach's alpha coefficients and composite reliabilities were tested for item reliability. The results validate that objective nutrition knowledge was a good predictor of college students' nutrition confidence. The results also clearly showed that two direct measures were significant predictors of behavioral intentions, as hypothesized. Perceived benefit of, and perceived barrier to, eating healthy food had significant effects on Behavioral Intention and were valid measures for determining Behavioral Intention. These findings can enhance the extant literature on the universal applicability of the model and serve as useful references for further investigations of the validity of the model within other health care or foodservice settings and for other health behavioral categories. PMID:23346306

  1. Validation of a RANS transition model using a high-order weighted compact nonlinear scheme

    NASA Astrophysics Data System (ADS)

    Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang

    2013-04-01

A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational cost. The model employs local variables only, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations for the purpose of minimizing discretization errors, so as to avoid confusing numerical errors with transition model errors. The high-order package is compared with a second-order TVD method on simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid convergence than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.

  2. Fit for purpose? Validation of a conceptual framework for personal recovery with current mental health consumers.

    PubMed

    Bird, Victoria; Leamy, Mary; Tew, Jerry; Le Boutillier, Clair; Williams, Julie; Slade, Mike

    2014-07-01

    Mental health services in the UK, Australia and other Anglophone countries have moved towards supporting personal recovery as a primary orientation. To provide an empirically grounded foundation to identify and evaluate recovery-oriented interventions, we previously published a conceptual framework of personal recovery based on a systematic review and narrative synthesis of existing models. Our objective was to test the validity and relevance of this framework for people currently using mental health services. Seven focus groups were conducted with 48 current mental health consumers in three NHS trusts across England, as part of the REFOCUS Trial. Consumers were asked about the meaning and their experience of personal recovery. Deductive and inductive thematic analysis applying a constant comparison approach was used to analyse the data. The analysis aimed to explore the validity of the categories within the conceptual framework, and to highlight any areas of difference between the conceptual framework and the themes generated from new data collected from the focus groups. Both the inductive and deductive analysis broadly validated the conceptual framework, with the super-ordinate categories Connectedness, Hope and optimism, Identity, Meaning and purpose, and Empowerment (CHIME) evident in the analysis. Three areas of difference were, however, apparent in the inductive analysis. These included practical support; a greater emphasis on issues around diagnosis and medication; and scepticism surrounding recovery. This study suggests that the conceptual framework of personal recovery provides a defensible theoretical base for clinical and research purposes which is valid for use with current consumers. However, the three areas of difference further stress the individual nature of recovery and the need for an understanding of the population and context under investigation. © The Royal Australian and New Zealand College of Psychiatrists 2014.

  3. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  4. Validation of the Adolescent Concerns Measure (ACM): evidence from exploratory and confirmatory factor analysis.

    PubMed

    Ang, Rebecca P; Chong, Wan Har; Huan, Vivien S; Yeo, Lay See

    2007-01-01

    This article reports the development and initial validation of scores obtained from the Adolescent Concerns Measure (ACM), a scale which assesses concerns of Asian adolescent students. In Study 1, findings from exploratory factor analysis using 619 adolescents suggested a 24-item scale with four correlated factors--Family Concerns (9 items), Peer Concerns (5 items), Personal Concerns (6 items), and School Concerns (4 items). Initial estimates of convergent validity for ACM scores were also reported. The four-factor structure of ACM scores derived from Study 1 was confirmed via confirmatory factor analysis in Study 2 using a two-fold cross-validation procedure with a separate sample of 811 adolescents. Support was found for both the multidimensional and hierarchical models of adolescent concerns using the ACM. Internal consistency and test-retest reliability estimates were adequate for research purposes. ACM scores show promise as a reliable and potentially valid measure of Asian adolescents' concerns.

  5. Risk models for mortality following elective open and endovascular abdominal aortic aneurysm repair: a single institution experience.

    PubMed

    Choke, E; Lee, K; McCarthy, M; Nasim, A; Naylor, A R; Bown, M; Sayers, R

    2012-12-01

To develop and validate an "in house" risk model for predicting perioperative mortality following elective AAA repair and to compare this with other models. Multivariate logistic regression analysis was used to identify risk factors for perioperative mortality from one tertiary institution's prospectively maintained database. Consecutive elective open (564) and endovascular (589) AAA repairs (2000-2010) were split randomly into development (810) and validation (343) data sets. The resultant model was compared to the Glasgow Aneurysm Score (GAS), the Modified Customised Probability Index (m-CPI), the CPI, the Vascular Governance North West (VGNW) model and the Medicare model. Variables associated with perioperative mortality included: increasing age (P = 0.034), myocardial infarct within the last 10 years (P = 0.0008), raised serum creatinine (P = 0.005) and open surgery (P = 0.0001). The areas under the receiver operating characteristic curve (AUC) for predicted probability of 30-day mortality in the development and validation data sets were 0.79 and 0.82 respectively. AUCs for GAS, m-CPI and CPI were poor (0.63, 0.58 and 0.58 respectively), whilst the VGNW and Medicare models were fair (0.73 and 0.79 respectively). In this study, an "in-house" developed and validated risk model had the most accurate discriminative value in predicting perioperative mortality after elective AAA repair. For purposes of comparative audit with case mix adjustments, national models such as the VGNW or Medicare models should be used. Copyright © 2012 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
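The develop-then-validate workflow described above (fit a logistic risk model on a random development split, then check discrimination by AUC on the held-out split) can be sketched in a few lines. This is an illustrative Python sketch on synthetic data only: the predictor distributions, coefficients and outcome mechanism are assumptions, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1153                                # 564 open + 589 endovascular repairs
age = rng.normal(74, 7, n)              # illustrative predictor distributions
recent_mi = rng.binomial(1, 0.15, n)    # MI within the last 10 years
creatinine = rng.normal(100, 25, n)     # serum creatinine, umol/L
open_repair = rng.binomial(1, 0.49, n)  # open vs endovascular surgery

# Synthetic outcome driven by the same risk factors the study reports
logit = -9 + 0.05 * age + 0.9 * recent_mi + 0.01 * creatinine + 1.2 * open_repair
death = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age, recent_mi, creatinine, open_repair])
# Random split into development (810) and validation (343) sets
X_dev, X_val, y_dev, y_val = train_test_split(
    X, death, test_size=343 / 1153, random_state=1, stratify=death)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

auc_dev = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
auc_val = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"development AUC = {auc_dev:.2f}, validation AUC = {auc_val:.2f}")
```

A validation AUC close to the development AUC, as the study reports (0.79 vs 0.82), is the evidence that the model's discrimination generalizes beyond the data it was fitted on.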

  6. Experimental validation of finite element modelling of a modular metal-on-polyethylene total hip replacement.

    PubMed

    Hua, Xijin; Wang, Ling; Al-Hajjar, Mazen; Jin, Zhongmin; Wilcox, Ruth K; Fisher, John

    2014-07-01

    Finite element models are becoming increasingly useful tools to conduct parametric analysis, design optimisation and pre-clinical testing for hip joint replacements. However, the verification of the finite element model is critically important. The purposes of this study were to develop a three-dimensional anatomic finite element model for a modular metal-on-polyethylene total hip replacement for predicting its contact mechanics and to conduct experimental validation for a simple finite element model which was simplified from the anatomic finite element model. An anatomic modular metal-on-polyethylene total hip replacement model (anatomic model) was first developed and then simplified with reasonable accuracy to a simple modular total hip replacement model (simplified model) for validation. The contact areas on the articulating surface of three polyethylene liners of modular metal-on-polyethylene total hip replacement bearings with different clearances were measured experimentally in the Leeds ProSim hip joint simulator under a series of loading conditions and different cup inclination angles. The contact areas predicted from the simplified model were then compared with that measured experimentally under the same conditions. The results showed that the simplification made for the anatomic model did not change the predictions of contact mechanics of the modular metal-on-polyethylene total hip replacement substantially (less than 12% for contact stresses and contact areas). Good agreements of contact areas between the finite element predictions from the simplified model and experimental measurements were obtained, with maximum difference of 14% across all conditions considered. This indicated that the simplification and assumptions made in the anatomic model were reasonable and the finite element predictions from the simplified model were valid. © IMechE 2014.

  7. FPGA implementation of predictive degradation model for engine oil lifetime

    NASA Astrophysics Data System (ADS)

    Idros, M. F. M.; Razak, A. H. A.; Junid, S. A. M. Al; Suliman, S. I.; Halim, A. K.

    2018-03-01

This paper presents the implementation of a linear regression model for degradation prediction at Register Transfer Level (RTL) using Quartus II. A stationary model was identified in the time-series degradation trend of the engine oil in a vehicle. For the RTL implementation, the degradation model is written in Verilog HDL and the input data are sampled at set times. A clock divider was designed to support the timing sequence of the input data. For every five data points, a regression analysis is applied to determine the slope variation and compute the prediction. Only negative slope values are considered for prediction, which reduces the number of logic gates required. The least squares method is applied to obtain the best linear model based on the mean values of the time-series data. The coded algorithm has been implemented on an FPGA for validation purposes. The result shows the predicted time to change the engine oil.
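The core arithmetic described above, a least-squares slope over every five consecutive readings with only negative slopes extrapolated to a change threshold, can be sketched as follows. This is an illustrative Python sketch, not the paper's Verilog; the quality index, threshold and function names are assumptions.

```python
import numpy as np

def window_slope(y):
    """Least-squares slope of equally spaced samples via mean-centering."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    t_c, y_c = t - t.mean(), y - y.mean()
    return float(t_c @ y_c / (t_c @ t_c))

def predict_change_time(quality, threshold, now):
    """Extrapolate the latest five-point trend down to the degradation threshold."""
    slope = window_slope(quality[-5:])
    if slope >= 0:  # only negative (degrading) slopes trigger a prediction
        return None
    # number of sample periods remaining until the threshold is crossed
    return now + (threshold - quality[-1]) / slope

quality = [1.00, 0.97, 0.95, 0.92, 0.90]  # synthetic oil-quality index readings
eta = predict_change_time(quality, threshold=0.5, now=4)
print(round(eta, 2))  # -> 20.0: threshold reached at sample index 20 on this trend
```

Mean-centering both axes is what makes the slope a single dot-product ratio, which is also why the hardware version needs the mean values of the five-sample window.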

  8. Experimental validation of a model for diffusion-controlled absorption of organic compounds in the trachea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerde, P.; Muggenburg, B.A.; Thornton-Manning, J.R.

    1995-12-01

Most chemically induced lung cancer originates in the epithelial cells in the airways. Common conceptions are that chemicals deposited on the airway surface are rapidly absorbed through mucous membranes, limited primarily by the rate of blood perfusion in the mucosa. It is also commonly thought that for chemicals to induce toxicity at the site of entry, they must be either rapidly reactive, readily metabolizable, or especially toxic to the tissues at the site of entry. For highly lipophilic toxicants, there is a third option. Our mathematical model predicts that as lipophilicity increases, chemicals partition more readily into the cellular lipid membranes and diffuse more slowly through the tissues. Therefore, absorption of very lipophilic compounds will be almost entirely limited by the rate of diffusion through the epithelium rather than by perfusion of the capillary bed in the subepithelium. We have reported on a preliminary model for absorption through mucous membranes of any substance with a lipid/aqueous partition coefficient larger than one. The purpose of this work was to experimentally validate the model in Beagle dogs. This validated model of toxicant absorption in the airway mucosa will improve risk assessment of inhaled toxicants.

  9. Modelling Teacher Satisfaction: Findings from 892 Teaching Staff at 71 Schools.

    ERIC Educational Resources Information Center

    Dinham, Steve; Scott, Catherine

    This survey was undertaken to build upon and validate understanding of teacher satisfaction and dissatisfaction, orientation to teaching, teachers' values, and teacher health. The purpose of this endeavor was also to develop an instrument suitable for identifying and quantifying the sources and relative strength of factors contributing to teacher…

  10. The Development of Chorus Motivation Scale (CMS) for Prospective Music Teacher

    ERIC Educational Resources Information Center

    Ozgul, Ilhan; Yigit, Nalan

    2017-01-01

    The purpose of this study was to develop a Chorus Motivation Scale (CMS) that is tested in terms of reliability and construct validity by determining the student perceptions of effective motivation strategies in Chorus training in Turkish Music Teacher Training Model. In order to develop a Chorus Motivation Scale, Questionnaire-Effective…

  11. Assessing Acquiescence in Surveys Using Positively and Negatively Worded Questions

    ERIC Educational Resources Information Center

    Hutton, Amy Christine

    2017-01-01

    The purpose of this study was to assess the impact of acquiescence on both positively and negatively worded questions, both when unidimensionality was assumed and when it was not. To accomplish this, undergraduate student responses to a previously validated survey of student engagement were used to compare several models of acquiescence, using a…

  12. "ELIP-MARC" Activities via TPS of Cooperative Learning to Improve Student's Mathematical Reasoning

    ERIC Educational Resources Information Center

    Ulya, Wisulah Titah; Purwanto; Parta, I. Nengah; Mulyati, Sri

    2017-01-01

    The purpose of this study is to describe and generate interaction model of learning through "Elip-Marc" activity via "TPS" cooperative learning in order to improve student's mathematical reasoning who have valid, practical and effective criteria. "Elip-Marc" is an acronym of eliciting, inserting, pressing,…

  13. Multi-Level Alignment Model: Transforming Face-to-Face into E-Instructional Programs

    ERIC Educational Resources Information Center

    Byers, Celina

    2005-01-01

    Purpose--To suggest to others in the field an approach equally valid for transforming existing courses into online courses and for creating new online courses. Design/methodology/approach--Using the literature for substantiation, this article discusses the current rapid change within organizations, the role of technology in that change, and the…

  14. Athletes' Perceptions of Coaching Competency Scale II-High School Teams

    ERIC Educational Resources Information Center

    Myers, Nicholas D.; Chase, Melissa A.; Beauchamp, Mark R.; Jackson, Ben

    2010-01-01

    The purpose of this validity study was to improve measurement of athletes' evaluations of their head coach's coaching competency, an important multidimensional construct in models of coaching effectiveness. A revised version of the Coaching Competency Scale (CCS) was developed for athletes of high school teams (APCCS II-HST). Data were collected…

  15. Measures of Instruction for Creative Engagement: Making Metacognition, Modeling and Creative Thinking Visible

    ERIC Educational Resources Information Center

    Pitts, Christine; Anderson, Ross; Haney, Michele

    2018-01-01

    The purpose of the current study was to estimate reliability, internal consistency and construct validity of the Measure of Instruction for Creative Engagement (MICE) instrument. The MICE uses an iterative process of evidence collection and scoring through teacher observations to determine instructional domain ratings and overall scores. The…

  16. Three-dimensional localized coherent structures of surface turbulence: Model validation with experiments and further computations.

    PubMed

    Demekhin, E A; Kalaidin, E N; Kalliadasis, S; Vlaskin, S Yu

    2010-09-01

We validate experimentally the Kapitsa-Shkadov model utilized in the theoretical studies by Demekhin [Phys. Fluids 19, 114103 (2007), doi:10.1063/1.2793148; Phys. Fluids 19, 114104 (2007), doi:10.1063/1.2793149] of surface turbulence on a thin liquid film flowing down a vertical planar wall. For water at 15 °C, surface turbulence typically occurs at an inlet Reynolds number of ≃40. Of particular interest is to assess experimentally the predictions of the model for three-dimensional nonlinear localized coherent structures, which represent elementary processes of surface turbulence. For this purpose we devise simple experiments to investigate the instabilities and transitions leading to such structures. Our experimental results are in good agreement with the theoretical predictions of the model. We also perform time-dependent computations for the formation of coherent structures and their interaction with localized structures of smaller amplitude on the surface of the film.

  17. Assessment of the validity of the CUDIT-R in a subpopulation of cannabis users.

    PubMed

    Loflin, Mallory; Babson, Kimberly; Browne, Kendall; Bonn-Miller, Marcel

    2018-01-01

The Cannabis Use Disorders Identification Test-Revised (CUDIT-R) is an 8-item measure used to screen for cannabis use disorders (CUD). Despite widespread use of the tool, assessments of the CUDIT-R's validity in subpopulations are limited. The current study tested the structural validity and internal consistency of one of the most widely used screening measures for CUD (i.e., the CUDIT-R) among a sample of military veterans who use cannabis for medicinal purposes. The present study used confirmatory factor analysis (CFA) to test the internal consistency and validity of the single-factor structure of the original screener among a sample of veterans who use cannabis for medicinal purposes (n = 90, 90% male; M age = 55.31, SD = 15.37). Measures included demographics and the CUDIT-R, obtained from the baseline assessment of an ongoing longitudinal study. The CFA revealed that the single-factor model previously validated in recreational-use samples accounted for only 38.34% of the total variance in responses on the CUDIT-R (χ2 = 66.09, df = 28, p < 0.05; RMSEA = 0.06) and demonstrated acceptable but modest internal consistency (Cronbach's α = 0.73). More psychometric work is needed to determine the reliability and validity of using the CUDIT-R to screen for CUD among military veterans who use medicinal cannabis and other subpopulations of cannabis users.
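The internal-consistency figure reported above (Cronbach's α = 0.73) comes from a standard formula over the item covariance structure. A minimal Python sketch, computing α for a synthetic 90-respondent, 8-item response matrix (the data and the shared-factor generating model are assumptions for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; items is an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of scale totals
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Synthetic responses: 90 respondents, 8 items sharing one latent severity factor
rng = np.random.default_rng(42)
trait = rng.normal(size=(90, 1))
items = trait + 0.8 * rng.normal(size=(90, 8))
print(round(cronbach_alpha(items), 2))
```

Because the simulated items share a single latent factor, α comes out high; weakening the shared-factor loading pushes α down toward the "acceptable but modest" range the study reports.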

  18. An Evaluation of the FLAG Friction Model frictmultiscale2 using the Experiments of Juanicotena and Szarynski

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zocher, Marvin Anthony; Hammerberg, James Edward

The experiments of Juanicotena and Szarynski, namely T101, T102, and T105, are modeled to gain a better understanding of the FLAG friction model frictmultiscale2. This exercise was conducted as a first step toward model validation. It is shown that with the friction model included in the numerical analysis, the results of Juanicotena and Szarynski are predicted reasonably well. Without the friction model, simulation results do not match the experimental data nearly as well. Suggestions for follow-on work are included.

  19. Validation of mathematical model for CZ process using small-scale laboratory crystal growth furnace

    NASA Astrophysics Data System (ADS)

    Bergfelds, Kristaps; Sabanskis, Andrejs; Virbulis, Janis

    2018-05-01

The present material is focused on the modelling of a small-scale laboratory NaCl-RbCl crystal growth furnace. First steps towards fully transient simulations are taken in the form of stationary simulations that deal with the optimization of material properties to match the model to experimental conditions. For this purpose, simulation software primarily used for modelling the industrial-scale silicon crystal growth process was successfully applied. Finally, transient simulations of the crystal growth are presented, giving sufficient agreement with experimental results.

  20. Photons Revisited

    NASA Astrophysics Data System (ADS)

    Batic, Matej; Begalli, Marcia; Han, Min Cheol; Hauf, Steffen; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Han Sung; Grazia Pia, Maria; Saracco, Paolo; Weidenspointner, Georg

    2014-06-01

    A systematic review of methods and data for the Monte Carlo simulation of photon interactions is in progress: it concerns a wide set of theoretical modeling approaches and data libraries available for this purpose. Models and data libraries are assessed quantitatively with respect to an extensive collection of experimental measurements documented in the literature to determine their accuracy; this evaluation exploits rigorous statistical analysis methods. The computational performance of the associated modeling algorithms is evaluated as well. An overview of the assessment of photon interaction models and results of the experimental validation are presented.

  1. Model testing for reliability and validity of the Outcome Expectations for Exercise Scale.

    PubMed

    Resnick, B; Zimmerman, S; Orwig, D; Furstenberg, A L; Magaziner, J

    2001-01-01

Development of a reliable and valid measure of outcome expectations for exercise appropriate for older adults will help establish the relationship between outcome expectations and exercise. Once established, this measure can be used to facilitate the development of interventions to strengthen outcome expectations and improve adherence to regular exercise in older adults. Building on initial psychometrics of the Outcome Expectation for Exercise (OEE) Scale, the purpose of the current study was to use structural equation modeling to provide additional support for the reliability and validity of this measure. The OEE scale is a 9-item measure specifically focusing on the perceived consequences of exercise for older adults. The OEE scale was given to 191 residents in a continuing care retirement community. The mean age of the participants was 85 ± 6.1 and the majority were female (76%), White (99%), and unmarried (76%). Using structural equation modeling, reliability was based on R2 values, and validity was based on a confirmatory factor analysis and path coefficients. There was continued evidence for reliability of the OEE based on R2 values ranging from .42 to .77, and validity with path coefficients ranging from .69 to .87, and evidence of model fit (χ2 = 69, df = 27, p < .05; NFI = .98; RMSEA = .07). The evidence of reliability and validity of this measure has important implications for clinical work and research. The OEE scale can be used to identify older adults who have low outcome expectations for exercise, and interventions can then be implemented to strengthen these expectations and thereby improve exercise behavior.

  2. Pitfalls in Prediction Modeling for Normal Tissue Toxicity in Radiation Therapy: An Illustration With the Individual Radiation Sensitivity and Mammary Carcinoma Risk Factor Investigation Cohorts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Mbah, Chamberlain; Thierens, Hubert

Purpose: To identify the main causes underlying the failure of prediction models for radiation therapy toxicity to replicate. Methods and Materials: Data were used from two German cohorts, Individual Radiation Sensitivity (ISE) (n=418) and Mammary Carcinoma Risk Factor Investigation (MARIE) (n=409), of breast cancer patients with similar characteristics and radiation therapy treatments. The toxicity endpoint chosen was telangiectasia. The LASSO (least absolute shrinkage and selection operator) logistic regression method was used to build a predictive model for a dichotomized endpoint (Radiation Therapy Oncology Group/European Organization for the Research and Treatment of Cancer score 0, 1, or ≥2). Internal areas under the receiver operating characteristic curve (inAUCs) were calculated by a naïve approach whereby the training data (ISE) were also used for calculating the AUC. Cross-validation was also applied to calculate the AUC within the same cohort, a second type of inAUC. Internal AUCs from cross-validation were calculated within ISE and MARIE separately. Models trained on one dataset (ISE) were applied to a test dataset (MARIE) and AUCs calculated (exAUCs). Results: Internal AUCs from the naïve approach were generally larger than inAUCs from cross-validation owing to overfitting the training data. Internal AUCs from cross-validation were also generally larger than the exAUCs, reflecting heterogeneity in the predictors between cohorts. The best models with the largest inAUCs from cross-validation within both cohorts had a number of common predictors: hypertension, normalized total boost, and presence of estrogen receptors. Surprisingly, the effect (coefficient in the prediction model) of hypertension on telangiectasia incidence was positive in ISE and negative in MARIE. Other predictors were also not common between the 2 cohorts, illustrating that overcoming overfitting does not solve the problem of replication failure of prediction models completely. Conclusions: Overfitting and cohort heterogeneity are the 2 main causes of replication failure of prediction models across cohorts. Cross-validation and similar techniques (eg, bootstrapping) cope with overfitting, but the development of validated predictive models for radiation therapy toxicity requires strategies that deal with cohort heterogeneity.
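The abstract's contrast between a naive inAUC (evaluated on the training data itself) and a cross-validated inAUC can be demonstrated in a few lines. This is an illustrative Python sketch with an L1-penalized (LASSO-style) logistic regression on synthetic high-dimensional data; the cohort size, number of candidate predictors, and penalty strength are assumptions, not the study's values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
n, p = 418, 50                       # cohort size like ISE, many candidate predictors
X = rng.normal(size=(n, p))
# outcome driven by only 3 of the 50 candidate predictors
logit = X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

# Naive inAUC: fit and evaluate on the same data -> optimistic due to overfitting
naive_auc = roc_auc_score(y, lasso.fit(X, y).decision_function(X))
# Cross-validated inAUC: each prediction comes from a fold not trained on it
cv_scores = cross_val_predict(lasso, X, y, cv=5, method="decision_function")
cv_auc = roc_auc_score(y, cv_scores)
print(f"naive inAUC = {naive_auc:.2f}, cross-validated inAUC = {cv_auc:.2f}")
```

The gap between the two numbers is the overfitting the abstract describes; the further drop from cross-validated inAUC to exAUC on a second cohort would reflect cohort heterogeneity, which no resampling scheme within one cohort can detect.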

  3. Performance Comparison of NAMI DANCE and FLOW-3D® Models in Tsunami Propagation, Inundation and Currents using NTHMP Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioglu Sogut, Deniz; Yalciner, Ahmet Cevdet

    2018-06-01

Field observations provide valuable data regarding nearshore tsunami impact, yet only in inundation areas where tsunami waves have already flooded. Therefore, tsunami modeling is essential to understand tsunami behavior and prepare for tsunami inundation. It is necessary that all numerical models used in tsunami emergency planning be subject to benchmark tests for validation and verification. This study focuses on two numerical codes, NAMI DANCE and FLOW-3D®, for validation and performance comparison. NAMI DANCE is an in-house tsunami numerical model developed by the Ocean Engineering Research Center of Middle East Technical University, Turkey and the Laboratory of Special Research Bureau for Automation of Marine Research, Russia. FLOW-3D® is a general purpose computational fluid dynamics software, which was developed by scientists who pioneered the design of the Volume-of-Fluid technique. The codes are validated and their performances are compared via analytical, experimental and field benchmark problems, which are documented in the "Proceedings and Results of the 2011 National Tsunami Hazard Mitigation Program (NTHMP) Model Benchmarking Workshop" and the "Proceedings and Results of the NTHMP 2015 Tsunami Current Modeling Workshop". The variations between the numerical solutions of these two models are evaluated through statistical error analysis.

  4. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test.

    PubMed

    Ham, Joo-Ho; Park, Hun-Young; Kim, Youn-Ho; Bae, Sang-Kon; Ko, Byung-Hoon; Nam, Sang-Seok

    2017-09-30

    The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20-59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model estimating HRLT and HRVT from HRT was developed with 70% of the data (men: 79, women: 76), selected through 7:3 randomization with a Bernoulli trial. The validity of the regression model was then examined with the remaining 30% of the data (men: 33, women: 32). Based on the regression coefficients, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation in the validity test was 11 bpm, which is similar to that of the developed model. These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. ©2017 The Korean Society for Exercise Nutrition
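The development-then-validation workflow above can be sketched with ordinary least squares. The data below are synthetic stand-ins (the linear relation, noise level, and variable names are our assumptions, not the study's measurements); the sketch only shows how the adjusted R² on the 70% development set and the standard error of estimate on the 30% validation set are computed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: HRT (bpm) and HRLT linked by a linear relation
hrt = rng.uniform(120, 180, size=220)
hrlt = 0.9 * hrt + 10.0 + rng.normal(0.0, 11.0, size=220)

# ~70/30 split, echoing the paper's randomization scheme
idx = rng.permutation(220)
train, test = idx[:154], idx[154:]

# Fit HRLT = b0 + b1 * HRT by ordinary least squares on the 70% set
A = np.column_stack([np.ones(train.size), hrt[train]])
b0, b1 = np.linalg.lstsq(A, hrlt[train], rcond=None)[0]

# Adjusted R^2 on the development set (n samples, p = 1 predictor)
resid = hrlt[train] - (b0 + b1 * hrt[train])
ss_res = float(resid @ resid)
ss_tot = float(((hrlt[train] - hrlt[train].mean()) ** 2).sum())
n, p = train.size, 1
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Standard error of estimate on the held-out 30% validation set
see = float(np.sqrt(np.mean((hrlt[test] - (b0 + b1 * hrt[test])) ** 2)))
print(f"adjusted R^2 = {adj_r2:.2f}, validation SEE = {see:.1f} bpm")
```

A validation SEE close to the development-set residual error, as the paper reports, is the signal that the regression generalizes to unseen subjects.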

  5. Validation of a physically based catchment model for application in post-closure radiological safety assessments of deep geological repositories for solid radioactive wastes.

    PubMed

    Thorne, M C; Degnan, P; Ewen, J; Parkin, G

    2000-12-01

    The physically based river catchment modelling system SHETRAN incorporates components representing water flow, sediment transport and radionuclide transport both in solution and bound to sediments. The system has been applied to simulate hypothetical future catchments in the context of post-closure radiological safety assessments of a potential site for a deep geological disposal facility for intermediate and certain low-level radioactive wastes at Sellafield, west Cumbria. In order to have confidence in the application of SHETRAN for this purpose, various blind validation studies have been undertaken. In earlier studies, the validation was undertaken against uncertainty bounds in model output predictions set by the modelling team on the basis of how well they expected the model to perform. However, validation can also be carried out with bounds set on the basis of how well the model is required to perform in order to constitute a useful assessment tool. Herein, such an assessment-based validation exercise is reported. This exercise related to a field plot experiment conducted at Calder Hollow, west Cumbria, in which the migration of strontium and lanthanum in subsurface Quaternary deposits was studied on a length scale of a few metres. Blind predictions of tracer migration were compared with experimental results using bounds set by a small group of assessment experts independent of the modelling team. Overall, the SHETRAN system performed well, failing only two out of seven of the imposed tests. Furthermore, of the five tests that were not failed, three were positively passed even when a pessimistic view was taken as to how measurement errors should be taken into account. It is concluded that the SHETRAN system, which is still being developed further, is a powerful tool for application in post-closure radiological safety assessments.

  6. Development and Analysis of Desiccant Enhanced Evaporative Air Conditioner Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozubal, E.; Woods, J.; Judkoff, R.

    2012-04-01

    This report documents the design of a desiccant enhanced evaporative air conditioner (DEVAP AC) prototype and the testing to prove its performance. Previous numerical modeling and building energy simulations indicate a DEVAP AC can save significant energy compared to a conventional vapor compression AC (Kozubal et al. 2011). The purposes of this research were to build DEVAP prototypes, test them to validate the numerical model, and identify potential commercialization barriers.

  7. Energy-Based Design of Reconfigurable Micro Air Vehicle (MAV) Flight Structures

    DTIC Science & Technology

    2014-02-01

    plate bending element derived herein. The purpose of the six degree-of-freedom model was to accommodate in-plane and out-of-plane aerodynamic loading...combinations. The FE model was validated and the MATLAB implementation was verified with classical beam and plate solutions. A compliance minimization...formulation was not found among the finite element literature. Therefore a formulation of such a bending element was derived using classical Kirchhoff plate

  8. An Analysis of Occupational Requirements Relative to the Employment of Severely Handicapped Individuals. Working Paper 87-3. COMPETE: Community-Based Model for Public-School Exit and Transition to Employment.

    ERIC Educational Resources Information Center

    Easterday, Joseph R.

    This paper is a product of Project COMPETE, a service demonstration project undertaken for the purpose of developing and validating a model and training sequence to improve transition services for moderately, severely, and profoundly retarded youth. The paper reports on a study which examined the communication skills, critical academic skills, and…

  9. An Analysis of Employer Incentive Rankings Relative to the Employment of Retarded Persons. Working Paper 85-6. COMPETE: Community-Based Model for Public-School Exit and Transition to Employment.

    ERIC Educational Resources Information Center

    Sitlington, Patricia L.; Easterday, Joseph R.

    The purpose of Project COMPETE is to use previous research and exemplary practices to develop and validate a model and training sequence to assist retarded youth to make the transition from school to employment in the most competitive environment possible. The study reported in this project working paper sought to identify potential factors that…

  10. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    PubMed

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
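Of the rational division methods named above, Kennard-Stone is the simplest to sketch: it repeatedly adds the candidate farthest from its nearest already-selected sample, so the training set covers the descriptor space. The implementation below is our own minimal numpy version, not the authors' code, and the descriptor matrix is hypothetical.

```python
import numpy as np

def kennard_stone(X, n_train):
    """Indices of n_train rows of X chosen by the Kennard-Stone max-min rule."""
    X = np.asarray(X, float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    # seed with the two most distant samples
    i, j = np.unravel_index(int(np.argmax(dist)), dist.shape)
    selected = [int(i), int(j)]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_train:
        # next sample: the one farthest from its nearest already-selected neighbour
        nearest = dist[np.ix_(remaining, selected)].min(axis=1)
        pick = remaining[int(np.argmax(nearest))]
        selected.append(pick)
        remaining.remove(pick)
    return selected

# Hypothetical descriptor matrix: 25 compounds, 3 descriptors
rng = np.random.default_rng(2)
X = rng.normal(size=(25, 3))
train_idx = kennard_stone(X, 20)                    # 80% training set
test_idx = [k for k in range(25) if k not in train_idx]
```

Because the held-out compounds end up interpolated within the training set's coverage, test-set statistics look better under this split than under random division, which is exactly the optimism the study cautions about.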

  11. Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion

    NASA Astrophysics Data System (ADS)

    Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.

    2017-09-01

    Assessment of discriminant validity is a must in any research that involves latent variables for the prevention of multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method has emerged for establishing discriminant validity: the heterotrait-monotrait (HTMT) ratio of correlations. Therefore, this article presents the results of discriminant validity assessment using both methods. Data from a previous study involving 429 respondents were used for empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, convergent, divergent and discriminant validity were established and admissible under the Fornell and Larcker criterion. However, discriminant validity was an issue when employing the HTMT criterion. This shows that the latent variables under study faced the issue of multicollinearity and should be looked into in further detail. This also implies that the HTMT criterion is a stringent measure that can detect possible discriminant-validity problems among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
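The HTMT ratio can be computed directly from the item correlation matrix: the mean heterotrait (between-construct) item correlation is divided by the geometric mean of the monotrait (within-construct) item correlations. The sketch below uses simulated item scores for two constructs with three indicators each; the loadings, sample size, and cutoff convention (~0.85-0.90) are illustrative assumptions, not the cited study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated item scores: two distinct latent constructs, three indicators each
n = 429                                   # same sample size as the cited study
f1, f2 = rng.normal(size=n), rng.normal(size=n)
items_a = np.stack([0.8 * f1 + 0.4 * rng.normal(size=n) for _ in range(3)], axis=1)
items_b = np.stack([0.8 * f2 + 0.4 * rng.normal(size=n) for _ in range(3)], axis=1)
R = np.corrcoef(np.hstack([items_a, items_b]), rowvar=False)

A, B = [0, 1, 2], [3, 4, 5]               # item indices per construct

def mean_offdiag(M):
    # average within-construct (monotrait) correlation
    mask = ~np.eye(len(M), dtype=bool)
    return M[mask].mean()

# HTMT: mean heterotrait correlation over the geometric mean of the
# monotrait correlations; values below ~0.85-0.90 support discriminant validity
hetero = R[np.ix_(A, B)].mean()
htmt = hetero / np.sqrt(mean_offdiag(R[np.ix_(A, A)]) * mean_offdiag(R[np.ix_(B, B)]))
print(f"HTMT = {htmt:.3f}")
```

With genuinely distinct constructs, as simulated here, HTMT stays far below the cutoff; overlapping constructs push the heterotrait correlations toward the monotrait ones and HTMT toward 1, which is the failure mode the article reports.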

  12. Survey of Verification and Validation Techniques for Small Satellite Software Development

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    2015-01-01

    The purpose of this paper is to provide an overview of the current trends and practices in small-satellite software verification and validation. This document is not intended to promote a specific software assurance method. Rather, it seeks to present an unbiased survey of software assurance methods used to verify and validate small satellite software and to make mention of the benefits and value of each approach. These methods include simulation and testing, verification and validation with model-based design, formal methods, and fault-tolerant software design with run-time monitoring. Although the literature reveals that simulation and testing has by far the longest legacy, model-based design methods are proving to be useful for software verification and validation. Some work in formal methods, though not widely used for any satellites, may offer new ways to improve small satellite software verification and validation. These methods need to be further advanced to deal with the state explosion problem and to make them more usable by small-satellite software engineers to be regularly applied to software verification. Last, it is explained how run-time monitoring, combined with fault-tolerant software design methods, provides an important means to detect and correct software errors that escape the verification process or those errors that are produced after launch through the effects of ionizing radiation.

  13. Design and Validation of an Infrared Badal Optometer for Laser Speckle (IBOLS)

    PubMed Central

    Teel, Danielle F. W.; Copland, R. James; Jacobs, Robert J.; Wells, Thad; Neal, Daniel R.; Thibos, Larry N.

    2009-01-01

    Purpose To validate the design of an infrared wavefront aberrometer with a Badal optometer employing the principle of laser speckle generated by a spinning disk and infrared light. The instrument was designed for subjective meridional refraction in infrared light by human patients. Methods Validation employed a model eye with known refractive error determined with an objective infrared wavefront aberrometer. The model eye was used to produce a speckle pattern on an artificial retina with controlled amounts of ametropia introduced with auxiliary ophthalmic lenses. A human observer performed the psychophysical task of observing the speckle pattern (with the aid of a video camera sensitive to infrared radiation) formed on the artificial retina. Refraction was performed by adjusting the vergence of incident light with the Badal optometer to nullify the motion of laser speckle. Validation of the method was performed for different levels of spherical ametropia and for various configurations of an astigmatic model eye. Results Subjective measurements of meridional refractive error over the range −4D to + 4D agreed with astigmatic refractive errors predicted by the power of the model eye in the meridian of motion of the spinning disk. Conclusions Use of a Badal optometer to control laser speckle is a valid method for determining subjective refractive error at infrared wavelengths. Such an instrument will be useful for comparing objective measures of refractive error obtained for the human eye with autorefractors and wavefront aberrometers that employ infrared radiation. PMID:18772719

  14. FDA 2011 process validation guidance: lifecycle compliance model.

    PubMed

    Campbell, Cliff

    2014-01-01

    This article has been written as a contribution to the industry's efforts in migrating from a document-driven to a data-driven compliance mindset. A combination of target product profile, control engineering, and general sum principle techniques is presented as the basis of a simple but scalable lifecycle compliance model in support of modernized process validation. Unit operations and significant variables occupy pole position within the model, documentation requirements being treated as a derivative or consequence of the modeling process. The quality system is repositioned as a subordinate of system quality, this being defined as the integral of related "system qualities". The article represents a structured interpretation of the U.S. Food and Drug Administration's 2011 Guidance for Industry on Process Validation and is based on the author's educational background and his manufacturing/consulting experience in the validation field. The U.S. Food and Drug Administration's Guidance for Industry on Process Validation (2011) provides a wide-ranging and rigorous outline of compliant drug manufacturing requirements relative to its 20(th) century predecessor (1987). Its declared focus is patient safety, and it identifies three inter-related (and obvious) stages of the compliance lifecycle. Firstly, processes must be designed, both from a technical and quality perspective. Secondly, processes must be qualified, providing evidence that the manufacturing facility is fully "roadworthy" and fit for its intended purpose. Thirdly, processes must be verified, meaning that commercial batches must be monitored to ensure that processes remain in a state of control throughout their lifetime.

  15. Design of experiments in medical physics: Application to the AAA beam model validation.

    PubMed

    Dufreneix, S; Legrand, C; Di Bartolo, C; Bremaud, M; Mesgouez, J; Tiplica, T; Autret, D

    2017-09-01

    The purpose of this study is to evaluate the usefulness of the design of experiments in the analysis of multiparametric problems related to the quality assurance in radiotherapy. The main motivation is to use this statistical method to optimize the quality assurance processes in the validation of beam models. Considering the Varian Eclipse system, eight parameters with several levels were selected: energy, MLC, depth, X, Y1, and Y2 jaw dimensions, wedge and wedge jaw. A Taguchi table was used to define 72 validation tests. Measurements were conducted in water using a CC04 on a TrueBeam STx, a TrueBeam Tx, a Trilogy and a 2300IX accelerator matched by the vendor. Dose was computed using the AAA algorithm. The same raw data were used for all accelerators during the beam modelling. The mean difference between computed and measured doses was 0.1±0.5% for all beams and all accelerators with a maximum difference of 2.4% (under the 3% tolerance level). For all beams, the measured doses were within 0.6% for all accelerators. The energy was found to be an influencing parameter but the deviations observed were smaller than 1% and not considered clinically significant. Designs of experiments can help define the optimal measurement set to validate a beam model. The proposed method can be used to identify the prognostic factors of dose accuracy. The beam models were validated for the 4 accelerators, which were found dosimetrically equivalent even though the accelerator characteristics differ. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
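In a Taguchi-style analysis, the influence of a factor such as energy is read off as a main effect: the mean response over all runs at each factor level. The table below is a fabricated stand-in for the study's 72-test results (one factor shown, an effect injected for illustration), intended only to show the main-effects computation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical results table: 72 validation tests; only the "energy" factor
# is shown, and the measured-vs-computed dose difference (%) is simulated
energies = np.tile([6, 10, 15], 24)          # 3 levels x 24 runs each
dose_diff = rng.normal(0.1, 0.5, size=72)
dose_diff[energies == 15] += 0.4             # injected effect, for illustration only

# Main effect of a level = mean response over all runs at that level
effects = {level: dose_diff[energies == level].mean() for level in (6, 10, 15)}
for level, effect in effects.items():
    print(f"{level} MV: mean deviation {effect:+.2f}%")
```

Because the Taguchi table balances the other factors across energy levels, a level-to-level shift in these means can be attributed to energy rather than to the jaws, wedge, or depth settings.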

  16. Analytical procedure validation and the quality by design paradigm.

    PubMed

    Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno

    2015-01-01

    Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure developments to follow a similar approach. While development and optimization of analytical procedure following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims at showing that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies have also their role to play: such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, and from it, control strategy can be set.

  17. Testing for the validity of purchasing power parity theory both in the long-run and the short-run for ASEAN-5

    NASA Astrophysics Data System (ADS)

    Choji, Niri Martha; Sek, Siok Kun

    2017-11-01

    The purchasing power parity (PPP) theory says that the exchange rate between two nations ought to be equal to the ratio of the aggregate price levels between the two nations. For more than a decade, there has been substantial interest in testing the validity of PPP empirically. This paper performs a series of tests to see if PPP is valid for the ASEAN-5 nations for the period 2000-2016 using monthly data. For this purpose, we conducted four different tests of stationarity, two cointegration tests (Pedroni and Westerlund), and also estimated a VAR model. The stationarity (unit root) tests reveal that the variables are not stationary in levels but are stationary at first difference. The cointegration test results did not reject the H0 of no cointegration, implying the absence of a long-run association among the variables, and the results of the VAR model did not reveal a strong short-run relationship. Based on the data, we therefore conclude that PPP was not valid in either the long or the short run for ASEAN-5 during 2000-2016.
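The "not stationary in levels but stationary at first difference" finding is the classic Dickey-Fuller pattern. As a hedged sketch (a hand-rolled, unaugmented Dickey-Fuller regression on simulated series, not the panel tests the paper actually uses), the t-statistic on the lagged level distinguishes a random walk from its difference:

```python
import numpy as np

rng = np.random.default_rng(6)

def df_tstat(y):
    """t-statistic on rho in the Dickey-Fuller regression
    dy_t = alpha + rho * y_{t-1} + e_t (no lag augmentation)."""
    dy, ylag = np.diff(y), y[:-1]
    A = np.column_stack([np.ones(ylag.size), ylag])
    coef, *_ = np.linalg.lstsq(A, dy, rcond=None)
    resid = dy - A @ coef
    s2 = resid @ resid / (dy.size - 2)
    cov = s2 * np.linalg.inv(A.T @ A)
    return coef[1] / np.sqrt(cov[1, 1])

# A random walk (unit root, like a price-level series in levels) ...
levels = np.cumsum(rng.normal(size=200))
# ... versus its first difference, which is stationary
t_levels = df_tstat(levels)
t_diff = df_tstat(np.diff(levels))
print(f"DF t-stat: levels {t_levels:.2f}, first difference {t_diff:.2f}")
```

Against the usual ~-2.9 Dickey-Fuller critical value, the levels series fails to reject the unit root while the differenced series rejects it decisively, mirroring the paper's stationarity results before it proceeds to the cointegration stage.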

  18. Developing CORE model-based worksheet with recitation task to facilitate students’ mathematical communication skills in linear algebra course

    NASA Astrophysics Data System (ADS)

    Risnawati; Khairinnisa, S.; Darwis, A. H.

    2018-01-01

    The purpose of this study was to develop a CORE model-based worksheet with recitation task that were valid and practical and could facilitate students’ communication skills in Linear Algebra course. This study was conducted in mathematics education department of one public university in Riau, Indonesia. Participants of the study were media and subject matter experts as validators as well as students from mathematics education department. The objects of this study are students’ worksheet and students’ mathematical communication skills. The results of study showed that: (1) based on validation of the experts, the developed students’ worksheet was valid and could be applied for students in Linear Algebra courses; (2) based on the group trial, the practicality percentage was 92.14% in small group and 90.19% in large group, so the worksheet was very practical and could attract students to learn; and (3) based on the post test, the average percentage of ideals was 87.83%. In addition, the results showed that the students’ worksheet was able to facilitate students’ mathematical communication skills in linear algebra course.

  19. Opportunity integrated assessment facilitating critical thinking and science process skills measurement on acid base matter

    NASA Astrophysics Data System (ADS)

    Sari, Anggi Ristiyana Puspita; Suyanta, LFX, Endang Widjajanti; Rohaeti, Eli

    2017-05-01

    Recognizing the importance of the development of critical thinking and science process skills, the instrument should give attention to the characteristics of chemistry. Therefore, constructing an accurate instrument for measuring those skills is important. However, integrated assessment instruments are limited in number. The purpose of this study is to validate an integrated assessment instrument for measuring students' critical thinking and science process skills on acid-base matter. The development of the test instrument adapted the McIntire model. The sample consisted of 392 second-grade high school students in the academic year of 2015/2016 in Yogyakarta. Exploratory Factor Analysis (EFA) was conducted to explore construct validity, whereas content validity was substantiated by Aiken's formula. The results show that the KMO statistic is 0.714, indicating sufficient sampling adequacy, and the Bartlett test is significant (a significance value of less than 0.05). Furthermore, the content validity coefficient, based on ratings from 8 experts, is 0.85. The findings support the integrated assessment instrument to measure critical thinking and science process skills on acid-base matter.
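Aiken's content-validity coefficient mentioned above is simple to compute: V = Σs / (n(c-1)), where s is each rating's distance above the lowest category, n the number of raters, and c the number of rating categories. The panel ratings below are invented for illustration (they merely produce a value close to the study's 0.85).

```python
def aikens_v(ratings, categories=5):
    """Aiken's V content-validity coefficient for one item.
    ratings: expert ratings on a 1..categories scale."""
    n = len(ratings)
    s = sum(r - 1 for r in ratings)          # distance above the lowest category
    return s / (n * (categories - 1))

# Hypothetical panel of 8 experts rating one item on a 1-5 scale
print(aikens_v([5, 4, 5, 4, 4, 5, 4, 4]))   # → 0.84375
```

V ranges from 0 (all raters at the lowest category) to 1 (all at the highest); items are typically retained when V exceeds a tabulated critical value for the given panel size.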

  20. External Validation of a Case-Mix Adjustment Model for the Standardized Reporting of 30-Day Stroke Mortality Rates in China

    PubMed Central

    Yu, Ping; Pan, Yuesong; Wang, Yongjun; Wang, Xianwei; Liu, Liping; Ji, Ruijun; Meng, Xia; Jing, Jing; Tong, Xu; Guo, Li; Wang, Yilong

    2016-01-01

    Background and Purpose A case-mix adjustment model has been developed and externally validated, demonstrating promise. However, the model has not been thoroughly tested among populations in China. In our study, we evaluated the performance of the model in Chinese patients with acute stroke. Methods The case-mix adjustment model A includes items on age, presence of atrial fibrillation on admission, National Institutes of Health Stroke Severity Scale (NIHSS) score on admission, and stroke type. Model B is similar to Model A but includes only the consciousness component of the NIHSS score. Both model A and B were evaluated to predict 30-day mortality rates in 13,948 patients with acute stroke from the China National Stroke Registry. The discrimination of the models was quantified by c-statistic. Calibration was assessed using Pearson’s correlation coefficient. Results The c-statistic of model A in our external validation cohort was 0.80 (95% confidence interval, 0.79–0.82), and the c-statistic of model B was 0.82 (95% confidence interval, 0.81–0.84). Excellent calibration was reported in the two models with Pearson’s correlation coefficient (0.892 for model A, p<0.001; 0.927 for model B, p = 0.008). Conclusions The case-mix adjustment model could be used to effectively predict 30-day mortality rates in Chinese patients with acute stroke. PMID:27846282
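The calibration check described above, correlating predicted with observed mortality, can be sketched by grouping patients into risk deciles. The cohort below is simulated so that outcomes genuinely follow the predicted risks (i.e., a well-calibrated model by construction); the sample size, risk range, and decile grouping are our illustrative choices, not the registry's.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical external-validation cohort: predicted 30-day mortality risks
# and outcomes simulated from those risks
p = rng.uniform(0.01, 0.6, size=5000)
y = (rng.uniform(size=5000) < p).astype(int)

# Group patients into risk deciles; compare mean predicted vs observed rates
pred_rate, obs_rate = [], []
for g in np.array_split(np.argsort(p), 10):
    pred_rate.append(p[g].mean())
    obs_rate.append(y[g].mean())

r = float(np.corrcoef(pred_rate, obs_rate)[0, 1])
print(f"calibration (Pearson r across deciles) = {r:.3f}")
```

A Pearson r near 1 across risk groups, as both models achieved in the Chinese cohort, indicates the predicted probabilities track the observed event rates; discrimination (the c-statistic) is assessed separately on the individual-level predictions.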

  1. Microscale Obstacle Resolving Air Quality Model Evaluation with the Michelstadt Case

    PubMed Central

    Rakai, Anikó; Kristóf, Gergely

    2013-01-01

    Modelling pollutant dispersion in cities is challenging for air quality models as the urban obstacles have an important effect on the flow field and thus the dispersion. Computational Fluid Dynamics (CFD) models with an additional scalar dispersion transport equation are a possible way to resolve the flowfield in the urban canopy and model dispersion taking into consideration the effect of the buildings explicitly. These models need detailed evaluation with the method of verification and validation to gain confidence in their reliability and use them as a regulatory purpose tool in complex urban geometries. This paper shows the performance of an open source general purpose CFD code, OpenFOAM for a complex urban geometry, Michelstadt, which has both flow field and dispersion measurement data. Continuous release dispersion results are discussed to show the strengths and weaknesses of the modelling approach, focusing on the value of the turbulent Schmidt number, which was found to give best statistical metric results with a value of 0.7. PMID:24027450

  2. Microscale obstacle resolving air quality model evaluation with the Michelstadt case.

    PubMed

    Rakai, Anikó; Kristóf, Gergely

    2013-01-01

    Modelling pollutant dispersion in cities is challenging for air quality models as the urban obstacles have an important effect on the flow field and thus the dispersion. Computational Fluid Dynamics (CFD) models with an additional scalar dispersion transport equation are a possible way to resolve the flowfield in the urban canopy and model dispersion taking into consideration the effect of the buildings explicitly. These models need detailed evaluation with the method of verification and validation to gain confidence in their reliability and use them as a regulatory purpose tool in complex urban geometries. This paper shows the performance of an open source general purpose CFD code, OpenFOAM for a complex urban geometry, Michelstadt, which has both flow field and dispersion measurement data. Continuous release dispersion results are discussed to show the strengths and weaknesses of the modelling approach, focusing on the value of the turbulent Schmidt number, which was found to give best statistical metric results with a value of 0.7.
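For reference, the modelling approach the record describes, an additional scalar transport equation closed with a turbulent Schmidt number, has the generic form below. The abstract does not give the authors' exact equation, so this is a standard textbook statement rather than the cited OpenFOAM setup:

```latex
\frac{\partial C}{\partial t} + u_j \frac{\partial C}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(D_m + \frac{\nu_t}{Sc_t}\right)
    \frac{\partial C}{\partial x_j}\right] + S_C,
\qquad Sc_t = \frac{\nu_t}{D_t}
```

where $C$ is the pollutant concentration, $D_m$ the molecular diffusivity, $\nu_t$ the turbulent viscosity from the flow solution, and $S_C$ the source term; lowering $Sc_t$ strengthens turbulent mixing of the scalar, and the study found $Sc_t = 0.7$ gave the best statistical metrics for the Michelstadt case.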

  3. KINEROS2-AGWA: Model Use, Calibration, and Validation

    NASA Technical Reports Server (NTRS)

    Goodrich, D C.; Burns, I. S.; Unkrich, C. L.; Semmens, D. J.; Guertin, D. P.; Hernandez, M.; Yatheendradas, S.; Kennedy, J. R.; Levick, L. R..

    2013-01-01

    KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.

  4. KINEROS2/AGWA: Model use, calibration and validation

    USGS Publications Warehouse

    Goodrich, D.C.; Burns, I.S.; Unkrich, C.L.; Semmens, Darius J.; Guertin, D.P.; Hernandez, M.; Yatheendradas, S.; Kennedy, Jeffrey R.; Levick, Lainie R.

    2012-01-01

    KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.

  5. A KLM-circuit model of a multi-layer transducer for acoustic bladder volume measurements.

    PubMed

    Merks, E J W; Borsboom, J M G; Bom, N; van der Steen, A F W; de Jong, N

    2006-12-22

    In a preceding study a new technique to non-invasively measure the bladder volume on the basis of non-linear wave propagation was validated. It was shown that the harmonic level generated at the posterior bladder wall increases for larger bladder volumes. A dedicated transducer is needed to further verify and implement this approach. This transducer must be capable of both transmission of high-pressure waves at fundamental frequency and reception of up to the third harmonic. For this purpose, a multi-layer transducer was constructed using a single element PZT transducer for transmission and a PVDF top-layer for reception. To determine feasibility of the multi-layer concept for bladder volume measurements, and to ensure optimal performance, an equivalent mathematical model on the basis of KLM-circuit modeling was generated. This model was obtained in two subsequent steps. Firstly, the PZT transducer was modeled without PVDF-layer attached by means of matching the model with the measured electrical input impedance. It was validated using pulse-echo measurements. Secondly, the model was extended with the PVDF-layer. The total model was validated by considering the PVDF-layer as a hydrophone on the PZT transducer surface and comparing the measured and simulated PVDF responses on a wave transmitted by the PZT transducer. The obtained results indicated that a valid model for the multi-layer transducer was constructed. The model showed feasibility of the multi-layer concept for bladder volume measurements. It also allowed for further optimization with respect to electrical matching and transmit waveform. Additionally, the model demonstrated the effect of mechanical loading of the PVDF-layer on the PZT transducer.

  6. Development and validation of the positive affect and well-being scale for the neurology quality of life (Neuro-QOL) measurement system.

    PubMed

    Salsman, John M; Victorson, David; Choi, Seung W; Peterman, Amy H; Heinemann, Allen W; Nowinski, Cindy; Cella, David

    2013-11-01

    To develop and validate an item response theory-based patient-reported outcomes assessment tool of positive affect and well-being (PAW). This is part of a larger NINDS-funded study to develop a health-related quality of life measurement system across major neurological disorders, called Neuro-QOL. Informed by a literature review and qualitative input from clinicians and patients, item pools were created to assess PAW concepts. Items were administered to a general population sample (N = 513) and a group of individuals with a variety of neurologic conditions (N = 581) for calibration and validation purposes, respectively. A 23-item calibrated bank and a 9-item short form of PAW were developed, reflecting components of positive affect, life satisfaction, or an overall sense of purpose and meaning. The Neuro-QOL PAW measure demonstrated sufficient unidimensionality and displayed good internal consistency, test-retest reliability, model fit, convergent and discriminant validity, and responsiveness. The Neuro-QOL PAW measure was designed to help clinicians and researchers better evaluate and understand the potential role of positive health processes for individuals with chronic neurological conditions. Further psychometric testing within and between neurological conditions, as well as testing in non-neurologic chronic diseases, will help evaluate the generalizability of this new tool.
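
    Item banks such as this one are calibrated under item response theory (IRT). As a simplified sketch of the core quantity, here using a dichotomous two-parameter logistic model rather than the polytomous model typically used for Neuro-QOL items (all parameter values are hypothetical):

```python
import math

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function: probability of
    endorsing an item given latent trait theta, item discrimination a,
    and item difficulty b. Calibration estimates (a, b) per item so that
    any subset of items scores respondents on the same theta scale."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

When theta equals the item difficulty, the endorsement probability is exactly 0.5, which is what anchors items of different difficulty to a common metric.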

  7. Development of a Detailed Volumetric Finite Element Model of the Spine to Simulate Surgical Correction of Spinal Deformities

    PubMed Central

    Driscoll, Mark; Mac-Thiong, Jean-Marc; Labelle, Hubert; Parent, Stefan

    2013-01-01

    A large spectrum of medical devices exists to correct deformities associated with spinal disorders. The development of a detailed volumetric finite element model of the osteoligamentous spine would serve as a valuable tool to assess, compare, and optimize spinal devices. Thus, the purpose of this study was to develop and initiate validation of a detailed osteoligamentous finite element model of the spine with simulated correction from spinal instrumentation. A finite element model of the spine from T1 to L5 was developed using properties and geometry from the published literature and patient data. Spinal instrumentation, consisting of segmental translation of a scoliotic spine, was emulated. Postoperative patient data and relevant published data on intervertebral disc stress, screw/vertebra pullout forces, and spinal profiles were used to evaluate the model's validity. Intervertebral disc and vertebral reaction stresses respected published in vivo, ex vivo, and in silico values. Screw/vertebra reaction forces agreed with accepted pullout threshold values. Cobb angle measurements of spinal deformity following simulated surgical instrumentation corroborated patient data. This computational biomechanical analysis validated a detailed volumetric spine model. Future studies seek to exploit the model to explore the performance of corrective spinal devices. PMID:23991426
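
    The Cobb angle used above for validation is simply the angle between the endplates of the two most-tilted vertebrae. A minimal sketch of that geometric measure, assuming endplate direction vectors have already been extracted from the model or radiograph (inputs are hypothetical):

```python
import math

def cobb_angle(v_upper, v_lower):
    """Cobb angle in degrees between two 2D endplate direction vectors,
    e.g. measured in the coronal plane. The standard scalar measure of
    scoliotic curve severity."""
    ux, uy = v_upper
    lx, ly = v_lower
    cos_t = (ux * lx + uy * ly) / (math.hypot(ux, uy) * math.hypot(lx, ly))
    cos_t = max(-1.0, min(1.0, cos_t))  # guard against rounding outside [-1, 1]
    return math.degrees(math.acos(cos_t))
```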

  8. Simulation for Prediction of Entry Article Demise (SPEAD): An Analysis Tool for Spacecraft Safety Analysis and Ascent/Reentry Risk Assessment

    NASA Technical Reports Server (NTRS)

    Ling, Lisa

    2014-01-01

    For the purpose of performing safety analysis and risk assessment for a potential off-nominal atmospheric reentry resulting in vehicle breakup, a synthesis of trajectory propagation coupled with thermal analysis and the evaluation of node failure is required to predict the sequence of events, the timeline, and the progressive demise of spacecraft components. To provide this capability, the Simulation for Prediction of Entry Article Demise (SPEAD) analysis tool was developed. The software and methodology have been validated against actual flights, telemetry data, and validated software, and safety/risk analyses were performed for various programs using SPEAD. This report discusses the capabilities, modeling, validation, and application of the SPEAD analysis tool.

  9. Characterising the perceived value of mathematics educational apps in preservice teachers

    NASA Astrophysics Data System (ADS)

    Handal, Boris; Campbell, Chris; Cavanagh, Michael; Petocz, Peter

    2016-03-01

    This study validated the semantic items of three related scales aimed at characterising the perceived worth of mathematics-education-related mobile applications (apps). The technological pedagogical content knowledge (TPACK) model was used as the conceptual framework for the analysis. Three hundred and seventy-three preservice students studying primary school education from two public and one private Australian universities participated in the study. The respondents examined three different apps using a purposively designed instrument in regard to either their explorative, productive or instructive instructional role. While construct validity could not be established due to a broad range of variability in responses implying a high degree of subjectivity in respondents' judgments, the qualitative analysis was effective in establishing content validity.

  10. Development and validation of a general purpose linearization program for rigid aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Antoniewicz, R. F.

    1985-01-01

    A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high-performance aircraft.
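
    LINEAR itself is a FORTRAN program; the core idea of numerically extracting a linear state-space model from nonlinear equations of motion can be sketched with central finite differences (the function names here are illustrative, not LINEAR's actual interface):

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Numerically linearize xdot = f(x, u) about the operating point
    (x0, u0), returning the state matrix A = df/dx and the input matrix
    B = df/du via central differences."""
    x0 = np.asarray(x0, dtype=float)
    u0 = np.asarray(u0, dtype=float)
    k = np.asarray(f(x0, u0)).size
    A = np.zeros((k, x0.size))
    B = np.zeros((k, u0.size))
    for i in range(x0.size):
        dx = np.zeros(x0.size)
        dx[i] = eps
        A[:, i] = (np.asarray(f(x0 + dx, u0)) - np.asarray(f(x0 - dx, u0))) / (2 * eps)
    for j in range(u0.size):
        du = np.zeros(u0.size)
        du[j] = eps
        B[:, j] = (np.asarray(f(x0, u0 + du)) - np.asarray(f(x0, u0 - du))) / (2 * eps)
    return A, B
```

For an already-linear system the finite differences recover A and B exactly up to rounding, which makes a convenient sanity check.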

  11. Validation of Hardware in the Loop (HIL) Simulation for Use in Heavy Truck Stability Control System Effectiveness Research

    DOT National Transportation Integrated Search

    2009-04-27

    A Hardware in the Loop (HiL) system was developed to investigate heavy truck instability due to loss of control and rollover situations with and without ESC/RSC systems for a wide range of maneuvers and speeds. The purpose of this HiL model is to exa...

  12. Reconnecting Youth: A Peer Group Approach to Building Life Skills.

    ERIC Educational Resources Information Center

    Eggert, Leona L.; Nicholas, Liela J.; Owen, Linda M.

    The Reconnecting Youth (RY) program offers a carefully designed, research-validated model for building strength and competence in at-risk students, transforming risk into resilience. This manual was written to assist group leaders in implementing the RY program. The overall purpose of the program is to reach high-risk youth, who are on a potential…

  13. Exploring the Factors Affecting Learners' Continuance Intention of MOOCs for Online Collaborative Learning: An Extended ECM Perspective

    ERIC Educational Resources Information Center

    Junjie, Zhou

    2017-01-01

    The purpose of this paper was to investigate what factors influence learners' continuance intention in massive open online courses (MOOCs) for online collaborative learning. An extended expectation confirmation model (ECM) was adopted as the theoretical foundation. A total of 435 valid samples were collected in mainland China and structural…

  14. An Investigation of the Internal Structure of the Biggs Study Process Questionnaire.

    ERIC Educational Resources Information Center

    Watkins, David; Hattie, John

    1980-01-01

    Results of an Australian study of the Biggs Study Process Questionnaire (SPQ) are presented. The purposes of the research were to: (1) re-examine the SPQ's internal consistency; (2) explore the dimensionality of the SPQ scales; and (3) investigate the validity of Biggs' model of the study process complex through factor analysis. (Author/GK)

  15. How place attachments influence recreation conflict and coping behavior

    Treesearch

    Cheng-Ping Wang; Yin-Hsun Chang

    2012-01-01

    The purpose of this study was to explore how place attachment influences recreation conflict and coping behaviors based on the Transactional Stress/Coping Model. The interference between bikers and walkers in Bali Zon-An Park in Taipei County, Taiwan was investigated in May and June of 2007. A total of 384 valid questionnaires were collected.

  16. Evolution of Occupant Survivability Simulation Framework Using FEM-SPH Coupling

    DTIC Science & Technology

    2011-08-10

    Conference presentation (oral only). Cited reference: K. Williams et al., “Validation of a Loading Model for Simulating Blast Mine Effects on Armoured Vehicles”, 7th… The views expressed do not necessarily state or reflect those of the United States Government or the Department of the Army (DoA), and shall not be used for advertising or product endorsement purposes.

  17. Developing Attitude Scale, Reliability and Validity for Pre-Service Teachers towards Drama Lesson

    ERIC Educational Resources Information Center

    Çelik, Özkan; Bozdemir, Hafife; Uyanik, Gökhan

    2016-01-01

    The purpose of this study is to develop an attitude scale for pre-service teachers towards drama lessons. A survey model was used in the study. The sample consisted of 258 pre-service teachers. The "Attitude scale towards drama lesson for pre-service teachers" was developed and used as the data collection tool. Exploratory and confirmatory…

  18. Measuring engagement in nurses: the psychometric properties of the Persian version of Utrecht Work Engagement Scale

    PubMed Central

    Torabinia, Mansour; Mahmoudi, Sara; Dolatshahi, Mojtaba; Abyaz, Mohamad Reza

    2017-01-01

    Background: Considering the overall tendency in psychology, researchers in the field of work and organizational psychology have become progressively interested in employees' positive experiences at work, such as work engagement. This study was conducted with 2 main purposes: assessing the psychometric properties of the Utrecht Work Engagement Scale (UWES), and finding any association between work engagement and burnout in nurses. Methods: The present methodological study was conducted in 2015 and included 248 females and 34 males with 6 months to 30 years of job experience. After the translation process, face and content validity were assessed by qualitative and quantitative methods. Moreover, the content validity ratio, scale-level content validity index, and item-level content validity index were measured for this scale. Construct validity was determined by factor analysis, and internal consistency and stability reliability were assessed. Factor analysis, test-retest, Cronbach's alpha, and association analysis were used as statistical methods. Results: Face and content validity were acceptable. Exploratory factor analysis suggested a new 3-factor model. In this new model, some items from the construct model of the original version were relocated, while the same 17 items were retained. The new model, the Persian version of the UWES, was confirmed by divergent validity against the Copenhagen Burnout Inventory. Internal consistency reliability for the total scale and the subscales was 0.76 to 0.89. Results from the Pearson correlation test indicated a high degree of test-retest reliability (r = 0.89); the ICC was 0.91. Engagement was negatively related to burnout and overtime per month, whereas it was positively related to age and job experience. Conclusion: The Persian 3-factor model of the Utrecht Work Engagement Scale is a valid and reliable instrument to measure work engagement in Iranian nurses as well as in other medical professionals. PMID:28955665

  19. A fuzzy set preference model for market share analysis

    NASA Technical Reports Server (NTRS)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share prediction).
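
    As an illustrative sketch of how linguistic attribute levels can enter a linear preference model via fuzzy membership functions (the attribute, term breakpoints, and weights below are invented for illustration, not taken from the article):

```python
def triangular(x, a, b, c):
    """Membership in a triangular fuzzy set with support [a, c] and peak b."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic terms for a "price" attribute on a 0-100 scale.
CHEAP = (0, 0, 50)
MODERATE = (25, 50, 75)
EXPENSIVE = (50, 100, 100)

def preference(price, weights):
    """Overall preference as a linear combination of fuzzy memberships,
    analogous to conjoint part-worths but defined over linguistic terms."""
    mu = [triangular(price, *CHEAP),
          triangular(price, *MODERATE),
          triangular(price, *EXPENSIVE)]
    return sum(w * m for w, m in zip(weights, mu))
```

Because memberships only require ordinal-style judgments of where a level sits between terms, individual-level models can be estimated from far fewer ratings than numeric conjoint designs.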

  20. Validity of a Self-Administered 3-Day Physical Activity Recall in Young Adults

    ERIC Educational Resources Information Center

    Han, Jennifer L.; Dinger, Mary K.

    2009-01-01

    Background: Most physical activity recall questionnaires assess activity over a 7-day period. However, questionnaires have been validated in adolescents and adults using shorter recall timeframes. Purpose: The purpose of this study was to assess the validity of a self-administered 3-day physical activity recall instrument (3DR) in young adults.…

  1. Hydrological Relevant Parameters from Remote Sensing - Spatial Modelling Input and Validation Basis

    NASA Astrophysics Data System (ADS)

    Hochschild, V.

    2012-12-01

    This keynote paper demonstrates how multisensoral remote sensing data are used as spatial input for mesoscale hydrological modeling as well as for sophisticated validation purposes. The tasks of water resources management are addressed, as well as the role of remote sensing in regional catchment modeling. Parameters derived from remote sensing discussed in this presentation include land cover, topographical information from digital elevation models, biophysical vegetation parameters, surface soil moisture, evapotranspiration estimates, lake level measurements, determination of snow-covered area, lake ice cycles, soil erosion type, mass wasting monitoring, sealed area, and flash flood estimation. The current capabilities of recent satellite and airborne systems are discussed, and data integration into GIS and hydrological modeling, scaling issues, and quality assessment are addressed. The presentation provides an overview of the author's own research examples from Germany, Tibet and Africa (Ethiopia, South Africa) as well as other international research activities. Finally, the paper gives an outlook on upcoming sensors and summarizes the possibilities of remote sensing in hydrology.

  2. Causal Relationships between the Psychological Acceptance Process of Athletic Injury and Athletic Rehabilitation Behavior

    PubMed Central

    Tatsumi, Tomonori; Takenouchi, Takashi

    2014-01-01

    [Purpose] The purpose of this study was to examine the causal relationships between the psychological acceptance process of athletic injury and athletic-rehabilitation behavior. [Subjects] One hundred forty-four athletes who had injury experiences participated in this study, and 133 (mean age = 20.21 years, SD = 1.07; mean weeks without playing sports = 7.97 weeks, SD = 11.26) of them provided valid questionnaire responses, which were subjected to analysis. [Methods] The subjects were asked to answer our originally designed questionnaire, the Psychosocial Recovery Factor Scale (PSRF-S), and two other pre-existing scales, the Athletic Injury Psychological Acceptance Scale and the Athletic-Rehabilitation Dedication Scale. [Results] The results of factor analysis indicate that “emotional stability”, “social competence in the team”, “temporal perspective”, and “communication with the teammates” are factors of the PSRF-S. Lastly, the causal model, in which psychosocial recovery factors are mediated by psychological acceptance of athletic injury and influence rehabilitation behaviors, was examined using structural equation modeling (SEM). The results of SEM indicate that the factors of emotional stability and temporal perspective are mediated by the psychological acceptance of the injury, which positively influences athletic-rehabilitation dedication. [Conclusion] The causal model was confirmed to be valid. PMID:25202190

  3. Examining the Predictive Validity of a Dynamic Assessment of Decoding to Forecast Response to Tier 2 Intervention

    PubMed Central

    Cho, Eunsoo; Compton, Donald L.; Fuchs, Doug; Fuchs, Lynn S.; Bouton, Bobette

    2013-01-01

    The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small group tutoring in a response-to-intervention model. First-grade students (n=134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in reading for 14 weeks. Student responsiveness to Tier 2 was assessed weekly with word identification fluency (WIF). A series of conditional individual growth curve analyses were completed that modeled the correlates of WIF growth (final level of performance and growth). Its purpose was to examine the predictive validity of DA in the presence of 3 sets of variables: static decoding measures, Tier 1 responsiveness indicators, and pre-reading variables (phonemic awareness, rapid letter naming, oral vocabulary, and IQ). DA was a significant predictor of final level and growth, uniquely explaining 3% – 13% of the variance in Tier 2 responsiveness depending on the competing predictors in the model and WIF outcome (final level of performance or growth). Although the additional variances explained uniquely by DA were relatively small, results indicate the potential of DA in identifying Tier 2 nonresponders. PMID:23213050

  4. Examining the predictive validity of a dynamic assessment of decoding to forecast response to tier 2 intervention.

    PubMed

    Cho, Eunsoo; Compton, Donald L; Fuchs, Douglas; Fuchs, Lynn S; Bouton, Bobette

    2014-01-01

    The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small-group tutoring in a response-to-intervention model. First grade students (n = 134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in reading for 14 weeks. Student responsiveness to Tier 2 was assessed weekly with word identification fluency (WIF). A series of conditional individual growth curve analyses were completed that modeled the correlates of WIF growth (final level of performance and growth). Its purpose was to examine the predictive validity of DA in the presence of three sets of variables: static decoding measures, Tier 1 responsiveness indicators, and prereading variables (phonemic awareness, rapid letter naming, oral vocabulary, and IQ). DA was a significant predictor of final level and growth, uniquely explaining 3% to 13% of the variance in Tier 2 responsiveness depending on the competing predictors in the model and WIF outcome (final level of performance or growth). Although the additional variances explained uniquely by DA were relatively small, results indicate the potential of DA in identifying Tier 2 nonresponders. © Hammill Institute on Disabilities 2012.
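
    The "unique variance explained" reported in the two records above is the increment in R² when the dynamic assessment enters a regression on top of the competing predictors. A sketch of that hierarchical-regression computation (the data and predictor names are invented, not the study's):

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / ss_tot

def unique_variance(X_base, x_new, y):
    """Increment in R^2 when one predictor (e.g., a dynamic assessment
    score) is added to a set of competing predictors."""
    return r_squared(np.column_stack([X_base, x_new]), y) - r_squared(X_base, y)
```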

  5. Effect of Anisotropic Yield Function Evolution on Estimation of Forming Limit Diagram

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, K.; Basak, S.; Choi, H. J.; Panda, S. K.; Lee, M. G.

    2017-09-01

    In theoretical prediction of the forming limit diagram (FLD), variations in yield stress and R-values along different material directions have long been incorporated to enhance accuracy. Although the influences of different yield models and hardening laws on formability are well addressed, the anisotropic evolution of yield loci under monotonic loading with different deformation modes is yet to be explored. In the present study, the Marciniak-Kuczynski (M-K) model was modified to incorporate the change in the shape of the initial yield function as it evolves due to anisotropic hardening. Swift's hardening law, along with two different anisotropic yield criteria, namely Hill48 and Yld2000-2d, was implemented in the model. The Hill48 yield model was applied with a non-associated flow rule to comprehend the effect of variations in both yield stress and R-values. The numerically estimated FLDs were validated by comparison with FLDs evaluated through experiments. A low-carbon steel was selected, and hemispherical punch stretching tests were performed for FLD evaluation. Additionally, the numerically estimated FLDs were incorporated into FE simulations to predict limiting dome heights for validation purposes. Other formability measures, such as strain distributions over the deformed cup surface, were validated against experimental results.
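
    The Hill48 criterion referenced above is a quadratic yield function whose coefficients can be tied to measured Lankford coefficients (R-values). A sketch under one common plane-stress parameterization, normalized so that uniaxial tension along the rolling direction returns the applied stress (the calibration used in the paper may differ):

```python
import math

def hill48_eq_stress(sxx, syy, sxy, r0, r45, r90):
    """Hill48 equivalent stress in plane stress, with coefficients
    F, G, H, N derived from the Lankford coefficients r0, r45, r90
    and normalized so that G + H = 1."""
    G = 1.0 / (1.0 + r0)
    H = r0 / (1.0 + r0)
    F = r0 / (r90 * (1.0 + r0))
    N = (r0 + r90) * (1.0 + 2.0 * r45) / (2.0 * r90 * (1.0 + r0))
    return math.sqrt(F * syy**2 + G * sxx**2 + H * (sxx - syy)**2 + 2.0 * N * sxy**2)
```

In the isotropic limit (r0 = r45 = r90 = 1) the expression reduces to the von Mises equivalent stress.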

  6. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry.

    PubMed

    Mathieu, Kelsey B; Kappadath, S Cheenu; White, R Allen; Atkinson, E Neely; Cody, Dianna D

    2011-08-01

    The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semi-logarithmic (exponential) and linear interpolation]. The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).
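
    The paper's specific Lambert W transmission function is not reproduced here, but the traditional semilogarithmic interpolation it was compared against can be sketched: fit a local exponential through two transmission measurements and solve for the thickness at 50% transmission.

```python
import math

def hvl_semilog(x1, t1, x2, t2):
    """Half-value layer by semilogarithmic (exponential) interpolation:
    assume T(x) = t1 * exp(-mu * (x - x1)) between the measured points
    (x1, t1) and (x2, t2), then solve T(x) = 0.5."""
    mu = math.log(t1 / t2) / (x2 - x1)
    return x1 + math.log(2.0 * t1) / mu
```

For a polyenergetic beam the effective mu changes with thickness (beam hardening), which is why this estimate is sensitive to the choice of interpolation points.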

  7. Information system end-user satisfaction and continuance intention: A unified modeling approach.

    PubMed

    Hadji, Brahim; Degoulet, Patrice

    2016-06-01

    Permanent evaluation of end-user satisfaction and continuance intention is a critical issue at each phase of a clinical information system (CIS) project, but most validation studies are concerned with the pre- or early post-adoption phases. The purpose of this study was twofold: to validate at the Pompidou University Hospital (HEGP) an information technology late post-adoption model built from four validated models and to propose a unified metamodel of evaluation that could be adapted to each context or deployment phase of a CIS project. Five dimensions, i.e., CIS quality (CISQ), perceived usefulness (PU), confirmation of expectations (CE), user satisfaction (SAT), and continuance intention (CI) were selected to constitute the CI evaluation model. The validity of the model was tested using the combined answers to four surveys performed between 2011 and 2015, i.e., more than ten years after the opening of HEGP in July 2000. Structural equation modeling was used to test the eight model-associated hypotheses. The multi-professional study group of 571 responders consisted of 158 doctors, 282 nurses, and 131 secretaries. The evaluation model accounted for 84% of variance of satisfaction and 53% of CI variance for the period 2011-2015 and for 92% and 69% for the period 2014-2015. In very late post adoption, CISQ appears to be the major determinant of satisfaction and CI. Combining the results obtained at various phases of CIS deployment, a Unified Model of Information System Continuance (UMISC) is proposed. In a meaningful CIS use situation at HEGP, this study confirms the importance of CISQ in explaining satisfaction and CI. The proposed UMISC model that can be adapted to each phase of CIS deployment could facilitate the necessary efforts of permanent CIS acceptance and continuance evaluation. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Assessing the Role of Spirituality in Coping Among African Americans Diagnosed with Cancer

    PubMed Central

    Schulz, Emily; Caplan, Lee; Blake, Victor; Southward, Vivian L.; Buckner, Ayanna V.

    2013-01-01

    Spirituality plays an important role in cancer coping among African Americans. The purpose of this study was to report on the initial psychometric properties of instruments specific to the cancer context, assessing the role of spirituality in coping. Items were developed based on a theoretical model of spirituality and qualitative patient interviews. The instruments reflected connections to self, others, God, and the world. One hundred African American cancer survivors completed the instruments by telephone. The instruments showed adequate internal reliability, mixed convergent validity, discriminant validity, and interpretable factor structures. PMID:21246282

  9. Translation and validation of the German version of the Bournemouth Questionnaire for Neck Pain.

    PubMed

    Soklic, Marina; Peterson, Cynthia; Humphreys, B Kim

    2012-01-25

    Clinical outcome measures are important tools to monitor patient improvement during treatment as well as to document changes for research purposes. The short-form Bournemouth questionnaire for neck pain patients (BQN) was developed from the biopsychosocial model and measures pain, disability, cognitive and affective domains. It has been shown to be a valid and reliable outcome measure in English, French and Dutch, and more sensitive to change compared to other questionnaires. The purpose of this study was to translate and validate a German version of the Bournemouth questionnaire for neck pain patients. German translation and back-translation into English of the BQN were done independently by four persons and overseen by an expert committee. Face validity of the German BQN was tested on 30 neck pain patients in a single chiropractic practice. Test-retest reliability was evaluated on 31 medical students and chiropractors before and after a lecture. The German BQN was then assessed on 102 first-time neck pain patients at two chiropractic practices for internal consistency, external construct validity, external longitudinal construct validity and sensitivity to change compared to the German versions of the Neck Disability Index (NDI) and the Neck Pain and Disability Scale (NPAD). Face validity testing led to minor changes to the German BQN. The intraclass correlation coefficient for test-retest reliability was 0.99. The internal consistency was strong for all 7 items of the BQN, with Cronbach α's of .79 and .80 for the pre- and post-treatment total scores. External construct validity and external longitudinal construct validity using Pearson's correlation coefficient showed statistically significant correlations for all 7 scales of the BQN with the other questionnaires. The German BQN showed greater responsiveness compared to the other questionnaires for all scales. The German BQN is a valid and reliable outcome measure that has been successfully translated and culturally adapted. It is shorter, easier to use, and more responsive to change than the NDI and NPAD.
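
    The internal-consistency statistic reported above, Cronbach's alpha, is straightforward to compute from an item-score matrix (the data here are hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)
```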

  10. Validity and Reliability of the 8-Item Work Limitations Questionnaire.

    PubMed

    Walker, Timothy J; Tullar, Jessica M; Diamond, Pamela M; Kohl, Harold W; Amick, Benjamin C

    2017-12-01

    Purpose To evaluate the factorial validity, scale reliability, test-retest reliability, convergent validity, and discriminant validity of the 8-item Work Limitations Questionnaire (WLQ) among employees from a public university system. Methods A secondary analysis using de-identified data from employees who completed an annual Health Assessment between 2009 and 2015 addressed the research aims. Confirmatory factor analysis (CFA) (n = 10,165) tested the latent structure of the 8-item WLQ. Scale reliability was determined using a CFA-based approach, while test-retest reliability was determined using the intraclass correlation coefficient. Convergent/discriminant validity was tested by evaluating relations of the 8-item WLQ with health/performance variables for convergent validity (health-related work performance, number of chronic conditions, and general health) and demographic variables for discriminant validity (gender and institution type). Results A 1-factor model with three correlated residuals demonstrated excellent model fit (CFI = 0.99, TLI = 0.99, RMSEA = 0.03, and SRMR = 0.01). The scale reliability was acceptable (0.69, 95% CI 0.68-0.70) and the test-retest reliability was very good (ICC = 0.78). Low-to-moderate associations were observed between the 8-item WLQ and the health/performance variables, while weak associations were observed with the demographic variables. Conclusions The 8-item WLQ demonstrated sufficient reliability and validity among employees from a public university system. Results suggest the 8-item WLQ is a usable alternative for studies when the more comprehensive 25-item WLQ is not available.

  11. Validation of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm

    PubMed Central

    Lawson, Sara Nicole; Zaluski, Neal; Petrie, Amanda; Arnold, Cathy; Basran, Jenny

    2013-01-01

    ABSTRACT Purpose: To investigate the concurrent validity of the Saskatoon Falls Prevention Consortium's Falls Screening and Referral Algorithm (FSRA). Method: A total of 29 older adults (mean age 77.7 [SD 4.0] y) residing in an independent-living seniors' complex who met inclusion criteria completed a demographic questionnaire and the components of the FSRA and Berg Balance Scale (BBS). The FSRA consists of the Elderly Fall Screening Test (EFST) and the Multi-factor Falls Questionnaire (MFQ); it is designed to categorize individuals into low, moderate, or high fall-risk categories to determine appropriate management pathways. A predictive model for probability of fall risk, based on previous research, was used to determine the concurrent validity of the FSRA. Results: The FSRA placed 79% of participants into the low-risk category, whereas the predictive model found the probability of fall risk to range from 0.04 to 0.74, with a mean of 0.35 (SD 0.25). No statistically significant correlation was found between the FSRA and the predictive model for probability of fall risk (Spearman's ρ=0.35, p=0.06). Conclusion: The FSRA lacks concurrent validity relative to a previously established model of fall risk and appears to over-categorize individuals into the low-risk group. Further research on the FSRA as an adequate tool to screen community-dwelling older adults for fall risk is recommended. PMID:24381379
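
    The validity statistic reported above, Spearman's ρ, is the Pearson correlation of the ranks of the two variables. A minimal sketch for data without ties (real screening data would need tie-aware average ranks):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation for data without ties:
    Pearson correlation of the rank orders of x and y."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]
```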

  12. Estimating the Geoelectric Field Using Precomputed EMTFs: Effect of Magnetometer Cadence

    NASA Astrophysics Data System (ADS)

    Grawe, M.; Butala, M.; Makela, J. J.; Kamalabadi, F.

    2017-12-01

    Studies that make use of electromagnetic transfer functions (EMTFs) to calculate the surface electric field from a specified surface magnetic field often use historical magnetometer information for validation and comparison purposes. Depending on the data source, the magnetometer cadence is typically between 1 and 60 seconds. It is often implied that a 60 (and sometimes 10) second cadence is acceptable for purposes of geoelectric field calculation using a geophysical model. Here, we quantitatively assess this claim under different geological settings and using models of varying complexity (using uniform/1D/3D EMTFs) across several different space weather events. Conclusions are made about sampling rate sufficiency as a function of local geology and the spectral content of the surface magnetic field.

  13. HyPEP FY06 Report: Models and Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DOE report

    2006-09-01

The Department of Energy envisions the next generation very high-temperature gas-cooled reactor (VHTR) as a single-purpose or dual-purpose facility that produces hydrogen and electricity. The Ministry of Science and Technology (MOST) of the Republic of Korea also selected VHTR for the Nuclear Hydrogen Development and Demonstration (NHDD) Project. This research project aims at developing a user-friendly program for evaluating and optimizing cycle efficiencies of producing hydrogen and electricity in a Very-High-Temperature Reactor (VHTR). Systems for producing electricity and hydrogen are complex and the calculations associated with optimizing these systems are intensive, involving a large number of operating parameter variations and many different system configurations. This research project will produce the HyPEP computer model, which is specifically designed to be an easy-to-use and fast running tool for evaluating nuclear hydrogen and electricity production facilities. The model accommodates flexible system layouts and its cost models will enable HyPEP to be well-suited for system optimization. Specific activities of this research are designed to develop the HyPEP model into a working tool, including (a) identifying major systems and components for modeling, (b) establishing system operating parameters and calculation scope, (c) establishing the overall calculation scheme, (d) developing component models, (e) developing cost and optimization models, and (f) verifying and validating the program. Once the HyPEP model is fully developed and validated, it will be used to execute calculations on candidate system configurations. The FY-06 report includes a description of reference designs, methods used in this study, and models and computational strategies developed for the first year effort. Results from computer codes such as HYSYS and GASS/PASS-H, used by Idaho National Laboratory and Argonne National Laboratory respectively, will be benchmarked against HyPEP results in the following years.

  14. A process-oriented measure of habit strength for moderate-to-vigorous physical activity

    PubMed Central

    Grove, J. Robert; Zillich, Irja; Medic, Nikola

    2014-01-01

    Purpose: Habitual action is an important aspect of health behaviour, but the relevance of various habit strength indicators continues to be debated. This study focused specifically on moderate-to-vigorous physical activity (MVPA) and evaluated the construct validity of a framework emphasizing patterned action, stimulus-response bonding, automaticity, and negative consequences for nonperformance as indicators of habit strength for this form of exercise. Methods: Upper-level undergraduates (N = 124) provided demographic information and responded to questionnaire items assessing historical MVPA involvement, current MVPA involvement, and the four proposed habit strength dimensions. Factor analyses were used to examine the latent structure of the habit strength indicators, and the model's construct validity was evaluated via an examination of relationships with repetition history and current behaviour. Results: At a measurement level, findings indicated that the proposed four-component model possessed psychometric integrity as a coherent set of factors. Criterion-related validity was also demonstrated via significant changes in three of the four factors as a function of past involvement in MVPA and significant correlations with the frequency, duration, and intensity of current MVPA. Conclusions: These findings support the construct validity of this exercise habit strength model and suggest that it could provide a template for future research on how MVPA habits are developed and maintained. PMID:25750789

  16. Experimental psychiatric illness and drug abuse models: from human to animal, an overview.

    PubMed

    Edwards, Scott; Koob, George F

    2012-01-01

    Preclinical animal models have supported much of the recent rapid expansion of neuroscience research and have facilitated critical discoveries that undoubtedly benefit patients suffering from psychiatric disorders. This overview serves as an introduction for the following chapters describing both in vivo and in vitro preclinical models of psychiatric disease components and briefly describes models related to drug dependence and affective disorders. Although there are no perfect animal models of any psychiatric disorder, models do exist for many elements of each disease state or stage. In many cases, the development of certain models is essentially restricted to the human clinical laboratory domain for the purpose of maximizing validity, whereas the use of in vitro models may best represent an adjunctive, well-controlled means to model specific signaling mechanisms associated with psychiatric disease states. The data generated by preclinical models are only as valid as the model itself, and the development and refinement of animal models for human psychiatric disorders continues to be an important challenge. Collaborative relationships between basic neuroscience and clinical modeling could greatly benefit the development of new and better models, in addition to facilitating medications development.

  17. Cross-cultural adaptation, reliability and construct validity of the Tampa scale for kinesiophobia for temporomandibular disorders (TSK/TMD-Br) into Brazilian Portuguese.

    PubMed

    Aguiar, A S; Bataglion, C; Visscher, C M; Bevilaqua Grossi, D; Chaves, T C

    2017-07-01

Fear of movement (kinesiophobia) seems to play an important role in the development of chronic pain. However, for temporomandibular disorders (TMD), there is a scarcity of studies about this topic. The Tampa Scale for Kinesiophobia for TMD (TSK/TMD) is the most widely used instrument to measure fear of movement and it is not available in Brazilian Portuguese. The purpose of this study was to culturally adapt the TSK/TMD to Brazilian Portuguese and to assess its psychometric properties regarding internal consistency, reliability, and construct and structural validity. A total of 100 female patients with chronic TMD participated in the validation process of the TSK/TMD-Br. The intraclass correlation coefficient (ICC) was used for statistical analysis of reliability (test-retest), Cronbach's alpha for internal consistency, Spearman's rank correlation for construct validity and confirmatory factor analysis (CFA) for structural validity. CFA endorsed the pre-specified model with two domains and 12 items (Activity Avoidance - AA/Somatic Focus - SF) and all items obtained factor loadings greater than 0·4. Acceptable levels of reliability were found (ICC > 0·75) for all questions and domains of the TSK/TMD-Br. For internal consistency, Cronbach's α of 0·78 was found for both domains. Moderate correlations (0·40 < r < 0·60) were observed for 84% of the analyses conducted between TSK/TMD-Br scores versus catastrophising, depression and jaw functional limitation. The 12-item, two-factor TSK/TMD-Br demonstrated sound psychometric properties (transcultural validity, reliability, internal consistency and structural validity). As such, the instrument can be used in clinical settings and for research purposes. © 2017 John Wiley & Sons Ltd.
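The internal-consistency figure quoted above is Cronbach's α, which follows the standard formula α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₜ the variance of the total score. A minimal pure-Python sketch with made-up item scores (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of per-item score lists,
    all covering the same respondents in the same order."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Two partially consistent 4-respondent items (illustrative only):
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 1, 4, 3]])  # -> 0.75
```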

  18. MO-C-17A-03: A GPU-Based Method for Validating Deformable Image Registration in Head and Neck Radiotherapy Using Biomechanical Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neylon, J; Min, Y; Qi, S

    2014-06-15

Purpose: Deformable image registration (DIR) plays a pivotal role in head and neck adaptive radiotherapy but a systematic validation of DIR algorithms has been limited by a lack of quantitative high-resolution ground truth. We address this limitation by developing a GPU-based framework that provides a systematic DIR validation by generating (a) model-guided synthetic CTs representing posture and physiological changes, and (b) model-guided landmark-based validation. Method: The GPU-based framework was developed to generate massive mass-spring biomechanical models from patient simulation CTs and contoured structures. The biomechanical model represented soft tissue deformations for known rigid skeletal motion. Posture changes were simulated by articulating skeletal anatomy, which subsequently applied elastic corrective forces upon the soft tissue. Physiological changes such as tumor regression and weight loss were simulated in a biomechanically precise manner. Synthetic CT data was then generated from the deformed anatomy. The initial and final positions for one hundred randomly-chosen mass elements inside each of the internal contoured structures were recorded as ground truth data. The process was automated to create 45 synthetic CT datasets for a given patient CT. For instance, the head rotation was varied between +/− 4 degrees along each axis, and tumor volumes were systematically reduced up to 30%. Finally, the original CT and deformed synthetic CT were registered using an optical flow based DIR. Results: Each synthetic data creation took approximately 28 seconds of computation time. The number of landmarks per data set varied between two and three thousand. The validation method is able to perform sub-voxel analysis of the DIR, and report the results by structure, giving a much more in-depth investigation of the error. Conclusions: We presented a GPU-based high-resolution biomechanical head and neck model to validate DIR algorithms by generating CT-equivalent 3D volumes with simulated posture changes and physiological regression.

  19. Rasch validation of the Arabic version of the lower extremity functional scale.

    PubMed

    Alnahdi, Ali H

    2018-02-01

The purpose of this study was to examine the internal construct validity of the Arabic version of the Lower Extremity Functional Scale (20-item Arabic LEFS) using Rasch analysis. Patients (n = 170) with lower extremity musculoskeletal dysfunction were recruited. Rasch analysis of the 20-item Arabic LEFS was performed. Once the initial Rasch analysis indicated that the 20-item Arabic LEFS did not fit the Rasch model, follow-up analyses were conducted to improve the fit of the scale to the Rasch measurement model. These modifications included removing misfitting individuals, changing item scoring structure, removing misfitting items, and addressing bias caused by response dependency between items and differential item functioning (DIF). Initial analysis indicated deviation of the 20-item Arabic LEFS from the Rasch model. Disordered thresholds in eight items and response dependency between six items were detected, and the scale as a whole did not meet the requirement of unidimensionality. Refinements led to a 15-item Arabic LEFS that demonstrated excellent internal consistency (person separation index [PSI] = 0.92) and satisfied all the requirements of the Rasch model. Rasch analysis did not support the 20-item Arabic LEFS as a unidimensional measure of lower extremity function. The refined 15-item Arabic LEFS met all the requirements of the Rasch model and hence is a valid objective measure of lower extremity function. The Rasch-validated 15-item Arabic LEFS needs to be further tested in an independent sample to confirm its fit to the Rasch measurement model. Implications for Rehabilitation The validity of the 20-item Arabic Lower Extremity Functional Scale to measure lower extremity function is not supported. The 15-item Arabic version of the LEFS is a valid measure of lower extremity function and can be used to quantify lower extremity function in patients with lower extremity musculoskeletal disorders.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rainer, Leo I.; Hoeschele, Marc A.; Apte, Michael G.

This report addresses the results of detailed monitoring completed under Program Element 6 of Lawrence Berkeley National Laboratory's High Performance Commercial Building Systems (HPCBS) PIER program. The purpose of the Energy Simulations and Projected State-Wide Energy Savings project is to develop reasonable energy performance and cost models for high performance relocatable classrooms (RCs) across California climates. A key objective of the energy monitoring was to validate DOE2 simulations for comparison to initial DOE2 performance projections. The validated DOE2 model was then used to develop statewide savings projections by modeling base case and high performance RC operation in the 16 California climate zones. The primary objective of this phase of work was to utilize detailed field monitoring data to modify DOE2 inputs and generate performance projections based on a validated simulation model. Additional objectives include the following: (1) Obtain comparative performance data on base case and high performance HVAC systems to determine how they are operated, how they perform, and how the occupants respond to the advanced systems. This was accomplished by installing both HVAC systems side-by-side (i.e., one per module of a standard two module, 24 ft by 40 ft RC) on the study RCs and switching HVAC operating modes on a weekly basis. (2) Develop projected statewide energy and demand impacts based on the validated DOE2 model. (3) Develop cost effectiveness projections for the high performance HVAC system in the 16 California climate zones.

  1. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques like expert systems and qualitative modelling are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
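The fault propagation digraphs described here lend themselves to a compact sketch: nodes are failure modes, edge weights are minimum propagation delays, and a shortest-path pass (Dijkstra) gives the earliest time each downstream failure can appear once a source fails. The toy fault model and its node names below are illustrative only, not drawn from the paper:

```python
import heapq

def propagation_times(graph, source):
    """Earliest time each failure mode can be reached from `source`,
    taking the minimum total propagation delay over all paths (Dijkstra)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy fault model: edges are (downstream failure mode, min delay in seconds)
graph = {
    "pump_seal_leak": [("pressure_drop", 2), ("coolant_loss", 5)],
    "pressure_drop": [("flow_alarm", 1)],
    "coolant_loss": [("temp_alarm", 3)],
}
times = propagation_times(graph, "pump_seal_leak")
```

Diagnosis then runs the query in reverse: a candidate failure source is consistent with the observed alarms only if the alarms' arrival order and spacing fit these propagation-time bounds.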

  2. The Dartmouth Database of Children’s Faces: Acquisition and Validation of a New Face Stimulus Set

    PubMed Central

    Dalrymple, Kirsten A.; Gomez, Jesse; Duchaine, Brad

    2013-01-01

Facial identity and expression play critical roles in our social lives. Faces are therefore frequently used as stimuli in a variety of areas of scientific research. Although several extensive and well-controlled databases of adult faces exist, few databases include children’s faces. Here we present the Dartmouth Database of Children’s Faces, a set of photographs of 40 male and 40 female Caucasian children between 6 and 16 years of age. Models posed eight facial expressions and were photographed from five camera angles under two lighting conditions. Models wore black hats and black gowns to minimize extra-facial variables. To validate the images, independent raters identified facial expressions, rated their intensity, and provided an age estimate for each model. The Dartmouth Database of Children’s Faces is freely available for research purposes and can be downloaded by contacting the corresponding author by email. PMID:24244434

  3. Multiphysics Simulation of Welding-Arc and Nozzle-Arc System: Mathematical-Model, Solution-Methodology and Validation

    NASA Astrophysics Data System (ADS)

    Pawar, Sumedh; Sharma, Atul

    2018-01-01

This work presents a mathematical model and solution methodology for a multiphysics engineering problem on arc formation during welding and inside a nozzle. A general-purpose commercial CFD solver, ANSYS FLUENT 13.0.0, is used in this work. Arc formation involves strongly coupled gas dynamics and electro-dynamics, simulated by solution of the coupled Navier-Stokes equations, Maxwell's equations and the radiation heat-transfer equation. Validation of the present numerical methodology is demonstrated with an excellent agreement with the published results. The developed mathematical model and the user defined functions (UDFs) are independent of the geometry and are applicable to any system that involves arc formation in a 2D axisymmetric coordinate system. The high-pressure flow of SF6 gas in the nozzle-arc system resembles the arc chamber of an SF6 gas circuit breaker; thus, this methodology can be extended to simulate the arcing phenomenon during current interruption.

  4. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    NASA Astrophysics Data System (ADS)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy regarding the resulting textures.

  5. Validation of a Monte Carlo simulation of the Inveon PET scanner using GATE

    NASA Astrophysics Data System (ADS)

    Lu, Lijun; Zhang, Houjin; Bian, Zhaoying; Ma, Jianhua; Feng, Qiangjin; Chen, Wufan

    2016-08-01

The purpose of this study is to validate the application of the GATE (Geant4 Application for Tomographic Emission) Monte Carlo simulation toolkit for modeling the performance characteristics of the Siemens Inveon small animal PET system. The simulation results were validated against experimental/published data in accordance with the NEMA NU-4 2008 protocol for standardized evaluation of spatial resolution, sensitivity, scatter fraction (SF) and noise equivalent count rate (NECR) of a preclinical PET system. An agreement of less than 18% was obtained between the radial, tangential and axial spatial resolutions of the simulated and experimental results. The simulated peak NECR of the mouse-size phantom agreed with the experimental result, while for the rat-size phantom the simulated value was higher than the experimental result. The simulated and experimental SFs of the mouse- and rat-size phantoms both agreed within 2%. These results demonstrate the feasibility of our GATE model for accurately simulating, within certain limits, all major performance characteristics of the Inveon PET system.
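The scatter fraction and NECR figures compared above follow the standard NEMA definitions: SF = S/(S + T) and NECR = T²/(T + S + R), where T, S and R are the true, scattered and random coincidence rates. A minimal sketch with illustrative count rates (not the study's measurements):

```python
def necr(trues, scatter, randoms):
    """Noise-equivalent count rate: NECR = T^2 / (T + S + R)."""
    return trues ** 2 / (trues + scatter + randoms)

def scatter_fraction(scatter, trues):
    """Scatter fraction: SF = S / (S + T)."""
    return scatter / (scatter + trues)

# Illustrative rates in kcps: 80 trues, 15 scattered, 5 randoms
peak = necr(80, 15, 5)            # -> 64.0 kcps
sf = scatter_fraction(15, 80)     # -> ~0.158
```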

  6. Trusting Teachers' Judgement: Research Evidence of the Reliability and Validity of Teachers' Assessment Used for Summative Purposes

    ERIC Educational Resources Information Center

    Harlen, Wynne

    2005-01-01

    This paper summarizes the findings of a systematic review of research on the reliability and validity of teachers' assessment used for summative purposes. In addition to the main question, the review also addressed the question "What conditions affect the reliability and validity of teachers' summative assessment?" The initial search for studies…

  7. Validation of an Instrument to Measure High School Students' Attitudes toward Fitness Testing

    ERIC Educational Resources Information Center

    Mercier, Kevin; Silverman, Stephen

    2014-01-01

    Purpose: The purpose of this investigation was to develop an instrument that has scores that are valid and reliable for measuring students' attitudes toward fitness testing. Method: The method involved the following steps: (a) an elicitation study, (b) item development, (c) a pilot study, and (d) a validation study. The pilot study included 427…

  8. Earth as an Extrasolar Planet: Earth Model Validation Using EPOXI Earth Observations

    PubMed Central

    Meadows, Victoria S.; Crisp, David; Deming, Drake; A'Hearn, Michael F.; Charbonneau, David; Livengood, Timothy A.; Seager, Sara; Barry, Richard K.; Hearty, Thomas; Hewagama, Tilak; Lisse, Carey M.; McFadden, Lucy A.; Wellnitz, Dennis D.

    2011-01-01

    Abstract The EPOXI Discovery Mission of Opportunity reused the Deep Impact flyby spacecraft to obtain spatially and temporally resolved visible photometric and moderate resolution near-infrared (NIR) spectroscopic observations of Earth. These remote observations provide a rigorous validation of whole-disk Earth model simulations used to better understand remotely detectable extrasolar planet characteristics. We have used these data to upgrade, correct, and validate the NASA Astrobiology Institute's Virtual Planetary Laboratory three-dimensional line-by-line, multiple-scattering spectral Earth model. This comprehensive model now includes specular reflectance from the ocean and explicitly includes atmospheric effects such as Rayleigh scattering, gas absorption, and temperature structure. We have used this model to generate spatially and temporally resolved synthetic spectra and images of Earth for the dates of EPOXI observation. Model parameters were varied to yield an optimum fit to the data. We found that a minimum spatial resolution of ∼100 pixels on the visible disk, and four categories of water clouds, which were defined by using observed cloud positions and optical thicknesses, were needed to yield acceptable fits. The validated model provides a simultaneous fit to Earth's lightcurve, absolute brightness, and spectral data, with a root-mean-square (RMS) error of typically less than 3% for the multiwavelength lightcurves and residuals of ∼10% for the absolute brightness throughout the visible and NIR spectral range. We have extended our validation into the mid-infrared by comparing the model to high spectral resolution observations of Earth from the Atmospheric Infrared Sounder, obtaining a fit with residuals of ∼7% and brightness temperature errors of less than 1 K in the atmospheric window. 
For the purpose of understanding the observable characteristics of the distant Earth at arbitrary viewing geometry and observing cadence, our validated forward model can be used to simulate Earth's time-dependent brightness and spectral properties for wavelengths from the far ultraviolet to the far infrared. Key Words: Astrobiology—Extrasolar terrestrial planets—Habitability—Planetary science—Radiative transfer. Astrobiology 11, 393–408. PMID:21631250

  9. Factorial Validity of the Decisional Involvement Scale as a Measure of Content and Context of Nursing Practice.

    PubMed

    Yurek, Leo A; Havens, Donna S; Hays, Spencer; Hughes, Linda C

    2015-10-01

    Decisional involvement is widely recognized as an essential component of a professional nursing practice environment. In recent years, researchers have added to the conceptualization of nurses' role in decision-making to differentiate between the content and context of nursing practice. Yet, instruments that clearly distinguish between these two dimensions of practice are lacking. The purpose of this study was to examine the factorial validity of the Decisional Involvement Scale (DIS) as a measure of both the content and context of nursing practice. This secondary analysis was conducted using data from a longitudinal action research project to improve the quality of nursing practice and patient care in six hospitals (N = 1,034) in medically underserved counties of Pennsylvania. A cross-sectional analysis of baseline data from the parent study was used to compare the factor structure of two models (one nested within the other) using confirmatory factor analysis. Although a comparison of the two models indicated that the addition of second-order factors for the content and context of nursing practice improved model fit, neither model provided optimal fit to the data. Additional model-generating research is needed to develop the DIS as a valid measure of decisional involvement for both the content and context of nursing practice. © 2015 Wiley Periodicals, Inc.

  10. Developing physics learning media using 3D cartoon

    NASA Astrophysics Data System (ADS)

    Wati, M.; Hartini, S.; Hikmah, N.; Mahtari, S.

    2018-03-01

This study focuses on developing physics learning media using 3D cartoon on the static fluid topic. The purpose of this study is to describe: (1) the validity of the learning media, (2) the practicality of the learning media, and (3) the effectiveness of the learning media. This study is a research and development effort using the ADDIE model. The media were implemented in class XI Science of SMAN 1 Pulau Laut Timur. The data were obtained from the validation sheet of the learning media, a questionnaire, and a test of learning outcomes. The results showed that the learning media were rated (1) valid, (2) practical, and (3) effective. It is concluded that learning using 3D cartoon on the static fluid topic is suitable for use in instruction.

  11. Does IQ Really Predict Job Performance?

    PubMed Central

    Richardson, Ken; Norgate, Sarah H.

    2015-01-01

    IQ has played a prominent part in developmental and adult psychology for decades. In the absence of a clear theoretical model of internal cognitive functions, however, construct validity for IQ tests has always been difficult to establish. Test validity, therefore, has always been indirect, by correlating individual differences in test scores with what are assumed to be other criteria of intelligence. Job performance has, for several reasons, been one such criterion. Correlations of around 0.5 have been regularly cited as evidence of test validity, and as justification for the use of the tests in developmental studies, in educational and occupational selection and in research programs on sources of individual differences. Here, those correlations are examined together with the quality of the original data and the many corrections needed to arrive at them. It is concluded that considerable caution needs to be exercised in citing such correlations for test validation purposes. PMID:26405429

  12. Navigation of guidewires and catheters in the body during intervention procedures: a review of computer-based models.

    PubMed

    Sharei, Hoda; Alderliesten, Tanja; van den Dobbelsteen, John J; Dankelman, Jenny

    2018-01-01

Guidewires and catheters are used during minimally invasive interventional procedures to traverse the vascular system and access the desired position. Computer models are increasingly being used to predict the behavior of these instruments. This information can be used to choose the right instrument for each case and increase the success rate of the procedure. Moreover, a designer can test the performance of instruments before the manufacturing phase. A precise model of the instrument is also useful for a training simulator. Therefore, to identify the strengths and weaknesses of different approaches used to model guidewires and catheters, a literature review of the existing techniques has been performed. The literature search was carried out in Google Scholar and Web of Science and limited to English for the period 1960 to 2017. For a computer model to be used in practice, it should be sufficiently realistic and, for some applications, real time. Therefore, we compared different modeling techniques with regard to these requirements, and the purposes of these models are reviewed. Important factors that influence the interaction between the instruments and the vascular wall are discussed. Finally, different ways used to evaluate and validate the models are described. We classified the developed models based on their formulation into finite-element method (FEM), mass-spring model (MSM), and rigid multibody links. Despite its numerical stability, FEM requires a very high computational effort. On the other hand, MSM is faster but there is a risk of numerical instability. The rigid multibody links method has a simple structure and is easy to implement. However, as the length of the instrument is increased, the model becomes slower. For the level of realism of the simulation, friction and collision were incorporated as the most influential forces applied to the instrument during propagation within the vascular system.
To evaluate accuracy, most of the studies compared the simulation results with the outcome of physical experiments on a variety of phantom models, and only a limited number of studies assessed face validity. Although a subset of the validated models is considered sufficiently accurate for the specific task for which it was developed and, therefore, is already being used in practice, these models are still under ongoing development. Realism and computation time are two important requirements in catheter and guidewire modeling; however, the reviewed studies made a trade-off between them depending on the purpose of their model. Moreover, due to the complexity of the interaction with the vascular system, some assumptions have been made regarding the properties of both the instruments and the vascular system. Some validation studies have been reported, but without a consistent experimental methodology.

  13. Development and Construct Validation of the Interprofessional Attitudes Scale

    PubMed Central

    Norris, Jeffrey; Carpenter, Joan G.; Eaton, Jacqueline; Guo, Jia-Wen; Lassche, Madeline; Pett, Marjorie A.; Blumenthal, Donald K.

    2015-01-01

    Purpose Training of health professionals requires development of interprofessional competencies and assessment of these competencies. No validated tools exist to assess all four competency domains described in the 2011 Core Competencies for Interprofessional Collaborative Practice (the IPEC Report). The purpose of this study was to develop and validate a scale based on the IPEC competency domains that assesses interprofessional attitudes of students in the health professions. Method In 2012, a survey tool was developed and administered to 1,549 students from the University of Utah Health Science Center, an academic health center composed of four schools and colleges (Health, Medicine, Nursing, and Pharmacy). Exploratory and confirmatory factor analyses (EFA and CFA) were performed to validate the assessment tool, eliminate redundant questions, and identify subscales. Results The EFA and CFA focused on aligning subscales with the IPEC core competencies and demonstrating good construct validity and internal consistency reliability. A response rate of 45% (n = 701) was obtained. Responses with complete data (n = 678) were randomly split into two datasets, which were independently analyzed using EFA and CFA. The EFA produced a 27-item scale with five subscales (Cronbach’s alpha coefficients: 0.62 to 0.92). CFA indicated the content of the five subscales was consistent with the EFA model. Conclusions The Interprofessional Attitudes Scale (IPAS) is a novel tool that, compared to previous tools, better reflects current trends in interprofessional competencies. The IPAS should be useful to health sciences educational institutions and others training people to work collaboratively in interprofessional teams. PMID:25993280
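
    Internal consistency figures like the per-subscale Cronbach's alpha coefficients quoted above (0.62 to 0.92) are computed directly from item-level responses. The sketch below is illustrative only: the items and ratings are invented, not IPAS data.

```python
# Cronbach's alpha: ratio of summed item variances to total-score variance,
# rescaled by k/(k-1). Illustrative sketch with made-up Likert ratings.

def cronbach_alpha(items):
    """items: list of per-item score lists, one entry per respondent each."""
    k = len(items)                      # number of items in the subscale
    n = len(items[0])                   # number of respondents

    def variance(xs):                   # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(variance(col) for col in items)
    totals = [sum(items[i][r] for i in range(k)) for r in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Three hypothetical Likert-type items answered by five respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)   # ≈ 0.886 for this made-up data
```

    Highly correlated items drive the total-score variance up relative to the item variances, which is what pushes alpha toward 1.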

  14. External validation of EPIWIN biodegradation models.

    PubMed

    Posthumus, R; Traas, T P; Peijnenburg, W J G M; Hulzebos, E M

    2005-01-01

    The BIOWIN biodegradation models were evaluated for their suitability for regulatory purposes. BIOWIN includes the linear and non-linear BIODEG and MITI models for estimating the probability of rapid aerobic biodegradation and an expert survey model for primary and ultimate biodegradation estimation. Experimental biodegradation data for 110 newly notified substances were compared with the estimations of the different models. The models were applied separately and in combinations to determine which model(s) showed the best performance. The results of this study were compared with the results of other validation studies and other biodegradation models. The BIOWIN models predict not-readily biodegradable substances with much higher accuracy than readily biodegradable ones. In view of the high environmental concern posed by persistent chemicals, and in view of the large number of not-readily biodegradable chemicals compared to readily biodegradable ones, a model is preferred that gives a minimum of false positives without a correspondingly high percentage of false negatives. A combination of the BIOWIN models (BIOWIN2 or BIOWIN6) showed the highest predictive value for not-ready biodegradability. However, the highest score for overall predictivity with the lowest percentage of false predictions was achieved by applying BIOWIN3 (pass level 2.75) and BIOWIN6.
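
    The trade-off described above, minimizing false positives (persistent substances wrongly flagged as readily biodegradable) without inflating false negatives, is why combining two models conservatively can help: classify a substance as "ready" only when both models agree. The sketch below uses invented binary predictions, not actual BIOWIN output.

```python
# Conservative AND-combination of two screening models: predict "readily
# biodegradable" only when BOTH models do, reducing false positives.
# Truth labels and predictions are invented for illustration.

def rates(truth, pred):
    """Return (false_positive_rate, false_negative_rate); positive class =
    'readily biodegradable'."""
    fp = sum(1 for t, p in zip(truth, pred) if not t and p)
    fn = sum(1 for t, p in zip(truth, pred) if t and not p)
    neg = sum(1 for t in truth if not t)
    pos = sum(1 for t in truth if t)
    return fp / neg, fn / pos

truth   = [1, 1, 0, 0, 0, 1, 0, 0]   # experimental ready-biodegradability
model_a = [1, 1, 1, 0, 0, 0, 0, 1]   # two thresholded probability models
model_b = [1, 0, 1, 0, 0, 1, 0, 0]
combined = [a and b for a, b in zip(model_a, model_b)]

fpr_a, fnr_a = rates(truth, model_a)
fpr_c, fnr_c = rates(truth, combined)
```

    The combination trades a lower false-positive rate for a higher false-negative rate, matching the regulatory preference stated in the abstract.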

  15. Validity of the Malay version of the Internet Addiction Test: a study on a group of medical students in Malaysia.

    PubMed

    Guan, Ng Chong; Isa, Saramah Mohammed; Hashim, Aili Hanim; Pillai, Subash Kumar; Harbajan Singh, Manveen Kaur

    2015-03-01

    The use of the Internet has been increasing dramatically over the past decade in Malaysia. Excessive usage of the Internet has led to a phenomenon called Internet addiction. There is a need for a reliable, valid, and simple-to-use scale to measure Internet addiction in the Malaysian population for clinical practice and research purposes. The aim of this study was to validate the Malay version of the Internet Addiction Test, using a sample of 162 medical students. The instrument displayed good internal consistency (Cronbach's α = .91), parallel reliability (intraclass coefficient = .88, P < .001), and concurrent validity with the Compulsive Internet Use Scale (Pearson's correlation = .84, P < .001). Receiver operating characteristic analysis showed that 43 was the optimal cutoff score to discriminate students with and without Internet dependence. Principal component analysis with varimax rotation identified a 5-factor model. The Malay version of the Internet Addiction Test appeared to be a valid instrument for assessing Internet addiction in Malaysian university students. © 2012 APJPH.
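
    An optimal cutoff like the score of 43 reported above is commonly chosen from a receiver operating characteristic analysis by maximizing Youden's J (sensitivity + specificity − 1). The sketch below uses fabricated scores and dependence labels, not the study's data.

```python
# ROC-style cutoff selection via Youden's J. Scores at or above the cutoff
# are classified as positive (dependent). All data below are invented.

def youden_optimal_cutoff(scores, labels):
    """Return (cutoff, J) maximizing J = sensitivity + specificity - 1."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if y and s >= cut)
        tn = sum(1 for s, y in zip(scores, labels) if not y and s < cut)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical scale totals: dependent students tend to score higher
scores = [20, 25, 30, 38, 41, 44, 47, 52, 60, 65]
labels = [0,  0,  0,  0,  0,  1,  1,  1,  1,  1]
cutoff, j = youden_optimal_cutoff(scores, labels)
```

    With these made-up, perfectly separated groups the procedure returns the lowest positive-group score as the cutoff; real scale data overlap, so J peaks below 1.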

  16. Content and Face Validation of a Curriculum for Ultrasonic Propulsion of Calculi in a Human Renal Model

    PubMed Central

    Dunmire, Barbrina; Cunitz, Bryan W.; He, Xuemei; Sorensen, Mathew D.; Harper, Jonathan D.; Bailey, Michael R.; Lendvay, Thomas S.

    2014-01-01

    Purpose: Ultrasonic propulsion to reposition urinary tract calculi requires knowledge of ultrasound image capture, device manipulation, and interpretation. The purpose of this study was to validate a cognitive and technical skills curriculum to teach urologists ultrasonic propulsion to reposition kidney stones in tissue phantoms. Materials and Methods: Ten board-certified urologists recruited from a single institution underwent a didactic session on renal ultrasound imaging. Subjects completed technical skills modules in tissue phantoms, including kidney imaging, pushing a stone through a translucent maze, and repositioning a lower pole calyceal stone. Objective cognitive and technical performance metrics were recorded. Subjects completed a questionnaire to ascertain face and content validity on a five-point Likert scale. Results: Eight urologists (80%) had never attended a previous ultrasound course, and nine (90%) performed renal ultrasounds less frequently than every 6 months. Mean cognitive skills scores improved from 55% to 91% (p<0.0001) on pre- and post-didactic tests. In the kidney phantom, 10 subjects (100%) repositioned the lower pole calyceal stone to at least the lower pole infundibulum, while 9 (90%) successfully repositioned the stone to the renal pelvis. A mean ± SD of 15.7 ± 13.3 pushes was required to complete the task, over an average of 4.6 ± 2.2 minutes. Urologists rated the curriculum's effectiveness and realism as a training tool at mean scores of 4.6/5.0 and 4.1/5.0, respectively. Conclusions: The curriculum for ultrasonic propulsion is effective and useful for training urologists with limited ultrasound proficiency in stone repositioning technique. Further studies in animate and human models will be required to assess predictive validity. PMID:24228719

  17. Clinical Nomograms to Predict Stone-Free Rates after Shock-Wave Lithotripsy: Development and Internal-Validation

    PubMed Central

    Kim, Jung Kwon; Ha, Seung Beom; Jeon, Chan Hoo; Oh, Jong Jin; Cho, Sung Yong; Oh, Seung-June; Kim, Hyeon Hoe; Jeong, Chang Wook

    2016-01-01

    Purpose Shock-wave lithotripsy (SWL) is accepted as the first-line treatment modality for uncomplicated upper urinary tract stones; however, validated prediction models with regard to stone-free rates (SFRs) are still needed. We aimed to develop nomograms predicting SFRs after the first and within the third session of SWL. Computed tomography (CT) information was also modeled for constructing nomograms. Materials and Methods From March 2006 to December 2013, 3028 patients were treated with SWL for ureter and renal stones at our three tertiary institutions. Four cohorts were constructed: Total-development, Total-validation, CT-development, and CT-validation cohorts. The nomograms were developed using multivariate logistic regression models with significant variables selected by a univariate logistic regression model. A C-index was used to assess the discrimination accuracy of the nomograms, and calibration plots were used to analyze the consistency of prediction. Results The SFR after the first and within the third session was 48.3% and 68.8%, respectively. Significant variables were sex, stone location, stone number, and maximal stone diameter in the Total-development cohort; mean Hounsfield unit (HU) and grade of hydronephrosis (HN) were additional parameters in the CT-development cohort. The C-indices were 0.712 and 0.723 for after the first and within the third session of SWL in the Total-development cohort, and 0.755 and 0.756 in the CT-development cohort, respectively. The calibration plots showed good correspondence. Conclusions We constructed and validated nomograms to predict SFR after SWL. To the best of our knowledge, these are the first graphical nomograms to be modeled with CT information. They may be useful for patient counseling and treatment decision-making. PMID:26890006
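
    The C-index used above to assess discrimination is, for a binary outcome, the probability that a randomly chosen stone-free / not-stone-free patient pair is ranked correctly by the model, with ties counted one half. A minimal sketch with invented predictions and outcomes, not the study's cohorts:

```python
# C-index (concordance) for a binary outcome: fraction of discordant-outcome
# pairs where the model gives the higher probability to the positive case.
# Predicted probabilities and outcomes below are invented for illustration.

def c_index(pred, outcome):
    conc = ties = pairs = 0
    for i in range(len(pred)):
        for j in range(len(pred)):
            if outcome[i] == 1 and outcome[j] == 0:   # one of each outcome
                pairs += 1
                if pred[i] > pred[j]:
                    conc += 1
                elif pred[i] == pred[j]:
                    ties += 1
    return (conc + 0.5 * ties) / pairs

pred    = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]   # predicted stone-free probability
outcome = [1,   1,   0,   1,   0,   0]     # 1 = stone-free after treatment
c = c_index(pred, outcome)
```

    A value of 0.5 is chance-level ranking; the 0.71-0.76 range reported above indicates moderate discrimination.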

  18. A Taxonomy of Instructional Objectives for Developmentally Disabled Persons: Personal Maintenance and Development: Homemaking and Community Life; Leisure; and Travel Domains. Working Paper 85-2. COMPETE: Community-Based Model for Public-School Exit and Transition to Employment.

    ERIC Educational Resources Information Center

    Dever, Richard B.

    The purpose of Project COMPETE is to use previous research and exemplary practices to develop and validate a model and training sequence to assist retarded youth to make the transition from school to employment in the most competitive environment possible. The taxonomy described in this project working paper focuses on instructional objectives in…

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynoso, F; Cho, S

    Purpose: To develop and validate a Monte Carlo (MC) model of a Phillips RT-250 orthovoltage unit to test various beam spectrum modulation strategies for in vitro/vivo studies. A model of this type would enable the production of unconventional beams from a typical orthovoltage unit for novel therapeutic applications such as gold nanoparticle-aided radiotherapy. Methods: The MCNP5 code system was used to create a MC model of the head of the RT-250 and a 30 × 30 × 30 cm³ water phantom. For the x-ray machine head, the current model includes the vacuum region, beryllium window, collimators, inherent filters, and exterior steel housing. For increased computational efficiency, the primary x-ray spectrum from the target was calculated from a well-validated analytical software package. Calculated percentage-depth-dose (PDD) values and photon spectra were validated against experimental data from film and Compton-scatter spectrum measurements. Results: The model was validated for three common settings of the machine, namely 250 kVp (0.25 mm Cu), 125 kVp (2 mm Al), and 75 kVp (2 mm Al). The MC results for the PDD curves were compared with film measurements and showed good agreement for all depths, with a maximum difference of 4% around dmax and under 2.5% for all other depths. The primary photon spectra were also measured and compared with the MC results, showing reasonable agreement between the two and validating the input spectra and the final spectra as predicted by the current MC model. Conclusion: The current MC model accurately predicted the dosimetric and spectral characteristics of each beam from the RT-250 orthovoltage unit, demonstrating its applicability and reliability for beam spectrum modulation tasks. It accomplished this without the need to model the bremsstrahlung x-ray production from the target, while improving computational efficiency by at least two orders of magnitude. Supported by DOD/PCRP grant W81XWH-12-1-0198.
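
    An agreement check of the kind quoted above (maximum PDD difference at matched depths) can be sketched as follows. Both curves are normalized to their own maximum before comparison; the depth-dose values below are invented, not RT-250 data.

```python
# Compare a simulated percentage-depth-dose (PDD) curve against measurement:
# normalize each to 100% at its maximum, then report the largest absolute
# difference in percentage points. All numbers are illustrative only.

def pdd_max_difference(sim, meas):
    """sim, meas: relative dose values at matching depths."""
    sim_pdd  = [100.0 * d / max(sim) for d in sim]
    meas_pdd = [100.0 * d / max(meas) for d in meas]
    return max(abs(s - m) for s, m in zip(sim_pdd, meas_pdd))

sim  = [0.82, 1.00, 0.96, 0.84, 0.70, 0.57]   # relative dose vs depth (MC)
meas = [0.80, 1.00, 0.95, 0.86, 0.69, 0.56]   # relative dose vs depth (film)
diff = pdd_max_difference(sim, meas)
```

    A pass criterion like the abstract's (e.g. differences under a few percent at all depths) is then a single comparison against `diff`.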

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, Seán, E-mail: walshsharp@gmail.com; Department of Oncology, Gray Institute for Radiation Oncology and Biology, University of Oxford, Oxford OX3 7DQ; Roelofs, Erik

    Purpose: A fully heterogeneous population averaged mechanistic tumor control probability (TCP) model is appropriate for the analysis of external beam radiotherapy (EBRT). This has been accomplished for EBRT photon treatment of intermediate-risk prostate cancer. Extending the TCP model to low- and high-risk patients would be beneficial in terms of overall decision making. Furthermore, different radiation treatment modalities such as protons and carbon-ions are becoming increasingly available. Consequently, there is a need for a complete TCP model. Methods: A TCP model was fitted and validated to a primary endpoint of 5-year biological no evidence of disease clinical outcome data obtained from a review of the literature for low, intermediate, and high-risk prostate cancer patients (5218 patients fitted, 1088 patients validated), treated by photons, protons, or carbon-ions. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Treatment regimens include standard fractionation and hypofractionation treatments. Residual analysis and goodness of fit statistics were applied. Results: The TCP model achieves a good level of fit overall: linear regression results in a p-value of <0.00001, with an adjusted weighted R² value of 0.77 and a weighted root mean squared error (wRMSE) of 1.2% against the fitted clinical outcome data. Validation of the model utilizing three independent datasets obtained from the literature resulted in an adjusted weighted R² value of 0.78 and a wRMSE of less than 1.8% against the validation clinical outcome data. The weighted mean absolute residual across the entire dataset is found to be 5.4%. Conclusions: This TCP model, fitted and validated to clinical outcome data, appears to be an appropriate model for the inclusion of all clinical prostate cancer risk categories, and allows evaluation of current EBRT modalities with regard to tumor control prediction.
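
    Goodness-of-fit statistics of the kind quoted above (weighted RMSE and weighted R², with cohorts weighted by patient count) can be computed as below. All cohort numbers are invented for illustration; this is not the study's data or its exact adjustment procedure.

```python
# Weighted RMSE and (unadjusted) weighted R² of model predictions against
# observed per-cohort outcome rates, weighting each cohort by its size.
# All values below are invented for illustration.
import math

def weighted_fit(obs, pred, weights):
    wsum = sum(weights)
    ss_res = sum(w * (o - p) ** 2 for o, p, w in zip(obs, pred, weights))
    wmean = sum(w * o for o, w in zip(obs, weights)) / wsum
    ss_tot = sum(w * (o - wmean) ** 2 for o, w in zip(obs, weights))
    return math.sqrt(ss_res / wsum), 1.0 - ss_res / ss_tot

obs     = [0.62, 0.75, 0.83, 0.90]   # observed 5-yr control rate per cohort
pred    = [0.60, 0.77, 0.82, 0.91]   # model TCP predictions
weights = [120,  300,  250,  80]     # patients per cohort
wrmse, wr2 = weighted_fit(obs, pred, weights)
```

    Weighting by cohort size keeps a small, noisy series from dominating the fit statistic, which matters when pooling outcome data from many publications.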

  1. Development and testing of the CALDs and CLES+T scales for international nursing students' clinical learning environments.

    PubMed

    Mikkonen, Kristina; Elo, Satu; Miettunen, Jouko; Saarikoski, Mikko; Kääriäinen, Maria

    2017-08-01

    The purpose of this study was to develop and test the psychometric properties of the new Cultural and Linguistic Diversity scale, which is designed to be used with the newly validated Clinical Learning Environment, Supervision and Nurse Teacher scale for assessing international nursing students' clinical learning environments. In various developed countries, clinical placements are known to present challenges in the professional development of international nursing students. A cross-sectional survey. Data were collected from eight Finnish universities of applied sciences offering nursing degree courses taught in English during 2015-2016. All the relevant students (N = 664) were invited and 50% chose to participate. Of the total data submitted by the participants, 28% were used for scale validation. The construct validity of the two scales was tested by exploratory factor analysis, while their validity with respect to convergence and discriminability was assessed using Spearman's correlation. Construct validation of the Clinical Learning Environment, Supervision and Nurse Teacher scale yielded an eight-factor model with 34 items, while validation of the Cultural and Linguistic Diversity scale yielded a five-factor model with 21 items. A new scale was developed to improve evidence-based mentorship of international nursing students in clinical learning environments. The instrument will be useful to educators seeking to identify factors that affect the learning of international students. © 2017 John Wiley & Sons Ltd.

  2. Factors Affecting Acceptance & Use of ReWIND: Validating the Extended Unified Theory of Acceptance and Use of Technology

    ERIC Educational Resources Information Center

    Nair, Pradeep Kumar; Ali, Faizan; Leong, Lim Chee

    2015-01-01

    Purpose: This study aims to explain the factors affecting students' acceptance and usage of a lecture capture system (LCS)--ReWIND--in a Malaysian university based on the extended unified theory of acceptance and use of technology (UTAUT2) model. Technological advances have become an important feature of universities' plans to improve the…

  3. Rasch Analysis of the Bruininks-Oseretsky Test of Motor Proficiency--Second Edition in Intellectual Disabilities

    ERIC Educational Resources Information Center

    Wuang, Yee-Pay; Lin, Yueh-Hsien; Su, Chwen-Yng

    2009-01-01

    The Bruininks-Oseretsky Test of Motor Proficiency-Second Edition (BOT-2) is widely used to assess motor skills for both clinical and research purposes; however, its validity has not been adequately assessed in intellectual disabilities (ID). This study used partial credit Rasch model to examine the measurement properties of the BOT-2 among 446…

  4. Adaptation of Teachers' Conceptions and Practices of Formative Assessment Scale into Turkish Culture and a Structural Equation Modeling

    ERIC Educational Resources Information Center

    Karaman, Pinar; Sahin, Çavus

    2017-01-01

    The purpose of this study was to adapt Teachers' Conceptions and Practices of Formative Assessment Scale (TCPFS) based on the Theory of Planned Behavior (TPB) into Turkish culture and apply the TPB to examine teachers' intentions and behaviors regarding formative assessment. After examining linguistic validity of the scale, Turkish scale was…

  5. Differences in How Mothers and Fathers Monitor Sugar-Sweetened Beverages for Their Young Children (7-12 Years)

    ERIC Educational Resources Information Center

    Branscum, Paul; Housely, Alexandra

    2018-01-01

    The purpose of this study was to evaluate differences between how mothers and fathers monitor their children's sugar-sweetened beverages (SSBs; 7-12 years) using constructs from the integrated behavioral model (IBM). Mothers (n = 167) and fathers (n = 117) completed a valid and reliable survey evaluating the extent that they monitored their…

  6. Further Evidence for a Multifaceted Model of Mental Speed: Factor Structure and Validity of Computerized Measures

    ERIC Educational Resources Information Center

    Danthiir, Vanessa; Wilhelm, Oliver; Roberts, Richard D.

    2012-01-01

    The purpose of this study was to replicate the structure of mental speed and relations evidenced with fluid intelligence (Gf) found in a number of recent studies. Specifically, a battery of computerized tasks examined whether results with paper-and-pencil assessments held across different test media. Participants (N = 186) completed the battery,…

  7. Measuring Students' Perceptions of Personal and Social Responsibility and the Relationship to Intrinsic Motivation in Urban Physical Education

    ERIC Educational Resources Information Center

    Li, Weidong; Wright, Paul M.; Rukavina, Paul Bernard; Pickering, Molly

    2008-01-01

    The purpose of the current study was to test the validity and reliability of a two-factor model of the Personal and Social Responsibility Questionnaire (PSRQ) and examine the relationships between perceptions of personal and social responsibility and intrinsic motivation in physical education. Participants were 253 middle school students who…

  8. Improvements to the Sandia CTH Hydro-Code to Support Blast Analysis and Protective Design of Military Vehicles

    DTIC Science & Technology

    2014-04-15

    used for advertising or product endorsement purposes. 6.0 REFERENCES [1] McGlaun, J., Thompson, S. and Elrick, M. “CTH: A Three-Dimensional Shock-Wave...Validation of a Loading Model for Simulating Blast Mine Effects on Armoured Vehicles,” 7 th International LS-DYNA Users Conference, Detroit, MI 2002. [14

  9. Development and Validation of a Brief Version of the Dyadic Adjustment Scale With a Nonparametric Item Analysis Model

    ERIC Educational Resources Information Center

    Sabourin, Stephane; Valois, Pierre; Lussier, Yvan

    2005-01-01

    The main purpose of the current research was to develop an abbreviated form of the Dyadic Adjustment Scale (DAS) with nonparametric item response theory. The authors conducted 5 studies, with a total participation of 8,256 married or cohabiting individuals. Results showed that the item characteristic curves behaved in a monotonically increasing…

  10. Teaching Behavior and Well-Being in Students: Development and Concurrent Validity of an Instrument to Measure Student-Reported Teaching Behavior

    ERIC Educational Resources Information Center

    Pössel, Patrick; Moritz Rudasill, Kathleen; Adelson, Jill L.; Bjerg, Annie C.; Wooldridge, Don T.; Black, Stephanie Winkeljohn

    2013-01-01

    Teaching behavior has important implications for students' emotional well-being. Multiple models suggest students' perceptions of teaching behaviors are more critical than other measures for predicting well-being, yet student-report instruments that measure concrete and specific teaching behavior are limited. The purpose of the present studies is…

  11. The Learning and Study Strategies Inventory-High School Version: Issues of Factorial Invariance Across Gender and Ethnicity

    ERIC Educational Resources Information Center

    Stevens, Tara; Tallent-Runnels, Mary K.

    2004-01-01

    The purpose of this study was to investigate the latent structure of the Learning and Study Strategies Inventory-High School (LASSI-HS) through confirmatory factor analysis and factorial invariance models. A simple modification of the three-factor structure was considered. Using a larger sample, cross-validation was completed and the equality of…

  12. Teaching neurophysiology, neuropharmacology, and experimental design using animal models of psychiatric and neurological disorders.

    PubMed

    Morsink, Maarten C; Dukers, Danny F

    2009-03-01

    Animal models have been widely used for studying the physiology and pharmacology of psychiatric and neurological diseases. The concepts of face, construct, and predictive validity are used as indicators to estimate the extent to which the animal model mimics the disease. Here, we used these three concepts to design a theoretical assignment to integrate the teaching of neurophysiology, neuropharmacology, and experimental design. For this purpose, seven case studies were developed in which animal models for several psychiatric and neurological diseases were described and in which neuroactive drugs used to treat or study these diseases were introduced. Groups of undergraduate students were assigned to one of these case studies and asked to give a classroom presentation in which 1) the disease and underlying pathophysiology are described, 2) face and construct validity of the animal model are discussed, and 3) a pharmacological experiment with the associated neuroactive drug to assess predictive validity is presented. After evaluating the presentations, we found that the students had gained considerable insight into disease phenomenology, its underlying neurophysiology, and the mechanism of action of the neuroactive drug. Moreover, the assignment was very useful in the teaching of experimental design, allowing an in-depth discussion of experimental control groups and the prediction of outcomes in these groups if the animal model were to display predictive validity. Finally, the highly positive responses in the student evaluation forms indicated that the assignment was of great interest to the students. Hence, the case studies developed here constitute a very useful tool for teaching neurophysiology, neuropharmacology, and experimental design.

  13. Development and Validation of Capabilities to Measure Thermal Properties of Layered Monolithic U-Mo Alloy Plate-Type Fuel

    NASA Astrophysics Data System (ADS)

    Burkes, Douglas E.; Casella, Andrew M.; Buck, Edgar C.; Casella, Amanda J.; Edwards, Matthew K.; MacFarlan, Paul J.; Pool, Karl N.; Smith, Frances N.; Steen, Franciska H.

    2014-07-01

    The uranium-molybdenum (U-Mo) alloy in a monolithic form has been proposed as one fuel design capable of converting some of the world's highest power research reactors from the use of high enriched uranium to low enriched uranium. One aspect of the fuel development and qualification process is to demonstrate appropriate understanding of the thermal-conductivity behavior of the fuel system as a function of temperature and expected irradiation conditions. The purpose of this paper is to verify functionality of equipment installed in hot cells for eventual measurements on irradiated uranium-molybdenum (U-Mo) monolithic fuel specimens, refine procedures to operate the equipment, and validate models to extract the desired thermal properties. The results presented here demonstrate the adequacy of the equipment, procedures, and models that have been developed for this purpose based on measurements conducted on surrogate depleted uranium-molybdenum (DU-Mo) alloy samples containing a Zr diffusion barrier and clad in aluminum alloy 6061 (AA6061). The results are in excellent agreement with thermal property data reported in the literature for similar U-Mo alloys as a function of temperature.

  14. A Multi-Purpose, Detector-Based Photometric Calibration System for Luminous Intensity, Illuminance and Luminance

    NASA Astrophysics Data System (ADS)

    Lam, Brenda H. S.; Yang, Steven S. L.; Chau, Y. C.

    2018-02-01

    A multi-purpose, detector-based calibration system for luminous intensity, illuminance, and luminance has been developed at the Standards and Calibration Laboratory (SCL) of the Government of the Hong Kong Special Administrative Region. In this paper, the measurement system and methods are described. The measurement models and contributory uncertainties were validated using the Guide to the Expression of Uncertainty in Measurement (GUM) framework and Supplement 1 to the GUM (propagation of distributions using a Monte Carlo method), in accordance with JCGM 100:2008 and JCGM 101:2008, at the intended precision level.
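
    The Monte Carlo method of GUM Supplement 1 (JCGM 101:2008) propagates distributions by sampling the inputs, pushing each draw through the measurement model, and summarizing the output distribution. The sketch below uses the textbook inverse-square relation E = I / d² with invented input values; it is not SCL's actual measurement model.

```python
# Monte Carlo propagation of distributions in the spirit of JCGM 101:2008.
# Measurement model (illustrative): illuminance E = I / d^2, with Gaussian
# input uncertainties. All numeric values are assumptions for the sketch.
import random
import statistics

random.seed(1)

I_mean, I_sd = 100.0, 0.5     # luminous intensity (cd) and std. uncertainty
d_mean, d_sd = 2.000, 0.002   # distance (m) and std. uncertainty

draws = [
    random.gauss(I_mean, I_sd) / random.gauss(d_mean, d_sd) ** 2
    for _ in range(100_000)
]
E_est = statistics.mean(draws)    # illuminance estimate (lx), ~25 lx here
u_E   = statistics.stdev(draws)   # standard uncertainty by Monte Carlo
```

    Unlike first-order GUM propagation, the sampled output distribution also yields coverage intervals directly (e.g. the empirical 2.5th and 97.5th percentiles of `draws`), which is the main point of Supplement 1.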

  15. External validation of a 5-year survival prediction model after elective abdominal aortic aneurysm repair.

    PubMed

    DeMartino, Randall R; Huang, Ying; Mandrekar, Jay; Goodney, Philip P; Oderich, Gustavo S; Kalra, Manju; Bower, Thomas C; Cronenwett, Jack L; Gloviczki, Peter

    2018-01-01

    The benefit of prophylactic repair of abdominal aortic aneurysms (AAAs) is based on the risk of rupture exceeding the risk of death from other comorbidities. The purpose of this study was to validate a 5-year survival prediction model for patients undergoing elective repair of asymptomatic AAA <6.5 cm to assist in optimal selection of patients. All patients undergoing elective repair for asymptomatic AAA <6.5 cm (open or endovascular) from 2002 to 2011 were identified from a single institutional database (validation group). We assessed the ability of a previously published Vascular Study Group of New England (VSGNE) model (derivation group) to predict survival in our cohort. The model was assessed for discrimination (concordance index), calibration (calibration slope and calibration in the large), and goodness of fit (score test). The VSGNE derivation group consisted of 2367 patients (70% endovascular). Major factors associated with survival in the derivation group were age, coronary disease, chronic obstructive pulmonary disease, renal function, and antiplatelet and statin medication use. Our validation group consisted of 1038 patients (59% endovascular). The validation group was slightly older (74 vs 72 years; P < .01) and had a higher proportion of men (76% vs 68%; P < .01). In addition, the derivation group had higher rates of advanced cardiac disease and chronic obstructive pulmonary disease and a higher baseline creatinine concentration (1.2 vs 1.1 mg/dL; P < .01). Despite slight differences in preoperative patient factors, 5-year survival was similar between validation and derivation groups (75% vs 77%; P = .33). The concordance index was identical between derivation and validation groups at 0.659 (95% confidence interval, 0.63-0.69).
Our validation calibration in the large value was 1.02 (P = .62, closer to 1 indicating better calibration), calibration slope of 0.84 (95% confidence interval, 0.71-0.97), and score test of P = .57 (>.05 indicating goodness of fit). Across different populations of patients, assessment of age and level of cardiac, pulmonary, and renal disease can accurately predict 5-year survival in patients with AAA <6.5 cm undergoing repair. This risk prediction model is a valid method to assess mortality risk in determining potential overall survival benefit from elective AAA repair. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  16. The ESA SMOS Validation Rehearsal Campaign at the Valencia Anchor Station Area in the Framework of the SMOS Cal/Val AO Project no. 3252

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, E.

    2009-04-01

    Since 2001, the Valencia Anchor Station has been prepared for the validation of SMOS land products. The site has recently been selected by the Mission as a core validation site, mainly due to the reasonably homogeneous characteristics of the area, which make it appropriate for undertaking the validation of SMOS Level 2 land products during the Mission Commissioning Phase, before attempting more complex areas. Close to SMOS launch, ESA defined and designed the SMOS Validation Rehearsal Campaign Plan with the purpose of repeating the Commissioning Phase execution with all centers, tools, participants, structures, and data available, assuming that all tools and structures are ready, and trying to reproduce the post-launch conditions as closely as possible. The aim was to test the readiness, ensemble coordination, and speed of operations, in order to avoid as far as possible any unexpected deficiencies of the plan and procedure during the real Commissioning Phase campaigns. For the rehearsal activity, which successfully took place in April 2008, a control area of 10 × 10 km² was chosen in the Valencia Anchor Station study area, where a network of ground soil moisture measuring stations is being set up based on the definition of homogeneous physio-hydrological units, attending to climatic, soil type, lithology, geology, elevation, slope, and vegetation cover conditions. These stations are linked via a wireless communication system to a master post accessible via the Internet. Complementary to the ground measurements, flight operations were performed over the control area using the Helsinki University of Technology (TKK) Short Skyvan research aircraft.
The payload for the campaign consisted of the following instruments: (i) the L-band radiometer EMIRAD (Technical University of Denmark, TUD), (ii) the HUT-2D L-band imaging interferometric radiometer (TKK), (iii) the PARIS GPS reflectometry system (Institute for Space Studies of Catalonia, IEEC), and (iv) an IR sensor (Finnish Institute of Maritime Research, FIMR). Together with the ground soil moisture measurements, other ground and meteorological measurements from the Valencia Anchor Station area, kindly provided by other institutions, are currently being used to simulate passive microwave brightness temperature, to obtain satellite "match-ups" for validation purposes, and to test the retrieval algorithms. The spatialization of the ground measurements up to a SMOS pixel is carried out using a Soil-Vegetation-Atmosphere-Transfer (SVAT) model (SURFEX, SURFace EXternalisée) from Météo France. Output data, particularly soil moisture, will then be used to simulate the L-band surface emission through the use of the L-MEB (L-band Microwave Emission of the Biosphere) model. For that purpose, the microwave model uses specific ground information regarding the soil and vegetation properties provided by the validation teams. The aggregation of the brightness temperatures at the SMOS pixel scale is then carried out in an operational way, taking into account the SMOS viewing configuration and antenna properties. This paper presents an overview of the ESA SMOS Validation Rehearsal Campaign at the Valencia Anchor Station area, placing emphasis on the development of the ground activities that are significant for the performance of the different validation components and giving an outline of the methodology to be used for the whole SMOS Reference Pixel.

  17. Validity of Gō models: comparison with a solvent-shielded empirical energy decomposition.

    PubMed

    Paci, Emanuele; Vendruscolo, Michele; Karplus, Martin

    2002-12-01

    Do Gō-type model potentials provide a valid approach for studying protein folding? They have been widely used for this purpose because of their simplicity and the speed of simulations based on their use. The essential assumption in such models is that only contact interactions existing in the native state determine the energy surface of a polypeptide chain, even for non-native configurations sampled along folding trajectories. Here we use an all-atom molecular mechanics energy function to investigate the adequacy of Gō-type potentials. We show that, although the contact approximation is accurate, non-native contributions to the energy can be significant. The assumed relation between residue-residue interaction energies and the number of contacts between them is found to be only approximate. By contrast, individual residue energies correlate very well with the number of contacts. The results demonstrate that models based on the latter should give meaningful results (e.g., as used to interpret phi values), whereas those that depend on the former are only qualitative, at best.
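The essential Gō assumption stated above, that only contacts present in the native state contribute to the energy, can be made concrete in a few lines. The coordinates, contact cutoff, and uniform contact energy eps below are toy values for illustration, not the paper's all-atom energy function.

```python
def contacts(coords, cutoff=1.5):
    """Residue pairs (i, j), j > i + 1, closer than cutoff in this toy model."""
    pairs = set()
    n = len(coords)
    for i in range(n):
        for j in range(i + 2, n):  # skip chain neighbours
            d = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j])) ** 0.5
            if d < cutoff:
                pairs.add((i, j))
    return pairs

def go_energy(conf, native_contacts, cutoff=1.5, eps=-1.0):
    """Only native contacts formed in conf contribute, each worth eps."""
    return eps * len(contacts(conf, cutoff) & native_contacts)

native = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # toy 2-D "fold"
print(go_energy(native, contacts(native)))  # -3.0, the native-state minimum
```

Non-native configurations form fewer of these contacts and therefore have higher (less negative) energy, which is exactly the surface shape the abstract questions for non-native interactions.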

  18. Psychometric properties of the Social Physique Anxiety Scale (SPAS-7) in Spanish adolescents.

    PubMed

    Sáenz-Alvarez, Piedad; Sicilia, Álvaro; González-Cutre, David; Ferriz, Roberto

    2013-01-01

    The purpose of this study was to validate the Spanish version of Motl and Conroy's model of the Social Physique Anxiety Scale (SPAS-7). To achieve this goal, a sample of 398 secondary school students was used, and the psychometric properties of the SPAS-7 were examined through different analyses. The results supported the seven-item model, although item 5 did not show any significant correlation with two items from this model and had a lower factor loading than the rest of the items. The structure of the model was invariant across gender and Body Mass Index (BMI). An alpha value over .70 and suitable levels of temporal stability were obtained. Girls, and students classified according to BMI as overweight or obese, had higher scores in social physique anxiety than boys and the group classified as underweight or normal range. The findings of this study provide evidence of reliability and validity for the SPAS-7 in a Spanish adolescent sample.
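The alpha value reported above is Cronbach's alpha for internal consistency. A minimal computation from raw item scores looks like the following; the tiny response matrix is fabricated for illustration.

```python
def cronbach_alpha(items):
    """items: one list of scores per item, all of equal length (persons)."""
    k = len(items)

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col) for col in zip(*items)]  # per-person total score
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

responses = [[1, 2, 3, 4], [1, 2, 3, 4], [2, 2, 3, 4]]  # 3 items x 4 persons
print(round(cronbach_alpha(responses), 2))  # 0.98
```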

  19. The Cyber Aggression in Relationships Scale: A New Multidimensional Measure of Technology-Based Intimate Partner Aggression.

    PubMed

    Watkins, Laura E; Maldonado, Rosalita C; DiLillo, David

    2018-07-01

    The purpose of this study was to develop and provide initial validation for a measure of adult cyber intimate partner aggression (IPA): the Cyber Aggression in Relationships Scale (CARS). Drawing on recent conceptual models of cyber IPA, items from previous research exploring general cyber aggression and cyber IPA were modified and new items were generated for inclusion in the CARS. Two samples of adults 18 years or older were recruited online. We used item factor analysis to test the factor structure, model fit, and invariance of the measure structure across women and men. Results confirmed that three-factor models for both perpetration and victimization demonstrated good model fit, and that, in general, the CARS measures partner cyber aggression similarly for women and men. The CARS also demonstrated validity through significant associations with in-person IPA, trait anger, and jealousy. Findings suggest the CARS is a useful tool for assessing cyber IPA in both research and clinical settings.

  20. Assessing Requirements Quality through Requirements Coverage

    NASA Technical Reports Server (NTRS)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from high-level requirements describing the expected behavior of the software. For validation and verification purposes, the model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. The shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, that of determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has largely been determined through ad hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage.
The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements and existing model coverage metrics such as the Modified Condition and Decision Coverage (MC/DC) used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.
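As an illustration of the MC/DC criterion mentioned above (a sketch, not the tooling of [8] or [9]), the following checks whether a test suite achieves MC/DC for a single boolean decision: each condition needs a pair of tests that differ only in that condition and flip the decision's outcome. The decision and suite are invented examples.

```python
def achieves_mcdc(decision, tests):
    """decision: function of a tuple of booleans; tests: list of such tuples."""
    n = len(tests[0])
    for i in range(n):
        shown = False  # has condition i's independent effect been demonstrated?
        for a in tests:
            for b in tests:
                # a and b agree everywhere except position i ...
                differs_only_i = all((a[j] == b[j]) == (j != i) for j in range(n))
                # ... and flipping condition i flips the decision.
                if differs_only_i and decision(a) != decision(b):
                    shown = True
        if not shown:
            return False
    return True

dec = lambda t: (t[0] and t[1]) or t[2]  # hypothetical decision: (a && b) || c
suite = [(True, True, False), (False, True, False),
         (True, False, False), (True, False, True)]
print(achieves_mcdc(dec, suite))  # True: n + 1 = 4 tests suffice here
```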

  1. Presenting an Evaluation Model for the Cancer Registry Software.

    PubMed

    Moghaddasi, Hamid; Asadi, Farkhondeh; Rabiei, Reza; Rahimi, Farough; Shahbodaghi, Reihaneh

    2017-12-01

    As cancer is increasingly growing, cancer registry is of great importance as the main core of cancer control programs, and many different software has been designed for this purpose. Therefore, establishing a comprehensive evaluation model is essential to evaluate and compare a wide range of such software. In this study, the criteria of the cancer registry software have been determined by studying the documents and two functional software of this field. The evaluation tool was a checklist and in order to validate the model, this checklist was presented to experts in the form of a questionnaire. To analyze the results of validation, an agreed coefficient of %75 was determined in order to apply changes. Finally, when the model was approved, the final version of the evaluation model for the cancer registry software was presented. The evaluation model of this study contains tool and method of evaluation. The evaluation tool is a checklist including the general and specific criteria of the cancer registry software along with their sub-criteria. The evaluation method of this study was chosen as a criteria-based evaluation method based on the findings. The model of this study encompasses various dimensions of cancer registry software and a proper method for evaluating it. The strong point of this evaluation model is the separation between general criteria and the specific ones, while trying to fulfill the comprehensiveness of the criteria. Since this model has been validated, it can be used as a standard to evaluate the cancer registry software.

  2. Deep Learning to Classify Radiology Free-Text Reports.

    PubMed

    Chen, Matthew C; Ball, Robyn L; Yang, Lingyao; Moradzadeh, Nathaniel; Chapman, Brian E; Larson, David B; Langlotz, Curtis P; Amrhein, Timothy J; Lungren, Matthew P

    2018-03-01

    Purpose To evaluate the performance of a deep learning convolutional neural network (CNN) model compared with a traditional natural language processing (NLP) model in extracting pulmonary embolism (PE) findings from thoracic computed tomography (CT) reports from two institutions. Materials and Methods Contrast material-enhanced CT examinations of the chest performed between January 1, 1998, and January 1, 2016, were selected. Annotations by two human radiologists were made for three categories: the presence, chronicity, and location of PE. The classification performance of a CNN model, which used an unsupervised learning algorithm to obtain vector representations of words, was compared with that of the open-source application PeFinder. Sensitivity, specificity, accuracy, and F1 scores for both the CNN model and PeFinder in the internal and external validation sets were determined. Results The CNN model demonstrated an accuracy of 99% and an area under the curve value of 0.97. For internal validation report data, the CNN model had a statistically significantly larger F1 score (0.938) than did PeFinder (0.867) when classifying findings as either PE positive or PE negative, but no significant difference in sensitivity, specificity, or accuracy was found. For external validation report data, no statistical difference between the performance of the CNN model and PeFinder was found. Conclusion A deep learning CNN model can classify radiology free-text reports with accuracy equivalent to or beyond that of an existing traditional NLP model. © RSNA, 2017 Online supplemental material is available for this article.
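The four classifier metrics compared above derive from a binary confusion matrix. A minimal computation, with invented counts for a PE-positive / PE-negative report split:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy and F1 from a confusion matrix."""
    sens = tp / (tp + fn)           # recall on positive reports
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + tn + fn)
    prec = tp / (tp + fp)
    return {"sensitivity": sens, "specificity": spec,
            "accuracy": acc, "f1": 2 * prec * sens / (prec + sens)}

m = binary_metrics(tp=90, fp=10, tn=880, fn=20)  # illustrative counts
print(round(m["f1"], 3))  # 0.857
```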

  3. Evaluation of the Gratitude Questionnaire in a Chinese Sample of Adults: Factorial Validity, Criterion-Related Validity, and Measurement Invariance Across Sex

    PubMed Central

    Kong, Feng; You, Xuqun; Zhao, Jingjing

    2017-01-01

    The Gratitude Questionnaire (GQ; McCullough et al., 2002) is one of the most widely used instruments to assess dispositional gratitude. The purpose of this study was to validate a Chinese version of the GQ by examining internal consistency, factor structure, convergent validity, and measurement invariance across sex. A total of 1151 Chinese adults were recruited to complete the GQ, Positive Affect and Negative Affect Scales, and Satisfaction with Life Scale. Confirmatory factor analysis indicated that the original unidimensional model fitted well, which is in accordance with the findings in Western populations. Furthermore, the GQ had satisfactory composite reliability and criterion-related validity with measures of life satisfaction and affective well-being. Evidence of configural, metric and scalar invariance across sex was obtained. Tests of the latent mean differences found females had higher latent mean scores than males. These findings suggest that the Chinese version of GQ is a reliable and valid tool for measuring dispositional gratitude and can generally be utilized across sex in the Chinese context. PMID:28919873

  4. Evaluation of the Gratitude Questionnaire in a Chinese Sample of Adults: Factorial Validity, Criterion-Related Validity, and Measurement Invariance Across Sex.

    PubMed

    Kong, Feng; You, Xuqun; Zhao, Jingjing

    2017-01-01

    The Gratitude Questionnaire (GQ; McCullough et al., 2002) is one of the most widely used instruments to assess dispositional gratitude. The purpose of this study was to validate a Chinese version of the GQ by examining internal consistency, factor structure, convergent validity, and measurement invariance across sex. A total of 1151 Chinese adults were recruited to complete the GQ, Positive Affect and Negative Affect Scales, and Satisfaction with Life Scale. Confirmatory factor analysis indicated that the original unidimensional model fitted well, which is in accordance with the findings in Western populations. Furthermore, the GQ had satisfactory composite reliability and criterion-related validity with measures of life satisfaction and affective well-being. Evidence of configural, metric and scalar invariance across sex was obtained. Tests of the latent mean differences found females had higher latent mean scores than males. These findings suggest that the Chinese version of GQ is a reliable and valid tool for measuring dispositional gratitude and can generally be utilized across sex in the Chinese context.
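The composite reliability reported above is commonly computed from standardized factor loadings as CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A sketch of that formula follows; the loadings are invented, not the GQ's estimates.

```python
def composite_reliability(loadings):
    """CR from standardized loadings; error variances taken as 1 - loading**2."""
    s = sum(loadings)
    theta = sum(1.0 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + theta)

print(round(composite_reliability([0.7, 0.8, 0.6, 0.75]), 2))  # 0.81
```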

  5. Development and Validation of a Multimedia-based Assessment of Scientific Inquiry Abilities

    NASA Astrophysics Data System (ADS)

    Kuo, Che-Yu; Wu, Hsin-Kai; Jen, Tsung-Hau; Hsu, Ying-Shao

    2015-09-01

    The potential of computer-based assessments for capturing complex learning outcomes has been discussed; however, relatively little is understood about how to leverage such potential for summative and accountability purposes. The aim of this study is to develop and validate a multimedia-based assessment of scientific inquiry abilities (MASIA) to cover a more comprehensive construct of inquiry abilities and target secondary school students in different grades while this potential is leveraged. We implemented five steps derived from the construct modeling approach to design MASIA. During the implementation, multiple sources of evidence were collected in the steps of pilot testing and Rasch modeling to support the validity of MASIA. Particularly, through the participation of 1,066 8th and 11th graders, MASIA showed satisfactory psychometric properties to discriminate students with different levels of inquiry abilities in 101 items in 29 tasks when Rasch models were applied. Additionally, the Wright map indicated that MASIA offered accurate information about students' inquiry abilities because of the comparability of the distributions of student abilities and item difficulties. The analysis results also suggested that MASIA offered precise measures of inquiry abilities when the components (questioning, experimenting, analyzing, and explaining) were regarded as a coherent construct. Finally, the increased mean difficulty thresholds of item responses along with three performance levels across all sub-abilities supported the alignment between our scoring rubrics and our inquiry framework. Together with other sources of validity in the pilot testing, the results offered evidence to support the validity of MASIA.
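The Rasch models applied above reduce, in the dichotomous case, to a logistic item response function in the difference between person ability and item difficulty (both in logits); the values here are illustrative.

```python
import math

def rasch_p(theta, b):
    """P(correct) for person ability theta and item difficulty b (logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_p(0.0, 0.0))  # 0.5 when ability exactly matches difficulty
```

The Wright map comparison in the abstract rests on this shared logit scale: item difficulties and person abilities are directly comparable because both enter the same exponent.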

  6. Nonlinear system identification of smart structures under high impact loads

    NASA Astrophysics Data System (ADS)

    Sarp Arsava, Kemal; Kim, Yeesock; El-Korchi, Tahar; Park, Hyo Seon

    2013-05-01

    The main purpose of this paper is to develop numerical models for the prediction and analysis of the highly nonlinear behavior of integrated structure control systems subjected to high impact loading. A time-delayed adaptive neuro-fuzzy inference system (TANFIS) is proposed for modeling of the complex nonlinear behavior of smart structures equipped with magnetorheological (MR) dampers under high impact forces. Experimental studies are performed to generate sets of input and output data for training and validation of the TANFIS models. The high impact load and current signals are used as the input disturbance and control signals while the displacement and acceleration responses from the structure-MR damper system are used as the output signals. The benchmark adaptive neuro-fuzzy inference system (ANFIS) is used as a baseline. Comparisons of the trained TANFIS models with experimental results demonstrate that the TANFIS modeling framework is an effective way to capture nonlinear behavior of integrated structure-MR damper systems under high impact loading. In addition, the performance of the TANFIS model is much better than that of ANFIS in both the training and the validation processes.

  7. Development of a framework for international certification by OIE of diagnostic tests validated as fit for purpose.

    PubMed

    Wright, P; Edwards, S; Diallo, A; Jacobson, R

    2006-01-01

    Historically, the OIE has focused on test methods applicable to trade and the international movement of animals and animal products. With its expanding role as the World Organisation for Animal Health, the OIE has recognised the need to evaluate test methods relative to specific diagnostic applications other than trade. In collaboration with its international partners, the OIE solicited input from experts through consultants' meetings on the development of guidelines for validation and certification of diagnostic assays for infectious animal diseases. Recommendations from the first meeting were formally adopted and have subsequently been acted upon by the OIE. A validation template has been developed that specifically requires a test to be fit or suited for its intended purpose (e.g. as a screening or a confirmatory test). This is a key criterion for validation. The template incorporates four distinct stages of validation, each of which has bearing on the evaluation of fitness for purpose. The OIE has just recently created a registry for diagnostic tests that fulfil these validation requirements. Assay developers are invited to submit validation dossiers to the OIE for evaluation by a panel of experts. Recognising that validation is an incremental process, test methods achieving at least the first stages of validation may be provisionally accepted. To provide additional confidence in assay performance, the OIE, through its network of Reference Laboratories, has embarked on the development of evaluation panels. These panels would contain specially selected test samples that would assist in verifying fitness for purpose.

  8. Development of a framework for international certification by the OIE of diagnostic tests validated as fit for purpose.

    PubMed

    Wright, P; Edwards, S; Diallo, A; Jacobson, R

    2007-01-01

    Historically, the OIE has focussed on test methods applicable to trade and the international movement of animals and animal products. With its expanding role as the World Organisation for Animal Health, the OIE has recognised the need to evaluate test methods relative to specific diagnostic applications other than trade. In collaboration with its international partners, the OIE solicited input from experts through consultants' meetings on the development of guidelines for validation and certification of diagnostic assays for infectious animal diseases. Recommendations from the first meeting were formally adopted and have subsequently been acted upon by the OIE. A validation template has been developed that specifically requires a test to be fit or suited for its intended purpose (e.g. as a screening or a confirmatory test). This is a key criterion for validation. The template incorporates four distinct stages of validation, each of which has bearing on the evaluation of fitness for purpose. The OIE has just recently created a registry for diagnostic tests that fulfil these validation requirements. Assay developers are invited to submit validation dossiers to the OIE for evaluation by a panel of experts. Recognising that validation is an incremental process, test methods achieving at least the first stages of validation may be provisionally accepted. To provide additional confidence in assay performance, the OIE, through its network of Reference Laboratories, has embarked on the development of evaluation panels. These panels would contain specially selected test samples that would assist in verifying fitness for purpose.

  9. Validation of a numerical method for interface-resolving simulation of multicomponent gas-liquid mass transfer and evaluation of multicomponent diffusion models

    NASA Astrophysics Data System (ADS)

    Woo, Mino; Wörner, Martin; Tischer, Steffen; Deutschmann, Olaf

    2018-03-01

    The multicomponent model and the effective diffusivity model are well-established diffusion models for numerical simulation of single-phase flows consisting of several components, but have so far seldom been used for two-phase flows. In this paper, a specific numerical model for interfacial mass transfer by means of a continuous single-field concentration formulation is combined with the multicomponent model and the effective diffusivity model and is validated for multicomponent mass transfer. For this purpose, several test cases for one-dimensional physical or reactive mass transfer of ternary mixtures are considered. The numerical results are compared with analytical or numerical solutions of the Maxwell-Stefan equations and/or experimental data. The composition-dependent elements of the diffusivity matrix of the multicomponent and effective diffusivity models are found to differ substantially under non-dilute conditions. The species mole fraction or concentration profiles computed with the two diffusion models are, however, very similar for all test cases and in good agreement with the analytical/numerical solutions or measurements. For practical computations, the effective diffusivity model is recommended due to its simplicity and lower computational costs.
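One common form of the effective diffusivity model discussed above is the Wilke approximation, D_i,eff = (1 − x_i) / Σ_{j≠i} (x_j / D_ij), built from the binary Maxwell-Stefan diffusivities. The sketch below assumes that form; the mixture and diffusivity values are illustrative, not the paper's ternary systems.

```python
def effective_diffusivity(i, x, D):
    """Wilke effective diffusivity of species i through the mixture.

    x: mole fractions; D[i][j]: binary Maxwell-Stefan diffusivity (m^2/s).
    """
    denom = sum(x[j] / D[i][j] for j in range(len(x)) if j != i)
    return (1.0 - x[i]) / denom

# Illustrative ternary mixture (diagonal entries are unused placeholders).
x = [0.2, 0.3, 0.5]
D = [[0.0, 1.0e-5, 2.0e-5],
     [1.0e-5, 0.0, 3.0e-5],
     [2.0e-5, 3.0e-5, 0.0]]
print(effective_diffusivity(0, x, D))
```

In the dilute limit (x_i → 0 with equal binary diffusivities) the expression collapses to that binary diffusivity, which is one reason the two models agree closely under dilute conditions.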

  10. Data Quality in Institutional Arthroplasty Registries: Description of a Model of Validation and Report of Preliminary Results.

    PubMed

    Bautista, Maria P; Bonilla, Guillermo A; Mieth, Klaus W; Llinás, Adolfo M; Rodríguez, Fernanda; Cárdenas, Laura L

    2017-07-01

    Arthroplasty registries are a relevant source of information for research and quality improvement in patient care, and their value depends on the quality of the recorded data. The purpose of this study is to describe a model of validation and present the findings of validation of an Institutional Arthroplasty Registry (IAR). Information from 209 primary arthroplasties and revision surgeries of the hip, knee, and shoulder recorded in the IAR between March and September 2015 was analyzed in the following domains: adherence, defined as the proportion of patients included in the registry; completeness, defined as the proportion of data effectively recorded; and accuracy, defined as the proportion of data consistent with medical records. A random sample of 53 patients (25.4%) was selected to assess the latter two domains. A direct comparison between the registry's database and medical records was performed. In total, 324 variables containing information on demographic data, surgical procedure, clinical outcomes, and key performance indicators were analyzed. Two hundred nine of 212 patients who underwent surgery during the study period were included in the registry, accounting for an adherence of 98.6%. Completeness was 91.7% and accuracy was 85.8%. Most errors were found in the preoperative range of motion and the timely administration of prophylactic antibiotics and thromboprophylaxis. This model provides useful information regarding the quality of the recorded data, since it identified deficient areas within the IAR. We recommend that institutional arthroplasty registries be constantly monitored for data quality before their information is used for research or quality improvement purposes. Copyright © 2017 Elsevier Inc. All rights reserved.
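The three data-quality domains defined above are simple proportions. A sketch of the computation; the patient counts 209/212 are from the abstract, while the field counts are invented for illustration.

```python
def quality_domains(operated, registered, fields_expected, fields_recorded,
                    fields_checked, fields_consistent):
    """Adherence, completeness and accuracy as proportions."""
    return {
        "adherence": registered / operated,          # patients captured
        "completeness": fields_recorded / fields_expected,
        "accuracy": fields_consistent / fields_checked,
    }

q = quality_domains(212, 209, 1000, 917, 324, 278)
print(round(q["adherence"] * 100, 1))  # 98.6
```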

  11. A measurement model for general noise reaction in response to aircraft noise.

    PubMed

    Kroesen, Maarten; Schreckenberg, Dirk

    2011-01-01

    In this paper a measurement model for general noise reaction (GNR) in response to aircraft noise is developed to assess the performance of aircraft noise annoyance and a direct measure of general reaction as indicators of this concept. For this purpose GNR is conceptualized as a superordinate latent construct underlying particular manifestations. This conceptualization is empirically tested through estimation of a second-order factor model. Data from a community survey at Frankfurt Airport are used for this purpose (N=2206). The data fit the hypothesized factor structure well and support the conceptualization of GNR as a superordinate construct. It is concluded that noise annoyance and a direct measure of general reaction to noise capture a large part of the negative feelings and emotions in response to aircraft noise but are unable to capture all relevant variance. The paper concludes with recommendations for the valid measurement of community reaction and several directions for further research.

  12. Updated Prognostic Model for Predicting Overall Survival in First-Line Chemotherapy for Patients With Metastatic Castration-Resistant Prostate Cancer

    PubMed Central

    Halabi, Susan; Lin, Chen-Yen; Kelly, W. Kevin; Fizazi, Karim S.; Moul, Judd W.; Kaplan, Ellen B.; Morris, Michael J.; Small, Eric J.

    2014-01-01

    Purpose Prognostic models for overall survival (OS) for patients with metastatic castration-resistant prostate cancer (mCRPC) are dated and do not reflect significant advances in treatment options available for these patients. This work developed and validated an updated prognostic model to predict OS in patients receiving first-line chemotherapy. Methods Data from a phase III trial of 1,050 patients with mCRPC were used (Cancer and Leukemia Group B CALGB-90401 [Alliance]). The data were randomly split into training and testing sets. A separate phase III trial served as an independent validation set. Adaptive least absolute shrinkage and selection operator selected eight factors prognostic for OS. A predictive score was computed from the regression coefficients and used to classify patients into low- and high-risk groups. The model was assessed for its predictive accuracy using the time-dependent area under the curve (tAUC). Results The model included Eastern Cooperative Oncology Group performance status, disease site, lactate dehydrogenase, opioid analgesic use, albumin, hemoglobin, prostate-specific antigen, and alkaline phosphatase. Median OS values in the high- and low-risk groups, respectively, in the testing set were 17 and 30 months (hazard ratio [HR], 2.2; P < .001); in the validation set they were 14 and 26 months (HR, 2.9; P < .001). The tAUCs were 0.73 (95% CI, 0.70 to 0.73) and 0.76 (95% CI, 0.72 to 0.76) in the testing and validation sets, respectively. Conclusion An updated prognostic model for OS in patients with mCRPC receiving first-line chemotherapy was developed and validated on an external set. This model can be used to predict OS, as well as to better select patients to participate in trials on the basis of their prognosis. PMID:24449231
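The scoring step described above, a linear predictor computed from regression coefficients and dichotomized into risk groups, can be sketched as follows. The coefficients, covariate values, and cutoff are invented for illustration, not those of the CALGB-90401 model.

```python
def risk_group(covariates, coefficients, cutoff):
    """Linear predictor b'x, dichotomized at a prognostic cutoff."""
    score = sum(b * xv for b, xv in zip(coefficients, covariates))
    return ("high" if score >= cutoff else "low"), score

# Hypothetical patient with three (already standardized) prognostic factors.
group, score = risk_group([1.0, 2.0, 0.5], [0.4, 0.3, -0.2], cutoff=0.8)
print(group)  # high
```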

  13. Confirmatory factor analysis of teaching and learning guiding principles instrument among teacher educators in higher education institutions

    NASA Astrophysics Data System (ADS)

    Masuwai, Azwani; Tajudin, Nor'ain Mohd; Saad, Noor Shah

    2017-05-01

    The purpose of this study is to develop and establish the validity and reliability of an instrument for generating teaching and learning guiding principles, the Teaching and Learning Guiding Principles Instrument (TLGPI). Participants consisted of 171 Malaysian teacher educators. The instrument is essential for generating teaching and learning guiding principles at the higher education level in Malaysia. Confirmatory factor analysis validated all 19 items of the TLGPI, with all items indicating high reliability and internal consistency, and confirmed that a single-factor model underlies the generation of teaching and learning guiding principles.

  14. Measuring the post-adoption customer perception of mobile banking services.

    PubMed

    Yu, Tai-Kuei; Fang, Kwoting

    2009-02-01

    With liberalization and internalization in the financial market and progress in information technology, banks face dual competitive pressures to provide service quality and administrative efficiency. That these recent developments are fueled by technology might misleadingly suggest that the adoption of mobile banking is largely based on technological criteria. The purpose of this study is to establish a better measurement model for postadoption user perception of mobile banking services. Based on 458 valid responses of mobile banking users, the results show that the instrument, consisting of 21 items and 6 factors, is a reliable, valid, and useful measurement for assessing the postadoption perception of mobile banking.

  15. Teachers' Perceptions of Fairness, Well-Being and Burnout: A Contribution to the Validation of the Organizational Justice Index by Hoy and Tarter

    ERIC Educational Resources Information Center

    Capone, Vincenza; Petrillo, Giovanna

    2016-01-01

    Purpose: The purpose of this paper is to contribute to the validation of the Organizational Justice Index (OJI) by Hoy and Tarter (2004), a self-report questionnaire for teachers' perceptions of fairness in the operation and administration of schools. Design/methodology/approach: In two studies the authors validated the Italian version of the OJI.…

  16. Computational Fluid Dynamics Best Practice Guidelines in the Analysis of Storage Dry Cask

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zigh, A.; Solis, J.

    2008-07-01

    Computational fluid dynamics (CFD) methods are used to evaluate the thermal performance of a dry cask under long-term storage conditions in accordance with NUREG-1536 [NUREG-1536, 1997]. A three-dimensional CFD model was developed and validated using data for a ventilated storage cask (VSC-17) collected by Idaho National Laboratory (INL). The developed Fluent CFD model was validated to minimize the modeling and application uncertainties. To address modeling uncertainties, the paper focused on turbulence modeling of buoyancy-driven air flow. Similarly, for the application uncertainties, the pressure boundary conditions used to model the air inlet and outlet vents were investigated and validated. Different turbulence models were used to reduce the modeling uncertainty in the CFD simulation of the air flow through the annular gap between the overpack and the multi-assembly sealed basket (MSB). Among the chosen turbulence models, the validation showed that the low-Reynolds k-ε and the transitional k-ω turbulence models predicted the measured temperatures closely. To assess the impact of the pressure boundary conditions used at the air inlet and outlet channels on the application uncertainties, a sensitivity analysis of the operating density was undertaken. For convergence purposes, all available commercial CFD codes include the operating density in the pressure-gradient term of the momentum equation. The validation showed that the correct operating density corresponds to the density evaluated at the air inlet pressure and temperature. Next, the validated CFD method was used to predict the thermal performance of an existing dry cask storage system. The evaluation uses two distinct models: a three-dimensional and an axisymmetric representation of the cask. In the 3-D model, porous media was used to model only the volume occupied by the rodded region that is surrounded by the BWR channel box.
In the axisymmetric model, porous media was used to model the entire region that encompasses the fuel assemblies as well as the gaps in between. Consequently, a larger volume is represented by porous media in the second model; hence, a higher frictional flow resistance is introduced in the momentum equations. The conservatism and safety margins of these models were compared to assess the applicability and realism of the two models. The three-dimensional model included fewer geometry simplifications and is recommended, as it predicted less conservative fuel cladding temperature values while still assuring adequate safety margins. (authors)
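The operating-density point validated above can be illustrated with the way CFD codes typically write the buoyancy source, (ρ − ρ_op)·g: choosing ρ_op as the density at the inlet pressure and temperature makes the source vanish for incoming air. The ideal-gas relation and air constants are standard; the temperatures are illustrative, not the cask's.

```python
R_AIR = 287.05  # specific gas constant of dry air, J/(kg*K)
G = 9.81        # gravitational acceleration, m/s^2

def density(p, T):
    """Ideal-gas air density (kg/m^3) at pressure p (Pa), temperature T (K)."""
    return p / (R_AIR * T)

def buoyancy_source(p, T, T_inlet):
    """Momentum buoyancy source (rho - rho_op)*g, rho_op at inlet conditions."""
    rho_op = density(p, T_inlet)
    return (density(p, T) - rho_op) * G

print(buoyancy_source(101325.0, 300.0, 300.0))  # 0.0 at inlet conditions
```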

  17. Bridging groundwater models and decision support with a Bayesian network

    USGS Publications Warehouse

    Fienen, Michael N.; Masterson, John P.; Plant, Nathaniel G.; Gutierrez, Benjamin T.; Thieler, E. Robert

    2013-01-01

    Resource managers need to make decisions to plan for future environmental conditions, particularly sea level rise, in the face of substantial uncertainty. Many interacting processes factor into the decisions they face. Advances in process models and the quantification of uncertainty have made models a valuable tool for this purpose. Long simulation runtimes and, often, numerical instability make linking process models impractical in many cases. A method for emulating the important connections between model input and forecasts, while propagating uncertainty, has the potential to provide a bridge between complicated numerical process models and the efficiency and stability needed for decision making. We explore this using a Bayesian network (BN) to emulate a groundwater flow model. We expand on previous approaches to validating a BN by calculating forecasting skill using cross validation of a groundwater model of Assateague Island in Virginia and Maryland, USA. This BN emulation was shown to capture the important groundwater-flow characteristics and uncertainty of the groundwater system because of its connection to island morphology and sea level. Forecast power metrics associated with the validation of multiple alternative BN designs guided the selection of an optimal level of BN complexity. Assateague Island is an ideal test case for exploring a forecasting tool based on current conditions because the unique hydrogeomorphological variability of the island includes a range of settings indicative of past, current, and future conditions. The resulting BN is a valuable tool for exploring the response of groundwater conditions to sea level rise in decision support.

  18. Studies of MGS TES and MPF MET Data

    NASA Technical Reports Server (NTRS)

    Barnes, Jeff R.

    2003-01-01

    The work supported by this grant was divided into two broad areas: (1) mesoscale modeling of atmospheric circulations and analyses of Pathfinder, Viking, and other Mars data, and (2) analyses of MGS TES temperature data. The mesoscale modeling began with the development of a suitable Mars mesoscale model based upon the terrestrial MM5 model, which was then applied to the simulation of the meteorological observations at the Pathfinder and Viking Lander 1 sites during northern summer. This extended study served a dual purpose: to validate the new mesoscale model with the best of the available in-situ data, and to use the model to aid in the interpretation of the surface meteorological data.

  19. OTEC Cold Water Pipe-Platform Subsystem Dynamic Interaction Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varley, Robert; Halkyard, John; Johnson, Peter

    A commercial floating 100-megawatt (MW) ocean thermal energy conversion (OTEC) power plant will require a cold water pipe (CWP) with a diameter of 10 meters (m) and a length of up to 1,000 m. The mass of the cold water pipe, including entrained water, can exceed the mass of the platform supporting it. The offshore industry uses software modeling tools to develop platform and riser (pipe) designs to survive the offshore environment. These tools are typically validated by scale-model tests in facilities able to replicate real at-sea meteorological and ocean (metocean) conditions, providing the understanding and confidence to proceed to final design and full-scale fabrication. However, today's offshore platforms (similar to and usually larger than those needed for OTEC applications) incorporate risers (or pipes) with diameters well under one meter. In addition, the preferred construction method for large-diameter OTEC CWPs is the use of composite materials, primarily a form of fiber-reinforced plastic (FRP). The use of these materials results in relatively low pipe stiffness and large strains compared to steel construction. These factors suggest the need for further validation of offshore industry software tools. The purpose of this project was to validate the ability to model numerically the dynamic interaction between a large cold-water-filled fiberglass pipe and a floating OTEC platform excited by metocean weather conditions, using measurements from a scale model tested in an ocean basin test facility.

  20. Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Rit,

    2016-09-15

    Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: first the source model, then the detector model. The source is described by the direction-dependent photon energy spectrum at each voltage, while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been used exclusively to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver, combined with a dosimeter sensitive to the range of voltages of interest, were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in most clinical environments.

  1. Assessment of validity with polytrauma Veteran populations.

    PubMed

    Bush, Shane S; Bass, Carmela

    2015-01-01

    Veterans with polytrauma have suffered injuries to multiple body parts and organ systems, including the brain. The injuries can generate a triad of physical, neurologic/cognitive, and emotional symptoms. Accurate diagnosis is essential for the treatment of these conditions and for fair allocation of benefits. To accurately diagnose polytrauma disorders and their related problems, clinicians take into account the validity of reported history and symptoms, as well as clinical presentations. The purpose of this article is to describe the assessment of validity with polytrauma Veteran populations. Review of scholarly and other relevant literature and clinical experience are utilized. A multimethod approach to validity assessment that includes objective, standardized measures increases the confidence that can be placed in the accuracy of self-reported symptoms and physical, cognitive, and emotional test results. Due to the multivariate nature of polytrauma and the multiple disciplines that play a role in diagnosis and treatment, an ideal model of validity assessment with polytrauma Veteran populations utilizes neurocognitive, neurological, neuropsychiatric, and behavioral measures of validity. An overview of these validity assessment approaches as applied to polytrauma Veteran populations is presented. Veterans, the VA, and society are best served when accurate diagnoses are made.

  2. Lessons Learned on Operating and Preparing Operations for a Technology Mission from the Perspective of the Earth Observing-1 Mission

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Howard, Joseph

    2000-01-01

    The New Millennium Program's first Earth-observing mission (EO-1) is a technology validation mission. It is managed by the NASA Goddard Space Flight Center in Greenbelt, Maryland and is scheduled for launch in the summer of 2000. The purpose of this mission is to flight-validate revolutionary technologies that will contribute to the reduction of cost and increase of capabilities for future land imaging missions. In the EO-1 mission, there are five instrument, five spacecraft, and three supporting technologies to flight-validate during a year of operations. EO-1 operations and the accompanying ground system were intended to be simple in order to maintain low operational costs. For purposes of formulating operations, it was initially modeled as a small science mission. However, it quickly evolved into a more complex mission due to the difficulties in effectively integrating all of the validation plans of the individual technologies. As a consequence, more operational support was required to confidently complete the on-orbit validation of the new technologies. This paper will outline the issues and lessons learned applicable to future technology validation missions. Examples of some of these include the following: (1) operational complexity encountered in integrating all of the validation plans into a coherent operational plan, (2) initial desire to run single shift operations subsequently growing to 6 "around-the-clock" operations, (3) managing changes in the technologies that ultimately affected operations, (4) necessity for better team communications within the project to offset the effects of change on the Ground System Developers, Operations Engineers, Integration and Test Engineers, S/C Subsystem Engineers, and Scientists, and (5) the need for a more experienced Flight Operations Team to achieve the necessary operational flexibility. 
The discussion will conclude by providing several cost comparisons for operations development between previous missions and EO-1, and by discussing some details that might be done differently for future technology validation missions.

  3. Validated simulator for space debris removal with nets and other flexible tethers applications

    NASA Astrophysics Data System (ADS)

    Gołębiowski, Wojciech; Michalczyk, Rafał; Dyrek, Michał; Battista, Umberto; Wormnes, Kjetil

    2016-12-01

    In the context of active debris removal technologies and preparation activities for the e.Deorbit mission, a simulator for the dynamics of net-shaped elastic bodies and their interactions with rigid bodies has been developed. Its main application is to aid net design and to test scenarios for space debris deorbitation. The simulator can model all the phases of the debris capturing process: net launch, flight and wrapping around the target. It handles coupled simulation of rigid and flexible body dynamics. Flexible bodies were implemented using the Cosserat rod model, which allows simulation of flexible threads or wires with elasticity and damping for stretching, bending and torsion. Threads may be combined into structures of any topology, so the software is able to simulate nets, pure tethers, tether bundles, cages, trusses, etc. Full contact dynamics was implemented. Programmatic interaction with the simulation is possible, e.g. for control implementation. The underlying model has been experimentally validated; due to the significant influence of gravity, the experiment had to be performed in microgravity conditions. The validation experiment, flown on a parabolic flight, was a downscaled version of the Envisat capture process. The prepacked net was launched towards the satellite model; it expanded, hit the model and wrapped around it. The whole process was recorded with two fast stereographic camera sets for full 3D trajectory reconstruction. The trajectories were used to compare the net dynamics to the respective simulations and thus to validate the simulation tool. The experiments were performed on board a Falcon-20 aircraft operated by the National Research Council in Ottawa, Canada. Validation results show that the model reflects the physics of the phenomenon accurately enough to be used for scenario evaluation and mission design purposes. The functionalities of the simulator are described in detail in the paper, as well as its underlying model, sample cases and the methodology behind the validation. Results are presented and typical use cases are discussed, showing that the software may be used to design throw nets for space debris capturing, but also to simulate the deorbitation process, a chaser control system or general interactions between rigid and elastic bodies, all in a convenient and efficient way. The presented work was led by SKA Polska under an ESA contract, within the CleanSpace initiative.

  4. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model.

    PubMed

    Wang, Yuhe; Mazur, Thomas R; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H Harold

    2016-07-01

    The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on penelope and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. penelope was first translated from fortran to c++ and the result was confirmed to produce equivalent results to the original code. The c++ code was then adapted to cuda in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gpenelope highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gpenelope as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gpenelope. Ultimately, gpenelope was applied toward independent validation of patient doses calculated by MRIdian's kmc. An acceleration factor of 152 was achieved in comparison to the original single-thread fortran implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gpenelope with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). A Monte Carlo simulation platform was developed based on a GPU-accelerated version of penelope. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria.
Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.
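    The abstract above mentions voxelized transport with Woodcock tracking. As an illustration of that general technique only (a sketch, not the authors' gpenelope implementation), here is a minimal 1-D version with a user-supplied, hypothetical cross-section function:

```python
import math
import random

def woodcock_track(sigma_of_x, sigma_max, x_start, x_end, rng):
    """Sample the position of the next real collision in a heterogeneous
    medium using Woodcock (delta) tracking.

    sigma_of_x : true total cross section at position x (illustrative)
    sigma_max  : majorant cross section, >= sigma_of_x(x) everywhere
    Returns the collision position, or None if the particle passes x_end.
    """
    x = x_start
    while True:
        # Sample a flight distance against the constant majorant, so no
        # material lookup is needed along the flight path.
        x += -math.log(rng.random()) / sigma_max
        if x >= x_end:
            return None  # escaped without a real collision
        # Accept as a real collision with probability sigma(x)/sigma_max;
        # otherwise it is a virtual ("delta") collision and flight continues.
        if rng.random() < sigma_of_x(x) / sigma_max:
            return x
```

    In a homogeneous medium where sigma_of_x equals sigma_max, every proposal is accepted and the sampled distances are exponentially distributed with mean 1/sigma_max, which makes a convenient sanity check.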

  5. Validation of the Monte Carlo simulator GATE for indium-111 imaging.

    PubMed

    Assié, K; Gardin, I; Véra, P; Buvat, I

    2005-07-07

    Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions.

  6. The optimal inventory policy for EPQ model under trade credit

    NASA Astrophysics Data System (ADS)

    Chung, Kun-Jen

    2010-09-01

    Huang and Huang [(2008), 'Optimal Inventory Replenishment Policy for the EPQ Model Under Trade Credit without Derivatives', International Journal of Systems Science, 39, 539-546] use the algebraic method to determine the optimal inventory replenishment policy for the retailer in the extended model under trade credit. However, the algebraic method has limits to its applicability, such that the validity of the proofs of Theorems 1-4 in Huang and Huang (2008) is questionable. The main purpose of this article is not only to indicate these shortcomings but also to present accurate proofs for Huang and Huang (2008).

  7. Validation of FFM PD counts for screening personality pathology and psychopathy in adolescence.

    PubMed

    Decuyper, Mieke; De Clercq, Barbara; De Bolle, Marleen; De Fruyt, Filip

    2009-12-01

    Miller and colleagues (Miller, Bagby, Pilkonis, Reynolds, & Lynam, 2005) recently developed a Five-Factor Model (FFM) personality disorder (PD) count technique for describing and diagnosing PDs and psychopathy in adulthood. This technique conceptualizes PDs relying on general trait models and uses facets from the expert-generated PD prototypes to score the FFM PDs. The present study builds on the study of Miller and colleagues (2005) and investigates in Study 1 whether the PD count technique shows discriminant validity in describing PDs in adolescence. Study 2 extends this objective to psychopathy. Results suggest that the FFM PD count technique is as successful in adolescence as in adulthood in describing PD symptoms, supporting the use of this descriptive method in adolescence. The normative data and accompanying PD count benchmarks enable the use of FFM scores for PD screening purposes in adolescence.

  8. Virtual screening studies on HIV-1 reverse transcriptase inhibitors to design potent leads.

    PubMed

    Vadivelan, S; Deeksha, T N; Arun, S; Machiraju, Pavan Kumar; Gundla, Rambabu; Sinha, Barij Nayan; Jagarlapudi, Sarma A R P

    2011-03-01

    The purpose of this study is to identify novel and potent inhibitors against HIV-1 reverse transcriptase (RT). The crystal structure of the most active ligand was converted into a feature-shaped query. This query was used to align molecules to generate statistically valid 3D-QSAR (r² = 0.873) and pharmacophore models (HypoGen). The best HypoGen model consists of three pharmacophore features (one hydrogen bond acceptor, one hydrophobic aliphatic and one ring aromatic) and was further validated using known RT inhibitors. The designed novel inhibitors were further subjected to docking studies to reduce the number of false positives. We have identified and proposed some novel and potential lead molecules as reverse transcriptase inhibitors using analog- and structure-based studies.

  9. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations.
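    The information criteria named above have simple closed forms for least-squares fits. A minimal sketch, assuming the common least-squares log-likelihood term n·ln(SSE/n) (constant terms cancel when ranking models) and with k counting the estimated parameters:

```python
import math

def information_criteria(residuals, k):
    """Compute AICc and BIC for a least-squares model fit.

    residuals : list of (observed - simulated) values
    k         : number of estimated parameters
    Uses the least-squares form: n*ln(SSE/n) + penalty.
    """
    n = len(residuals)
    sse = sum(r * r for r in residuals)
    log_l_term = n * math.log(sse / n)
    aic = log_l_term + 2 * k
    # AICc adds a small-sample correction that matters when n/k is small.
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)
    bic = log_l_term + k * math.log(n)
    return aicc, bic
```

    Lower values indicate the preferred model; models with more parameters are penalized more heavily by BIC than by AIC once n exceeds about 7.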

  10. Cross-Cultural Validation of the Beck Depression Inventory-II across U.S. and Turkish Samples

    ERIC Educational Resources Information Center

    Canel-Cinarbas, Deniz; Cui, Ying; Lauridsen, Erica

    2011-01-01

    The purpose of this study was to test the Beck Depression Inventory-II (BDI-II) for factorial invariance across Turkish and U.S. college student samples. The results indicated that (a) a two-factor model has an adequate fit for both samples, thus providing evidence of configural invariance, and (b) there is metric invariance but "no"…

  11. Explain the Behavior Intention to Use e-Learning Technologies: A Unified Theory of Acceptance and Use of Technology Perspective

    ERIC Educational Resources Information Center

    Shaqrah, Amin A.

    2015-01-01

    The purpose of this study is to explain the behavior intention to use e-learning technologies. In order to achieve a better view and validate the study, the researcher attempts to give details of how technology acceptance models help Jordanian trainee firms in accepting e-learning technology, and how, if applied, they will result in more attention to usage…

  12. Effectiveness Monitoring Report, MWMF Tritium Phytoremediation Interim Measures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hitchcock, Dan; Blake, John, I.

    2003-02-10

    This report describes and presents the results of monitoring activities during irrigation operations for calendar year 2001 of the MWMF Interim Measures Tritium Phytoremediation Project. The purpose of this effectiveness monitoring report is to provide information on instrument performance, analysis of CY2001 measurements, and the critical relationships needed to manage irrigation operations, estimate efficiency, and validate the water and tritium balance model.

  13. An Extended Kalman Filter to Assimilate Altimetric Data into a Non-Linear Model of the Tropical Pacific

    NASA Technical Reports Server (NTRS)

    Gourdeau, L.; Verron, J.; Murtugudde, R.; Busalacchi, A. J.

    1997-01-01

    A new implementation of the extended Kalman filter is developed for the purpose of assimilating altimetric observations into a primitive equation model of the tropical Pacific. Its specificity consists in defining the errors in a reduced basis that evolves in time with the model dynamics. Validation by twin experiments is conducted, and the method is shown to be efficient in quasi-real conditions. Data from the first two years of the Topex/Poseidon mission are assimilated into the Gent & Cane [1989] model. Assimilation results are evaluated against independent in situ data, namely TAO mooring observations.
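    The reduced-basis error representation is specific to the study above, but the underlying extended Kalman filter cycle is standard. A minimal, generic predict/update step (a sketch with user-supplied model functions, not the authors' implementation):

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P         : current state estimate and covariance
    z            : new observation
    f, h         : nonlinear state-transition and observation functions
    F_jac, H_jac : their Jacobians, evaluated at the supplied state
    Q, R         : process and observation noise covariances
    """
    # Predict: propagate the mean through f, the covariance via the Jacobian.
    F = F_jac(x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: linearize h at the prediction and apply the Kalman gain.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

    For a linear model the Jacobians are constant and the step reduces to the ordinary Kalman filter.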

  14. Measuring MFT's Mental Health Literacy and Competence: The Effects of an On-Line Introductory Workshop on Eating Disorders

    ERIC Educational Resources Information Center

    Beck-Ellsworth, Danielle

    2011-01-01

    Purpose of study: The purpose of the validity study was to establish known-groups validity of the two measures used for the main study by comparing the responses of the Eating Disorder experts and non-experts. The purpose of the main study was to develop an on-line introductory workshop on eating disorders and investigate the levels of competency…

  15. PIV Measurements of the CEV Hot Abort Motor Plume for CFD Validation

    NASA Technical Reports Server (NTRS)

    Wernet, Mark; Wolter, John D.; Locke, Randy; Wroblewski, Adam; Childs, Robert; Nelson, Andrea

    2010-01-01

    NASA's next manned launch platforms for missions to the Moon and Mars are the Orion and Ares systems. Many critical aspects of the launch system performance are being verified using computational fluid dynamics (CFD) predictions. The Orion Launch Abort Vehicle (LAV) consists of a tower-mounted tractor rocket tasked with carrying the Crew Module (CM) safely away from the launch vehicle in the event of a catastrophic failure during the vehicle's ascent. Some of the predictions involving the launch abort system flow fields produced conflicting results, which required further investigation through ground test experiments. Ground tests were performed to acquire data from a hot supersonic jet in cross-flow for the purpose of validating CFD turbulence modeling relevant to the Orion Launch Abort Vehicle (LAV). Both 2-component axial-plane Particle Image Velocimetry (PIV) and 3-component cross-stream Stereo Particle Image Velocimetry (SPIV) measurements were obtained on a model of an Abort Motor (AM). Actual flight conditions could not be simulated on the ground, so the highest temperature and pressure conditions that could safely be used in the test facility (a nozzle pressure ratio of 28.5 and a nozzle temperature ratio of 3) were used for the validation tests. These conditions are significantly different from those of the flight vehicle, but were sufficiently high to begin addressing the turbulence modeling issues that prompted the need for the validation tests.

  16. Validation of the Yale-Brown Obsessive-Compulsive Severity Scale in African Americans with obsessive-compulsive disorder.

    PubMed

    Williams, Monnica T; Wetterneck, Chad T; Thibodeau, Michel A; Duque, Gerardo

    2013-09-30

    The Yale-Brown Obsessive Compulsive Scale (Y-BOCS) is widely used in the assessment of obsessive-compulsive disorder (OCD), but the psychometric properties of the instrument have not been examined in African Americans with OCD. Therefore, the purpose of this study is to explore the properties of the Y-BOCS severity scale in this population. Participants were 75 African American adults with a lifetime diagnosis of OCD. They completed the Y-BOCS, the Beck Anxiety Inventory (BAI), the Beck Depression Inventory-II (BDI-II), and the Multigroup Ethnic Identity Measure (MEIM). Evaluators rated OCD severity using the Clinical Global Impression Scale (CGI) and their global assessment of functioning (GAF). The Y-BOCS was significantly correlated with both the CGI and GAF, indicating convergent validity. It also demonstrated good internal consistency (α=0.83) and divergent validity when compared to the BAI and BDI-II. Confirmatory factor analyses tested five previously reported models and supported a three-factor solution, although no model exhibited excellent fit. An exploratory factor analysis was conducted, supporting a three-factor solution. A linear regression was conducted, predicting CGI from the three factors of the Y-BOCS and the MEIM, and the model was significant. The Y-BOCS appears to be a valid measure for African American populations.
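    The internal consistency figure quoted above (α=0.83) is Cronbach's alpha, which can be computed directly from item-level scores. A minimal sketch, not tied to the Y-BOCS data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items : list of per-item score lists of equal length
            (one inner list per scale item, one entry per respondent).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

    Perfectly correlated items give alpha = 1, while items whose covariances cancel drive alpha toward 0.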

  17. Predicting brain acceleration during heading of soccer ball

    NASA Astrophysics Data System (ADS)

    Taha, Zahari; Hasnun Arif Hassan, Mohd; Azri Aris, Mohd; Anuar, Zulfika

    2013-12-01

    There has been a long debate whether purposeful heading could cause harm to the brain. Studies have shown that repetitive heading could lead to degeneration of brain cells, similar to that found in patients with mild traumatic brain injury. A two-degree-of-freedom linear mathematical model was developed to study the impact of a soccer ball on the brain during ball-to-head impacts in soccer. From the model, the acceleration of the brain upon impact can be obtained. The model is a mass-spring-damper system, in which the skull is modelled as a mass and the neck is modelled as a spring-damper system. The brain is a mass with suspension characteristics that are also defined by a spring and a damper. The model was validated by experiment, in which a ball was dropped from different heights onto an instrumented dummy skull. The validation shows that the results obtained from the model are in good agreement with the brain acceleration measured in the experiment. These findings show that a simple linear mathematical model can be useful in giving preliminary insight into what the human brain endures during a ball-to-head impact.
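    A two-degree-of-freedom mass-spring-damper model of this kind is straightforward to integrate numerically. The sketch below uses semi-implicit Euler and purely illustrative parameter values; the study's actual stiffness and damping constants are not reproduced here:

```python
def simulate_impact(m_skull, m_brain, k_neck, c_neck, k_brain, c_brain,
                    v0, dt=1e-5, t_end=0.05):
    """Two-DOF skull/brain model: the neck (spring k_neck, damper c_neck)
    restrains the skull; the brain is suspended inside the skull by a
    spring k_brain and damper c_brain. The ball impact is idealized as an
    initial skull velocity v0. Returns the peak |brain acceleration|.
    """
    x1, v1 = 0.0, v0   # skull displacement and velocity
    x2, v2 = 0.0, 0.0  # brain displacement and velocity
    peak = 0.0
    for _ in range(int(t_end / dt)):
        f_neck = -k_neck * x1 - c_neck * v1
        f_cpl = k_brain * (x1 - x2) + c_brain * (v1 - v2)
        a1 = (f_neck - f_cpl) / m_skull
        a2 = f_cpl / m_brain
        # Semi-implicit Euler keeps the oscillators stable at this step size.
        v1 += a1 * dt; x1 += v1 * dt
        v2 += a2 * dt; x2 += v2 * dt
        peak = max(peak, abs(a2))
    return peak
```

    Because the system is linear, the peak brain acceleration scales linearly with the impact-induced skull velocity v0, which is a useful check on the integration.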

  18. Space Weather Model Testing And Validation At The Community Coordinated Modeling Center

    NASA Astrophysics Data System (ADS)

    Hesse, M.; Kuznetsova, M.; Rastaetter, L.; Falasca, A.; Keller, K.; Reitan, P.

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership aimed at the creation of next generation space weather models. The goal of the CCMC is to undertake the research and developmental work necessary to substantially increase the present-day modeling capability for space weather purposes, and to provide models for transition to the rapid prototyping centers at the space weather forecast centers. This goal requires close collaborations with and substantial involvement of the research community. The physical regions to be addressed by CCMC-related activities range from the solar atmosphere to the Earth's upper atmosphere. The CCMC is an integral part of NASA's Living With a Star initiative, of the National Space Weather Program Implementation Plan, and of the Department of Defense Space Weather Transition Plan. CCMC includes a facility at NASA Goddard Space Flight Center, as well as distributed computing facilities provided by the Air Force. CCMC also provides, to the research community, access to state-of-the-art space research models. In this paper we will provide updates on CCMC status, on current plans, research and development accomplishments and goals, and on the model testing and validation process undertaken as part of the CCMC mandate.

  19. 29 CFR 1607.5 - General standards for validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false General standards for validity studies. 1607.5 Section 1607... studies. A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users may rely upon criterion-related validity studies, content validity studies or construct validity...

  20. Prediction of Continuous Cooling Transformation Diagrams for Dual-Phase Steels from the Intercritical Region

    NASA Astrophysics Data System (ADS)

    Colla, V.; Desanctis, M.; Dimatteo, A.; Lovicu, G.; Valentini, R.

    2011-09-01

    The purpose of the present work is the implementation and validation of a model able to predict the microstructure changes and the mechanical properties of modern high-strength dual-phase steels after the continuous annealing process line (CAPL) and galvanizing (Galv) processes. Experimental continuous cooling transformation (CCT) diagrams for 13 differently alloyed dual-phase steels were measured by dilatometry from the intercritical range and were used to tune the parameters of the microstructural prediction module of the model. Mechanical properties and microstructural features were measured for more than 400 dual-phase steels simulating the CAPL and Galv industrial processes, and the results were used to construct the mechanical model that predicts mechanical properties from microstructural features, chemistry, and process parameters. The model was validated and proved its efficiency in reproducing the transformation kinetics and mechanical properties of dual-phase steels produced by typical industrial processes. Although it is limited to the dual-phase grades and chemical compositions explored, this model will constitute a useful tool for the steel industry.

  1. Towards Application of NASA Standard for Models and Simulations in Aeronautical Design Process

    NASA Astrophysics Data System (ADS)

    Vincent, Luc; Dunyach, Jean-Claude; Huet, Sandrine; Pelissier, Guillaume; Merlet, Joseph

    2012-08-01

    Even powerful computational techniques like simulation have limits to their validity domain. Consequently, using simulation models requires caution to avoid making biased design decisions for new aeronautical products on the basis of inadequate simulation results. Thus the fidelity, accuracy and validity of simulation models shall be monitored in context all along the design phases to build confidence in achievement of the goals of modelling and simulation. In the CRESCENDO project, we adapt the Credibility Assessment Scale method from the NASA standard for models and simulations, developed for the space programme, to aircraft design in order to assess the quality of simulations. The proposed eight quality assurance metrics aggregate information to indicate the levels of confidence in results. They are displayed in a management dashboard and can secure design trade-off decisions at programme milestones. The application of this technique is illustrated in an aircraft design context with a specific thermal Finite Elements Analysis. This use case shows how to judge the fitness-for-purpose of simulation as a virtual testing means and then green-light the continuation of the Simulation Lifecycle Management (SLM) process.

  2. New public QSAR model for carcinogenicity

    PubMed Central

    2010-01-01

    Background One of the main goals of the new chemical regulation REACH (Registration, Evaluation and Authorization of Chemicals) is to fill the gaps in data concerning properties of chemicals affecting human health. (Q)SAR models are accepted as a suitable source of information. The EU-funded CAESAR project aimed to develop models for prediction of 5 endpoints for regulatory purposes. Carcinogenicity is one of the endpoints under consideration. Results Models for prediction of carcinogenic potency according to specific requirements of chemical regulation were developed. The dataset of 805 non-congeneric chemicals extracted from the Carcinogenic Potency Database (CPDBAS) was used. The Counter Propagation Artificial Neural Network (CP ANN) algorithm was implemented. In the article two alternative models for prediction of carcinogenicity are described. The first model employed eight MDL descriptors (model A) and the second one twelve Dragon descriptors (model B). CAESAR's models have been assessed according to the OECD principles for the validation of QSAR. For model validity we used a wide series of statistical checks. Models A and B yielded accuracy on the training set (644 compounds) equal to 91% and 89%, respectively; the accuracy on the test set (161 compounds) was 73% and 69%, while the specificity was 69% and 61%, respectively. Sensitivity in both cases was equal to 75%. The accuracy of the leave-20%-out cross validation for the training set of models A and B was equal to 66% and 62%, respectively. To verify that the models perform correctly on new compounds, an external validation was carried out. The external test set was composed of 738 compounds. We obtained accuracy of external validation equal to 61.4% and 60.0%, sensitivity of 64.0% and 61.8%, and specificity equal to 58.9% and 58.4%, respectively, for models A and B.
Conclusion Carcinogenicity is a particularly important endpoint, and it is expected that QSAR models will not replace human experts' opinions and conventional methods. However, we believe that a combination of several methods will provide useful support to the overall evaluation of carcinogenicity. In the present paper, models for classification of carcinogenic compounds using MDL and Dragon descriptors were developed. The models could be used to set priorities among chemicals for further testing. The models at the CAESAR site were implemented in Java and are publicly accessible. PMID:20678182

  3. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. Dixon

    The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC Model is used solely for the validation of the THC Seepage Model and is not used for calibration to measured data.

  4. Use of genetic programming, logistic regression, and artificial neural nets to predict readmission after coronary artery bypass surgery.

    PubMed

    Engoren, Milo; Habib, Robert H; Dooner, John J; Schwann, Thomas A

    2013-08-01

    As many as 14% of patients undergoing coronary artery bypass surgery are readmitted within 30 days. Readmission is usually the result of morbidity and may lead to death. The purpose of this study was to develop and compare statistical and genetic programming models to predict readmission. Patients were divided into separate Construction and Validation populations. Using 88 variables, logistic regression, genetic programs, and artificial neural nets were used to develop predictive models. Models were first constructed and tested on the Construction population, then validated on the Validation population. Areas under the receiver operator characteristic curves (AU ROC) were used to compare the models. Two hundred and two patients (7.6%) in the 2,644-patient Construction group and 216 (8.0%) of the 2,711-patient Validation group were readmitted within 30 days of CABG surgery. Logistic regression predicted readmission with AU ROC = .675 ± .021 in the Construction group. Genetic programs significantly improved the accuracy (AU ROC = .767 ± .001, p < .001). Artificial neural nets were less accurate, with AU ROC = .597 ± .001 in the Construction group. The predictive accuracy of all three techniques fell in the Validation group. However, the accuracy of genetic programming (AU ROC = .654 ± .001) was still slightly, though not statistically significantly, better than that of logistic regression (AU ROC = .644 ± .020, p = .61). Genetic programming and logistic regression provide alternative methods to predict readmission that are similarly accurate.
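
The construction/validation workflow described in this abstract (fit a model on one cohort, then score AU ROC on a held-out cohort) can be sketched as follows. The synthetic data, feature count, and coefficients here are illustrative assumptions, not the study's 88 clinical variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# synthetic stand-in for the clinical predictor variables
X = rng.normal(size=(2000, 10))
logit = X[:, 0] + 0.5 * X[:, 1] - 1.5          # assumed true risk signal
y = rng.random(2000) < 1 / (1 + np.exp(-logit))  # binary readmission outcome

# split into Construction and Validation cohorts, mirroring the study design
X_con, X_val, y_con, y_val = train_test_split(
    X, y, test_size=0.5, random_state=0)

model = LogisticRegression().fit(X_con, y_con)
auc_con = roc_auc_score(y_con, model.predict_proba(X_con)[:, 1])
auc_val = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"Construction AU ROC: {auc_con:.3f}, Validation AU ROC: {auc_val:.3f}")
```

The drop from construction to validation AUC that the study reports is the usual optimism of in-sample evaluation, which is exactly why the held-out cohort matters.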

  5. Hydrological Validation of The Lpj Dynamic Global Vegetation Model - First Results and Required Actions

    NASA Astrophysics Data System (ADS)

    Haberlandt, U.; Gerten, D.; Schaphoff, S.; Lucht, W.

    Dynamic global vegetation models are developed with the main purpose to describe the spatio-temporal dynamics of vegetation at the global scale. Increasing concern about climate change impacts has put the focus of recent applications on the simulation of the global carbon cycle. Water is a prime driver of biogeochemical and biophysical processes, thus an appropriate representation of the water cycle is crucial for their proper simulation. However, these models usually lack thorough validation of the water balance they produce. Here we present a hydrological validation of the current version of the LPJ (Lund-Potsdam-Jena) model, a dynamic global vegetation model operating at daily time steps. Long-term simulated runoff and evapotranspiration are compared to literature values, results from three global hydrological models, and discharge observations from various macroscale river basins. It was found that the seasonal and spatial patterns of the LPJ-simulated average values correspond well both with the measurements and the results from the stand-alone hydrological models. However, a general underestimation of runoff occurs, which may be attributable to the low input dynamics of precipitation (equal distribution within a month), to the simulated vegetation pattern (potential vegetation without anthropogenic influence), and to some generalizations of the hydrological components in LPJ. Future research will focus on a better representation of the temporal variability of climate forcing, improved description of hydrological processes, and on the consideration of anthropogenic land use.

  6. Shoulder model validation and joint contact forces during wheelchair activities.

    PubMed

    Morrow, Melissa M B; Kaufman, Kenton R; An, Kai-Nan

    2010-09-17

    Chronic shoulder impingement is a common problem for manual wheelchair users. The loading associated with performing manual wheelchair activities of daily living is substantial and often at a high frequency. Musculoskeletal modeling and optimization techniques can be used to estimate the joint contact forces occurring at the shoulder to assess the soft tissue loading during an activity and to possibly identify activities and strategies that place manual wheelchair users at risk for shoulder injuries. The purpose of this study was to validate an upper extremity musculoskeletal model and apply the model to wheelchair activities for analysis of the estimated joint contact forces. Upper extremity kinematics and handrim wheelchair kinetics were measured over three conditions: level propulsion, ramp propulsion, and a weight relief lift. The experimental data were used as input to a subject-specific musculoskeletal model utilizing optimization to predict joint contact forces of the shoulder during all conditions. The model was validated using a mean absolute error calculation. Model results confirmed that ramp propulsion and weight relief lifts place the shoulder under significantly higher joint contact loading than level propulsion. In addition, they exhibit large superior contact forces that could contribute to impingement. This study highlights the potential impingement risk associated with both the ramp and weight relief lift activities. Level propulsion was shown to have a low relative risk of causing injury, but with consideration of the frequency with which propulsion is performed, this observation is not conclusive.

  7. Development and Validation of Osteoporosis Risk-Assessment Model for Korean Men

    PubMed Central

    Oh, Sun Min; Song, Bo Mi; Nam, Byung-Ho; Rhee, Yumie; Moon, Seong-Hwan; Kim, Deog Young; Kang, Dae Ryong

    2016-01-01

    Purpose The aim of the present study was to develop an osteoporosis risk-assessment model to identify high-risk individuals among Korean men. Materials and Methods The study used data from 1340 and 1110 men ≥50 years who participated in the 2009 and 2010 Korean National Health and Nutrition Examination Survey, respectively, for development and validation of an osteoporosis risk-assessment model. Osteoporosis was defined as T score ≤-2.5 at either the femoral neck or lumbar spine. Performance of the candidate models and the Osteoporosis Self-assessment Tool for Asian (OSTA) was compared with sensitivity, specificity, and area under the receiver operating characteristics curve (AUC). A net reclassification improvement was further calculated to compare the developed Korean Osteoporosis Risk-Assessment Model for Men (KORAM-M) with OSTA. Results In the development dataset, the prevalence of osteoporosis was 8.1%. KORAM-M, consisting of age and body weight, had a sensitivity of 90.8%, a specificity of 42.4%, and an AUC of 0.666 with a cut-off score of -9. In the validation dataset, similar results were shown: sensitivity 87.9%, specificity 39.7%, and AUC 0.638. Additionally, risk categorization with KORAM-M showed improved reclassification over that of OSTA up to 22.8%. Conclusion KORAM-M can be simply used as a pre-screening tool to identify candidates for dual energy X-ray absorptiometry tests. PMID:26632400
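
The net reclassification improvement reported above compares how two models re-categorize the same subjects. A minimal sketch of the categorical NRI, assuming simple integer risk categories and toy data (not the KORAM-M or OSTA scores):

```python
import numpy as np

def net_reclassification_improvement(old_cat, new_cat, event):
    """Categorical NRI: credit for moving events to a higher risk
    category and non-events to a lower one, relative to the old model."""
    old_cat, new_cat, event = (np.asarray(a) for a in (old_cat, new_cat, event))
    up = new_cat > old_cat      # reclassified to a higher risk category
    down = new_cat < old_cat    # reclassified to a lower risk category
    ev = event == 1
    nev = ~ev
    nri_events = up[ev].mean() - down[ev].mean()
    nri_nonevents = down[nev].mean() - up[nev].mean()
    return float(nri_events + nri_nonevents)

# toy example: the new model moves one event up and one non-event down
print(net_reclassification_improvement(
    [0, 0, 1, 1], [1, 0, 1, 0], [1, 0, 1, 0]))  # → 1.0
```

A positive NRI, like the up-to-22.8% improvement reported for KORAM-M over OSTA, indicates that the new categorization nets more correct moves than incorrect ones.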

  8. Quantitative studies on structure-DPPH• scavenging activity relationships of food phenolic acids.

    PubMed

    Jing, Pu; Zhao, Shu-Juan; Jian, Wen-Jie; Qian, Bing-Jun; Dong, Ying; Pang, Jie

    2012-11-01

    Phenolic acids are potent antioxidants, yet the quantitative structure-activity relationships of phenolic acids remain unclear. The purpose of this study was to establish 3D-QSAR models able to predict phenolic acids with high DPPH• scavenging activity and understand their structure-activity relationships. The model has been established by using a training set of compounds with cross-validated q2 = 0.638/0.855, non-cross-validated r2 = 0.984/0.986, standard error of estimate = 0.236/0.216, and F = 139.126/208.320 for the best CoMFA/CoMSIA models. The predictive ability of the models was validated with the correlation coefficient r2(pred) = 0.971/0.996 (>0.6) for each model. Additionally, the contour map results suggested that structural characteristics of phenolics acids favorable for the high DPPH• scavenging activity might include: (1) bulky and/or electron-donating substituent groups on the phenol ring; (2) electron-donating groups at the meta-position and/or hydrophobic groups at the meta-/ortho-position; (3) hydrogen-bond donor/electron-donating groups at the ortho-position. The results have been confirmed based on structural analyses of phenolic acids and their DPPH• scavenging data from eight recent publications. The findings may provide deeper insight into the antioxidant mechanisms and provide useful information for selecting phenolic acids for free radical scavenging properties.

  9. Calibration of a rotating accelerometer gravity gradiometer using centrifugal gradients

    NASA Astrophysics Data System (ADS)

    Yu, Mingbiao; Cai, Tijing

    2018-05-01

    The purpose of this study is to calibrate scale factors and equivalent zero biases of a rotating accelerometer gravity gradiometer (RAGG). We calibrate scale factors by determining the relationship between the centrifugal gradient excitation and RAGG response. Compared with calibration by changing the gravitational gradient excitation, this method does not need test masses and is easier to implement. The equivalent zero biases are superpositions of self-gradients and the intrinsic zero biases of the RAGG. A self-gradient is the gravitational gradient produced by surrounding masses, and it correlates well with the RAGG attitude angle. We propose a self-gradient model that includes self-gradients and the intrinsic zero biases of the RAGG. The self-gradient model is a function of the RAGG attitude, and it includes parameters related to surrounding masses. The calibration of equivalent zero biases determines the parameters of the self-gradient model. We provide detailed procedures and mathematical formulations for calibrating scale factors and parameters in the self-gradient model. A RAGG physical simulation system substitutes for the actual RAGG in the calibration and validation experiments. Four point masses simulate four types of surrounding masses producing self-gradients. Validation experiments show that the self-gradients predicted by the self-gradient model are consistent with those from the outputs of the RAGG physical simulation system, suggesting that the presented calibration method is valid.

  10. Validation of the Spanish version of Mackey childbirth satisfaction rating scale.

    PubMed

    Caballero, Pablo; Delgado-García, Beatriz E; Orts-Cortes, Isabel; Moncho, Joaquin; Pereyra-Zamora, Pamela; Nolasco, Andreu

    2016-04-16

    The "Mackey Childbirth Satisfaction Rating Scale" (MCSRS) is a complete but non-validated scale that includes the most important factors associated with maternal satisfaction. Our primary purpose was to describe the internal structure of the scale and to validate the reliability and concept validity of its Spanish version, the MCSRS-E. The MCSRS was translated into Spanish, back-translated and adapted to the Spanish population. It was then administered, following a pilot test, to women who met the study participant requirements. The scale structure was obtained by performing an exploratory factorial analysis using a sample of 304 women. The structures obtained were tested by conducting a confirmatory factorial analysis using a sample of 159 women. To test concept validity, the structure factors were correlated with expectations prior to childbirth experiences. McDonald's omega was calculated for each model to establish the reliability of each factor. The study was carried out at four university hospitals: Alicante, Elche, Torrevieja and Vinalopo Salud of Elche. The inclusion criteria were women aged 18-45 years who had just delivered a singleton live baby at 38-42 weeks through vaginal delivery. Women who had difficulty speaking and understanding Spanish were excluded. The process generated 5 different possible internal structures in a nested model more consistent with theory than other internal structures of the MCSRS applied hitherto. All of them had good levels of validation and reliability. This nested model of the internal structure of the MCSRS-E can accommodate different clinical practice scenarios better than the other structures applied to date, and it is a flexible tool which can be used to identify the aspects that should be changed to improve maternal satisfaction and hence maternal health.

  11. T2* Mapping Provides Information That Is Statistically Comparable to an Arthroscopic Evaluation of Acetabular Cartilage.

    PubMed

    Morgan, Patrick; Nissi, Mikko J; Hughes, John; Mortazavi, Shabnam; Ellerman, Jutta

    2017-07-01

    Objectives The purpose of this study was to validate T2* mapping as an objective, noninvasive method for the prediction of acetabular cartilage damage. Methods This is the second step in the validation of T2*. In a previous study, we established a quantitative predictive model for identifying and grading acetabular cartilage damage. In this study, the model was applied to a second cohort of 27 consecutive hips to validate the model. A clinical 3.0-T imaging protocol with T2* mapping was used. Acetabular regions of interest (ROI) were identified on magnetic resonance and graded using the previously established model. Each ROI was then graded in a blinded fashion by arthroscopy. Accurate surgical location of ROIs was facilitated with a 2-dimensional map projection of the acetabulum. A total of 459 ROIs were studied. Results When T2* mapping and arthroscopic assessment were compared, 82% of ROIs were within 1 Beck group (of a total 6 possible) and 32% of ROIs were classified identically. Disease prediction based on receiver operating characteristic curve analysis demonstrated a sensitivity of 0.713 and a specificity of 0.804. Model stability evaluation required no significant changes to the predictive model produced in the initial study. Conclusions These results validate that T2* mapping provides statistically comparable information regarding acetabular cartilage when compared to arthroscopy. In contrast to arthroscopy, T2* mapping is quantitative, noninvasive, and can be used in follow-up. Unlike research quantitative magnetic resonance protocols, T2* takes little time and does not require a contrast agent. This may facilitate its use in the clinical sphere.

  12. Measurements using orthodontic analysis software on digital models obtained by 3D scans of plaster casts : Intrarater reliability and validity.

    PubMed

    Czarnota, Judith; Hey, Jeremias; Fuhrmann, Robert

    2016-01-01

    The purpose of this work was to determine the reliability and validity of measurements performed on digital models with a desktop scanner and analysis software in comparison with measurements performed manually on conventional plaster casts. A total of 20 pairs of plaster casts reflecting the intraoral conditions of 20 fully dentate individuals were digitized using a three-dimensional scanner (D700; 3Shape). A series of defined parameters were measured both on the resultant digital models with analysis software (Ortho Analyzer; 3Shape) and on the original plaster casts with a digital caliper (Digimatic CD-15DCX; Mitutoyo). Both measurement series were repeated twice and analyzed for intrarater reliability based on intraclass correlation coefficients (ICCs). The results from the digital models were evaluated for their validity against the casts by calculating mean-value differences and associated 95 % limits of agreement (Bland-Altman method). Statistically significant differences were identified via a paired t test. Significant differences were obtained for 16 of 24 tooth-width measurements, for 2 of 5 sites of contact-point displacement in the mandibular anterior segment, for overbite, for maxillary intermolar distance, for Little's irregularity index, and for the summation indices of maxillary and mandibular incisor width. Overall, however, both the mean differences between the results obtained on the digital models versus on the plaster casts and the dispersion ranges associated with these differences suggest that the deviations incurred by the digital measuring technique are not clinically significant. Digital models are adequately reproducible and valid to be employed for routine measurements in orthodontic practice.
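
The Bland-Altman method used above to compare digital-model and plaster-cast measurements reduces to a bias (mean difference) and 95% limits of agreement. A minimal sketch with made-up measurement pairs (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement
    methods, per the Bland-Altman method."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()                 # systematic difference
    sd = diff.std(ddof=1)              # spread of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical caliper vs. software measurements of the same teeth, in mm
bias, loa_low, loa_high = bland_altman([8.1, 7.4, 9.0, 6.5],
                                       [8.2, 7.3, 9.2, 6.3])
print(f"bias = {bias:.3f} mm, LoA = [{loa_low:.3f}, {loa_high:.3f}] mm")
```

Clinical acceptability is then judged by whether the limits of agreement fall within a tolerance the practitioner cares about, which is the argument the abstract makes for the digital models.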

  13. Large Eddy Simulation Modeling of Flashback and Flame Stabilization in Hydrogen-Rich Gas Turbines Using a Hierarchical Validation Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clemens, Noel

    This project was a combined computational and experimental effort to improve predictive capability for boundary layer flashback of premixed swirl flames relevant to gas-turbine power plants operating with high-hydrogen-content fuels. During the course of this project, significant progress in modeling was made on four major fronts: 1) use of direct numerical simulation of turbulent flames to understand the coupling between the flame and the turbulent boundary layer; 2) improved modeling capability for flame propagation in stratified pre-mixtures; 3) improved portability of computer codes using the OpenFOAM platform to facilitate transfer to industry and other researchers; and 4) application of LES to flashback in swirl combustors, and a detailed assessment of its capabilities and limitations for predictive purposes. A major component of the project was an experimental program that focused on developing a rich experimental database of boundary layer flashback in swirl flames. Both methane and high-hydrogen fuels, including effects of elevated pressure (1 to 5 atm), were explored. For this project, a new model swirl combustor was developed. Kilohertz-rate stereoscopic PIV and chemiluminescence imaging were used to investigate the flame propagation dynamics. In addition to the planar measurements, a technique capable of detecting the instantaneous, time-resolved 3D flame front topography was developed and applied successfully to investigate the flow-flame interaction. The UT measurements and legacy data were used in a hierarchical validation approach, where flows with increasingly complex physics were used for validation. First, component models were validated with DNS and literature data in simplified configurations; this was followed by validation with the UT 1-atm flashback cases, and then the UT high-pressure flashback cases. The new models and portable code represent a major improvement over what was available before this project was initiated.

  14. Estimating the urban bias of surface shelter temperatures using upper-air and satellite data. Part 1: Development of models predicting surface shelter temperatures

    NASA Technical Reports Server (NTRS)

    Epperson, David L.; Davis, Jerry M.; Bloomfield, Peter; Karl, Thomas R.; Mcnab, Alan L.; Gallo, Kevin P.

    1995-01-01

    Multiple regression techniques were used to predict surface shelter temperatures based on the time period 1986-89 using upper-air data from the European Centre for Medium-Range Weather Forecasts (ECMWF) to represent the background climate and site-specific data to represent the local landscape. Global monthly mean temperature models were developed using data from over 5000 stations available in the Global Historical Climate Network (GHCN). Monthly maximum, mean, and minimum temperature models for the United States were also developed using data from over 1000 stations available in the U.S. Cooperative (COOP) Network and comparative monthly mean temperature models were developed using over 1150 U.S. stations in the GHCN. Three-, six-, and full-variable models were developed for comparative purposes. Inferences about the variables selected for the various models were easier for the GHCN models, which displayed month-to-month consistency in which variables were selected, than for the COOP models, which were assigned a different list of variables for nearly every month. These and other results suggest that global calibration is preferred because data from the global spectrum of physical processes that control surface temperatures are incorporated in a global model. All of the models that were developed in this study validated relatively well, especially the global models. Recalibration of the models with validation data resulted in only slightly poorer regression statistics, indicating that the calibration list of variables was valid. Predictions using data from the validation dataset in the calibrated equation were better for the GHCN models, and the globally calibrated GHCN models generally provided better U.S. predictions than the U.S.-calibrated COOP models. Overall, the GHCN and COOP models explained approximately 64%-95% of the total variance of surface shelter temperatures, depending on the month and the number of model variables. 
In addition, root-mean-square errors (RMSEs) were over 3 °C for GHCN models and over 2 °C for COOP models for winter months, and near 2 °C for GHCN models and near 1.5 °C for COOP models for summer months.

  15. [The Amsterdam wrist rules: the multicenter prospective derivation and external validation of a clinical decision rule for the use of radiography in acute wrist trauma].

    PubMed

    Walenkamp, Monique M J; Bentohami, Abdelali; Slaar, Annelie; Beerekamp, M S H Suzan; Maas, Mario; Jager, L C Cara; Sosef, Nico L; van Velde, Romuald; Ultee, Jan M; Steyerberg, Ewout W; Goslings, J C Carel; Schep, Niels W L

    2016-01-01

    Although only 39% of patients with wrist trauma have sustained a fracture, the majority of patients are routinely referred for radiography. The purpose of this study was to derive and externally validate a clinical decision rule that selects patients with acute wrist trauma in the Emergency Department (ED) for radiography. This multicenter prospective study consisted of three components: (1) derivation of a clinical prediction model for detecting wrist fractures in patients following wrist trauma; (2) external validation of this model; and (3) design of a clinical decision rule. The study was conducted in the EDs of five Dutch hospitals: one academic hospital (derivation cohort) and four regional hospitals (external validation cohort). We included all adult patients with acute wrist trauma. The main outcome was fracture of the wrist (distal radius, distal ulna or carpal bones) diagnosed on conventional X-rays. A total of 882 patients were analyzed: 487 in the derivation cohort and 395 in the validation cohort. We derived a clinical prediction model with eight variables: age; sex; swelling of the wrist; swelling of the anatomical snuffbox; visible deformation; distal radius tender to palpation; pain on radial deviation; and painful axial compression of the thumb. The area under the curve at external validation of this model was 0.81 (95% CI: 0.77-0.85). The sensitivity and specificity of the Amsterdam Wrist Rules (AWR) in the external validation cohort were 98% (95% CI: 95-99%) and 21% (95% CI: 15-28%). The negative predictive value was 90% (95% CI: 81-99%). The Amsterdam Wrist Rules is a clinical prediction rule with a high sensitivity and negative predictive value for fractures of the wrist. Although external validation showed low specificity and 100% sensitivity could not be achieved, the Amsterdam Wrist Rules can provide physicians in the Emergency Department with a useful screening tool to select patients with acute wrist trauma for radiography.
The upcoming implementation study will further reveal the impact of the Amsterdam Wrist Rules on the anticipated reduction of X-rays requested, missed fractures, Emergency Department waiting times and health care costs.
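
The screening statistics quoted above follow directly from a 2x2 confusion table. A minimal sketch with hypothetical counts (not the AWR validation cohort's actual table):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion-table counts."""
    sensitivity = tp / (tp + fn)  # fraction of fractures the rule catches
    specificity = tn / (tn + fp)  # fraction of non-fractures ruled out
    ppv = tp / (tp + fp)          # P(fracture | rule positive)
    npv = tn / (tn + fn)          # P(no fracture | rule negative)
    return sensitivity, specificity, ppv, npv

# hypothetical counts per 200 patients, chosen only for illustration
sens, spec, ppv, npv = screening_metrics(tp=98, fp=79, fn=2, tn=21)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}")
```

For a rule-out screening tool like the AWR, high sensitivity and NPV are the priorities: a negative result should make a missed fracture very unlikely, even at the cost of low specificity.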

  16. Personal Accountability in Education: Measure Development and Validation

    ERIC Educational Resources Information Center

    Rosenblatt, Zehava

    2017-01-01

    Purpose: The purpose of this paper, three-study research project, is to establish and validate a two-dimensional scale to measure teachers' and school administrators' accountability disposition. Design/methodology/approach: The scale items were developed in focus groups, and the final measure was tested on various samples of Israeli teachers and…

  17. Convergent and divergent validity of the Mullen Scales of Early Learning in young children with and without autism spectrum disorder.

    PubMed

    Swineford, Lauren B; Guthrie, Whitney; Thurm, Audrey

    2015-12-01

    The purpose of this study was to report on the construct, convergent, and divergent validity of the Mullen Scales of Early Learning (MSEL), a widely used test of development for young children. The sample consisted of 399 children with a mean age of 3.38 years (SD = 1.14) divided into a group of children with autism spectrum disorder (ASD) and a group of children not on the autism spectrum, with and without developmental delays. The study used the MSEL and several other measures assessing constructs relevant to the age range--including developmental skills, autism symptoms, and psychopathology symptoms--across multiple methods of assessment. Multiple-group confirmatory factor analyses revealed good overall fit and equal form of the MSEL 1-factor model across the ASD and nonspectrum groups, supporting the construct validity of the MSEL. However, neither full nor partial invariance of factor loadings was established because of the lower loadings in the ASD group compared with the nonspectrum group. Exploratory structural equation modeling revealed that other measures of developmental skills loaded together with the MSEL domain scores on a Developmental Functioning factor, supporting convergent validity of the MSEL. Divergent validity was supported by the lack of loading of MSEL domain scores on Autism Symptoms or Emotion/Behavior Problems factors. Although factor structure and loadings varied across groups, convergent and divergent validity findings were similar in the ASD and nonspectrum samples. Together, these results demonstrate evidence for the construct, convergent, and divergent validity of the MSEL using powerful data-analytic techniques. (c) 2015 APA, all rights reserved.

  18. Explicit robust schemes for implementation of general principal value-based constitutive models

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Saleeb, A. F.; Tan, H. Q.; Zhang, Y.

    1993-01-01

    The issue of developing effective and robust schemes to implement general hyperelastic constitutive models is addressed. To this end, special purpose functions are used to symbolically derive, evaluate, and automatically generate the associated FORTRAN code for the explicit forms of the corresponding stress function and material tangent stiffness tensors. These explicit forms are valid for the entire deformation range. The analytical form of these explicit expressions is given here for the case in which the strain-energy potential is taken as a nonseparable polynomial function of the principal stretches.

  19. Aircraft interior noise models - Sidewall trim, stiffened structures, and cabin acoustics with floor partition

    NASA Technical Reports Server (NTRS)

    Pope, L. D.; Wilby, E. G.; Willis, C. M.; Mayes, W. H.

    1983-01-01

    As part of the continuing development of an aircraft interior noise prediction model, in which a discrete modal representation and power flow analysis are used, theoretical results are considered for inclusion of sidewall trim, stiffened structures, and cabin acoustics with floor partition. For validation purposes, predictions of the noise reductions for three test articles (a bare ring-stringer stiffened cylinder, an unstiffened cylinder with floor and insulation, and a ring-stringer stiffened cylinder with floor and sidewall trim) are compared with measurements.

  20. Objectifying Content Validity: Conducting a Content Validity Study in Social Work Research.

    ERIC Educational Resources Information Center

    Rubio, Doris McGartland; Berg-Weger, Marla; Tebb, Susan S.; Lee, E. Suzanne; Rauch, Shannon

    2003-01-01

    The purpose of this article is to demonstrate how to conduct a content validity study. Instructions on how to calculate a content validity index, factorial validity index, and an interrater reliability index and guide for interpreting these indices are included. Implications regarding the value of conducting a content validity study for…

  1. Validity of Learning Module Natural Sciences Oriented Constructivism with the Contain of Character Education for Students of Class VIII at Yunior Hight School

    NASA Astrophysics Data System (ADS)

    Oktarina, K.; Lufri, L.; Chatri, M.

    2018-04-01

    Primary data collected through observation of and interviews with natural science teachers and students indicate that there are no natural science teaching materials in the form of learning modules that help learners study independently, build their own knowledge, and develop good character. To address this problem, a natural science learning module oriented to constructivism and containing character education was developed. The purpose of this study is to produce a valid natural science learning module. This is development research using the Plomp model, whose development phase consists of 3 stages, namely 1) the preliminary research phase, 2) the development or prototyping phase, and 3) the assessment phase. The results show that the natural science learning module oriented to constructivism with character education content for students of class VIII at Yunior High School 11 Sungai Penuh is valid. In future work, practicality and effectiveness will be investigated.

  2. From control to causation: Validating a 'complex systems model' of running-related injury development and prevention.

    PubMed

    Hulme, A; Salmon, P M; Nielsen, R O; Read, G J M; Finch, C F

    2017-11-01

    There is a need for an ecological and complex systems approach to better understand the development and prevention of running-related injury (RRI). In a previous article, we proposed a prototype model of the Australian recreational distance running system based on the Systems Theoretic Accident Mapping and Processes (STAMP) method. That model included the influence of political, organisational, managerial, and sociocultural determinants alongside individual-level factors in relation to RRI development. The purpose of this study was to validate that prototype model by drawing on the expertise of both systems thinking and distance running experts. This study used a modified Delphi technique involving a series of online surveys (December 2016 - March 2017). The initial survey was divided into four sections containing a total of seven questions pertaining to different features of the prototype model. Consensus about the validity of the prototype model was reached when the number of experts who agreed or disagreed with a survey statement was ≥75% of the total number of respondents. Two Delphi rounds were needed to validate the prototype model. Of the 51 experts initially contacted, 50.9% (n = 26) completed the first round of the Delphi, and 92.3% (n = 24) of those participated in the second. Most of the 24 full participants considered themselves to be running experts (66.7%), and approximately a third indicated expertise as systems thinkers (33.3%). After the second round, 91.7% of the experts agreed that the prototype model was a valid description of the Australian distance running system. This is the first study to formally examine the development and prevention of RRI from an ecological and complex systems perspective.
The validated model of the Australian distance running system facilitates theoretical advancement in terms of identifying practical system-wide opportunities for the implementation of sustainable RRI prevention interventions. This 'big picture' perspective represents the first step required when thinking about the range of contributory causal factors that affect other system elements, as well as runners' behaviours in relation to RRI risk. Copyright © 2017 Elsevier Ltd. All rights reserved.
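    The Delphi consensus criterion described above reduces to a simple threshold check per statement. A minimal sketch, with the "agree"/"disagree" vote labels assumed for illustration:

```python
# Consensus is reached when >= 75% of respondents either agree or disagree
# with a survey statement (the stopping rule described in the abstract).
def consensus_reached(votes, threshold=0.75):
    """votes: list of 'agree' / 'disagree' / 'neutral' responses."""
    n = len(votes)
    agree = votes.count("agree") / n
    disagree = votes.count("disagree") / n
    return agree >= threshold or disagree >= threshold

# 22 of 24 experts agreeing (~92%) clears the 75% bar:
print(consensus_reached(["agree"] * 22 + ["disagree"] * 2))
```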

  3. An estimation of the main wetting branch of the soil water retention curve based on its main drying branch using the machine learning method

    NASA Astrophysics Data System (ADS)

    Lamorski, Krzysztof; Šimūnek, Jiří; Sławiński, Cezary; Lamorska, Joanna

    2017-02-01

    In this paper, we used machine learning methods to estimate the main wetting branch of the soil water retention curve (SWRC) from knowledge of the main drying branch and, optionally, basic soil characteristics (particle size distribution, bulk density, organic matter content, or soil specific surface). The support vector machine algorithm was used for model development. The data needed by this algorithm for model training and validation consisted of 104 different undisturbed soil core samples collected from the topsoil layer (A horizon) of different soil profiles in Poland. The main wetting and drying branches of the SWRC, as well as other basic soil physical characteristics, were determined for all soil samples. Models relying on different sets of input parameters were developed and validated. The analysis showed that input parameters other than information about the drying branch of the SWRC (i.e., particle size distribution, bulk density, organic matter content, or soil specific surface) have essentially no impact on the models' estimates. The developed models are validated and compared with well-known models that can be used for the same purpose, namely the Mualem (1977) (M77) and Kool and Parker (1987) (KP87) models. The developed models estimate the main wetting SWRC branch with estimation errors (RMSE = 0.018 m3/m3) that are significantly lower than those of the M77 (RMSE = 0.025 m3/m3) and KP87 (RMSE = 0.047 m3/m3) models.
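    The workflow above (train a support vector regressor on drying-branch water contents, validate on held-out samples, report RMSE in m3/m3) can be sketched as follows. This is not the authors' code; the data here are synthetic, with an assumed linear hysteresis shift standing in for measured retention curves:

```python
# Sketch: estimate wetting-branch water content from the drying branch with
# an SVM regressor, then compute validation RMSE. All data are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
theta_dry = rng.uniform(0.05, 0.45, size=(200, 1))             # drying-branch theta
theta_wet = theta_dry[:, 0] * 0.9 + rng.normal(0, 0.01, 200)   # assumed hysteresis shift

# Train on 150 samples, validate on the remaining 50.
model = SVR(kernel="rbf", C=10.0, epsilon=0.005)
model.fit(theta_dry[:150], theta_wet[:150])
pred = model.predict(theta_dry[150:])
rmse = mean_squared_error(theta_wet[150:], pred) ** 0.5
print(f"validation RMSE = {rmse:.3f} m3/m3")
```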

  4. Inter-Disciplinary Collaboration in Support of the Post-Standby TREAT Mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark; Baker, Benjamin; Ortensi, Javier

    Although analysis methods have advanced significantly in the last two decades, high fidelity multi-physics methods for reactor systems have been under development for only a few years and are not presently mature or deployed. Furthermore, very few methods provide the ability to simulate rapid transients in three dimensions. Data for validation of advanced time-dependent multi-physics methods are sparse; at TREAT, historical data were not collected for the purpose of validating three-dimensional methods, let alone multi-physics simulations. Existing data continue to be collected to attempt to simulate the behavior of experiments and calibration transients, but they will be insufficient for the complete validation of analysis methods used for TREAT transient simulations. Hence, a 2018 restart will most likely occur without the direct application of advanced modeling and simulation methods. At present, the INL modeling and simulation team plans to work with TREAT operations staff in performing reactor simulations with MAMMOTH, in parallel with the software packages currently being used in preparation for core restart (e.g., MCNP5, RELAP5, ABAQUS). The TREAT team has also requested specific measurements to be performed during startup testing, currently scheduled to run from February to August of 2018. These startup measurements will be crucial in validating the new analysis methods in preparation for their ultimate application to TREAT operations and experiment design. This document describes the collaboration between the modeling and simulation staff and the restart, operations, instrumentation, and experiment development teams needed to interact effectively and achieve successful validation work during restart testing.

  5. Rural Parents’ Perceived Stigma of Seeking Mental Health Services for their Children: Development and Evaluation of a New Instrument

    PubMed Central

    Williams, Stacey L.; Polaha, Jodi

    2014-01-01

    The purpose of this paper was to examine the validity of score interpretations of an instrument developed to measure parents’ perceptions of stigma about seeking mental health services for their children. The validity of the score interpretations was tested in two studies. Study 1 conducted confirmatory factor analysis (CFA) employing a split-half approach and examined construct and criterion validity using a sample of parents in rural Appalachia whose children were experiencing psychosocial concerns (N=347), while Study 2 further examined CFA, construct and criterion validity, as well as predictive validity of scores on the new scale, using a general sample of parents in rural Appalachia (N=184). Results of exploratory and confirmatory factor analyses revealed support for a two-factor model of parents’ perceived stigma, representing both self and public forms of stigma associated with seeking mental health services for their children, which correlated with existing measures of stigma and other psychosocial variables. Further, the new self and public stigma scale significantly predicted parents’ willingness to seek services for their children. PMID:24749752

  6. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

    Chu, Shao-Sheng R.; Allen, Christopher S.

    2010-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and to predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment were developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons were made with the model and showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is opposed to earlier studies where Reference Sound Sources (RSS) with known sound power level were used. Comparisons of the modeling result with the measurements in the mockup showed excellent results. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between ECLSS wall and mockup wall. The effect of sealing the gap and adding sound absorptive treatment to ECLSS wall were also modeled and validated.

  7. Adulteration of Argentinean milk fats with animal fats: Detection by fatty acids analysis and multivariate regression techniques.

    PubMed

    Rebechi, S R; Vélez, M A; Vaira, S; Perotti, M C

    2016-02-01

    The aims of the present study were to test the accuracy of the fatty acid ratios established by the Argentinean Legislation to detect adulterations of milk fat with animal fats and to propose a regression model suitable for evaluating these adulterations. For this purpose, 70 milk fat, 10 tallow and 7 lard fat samples were collected and analyzed by gas chromatography. The data were used to simulate arithmetically adulterated milk fat samples at 0%, 2%, 5%, 10% and 15%, for both animal fats. The fatty acid ratios failed to distinguish adulterated milk fats containing less than 15% of tallow or lard. For each adulterant, Multiple Linear Regression (MLR) was applied, and a model was chosen and validated. For that, calibration and validation matrices were constructed employing genuine and adulterated milk fat samples. The models were able to detect adulterations of milk fat at levels greater than 10% for tallow and 5% for lard. Copyright © 2015 Elsevier Ltd. All rights reserved.
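    The MLR step above regresses the adulteration level on fatty acid composition. A hedged sketch of that idea with ordinary least squares; the two "fatty-acid" features below are simulated stand-ins, not the study's measured GC profiles:

```python
# Sketch: predict % adulteration from fatty-acid-like features via
# multiple linear regression (ordinary least squares). Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
levels = np.repeat([0, 2, 5, 10, 15], 20).astype(float)   # % tallow added
X = np.column_stack([
    3.0 + 0.08 * levels + rng.normal(0, 0.05, levels.size),  # feature rising with adulteration
    1.5 - 0.03 * levels + rng.normal(0, 0.05, levels.size),  # feature falling with adulteration
    np.ones(levels.size),                                    # intercept column
])
coef, *_ = np.linalg.lstsq(X, levels, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - levels) ** 2)))
print(f"fit RMSE = {rmse:.2f} % adulteration")
```

    With clean calibration data the regression recovers the adulteration level to well under the 5-10% detection limits the study reports; real GC data carry far more correlated noise.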

  8. Generalized second law of thermodynamics in f(T,TG) gravity

    NASA Astrophysics Data System (ADS)

    Zubair, M.; Jawad, Abdul

    2015-11-01

    We discuss the equilibrium picture of thermodynamics at the apparent horizon of the FRW universe in f(T,TG) gravity, where T represents the torsion invariant and TG is the teleparallel equivalent of the Gauss-Bonnet term. It is found that one can translate the Friedmann equations into the standard form of the first law of thermodynamics. We discuss the generalized second law of thermodynamics (GSLT) under the assumption that the temperature of matter inside the horizon is the same as that of the apparent horizon. Furthermore, we consider particular models in this theory and generate constraints on the coupling parameters for the validity of the GSLT. For this purpose we set present-day values of the cosmic parameters and find the possible constraints on f(T,TG) models. We also consider power-law cosmology and find that the GSLT can be satisfied during accelerated cosmic expansion. We have also presented the cosmological reconstruction of some viable f(T,TG) models and discussed the cosmic evolution and validity of the GSLT.

  9. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  10. Radiant Energy Measurements from a Scaled Jet Engine Axisymmetric Exhaust Nozzle for a Baseline Code Validation Case

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1994-01-01

    A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.

  11. Assessment of MARMOT. A Mesoscale Fuel Performance Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonks, M. R.; Schwen, D.; Zhang, Y.

    2015-04-01

    MARMOT is the mesoscale fuel performance code under development as part of the US DOE Nuclear Energy Advanced Modeling and Simulation Program. In this report, we provide a high-level summary of MARMOT, its capabilities, and its current state of validation. The purpose of MARMOT is to predict the coevolution of microstructure and material properties of nuclear fuel and cladding. It accomplishes this using the phase field method coupled to solid mechanics and heat conduction. MARMOT is based on the Multiphysics Object-Oriented Simulation Environment (MOOSE), and much of its basic capability in the areas of the phase field method, mechanics, and heat conduction comes directly from MOOSE modules. However, additional capability specific to fuel and cladding is available in MARMOT. While some validation of MARMOT has been completed in the areas of fission gas behavior and grain growth, much more validation needs to be conducted, and new mesoscale data need to be obtained in order to complete it.

  12. Development of diagnostic test instruments to reveal level student conception in kinematic and dynamics

    NASA Astrophysics Data System (ADS)

    Handhika, J.; Cari, C.; Suparmi, A.; Sunarno, W.; Purwandari, P.

    2018-03-01

    The purpose of this research was to develop a diagnostic test instrument to reveal students' conceptions in kinematics and dynamics. The diagnostic test was developed based on content indicators for the concepts of (1) displacement and distance, (2) instantaneous and average velocity, (3) zero and constant acceleration, (4) gravitational acceleration, (5) Newton's first law, and (6) Newton's third law. The development model included: analysis of diagnostic test requirements, formulation of test objectives, test development, checking of content validity and reliability, and application of the test. The Content Validity Index (CVI) of 0.85 falls in the highly relevant category. Three questions received a negative Content Validity Ratio (CVR) of -0.6; after the distractors were revised and the visual presentation clarified, their CVR became 1 (highly relevant). When the test was applied, 16 valid test items were obtained, with a Cronbach's alpha of 0.80. It can be concluded that the diagnostic test can be used to reveal the level of students' conceptions in kinematics and dynamics.
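    The CVR values quoted above (-0.6 before revision, 1 after) follow from Lawshe's content validity ratio, CVR = (n_e - N/2) / (N/2), where n_e raters judge an item essential/relevant out of N. A minimal sketch, assuming a 5-expert panel for illustration:

```python
# Lawshe's content validity ratio: ranges from -1 (no rater finds the item
# relevant) to +1 (all raters do); 0 means exactly half the panel does.
def content_validity_ratio(n_essential, n_experts):
    half = n_experts / 2
    return (n_essential - half) / half

print(content_validity_ratio(1, 5))   # 1 of 5 experts -> -0.6
print(content_validity_ratio(5, 5))   # all 5 experts  ->  1.0
```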

  13. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE PAGES

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir; ...

    2017-08-19

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  14. SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melchior, M; Salinas Aranda, F; 21st Century Oncology, Ft. Myers, FL

    2014-06-01

    Purpose: To automatically validate megavoltage beams modeled in XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ Treatment Planning Systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed using the MATLAB integrated development environment. TPS-calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. The experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate the fit of the dose distributions: gamma analysis and the point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. The tolerance parameters chosen for gamma analysis were 3% dose and 3 mm distance, respectively. Absolute dose was measured independently at the points proposed in Appendix E of TECDOC-1583 to validate the software results. Results: TPS-calculated depth dose distributions agree with measured beam data within fixed precision values at all depths analyzed. Measured beam dose profiles match TPS-calculated doses with high accuracy in both open and wedged beams. Depth and profile dose distribution fitting analysis shows gamma values < 1. Relative errors at the points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at these points confirm the software results. Conclusion: Automatic validation of megavoltage beams modeled for clinical use was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether or not to accept a beam model for clinical use. Validation time before beam-on for clinical use was reduced to a few hours.
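    The 3%/3 mm gamma criterion used above combines a dose-difference and a distance-to-agreement test into a single index per point; a point passes when the minimum combined metric is ≤ 1. A simplified 1-D sketch with a toy depth-dose curve (real tools interpolate much more finely and handle 2-D/3-D grids):

```python
# Simplified 1-D gamma analysis with a 3% (global) / 3 mm criterion.
import numpy as np

def gamma_pass_rate(pos_mm, ref_dose, eval_dose, dd=0.03, dta_mm=3.0):
    d_max = ref_dose.max()                     # global dose normalization
    passes = []
    for x_i, d_i in zip(pos_mm, ref_dose):
        # gamma at one reference point: min over all evaluation points of
        # the combined (distance, dose-difference) metric
        dist = (pos_mm - x_i) / dta_mm
        dose = (eval_dose - d_i) / (dd * d_max)
        gamma = np.sqrt(dist ** 2 + dose ** 2).min()
        passes.append(gamma <= 1.0)
    return float(np.mean(passes))

x = np.linspace(0, 100, 101)                   # depth in mm
ref = np.exp(-x / 80.0)                        # toy reference depth-dose curve
eva = 1.01 * np.exp(-(x + 0.5) / 80.0)         # slightly shifted/scaled evaluation
print(f"gamma pass rate: {gamma_pass_rate(x, ref, eva):.2%}")
```

    A sub-millimeter shift and ~1% scaling sit well inside 3%/3 mm, so this toy comparison passes everywhere; gross modeling errors push gamma above 1 at many points.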

  15. Proposed Modifications to the Conceptual Model of Coaching Efficacy and Additional Validity Evidence for the Coaching Efficacy Scale II-High School Teams

    ERIC Educational Resources Information Center

    Myers, Nicholas; Feltz, Deborah; Chase, Melissa

    2011-01-01

    The purpose of this study was to determine whether theoretically relevant sources of coaching efficacy could predict the measures derived from the Coaching Efficacy Scale II-High School Teams (CES II-HST). Data were collected from head coaches of high school teams in the United States (N = 799). The analytic framework was a multiple-group…

  16. Why Do College Students Cheat? A Structural Equation Modeling Validation of the Theory of Planned Behavior

    ERIC Educational Resources Information Center

    AL-Dossary, Saeed Abdullah

    2017-01-01

    Cheating on tests is a serious problem in education. The purpose of this study was to test the efficacy of a modified form of the theory of planned behavior (TPB) to predict cheating behavior among a sample of Saudi university students. This study also sought to test the influence of cheating in high school on cheating in college within the…

  17. A Capstone Project Using the Gap Analysis Model: Closing the College Readiness Gap for Latino English Language Learners with a Focus on School Support and School Counseling Resources

    ERIC Educational Resources Information Center

    Jimenez, Evelyn

    2013-01-01

    This capstone project applied Clark and Estes' (2008) gap analysis framework to identify performance gaps, develop perceived root causes, validate the causes, and formulate research-based solutions to present to Trojan High School. The purpose was to examine ways to increase the academic achievement of ELL students, specifically Latinos, by…

  18. Examining the Predictive Validity of a Dynamic Assessment of Decoding to Forecast Response to Tier 2 Intervention

    ERIC Educational Resources Information Center

    Cho, Eunsoo; Compton, Donald L.; Fuchs, Douglas; Fuchs, Lynn S.; Bouton, Bobette

    2014-01-01

    The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small-group tutoring in a response-to-intervention model. First grade students (n = 134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in…

  19. The 2-MEV model: Constancy of adolescent environmental values within an 8-year time frame

    NASA Astrophysics Data System (ADS)

    Bogner, F. X.; Johnson, B.; Buxner, S.; Felix, L.

    2015-08-01

    The 2-MEV model is a widely used tool for monitoring children's environmental perception by scoring individual values. Although the scale's validity has been confirmed repeatedly and independently, and the scale is in use in more than two dozen language versions all over the world, its longitudinal properties still need clarification. The purpose of the present study therefore was to validate the 2-MEV scale on a large data basis of 10,676 children collected over an eight-year period. Cohorts from three different US states contributed to the sample by responding to a paper-and-pencil questionnaire as part of pre-test initiatives in the context of field center programs. Since we used only the pre-program 2-MEV scale results (i.e., before participation in education programs), the data were clearly unaffected by any follow-up interventions. The purpose of the analysis was fourfold: first, to test and confirm the hypothesized factor structure for the large data set and for the subsample of each of the three states; second, to analyze the scoring pattern across the eight-year time range for both preservation and utilitarian preferences; third, to investigate any age effects in the extracted factors; and finally, to extract suitable recommendations for educational implementation efforts.

  20. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    PubMed

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using confirmatory factor analysis. The two separate single models of performance and entrepreneurial orientations fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies, we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.
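    The scale reliabilities quoted above (Cronbach's alpha 0.60-0.83) come from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with made-up item scores, not the study's data:

```python
# Cronbach's alpha for a k-item scale; rows = respondents, columns = items.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Five made-up respondents answering a 3-item scale on a 1-5 format:
scores = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [1, 2, 2]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

    Alpha rises when items covary strongly (respondents who score high on one item score high on the others), which is why internally consistent scales land in the 0.6-0.9 range.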

  1. Effects of Small-Group Tutoring with and without Validated Classroom Instruction on At-Risk Students' Math Problem Solving: Are Two Tiers of Prevention Better Than One?

    PubMed

    Fuchs, Lynn S; Fuchs, Douglas; Craddock, Caitlin; Hollenbeck, Kurstin N; Hamlett, Carol L; Schatschneider, Christopher

    2008-01-01

    The purpose of this study was to assess the effects of small-group tutoring with and without validated classroom instruction on at-risk (AR) students' math problem solving. Stratifying within schools, 119 3rd-grade classes were randomly assigned to conventional or validated problem-solving instruction (Hot Math [schema-broadening instruction]). Students identified as AR (n = 243) were randomly assigned, within classroom conditions, to receive Hot Math tutoring or not. Students were tested on problem-solving and math applications measures before and after 16 weeks of intervention. Analyses of variance, which accounted for the nested structure of the data, revealed that tutored students who received validated classroom instruction achieved better than tutored students who received conventional classroom instruction (ES = 1.34). However, the advantage of tutoring over no tutoring was similar whether students received validated or conventional classroom instruction (ESs = 1.18 and 1.13). Tutoring, not validated classroom instruction, reduced the prevalence of math difficulty. Implications for responsiveness-to-intervention prevention models and for enhancing math problem-solving instruction are discussed.

  3. Measurement of Function Post Hip Fracture: Testing a Comprehensive Measurement Model of Physical Function

    PubMed Central

    Gruber-Baldini, Ann L.; Hicks, Gregory; Ostir, Glen; Klinedinst, N. Jennifer; Orwig, Denise; Magaziner, Jay

    2015-01-01

    Background: Measurement of physical function post hip fracture has been conceptualized using multiple different measures. Purpose: This study tested a comprehensive measurement model of physical function. Design: This was a descriptive secondary data analysis including 168 men and 171 women post hip fracture. Methods: Using structural equation modeling, a measurement model of physical function, which included grip strength, activities of daily living, instrumental activities of daily living, and performance, was tested for fit at 2 and 12 months post hip fracture and among male and female participants. Validity of the measurement model was evaluated based on how well the model explained physical activity, exercise, and social activities post hip fracture. Findings: The measurement model of physical function fit the data. The amount of variance explained by the model, or by its individual factors, varied depending on the activity. Conclusion: Decisions about the ideal way in which to measure physical function should be based on the outcomes considered and the participants. Clinical Implications: The measurement model of physical function is a reliable and valid method to comprehensively measure physical function across the hip fracture recovery trajectory. Practical but useful assessment of function should be considered and monitored over the recovery trajectory post hip fracture. PMID:26492866

  4. Asymptotic behaviour of two-point functions in multi-species models

    NASA Astrophysics Data System (ADS)

    Kozlowski, Karol K.; Ragoucy, Eric

    2016-05-01

    We extract the long-distance asymptotic behaviour of two-point correlation functions in massless quantum integrable models containing multi-species excitations. For such a purpose, we extend to these models the method of a large-distance regime re-summation of the form factor expansion of correlation functions. The key feature of our analysis is a technical hypothesis on the large-volume behaviour of the form factors of local operators in such models. We check the validity of this hypothesis on the example of the SU(3)-invariant XXX magnet by means of the determinant representations for the form factors of local operators in this model. Our approach confirms the structure of the critical exponents obtained previously for numerous models solvable by the nested Bethe Ansatz.

  5. The Unmanned Aerial System SUMO: an alternative measurement tool for polar boundary layer studies

    NASA Astrophysics Data System (ADS)

    Mayer, S.; Jonassen, M. O.; Reuder, J.

    2012-04-01

    Numerical weather prediction and climate models face special challenges in the commonly stable conditions of the high-latitude environment. For process studies as well as for model validation purposes, in-situ observations in the atmospheric boundary layer are highly needed but difficult to obtain. We introduce a new measurement system for such observations. The Small Unmanned Meteorological Observer (SUMO) consists of a small, light-weight, auto-piloted model aircraft equipped with a meteorological sensor package. SUMO has been operated in polar environments, among others during the International Polar Year (IPY) on Spitsbergen in 2009, and has proven its capability for atmospheric measurements with high spatial and temporal resolution even at temperatures of -30 °C. A comparison of SUMO data with radiosondes and tethered balloons shows that SUMO can provide atmospheric profiles of comparable quality to those well-established systems. Its high data quality allowed its use for evaluating high-resolution model runs performed with the Weather Research and Forecasting (WRF) model and for the detailed investigation of an orographically modified flow in a case study.

  6. Reflective Thinking Scale: A Validity and Reliability Study

    ERIC Educational Resources Information Center

    Basol, Gulsah; Evin Gencel, Ilke

    2013-01-01

    The purpose of this study was to adapt the Reflective Thinking Scale to Turkish and investigate its validity and reliability in a sample of Turkish university students. The Reflective Thinking Scale (RTS) is a 5-point Likert scale (ranging from 1, corresponding to Agree Completely, through 3, Neutral, to 5, Not Agree Completely), intended to measure reflective…

  7. Learning Transfer--Validation of the Learning Transfer System Inventory in Portugal

    ERIC Educational Resources Information Center

    Velada, Raquel; Caetano, Antonio; Bates, Reid; Holton, Ed

    2009-01-01

    Purpose: The purpose of this paper is to analyze the construct validity of learning transfer system inventory (LTSI) for use in Portugal. Furthermore, it also aims to analyze whether LTSI dimensions differ across individual variables such as gender, age, educational level and job tenure. Design/methodology/approach: After a rigorous translation…

  8. Cross-Validation of FITNESSGRAM® Health-Related Fitness Standards in Hungarian Youth

    ERIC Educational Resources Information Center

    Laurson, Kelly R.; Saint-Maurice, Pedro F.; Karsai, István; Csányi, Tamás

    2015-01-01

    Purpose: The purpose of this study was to cross-validate FITNESSGRAM® aerobic and body composition standards in a representative sample of Hungarian youth. Method: A nationally representative sample (N = 405) of Hungarian adolescents from the Hungarian National Youth Fitness Study (ages 12-18.9 years) participated in an aerobic capacity assessment…

  9. The Development and Validation of the Age-Based Rejection Sensitivity Questionnaire

    ERIC Educational Resources Information Center

    Kang, Sonia K.; Chasteen, Alison L.

    2009-01-01

    Purpose: There is much evidence suggesting that older adults are often negatively affected by aging stereotypes; however, no method to identify individual differences in vulnerability to these effects has yet been developed. The purpose of this study was to develop a reliable and valid questionnaire to measure individual differences in the…

  10. Factorial Validity and Psychometric Examination of the Exercise Dependence Scale-Revised

    ERIC Educational Resources Information Center

    Downs, Danielle Symons; Hausenblas, Heather A.; Nigg, Claudio R.

    2004-01-01

    The research purposes were to examine the factorial and convergent validity, internal consistency, and test-retest reliability of the Exercise Dependence Scale (EDS). Two separate studies, containing a total of 1,263 college students, were undertaken to accomplish these purposes. Participants completed the EDS and measures of exercise behavior and…

  11. Development of, and initial validity evidence for, the referee self-efficacy scale: a multistudy report.

    PubMed

    Myers, Nicholas D; Feltz, Deborah L; Guillén, Félix; Dithurbide, Lori

    2012-12-01

    The purpose of this multistudy report was to develop, and then to provide initial validity evidence for measures derived from, the Referee Self-Efficacy Scale. Data were collected from referees (N = 1609) in the United States (n = 978) and Spain (n = 631). In Study 1 (n = 512), a single-group exploratory structural equation model provided evidence for four factors: game knowledge, decision making, pressure, and communication. In Study 2 (n = 1153), multiple-group confirmatory factor analytic models provided evidence for partial factorial invariance by country, level of competition, team gender, and sport refereed. In Study 3 (n = 456), potential sources of referee self-efficacy information combined to account for a moderate or large amount of variance in each dimension of referee self-efficacy with years of referee experience, highest level refereed, physical/mental preparation, and environmental comfort, each exerting at least two statistically significant direct effects.

  12. A verification and validation effort for high explosives at Los Alamos National Lab (u)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scovel, Christina A; Menikoff, Ralph S

    2009-01-01

    We have started a project to verify and validate ASC codes used to simulate detonation waves in high explosives. Since there are no non-trivial analytic solutions, we will compare simulated results with experimental data that cover a wide range of explosive phenomena. The intent is to compare both different codes and different high explosive (HE) models. The first step is to test the products equation of state used for the HE models. For this purpose, the cylinder test, flyer plate, and plate-push experiments are being used. These experiments sample different regimes in thermodynamic phase space: the CJ isentrope for the cylinder tests, the isentrope behind an overdriven detonation wave for the flyer plate experiment, and expansion following a reflected CJ detonation for the plate-push experiment, which is sensitive to the Gruneisen coefficient. The results of our findings for PBX 9501 are presented here.

  13. Design and development of a cross-cultural disposition inventory

    NASA Astrophysics Data System (ADS)

    Davies, Randall; Zaugg, Holt; Tateishi, Isaku

    2015-01-01

    Advances in technology have increased the likelihood that engineers will have to work in a global, culturally diverse setting. Many schools of engineering are currently revising their curricula to help students to develop cultural competence. However, measuring cultural dispositions is a challenge. The purpose of this project was to develop and test an instrument that measures the various aspects of cultural disposition. The results of the validation process verified that the hypothesised model adequately represented the data. The refined instrument produced a four-factor model for the overall construct. The validation process for the instrument verified the existence of specific subcomponents that form the overall cultural disposition construct. There also seems to be a hierarchical relationship within the subcomponents of cultural disposition. Additional research is needed to explore which aspects of cultural disposition affect an individual's ability to work effectively in a culturally diverse engineering team.

  14. An Empirical Comparison of Different Models of Active Aging in Canada: The International Mobility in Aging Study

    PubMed Central

    Ahmed, Tamer; Filiatrault, Johanne; Yu, Hsiu-Ting; Zunzunegui, Maria Victoria

    2017-01-01

    Abstract Purpose: Active aging is a concept that lacks consensus. The WHO defines it as a holistic concept that encompasses the overall health, participation, and security of older adults. Fernández-Ballesteros and colleagues propose a similar concept but omit security and include mood and cognitive function. To date, researchers attempting to validate conceptual models of active aging have obtained mixed results. The goal of this study was to examine the validity of existing models of active aging with epidemiological data from Canada. Methods: The WHO model of active aging and the psychological model of active aging developed by Fernández-Ballesteros and colleagues were tested with confirmatory factor analysis. The data used included 799 community-dwelling older adults between 65 and 74 years old, recruited from the patient lists of family physicians in Saint-Hyacinthe, Quebec and Kingston, Ontario. Results: Neither model could be validated in the sample of Canadian older adults. Although a concept of healthy aging can be modeled adequately, social participation and security did not fit a latent factor model. A simple binary index indicated that 27% of older adults in the sample did not meet the active aging criteria proposed by the WHO. Implications: Our results suggest that active aging might represent a human rights policy orientation rather than an empirical measurement tool to guide research among older adult populations. Binary indexes of active aging may serve to highlight what remains to be improved about the health, participation, and security of growing populations of older adults. PMID:26350153

  15. Authentication of organic feed by near-infrared spectroscopy combined with chemometrics: a feasibility study.

    PubMed

    Tres, A; van der Veer, G; Perez-Marin, M D; van Ruth, S M; Garrido-Varo, A

    2012-08-22

    Organic products tend to retail at a higher price than their conventional counterparts, which makes them susceptible to fraud. In this study we evaluate the application of near-infrared spectroscopy (NIRS) as a rapid, cost-effective method to verify the organic identity of feed for laying hens. For this purpose a total of 36 organic and 60 conventional feed samples from The Netherlands were measured by NIRS. A binary classification model (organic vs conventional feed) was developed using partial least squares discriminant analysis. Models were developed using five different data preprocessing techniques, which were externally validated by a stratified random resampling strategy using 1000 realizations. Spectral regions related to the protein and fat content were among the most important ones for the classification model. The models based on data preprocessed using direct orthogonal signal correction (DOSC), standard normal variate (SNV), and first and second derivatives provided the most successful results in terms of median sensitivity (0.91 in external validation) and median specificity (1.00 for external validation of SNV models and 0.94 for DOSC and first and second derivative models). A previously developed model, which was based on fatty acid fingerprinting of the same set of feed samples, provided a higher sensitivity (1.00). This shows that the NIRS-based approach provides a rapid and low-cost screening tool, whereas the fatty acid fingerprinting model can be used for further confirmation of the organic identity of feed samples for laying hens. These methods provide additional assurance to the administrative controls currently conducted in the organic feed sector.
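    A PLS-DA classifier validated by stratified random resampling, as described in the record above, can be sketched as follows. This is a hypothetical illustration: synthetic "spectra" stand in for the study's NIR data, and 100 resamples are drawn rather than 1000:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import StratifiedShuffleSplit

    rng = np.random.default_rng(0)

    # Synthetic stand-in for NIR spectra: 36 "organic" and 60 "conventional"
    # samples over 50 wavelengths, with a small class-dependent offset.
    n_org, n_conv, n_wl = 36, 60, 50
    X = rng.normal(size=(n_org + n_conv, n_wl))
    y = np.r_[np.ones(n_org), np.zeros(n_conv)]   # 1 = organic, 0 = conventional
    X[y == 1, :10] += 0.8                         # class signal in the first 10 bands

    # Stratified random resampling (100 realizations here; the study used 1000)
    splitter = StratifiedShuffleSplit(n_splits=100, test_size=0.3, random_state=1)
    sens, spec = [], []
    for train, test in splitter.split(X, y):
        pls = PLSRegression(n_components=5).fit(X[train], y[train])
        pred = (pls.predict(X[test]).ravel() >= 0.5).astype(int)
        sens.append(np.mean(pred[y[test] == 1] == 1))  # true-positive rate
        spec.append(np.mean(pred[y[test] == 0] == 0))  # true-negative rate

    print(f"median sensitivity {np.median(sens):.2f}, "
          f"median specificity {np.median(spec):.2f}")
    ```

    Reporting the median over the resampling distribution, rather than a single hold-out split, is what gives the study's "median sensitivity/specificity" figures their robustness.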

  16. Precipitation projections under GCMs perspective and Turkish Water Foundation (TWF) statistical downscaling model procedures

    NASA Astrophysics Data System (ADS)

    Dabanlı, İsmail; Şen, Zekai

    2018-04-01

    The statistical climate downscaling model of the Turkish Water Foundation (TWF) is further developed and applied to a set of monthly precipitation records. The model is structured in two phases, spatial (regional) and temporal downscaling of global circulation model (GCM) scenarios. The TWF model takes into consideration the regional dependence function (RDF) for the spatial structure and a Markov whitening process (MWP) for the temporal characteristics of the records to set projections. The impact of climate change on monthly precipitation is studied by downscaling the Intergovernmental Panel on Climate Change Special Report on Emissions Scenarios (IPCC-SRES) A2 and B2 emission scenarios from the Max Planck Institute (EH40PYC) and the Hadley Centre (HadCM3). The main purposes are to explain the TWF statistical climate downscaling model procedures and to present the validation tests, which are rated as "very good" for all stations except one (Suhut) in the Akarcay basin, in the west-central part of Turkey. Even though the validation score is slightly lower at the Suhut station, the results there are still "satisfactory." It is, therefore, possible to say that the TWF model has reasonably acceptable skill for accurate estimation with respect to the standard deviation ratio (SDR), Nash-Sutcliffe efficiency (NSE), and percent bias (PBIAS) criteria. Based on the validated model, precipitation predictions are generated from 2011 to 2100 using a 30-year reference observation period (1981-2010). Precipitation arithmetic average and standard deviation have less than 5% error for the EH40PYC and HadCM3 SRES (A2 and B2) scenarios.
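    The skill criteria named in the record above (SDR, NSE, PBIAS) are standard goodness-of-fit measures for hydrologic model validation. A minimal sketch of their computation, with made-up precipitation values, and taking SDR as the RMSE-to-observation-standard-deviation ratio (a common definition, assumed here):

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):
        """Percent bias: positive values indicate average underestimation."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    def sdr(obs, sim):
        """RMSE divided by the standard deviation of observations; lower is better."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        rmse = np.sqrt(np.mean((obs - sim) ** 2))
        return rmse / obs.std(ddof=1)

    obs = [42.0, 51.0, 38.0, 60.0, 45.0]   # illustrative observed monthly precipitation (mm)
    sim = [40.0, 53.0, 37.0, 57.0, 47.0]   # illustrative downscaled-model output (mm)
    print(nse(obs, sim), pbias(obs, sim), sdr(obs, sim))
    ```

    With these toy values the NSE is about 0.93 and the PBIAS under 1%, which by the usual published thresholds would indeed rate as "very good".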

  17. Prediction of bovine milk technological traits from mid-infrared spectroscopy analysis in dairy cows.

    PubMed

    Visentin, G; McDermott, A; McParland, S; Berry, D P; Kenny, O A; Brodkorb, A; Fenelon, M A; De Marchi, M

    2015-09-01

    Rapid, cost-effective monitoring of milk technological traits is a significant challenge for dairy industries specialized in cheese manufacturing. The objective of the present study was to investigate the ability of mid-infrared spectroscopy to predict rennet coagulation time, curd-firming time, curd firmness at 30 and 60 min after rennet addition, heat coagulation time, casein micelle size, and pH in cow milk samples, and to quantify associations between these milk technological traits and conventional milk quality traits. Samples (n = 713) were collected from 605 cows from multiple herds; the samples represented multiple breeds, stages of lactation, parities, and milking times. Reference analyses were undertaken in accordance with standardized methods, and mid-infrared spectra in the range of 900 to 5,000 cm-1 were available for all samples. Prediction models were developed using partial least squares regression, and prediction accuracy was based on both cross and external validation. The proportion of variance explained by the prediction models in external validation was greatest for pH (71%), followed by rennet coagulation time (55%) and milk heat coagulation time (46%). Models to predict curd firmness 60 min from rennet addition and casein micelle size, however, were poor, explaining only 25% and 13%, respectively, of the total variance in each trait within external validation. On average, all prediction models tended to be unbiased. The linear regression coefficient of the reference value on the predicted value varied from 0.17 (casein micelle size model) to 0.83 (pH model), but all differed from 1. The ratio of performance to deviation, which ranged from 1.07 (casein micelle size model) to 1.79 (pH model) in the external validation, was <2 for all prediction models, suggesting that none of them could be used for analytical purposes. With the exception of casein micelle size and curd firmness at 60 min after rennet addition, the developed prediction models may be useful as a screening method, because the concordance correlation coefficient ranged from 0.63 (heat coagulation time model) to 0.84 (pH model) in the external validation. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
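    The two accuracy statistics cited in the record above, the ratio of performance to deviation (RPD) and the concordance correlation coefficient (CCC), can be computed as in the sketch below. The pH values are illustrative, not the study's data, and RPD is taken here as the reference standard deviation divided by the standard deviation of the residuals (an assumed, common definition):

    ```python
    import numpy as np

    def rpd(ref, pred):
        """Ratio of performance to deviation: SD of the reference values
        divided by the SD of the prediction residuals."""
        ref, pred = np.asarray(ref, float), np.asarray(pred, float)
        return ref.std(ddof=1) / (ref - pred).std(ddof=1)

    def ccc(ref, pred):
        """Lin's concordance correlation coefficient: agreement with the
        identity line, penalizing both scatter and location/scale shifts."""
        ref, pred = np.asarray(ref, float), np.asarray(pred, float)
        sxy = np.cov(ref, pred)[0, 1]
        return 2 * sxy / (ref.var(ddof=1) + pred.var(ddof=1)
                          + (ref.mean() - pred.mean()) ** 2)

    ref  = [6.60, 6.80, 6.50, 7.00, 6.70]   # illustrative measured milk pH
    pred = [6.62, 6.75, 6.55, 6.95, 6.72]   # illustrative spectra-predicted pH
    print(round(rpd(ref, pred), 2), round(ccc(ref, pred), 2))
    ```

    An RPD below 2, as reported for all traits above, is conventionally read as insufficient for analytical use, while a CCC in the 0.6-0.9 range can still support screening.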

  18. Development and validation of the Australian version of the Birth Satisfaction Scale-Revised (BSS-R).

    PubMed

    Jefford, Elaine; Hollins Martin, Caroline J; Martin, Colin R

    2018-02-01

    The 10-item Birth Satisfaction Scale-Revised (BSS-R) has recently been endorsed by international expert consensus for global use as the birth satisfaction outcome measure of choice. English-language versions of the tool include validated UK and US versions; however, the instrument has not, to date, been contextualised and validated in an Australian English-language version. The current investigation sought to develop and validate an English-language version of the tool for use within the Australian context. A two-stage study. Following review and modification by expert panel, the Australian BSS-R (A-BSS-R) was (Stage 1) evaluated for factor structure, internal consistency, known-groups discriminant validity and divergent validity. Stage 2 directly compared the A-BSS-R data set with the original UK data set to determine the invariance characteristics of the new instrument. Participants were a purposive sample of Australian postnatal women (n = 198). The A-BSS-R offered a good fit to data consistent with the BSS-R tridimensional measurement model and was found to be conceptually and measurement equivalent to the UK version. The A-BSS-R demonstrated excellent known-groups discriminant validity, generally good divergent validity and overall good internal consistency. The A-BSS-R represents a robust and valid measure of the birth satisfaction concept suitable for use within Australia and appropriate for application to International comparative studies.

  19. Skin sensitisation--moving forward with non-animal testing strategies for regulatory purposes in the EU.

    PubMed

    Basketter, David; Alépée, Nathalie; Casati, Silvia; Crozier, Jonathan; Eigler, Dorothea; Griem, Peter; Hubesch, Bruno; de Knecht, Joop; Landsiedel, Robert; Louekari, Kimmo; Manou, Irene; Maxwell, Gavin; Mehling, Annette; Netzeva, Tatiana; Petry, Thomas; Rossi, Laura H

    2013-12-01

    In a previous EPAA-Cefic LRI workshop in 2011, issues surrounding the use and interpretation of results from the local lymph node assay were addressed. At the beginning of 2013 a second joint workshop focused greater attention on the opportunities to make use of non-animal test data, not least since a number of in vitro assays have progressed to an advanced position in terms of their formal validation. It is already recognised that information produced from non-animal assays can be used in regulatory decision-making, notably in terms of classifying a substance as a skin sensitiser. The evolution into a full replacement for hazard identification, where the decision is not to classify, requires the generation of confidence in the in vitro alternative, e.g. via formal validation, the existence of peer reviewed publications and the knowledge that the assay(s) are founded on key elements of the Adverse Outcome Pathway for skin sensitisation. It is foreseen that the validated in vitro assays and relevant QSAR models can be organised into formal testing strategies to be applied for regulatory purposes by the industry. To facilitate progress, the European Partnership for Alternative Approaches to animal testing (EPAA) provided the platform for cross-industry and regulatory dialogue, enabling an essential and open debate on the acceptability of an in vitro based integrated strategy. Based on these considerations, a follow up activity was agreed upon to explore an example of an Integrated Testing Strategy for skin sensitisation hazard identification purposes in the context of REACH submissions. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Catchment-scale Validation of a Physically-based, Post-fire Runoff and Erosion Model

    NASA Astrophysics Data System (ADS)

    Quinn, D.; Brooks, E. S.; Robichaud, P. R.; Dobre, M.; Brown, R. E.; Wagenbrenner, J.

    2017-12-01

    The cascading consequences of fire-induced ecological changes have profound impacts on both natural and managed forest ecosystems. Forest managers tasked with implementing post-fire mitigation strategies need robust tools to evaluate the effectiveness of their decisions, particularly those affecting hydrological recovery. Various hillslope-scale interfaces of the physically-based Water Erosion Prediction Project (WEPP) model have been successfully validated for this purpose using fire-affected plot experiments; however, these interfaces are explicitly designed to simulate single hillslopes. Spatially-distributed, catchment-scale WEPP interfaces have been developed over the past decade, but none have been validated for post-fire simulations, posing a barrier to adoption by forest managers. In this validation study, we compare WEPP simulations with pre- and post-fire hydrological records for three forested catchments (W. Willow, N. Thomas, and S. Thomas) that burned in the 2011 Wallow Fire in northeastern Arizona, USA. Simulations were conducted using two approaches: the first using automatically created inputs from an online, spatial, post-fire WEPP interface, and the second using manually created inputs that incorporate the spatial variability of fire effects observed in the field. Both approaches were compared against five years of observed post-fire sediment and flow data to assess goodness of fit.

  1. Sensitivity of shock boundary-layer interactions to weak geometric perturbations

    NASA Astrophysics Data System (ADS)

    Kim, Ji Hoon; Eaton, John K.

    2016-11-01

    Shock-boundary layer interactions can be sensitive to small changes in the inlet flow and boundary conditions. Robust computational models must capture this sensitivity, and validation of such models requires a suitable experimental database with well-defined inlet and boundary conditions. To that end, the purpose of this experiment is to systematically document the effects of small geometric perturbations on a SBLI flow to investigate the flow physics and establish an experimental dataset tailored for CFD validation. The facility used is a Mach 2.1, continuous operation wind tunnel. The SBLI is generated using a compression wedge; the region of interest is the resulting reflected shock SBLI. The geometric perturbations, which are small spanwise rectangular prisms, are introduced ahead of the compression ramp on the opposite wall. PIV is used to study the SBLI for 40 different perturbation geometries. Results show that the dominant effect of the perturbations is a global shift of the SBLI itself. In addition, the bumps introduce weaker shocks of varying strength and angles, depending on the bump height and location. Various scalar validation metrics, including a measure of shock unsteadiness, and their uncertainties are also computed to better facilitate CFD validation. Ji Hoon Kim is supported by an OTR Stanford Graduate Fellowship.

  2. Validation of an assay for quantification of alpha-amylase in saliva of sheep

    PubMed Central

    Fuentes-Rubio, Maria; Fuentes, Francisco; Otal, Julio; Quiles, Alberto; Hevia, María Luisa

    2016-01-01

    The objective of this study was to develop a time-resolved immunofluorometric assay (TR-IFMA) for the quantification of salivary alpha-amylase in sheep. For that purpose, after the design of the assay, an analytical and a clinical validation were carried out. The analytical validation showed intra- and inter-assay coefficients of variation (CVs) of 6.1% and 10.57%, respectively, and an analytical limit of detection of 0.09 ng/mL. The assay also demonstrated a high level of accuracy, as determined by linearity under dilution. For clinical validation, an acute stress model was used to determine whether the expected significant changes in alpha-amylase were detected by the newly developed assay. In that model, 11 sheep were immobilized and confronted with a sheepdog to induce stress. Saliva samples were obtained before stress induction and 15, 30, and 60 min afterwards. Salivary cortisol was measured as a reference of stress level. The TR-IFMA results showed a significant increase (P < 0.01) in the concentration of alpha-amylase in saliva after stress induction. The assay developed in this study could be used to measure alpha-amylase in the saliva of sheep, and this enzyme could be a possible noninvasive biomarker of stress in sheep. PMID:27408332

  3. Cross-cultural adaptation and construct validity of the Korean version of a physical activity measure for community-dwelling elderly.

    PubMed

    Choi, Bongsam

    2018-01-01

    [Purpose] This study aimed to cross-culturally adapt and validate the Korean version of a physical activity measure (K-PAM) for community-dwelling elderly. [Subjects and Methods] One hundred and thirty-eight community-dwelling elderly people, 32 male and 106 female, participated in the study. All participants were asked to fill out a fifty-one-item questionnaire measuring perceived difficulty in activities of daily living (ADL) for the elderly. A one-parameter item response theory model (Rasch analysis) was applied to determine construct validity and to inspect the item-level psychometric properties of the 51 ADL items of the K-PAM. [Results] Person separation reliability (analogous to Cronbach's alpha) for internal consistency ranged from 0.93 to 0.94. A total of 16 items misfit the Rasch model. After deletion of the misfitting items, the 35 remaining ADL items of the K-PAM were placed in an empirically meaningful hierarchy from easy to hard. The item-person map analysis showed that item difficulty was well matched to elderly people of moderate and low ability, with a ceiling effect for those of high ability. [Conclusion] The cross-culturally adapted K-PAM was shown to be sufficient for establishing construct validity and stable psychometric properties, as confirmed by person separation reliability and fit statistics.
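    The one-parameter (Rasch) model used in the record above expresses the probability that a person endorses, or succeeds on, an item as a logistic function of the difference between person ability and item difficulty on a common logit scale. A minimal sketch, with hypothetical difficulties since the K-PAM item calibrations are not given in the abstract:

    ```python
    import math

    def rasch_p(theta, b):
        """Rasch (1PL) probability that a person of ability `theta`
        endorses an item of difficulty `b` (both in logits)."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # A person of average ability (theta = 0) against an easy and a hard
    # ADL item (difficulties are hypothetical, not K-PAM calibrations)
    print(round(rasch_p(0.0, -2.0), 2))  # easy item  -> 0.88
    print(round(rasch_p(0.0,  2.0), 2))  # hard item  -> 0.12
    ```

    The "item-person map" mentioned above simply plots the estimated thetas and bs on this shared scale; a ceiling effect appears when the hardest item difficulty lies below the abilities of the most able respondents.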

  4. Real-time sensor validation and fusion for distributed autonomous sensors

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaojing; Li, Xiangshang; Buckles, Bill P.

    2004-04-01

    Multi-sensor data fusion has found widespread applications in industrial and research sectors. The purpose of real time multi-sensor data fusion is to dynamically estimate an improved system model from a set of different data sources, i.e., sensors. This paper presented a systematic and unified real time sensor validation and fusion framework (RTSVFF) based on distributed autonomous sensors. The RTSVFF is an open architecture which consists of four layers - the transaction layer, the process fusion layer, the control layer, and the planning layer. This paradigm facilitates distribution of intelligence to the sensor level and sharing of information among sensors, controllers, and other devices in the system. The openness of the architecture also provides a platform to test different sensor validation and fusion algorithms and thus facilitates the selection of near optimal algorithms for specific sensor fusion application. In the version of the model presented in this paper, confidence weighted averaging is employed to address the dynamic system state issue noted above. The state is computed using an adaptive estimator and dynamic validation curve for numeric data fusion and a robust diagnostic map for decision level qualitative fusion. The framework is then applied to automatic monitoring of a gas-turbine engine, including a performance comparison of the proposed real-time sensor fusion algorithms and a traditional numerical weighted average.

  5. The Generalized Problematic Internet Use Scale 2: Validation and test of the model to Facebook use.

    PubMed

    Assunção, Raquel S; Matos, Paula Mena

    2017-01-01

    The main goals of the present study were to test the psychometric properties of a Portuguese version of the GPIUS2 (Generalized Problematic Internet Use Scale 2, Caplan, 2010), and to test whether the cognitive-behavioral model proposed by Caplan (2010) replicated in the context of Facebook use. We used a sample of 761 Portuguese adolescents (53.7% boys, 46.3% girls, mean age = 15.8). Our results showed that the data presented an adequate fit to the original model using confirmatory factor analysis. The scale presented also good internal consistency and adequate construct validity. The cognitive-behavioral model was also applicable to the Facebook context, presenting good fit. Consistently with previous findings we found that preference for online social interaction and the use of Facebook to mood regulation purposes, predicted positively and significantly the deficient self-regulation in Facebook use, which in turn was a significant predictor of the negative outcomes associated with this use. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  6. Role of optimization in the human dynamics of task execution

    NASA Astrophysics Data System (ADS)

    Cajueiro, Daniel O.; Maldonado, Wilfredo L.

    2008-03-01

    In order to explain the empirical evidence that the dynamics of human activity may not be well modeled by Poisson processes, a model based on queuing processes was built in the literature [A. L. Barabasi, Nature (London) 435, 207 (2005)]. The main assumption behind that model is that people execute their tasks based on a protocol that first executes the high priority item. In this context, the purpose of this paper is to analyze the validity of that hypothesis assuming that people are rational agents that make their decisions in order to minimize the cost of keeping nonexecuted tasks on the list. Therefore, we build and analytically solve a dynamic programming model with two priority types of tasks and show that the validity of this hypothesis depends strongly on the structure of the instantaneous costs that a person has to face if a given task is kept on the list for more than one period. Moreover, one interesting finding is that in one of the situations the protocol used to execute the tasks generates complex one-dimensional dynamics.

  7. Finite element modeling of the residual stress evolution in forged and direct-aged alloy 718 turbine disks during manufacturing and its experimental validation

    NASA Astrophysics Data System (ADS)

    Drexler, Andreas; Ecker, Werner; Hessert, Roland; Oberwinkler, Bernd; Gänser, Hans-Peter; Keckes, Jozef; Hofmann, Michael; Fischersworring-Bunk, Andreas

    2017-10-01

    In this work the evolution of the residual stress field in a forged and heat treated turbine disk of Alloy 718 and its subsequent relaxation during machining was simulated and measured. After forging at around 1000 °C the disks were natural air cooled to room temperature and direct aged in a furnace at 720 °C for 8 hours and at 620 °C for 8 hours. The machining of the Alloy 718 turbine disk was performed in two steps: The machining of the Alloy 718 turbine disk was performed in two steps: First, from the forging contour to a contour used for ultra-sonic testing. Second, from the latter to the final contour. The thermal boundary conditions in the finite element model for air cooling and furnace heating were estimated based on analytical equations from literature. A constitutive model developed for the unified description of rate dependent and rate independent mechanical material behavior of Alloy 718 under in-service conditions up to temperatures of 1000 °C was extended and parametrized to meet the manufacturing conditions with temperatures up to 1000 °C. The results of the finite element model were validated with measurements on real-scale turbine disks. The thermal boundary conditions were validated in-field with measured cooling curves. For that purpose holes were drilled at different positions into the turbine disk and thermocouples were mounted in these holes to record the time-temperature curves during natural cooling and heating. The simulated residual stresses were validated by using the hole drilling method and the neutron diffraction technique. The accuracy of the finite element model for the final manufacturing step investigated was ±50 MPa.

  8. Fault detection and diagnosis in an industrial fed-batch cell culture process.

    PubMed

    Gunther, Jon C; Conner, Jeremy S; Seborg, Dale E

    2007-01-01

    A flexible process monitoring method was applied to industrial pilot plant cell culture data for the purpose of fault detection and diagnosis. Data from 23 batches, 20 normal operating conditions (NOC) and three abnormal, were available. A principal component analysis (PCA) model was constructed from 19 NOC batches, and the remaining NOC batch was used for model validation. Subsequently, the model was used to successfully detect (both offline and online) abnormal process conditions and to diagnose the root causes. This research demonstrates that data from a relatively small number of batches (approximately 20) can still be used to monitor for a wide range of process faults.

  9. Model improvements and validation of TerraSAR-X precise orbit determination

    NASA Astrophysics Data System (ADS)

    Hackel, S.; Montenbruck, O.; Steigenberger, P.; Balss, U.; Gisinger, C.; Eineder, M.

    2017-05-01

    The radar imaging satellite mission TerraSAR-X requires precisely determined satellite orbits for validating geodetic remote sensing techniques. Since the achieved quality of the operationally derived, reduced-dynamic (RD) orbit solutions limits the capabilities of the synthetic aperture radar (SAR) validation, an effort is made to improve the estimated orbit solutions. This paper discusses the benefits of refined dynamical models on orbit accuracy as well as estimated empirical accelerations and compares different dynamic models in a RD orbit determination. Modeling aspects discussed in the paper include the use of a macro-model for drag and radiation pressure computation, the use of high-quality atmospheric density and wind models as well as the benefit of high-fidelity gravity and ocean tide models. The Sun-synchronous dusk-dawn orbit geometry of TerraSAR-X results in a particular high correlation of solar radiation pressure modeling and estimated normal-direction positions. Furthermore, this mission offers a unique suite of independent sensors for orbit validation. Several parameters serve as quality indicators for the estimated satellite orbit solutions. These include the magnitude of the estimated empirical accelerations, satellite laser ranging (SLR) residuals, and SLR-based orbit corrections. Moreover, the radargrammetric distance measurements of the SAR instrument are selected for assessing the quality of the orbit solutions and compared to the SLR analysis. The use of high-fidelity satellite dynamics models in the RD approach is shown to clearly improve the orbit quality compared to simplified models and loosely constrained empirical accelerations. The estimated empirical accelerations are substantially reduced by 30% in tangential direction when working with the refined dynamical models. 
Likewise the SLR residuals are reduced from -3 ± 17 to 2 ± 13 mm, and the SLR-derived normal-direction position corrections are reduced from 15 to 6 mm, obtained from the 2012-2014 period. The radar range bias is reduced from -10.3 to -6.1 mm with the updated orbit solutions, which coincides with the reduced standard deviation of the SLR residuals. The improvements are mainly driven by the satellite macro-model for the purpose of solar radiation pressure modeling, improved atmospheric density models, and the use of state-of-the-art gravity field models.

  10. Comparison of Analytical and Numerical Performance Predictions for an International Space Station Node 3 Internal Active Thermal Control System Regenerative Heat Exchanger

    NASA Technical Reports Server (NTRS)

    Wise, Stephen A.; Holt, James M.

    2002-01-01

    The complexity of International Space Station (ISS) systems modeling often necessitates the concurrence of various dissimilar, parallel analysis techniques to validate modeling. This was the case with a feasibility and performance study of the ISS Node 3 Regenerative Heat Exchanger (RHX). A thermo-hydraulic network model was created and analyzed in SINDA/FLUINT. A less complex, closed form solution of the systems dynamics was created using an Excel Spreadsheet. The purpose of this paper is to provide a brief description of the modeling processes utilized, the results and benefits of each to the ISS Node 3 RHX study.

  11. Comparison of Analytical and Numerical Performance Predictions for a Regenerative Heat Exchanger in the International Space Station Node 3 Internal Active Thermal Control System

    NASA Technical Reports Server (NTRS)

    Wise, Stephen A.; Holt, James M.; Turner, Larry D. (Technical Monitor)

    2001-01-01

    The complexity of International Space Station (ISS) systems modeling often necessitates the concurrence of various dissimilar, parallel analysis techniques to validate modeling. This was the case with a feasibility and performance study of the ISS Node 3 Regenerative Heat Exchanger (RHX). A thermo-hydraulic network model was created and analyzed in SINDA/FLUINT. A less complex, closed form solution of the system dynamics was created using Excel. The purpose of this paper is to provide a brief description of the modeling processes utilized, the results and benefits of each to the ISS Node 3 RHX study.

  12. Well behaved anisotropic compact star models in general relativity

    NASA Astrophysics Data System (ADS)

    Jasim, M. K.; Maurya, S. K.; Gupta, Y. K.; Dayanandan, B.

    2016-11-01

    Anisotropic compact star models have been constructed by assuming a particular form of a metric function e^{λ}. We solved the Einstein field equations for determining the metric function e^{ν}. For this purpose we have assumed a physically valid expression of radial pressure (pr). The obtained anisotropic compact star model is representing the realistic compact objects such as PSR 1937 +21. We have done an extensive study about physical parameters for anisotropic models and found that these parameters are well behaved throughout inside the star. Along with these we have also determined the equation of state for compact star which gives the radial pressure is purely the function of density i.e. pr=f(ρ).

  13. An EMTP system level model of the PMAD DC test bed

    NASA Technical Reports Server (NTRS)

    Dravid, Narayan V.; Kacpura, Thomas J.; Tam, Kwa-Sur

    1991-01-01

    A power management and distribution direct current (PMAD DC) test bed was set up at the NASA Lewis Research Center to investigate Space Station Freedom Electric Power Systems issues. Efficiency of test bed operation significantly improves with a computer simulation model of the test bed as an adjunct tool of investigation. Such a model is developed using the Electromagnetic Transients Program (EMTP) and is available to the test bed developers and experimenters. The computer model is assembled on a modular basis. Device models of different types can be incorporated into the system model with only a few lines of code. A library of the various model types is created for this purpose. Simulation results and corresponding test bed results are presented to demonstrate model validity.

  14. A neural network model of metaphor understanding with dynamic interaction based on a statistical language analysis: targeting a human-like model.

    PubMed

    Terai, Asuka; Nakagawa, Masanori

    2007-08-01

    The purpose of this paper is to construct a model that represents the human process of understanding metaphors, focusing specifically on similes of the form an "A like B". Generally speaking, human beings are able to generate and understand many sorts of metaphors. This study constructs the model based on a probabilistic knowledge structure for concepts which is computed from a statistical analysis of a large-scale corpus. Consequently, this model is able to cover the many kinds of metaphors that human beings can generate. Moreover, the model implements the dynamic process of metaphor understanding by using a neural network with dynamic interactions. Finally, the validity of the model is confirmed by comparing model simulations with the results from a psychological experiment.

  15. On-board monitoring of 2-D spatially-resolved temperatures in cylindrical lithium-ion batteries: Part I. Low-order thermal modelling

    NASA Astrophysics Data System (ADS)

    Richardson, Robert R.; Zhao, Shi; Howey, David A.

    2016-09-01

    Estimating the temperature distribution within Li-ion batteries during operation is critical for safety and control purposes. Although existing control-oriented thermal models - such as thermal equivalent circuits (TEC) - are computationally efficient, they only predict average temperatures, and are unable to predict the spatially resolved temperature distribution throughout the cell. We present a low-order 2D thermal model of a cylindrical battery based on a Chebyshev spectral-Galerkin (SG) method, capable of predicting the full temperature distribution with a similar efficiency to a TEC. The model accounts for transient heat generation, anisotropic heat conduction, and non-homogeneous convection boundary conditions. The accuracy of the model is validated through comparison with finite element simulations, which show that the 2-D temperature field (r, z) of a large format (64 mm diameter) cell can be accurately modelled with as few as 4 states. Furthermore, the performance of the model for a range of Biot numbers is investigated via frequency analysis. For larger cells or highly transient thermal dynamics, the model order can be increased for improved accuracy. The incorporation of this model in a state estimation scheme with experimental validation against thermocouple measurements is presented in the companion contribution (http://www.sciencedirect.com/science/article/pii/S0378775316308163)

  16. A new modal-based approach for modelling the bump foil structure in the simultaneous solution of foil-air bearing rotor dynamic problems

    NASA Astrophysics Data System (ADS)

    Bin Hassan, M. F.; Bonello, P.

    2017-05-01

    Recently-proposed techniques for the simultaneous solution of foil-air bearing (FAB) rotor dynamic problems have been limited to a simple bump foil model in which the individual bumps were modelled as independent spring-damper (ISD) subsystems. The present paper addresses this limitation by introducing a modal model of the bump foil structure into the simultaneous solution scheme. The dynamics of the corrugated bump foil structure are first studied using the finite element (FE) technique. This study is experimentally validated using a purpose-made corrugated foil structure. Based on the findings of this study, it is proposed that the dynamics of the full foil structure, including bump interaction and foil inertia, can be represented by a modal model comprising a limited number of modes. This full foil structure modal model (FFSMM) is then adapted into the rotordynamic FAB problem solution scheme, instead of the ISD model. Preliminary results using the FFSMM under static and unbalance excitation conditions are proven to be reliable by comparison against the corresponding ISD foil model results and by cross-correlating different methods for computing the deflection of the full foil structure. The rotor-bearing model is also validated against experimental and theoretical results in the literature.

  17. Space Weather Modeling Services at the Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Hesse, Michael

    2006-01-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership, which aims at the creation of next generation space weather models. The goal of the CCMC is to support the research and developmental work necessary to substantially increase the present-day modeling capability for space weather purposes, and to provide models for transition to the Rapid Prototyping Centers at the space weather forecast centers. This goal requires close collaborations with and substantial involvement of the research community. The physical regions to be addressed by CCMC-related activities range from the solar atmosphere to the Earth's upper atmosphere. The CCMC is an integral part of the National Space Weather Program Implementation Plan, of NASA's Living With a Star (LWS) initiative, and of the Department of Defense Space Weather Transition Plan. CCMC includes a facility at NASA Goddard Space Flight Center. CCMC also provides, to the research community, access to state-of-the-art space research models. In this paper we will provide a description of the current CCMC status, discuss current plans, research and development accomplishments and goals, and describe the model testing and validation process undertaken as part of the CCMC mandate. Special emphasis will be on solar and heliospheric models currently residing at CCMC, and on plans for validation and verification.

  18. Space Weather Modeling at the Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Hesse M.

    2005-01-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership, which aims at the creation of next generation space weather models. The goal of the CCMC is to support the research and developmental work necessary to substantially increase the present-day modeling capability for space weather purposes, and to provide models for transition to the rapid prototyping centers at the space weather forecast centers. This goal requires dose collaborations with and substantial involvement of the research community. The physical regions to be addressed by CCMC-related activities range from the solar atmosphere to the Earth's upper atmosphere. The CCMC is an integral part of the National Space Weather Program Implementation Plan, of NASA's Living With a Star (LWS) initiative, and of the Department of Defense Space Weather Transition Plan. CCMC includes a facility at NASA Goddard Space Flight Center, as well as distributed computing facilities provided by the US Air Force. CCMC also provides, to the research community, access to state-of-the-art space research models. In this paper we will provide updates on CCMC status, on current plans, research and development accomplishments and goals, and on the model testing and validation process undertaken as part of the CCMC mandate. Special emphasis will be on solar and heliospheric models currently residing at CCMC, and on plans for validation and verification.

  19. MetaKTSP: a meta-analytic top scoring pair method for robust cross-study validation of omics prediction analysis.

    PubMed

    Kim, SungHwan; Lin, Chien-Wei; Tseng, George C

    2016-07-01

    Supervised machine learning is widely applied to transcriptomic data to predict disease diagnosis, prognosis or survival. Robust and interpretable classifiers with high accuracy are usually favored for their clinical and translational potential. The top scoring pair (TSP) algorithm is an example that applies a simple rank-based algorithm to identify rank-altered gene pairs for classifier construction. Although many classification methods perform well in cross-validation of single expression profile, the performance usually greatly reduces in cross-study validation (i.e. the prediction model is established in the training study and applied to an independent test study) for all machine learning methods, including TSP. The failure of cross-study validation has largely diminished the potential translational and clinical values of the models. The purpose of this article is to develop a meta-analytic top scoring pair (MetaKTSP) framework that combines multiple transcriptomic studies and generates a robust prediction model applicable to independent test studies. We proposed two frameworks, by averaging TSP scores or by combining P-values from individual studies, to select the top gene pairs for model construction. We applied the proposed methods in simulated data sets and three large-scale real applications in breast cancer, idiopathic pulmonary fibrosis and pan-cancer methylation. The result showed superior performance of cross-study validation accuracy and biomarker selection for the new meta-analytic framework. In conclusion, combining multiple omics data sets in the public domain increases robustness and accuracy of the classification model that will ultimately improve disease understanding and clinical treatment decisions to benefit patients. An R package MetaKTSP is available online. (http://tsenglab.biostat.pitt.edu/software.htm). ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. 
All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Improving Coastal Ocean Color Validation Capabilities through Application of Inherent Optical Properties (IOPs)

    NASA Technical Reports Server (NTRS)

    Mannino, Antonio

    2008-01-01

    Understanding how the different components of seawater alter the path of incident sunlight through scattering and absorption is essential to using remotely sensed ocean color observations effectively. This is particularly apropos in coastal waters where the different optically significant components (phytoplankton, detrital material, inorganic minerals, etc.) vary widely in concentration, often independently from one another. Inherent Optical Properties (IOPs) form the link between these biogeochemical constituents and the Apparent Optical Properties (AOPs). understanding this interrelationship is at the heart of successfully carrying out inversions of satellite-measured radiance to biogeochemical properties. While sufficient covariation of seawater constituents in case I waters typically allows empirical algorithms connecting AOPs and biogeochemical parameters to behave well, these empirical algorithms normally do not hold for case I1 regimes (Carder et al. 2003). Validation in the context of ocean color remote sensing refers to in-situ measurements used to verify or characterize algorithm products or any assumption used as input to an algorithm. In this project, validation capabilities are considered those measurement capabilities, techniques, methods, models, etc. that allow effective validation. Enhancing current validation capabilities by incorporating state-of-the-art IOP measurements and optical models is the purpose of this work. Involved in this pursuit is improving core IOP measurement capabilities (spectral, angular, spatio-temporal resolutions), improving our understanding of the behavior of analytical AOP-IOP approximations in complex coastal waters, and improving the spatial and temporal resolution of biogeochemical data for validation by applying biogeochemical-IOP inversion models so that these parameters can be computed from real-time IOP sensors with high sampling rates. 
Research cruises supported by this project provides for collection and processing of seawater samples for biogeochemical (pigments, DOC and POC) and optical (CDOM and POM absorption coefficients) analyses to enhance our understanding of the linkages between in-water optical measurements (IOPs and AOPs) and biogeochemical constituents and to provide a more comprehensive suite of validation products.

Top