Sample records for model validation illustrated

  1. Longitudinal Models of Reliability and Validity: A Latent Curve Approach.

    ERIC Educational Resources Information Center

    Tisak, John; Tisak, Marie S.

    1996-01-01

    Dynamic generalizations of reliability and validity that will incorporate longitudinal or developmental models, using latent curve analysis, are discussed. A latent curve model formulated to depict change is incorporated into the classical definitions of reliability and validity. The approach is illustrated with sociological and psychological…

  2. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    PubMed Central

    Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.

    2017-01-01

    Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I2 statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results: Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I2 statistics and prediction intervals for c-statistics. Conclusion: This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237
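
A hedged sketch of the leave-one-hospital-out scheme and random-effects pooling described in this abstract, assuming a pandas DataFrame with invented column names (hospital_id, death_30d) and a list of predictor columns; the Hanley-McNeil variance approximation and the DerSimonian-Laird estimator are named stand-ins for whatever meta-analytic machinery the authors actually used.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def hanley_mcneil_var(auc, n_pos, n_neg):
    """Approximate variance of a c-statistic (Hanley & McNeil, 1982)."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    return (auc * (1 - auc) + (n_pos - 1) * (q1 - auc ** 2)
            + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg)

def pooled_c_statistic(df, predictors, outcome="death_30d", cluster="hospital_id"):
    """Leave one hospital out, collect hospital-specific c-statistics, pool them."""
    aucs, variances = [], []
    for hosp in df[cluster].unique():
        train, test = df[df[cluster] != hosp], df[df[cluster] == hosp]
        if test[outcome].nunique() < 2:          # need both outcomes to compute an AUC
            continue
        model = LogisticRegression(max_iter=1000).fit(train[predictors], train[outcome])
        p = model.predict_proba(test[predictors])[:, 1]
        auc = roc_auc_score(test[outcome], p)
        aucs.append(auc)
        variances.append(hanley_mcneil_var(auc, test[outcome].sum(), (1 - test[outcome]).sum()))
    aucs, variances = np.array(aucs), np.array(variances)
    w = 1.0 / variances                                           # fixed-effect weights
    q = float(np.sum(w * (aucs - np.average(aucs, weights=w)) ** 2))
    k = len(aucs)
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))  # DerSimonian-Laird
    pooled = np.average(aucs, weights=1.0 / (variances + tau2))   # random-effects pooled c-statistic
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0      # heterogeneity, in percent
    return pooled, tau2, i2
```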

  3. Validating for Use and Interpretation: A Mixed Methods Contribution Illustrated

    ERIC Educational Resources Information Center

    Morell, Linda; Tan, Rachael Jin Bee

    2009-01-01

    Researchers in the areas of psychology and education strive to understand the intersections among validity, educational measurement, and cognitive theory. Guided by a mixed model conceptual framework, this study investigates how respondents' opinions inform the validation argument. Validity evidence for a science assessment was collected through…

  4. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing.

    PubMed

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D

    2014-10-01

    We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1=P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2=0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1=P2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.

  5. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing

    PubMed Central

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D.

    2014-01-01

    Abstract. We treat multireader multicase (MRMC) reader studies for which a reader’s diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1=P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2=0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1=P2). To illustrate the utility of our simulation model, we adapt the Obuchowski–Rockette–Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data. PMID:26158051
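
A toy Monte Carlo sketch of one idea in this abstract: checking the empirical coverage of a nominal 95% confidence interval for P1−P2 when P1=P2, using a latent-variable construction with reader and case random effects to induce correlation across readers, cases, and modalities. This is a simplified stand-in, not the authors' simulation model or the Obuchowski-Rockette-Hillis analysis; the per-reader paired t-interval is deliberately naive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_binary_mrmc(n_readers=5, n_cases=100, mu=0.8, sd_reader=0.3, sd_case=0.5):
    """Binary agreement scores, shape (readers, cases, 2 modalities), constructed so P1 = P2."""
    r = rng.normal(0, sd_reader, size=(n_readers, 1, 1))   # reader effects
    c = rng.normal(0, sd_case, size=(1, n_cases, 1))       # case effects shared by both modalities
    eps = rng.normal(0, 1, size=(n_readers, n_cases, 2))   # independent residuals
    return (mu + r + c + eps > 0).astype(int)

def ci_difference(scores, alpha=0.05):
    """Naive CI for P1 - P2 from per-reader paired differences in agreement proportions."""
    diffs = scores[:, :, 0].mean(axis=1) - scores[:, :, 1].mean(axis=1)
    half = stats.t.ppf(1 - alpha / 2, df=diffs.size - 1) * stats.sem(diffs)
    return diffs.mean() - half, diffs.mean() + half

n_trials = 2000
covered = sum(lo <= 0.0 <= hi for lo, hi in (ci_difference(simulate_binary_mrmc())
                                             for _ in range(n_trials)))
print(f"empirical coverage of the nominal 95% interval: {covered / n_trials:.3f}")
```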

  6. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  7. Validation through Understanding Test-Taking Strategies: An Illustration With the CELPIP-General Reading Pilot Test Using Structural Equation Modeling

    ERIC Educational Resources Information Center

    Wu, Amery D.; Stone, Jake E.

    2016-01-01

    This article explores an approach for test score validation that examines test takers' strategies for taking a reading comprehension test. The authors formulated three working hypotheses about score validity pertaining to three types of test-taking strategy (comprehending meaning, test management, and test-wiseness). These hypotheses were…

  8. Viability of Cross-Flow Fan with Helical Blades for Vertical Take-off and Landing Aircraft

    DTIC Science & Technology

    2012-09-01

    Using computational fluid dynamics (CFD) software, ANSYS CFX, a three-dimensional (3-D) straight-bladed model was validated against a previous study's experimental results. (Search-result snippet; the remainder of the indexed text consists of report table-of-contents entries such as "B. Sizing Parameters and Illustration" and "Appendix B. ANSYS CFX Parameters.")

  9. Validation of Groundwater Models: Meaningful or Meaningless?

    NASA Astrophysics Data System (ADS)

    Konikow, L. F.

    2003-12-01

    Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.

  10. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    NASA Astrophysics Data System (ADS)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-05-01

    This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall framework consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used to predict the mechanical properties of parts additively manufactured by blown-powder directed energy deposition as well as by other additive manufacturing processes. The critical governing equations of each model, and how the various modules are connected, are illustrated. Illustrative results, along with corresponding experimental validation, are presented to demonstrate the capabilities and fidelity of the models. The good agreement with experimental results shows that the integrated models can be used to design metal additive manufacturing processes and to predict the resultant microstructure and mechanical properties.

  11. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    NASA Astrophysics Data System (ADS)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

    This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall framework consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used to predict the mechanical properties of parts additively manufactured by blown-powder directed energy deposition as well as by other additive manufacturing processes. The critical governing equations of each model, and how the various modules are connected, are illustrated. Illustrative results, along with corresponding experimental validation, are presented to demonstrate the capabilities and fidelity of the models. The good agreement with experimental results shows that the integrated models can be used to design metal additive manufacturing processes and to predict the resultant microstructure and mechanical properties.

  12. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  13. A new framework to enhance the interpretation of external validation studies of clinical prediction models.

    PubMed

    Debray, Thomas P A; Vergouwe, Yvonne; Koffijberg, Hendrik; Nieboer, Daan; Steyerberg, Ewout W; Moons, Karel G M

    2015-03-01

    It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from "different but related" samples as compared with that of the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at external validation of prediction models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
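
The "degree of relatedness" step in this framework can be sketched with a hedged membership-model idea: fit a classifier that tries to distinguish development from validation patients using the case-mix variables, and read its c-statistic as a relatedness score (near 0.5 suggests a reproducibility setting, well above 0.5 suggests transportability). The function and column names below are illustrative, not the authors' code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def case_mix_relatedness(dev: pd.DataFrame, val: pd.DataFrame, case_mix_vars: list) -> float:
    """c-statistic of a 'membership' model separating development from validation patients."""
    X = pd.concat([dev[case_mix_vars], val[case_mix_vars]], ignore_index=True)
    membership = np.r_[np.zeros(len(dev)), np.ones(len(val))]   # 0 = development, 1 = validation
    model = LogisticRegression(max_iter=1000).fit(X, membership)
    return roc_auc_score(membership, model.predict_proba(X)[:, 1])
```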

  14. Validation Metrics for Improving Our Understanding of Turbulent Transport - Moving Beyond Proof by Pretty Picture and Loud Assertion

    NASA Astrophysics Data System (ADS)

    Holland, C.

    2013-10-01

    Developing validated models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. This tutorial will present an overview of the key guiding principles and practices for state-of-the-art validation studies, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. The primary focus of the talk will be the development of quantitative validation metrics, which are essential for moving beyond qualitative and subjective assessments of model performance and fidelity. Particular emphasis and discussion are given to (i) the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment, and (ii) the importance of robust uncertainty quantification and its inclusion within the metrics. To illustrate these concepts, we first review the structure and key insights gained from commonly used "global" transport model metrics (e.g. predictions of incremental stored energy or radially-averaged temperature), as well as their limitations. Building upon these results, a new form of turbulent transport metrics is then proposed, which focuses upon comparisons of predicted local gradients and fluctuation characteristics against observation. We demonstrate the utility of these metrics by applying them to simulations and modeling of a newly developed "validation database" derived from the results of a systematic, multi-year turbulent transport validation campaign on the DIII-D tokamak, in which comprehensive profile and fluctuation measurements have been obtained from a wide variety of heating and confinement scenarios. Finally, we discuss extensions of these metrics and their underlying design concepts to other areas of plasma confinement research, including both magnetohydrodynamic stability and integrated scenario modeling. Supported by the US DOE under DE-FG02-07ER54917 and DE-FC02-08ER54977.
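
One generic form such a quantitative metric can take, shown purely for illustration (not the specific definitions proposed in the talk): the discrepancy between a synthetic-diagnostic prediction and the corresponding measurement, normalized by their combined uncertainties.

```python
import numpy as np

def normalized_discrepancy(y_sim, sigma_sim, y_exp, sigma_exp):
    """|simulation - experiment| in units of the combined 1-sigma uncertainty."""
    y_sim, sigma_sim = np.asarray(y_sim, float), np.asarray(sigma_sim, float)
    y_exp, sigma_exp = np.asarray(y_exp, float), np.asarray(sigma_exp, float)
    return np.abs(y_sim - y_exp) / np.sqrt(sigma_sim ** 2 + sigma_exp ** 2)

# Example: a predicted local gradient and fluctuation level vs. (made-up) measurements.
print(normalized_discrepancy([2.1, 0.85], [0.2, 0.10], [1.8, 1.05], [0.3, 0.15]))
# Values near or below ~1 indicate agreement within the combined uncertainties.
```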

  15. Hierarchical multi-scale approach to validation and uncertainty quantification of hyper-spectral image modeling

    NASA Astrophysics Data System (ADS)

    Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.

    2016-05-01

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  16. Crash Certification by Analysis - Are We There Yet?

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Fasanella, Edwin L.; Lyle, Karen H.

    2006-01-01

    This paper addresses the issue of crash certification by analysis. This broad topic encompasses many ancillary issues including model validation procedures, uncertainty in test data and analysis models, probabilistic techniques for test-analysis correlation, verification of the mathematical formulation, and establishment of appropriate qualification requirements. This paper will focus on certification requirements for crashworthiness of military helicopters; capabilities of the current analysis codes used for crash modeling and simulation, including some examples of simulations from the literature to illustrate the current approach to model validation; and future directions needed to achieve "crash certification by analysis."

  17. Prognosis Research Strategy (PROGRESS) 3: prognostic model research.

    PubMed

    Steyerberg, Ewout W; Moons, Karel G M; van der Windt, Danielle A; Hayden, Jill A; Perel, Pablo; Schroter, Sara; Riley, Richard D; Hemingway, Harry; Altman, Douglas G

    2013-01-01

    Prognostic models are abundant in the medical literature yet their use in practice seems limited. In this article, the third in the PROGRESS series, the authors review how such models are developed and validated, and then address how prognostic models are assessed for their impact on practice and patient outcomes, illustrating these ideas with examples.

  18. Reverse-translational biomarker validation of Abnormal Repetitive Behaviors in mice: an illustration of the 4P's modeling approach

    PubMed Central

    Garner, Joseph P.; Thogerson, Collette M.; Dufour, Brett D.; Würbel, Hanno; Murray, James D.; Mench, Joy A.

    2011-01-01

    The NIMH's new strategic plan, with its emphasis on the “4P's” (Prediction, Preemption, Personalization, & Populations) and biomarker-based medicine requires a radical shift in animal modeling methodology. In particular 4P's models will be non-determinant (i.e. disease severity will depend on secondary environmental and genetic factors); and validated by reverse-translation of animal homologues to human biomarkers. A powerful consequence of the biomarker approach is that different closely-related disorders have a unique fingerprint of biomarkers. Animals can be validated as a highly-specific model of a single disorder by matching this `fingerprint'; or as a model of a symptom seen in multiple disorders by matching common biomarkers. Here we illustrate this approach with two Abnormal Repetitive Behaviors (ARBs) in mice: stereotypies; and barbering (hair pulling). We developed animal versions of the neuropsychological biomarkers that distinguish human ARBs, and tested the fingerprint of the different mouse ARBs. As predicted, the two mouse ARBs were associated with different biomarkers. Both barbering and stereotypy could be discounted as models of OCD (even though they are widely used as such), due to the absence of limbic biomarkers which are characteristic of OCD and hence are necessary for a valid model. Conversely barbering matched the fingerprint of trichotillomania (i.e. selective deficits in set-shifting), suggesting it may be a highly specific model of this disorder. In contrast stereotypies were correlated only with a biomarker (deficits in response shifting) correlated with stereotypies in multiple disorders, suggesting that animal stereotypies model stereotypies in multiple disorders. PMID:21219937

  19. Hierarchical Multi-Scale Approach To Validation and Uncertainty Quantification of Hyper-Spectral Image Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  20. Multiple Subtypes among Vocationally Undecided College Students: A Model and Assessment Instrument.

    ERIC Educational Resources Information Center

    Jones, Lawrence K.; Chenery, Mary Faeth

    1980-01-01

    A model of vocational decision status was developed, and an instrument was constructed and used to assess its three dimensions. Results demonstrated the utility of the model, supported the reliability and validity of the instrument, and illustrated the value of viewing vocationally undecided students as multiple subtypes. (Author)

  1. Pitfalls in Prediction Modeling for Normal Tissue Toxicity in Radiation Therapy: An Illustration With the Individual Radiation Sensitivity and Mammary Carcinoma Risk Factor Investigation Cohorts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbah, Chamberlain (Department of Mathematical Modeling, Statistics, and Bioinformatics, Faculty of Bioscience Engineering, Ghent University, Ghent); Thierens, Hubert

    Purpose: To identify the main causes underlying the failure of prediction models for radiation therapy toxicity to replicate. Methods and Materials: Data were used from two German cohorts, Individual Radiation Sensitivity (ISE) (n=418) and Mammary Carcinoma Risk Factor Investigation (MARIE) (n=409), of breast cancer patients with similar characteristics and radiation therapy treatments. The toxicity endpoint chosen was telangiectasia. The LASSO (least absolute shrinkage and selection operator) logistic regression method was used to build a predictive model for a dichotomized endpoint (Radiation Therapy Oncology Group/European Organization for the Research and Treatment of Cancer score 0, 1, or ≥2). Internal areas under the receiver operating characteristic curve (inAUCs) were calculated by a naïve approach whereby the training data (ISE) were also used for calculating the AUC. Cross-validation was also applied to calculate the AUC within the same cohort, a second type of inAUC. Internal AUCs from cross-validation were calculated within ISE and MARIE separately. Models trained on one dataset (ISE) were applied to a test dataset (MARIE) and AUCs calculated (exAUCs). Results: Internal AUCs from the naïve approach were generally larger than inAUCs from cross-validation owing to overfitting the training data. Internal AUCs from cross-validation were also generally larger than the exAUCs, reflecting heterogeneity in the predictors between cohorts. The best models with largest inAUCs from cross-validation within both cohorts had a number of common predictors: hypertension, normalized total boost, and presence of estrogen receptors. Surprisingly, the effect (coefficient in the prediction model) of hypertension on telangiectasia incidence was positive in ISE and negative in MARIE. Other predictors were also not common between the 2 cohorts, illustrating that overcoming overfitting does not solve the problem of replication failure of prediction models completely. Conclusions: Overfitting and cohort heterogeneity are the 2 main causes of replication failure of prediction models across cohorts. Cross-validation and similar techniques (eg, bootstrapping) cope with overfitting, but the development of validated predictive models for radiation therapy toxicity requires strategies that deal with cohort heterogeneity.
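
The contrast among the three AUC flavours can be sketched as follows, using scikit-learn's L1-penalized logistic regression as a stand-in for the LASSO fit; the cohort matrices and the function name are hypothetical, and this is not the authors' code.

```python
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def three_aucs(X_ise, y_ise, X_marie, y_marie):
    lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=20, cv=5, scoring="roc_auc")
    lasso.fit(X_ise, y_ise)

    # (1) Naive inAUC: trained and evaluated on the same cohort, hence optimistic (overfitting).
    auc_naive = roc_auc_score(y_ise, lasso.predict_proba(X_ise)[:, 1])

    # (2) Cross-validated inAUC within the training cohort.
    p_cv = cross_val_predict(lasso, X_ise, y_ise, cv=10, method="predict_proba")[:, 1]
    auc_cv = roc_auc_score(y_ise, p_cv)

    # (3) exAUC: the frozen training-cohort model applied to the second cohort.
    auc_ext = roc_auc_score(y_marie, lasso.predict_proba(X_marie)[:, 1])
    return auc_naive, auc_cv, auc_ext
```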

  2. An Overview of NASA's IM&S Verification and Validation Process Plan and Specification for Space Exploration

    NASA Technical Reports Server (NTRS)

    Gravitz, Robert M.; Hale, Joseph

    2006-01-01

    NASA's Exploration Systems Mission Directorate (ESMD) is implementing a management approach for modeling and simulation (M&S) that will provide decision-makers with information on a model's fidelity, credibility, and quality. This information will allow the decision-maker to understand the risks involved in using a model's results in the decision-making process. This presentation will discuss NASA's approach to verification and validation (V&V) of its models or simulations supporting space exploration, describing the V&V process and the associated M&S V&V activities required to support the decision-making process. The M&S V&V Plan and V&V Report templates for ESMD will also be illustrated.

  3. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
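
The maximin idea can be illustrated with a toy sketch: estimate model parameters by maximizing the smallest margin of compliance with a set of validation requirements. The first-order response model, the requirement bounds, and the synthetic data below are invented; the paper's framework handles much richer requirement sets in both time and frequency domains.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 5.0, 50)
y_meas = 1.0 - np.exp(-t / 0.8) + 0.02 * np.random.default_rng(1).normal(size=t.size)

def predict(theta):
    gain, tau = theta
    tau = max(tau, 1e-3)                      # guard against nonphysical values during the search
    return gain * (1.0 - np.exp(-t / tau))    # first-order step response

def margins(theta, max_abs_err=0.10, max_final_err=0.05):
    """Margins of requirement compliance (positive means the requirement is satisfied)."""
    err = predict(theta) - y_meas
    return np.array([max_abs_err - np.max(np.abs(err)),    # time-domain error bound
                     max_final_err - abs(err[-1])])         # steady-state error bound

# Maximize the smallest margin, i.e. minimize the negative of the minimum margin.
result = minimize(lambda th: -np.min(margins(th)), x0=[0.5, 0.5], method="Nelder-Mead")
print("estimate:", result.x, "margins:", margins(result.x))
```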

  4. Reverse-translational biomarker validation of Abnormal Repetitive Behaviors in mice: an illustration of the 4P's modeling approach.

    PubMed

    Garner, Joseph P; Thogerson, Collette M; Dufour, Brett D; Würbel, Hanno; Murray, James D; Mench, Joy A

    2011-06-01

    The NIMH's new strategic plan, with its emphasis on the "4P's" (Prediction, Pre-emption, Personalization, and Populations) and biomarker-based medicine requires a radical shift in animal modeling methodology. In particular 4P's models will be non-determinant (i.e. disease severity will depend on secondary environmental and genetic factors); and validated by reverse-translation of animal homologues to human biomarkers. A powerful consequence of the biomarker approach is that different closely related disorders have a unique fingerprint of biomarkers. Animals can be validated as a highly specific model of a single disorder by matching this 'fingerprint'; or as a model of a symptom seen in multiple disorders by matching common biomarkers. Here we illustrate this approach with two Abnormal Repetitive Behaviors (ARBs) in mice: stereotypies and barbering (hair pulling). We developed animal versions of the neuropsychological biomarkers that distinguish human ARBs, and tested the fingerprint of the different mouse ARBs. As predicted, the two mouse ARBs were associated with different biomarkers. Both barbering and stereotypy could be discounted as models of OCD (even though they are widely used as such), due to the absence of limbic biomarkers which are characteristic of OCD and hence are necessary for a valid model. Conversely barbering matched the fingerprint of trichotillomania (i.e. selective deficits in set-shifting), suggesting it may be a highly specific model of this disorder. In contrast stereotypies were correlated only with a biomarker (deficits in response shifting) correlated with stereotypies in multiple disorders, suggesting that animal stereotypies model stereotypies in multiple disorders. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Alternating Renewal Process Models for Behavioral Observation: Simulation Methods, Software, and Validity Illustrations

    ERIC Educational Resources Information Center

    Pustejovsky, James E.; Runyon, Christopher

    2014-01-01

    Direct observation recording procedures produce reductive summary measurements of an underlying stream of behavior. Previous methodological studies of these recording procedures have employed simulation methods for generating random behavior streams, many of which amount to special cases of a statistical model known as the alternating renewal…

  6. The Challenge of Grounding Planning in Simulation with an Interactive Model Development Environment

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Frank, Jeremy D.; Chachere, John M.; Smith, Tristan B.; Swanson, Keith J.

    2011-01-01

    A principal obstacle to fielding automated planning systems is the difficulty of modeling. Physical systems are conventionally modeled based on specification documents and the modeler's understanding of the system. Thus, the model is developed in a way that is disconnected from the system's actual behavior and is vulnerable to manual error. Another obstacle to fielding planners is testing and validation. For a space mission, generated plans must be validated, often by translating them into command sequences that are run in a simulation testbed. Testing in this way is complex and onerous because of the large number of possible plans and states of the spacecraft. However, if used as a source of domain knowledge, the simulator can ease validation. This paper poses a challenge: to ground planning models in the system physics represented by simulation. A proposed interactive model development environment illustrates the integration of planning and simulation to meet the challenge. This integration reveals research paths for automated model construction and validation.

  7. On Nomological Validity and Auxiliary Assumptions: The Importance of Simultaneously Testing Effects in Social Cognitive Theories Applied to Health Behavior and Some Guidelines

    PubMed Central

    Hagger, Martin S.; Gucciardi, Daniel F.; Chatzisarantis, Nikos L. D.

    2017-01-01

    Tests of social cognitive theories provide informative data on the factors that relate to health behavior, and the processes and mechanisms involved. In the present article, we contend that tests of social cognitive theories should adhere to the principles of nomological validity, defined as the degree to which predictions in a formal theoretical network are confirmed. We highlight the importance of nomological validity tests to ensure theory predictions can be disconfirmed through observation. We argue that researchers should be explicit on the conditions that lead to theory disconfirmation, and identify any auxiliary assumptions on which theory effects may be conditional. We contend that few researchers formally test the nomological validity of theories, or outline conditions that lead to model rejection and the auxiliary assumptions that may explain findings that run counter to hypotheses, raising potential for ‘falsification evasion.’ We present a brief analysis of studies (k = 122) testing four key social cognitive theories in health behavior to illustrate deficiencies in reporting theory tests and evaluations of nomological validity. Our analysis revealed that few articles report explicit statements suggesting that their findings support or reject the hypotheses of the theories tested, even when findings point to rejection. We illustrate the importance of explicit a priori specification of fundamental theory hypotheses and associated auxiliary assumptions, and identification of the conditions which would lead to rejection of theory predictions. We also demonstrate the value of confirmatory analytic techniques, meta-analytic structural equation modeling, and Bayesian analyses in providing robust converging evidence for nomological validity. We provide a set of guidelines for researchers on how to adopt and apply the nomological validity approach to testing health behavior models. PMID:29163307

  8. Independent external validation of predictive models for urinary dysfunction following external beam radiotherapy of the prostate: Issues in model development and reporting.

    PubMed

    Yahya, Noorazrul; Ebert, Martin A; Bulsara, Max; Kennedy, Angel; Joseph, David J; Denham, James W

    2016-08-01

    Most predictive models are not sufficiently validated for prospective use. We performed independent external validation of published predictive models for urinary dysfunctions following radiotherapy of the prostate. Multivariable models developed to predict atomised and generalised urinary symptoms, both acute and late, were considered for validation using a dataset representing 754 participants from the TROG 03.04-RADAR trial. Endpoints and features were harmonised to match the predictive models. The overall performance, calibration and discrimination were assessed. Fourteen models from four publications were validated. The discrimination of the predictive models in an independent external validation cohort, measured as the area under the receiver operating characteristic (ROC) curve (AUC), ranged from 0.473 to 0.695, generally lower than in internal validation. Four models had an AUC >0.6. Shrinkage was required for all predictive models' coefficients, with shrinkage factors ranging from -0.309 (prediction probability was inverse to observed proportion) to 0.823. Predictive models which included baseline symptoms as a feature produced the highest discrimination. Two models produced a predicted probability of 0 and 1 for all patients. Predictive models vary in performance and transferability, illustrating the need for improvements in model development and reporting. Several models showed reasonable potential, but efforts should be increased to improve performance. Baseline symptoms should always be considered as potential features for predictive models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
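
A hedged sketch of the external-validation summaries this abstract reports: discrimination (AUC) together with calibration intercept and calibration slope, the slope being the shrinkage factor applied to the published linear predictor. The published coefficients and validation matrices are placeholders, and statsmodels is used only as one convenient way to fit the calibration regressions.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def external_validation(X_val, y_val, published_intercept, published_coefs):
    lp = np.asarray(published_intercept + X_val @ np.asarray(published_coefs), dtype=float)

    auc = roc_auc_score(y_val, lp)                                     # discrimination

    # Calibration slope (shrinkage factor): logit(y) ~ a + b * lp; b < 1 suggests overfitting.
    slope_fit = sm.GLM(y_val, sm.add_constant(lp), family=sm.families.Binomial()).fit()
    calibration_slope = slope_fit.params[1]

    # Calibration-in-the-large: intercept of logit(y) ~ a, with lp as an offset.
    intercept_fit = sm.GLM(y_val, np.ones((len(lp), 1)),
                           family=sm.families.Binomial(), offset=lp).fit()
    calibration_intercept = intercept_fit.params[0]

    return auc, calibration_intercept, calibration_slope
```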

  9. Assessing cross-cultural validity of scales: a methodological review and illustrative example.

    PubMed

    Beckstead, Jason W; Yang, Chiu-Yueh; Lengacher, Cecile A

    2008-01-01

    In this article, we assessed the cross-cultural validity of the Women's Role Strain Inventory (WRSI), a multi-item instrument that assesses the degree of strain experienced by women who juggle the roles of working professional, student, wife and mother. Cross-cultural validity is evinced by demonstrating the measurement invariance of the WRSI. Measurement invariance is the extent to which items of multi-item scales function in the same way across different samples of respondents. We assessed measurement invariance by comparing a sample of working women in Taiwan with a similar sample from the United States. Structural equation models (SEMs) were employed to determine the invariance of the WRSI and to estimate the unique validity variance of its items. This article also provides nurse-researchers with the necessary underlying measurement theory and illustrates how SEMs may be applied to assess cross-cultural validity of instruments used in nursing research. Overall performance of the WRSI was acceptable but our analysis showed that some items did not display invariance properties across samples. Item analysis is presented and recommendations for improving the instrument are discussed.

  10. Assessing the impact of modeling limits on intelligent systems

    NASA Technical Reports Server (NTRS)

    Rouse, William B.; Hammer, John M.

    1990-01-01

    The knowledge bases underlying intelligent systems are validated. A general conceptual framework is provided for considering the roles in intelligent systems of models of physical, behavioral, and operational phenomena. A methodology is described for identifying limits in particular intelligent systems, and the use of the methodology is illustrated via an experimental evaluation of the pilot-vehicle interface within the Pilot's Associate. The requirements and functionality are outlined for a computer based knowledge engineering environment which would embody the approach advocated and illustrated in earlier discussions. Issues considered include the specific benefits of this functionality, the potential breadth of applicability, and technical feasibility.

  11. CheckMyMetal: a macromolecular metal-binding validation tool

    PubMed Central

    Porebski, Przemyslaw J.

    2017-01-01

    Metals are essential in many biological processes, and metal ions are modeled in roughly 40% of the macromolecular structures in the Protein Data Bank (PDB). However, a significant fraction of these structures contain poorly modeled metal-binding sites. CheckMyMetal (CMM) is an easy-to-use metal-binding site validation server for macromolecules that is freely available at http://csgid.org/csgid/metal_sites. The CMM server can detect incorrect metal assignments as well as geometrical and other irregularities in the metal-binding sites. Guidelines for metal-site modeling and validation in macromolecules are illustrated by several practical examples grouped by the type of metal. These examples show CMM users (and crystallographers in general) problems they may encounter during the modeling of a specific metal ion. PMID:28291757

  12. Incremental Validity of Multidimensional Proficiency Scores from Diagnostic Classification Models: An Illustration for Elementary School Mathematics

    ERIC Educational Resources Information Center

    Kunina-Habenicht, Olga; Rupp, André A.; Wilhelm, Oliver

    2017-01-01

    Diagnostic classification models (DCMs) hold great potential for applications in summative and formative assessment by providing discrete multivariate proficiency scores that yield statistically driven classifications of students. Using data from a newly developed diagnostic arithmetic assessment that was administered to 2032 fourth-grade students…

  13. Finding One's Voice: The Pacesetter Model for More Equitable Assessment.

    ERIC Educational Resources Information Center

    Badger, Elizabeth

    1996-01-01

    Describes the College Board's Pacesetter Program, high school courses developed using principles of ongoing performance testing and portfolios, standards, and curriculum. The model is illustrated in a description of the Voices of Modern Culture language arts course. Argues that this assessment process has systemic validity and is more relevant to…

  14. High frequency, multi-axis dynamic stiffness analysis of a fractionally damped elastomeric isolator using continuous system theory

    NASA Astrophysics Data System (ADS)

    Fredette, Luke; Singh, Rajendra

    2017-02-01

    A spectral element approach is proposed to determine the multi-axis dynamic stiffness terms of elastomeric isolators with fractional damping over a broad range of frequencies. The dynamic properties of a class of cylindrical isolators are modeled by using the continuous system theory in terms of homogeneous rods or Timoshenko beams. The transfer matrix type dynamic stiffness expressions are developed from exact harmonic solutions given translational or rotational displacement excitations. Broadband dynamic stiffness magnitudes (say up to 5 kHz) are computationally verified for axial, torsional, shear, flexural, and coupled stiffness terms using a finite element model. Some discrepancies are found between finite element and spectral element models for the axial and flexural motions, illustrating certain limitations of each method. Experimental validation is provided for an isolator with two cylindrical elements (that work primarily in the shear mode) using dynamic measurements, as reported in the prior literature, up to 600 Hz. Superiority of the fractional damping formulation over structural or viscous damping models is illustrated via experimental validation. Finally, the strengths and limitations of the spectral element approach are briefly discussed.
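
As a hedged aside on the damping comparison mentioned in this abstract (not code from the paper), a fractional-derivative Kelvin-Voigt element has complex dynamic stiffness k + c*(i*omega)**alpha; alpha = 1 recovers viscous damping, while smaller alpha gives a loss factor with much weaker frequency dependence, approaching structural damping. The parameter values below are arbitrary.

```python
import numpy as np

def fractional_kv_stiffness(omega, k, c, alpha):
    """Complex dynamic stiffness of a fractional Kelvin-Voigt element."""
    return k + c * (1j * omega) ** alpha

omega = 2 * np.pi * np.logspace(0, np.log10(5000), 200)   # 1 Hz to 5 kHz
for alpha in (0.3, 0.6, 1.0):                             # alpha = 1 is the viscous limit
    kd = fractional_kv_stiffness(omega, k=1.0e6, c=2.0e3, alpha=alpha)
    loss_factor = kd.imag / kd.real
    print(f"alpha={alpha:.1f}: loss factor spans {loss_factor.min():.3g} to {loss_factor.max():.3g}")
```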

  15. Validation of Community Models: Identifying Events in Space Weather Model Timelines

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    I develop and document a set of procedures which test the quality of predictions of solar wind speed and polarity of the interplanetary magnetic field (IMF) made by coupled models of the ambient solar corona and heliosphere. The Wang-Sheeley-Arge (WSA) model is used to illustrate the application of these validation procedures. I present an algorithm which detects transitions of the solar wind from slow to high speed. I also present an algorithm which processes the measured polarity of the outward directed component of the IMF. This removes high-frequency variations to expose the longer-scale changes that reflect IMF sector changes. I apply these algorithms to WSA model predictions made using a small set of photospheric synoptic magnetograms obtained by the Global Oscillation Network Group as input to the model. The results of this preliminary validation of the WSA model (version 1.6) are summarized.

  16. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    NASA Astrophysics Data System (ADS)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.

  17. In vitro burn model illustrating heat conduction patterns using compressed thermal papers.

    PubMed

    Lee, Jun Yong; Jung, Sung-No; Kwon, Ho

    2015-01-01

    To date, heat conduction from heat sources to tissue has been estimated by complex mathematical modeling. In the present study, we developed an intuitive in vitro skin burn model that illustrates heat conduction patterns inside the skin. This was composed of tightly compressed thermal papers with compression frames. Heat flow through the model left a trace by changing the color of thermal papers. These were digitized and three-dimensionally reconstituted to reproduce the heat conduction patterns in the skin. For standardization, we validated K91HG-CE thermal paper using a printout test and bivariate correlation analysis. We measured the papers' physical properties and calculated the estimated depth of heat conduction using Fourier's equation. Through contact burns of 5, 10, 15, 20, and 30 seconds on porcine skin and our burn model using a heated brass comb, and comparing the burn wound and heat conduction trace, we validated our model. The heat conduction pattern correlation analysis (intraclass correlation coefficient: 0.846, p < 0.001) and the heat conduction depth correlation analysis (intraclass correlation coefficient: 0.93, p < 0.001) showed statistically significant high correlations between the porcine burn wound and our model. Our model showed good correlation with porcine skin burn injury and replicated its heat conduction patterns. © 2014 by the Wound Healing Society.
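
For orientation, the "estimated depth of heat conduction using Fourier's equation" can be approximated with the classical semi-infinite-solid solution, in which the normalized temperature rise at depth x after contact time t is erfc(x / (2*sqrt(alpha*t))). The sketch below is not the authors' calculation; the diffusivity, contact temperature, and injury threshold are rough assumed values.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

alpha = 1.1e-7                        # assumed thermal diffusivity of skin, m^2/s
T_surface, T_initial = 100.0, 32.0    # assumed contact and initial skin temperatures, degC
T_threshold = 44.0                    # assumed threshold for thermal injury, degC

def temperature(x, t):
    """Temperature at depth x (m) after contact time t (s), semi-infinite solid."""
    return T_initial + (T_surface - T_initial) * erfc(x / (2.0 * np.sqrt(alpha * t)))

for t in (5, 10, 15, 20, 30):
    depth = brentq(lambda x: temperature(x, t) - T_threshold, 1e-6, 0.05)
    print(f"{t:2d} s contact: temperature exceeds {T_threshold} degC down to ~{depth * 1e3:.2f} mm")
```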

  18. Illustrating a Mixed-Method Approach for Validating Culturally Specific Constructs

    ERIC Educational Resources Information Center

    Hitchcock, J.H.; Nastasi, B.K.; Dai, D.Y.; Newman, J.; Jayasena, A.; Bernstein-Moore, R.; Sarkar, S.; Varjas, K.

    2005-01-01

    The purpose of this article is to illustrate a mixed-method approach (i.e., combining qualitative and quantitative methods) for advancing the study of construct validation in cross-cultural research. The article offers a detailed illustration of the approach using the responses 612 Sri Lankan adolescents provided to an ethnographic survey. Such…

  19. An Approach to Comprehensive and Sustainable Solar Wind Model Validation

    NASA Astrophysics Data System (ADS)

    Rastaetter, L.; MacNeice, P. J.; Mays, M. L.; Boblitt, J. M.; Wiegand, C.

    2017-12-01

    The number of models of the corona and inner heliosphere and of their updates and upgrades grows steadily, as does the number and character of the model inputs. Maintaining up-to-date validation of these models, in the face of this constant model evolution, is a necessary but very labor-intensive activity. In the last year alone, both NASA's LWS program and the CCMC's ongoing support of model forecasting activities at NOAA SWPC have sought model validation reports on the quality of all aspects of the community's coronal and heliospheric models, including both ambient and CME related wind solutions at L1. In this presentation I will give a brief review of the community's previous model validation results of L1 wind representation. I will discuss the semi-automated web based system we are constructing at the CCMC to present comparative visualizations of all interesting aspects of the solutions from competing models. This system is designed to be easily queried to provide the essential comprehensive inputs to repeat and update previous validation studies and support extensions to them. I will illustrate this by demonstrating how the system is being used to support the CCMC/LWS Model Assessment Forum teams focused on the ambient and time-dependent corona and solar wind, including CME arrival time and IMF Bz. I will also discuss plans to extend the system to include results from the Forum teams addressing SEP model validation.

  20. Development and validation of a ten-item questionnaire with explanatory illustrations to assess upper extremity disorders: favorable effect of illustrations in the item reduction process.

    PubMed

    Kurimoto, Shigeru; Suzuki, Mikako; Yamamoto, Michiro; Okui, Nobuyuki; Imaeda, Toshihiko; Hirata, Hitoshi

    2011-11-01

    The purpose of this study is to develop a short and valid measure for upper extremity disorders and to assess the effect of attached illustrations in the item reduction of a self-administered disability questionnaire while retaining its psychometric properties. A validated questionnaire used to assess upper extremity disorders, the Hand20, was reduced to ten items using two item-reduction techniques. The psychometric properties of the abbreviated form, the Hand10, were evaluated on an independent sample that was not used in the shortening process. Validity, reliability, and responsiveness of the Hand10 were retained in the item reduction process. It is possible that the use of explanatory illustrations attached to the Hand10 helped with its reproducibility. The illustrations for the Hand10 promoted text comprehension and motivation to answer the items. These changes resulted in high acceptability; more than 99.3% of patients, including 98.5% of elderly patients, could complete the Hand10 properly. The illustrations had favorable effects on the item reduction process and made it possible to retain the precision of the instrument. The Hand10 is a reliable and valid instrument for individual-level applications with the advantage of being compact and broadly applicable, even in elderly individuals.

  1. A cross-validation package driving Netica with python

    USGS Publications Warehouse

    Fienen, Michael N.; Plant, Nathaniel G.

    2014-01-01

    Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross-validation is a technique to avoid the overfitting that results from overly complex BNs; overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented, due in part to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and to read, rebuild, and learn BNs from data. Insights gained from cross-validation, and their implications for prediction versus description, are illustrated with a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, and that overfitting both incurs computational costs and reduces prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).
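
Without reproducing the CVNetica or Netica APIs, the overfitting diagnosis that cross-validation provides can be illustrated generically: as model complexity grows beyond what the data support, skill on the calibration data keeps improving while cross-validated prediction skill degrades. The polynomial-regression stand-in below is purely illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
x = rng.uniform(-3, 3, size=60).reshape(-1, 1)
y = np.sin(x).ravel() + 0.3 * rng.normal(size=60)

for degree in (1, 3, 6, 12, 18):                               # increasing model complexity
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    calibration_r2 = model.fit(x, y).score(x, y)                # skill on the calibration data
    prediction_r2 = cross_val_score(model, x, y, cv=5).mean()   # cross-validated prediction skill
    print(f"complexity {degree:2d}: calibration R^2 = {calibration_r2:.3f}, "
          f"cross-validated R^2 = {prediction_r2:.3f}")
```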

  2. Validation metrics for turbulent plasma transport

    DOE PAGES

    Holland, C.

    2016-06-22

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. Furthermore, the utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak, as part of a multi-year transport model validation activity.

  3. Validation metrics for turbulent plasma transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, C.

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. Furthermore, the utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak, as part of a multi-year transport model validation activity.

  4. Towards Personalized Cardiology: Multi-Scale Modeling of the Failing Heart

    PubMed Central

    Amr, Ali; Neumann, Dominik; Georgescu, Bogdan; Seegerer, Philipp; Kamen, Ali; Haas, Jan; Frese, Karen S.; Irawati, Maria; Wirsz, Emil; King, Vanessa; Buss, Sebastian; Mereles, Derliz; Zitron, Edgar; Keller, Andreas; Katus, Hugo A.; Comaniciu, Dorin; Meder, Benjamin

    2015-01-01

    Background: Despite modern pharmacotherapy and advanced implantable cardiac devices, the overall prognosis and quality of life of heart failure (HF) patients remain poor. This is in part due to insufficient patient stratification and lack of individualized therapy planning, resulting in less effective treatments and a significant number of non-responders. Methods and Results: State-of-the-art clinical phenotyping was acquired, including magnetic resonance imaging (MRI) and biomarker assessment. An individualized, multi-scale model of heart function covering cardiac anatomy, electrophysiology, biomechanics and hemodynamics was estimated using a robust framework. The model was computed on n=46 HF patients, showing for the first time that advanced multi-scale models can be fitted consistently on large cohorts. Novel multi-scale parameters derived from the model of all cases were analyzed and compared against clinical parameters, cardiac imaging, lab tests and survival scores to evaluate the explicative power of the model and its potential for better patient stratification. Model validation was pursued by comparing clinical parameters that were not used in the fitting process against model parameters. Conclusion: This paper illustrates how advanced multi-scale models can complement cardiovascular imaging and how they could be applied in patient care. Based on the obtained results, it becomes conceivable that, after thorough validation, such heart failure models could be applied for patient management and therapy planning in the future, as we illustrate in one patient of our cohort who received CRT-D implantation. PMID:26230546

  5. A dynamic model of the human postural control system.

    NASA Technical Reports Server (NTRS)

    Hill, J. C.

    1971-01-01

    Description of a digital simulation of the pitch axis dynamics of a stick man. The difficulties encountered in linearizing the equations of motion are discussed; the conclusion reached is that a completely linear simulation is of such restricted validity that only a nonlinear simulation is of any practical use. Typical simulation results obtained from the full nonlinear model are illustrated.

  6. Twin Data That Made a Big Difference, and That Deserve to Be Better-Known and Used in Teaching

    ERIC Educational Resources Information Center

    Campbell, Harlan; Hanley, James A.

    2017-01-01

    Because of their efficiency and ability to keep many other factors constant, twin studies have a special appeal for investigators. Just as with any teaching dataset, a "matched-sets" dataset used to illustrate a statistical model should be compelling, still relevant, and valid. Indeed, such a "model dataset" should meet the…

  7. Rank-based methods for modeling dependence between loss triangles.

    PubMed

    Côté, Marie-Pier; Genest, Christian; Abdallah, Anas

    2016-01-01

    In order to determine the risk capital for their aggregate portfolio, property and casualty insurance companies must fit a multivariate model to the loss triangle data relating to each of their lines of business. As an inadequate choice of dependence structure may have an undesirable effect on reserve estimation, a two-stage inference strategy is proposed in this paper to assist with model selection and validation. Generalized linear models are first fitted to the margins. Standardized residuals from these models are then linked through a copula selected and validated using rank-based methods. The approach is illustrated with data from six lines of business of a large Canadian insurance company for which two hierarchical dependence models are considered, i.e., a fully nested Archimedean copula structure and a copula-based risk aggregation model.

  8. Development and validation of a cost-utility model for Type 1 diabetes mellitus.

    PubMed

    Wolowacz, S; Pearson, I; Shannon, P; Chubb, B; Gundgaard, J; Davies, M; Briggs, A

    2015-08-01

    To develop a health economic model to evaluate the cost-effectiveness of new interventions for Type 1 diabetes mellitus by their effects on long-term complications (measured through mean HbA1c) while capturing the impact of treatment on hypoglycaemic events. Through a systematic review, we identified complications associated with Type 1 diabetes mellitus and data describing the long-term incidence of these complications. An individual patient simulation model was developed and included the following complications: cardiovascular disease, peripheral neuropathy, microalbuminuria, end-stage renal disease, proliferative retinopathy, ketoacidosis, cataract, hypoglycaemia and adverse birth outcomes. Risk equations were developed from published cumulative incidence data and hazard ratios for the effect of HbA1c, age and duration of diabetes. We validated the model by comparing model predictions with observed outcomes from studies used to build the model (internal validation) and from other published data (external validation). We performed illustrative analyses for typical patient cohorts and a hypothetical intervention. Model predictions were within 2% of expected values in the internal validation and within 8% of observed values in the external validation (percentages represent absolute differences in the cumulative incidence). The model utilized high-quality, recent data specific to people with Type 1 diabetes mellitus. In the model validation, results deviated less than 8% from expected values.
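
    As an illustration of the kind of risk-equation step such an individual patient simulation performs, the following minimal Python sketch converts a published cumulative incidence into an annual event probability and scales it by a hazard ratio per unit of HbA1c. The parameter values, the 10-year horizon and the hazard-ratio form are hypothetical placeholders, not the published model.

        # Illustrative sketch (not the published model): annual complication risk for an
        # individual patient simulation, with a baseline hazard derived from cumulative
        # incidence and adjusted by a hazard ratio per 1% HbA1c above a reference value.
        # All parameter values below are hypothetical placeholders.
        import math
        import random

        def annual_event_probability(cum_incidence_10yr, hr_per_hba1c, hba1c, ref_hba1c=7.0):
            """Convert a 10-year cumulative incidence to an annual probability,
            scaled by an HbA1c-dependent hazard ratio."""
            baseline_hazard = -math.log(1.0 - cum_incidence_10yr) / 10.0   # per year
            hazard = baseline_hazard * hr_per_hba1c ** (hba1c - ref_hba1c)
            return 1.0 - math.exp(-hazard)

        def simulate_patient(years, hba1c, seed=None):
            """Simulate whether a single complication occurs over the horizon."""
            rng = random.Random(seed)
            for year in range(years):
                if rng.random() < annual_event_probability(0.20, 1.3, hba1c):
                    return year + 1          # year of first event
            return None                      # no event within the horizon

        # Example: compare event rates for two HbA1c levels across 10,000 simulated patients.
        for level in (7.0, 9.0):
            events = sum(simulate_patient(20, level, seed=i) is not None for i in range(10_000))
            print(f"HbA1c {level}: {events / 10_000:.1%} with an event over 20 years")

    In a full model of this kind, one such risk equation would be evaluated for every complication, for every simulated patient, in every model cycle.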

  9. Computational Phenotyping in Psychiatry: A Worked Example

    PubMed Central

    2016-01-01

    Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology: structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview of this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry. PMID:27517087

  10. Computational Phenotyping in Psychiatry: A Worked Example.

    PubMed

    Schwartenbeck, Philipp; Friston, Karl

    2016-01-01

    Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology: structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview of this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry.
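
    The cross-validation step described above can be made concrete with a small sketch: given per-subject parameters obtained by model inversion, one checks whether a between-subject label such as diagnosis can be recovered out of sample. The data below are synthetic and the parameter names (learning rate, decision temperature) are placeholders rather than the parameters of the published task model; scikit-learn is used purely for convenience.

        # Minimal sketch of the cross-validation step: given per-subject model parameters
        # estimated by model inversion, test whether a between-subject label (e.g.,
        # diagnosis) can be recovered out-of-sample. Data are synthetic; parameter names
        # are placeholders, not those of the published task model.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        rng = np.random.default_rng(0)
        n_per_group = 40

        # Synthetic "estimated parameters" (e.g., learning rate, decision temperature)
        controls = rng.normal(loc=[0.30, 2.0], scale=[0.10, 0.5], size=(n_per_group, 2))
        patients = rng.normal(loc=[0.22, 2.6], scale=[0.10, 0.5], size=(n_per_group, 2))

        X = np.vstack([controls, patients])
        y = np.array([0] * n_per_group + [1] * n_per_group)   # 0 = control, 1 = patient

        clf = LogisticRegression()
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
        print(f"Cross-validated balanced accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")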

  11. Using Model Replication to Improve the Reliability of Agent-Based Models

    NASA Astrophysics Data System (ADS)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication in NetLogo, by a different author, of an ABM representing fraudulent behavior in a public service delivery system that was originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  12. Modeling Collective Animal Behavior with a Cognitive Perspective: A Methodological Framework

    PubMed Central

    Weitz, Sebastian; Blanco, Stéphane; Fournier, Richard; Gautrais, Jacques; Jost, Christian; Theraulaz, Guy

    2012-01-01

    The last decades have seen an increasing interest in modeling collective animal behavior. Some studies try to reproduce as accurately as possible the collective dynamics and patterns observed in several animal groups with biologically plausible, individual behavioral rules. The objective is then essentially to demonstrate that the observed collective features may be the result of self-organizing processes involving quite simple individual behaviors. Other studies concentrate on the objective of establishing or enriching links between collective behavior research and cognitive or physiological research, which then requires that each individual rule be carefully validated. Here we discuss the methodological consequences of this additional requirement. Using the example of corpse clustering in ants, we first illustrate that it may be impossible to discriminate among alternative individual rules by considering only observational data collected at the group level. Six individual behavioral models are described: They are clearly distinct in terms of individual behaviors, they all reproduce satisfactorily the collective dynamics and distribution patterns observed in experiments, and we show theoretically that it is strictly impossible to discriminate two of these models even in the limit of an infinite amount of data whatever the accuracy level. A set of methodological steps is then listed and discussed as practical ways to partially overcome this problem. They involve complementary experimental protocols specifically designed to address the behavioral rules successively, conserving group-level data for the overall model validation. In this context, we highlight the importance of maintaining a sharp distinction between model enunciation, with explicit references to validated biological concepts, and formal translation of these concepts in terms of quantitative state variables and fittable functional dependences. Illustrative examples are provided of the benefits expected during the often long and difficult process of refining a behavioral model, designing adapted experimental protocols and inverting model parameters. PMID:22761685

  13. Modeling collective animal behavior with a cognitive perspective: a methodological framework.

    PubMed

    Weitz, Sebastian; Blanco, Stéphane; Fournier, Richard; Gautrais, Jacques; Jost, Christian; Theraulaz, Guy

    2012-01-01

    The last decades have seen an increasing interest in modeling collective animal behavior. Some studies try to reproduce as accurately as possible the collective dynamics and patterns observed in several animal groups with biologically plausible, individual behavioral rules. The objective is then essentially to demonstrate that the observed collective features may be the result of self-organizing processes involving quite simple individual behaviors. Other studies concentrate on the objective of establishing or enriching links between collective behavior research and cognitive or physiological research, which then requires that each individual rule be carefully validated. Here we discuss the methodological consequences of this additional requirement. Using the example of corpse clustering in ants, we first illustrate that it may be impossible to discriminate among alternative individual rules by considering only observational data collected at the group level. Six individual behavioral models are described: They are clearly distinct in terms of individual behaviors, they all reproduce satisfactorily the collective dynamics and distribution patterns observed in experiments, and we show theoretically that it is strictly impossible to discriminate two of these models even in the limit of an infinite amount of data whatever the accuracy level. A set of methodological steps is then listed and discussed as practical ways to partially overcome this problem. They involve complementary experimental protocols specifically designed to address the behavioral rules successively, conserving group-level data for the overall model validation. In this context, we highlight the importance of maintaining a sharp distinction between model enunciation, with explicit references to validated biological concepts, and formal translation of these concepts in terms of quantitative state variables and fittable functional dependences. Illustrative examples are provided of the benefits expected during the often long and difficult process of refining a behavioral model, designing adapted experimental protocols and inverting model parameters.

  14. Analysis of a homemade Edison tinfoil phonograph.

    PubMed

    Sagers, Jason D; McNeese, Andrew R; Lenhart, Richard D; Wilson, Preston S

    2012-10-01

    Thomas Edison's phonograph was a landmark acoustic invention. In this paper, the phonograph is presented as a tool for education in acoustics. A brief history of the phonograph is outlined and an analogous circuit model that describes its dynamic response is discussed. Microphone and scanning laser Doppler vibrometer (SLDV) measurements were made on a homemade phonograph for model validation and inversion for unknown model parameters. SLDV measurements also conclusively illustrate where model assumptions are violated. The model elements which dominate the dynamic response are discussed.

  15. A hybrid model for traffic flow and crowd dynamics with random individual properties.

    PubMed

    Schleper, Veronika

    2015-04-01

    Based on an established mathematical model for the behavior of large crowds, a new model is derived that is able to take into account the statistical variation of individual maximum walking speeds. The same model is shown to be valid also in traffic flow situations, where for instance the statistical variation of preferred maximum speeds can be considered. The model involves explicit bounds on the state variables, such that a special Riemann solver is derived that is proved to respect the state constraints. Some care is devoted to a valid construction of random initial data, necessary for the use of the new model. The article also includes a numerical method that is shown to respect the bounds on the state variables and illustrative numerical examples, explaining the properties of the new model in comparison with established models.

  16. Bias-dependent hybrid PKI empirical-neural model of microwave FETs

    NASA Astrophysics Data System (ADS)

    Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera

    2011-10-01

    Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid in the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron mobility transistor device.

  17. Aircraft Fire Safety held in Sintra (Portugal) on 22-26 May 1989

    DTIC Science & Technology

    1989-10-01

    [Text snippet from a scanned report, partially garbled: predictions of the JASMINE fire model applied to a range of building fire problems, including the stable species CO and H2O; cited validation work includes "Validation of JASMINE", Transport and Road Research Laboratory Contractor Report No. 28, 1986, and further JASMINE validation studies by Liew, Bray and Moss and by Kumar, Hoffmann and Cox.]

  18. Testability of evolutionary game dynamics based on experimental economics data

    NASA Astrophysics Data System (ADS)

    Wang, Yijia; Chen, Xiaojie; Wang, Zhijian

    2017-11-01

    Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experiment data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, which is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.

  19. Facultative Stabilization Pond: Measuring Biological Oxygen Demand using Mathematical Approaches

    NASA Astrophysics Data System (ADS)

    Wira S, Ihsan; Sunarsih, Sunarsih

    2018-02-01

    Pollution is a man-made phenomenon. Pollutants discharged directly to the environment can create serious pollution problems, and untreated wastewater will contaminate and even pollute the receiving water body. Biological Oxygen Demand (BOD) is the amount of oxygen required by bacteria for oxidation of organic matter; the higher the BOD concentration, the greater the organic matter content. The purpose of this study was to predict the BOD of wastewater. Mathematical modeling methods were chosen to depict and predict BOD values in facultative wastewater stabilization ponds, and sampling measurements were carried out to validate the model. The results indicate that a mathematical approach can be applied to predict the BOD in facultative wastewater stabilization ponds. The model was validated using the Absolute Mean Error (AME) with a 10% tolerance limit; the AME for the model was 7.38% (< 10%), so the model is considered valid. Furthermore, a mathematical approach can also be applied to illustrate and predict the contents of wastewater.
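
    The abstract reports validation by Absolute Mean Error against a 10% tolerance. One common way to compute such a measure, comparing the mean of the simulated BOD series with the mean of the observed series, is sketched below; the exact definition used by the authors may differ, and the sample values are hypothetical.

        # Sketch of an Absolute Mean Error (AME) check as described above. The exact
        # definition used by the authors may differ; here AME compares the mean of the
        # simulated BOD series with the mean of the observed series.
        def absolute_mean_error(simulated, observed):
            mean_sim = sum(simulated) / len(simulated)
            mean_obs = sum(observed) / len(observed)
            return abs(mean_sim - mean_obs) / mean_obs

        # Hypothetical BOD concentrations (mg/L) at successive sampling points
        observed  = [92.0, 85.0, 78.0, 70.0, 64.0]
        simulated = [97.0, 88.0, 80.0, 74.0, 66.0]

        ame = absolute_mean_error(simulated, observed)
        print(f"AME = {ame:.2%}  ->  model {'valid' if ame < 0.10 else 'not valid'} at 10% tolerance")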

  20. Bayesian cross-entropy methodology for optimal design of validation experiments

    NASA Astrophysics Data System (ADS)

    Jiang, X.; Mahadevan, S.

    2006-07-01

    An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
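
    A toy sketch of the core idea, under strong simplifying assumptions: if both the model-prediction distribution and the observation distribution are taken to be Gaussian, the expected cross entropy has a closed form, and a simulated annealing search can select the design input that minimizes it. The dependence of the distributions on the input below is invented for illustration and is not the bolted-joint or rotor-hub application of the paper.

        # Toy sketch: select a validation-experiment input by minimizing a cross entropy
        # between the model-prediction distribution and the observation distribution,
        # via simulated annealing. Both distributions are assumed Gaussian so the cross
        # entropy has a closed form; everything below is a hypothetical stand-in for the
        # full Bayesian methodology described in the abstract.
        import math
        import random

        def gaussian_cross_entropy(mu_p, sigma_p, mu_q, sigma_q):
            """H(p, q) = -E_p[log q] for two univariate Gaussians."""
            return 0.5 * math.log(2.0 * math.pi * sigma_q**2) + \
                (sigma_p**2 + (mu_p - mu_q)**2) / (2.0 * sigma_q**2)

        def objective(x):
            """Hypothetical dependence of prediction/observation distributions on input x."""
            mu_pred, sigma_pred = 2.0 * x, 0.5 + 0.1 * x        # model prediction
            mu_obs, sigma_obs = 2.0 * x + 0.3 * x**2, 0.6       # experimental observation
            return gaussian_cross_entropy(mu_pred, sigma_pred, mu_obs, sigma_obs)

        def simulated_annealing(x0, steps=5000, temp0=1.0, seed=0):
            rng = random.Random(seed)
            x, f = x0, objective(x0)
            for k in range(steps):
                temp = temp0 * (1.0 - k / steps) + 1e-6
                x_new = min(max(x + rng.gauss(0.0, 0.2), 0.0), 5.0)   # bounded design space
                f_new = objective(x_new)
                if f_new < f or rng.random() < math.exp(-(f_new - f) / temp):
                    x, f = x_new, f_new
            return x, f

        x_opt, f_opt = simulated_annealing(x0=2.5)
        print(f"Selected design input x = {x_opt:.3f}, cross entropy = {f_opt:.3f}")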

  1. SPR Hydrostatic Column Model Verification and Validation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bettin, Giorgia; Lord, David; Rudeen, David Keith

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for an extended period of time. This report describes the HCM model, its functional requirements, the model structure and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and it is implemented as intended. The cavern BH101 long term nitrogen test was used to validate the model which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.

  2. Validation metrics for turbulent plasma transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, C., E-mail: chholland@ucsd.edu

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations are reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. The utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak [J. L. Luxon, Nucl. Fusion 42, 614 (2002)], as part of a multi-year transport model validation activity.
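
    A minimal sketch of a validation metric in this spirit, though not the specific metric defined in the paper: each simulation-experiment difference is normalized by the combined uncertainty of the two values, and the normalized deviations are averaged over the compared channels. The channel names and numbers are hypothetical.

        # Sketch of an uncertainty-weighted validation metric in the spirit of the abstract
        # (not the specific metric defined in the paper): for each measured quantity, the
        # simulation-experiment difference is normalized by the combined uncertainty, and
        # the normalized deviations are averaged over all compared channels.
        import math

        def normalized_deviation(sim, sim_err, exp, exp_err):
            return abs(sim - exp) / math.sqrt(sim_err**2 + exp_err**2)

        # Hypothetical comparisons: (simulated value, sim uncertainty, measured value, meas uncertainty)
        channels = {
            "ion heat flux":        (3.1, 0.6, 2.4, 0.5),
            "electron heat flux":   (1.8, 0.4, 2.0, 0.4),
            "density fluctuations": (0.9, 0.2, 1.3, 0.3),
        }

        deviations = {name: normalized_deviation(*vals) for name, vals in channels.items()}
        composite = sum(deviations.values()) / len(deviations)

        for name, d in deviations.items():
            print(f"{name:22s}: {d:.2f} sigma")
        print(f"composite metric: {composite:.2f} (smaller is better)")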

  3. Chemical Kinetics, Heat Transfer, and Sensor Dynamics Revisited in a Simple Experiment

    ERIC Educational Resources Information Center

    Sad, Maria E.; Sad, Mario R.; Castro, Alberto A.; Garetto, Teresita F.

    2008-01-01

    A simple experiment about thermal effects in chemical reactors is described, which can be used to illustrate chemical reactor models, the determination and validation of their parameters, and some simple principles of heat transfer and sensor dynamics. It is based on the exothermic reaction between aqueous solutions of sodium thiosulfate and…

  4. Why Multicollinearity Matters: A Reexamination of Relations Between Self-Efficacy, Self-Concept, and Achievement

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Dowson, Martin; Pietsch, James; Walker, Richard

    2004-01-01

    Multicollinearity is a well-known general problem, but it also seriously threatens valid interpretations in structural equation models. Illustrating this problem, J. Pietsch, R. Walker, and E. Chapman (2003) found paths leading to achievement were apparently much larger for self-efficacy (.55) than self-concept (-.05), suggesting--erroneously, as…

  5. A model-based design and validation approach with OMEGA-UML and the IF toolset

    NASA Astrophysics Data System (ADS)

    Ben-hafaiedh, Imene; Constant, Olivier; Graf, Susanne; Robbana, Riadh

    2009-03-01

    Intelligent embedded systems such as autonomous robots and other industrial systems are becoming increasingly heterogeneous with respect to the platforms on which they are implemented, and their software architectures are thus more complex to design and analyse. In this context, it is important to have well-defined design methodologies supported by (1) high-level design concepts for mastering design complexity, (2) concepts for expressing non-functional requirements and (3) analysis tools for verifying, or invalidating, that the system under development will be able to conform to its requirements. We illustrate such an approach for the design of complex embedded systems using a small case study as a running example. We briefly present the important concepts of the OMEGA-RT UML profile, show how we use this profile in a modelling approach, and explain how these concepts are used in the IFx verification toolbox to integrate validation into the design flow and make scalable verification possible.

  6. Animal models of addiction

    PubMed Central

    Spanagel, Rainer

    2017-01-01

    In recent years, animal models in psychiatric research have been criticized for their limited translational value to the clinical situation. Failures in clinical trials have thus often been attributed to the lack of predictive power of preclinical animal models. Here, I argue that animal models of voluntary drug intake—under nonoperant and operant conditions—and addiction models based on the Diagnostic and Statistical Manual of Mental Disorders are crucial and informative tools for the identification of pathological mechanisms, target identification, and drug development. These models provide excellent face validity, and it is assumed that the neurochemical and neuroanatomical substrates involved in drug-intake behavior are similar in laboratory rodents and humans. Consequently, animal models of drug consumption and addiction provide predictive validity. This predictive power is best illustrated in alcohol research, in which three approved medications—acamprosate, naltrexone, and nalmefene—were developed by means of animal models and then successfully translated into the clinical situation. PMID:29302222

  7. Optimization study on multiple train formation scheme of urban rail transit

    NASA Astrophysics Data System (ADS)

    Xia, Xiaomei; Ding, Yong; Wen, Xin

    2018-05-01

    The new organization method represented by the mixed operation of multi-marshalling trains can adapt to the characteristics of unevenly distributed passenger flow, but research on this aspect is still limited. This paper introduces a passenger sharing rate and a congestion penalty coefficient for different train formations. On this basis, an optimization model is established with the minimization of passenger cost and operation cost as objectives, and with operation frequency and passenger demand as constraints. The ideal point method is used to solve the model. Compared with the fixed marshalling operation model, the overall costs of the proposed scheme are 9.24% and 4.43% lower, respectively. This result not only validates the model but also illustrates the advantages of the multiple train formation scheme.
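
    The ideal point step can be sketched as follows: each candidate operation plan is scored on the two objectives, the objectives are normalized over the candidate set, and the plan closest to the ideal point is selected. The candidate plans and cost values below are hypothetical placeholders, not data from the paper.

        # Sketch of the ideal point method for the bi-objective model above: each
        # candidate operation plan has a passenger cost and an operation cost, both
        # objectives are normalized over the candidate set, and the plan closest to
        # the ideal point (the per-objective minima) is selected. Candidate values
        # are hypothetical placeholders.
        import math

        # (plan name, passenger cost, operation cost) in arbitrary cost units
        candidates = [
            ("4-car only",          118.0, 74.0),
            ("6-car only",          104.0, 88.0),
            ("mixed 4/6-car",        99.0, 81.0),
            ("mixed 4/6/8-car",      95.0, 90.0),
        ]

        p_min, p_max = min(c[1] for c in candidates), max(c[1] for c in candidates)
        o_min, o_max = min(c[2] for c in candidates), max(c[2] for c in candidates)

        def distance_to_ideal(passenger_cost, operation_cost):
            # normalize each objective to [0, 1]; the ideal point is then (0, 0)
            p = (passenger_cost - p_min) / (p_max - p_min)
            o = (operation_cost - o_min) / (o_max - o_min)
            return math.hypot(p, o)

        best = min(candidates, key=lambda c: distance_to_ideal(c[1], c[2]))
        print("selected formation scheme:", best[0])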

  8. Approximate probabilistic cellular automata for the dynamics of single-species populations under discrete logisticlike growth with and without weak Allee effects.

    PubMed

    Mendonça, J Ricardo G; Gevorgyan, Yeva

    2017-05-01

    We investigate one-dimensional elementary probabilistic cellular automata (PCA) whose dynamics in first-order mean-field approximation yields discrete logisticlike growth models for a single-species unstructured population with nonoverlapping generations. Beginning with a general six-parameter model, we find constraints on the transition probabilities of the PCA that guarantee that the ensuing approximations make sense in terms of population dynamics and classify the valid combinations thereof. Several possible models display a negative cubic term that can be interpreted as a weak Allee factor. We also investigate the conditions under which a one-parameter PCA derived from the more general six-parameter model can generate valid population growth dynamics. Numerical simulations illustrate the behavior of some of the PCA found.

  9. Validation of Vehicle Panel/Equipment Response from Diffuse Acoustic Field Excitation Using Spatially Correlated Transfer Function Approach

    NASA Technical Reports Server (NTRS)

    Smith, Andrew; LaVerde, Bruce; Fulcher, Clay; Hunt, Ron

    2012-01-01

    An approach for predicting the vibration, strain, and force responses of a flight-like vehicle panel assembly to acoustic pressures is presented. Important validation for the approach is provided by comparison to ground test measurements in a reverberant chamber. The test article and the corresponding analytical model were assembled in several configurations to demonstrate the suitability of the approach for response predictions when the vehicle panel is integrated with equipment. Critical choices in the analysis necessary for convergence of the predicted and measured responses are illustrated through sensitivity studies. The methodology includes representation of spatial correlation of the pressure field over the panel surface. Therefore, it is possible to demonstrate the effects of hydrodynamic coincidence in the response. The sensitivity to pressure patch density clearly illustrates the onset of coincidence effects on the panel response predictions.

  10. Bayesian cross-validation for model evaluation and selection, with application to the North American Breeding Bird Survey

    USGS Publications Warehouse

    Link, William; Sauer, John R.

    2016-01-01

    The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion. We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
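
    The WAIC mentioned above can be computed from an S x N matrix of pointwise log-likelihoods (S posterior draws, N observations) using its standard definition, WAIC = -2(lppd - p_WAIC); the sketch below uses a synthetic matrix in place of draws from a fitted hierarchical model.

        # Sketch of the Watanabe-Akaike information criterion (WAIC) from an S x N matrix
        # of pointwise log-likelihoods (S posterior draws, N observations), following the
        # standard definition: WAIC = -2 * (lppd - p_waic). The log-likelihood matrix here
        # is synthetic; in practice it would come from the fitted hierarchical model.
        import numpy as np
        from scipy.special import logsumexp

        rng = np.random.default_rng(1)
        S, N = 2000, 150
        log_lik = rng.normal(loc=-1.2, scale=0.3, size=(S, N))   # placeholder draws

        # log pointwise predictive density: log of the posterior-mean likelihood per observation
        lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))

        # effective number of parameters: posterior variance of the log-likelihood per observation
        p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))

        waic = -2.0 * (lppd - p_waic)
        print(f"lppd = {lppd:.1f}, p_waic = {p_waic:.1f}, WAIC = {waic:.1f}")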

  11. An externally validated model for predicting long-term survival after exercise treadmill testing in patients with suspected coronary artery disease and a normal electrocardiogram.

    PubMed

    Lauer, Michael S; Pothier, Claire E; Magid, David J; Smith, S Scott; Kattan, Michael W

    2007-12-18

    The exercise treadmill test is recommended for risk stratification among patients with intermediate to high pretest probability of coronary artery disease. Posttest risk stratification is based on the Duke treadmill score, which includes only functional capacity and measures of ischemia. To develop and externally validate a post-treadmill-test multivariable mortality prediction rule for adults with suspected coronary artery disease and normal electrocardiograms. Prospective cohort study conducted from September 1990 to May 2004. Exercise treadmill laboratories in a major medical center (derivation set) and a separate HMO (validation set). 33,268 patients in the derivation set and 5821 in the validation set. All patients had normal electrocardiograms and were referred for evaluation of suspected coronary artery disease. The derivation set patients were followed for a median of 6.2 years. A nomogram-illustrated model was derived on the basis of variables easily obtained in the stress laboratory, including age; sex; history of smoking, hypertension, diabetes, or typical angina; and exercise findings of functional capacity, ST-segment changes, symptoms, heart rate recovery, and frequent ventricular ectopy in recovery. The derivation data set included 1619 deaths. Although both the Duke treadmill score and our nomogram-illustrated model were significantly associated with death (P < 0.001), the nomogram was better at discrimination (concordance index for right-censored data, 0.83 vs. 0.73) and calibration. We reclassified many patients with intermediate- to high-risk Duke treadmill scores as low risk on the basis of the nomogram. The model also predicted 3-year mortality rates well in the validation set: Based on an optimal cut-point for a negative predictive value of 0.97, derivation and validation rates were, respectively, 1.7% and 2.5% below the cut-point and 25% and 29% above the cut-point. Blood test-based measures or left ventricular ejection fraction were not included. The nomogram can be applied only to patients with a normal electrocardiogram. Clinical utility remains to be tested. A simple nomogram based on easily obtained pretest and exercise test variables predicted all-cause mortality in adults with suspected coronary artery disease and normal electrocardiograms.
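
    The concordance index for right-censored data reported above can be illustrated with the standard pairwise definition: among usable pairs (the patient with the shorter follow-up must have died), count how often that patient has the higher predicted risk, with ties counted as one half. The implementation and data below are illustrative, not those of the study.

        # Sketch of a concordance index (c-statistic) for right-censored survival data:
        # among all usable pairs (the patient with the shorter follow-up must have died),
        # count how often the model assigns that patient the higher predicted risk.
        # Ties in risk count as 1/2. Data are hypothetical.
        def concordance_index(times, events, risks):
            concordant, usable = 0.0, 0
            n = len(times)
            for i in range(n):
                for j in range(n):
                    if times[i] < times[j] and events[i] == 1:   # i died before j's follow-up ended
                        usable += 1
                        if risks[i] > risks[j]:
                            concordant += 1.0
                        elif risks[i] == risks[j]:
                            concordant += 0.5
            return concordant / usable

        # follow-up time (years), event (1 = died, 0 = censored), predicted risk score
        times  = [1.2, 3.5, 2.0, 5.1, 0.8, 4.4]
        events = [1,   0,   1,   0,   1,   1  ]
        risks  = [0.9, 0.2, 0.6, 0.1, 0.8, 0.4]

        print(f"c-index = {concordance_index(times, events, risks):.2f}")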

  12. Conventional Energy and Macronutrient Variables Distort the Accuracy of Children’s Dietary Reports: Illustrative Data from a Validation Study of Effect of Order Prompts

    PubMed Central

    Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.

    2008-01-01

    Objective Validation-study data are used to illustrate that conventional energy and macronutrient (protein, carbohydrate, fat) variables, which disregard accuracy of reported items and amounts, misrepresent reporting accuracy. Reporting-error-sensitive variables are proposed which classify reported items as matches or intrusions, and reported amounts as corresponding or overreported. Methods 58 girls and 63 boys were each observed eating school meals on 2 days separated by ≥4 weeks, and interviewed the morning after each observation day. One interview per child had forward-order (morning-to-evening) prompts; one had reverse-order prompts. Original food-item-level analyses found a sex-x-order prompt interaction for omission rates. Current analyses compared reference (observed) and reported information transformed to energy and macronutrients. Results Using conventional variables, reported amounts were less than reference amounts (ps<0.001; paired t-tests); report rates were higher for the first than second interview for energy, protein, and carbohydrate (ps≤0.049; mixed models). Using reporting-error-sensitive variables, correspondence rates were higher for girls with forward- but boys with reverse-order prompts (ps≤0.041; mixed models); inflation ratios were lower with reverse- than forward-order prompts for energy, carbohydrate, and fat (ps≤0.045; mixed models). Conclusions Conventional variables overestimated reporting accuracy and masked order prompt and sex effects. Reporting-error-sensitive variables are recommended when assessing accuracy for energy and macronutrients in validation studies. PMID:16959308

  13. Transport Phenomena During Equiaxed Solidification of Alloys

    NASA Technical Reports Server (NTRS)

    Beckermann, C.; deGroh, H. C., III

    1997-01-01

    Recent progress in modeling of transport phenomena during dendritic alloy solidification is reviewed. Starting from the basic theorems of volume averaging, a general multiphase modeling framework is outlined. This framework allows for the incorporation of a variety of microscale phenomena in the macroscopic transport equations. For the case of diffusion dominated solidification, a simplified set of model equations is examined in detail and validated through comparisons with numerous experimental data for both columnar and equiaxed dendritic growth. This provides a critical assessment of the various model assumptions. Models that include melt flow and solid phase transport are also discussed, although their validation is still at an early stage. Several numerical results are presented that illustrate some of the profound effects of convective transport on the final compositional and structural characteristics of a solidified part. Important issues that deserve continuing attention are identified.

  14. Group-level self-definition and self-investment: a hierarchical (multicomponent) model of in-group identification.

    PubMed

    Leach, Colin Wayne; van Zomeren, Martijn; Zebel, Sven; Vliek, Michael L W; Pennekamp, Sjoerd F; Doosje, Bertjan; Ouwerkerk, Jaap W; Spears, Russell

    2008-07-01

    Recent research shows individuals' identification with in-groups to be psychologically important and socially consequential. However, there is little agreement about how identification should be conceptualized or measured. On the basis of previous work, the authors identified 5 specific components of in-group identification and offered a hierarchical 2-dimensional model within which these components are organized. Studies 1 and 2 used confirmatory factor analysis to validate the proposed model of self-definition (individual self-stereotyping, in-group homogeneity) and self-investment (solidarity, satisfaction, and centrality) dimensions, across 3 different group identities. Studies 3 and 4 demonstrated the construct validity of the 5 components by examining their (concurrent) correlations with established measures of in-group identification. Studies 5-7 demonstrated the predictive and discriminant validity of the 5 components by examining their (prospective) prediction of individuals' orientation to, and emotions about, real intergroup relations. Together, these studies illustrate the conceptual and empirical value of a hierarchical multicomponent model of in-group identification.

  15. Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Hinkley, Jeffrey A.

    2003-01-01

    The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.

  16. Transport and dispersion of pollutants in surface impoundments: a finite difference model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, G.T.

    1980-07-01

    A surface impoundment model by finite-difference (SIMFD) has been developed. SIMFD computes the flow rate, velocity field, and the concentration distribution of pollutants in surface impoundments with any number of islands located within the region of interest. Theoretical derivations and the numerical algorithm are described in detail. Instructions for the application of SIMFD and listings of the FORTRAN IV source program are provided. Two sample problems are given to illustrate the application and validity of the model.
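
    In the spirit of such a finite-difference transport model, though not reproducing SIMFD's actual algorithm, the sketch below advances a 2-D pollutant concentration field with first-order upwind advection and central-difference diffusion on a uniform grid; the grid size, velocities and coefficients are hypothetical.

        # Sketch of an explicit finite-difference update for 2-D advection-diffusion of a
        # pollutant concentration field: first-order upwind advection plus central-difference
        # diffusion on a uniform grid. Parameters are hypothetical.
        import numpy as np

        nx, ny = 50, 40
        dx = dy = 1.0            # m
        dt = 0.05                # s, chosen small enough for explicit stability
        u, v = 0.4, 0.1          # m/s, uniform velocity field (SIMFD computes this field)
        D = 0.5                  # m^2/s, dispersion coefficient

        c = np.zeros((nx, ny))
        c[5:10, 15:25] = 100.0   # initial pollutant patch (e.g., mg/L)

        def step(c):
            cn = c.copy()
            # interior points only; boundaries held at their previous values for simplicity
            adv_x = u * (c[1:-1, 1:-1] - c[:-2, 1:-1]) / dx      # upwind, u > 0
            adv_y = v * (c[1:-1, 1:-1] - c[1:-1, :-2]) / dy      # upwind, v > 0
            diff = D * ((c[2:, 1:-1] - 2 * c[1:-1, 1:-1] + c[:-2, 1:-1]) / dx**2
                        + (c[1:-1, 2:] - 2 * c[1:-1, 1:-1] + c[1:-1, :-2]) / dy**2)
            cn[1:-1, 1:-1] = c[1:-1, 1:-1] + dt * (diff - adv_x - adv_y)
            return cn

        for _ in range(2000):    # 100 s of simulated time
            c = step(c)
        print(f"peak concentration after 100 s: {c.max():.2f}")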

  17. Why do verification and validation?

    DOE PAGES

    Hu, Kenneth T.; Paez, Thomas L.

    2016-02-19

    In this discussion paper, we explore different ways to assess the value of verification and validation (V&V) of engineering models. We first present a literature review on the value of V&V and then use value chains and decision trees to show how value can be assessed from a decision maker's perspective. In this context, the value is what the decision maker is willing to pay for V&V analysis with the understanding that the V&V results are uncertain. Finally, the 2014 Sandia V&V Challenge Workshop is used to illustrate these ideas.

  18. Expert system for web based collaborative CAE

    NASA Astrophysics Data System (ADS)

    Hou, Liang; Lin, Zusheng

    2006-11-01

    An expert system for web-based collaborative CAE was developed based on knowledge engineering, a relational database and commercial FEA (finite element analysis) software. The architecture of the system is illustrated. In this system, the experts' experience, theories, typical examples and other related knowledge used in the FEA pre-processing stage were categorized into analysis-process knowledge and object knowledge. The integrated knowledge model, based on object-oriented and rule-based methods, is then described, and the integrated reasoning process based on CBR (case-based reasoning) and rule-based reasoning is presented. Finally, the analysis process of this expert system in a web-based CAE application is illustrated, and an analysis example of a machine tool column is presented to demonstrate the validity of the system.

  19. Evaluation, modification and validation of a set of asthma illustrations in children with chronic asthma in the emergency department

    PubMed Central

    Tulloch, Joanie; Vaillancourt, Régis; Irwin, Danica; Pascuet, Elena

    2012-01-01

    OBJECTIVES: To test, modify and validate a set of illustrations depicting different levels of asthma control and common asthma triggers in pediatric patients (and/or their parents) with chronic asthma who presented to the emergency department at the Children’s Hospital of Eastern Ontario, Ottawa, Ontario. METHODS: Semistructured interviews using guessability and translucency questionnaires tested the comprehensibility of 15 illustrations depicting different levels of asthma control and common asthma triggers in children 10 to 17 years of age, and parents of children one to nine years of age who presented to the emergency department. Illustrations with an overall guessability score <80% and/or translucency median score <6, were reviewed by the study team and modified by the study’s graphic designer. Modifications were made based on key concepts identified by study participants. RESULTS: A total of 80 patients were interviewed. Seven of the original 15 illustrations (47%) required modifications to obtain the prespecified guessability and translucency goals. CONCLUSION: The authors successfully developed, modified and validated a set of 15 illustrations representing different levels of asthma control and common asthma triggers. PRACTICE IMPLICATIONS: These illustrations will be incorporated into a child-friendly asthma action plan that enables the child to be involved in his or her asthma self-management care. PMID:22332128

  20. Evaluation, modification and validation of a set of asthma illustrations in children with chronic asthma in the emergency department.

    PubMed

    Tulloch, Joanie; Irwin, Danica; Pascuet, Elena; Vaillancourt, Régis

    2012-01-01

    To test, modify and validate a set of illustrations depicting different levels of asthma control and common asthma triggers in pediatric patients (and/or their parents) with chronic asthma who presented to the emergency department at the Children's Hospital of Eastern Ontario, Ottawa, Ontario. Semistructured interviews using guessability and translucency questionnaires tested the comprehensibility of 15 illustrations depicting different levels of asthma control and common asthma triggers in children 10 to 17 years of age, and parents of children one to nine years of age who presented to the emergency department. Illustrations with an overall guessability score <80% and/or translucency median score <6, were reviewed by the study team and modified by the study's graphic designer. Modifications were made based on key concepts identified by study participants. A total of 80 patients were interviewed. Seven of the original 15 illustrations (47%) required modifications to obtain the prespecified guessability and translucency goals. The authors successfully developed, modified and validated a set of 15 illustrations representing different levels of asthma control and common asthma triggers. These illustrations will be incorporated into a child-friendly asthma action plan that enables the child to be involved in his or her asthma self-management care.

  1. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    PubMed

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following a description of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross-model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55% of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6% of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.

  2. Man-machine analysis of translation and work tasks of Skylab films

    NASA Technical Reports Server (NTRS)

    Hosler, W. W.; Boelter, J. G.; Morrow, J. R., Jr.; Jackson, J. T.

    1979-01-01

    An objective approach to determine the concurrent validity of computer-graphic models is real-time film analysis. This technique was illustrated through the procedures and results obtained in an evaluation of translation of Skylab mission astronauts. The quantitative analysis was facilitated by the use of an electronic film analyzer, a minicomputer, and specifically supportive software. The uses of this technique for human factors research are: (1) validation of theoretical operator models; (2) biokinetic analysis; (3) objective data evaluation; (4) dynamic anthropometry; (5) empirical time-line analysis; and (6) consideration of human variability. Computer-assisted techniques for interface design and evaluation have the potential for improving the capability for human factors engineering.

  3. Validation of individual and aggregate global flood hazard models for two major floods in Africa.

    NASA Astrophysics Data System (ADS)

    Trigg, M.; Bernhofen, M.; Whyman, C.

    2017-12-01

    A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent requirement to undertake more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open access data. We compare the individual and aggregated flood extent output from the six global models and test these against two major floods on the African continent within the last decade, namely severe flooding on the Niger River in Nigeria in 2012, and on the Zambezi River in Mozambique in 2007. We test whether aggregating different numbers and combinations of models increases model fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations and also that of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open access validation flood events, with associated observational data and descriptions that provide a standard set of tests across different climates and hydraulic conditions.
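
    Comparing a modelled flood extent with an observed one usually comes down to a binary-map fit statistic; a commonly used choice, F = A/(A + B + C) with A the cells wet in both maps, B wet in the model only and C wet in the observation only, is sketched below together with a simple majority-vote aggregation of several model outputs. This is one standard measure and not necessarily the one used in the study; the maps are synthetic.

        # Sketch of a commonly used flood-extent fit statistic for comparing a binary
        # modelled flood map against a binary observed map: F = A / (A + B + C), where
        # A = wet in both, B = wet in model only, C = wet in observation only.
        import numpy as np

        def flood_fit(model_wet, observed_wet):
            a = np.sum(model_wet & observed_wet)      # hits
            b = np.sum(model_wet & ~observed_wet)     # false alarms
            c = np.sum(~model_wet & observed_wet)     # misses
            return a / (a + b + c)

        rng = np.random.default_rng(42)
        observed = rng.random((200, 200)) < 0.30                       # synthetic observed extent
        model    = observed ^ (rng.random((200, 200)) < 0.05)          # perturbed model extent

        single = flood_fit(model, observed)

        # "Aggregated" extent from several models: a cell is wet if most models agree
        models = [observed ^ (rng.random((200, 200)) < 0.05) for _ in range(5)]
        aggregate = np.mean(models, axis=0) >= 0.5
        print(f"single model F = {single:.2f}, aggregated F = {flood_fit(aggregate, observed):.2f}")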

  4. What's wrong with my mouse cage? Methodological considerations for modeling lifestyle factors and gene-environment interactions in mice.

    PubMed

    Mo, Christina; Renoir, Thibault; Hannan, Anthony J

    2016-05-30

    The mechanistic understanding of lifestyle contributions to disease has been largely driven by work in laboratory rodent models using environmental interventions. These intervention studies employ an array of methodologies and sometimes yield unclear collective conclusions, hampering clinical interpretation. Here we discuss environmental enrichment, exercise and stress interventions to illustrate how different protocols can affect the interpretations of environmental factors in disease. We use Huntington's disease (HD) as an example because its mouse models exhibit excellent validity and HD was the first genetic animal model in which environmental stimulation was found to be beneficial. We make a number of observations and recommendations. Firstly, environmental enrichment and voluntary exercise generally show benefits across laboratories and mouse models. However, the extent to which these environmental interventions have beneficial effects depends on parameters such as the structural complexity of the cage in the case of enrichment, the timing of the intervention and the nature of the control conditions. In particular, clinical interpretations should consider deprived control living conditions and the ethological relevance of the enrichment. Secondly, stress can have negative effects on the phenotype in mouse models of HD and other brain disorders. When modeling stress, the effects of more than one type of experimental stressor should be investigated due to the heterogeneity and complexity of stress responses. With stress in particular, but ideally in all studies, both sexes should be used and the randomized group sizes need to be sufficiently powered to detect any sex effects. Opportunities for clinical translation will be guided by the 'environmental construct validity' of the preclinical data, including the culmination of complementary protocols across multiple animal models. Environmental interventions in mouse models of HD provide illustrative examples of how valid preclinical studies can lead to conclusions relevant to clinical populations.

  5. Coping with the Crickets: A Fusion Autoethnography of Silence, Schooling, and the Continuum of Biracial Identity Formation

    ERIC Educational Resources Information Center

    Mawhinney, Lynnette; Petchauer, Emery Marc

    2013-01-01

    This study explores biracial identity development in the adolescent years through fusion autoethnography. Using an ecological model of biracial identity development, this study illustrates how family, peers, and school curricula validate and reject racial self-presentations. We pay specific attention to the different forms of silence (i.e.…

  6. Simple additive effects are rare: A quantitative review of plant biomass and soil process responses to combined manipulations of CO2 and temperature

    USDA-ARS?s Scientific Manuscript database

    In recent years, increased awareness of the potential interactions between rising atmospheric CO2 concentrations ([CO2]) and temperature has illustrated the importance of multi-factorial ecosystem manipulation experiments for validating Earth System models. To address the urgent need for increased u...

  7. TUTORIAL: Validating biorobotic models

    NASA Astrophysics Data System (ADS)

    Webb, Barbara

    2006-09-01

    Some issues in neuroscience can be addressed by building robot models of biological sensorimotor systems. What we can conclude from building models or simulations, however, is determined by a number of factors in addition to the central hypothesis we intend to test. These include the way in which the hypothesis is represented and implemented in simulation, how the simulation output is interpreted, how it is compared to the behaviour of the biological system, and the conditions under which it is tested. These issues will be illustrated by discussing a series of robot models of cricket phonotaxis behaviour.

  8. A single-vendor and a single-buyer integrated inventory model with ordering cost reduction dependent on lead time

    NASA Astrophysics Data System (ADS)

    Vijayashree, M.; Uthayakumar, R.

    2017-09-01

    Lead time is one of the major limits that affect planning at every stage of the supply chain system. In this paper, we study a continuous review inventory model for a two-echelon supply chain consisting of a single vendor and a single buyer, in which ordering cost reduction depends on lead time. Ordering cost reduction is the central feature of the proposed model. The main contribution of this study is that the integrated total cost of the vendor-buyer system is analyzed under two different types of lead-time-dependent ordering cost reduction, linear and logarithmic. For each case, an effective algorithmic solution procedure is developed to determine the optimal order quantity, ordering cost, lead time and number of deliveries from the vendor to the buyer per production run, so that the integrated total cost is minimized. The mathematical model is solved analytically by minimizing the integrated total cost, and numerical examples, computed in Matlab, are given to validate the model and illustrate the results; graphical representations and computer flowcharts are also provided for each model. A sensitivity analysis with respect to the major parameters of the system is included. The results reveal that the proposed integrated inventory model is well suited to supply chain manufacturing systems.

  9. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753

  10. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    PubMed

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W

    2016-01-01

    External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
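
    As a rough, simulated illustration of the quantities discussed in these two records, the sketch below fits a logistic model on a development set, computes the c-statistic at external validation, and forms a simple permutation null for the change in c-statistic by reshuffling patients between the two sets; this is only a schematic of the general idea, not the specific test or benchmark framework evaluated by the authors, and all data are simulated.

        # Schematic comparison of discrimination at external validation (simulated data).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n, p = 2000, 4
        def simulate(n, beta, sd_x):                   # case-mix heterogeneity via predictor spread sd_x
            X = rng.normal(0, sd_x, size=(n, p))
            y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))
            return X, y

        beta = np.array([0.8, 0.5, 0.3, 0.2])
        X_dev, y_dev = simulate(n, beta, 1.0)          # development setting
        X_val, y_val = simulate(n, 0.7 * beta, 1.3)    # weaker effects, wider case-mix

        model = LogisticRegression().fit(X_dev, y_dev)
        lp_dev, lp_val = model.decision_function(X_dev), model.decision_function(X_val)
        c_dev, c_val = roc_auc_score(y_dev, lp_dev), roc_auc_score(y_val, lp_val)
        obs_diff = c_dev - c_val

        # Permutation: reshuffle patients between the two sets and recompute the difference.
        lp_all, y_all = np.concatenate([lp_dev, lp_val]), np.concatenate([y_dev, y_val])
        diffs = []
        for _ in range(500):
            idx = rng.permutation(len(y_all))
            a, b = idx[:n], idx[n:]
            diffs.append(roc_auc_score(y_all[a], lp_all[a]) - roc_auc_score(y_all[b], lp_all[b]))
        p_value = np.mean(np.abs(diffs) >= abs(obs_diff))
        print(f"c_dev={c_dev:.3f}  c_val={c_val:.3f}  permutation p={p_value:.3f}")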

  11. Validation of a heteroscedastic hazards regression model.

    PubMed

    Wu, Hong-Dar Isaac; Hsieh, Fushing; Chen, Chen-Hsin

    2002-03-01

    A Cox-type regression model accommodating heteroscedasticity, with a power factor of the baseline cumulative hazard, is investigated for analyzing data with crossing hazards behavior. Since the approach of partial likelihood cannot eliminate the baseline hazard, an overidentified estimating equation (OEE) approach is introduced in the estimation procedure. Its by-product, a model checking statistic, is presented to test for the overall adequacy of the heteroscedastic model. Further, under the heteroscedastic model setting, we propose two statistics to test the proportional hazards assumption. Implementation of this model is illustrated in a data analysis of a cancer clinical trial.

  12. Toxicodynamic analysis of the combined cholinesterase inhibition by paraoxon and methamidophos in human whole blood

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosgra, Sieto; Eijkeren, Jan C.H. van; Schans, Marcel J. van der

    2009-04-01

    Theoretical work has shown that the isobole method is not generally valid as a method for testing the absence or presence of interaction (in the biochemical sense) between chemicals. The present study illustrates how interaction can be tested by fitting a toxicodynamic model to the results of a mixture experiment. The inhibition of cholinesterases (ChE) in human whole blood by various dose combinations of paraoxon and methamidophos was measured in vitro. A toxicodynamic model describing the processes related to both OPs in inhibiting AChE activity was developed, and fit to the observed activities. This model, not containing any interaction between the two OPs, described the results from the mixture experiment well, and it was concluded that the OPs did not interact in the whole blood samples. While this approach of toxicodynamic modeling is the most appropriate method for predicting combined effects, it is not rapidly applicable. Therefore, we illustrate how toxicodynamic modeling can be used to explore under which conditions dose addition would give an acceptable approximation of the combined effects from various chemicals. In the specific case of paraoxon and methamidophos in whole blood samples, it was found that dose addition gave a reasonably accurate prediction of the combined effects, despite considerable difference in some of their rate constants, and mildly non-parallel dose-response curves. Other possibilities of validating dose-addition using toxicodynamic modeling are briefly discussed.

  13. Automated determination of fibrillar structures by simultaneous model building and fiber diffraction refinement.

    PubMed

    Potrzebowski, Wojciech; André, Ingemar

    2015-07-01

    For highly oriented fibrillar molecules, three-dimensional structures can often be determined from X-ray fiber diffraction data. However, because of limited information content, structure determination and validation can be challenging. We demonstrate that automated structure determination of protein fibers can be achieved by guiding the building of macromolecular models with fiber diffraction data. We illustrate the power of our approach by determining the structures of six bacteriophage viruses de novo using fiber diffraction data alone and together with solid-state NMR data. Furthermore, we demonstrate the feasibility of molecular replacement from monomeric and fibrillar templates by solving the structure of a plant virus using homology modeling and protein-protein docking. The generated models explain the experimental data to the same degree as deposited reference structures but with improved structural quality. We also developed a cross-validation method for model selection. The results highlight the power of fiber diffraction data as structural constraints.

  14. Modelling a hydropower plant with reservoir with the micropower optimisation model (HOMER)

    NASA Astrophysics Data System (ADS)

    Canales, Fausto A.; Beluco, Alexandre; Mendes, Carlos André B.

    2017-08-01

    Hydropower with water accumulation is an interesting option to consider in hybrid systems, because it helps in dealing with the intermittent character of renewable energy resources. The software HOMER (version Legacy) is extensively used in research works related to these systems, but it does not include a specific option for modelling hydro with reservoir. This paper describes a method for modelling a hydropower plant with reservoir with HOMER by adapting an existing procedure used for modelling pumped storage. An example with two scenarios in southern Brazil is presented to illustrate and validate the method explained in this paper. The results validate the method by showing a direct correspondence between an equivalent battery and the reservoir. The refill of the reservoir, its power output as a function of the flow rate and the installed hydropower capacity are effectively simulated, indicating that an adequate representation of a hydropower plant with reservoir is possible with HOMER.
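
    The reservoir-battery correspondence can be made concrete with a back-of-the-envelope conversion from stored volume and head to an equivalent storage capacity in kWh; the volume, head, efficiency and flow values below are invented for illustration and are not taken from the Brazilian scenarios in the paper.

        # Rough energy equivalence between a reservoir and a battery (illustrative values).
        rho, g = 1000.0, 9.81          # water density [kg/m^3], gravity [m/s^2]
        volume = 2.0e6                 # usable reservoir volume [m^3] (assumed)
        head = 25.0                    # net head [m] (assumed)
        eta = 0.85                     # turbine/generator efficiency (assumed)

        energy_joules = rho * g * volume * head * eta
        energy_kwh = energy_joules / 3.6e6            # 1 kWh = 3.6e6 J
        print(f"equivalent storage capacity ~ {energy_kwh:,.0f} kWh")

        # Power output as a function of flow rate, as the reservoir discharges:
        def hydro_power_kw(flow_m3_s, head_m=head, eff=eta):
            return rho * g * flow_m3_s * head_m * eff / 1000.0
        print(f"power at 10 m^3/s ~ {hydro_power_kw(10.0):,.0f} kW")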

  15. Selecting the "Best" Factor Structure and Moving Measurement Validation Forward: An Illustration.

    PubMed

    Schmitt, Thomas A; Sass, Daniel A; Chappelle, Wayne; Thompson, William

    2018-04-09

    Despite the broad literature base on factor analysis best practices, research seeking to evaluate a measure's psychometric properties frequently fails to consider or follow these recommendations. This leads to incorrect factor structures, numerous and often overly complex competing factor models and, perhaps most harmful, biased model results. Our goal is to demonstrate a practical and actionable process for factor analysis through (a) an overview of six statistical and psychometric issues and approaches to be aware of, investigate, and report when engaging in factor structure validation, along with a flowchart for recommended procedures to understand latent factor structures; (b) demonstrating these issues to provide a summary of the updated Posttraumatic Stress Disorder Checklist (PCL-5) factor models and a rationale for validation; and (c) conducting a comprehensive statistical and psychometric validation of the PCL-5 factor structure to demonstrate all the issues we described earlier. Considering previous research, the PCL-5 was evaluated using a sample of 1,403 U.S. Air Force remotely piloted aircraft operators with high levels of battlefield exposure. Previously proposed PCL-5 factor structures were not supported by the data, but instead a bifactor model is arguably more statistically appropriate.

  16. Numerical studies and metric development for validation of magnetohydrodynamic models on the HIT-SI experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, C., E-mail: hansec@uw.edu; Columbia University, New York, New York 10027; Victor, B.

    We present application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
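
    The Biorthogonal Decomposition is essentially a singular value decomposition of the probes-by-time data matrix, and a mode-structure agreement metric can be formed from correlations between corresponding spatial modes. The sketch below shows this on synthetic data for a 192-probe array; it is a generic illustration and does not reproduce the three specific metrics defined in the paper.

        # Sketch: compare dominant spatial mode structures from two space-time data sets.
        import numpy as np

        rng = np.random.default_rng(1)
        n_probes, n_times = 192, 400
        t = np.linspace(0, 1e-3, n_times)                              # 1 ms window
        phi = np.sin(2 * np.pi * np.arange(n_probes) / n_probes)       # "true" spatial mode

        exp_data = np.outer(phi, np.sin(2 * np.pi * 14.5e3 * t)) + 0.1 * rng.normal(size=(n_probes, n_times))
        sim_data = np.outer(phi + 0.05 * rng.normal(size=n_probes), np.sin(2 * np.pi * 14.5e3 * t))

        def leading_modes(data, k=3):
            U, s, _ = np.linalg.svd(data, full_matrices=False)         # columns of U: spatial modes
            return U[:, :k]

        U_exp, U_sim = leading_modes(exp_data), leading_modes(sim_data)
        # Metric: absolute correlation (cosine similarity) between corresponding modes.
        metric = [abs(np.dot(U_exp[:, i], U_sim[:, i])) for i in range(3)]
        print("mode-structure correlations:", np.round(metric, 3))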

  17. Development of an Input Suite for an Orthotropic Composite Material Model

    NASA Technical Reports Server (NTRS)

    Hoffarth, Canio; Shyamsunder, Loukham; Khaled, Bilal; Rajan, Subramaniam; Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Blankenhorn, Gunther

    2017-01-01

    An orthotropic three-dimensional material model suitable for use in modeling impact tests has been developed that has three major components: elastic and inelastic deformations, damage, and failure. The material model has been implemented as MAT213 into a special version of LS-DYNA and uses tabulated data obtained from experiments. The prominent features of the constitutive model are illustrated using a widely-used aerospace composite, the T800S3900-2B[P2352W-19] BMS8-276 Rev-H-Unitape fiber resin unidirectional composite. The input for the deformation model consists of experimental data from 12 distinct experiments at a known temperature and strain rate: tension and compression along all three principal directions, shear in all three principal planes, and off-axis tension or compression tests in all three principal planes, along with other material constants. There are additional inputs associated with the damage and failure models. The steps in using this model are illustrated: composite characterization tests, verification tests, and a validation test. The results show that the developed and implemented model is stable and yields acceptably accurate results.

  18. On the Impact of Illustrated Assessment Tool on Paragraph Writing of High School Graduates of Qom, Iran

    ERIC Educational Resources Information Center

    Bagheridoust, Esmaeil; Husseini, Zahra

    2011-01-01

    Writing as one important skill in language proficiency demands validity, hence high schools are real places in which valid results are needed for high-stake decisions. Unrealistic and non-viable tests result in improper and invalid interpretation and use. Illustrations without any written research have proved their effectiveness in whatsoever…

  19. Estimating the Uncertainty and Predictive Capabilities of Three-Dimensional Earth Models (Postprint)

    DTIC Science & Technology

    2012-03-22

    Only fragments of the abstract survive extraction: the study uses a global ground-truth database (www.isc.ac.uk) that includes more than 7,000 events whose epicentral location accuracy is known to at least 5 km, and the surviving text notes the difficulty of validating a model with travel times alone, describing the IASPEI REL database as "currently the highest..." (truncated). The remainder is figure-caption residue: S (right) paths in the IASPEI REL ground-truth database, with stations represented by purple triangles and events by gray circles, and sparse coverage noted.

  20. Understanding Dynamic Model Validation of a Wind Turbine Generator and a Wind Power Plant: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, Eduard; Zhang, Ying Chen; Gevorgian, Vahan

    Regional reliability organizations require power plants to validate the dynamic models that represent them to ensure that power systems studies are performed to the best representation of the components installed. In the process of validating a wind power plant (WPP), one must be cognizant of the parameter settings of the wind turbine generators (WTGs) and the operational settings of the WPP. Validating the dynamic model of a WPP is required to be performed periodically. This is because the control parameters of the WTGs and the other supporting components within a WPP may be modified to comply with new grid codes or upgrades to the WTG controller with new capabilities developed by the turbine manufacturers or requested by the plant owners or operators. The diversity within a WPP affects the way we represent it in a model. Diversity within a WPP may be found in the way the WTGs are controlled, the wind resource, the layout of the WPP (electrical diversity), and the type of WTGs used. Each group of WTGs constitutes a significant portion of the output power of the WPP, and their unique and salient behaviors should be represented individually. The objective of this paper is to illustrate the process of dynamic model validations of WTGs and WPPs, the available data recorded that must be screened before it is used for the dynamic validations, and the assumptions made in the dynamic models of the WTG and WPP that must be understood. Without understanding the correct process, the validations may lead to the wrong representations of the WTG and WPP modeled.

  1. Preliminary Structural Sensitivity Study of Hypersonic Inflatable Aerodynamic Decelerator Using Probabilistic Methods

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.

    2014-01-01

    Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce computational expense by several orders of magnitude. The suitability of the design is based on assessing variation in the resulting cone angle. The acceptable cone angle variation would depend on the aerodynamic requirements.
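
    A minimal sketch of the surrogate-based uncertainty workflow described above: Latin hypercube samples of assumed structural parameters are pushed through a cheap stand-in for the structural solver, a Gaussian-process surrogate is fitted, and the variation of the resulting cone angle is estimated by Monte Carlo on the surrogate. Parameter names, ranges and the response function are placeholders, not the HIAD model itself.

        # Sketch of an uncertainty/sensitivity workflow with a surrogate model.
        import numpy as np
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor

        def cone_angle(params):                      # stand-in for the expensive structural solver
            stiffness, pressure, thickness = params.T
            return 70.0 - 4.0 * stiffness + 2.5 * pressure - 1.5 * thickness * stiffness

        sampler = qmc.LatinHypercube(d=3, seed=0)
        X_train = qmc.scale(sampler.random(40), [0.8, 0.9, 0.8], [1.2, 1.1, 1.2])
        y_train = cone_angle(X_train)

        surrogate = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

        X_mc = qmc.scale(sampler.random(5000), [0.8, 0.9, 0.8], [1.2, 1.1, 1.2])
        y_mc = surrogate.predict(X_mc)
        print(f"cone angle: mean={y_mc.mean():.2f} deg, std={y_mc.std():.2f} deg")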

  2. Global exponential stability of positive periodic solution of the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays.

    PubMed

    Zhao, Kaihong

    2018-12-01

    In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of a positive periodic solution is proved by employing the fixed point theorem on cones. By constructing an appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.

  3. A Simple and Efficient Computational Approach to Chafed Cable Time-Domain Reflectometry Signature Prediction

    NASA Technical Reports Server (NTRS)

    Kowalski, Marc Edward

    2009-01-01

    A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.

  4. Injection-Molded Long-Fiber Thermoplastic Composites: From Process Modeling to Prediction of Mechanical Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Kunc, Vlastimil; Jin, Xiaoshi

    2013-12-18

    This article illustrates the predictive capabilities for long-fiber thermoplastic (LFT) composites that first simulate the injection molding of LFT structures by Autodesk® Simulation Moldflow® Insight (ASMI) to accurately predict fiber orientation and length distributions in these structures. After validating fiber orientation and length predictions against the experimental data, the predicted results are used by ASMI to compute distributions of elastic properties in the molded structures. In addition, local stress-strain responses and damage accumulation under tensile loading are predicted by an elastic-plastic damage model of EMTA-NLA, a nonlinear analysis tool implemented in ABAQUS® via user-subroutines using an incremental Eshelby-Mori-Tanaka approach. Predicted stress-strain responses up to failure and damage accumulations are compared to the experimental results to validate the model.

  5. Computational Modeling of Electrochemical-Poroelastic Bending Behaviors of Conducting Polymer (PPy) Membranes

    NASA Astrophysics Data System (ADS)

    Toi, Yutaka; Jung, Woosang

    The electrochemical-poroelastic bending behavior of conducting polymer actuators is attractive in view of their potential applications, such as artificial muscles or MEMS. In the present study, a computational model is presented for the bending behavior of polypyrrole-based actuators. The one-dimensional governing equation for the ionic transportation in electrolytes given by Tadokoro et al. is combined with the finite element modeling for the poroelastic behavior of polypyrroles considering the effect of finite deformation. The validity of the proposed model has been illustrated by comparing the computed results with the experimental results in the literature.

  6. Novel Real-Time Facial Wound Recovery Synthesis Using Subsurface Scattering

    PubMed Central

    Chin, Seongah

    2014-01-01

    We propose a wound recovery synthesis model that illustrates the appearance of a wound healing on a 3-dimensional (3D) face. The H3 model is used to determine the size of the recovering wound. Furthermore, we present our subsurface scattering model that is designed to take the multilayered skin structure of the wound into consideration to represent its color transformation. We also propose a novel real-time rendering method based on the results of an analysis of the characteristics of translucent materials. Finally, we validate the proposed methods with 3D wound-simulation experiments using shading models. PMID:25197721

  7. Aerodynamic analysis of an isolated vehicle wheel

    NASA Astrophysics Data System (ADS)

    Leśniewicz, P.; Kulak, M.; Karczewski, M.

    2014-08-01

    Increasing fuel prices force manufacturers to look into all aspects of car aerodynamics, including wheels, tyres and rims, in order to minimize drag. By diminishing the aerodynamic drag of a vehicle, fuel consumption decreases while driving safety and comfort improve. In order to properly illustrate the impact of rotating wheel aerodynamics on the car body, a precise analysis of an isolated wheel should be performed beforehand. To represent wheel rotation in contact with the ground, the presented CFD simulations included a Moving Wall boundary condition as well as a Multiple Reference Frame (MRF) approach; a sliding mesh approach is favoured but too costly at the moment. Global and local flow quantities obtained during the simulations were compared to an experiment in order to assess the validity of the numerical model. The results illustrate the dependency of the drag and lift coefficients on the type of simulation; the MRF approach proved to be the better solution, giving results closer to the experiment. Investigation of the model with a contact area between the wheel and the ground helps to illustrate the impact of rotating wheel aerodynamics on the car body.

  8. Decoding the Principles of Emergence and Resiliency in Biological Collective Systems - A Multi-Scale Approach: Final Report

    DTIC Science & Technology

    2018-02-15

    Only fragments of the abstract survive extraction: the models and approaches are also valid using other invasive and non-invasive technologies, which the authors illustrate and experimentally evaluate (truncated). The remainder is project-outline residue listing: pattern formation diversity in wild microbial societies; experimental and mathematical analysis methodology; skeleton (truncated) chemotaxis, nutrient degradation, and the exchange of amino acids between cells, using both quantitative experimental methods and several theoretical (truncated).

  9. Modal mass estimation from ambient vibrations measurement: A method for civil buildings

    NASA Astrophysics Data System (ADS)

    Acunzo, G.; Fiorini, N.; Mori, F.; Spina, D.

    2018-01-01

    A new method for estimating the modal mass ratios of buildings from unscaled mode shapes identified from ambient vibrations is presented. The method is based on the Multi Rigid Polygons (MRP) model in which each floor of the building is ideally divided into several non-deformable polygons that move independently of each other. The whole mass of the building is concentrated in the centroids of the polygons and the experimental mode shapes are expressed in terms of rigid translations and rotations. In this way, the mass matrix of the building can be easily computed on the basis of simple information about the geometry and the materials of the structure. The modal mass ratios can then be obtained through the classical equation of structural dynamics. Ambient vibration measurements must be performed according to this MRP model, using at least two biaxial accelerometers per polygon. After a brief illustration of the theoretical background of the method, numerical validations are presented, analysing the method's sensitivity to different possible sources of error. Quality indexes are defined for evaluating the approximation of the modal mass ratios obtained from a certain MRP model. The capability of the proposed model to be applied to real buildings is illustrated through two experimental applications. In the first one, a geometrically irregular reinforced concrete building is considered, using a calibrated Finite Element Model for validating the results of the method. The second application refers to a historical monumental masonry building, with a more complex geometry and with less information available. In both cases, MRP models with a different number of rigid polygons per floor are compared.
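
    In compact form, the modal mass ratio of mode i follows from the classical relation Gamma_i = (phi_i^T M r)^2 / ((phi_i^T M phi_i) (r^T M r)) for an influence vector r, which is why only the (unscaled) mode shapes and an assembled mass matrix are needed. The toy sketch below applies this relation to an assumed three-mass example; the MRP mass-matrix assembly itself depends on the building geometry and is not reproduced here.

        # Modal mass ratios from unscaled mode shapes and a lumped mass matrix (toy example).
        import numpy as np

        M = np.diag([250.0e3, 250.0e3, 200.0e3])     # assumed floor masses [kg]
        phi = np.array([[0.35, 1.00, -0.80],         # unscaled mode shapes (columns = modes),
                        [0.70, 0.30,  1.00],         # e.g. identified from ambient vibrations
                        [1.00, -0.90, -0.45]])
        r = np.ones(3)                               # influence vector for uniform ground motion

        total_mass = r @ M @ r
        for i in range(phi.shape[1]):
            v = phi[:, i]
            m_eff = (v @ M @ r) ** 2 / (v @ M @ v)   # effective modal mass of mode i
            print(f"mode {i + 1}: modal mass ratio = {m_eff / total_mass:.2%}")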

  10. Modeling and validation of photometric characteristics of space targets oriented to space-based observation.

    PubMed

    Wang, Hongyuan; Zhang, Wei; Dong, Aotuo

    2012-11-10

    A modeling and validation method of photometric characteristics of the space target was presented in order to track and identify different satellites effectively. The background radiation characteristics models of the target were built based on blackbody radiation theory. The geometry characteristics of the target were illustrated by the surface equations based on its body coordinate system. The material characteristics of the target surface were described by a bidirectional reflectance distribution function model, which considers the character of surface Gauss statistics and microscale self-shadow and is obtained by measurement and modeling in advance. The contributing surfaces of the target to observation system were determined by coordinate transformation according to the relative position of the space-based target, the background radiation sources, and the observation platform. Then a mathematical model on photometric characteristics of the space target was built by summing reflection components of all the surfaces. Photometric characteristics simulation of the space-based target was achieved according to its given geometrical dimensions, physical parameters, and orbital parameters. Experimental validation was made based on the scale model of the satellite. The calculated results fit well with the measured results, which indicates the modeling method of photometric characteristics of the space target is correct.

  11. Multimodal electromechanical model of piezoelectric transformers by Hamilton's principle.

    PubMed

    Nadal, Clement; Pigache, Francois

    2009-11-01

    This work deals with a general energetic approach to establish an accurate electromechanical model of a piezoelectric transformer (PT). Hamilton's principle is used to obtain the equations of motion for free vibrations. The modal characteristics (mass, stiffness, primary and secondary electromechanical conversion factors) are also deduced. Then, to illustrate this general electromechanical method, the variational principle is applied to both homogeneous and nonhomogeneous Rosen-type PT models. A comparison of modal parameters, mechanical displacements, and electrical potentials are presented for both models. Finally, the validity of the electrodynamical model of nonhomogeneous Rosen-type PT is confirmed by a numerical comparison based on a finite elements method and an experimental identification.

  12. A compact model of the reverse gate-leakage current in GaN-based HEMTs

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoyu; Huang, Junkai; Fang, Jielin; Deng, Wanling

    2016-12-01

    The gate-leakage behavior in GaN-based high electron mobility transistors (HEMTs) is studied as a function of applied bias and temperature. A model to calculate this current is given, which shows that trap-assisted tunneling, trap-assisted Frenkel-Poole (FP) emission, and direct Fowler-Nordheim (FN) tunneling have their main contributions at different electric field regions. In addition, the proposed model clearly illustrates the effect of traps and their assistance to the gate leakage. We have demonstrated the validity of the model by comparisons between model simulation results and measured experimental data of HEMTs, and a good agreement is obtained.
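
    For orientation, the textbook field dependences of two of the mechanisms named above (Frenkel-Poole emission and Fowler-Nordheim tunneling) can be sketched as follows; the prefactors, barrier height and FN coefficient are placeholder values, not the fitted parameters of the proposed compact model, and the trap-assisted tunneling term is omitted.

        # Illustrative field dependence of Frenkel-Poole emission and Fowler-Nordheim
        # tunneling (textbook forms); prefactors and barrier values are placeholders.
        import numpy as np

        q, kB = 1.602e-19, 1.381e-23
        eps0, eps_r = 8.854e-12, 9.5           # assumed dynamic permittivity of the barrier
        T = 300.0
        phi_t = 0.6                            # assumed trap barrier height [eV]
        B_fn = 2.5e9                           # assumed FN exponential coefficient [V/m]

        E = np.logspace(7, 9, 5)               # electric field [V/m]

        j_fp = 1e-6 * E * np.exp(-q * (phi_t - np.sqrt(q * E / (np.pi * eps0 * eps_r))) / (kB * T))
        j_fn = 1e-14 * E**2 * np.exp(-B_fn / E)

        for e, a, b in zip(E, j_fp, j_fn):
            print(f"E={e:.1e} V/m   J_FP~{a:.2e}   J_FN~{b:.2e}  (arbitrary units)")

    The FP term dominates at moderate fields and higher temperatures, while the FN term takes over at high fields, which is the qualitative picture the compact model captures with fitted parameters.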

  13. Numerical modelling of transdermal delivery from matrix systems: parametric study and experimental validation with silicone matrices.

    PubMed

    Snorradóttir, Bergthóra S; Jónsdóttir, Fjóla; Sigurdsson, Sven Th; Másson, Már

    2014-08-01

    A model is presented for transdermal drug delivery from single-layered silicone matrix systems. The work is based on our previous results that, in particular, extend the well-known Higuchi model. Recently, we have introduced a numerical transient model describing matrix systems where the drug dissolution can be non-instantaneous. Furthermore, our model can describe complex interactions within a multi-layered matrix and the matrix to skin boundary. The power of the modelling approach presented here is further illustrated by allowing the possibility of a donor solution. The model is validated by a comparison with experimental data, as well as validating the parameter values against each other, using various configurations with donor solution, silicone matrix and skin. Our results show that the model is a good approximation to real multi-layered delivery systems. The model offers the ability of comparing drug release for ibuprofen and diclofenac, which cannot be analysed by the Higuchi model because the dissolution in the latter case turns out to be limited. The experiments and numerical model outlined in this study could also be adjusted to more general formulations, which enhances the utility of the numerical model as a design tool for the development of drug-loaded matrices for trans-membrane and transdermal delivery. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
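
    A minimal explicit finite-difference sketch of the kind of transient matrix model described, with non-instantaneous dissolution feeding Fickian diffusion across a single layer and release at a perfect-sink boundary; the geometry, rate constants and boundary treatment are simplifying assumptions and not the authors' multi-layer formulation.

        # Minimal 1D sketch: drug dissolves at a finite rate into the dissolved phase,
        # which diffuses across the matrix and is released at x = L (perfect sink).
        import numpy as np

        L, nx, nt = 1.0e-3, 51, 20000        # matrix thickness [m], grid points, time steps (assumed)
        dx = L / (nx - 1)
        D = 1.0e-11                          # diffusivity of dissolved drug [m^2/s] (assumed)
        k_dis = 1.0e-3                       # first-order dissolution rate constant [1/s] (assumed)
        Cs = 1.0                             # drug solubility in the matrix (normalized)
        dt = 0.4 * dx**2 / D                 # time step within the explicit stability limit

        solid = np.full(nx, 5.0)             # undissolved drug load per node (normalized)
        c = np.zeros(nx)                     # dissolved drug concentration

        for _ in range(nt):
            # non-instantaneous dissolution: first order toward solubility, limited by remaining solid
            diss = np.minimum(k_dis * np.maximum(Cs - c, 0.0) * dt, solid)
            solid -= diss
            c_new = c.copy()
            c_new[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c_new[0] += D * dt / dx**2 * 2 * (c[1] - c[0])       # no-flux backing layer
            c_new += diss
            c_new[-1] = 0.0                                      # perfect sink at the receptor side
            c = c_new

        total0 = 5.0 * nx
        released = total0 - (solid.sum() + c.sum())
        print(f"fraction released ~ {released / total0:.2%} after {nt * dt / 3600:.1f} h")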

  14. An Integrated Approach Linking Process to Structural Modeling With Microstructural Characterization for Injections-Molded Long-Fiber Thermoplastics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Bapanapalli, Satish K.; Smith, Mark T.

    2008-09-01

    The objective of our work is to enable the optimum design of lightweight automotive structural components using injection-molded long fiber thermoplastics (LFTs). To this end, an integrated approach that links process modeling to structural analysis with experimental microstructural characterization and validation is developed. First, process models for LFTs are developed and implemented into processing codes (e.g. ORIENT, Moldflow) to predict the microstructure of the as-formed composite (i.e. fiber length and orientation distributions). In parallel, characterization and testing methods are developed to obtain necessary microstructural data to validate process modeling predictions. Second, the predicted LFT composite microstructure is imported into a structural finite element analysis by ABAQUS to determine the response of the as-formed composite to given boundary conditions. At this stage, constitutive models accounting for the composite microstructure are developed to predict various types of behaviors (i.e. thermoelastic, viscoelastic, elastic-plastic, damage, fatigue, and impact) of LFTs. Experimental methods are also developed to determine material parameters and to validate constitutive models. Such a process-linked-structural modeling approach allows an LFT composite structure to be designed with confidence through numerical simulations. Some recent results of our collaborative research will be illustrated to show the usefulness and applications of this integrated approach.

  15. Modeling exposure–lag–response associations with distributed lag non-linear models

    PubMed Central

    Gasparrini, Antonio

    2014-01-01

    In biomedical research, a health effect is frequently associated with protracted exposures of varying intensity sustained in the past. The main complexity of modeling and interpreting such phenomena lies in the additional temporal dimension needed to express the association, as the risk depends on both intensity and timing of past exposures. This type of dependency is defined here as exposure–lag–response association. In this contribution, I illustrate a general statistical framework for such associations, established through the extension of distributed lag non-linear models, originally developed in time series analysis. This modeling class is based on the definition of a cross-basis, obtained by the combination of two functions to flexibly model linear or nonlinear exposure-responses and the lag structure of the relationship, respectively. The methodology is illustrated with an example application to cohort data and validated through a simulation study. This modeling framework generalizes to various study designs and regression models, and can be applied to study the health effects of protracted exposures to environmental factors, drugs or carcinogenic agents, among others. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24027094
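
    A bare-bones illustration of the cross-basis idea, assuming simple polynomial bases for both the exposure-response and the lag-response dimensions (the R package dlnm provides the full machinery; this numpy sketch only shows how the cross-basis design matrix is assembled from lagged exposures).

        # Sketch of building a DLNM cross-basis with numpy, assuming polynomial bases.
        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.gamma(2.0, 10.0, size=500)             # simulated daily exposure series
        max_lag = 14
        lags = np.arange(max_lag + 1)

        # Row t holds the lagged exposures x[t], x[t-1], ..., x[t-max_lag]
        Q = np.column_stack([np.roll(x, l) for l in lags])[max_lag:]

        exposure_basis = lambda v: np.column_stack([v, v**2])            # f_1(x)=x, f_2(x)=x^2
        W = np.column_stack([np.ones_like(lags, float), lags, lags**2])  # lag basis: 1, l, l^2

        F = np.stack([exposure_basis(Q[:, l]) for l in lags], axis=1)    # shape (n, max_lag+1, 2)
        CB = np.einsum('tlj,lk->tjk', F, W).reshape(len(Q), -1)          # cross-basis, shape (n, 6)
        print("cross-basis design matrix:", CB.shape)
        # CB can now enter, e.g., a Poisson regression of daily counts as the exposure terms.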

  16. Beam dynamics validation of the Halbach Technology FFAG Cell for Cornell-BNL Energy Recovery Linac

    NASA Astrophysics Data System (ADS)

    Méot, F.; Tsoupas, N.; Brooks, S.; Trbojevic, D.

    2018-07-01

    The Cornell-BNL Electron Test Accelerator (CBETA), a 150 MeV energy recovery linac (ERL) now in construction at Cornell, employs a fixed-field alternating gradient optics return loop: a single beam line comprised of FFAG cells, which accepts four recirculated energies. CBETA FFAG cell uses Halbach permanent magnet technology, its design studies have covered an extended period of time supported by extensive particle dynamics simulations using computed 3-D field map models. This approach is discussed, and illustrated here, based on the final stage in these beam dynamics studies, namely the validation of a ultimate, optimized design of the Halbach cell.

  17. Flux-split algorithms for flows with non-equilibrium chemistry and vibrational relaxation

    NASA Technical Reports Server (NTRS)

    Grossman, B.; Cinnella, P.

    1990-01-01

    The present consideration of numerical computation methods for gas flows with nonequilibrium chemistry and thermodynamics gives attention to an equilibrium model, a general nonequilibrium model, and a simplified model based on vibrational relaxation. Flux-splitting procedures are developed for the fully-coupled inviscid equations encompassing fluid dynamics and both chemical and internal energy-relaxation processes. A fully coupled and implicit large-block structure is presented which embodies novel forms of flux-vector split and flux-difference split algorithms valid for nonequilibrium flow; illustrative high-temperature shock tube and nozzle flow examples are given.

  18. Introduction to Bayesian statistical approaches to compositional analyses of transgenic crops 1. Model validation and setting the stage.

    PubMed

    Harrison, Jay M; Breeze, Matthew L; Harrigan, George G

    2011-08-01

    Statistical comparisons of compositional data generated on genetically modified (GM) crops and their near-isogenic conventional (non-GM) counterparts typically rely on classical significance testing. This manuscript presents an introduction to Bayesian methods for compositional analysis along with recommendations for model validation. The approach is illustrated using protein and fat data from two herbicide tolerant GM soybeans (MON87708 and MON87708×MON89788) and a conventional comparator grown in the US in 2008 and 2009. Guidelines recommended by the US Food and Drug Administration (FDA) in conducting Bayesian analyses of clinical studies on medical devices were followed. This study is the first Bayesian approach to GM and non-GM compositional comparisons. The evaluation presented here supports a conclusion that a Bayesian approach to analyzing compositional data can provide meaningful and interpretable results. We further describe the importance of method validation and approaches to model checking if Bayesian approaches to compositional data analysis are to be considered viable by scientists involved in GM research and regulation. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. A new class of enhanced kinetic sampling methods for building Markov state models

    NASA Astrophysics Data System (ADS)

    Bhoutekar, Arti; Ghosh, Susmita; Bhattacharya, Swati; Chatterjee, Abhijit

    2017-10-01

    Markov state models (MSMs) and other related kinetic network models are frequently used to study the long-timescale dynamical behavior of biomolecular and materials systems. MSMs are often constructed bottom-up using brute-force molecular dynamics (MD) simulations when the model contains a large number of states and kinetic pathways that are not known a priori. However, the resulting network generally encompasses only parts of the configurational space, and regardless of any additional MD performed, several states and pathways will still remain missing. This implies that the duration for which the MSM can faithfully capture the true dynamics, which we term as the validity time for the MSM, is always finite and unfortunately much shorter than the MD time invested to construct the model. A general framework that relates the kinetic uncertainty in the model to the validity time, missing states and pathways, network topology, and statistical sampling is presented. Performing additional calculations for frequently-sampled states/pathways may not alter the MSM validity time. A new class of enhanced kinetic sampling techniques is introduced that aims at targeting rare states/pathways that contribute most to the uncertainty so that the validity time is boosted in an effective manner. Examples including straightforward 1D energy landscapes, lattice models, and biomolecular systems are provided to illustrate the application of the method. Developments presented here will be of interest to the kinetic Monte Carlo community as well.

  20. An experimental and theoretical investigation of deposition patterns from an agricultural airplane

    NASA Technical Reports Server (NTRS)

    Morris, D. J.; Croom, C. C.; Vandam, C. P.; Holmes, B. J.

    1984-01-01

    A flight test program has been conducted with a representative agricultural airplane to provide data for validating a computer program model which predicts aerially applied particle deposition. Test procedures and the data from this test are presented and discussed. The computer program features are summarized, and comparisons of predicted and measured particle deposition are presented. Applications of the computer program for spray pattern improvement are illustrated.

  1. Human performance measurement: Validation procedures applicable to advanced manned telescience systems

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.

    1990-01-01

    As telescience systems become more and more complex, autonomous, and opaque to their operators it becomes increasingly difficult to determine whether the total system is performing as it should. Some of the complex and interrelated human performance measurement issues are addressed as they relate to total system validation. The assumption is made that human interaction with the automated system will be required well into the Space Station Freedom era. Candidate human performance measurement-validation techniques are discussed for selected ground-to-space-to-ground and space-to-space situations. Most of these measures may be used in conjunction with an information throughput model presented elsewhere (Haines, 1990). Teleoperations, teleanalysis, teleplanning, teledesign, and teledocumentation are considered, as are selected illustrative examples of space related telescience activities.

  2. From bedside to bench and back again: research issues in animal models of human disease.

    PubMed

    Tkacs, Nancy C; Thompson, Hilaire J

    2006-07-01

    To improve outcomes for patients with many serious clinical problems, multifactorial research approaches by nurse scientists, including the use of animal models, are necessary. Animal models serve as analogies for clinical problems seen in humans and must meet certain criteria, including validity and reliability, to be useful in moving research efforts forward. This article describes research considerations in the development of rodent models. As the standard of diabetes care evolves to emphasize intensive insulin therapy, rates of severe hypoglycemia are increasing among patients with type 1 and type 2 diabetes mellitus. A consequence of this change in clinical practice is an increase in rates of two hypoglycemia-related diabetes complications: hypoglycemia-associated autonomic failure (HAAF) and resulting hypoglycemia unawareness. Work on an animal model of HAAF is in an early developmental stage, with several labs reporting different approaches to model this complication of type 1 diabetes mellitus. This emerging model serves as an example illustrating how evaluation of validity and reliability is critically important at each stage of developing and testing animal models to support inquiry into human disease.

  3. Cognitive Support During High-Consequence Episodes of Care in Cardiovascular Surgery.

    PubMed

    Conboy, Heather M; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Christov, Stefan C; Goldman, Julian M; Yule, Steven J; Zenati, Marco A

    2017-03-01

    Despite significant efforts to reduce preventable adverse events in medical processes, such events continue to occur at unacceptable rates. This paper describes a computer science approach that uses formal process modeling to provide situationally aware monitoring and management support to medical professionals performing complex processes. These process models represent both normative and non-normative situations, and are validated by rigorous automated techniques such as model checking and fault tree analysis, in addition to careful review by experts. Context-aware Smart Checklists are then generated from the models, providing cognitive support during high-consequence surgical episodes. The approach is illustrated with a case study in cardiovascular surgery.

  4. Validation of model predictions of pore-scale fluid distributions during two-phase flow

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Lin, Qingyang; Gao, Ying; Raeini, Ali Q.; AlRatrout, Ahmed; Bijeljic, Branko; Blunt, Martin J.

    2018-05-01

    Pore-scale two-phase flow modeling is an important technology to study a rock's relative permeability behavior. To investigate if these models are predictive, the calculated pore-scale fluid distributions which determine the relative permeability need to be validated. In this work, we introduce a methodology to quantitatively compare models to experimental fluid distributions in flow experiments visualized with microcomputed tomography. First, we analyzed five repeated drainage-imbibition experiments on a single sample. In these experiments, the exact fluid distributions were not fully repeatable on a pore-by-pore basis, while the global properties of the fluid distribution were. Then two fractional flow experiments were used to validate a quasistatic pore network model. The model correctly predicted the fluid present in more than 75% of pores and throats in drainage and imbibition. To quantify what this means for the relevant global properties of the fluid distribution, we compare the main flow paths and the connectivity across the different pore sizes in the modeled and experimental fluid distributions. These essential topology characteristics matched well for drainage simulations, but not for imbibition. This suggests that the pore-filling rules in the network model we used need to be improved to make reliable predictions of imbibition. The presented analysis illustrates the potential of our methodology to systematically and robustly test two-phase flow models to aid in model development and calibration.

  5. On the Validity of Useless Tests

    ERIC Educational Resources Information Center

    Sireci, Stephen G.

    2016-01-01

    A misconception exists that validity may refer only to the "interpretation" of test scores and not to the "uses" of those scores. The development and evolution of validity theory illustrate test score interpretation was a primary focus in the earliest days of modern testing, and that validating interpretations derived from test…

  6. Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.

    PubMed

    Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy

    2010-02-01

    This paper illustrates the advantages that a fuzzy-based aggregation method could bring into the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for validation of chemical, bio-chemical, pharmaceutical and genetic methods have been developed and ad hoc validation statistics are available and routinely used, for in-house and inter-laboratory testing, and decision-making. Fuzzy logic allows summarising the information obtained by independent validation statistics into one synthetic indicator of overall method performance. The microarray technology, introduced for simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper, and applied to validation data for different genetically modified elements. Remarks were drawn on the analytical results. The fuzzy-logic based rules were shown to be applicable to improve interpretation of results and facilitate overall evaluation of the multiplex method.
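
    A toy sketch of the kind of fuzzy aggregation described: each validation statistic is mapped to a [0, 1] membership score for "favorable performance" through a simple trapezoidal function, and the scores are combined with expert weights into one synthetic indicator. The statistics, thresholds and weights below are invented for illustration and are not taken from the DualChip GMO kit validation.

        # Toy fuzzy aggregation of independent validation statistics into one indicator.
        import numpy as np

        def trapezoid(x, a, b):
            # Membership for "favorable": 0 below a, 1 above b, linear in between.
            return float(np.clip((x - a) / (b - a), 0.0, 1.0))

        # Hypothetical validation statistics for one GM element at one concentration:
        stats = {"sensitivity": 0.97, "specificity": 0.97, "accuracy": 0.88, "repeatability": 0.82}
        thresholds = {"sensitivity": (0.85, 0.95), "specificity": (0.85, 0.95),
                      "accuracy": (0.80, 0.95), "repeatability": (0.75, 0.90)}
        weights = {"sensitivity": 0.3, "specificity": 0.3, "accuracy": 0.2, "repeatability": 0.2}

        membership = {k: trapezoid(stats[k], *thresholds[k]) for k in stats}
        overall = sum(weights[k] * membership[k] for k in stats)   # synthetic indicator in [0, 1]
        print({k: round(v, 2) for k, v in membership.items()}, "->", round(overall, 2))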

  7. Validating a Fidelity Scale to Understand Intervention Effects in Classroom-Based Studies

    ERIC Educational Resources Information Center

    Buckley, Pamela; Moore, Brooke; Boardman, Alison G.; Arya, Diana J.; Maul, Andrew

    2017-01-01

    K-12 intervention studies often include fidelity of implementation (FOI) as a mediating variable, though most do not report the validity of fidelity measures. This article discusses the critical need for validated FOI scales. To illustrate our point, we describe the development and validation of the Implementation Validity Checklist (IVC-R), an…

  8. Revisioning Clinical Psychology: Integrating Cultural Psychology into Clinical Research and Practice with Portuguese Immigrants

    PubMed Central

    James, Susan; Harris, Sara; Foster, Gary; Clarke, Juanne; Gadermann, Anne; Morrison, Marie; Bezanson, Birdie Jane

    2013-01-01

    This article outlines a model for conducting psychotherapy with people of diverse cultural backgrounds. The theoretical foundation for the model is based on clinical and cultural psychology. Cultural psychology integrates psychology and anthropology in order to provide a complex understanding of both culture and the individual within his or her cultural context. The model proposed in this article is also based on our clinical experience and mixed-method research with the Portuguese community. The model demonstrates its value with ethnic minority clients by situating the clients within the context of their multi-layered social reality. The individual, familial, socio-cultural, and religio-moral domains are explored in two research projects, revealing the interrelation of these levels/contexts. The article is structured according to these domains. Study 1 is a quantitative study that validates the Agonias Questionnaire in Ontario. The results of this study are used to illustrate the individual domain of our proposed model. Study 2 is an ethnography conducted in the Azorean Islands, and the results of this study are integrated to illustrate the other three levels of the model, namely family, socio-cultural, and the religio-moral levels. PMID:23720642

  9. Spherically-symmetric solutions in general relativity using a tetrad-based approach

    NASA Astrophysics Data System (ADS)

    Kim, Do Young; Lasenby, Anthony N.; Hobson, Michael P.

    2018-03-01

    We present a tetrad-based method for solving the Einstein field equations for spherically-symmetric systems and compare it with the widely-used Lemaître-Tolman-Bondi (LTB) model. In particular, we focus on the issues of gauge ambiguity and the use of comoving versus `physical' coordinate systems. We also clarify the correspondences between the two approaches, and illustrate their differences by applying them to the classic examples of the Schwarzschild and Friedmann-Lemaître-Robertson-Walker spacetimes. We demonstrate that the tetrad-based method does not suffer from the gauge freedoms inherent to the LTB model, naturally accommodates non-uniform pressure and has a more transparent physical interpretation. We further apply our tetrad-based method to a generalised form of `Swiss cheese' model, which consists of an interior spherical region surrounded by a spherical shell of vacuum that is embedded in an exterior background universe. In general, we allow the fluid in the interior and exterior regions to support pressure, and do not demand that the interior region be compensated. We pay particular attention to the form of the solution in the intervening vacuum region and illustrate the validity of Birkhoff's theorem at both the metric and tetrad level. We then reconsider critically the original theoretical arguments underlying the so-called Rh = ct cosmological model, which has recently received considerable attention. These considerations in turn illustrate the interesting behaviour of a number of `horizons' in general cosmological models.

  10. Bad Behavior: Improving Reproducibility in Behavior Testing.

    PubMed

    Andrews, Anne M; Cheng, Xinyi; Altieri, Stefanie C; Yang, Hongyan

    2018-01-24

    Systems neuroscience research is increasingly possible through the use of integrated molecular and circuit-level analyses. These studies depend on the use of animal models and, in many cases, molecular and circuit-level analyses associated with genetic, pharmacologic, epigenetic, and other types of environmental manipulations. We illustrate typical pitfalls resulting from poor validation of behavior tests. We describe experimental designs and enumerate controls needed to improve reproducibility in the investigation and reporting of behavioral phenotypes.

  11. Three-Dimensional Shallow Water Adaptive Hydraulics (ADH-SW3) Validation: Galveston Bay Hydrodynamics and Salinity Transport

    DTIC Science & Technology

    2015-04-01

    Only list-of-figures and text fragments survive extraction: a figure caption for the model mesh with elements (vertical coordinate in meters); Figure 3, ocean tidal boundary (Hour 0 = 1 Jan 1990, 12:00 a.m.); Figure 4, ocean salt boundary (Hour 0 = 1 Jan 1990, 12:00 a.m.); and a text fragment on the channel and the connections of Galveston Bay to the open ocean, noting that Figures 1 and 2 illustrate the distribution of vertical layers and resolution in the model (truncated).

  12. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature on sensitivity analysis as it applies to biological systems is reported, together with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is also presented.
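
    A minimal version of the kind of normalized parameter sensitivity analysis described, computed by finite differences around a nominal parameter set of a simple logistic population model used as a stand-in; the parameter names and values are assumptions, and the thermoregulatory model in the paper is far more complex.

        # Normalized parameter sensitivities S_j = (dy/dp_j) * (p_j / y), by finite differences.
        import numpy as np
        from scipy.integrate import solve_ivp

        def population(t, y, r, K):                 # simple logistic growth model (stand-in)
            return r * y * (1.0 - y / K)

        def output(params, t_end=10.0):             # model output: population at t_end
            r, K = params
            sol = solve_ivp(population, (0.0, t_end), [1.0], args=(r, K), rtol=1e-8)
            return sol.y[0, -1]

        p0 = np.array([0.8, 50.0])                  # nominal parameters (r, K), assumed values
        y0 = output(p0)
        for j, name in enumerate(["r", "K"]):
            dp = np.zeros_like(p0)
            dp[j] = 1e-4 * p0[j]
            S = (output(p0 + dp) - y0) / dp[j] * p0[j] / y0
            print(f"normalized sensitivity of y(10) to {name}: {S:+.3f}")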

  13. Software Validation via Model Animation

    NASA Technical Reports Server (NTRS)

    Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.

    2015-01-01

    This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
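
    The core comparison step, implementation outputs versus reference (formal-model) outputs on the same randomly generated inputs with agreement up to a tolerance, can be sketched as below. The trajectory function, the single-precision stand-in for the fielded code, and the tolerances are assumptions for illustration; in the actual work the reference values come from evaluating the PVS specifications with PVSio.

        # Sketch: compare implementation outputs to reference outputs on random test cases.
        import random
        import numpy as np

        def position_reference(x0, v, t):            # stand-in for the formal-model definition
            return x0 + v * t                        # evaluated in double precision

        def position_implementation(x0, v, t):       # stand-in for fielded code (single precision)
            return float(np.float32(x0) + np.float32(v) * np.float32(t))

        random.seed(0)
        failures = 0
        for _ in range(10000):
            x0, v, t = (random.uniform(-1e3, 1e3) for _ in range(3))
            ref, impl = position_reference(x0, v, t), position_implementation(x0, v, t)
            if not np.isclose(ref, impl, rtol=1e-5, atol=1e-3):
                failures += 1
        print(f"{failures} of 10000 test cases exceeded the agreed tolerance")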

  14. Simulation of the UT inspection of planar defects using a generic GTD-Kirchhoff approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorval, Vincent, E-mail: vincent.dorval@cea.fr; Darmon, Michel, E-mail: vincent.dorval@cea.fr; Chatillon, Sylvain, E-mail: vincent.dorval@cea.fr

    2015-03-31

    The modeling of ultrasonic Non Destructive Evaluation often plays an important part in the assessment of detection capabilities or as a help to interpret experiments. The ultrasonic modeling tool of the CIVA platform uses semi-analytical approximations for fast computations. Kirchhoff and GTD are two classical approximations for the modeling of echoes from plane-like defects such as cracks, and they aim at taking into account two different types of physical phenomena. The Kirchhoff approximation is mainly suitable to predict specular reflections from the flaw surface, whereas GTD is dedicated to the modeling of edge diffraction. As a consequence, these two approximationsmore » have distinct and complementary validity domains. Choosing between them requires expertise and is problematic in some inspection configurations. The Physical Theory of Diffraction (PTD) was developed based on both Kirchhoff and GTD in order to combine their advantages and overcome their limitations. The theoretical basis for PTD and its integration in the CIVA modeling approach are discussed in this communication. Several results that validate this newly developed model and illustrate its advantages are presented.« less

  15. Use of the FDA nozzle model to illustrate validation techniques in computational fluid dynamics (CFD) simulations

    PubMed Central

    Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.

    2017-01-01

    A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides a well-defined acceptance criterion, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criterion developed following the threshold approach is not only a function of Comparison Error, E (which is the difference between experiments and simulations) but also takes into account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test. However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in experiments, simulations, and the threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and evaluating medical devices. PMID:28594889

  16. Use of the FDA nozzle model to illustrate validation techniques in computational fluid dynamics (CFD) simulations.

    PubMed

    Hariharan, Prasanna; D'Souza, Gavin A; Horner, Marc; Morrison, Tina M; Malinauskas, Richard A; Myers, Matthew R

    2017-01-01

    A "credible" computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing "model credibility" is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a "threshold-based" validation approach that provides a well-defined acceptance criterion, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing model validity. The validation criterion developed following the threshold approach is not only a function of the Comparison Error, E (the difference between experiments and simulations), but also takes into account the risk to patient safety posed by E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood-contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate whether the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results ("S") of velocity and viscous shear stress were compared with inter-laboratory experimental measurements ("D"). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student's t-test. However, following the threshold-based approach, a Student's t-test comparing |S-D| and |Threshold-S| showed that, relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in the experiments, simulations, and threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and evaluating medical devices.

  17. Model selection for the North American Breeding Bird Survey: A comparison of methods

    USGS Publications Warehouse

    Link, William; Sauer, John; Niven, Daniel

    2017-01-01

    The North American Breeding Bird Survey (BBS) provides data for >420 bird species at multiple geographic scales over 5 decades. Modern computational methods have facilitated the fitting of complex hierarchical models to these data. It is easy to propose and fit new models, but little attention has been given to model selection. Here, we discuss and illustrate model selection using leave-one-out cross validation, and the Bayesian Predictive Information Criterion (BPIC). Cross-validation is enormously computationally intensive; we thus evaluate the performance of the Watanabe-Akaike Information Criterion (WAIC) as a computationally efficient approximation to the BPIC. Our evaluation is based on analyses of 4 models as applied to 20 species covered by the BBS. Model selection based on BPIC provided no strong evidence of one model being consistently superior to the others; for 14/20 species, none of the models emerged as superior. For the remaining 6 species, a first-difference model of population trajectory was always among the best fitting. Our results show that WAIC is not reliable as a surrogate for BPIC. Development of appropriate model sets and their evaluation using BPIC is an important innovation for the analysis of BBS data.
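
    For readers unfamiliar with WAIC, the sketch below shows how it is commonly computed from a matrix of pointwise posterior log-likelihood values; the formula is the standard one and the input matrix here is random filler, not BBS output or code from the paper.

    ```python
    # Minimal WAIC sketch from an (n_draws x n_obs) matrix of pointwise
    # log-likelihood values, as would be saved from an MCMC fit.
    import numpy as np
    from scipy.special import logsumexp

    def waic(log_lik):
        """Watanabe-Akaike Information Criterion on the deviance scale."""
        n_draws = log_lik.shape[0]
        # log pointwise predictive density: log of the posterior-mean likelihood
        lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(n_draws))
        # effective number of parameters: posterior variance of the log-likelihood
        p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
        return -2.0 * (lppd - p_waic)

    rng = np.random.default_rng(0)
    print(waic(rng.normal(-1.0, 0.3, size=(2000, 150))))   # placeholder input
    ```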

  18. Beam dynamics validation of the Halbach Technology FFAG Cell for Cornell-BNL Energy Recovery Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meot, Francois; Tsoupas, N.; Brooks, S.

    The Cornell-BNL Electron Test Accelerator (CBETA), a 150 MeV energy recovery linac (ERL) now under construction at Cornell, employs a fixed-field alternating gradient (FFAG) optics return loop: a single beam line composed of FFAG cells that accepts four recirculated energies. The CBETA FFAG cell uses Halbach permanent-magnet technology, and its design studies have extended over a long period, supported by extensive particle dynamics simulations using computed 3-D field-map models. This approach is discussed and illustrated here based on the final stage of these beam dynamics studies, namely the validation of an ultimate, optimized design of the Halbach cell.

  19. Beam dynamics validation of the Halbach Technology FFAG Cell for Cornell-BNL Energy Recovery Linac

    DOE PAGES

    Meot, Francois; Tsoupas, N.; Brooks, S.; ...

    2018-04-16

    The Cornell-BNL Electron Test Accelerator (CBETA), a 150 MeV energy recovery linac (ERL) now under construction at Cornell, employs a fixed-field alternating gradient (FFAG) optics return loop: a single beam line composed of FFAG cells that accepts four recirculated energies. The CBETA FFAG cell uses Halbach permanent-magnet technology, and its design studies have extended over a long period, supported by extensive particle dynamics simulations using computed 3-D field-map models. This approach is discussed and illustrated here based on the final stage of these beam dynamics studies, namely the validation of an ultimate, optimized design of the Halbach cell.

  20. Measurement of change in health status with Rasch models.

    PubMed

    Anselmi, Pasquale; Vidotto, Giulio; Bettinardi, Ornella; Bertolotti, Giorgio

    2015-02-07

    The traditional approach to the measurement of change presents important drawbacks (no information at the individual level, ordinal scores, variance of the measurement instrument across time points), which Rasch models overcome. The article aims to illustrate the features of the measurement of change with Rasch models. To illustrate the measurement of change using Rasch models, the quantitative data of a longitudinal study of heart-surgery patients (N = 98) were used. The scale "Perception of Positive Change" was used as an example of a measurement instrument. All patients underwent cardiac rehabilitation, individual psychological intervention, and educational intervention. Nineteen patients also attended progressive muscle relaxation group trainings. The scale was administered before and after the interventions. Three Rasch approaches were used. Two separate analyses were run on the data from the two time points to test the invariance of the instrument. An analysis was run on the stacked data from both time points to measure change in a common frame of reference. Results of the latter analysis were compared with those of an analysis that removed the influence of local dependency on patient measures. The t, χ², and F statistics were used for comparing the patient and item measures estimated in the Rasch analyses (a priori α = .05). Infit, Outfit, R, and item Strata were used for investigating Rasch model fit, reliability, and validity of the instrument. Data of all 98 patients were included in the analyses. The instrument was reliable, valid, and substantively unidimensional (Infit, Outfit < 2 for all items, R = .84, item Strata range = 3.93-6.07). Changes in the functioning of the instrument occurred across the two time points, which prevented the use of the two separate analyses to unambiguously measure change. Local dependency had a negligible effect on patient measures (p ≥ .8674). Thirteen patients improved, whereas three worsened. The patients who attended the relaxation group trainings did not report greater improvement than those who did not (p = .1007). Rasch models represent a valid framework for the measurement of change and a useful complement to traditional approaches.

  1. Proposal and Testing of a Methodology for Evaluating an Occupational Health Program at a U.S. Army Installation

    DTIC Science & Technology

    1983-08-01

    the literature for ten of the thirteen peptic ulcer diagnostic criteria most often cited in predeveloped listings compiled by various professional...Dershewitz, M.D., Richard A. Gross, M.D., and John W. Williamson, M.D., "Validating Audit Criteria: An Analytic Approach Illustrated by Peptic Ulcer ...A., M.D.; Gross, Richard A., M.D.; and Williamson, John W., M.D., "Validating Audit Criteria: An Analytic Approach Illustrated by Peptic Ulcer Disease

  2. Large space structure model reduction and control system design based upon actuator and sensor influence functions

    NASA Technical Reports Server (NTRS)

    Yam, Y.; Lang, J. H.; Johnson, T. L.; Shih, S.; Staelin, D. H.

    1983-01-01

    A model reduction procedure based on aggregation with respect to sensor and actuator influences rather than modes is presented for large systems of coupled second-order differential equations. Perturbation expressions which can predict the effects of spillover on both the aggregated and residual states are derived. These expressions lead to the development of control system design constraints which are sufficient to guarantee, to within the validity of the perturbations, that the residual states are not destabilized by control systems designed from the reduced model. A numerical example is provided to illustrate the application of the aggregation and control system design method.

  3. Wires in the soup: quantitative models of cell signaling

    PubMed Central

    Cheong, Raymond; Levchenko, Andre

    2014-01-01

    Living cells are capable of extracting information from their environments and mounting appropriate responses to a variety of associated challenges. The underlying signal transduction networks enabling this can be quite complex, and unraveling them requires sophisticated computational modeling coupled with precise experimentation. Although we are still at the beginning of this process, some recent examples of integrative analysis of cell signaling are very encouraging. This review highlights the case of the NF-κB pathway in order to illustrate how a quantitative model of a signaling pathway can be gradually constructed through continuous experimental validation, and what lessons one might learn from such exercises. PMID:18291655

  4. Are there reliable constitutive laws for dynamic friction?

    PubMed

    Woodhouse, Jim; Putelat, Thibaut; McKay, Andrew

    2015-09-28

    Structural vibration controlled by interfacial friction is widespread, ranging from friction dampers in gas turbines to the motion of violin strings. To predict, control or prevent such vibration, a constitutive description of frictional interactions is inevitably required. A variety of friction models are discussed to assess their scope and validity, in the light of constraints provided by different experimental observations. Three contrasting case studies are used to illustrate how predicted behaviour can be extremely sensitive to the choice of frictional constitutive model, and to explore possible experimental paths to discriminate between and calibrate dynamic friction models over the full parameter range needed for real applications. © 2015 The Author(s).

  5. Tournament Validity: Testing Golfer Competence

    ERIC Educational Resources Information Center

    Sachau, Daniel; Andrews, Lance; Gibson, Bryan; DeNeui, Daniel

    2009-01-01

    The concept of tournament validity was explored in three studies. In the first study, measures of tournament validity, difficulty, and discrimination were introduced. These measures were illustrated with data from the 2003 Professional Golf Association (PGA) Tour. In the second study, the relationship between difficulty and discrimination was…

  6. Microscopic pressure-cooker model for studying molecules in confinement

    NASA Astrophysics Data System (ADS)

    Santamaria, Ruben; Adamowicz, Ludwik; Rosas-Acevedo, Hortensia

    2015-04-01

    A model for a system of a finite number of molecules in confinement is presented and expressions for determining the temperature, pressure, and volume of the system are derived. The present model is a generalisation of the Zwanzig-Langevin model because it includes pressure effects in the system. It also has general validity, preserves the ergodic hypothesis, and provides a formal framework for previous studies of hydrogen clusters in confinement. The application of the model is illustrated by an investigation of a set of prebiotic compounds exposed to varying pressure and temperature. The simulations performed within the model involve the use of a combination of molecular dynamics and density functional theory methods implemented on a computer system with a mixed CPU-GPU architecture.

  7. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  8. Experimental Study of Flexible Plate Vibration Control by Using Two-Loop Sliding Mode Control Strategy

    NASA Astrophysics Data System (ADS)

    Yang, Jingyu; Lin, Jiahui; Liu, Yuejun; Yang, Kang; Zhou, Lanwei; Chen, Guoping

    2017-08-01

    Intelligent control theory has been used in many research fields. Here, a novel modeling method (DROMM) is used for active vibration control of a flexible rectangular plate, and the validity of the new model is confirmed by comparing it with a finite element model. In this paper, taking advantage of the dynamics of the flexible rectangular plate, a two-loop sliding mode (TSM) MIMO approach is introduced for designing a multiple-input multiple-output continuous vibration control system that can overcome uncertainties, disturbances, and unstable dynamics. An illustrative example is given to show the feasibility of the method. Numerical simulations and experiments confirm the effectiveness of the proposed TSM MIMO controller.

  9. Genetic mouse models relevant to schizophrenia: taking stock and looking forward.

    PubMed

    Harrison, Paul J; Pritchett, David; Stumpenhorst, Katharina; Betts, Jill F; Nissen, Wiebke; Schweimer, Judith; Lane, Tracy; Burnet, Philip W J; Lamsa, Karri P; Sharp, Trevor; Bannerman, David M; Tunbridge, Elizabeth M

    2012-03-01

    Genetic mouse models relevant to schizophrenia complement, and have to a large extent supplanted, pharmacological and lesion-based rat models. The main attraction is that they potentially have greater construct validity; however, they share the fundamental limitations of all animal models of psychiatric disorder, and must also be viewed in the context of the uncertain and complex genetic architecture of psychosis. Some of the key issues, including the choice of gene to target, the manner of its manipulation, gene-gene and gene-environment interactions, and phenotypic characterization, are briefly considered in this commentary, illustrated by the relevant papers reported in this special issue. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Model-based clinical dose optimization for phenobarbital in neonates: An illustration of the importance of data sharing and external validation.

    PubMed

    Völler, Swantje; Flint, Robert B; Stolk, Leo M; Degraeuwe, Pieter L J; Simons, Sinno H P; Pokorna, Paula; Burger, David M; de Groot, Ronald; Tibboel, Dick; Knibbe, Catherijne A J

    2017-11-15

    Particularly in the pediatric clinical pharmacology field, data-sharing offers the possibility of making the most of all available data. In this study, we utilize previously collected therapeutic drug monitoring (TDM) data of term and preterm newborns to develop a population pharmacokinetic model for phenobarbital. We externally validate the model using prospective phenobarbital data from an ongoing pharmacokinetic study in preterm neonates. TDM data from 53 neonates (gestational age (GA): 37 (24-42) weeks, bodyweight: 2.7 (0.45-4.5) kg; postnatal age (PNA): 4.5 (0-22) days) contained information on dosage histories, concentrations, and covariate data (including birth weight, actual weight, PNA, postmenstrual age, GA, sex, liver and kidney function, APGAR score). Model development was carried out using NONMEM® 7.3. After assessment of model fit, the model was validated using data of 17 neonates included in the DINO (Drug dosage Improvement in NeOnates) study. Modelling of 229 plasma concentrations, ranging from 3.2 to 75.2 mg/L, resulted in a one-compartment model for phenobarbital. Clearance (CL) and volume of distribution (Vd) for a child with a birthweight of 2.6 kg at PNA day 4.5 were 0.0091 L/h (9%) and 2.38 L (5%), respectively. Birthweight and PNA were the best predictors for CL maturation, increasing CL by 36.7% per kg birthweight and 5.3% per postnatal day of living, respectively. The best predictor for the increase in Vd was actual bodyweight (0.31 L/kg). External validation showed that the model can adequately predict the pharmacokinetics in a prospective study. Data-sharing can help to successfully develop and validate population pharmacokinetic models in neonates. From the results it seems that both PNA and bodyweight are required to guide dosing of phenobarbital in term and preterm neonates. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
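
    To make the quoted covariate effects concrete, the toy calculation below scales the reported typical clearance by the stated percentages; the linear percent-per-unit form and the example covariate values are assumptions for illustration, not the published NONMEM covariate model.

    ```python
    # Toy scaling of the typical clearance quoted in the abstract. The linear
    # percent-per-unit covariate form is an assumption for illustration only;
    # the published model may use a different parameterization.
    CL_REF = 0.0091        # L/h for a 2.6 kg birthweight neonate at PNA 4.5 days

    def clearance(birthweight_kg, pna_days):
        """+36.7% per kg of birthweight, +5.3% per postnatal day (abstract values)."""
        return CL_REF * (1 + 0.367 * (birthweight_kg - 2.6)) \
                      * (1 + 0.053 * (pna_days - 4.5))

    # Hypothetical neonate: 3.0 kg birthweight, postnatal day 7
    print(f"typical CL ~ {clearance(3.0, 7):.4f} L/h")
    ```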

  11. Validation of the alternating conditional estimation algorithm for estimation of flexible extensions of Cox's proportional hazards model with nonlinear constraints on the parameters.

    PubMed

    Wynant, Willy; Abrahamowicz, Michal

    2016-11-01

    Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Towards Application of NASA Standard for Models and Simulations in Aeronautical Design Process

    NASA Astrophysics Data System (ADS)

    Vincent, Luc; Dunyach, Jean-Claude; Huet, Sandrine; Pelissier, Guillaume; Merlet, Joseph

    2012-08-01

    Even powerful computational techniques like simulation are subject to limitations in their validity domain. Consequently, using simulation models requires caution to avoid making biased design decisions for new aeronautical products on the basis of inadequate simulation results. Thus, the fidelity, accuracy, and validity of simulation models shall be monitored in context all along the design phases to build confidence in achieving the goals of modelling and simulation. In the CRESCENDO project, we adapt the Credibility Assessment Scale method from the NASA standard for models and simulations, developed for the space programme, to aircraft design in order to assess the quality of simulations. The proposed eight quality assurance metrics aggregate information to indicate the level of confidence in the results. They are displayed in a management dashboard and can secure design trade-off decisions at programme milestones. The application of this technique is illustrated in an aircraft design context with a specific thermal finite element analysis. This use case shows how to judge the fitness-for-purpose of simulation as a virtual testing means and then green-light the continuation of the Simulation Lifecycle Management (SLM) process.

  13. Scoring Rubric Development: Validity and Reliability.

    ERIC Educational Resources Information Center

    Moskal, Barbara M.; Leydens, Jon A.

    2000-01-01

    Provides clear definitions of the terms "validity" and "reliability" in the context of developing scoring rubrics and illustrates these definitions through examples. Also clarifies how validity and reliability may be addressed in the development of scoring rubrics, defined as descriptive scoring schemes developed to guide the analysis of the…

  14. Whole-body Motion Planning with Simple Dynamics and Full Kinematics

    DTIC Science & Technology

    2014-08-01

    optimizations can take an excessively long time to run, and may also suffer from local minima. Thus, this approach can become intractable for complex robots...motions like jumping and climbing. Additionally, the point-mass model suggests that the centroidal angular momentum is zero, which is not valid for motions...use in the DARPA Robotics Challenge. A. Jumping Our first example is to command the robot to jump off the ground, as illustrated in Fig.4. We assign

  15. Extended Full Computation-Tree Logic with Sequence Modal Operator: Representing Hierarchical Tree Structures

    NASA Astrophysics Data System (ADS)

    Kamide, Norihiro; Kaneiwa, Ken

    An extended full computation-tree logic, CTLS*, is introduced as a Kripke semantics with a sequence modal operator. This logic can appropriately represent hierarchical tree structures where sequence modal operators in CTLS* are applied to tree structures. An embedding theorem of CTLS* into CTL* is proved. The validity, satisfiability and model-checking problems of CTLS* are shown to be decidable. An illustrative example of biological taxonomy is presented using CTLS* formulas.

  16. Dealing with Diversity in Computational Cancer Modeling

    PubMed Central

    Johnson, David; McKeever, Steve; Stamatakos, Georgios; Dionysiou, Dimitra; Graf, Norbert; Sakkalis, Vangelis; Marias, Konstantinos; Wang, Zhihui; Deisboeck, Thomas S.

    2013-01-01

    This paper discusses the need for interconnecting computational cancer models from different sources and scales within clinically relevant scenarios to increase the accuracy of the models and speed up their clinical adaptation, validation, and eventual translation. We briefly review current interoperability efforts drawing upon our experiences with the development of in silico models for predictive oncology within a number of European Commission Virtual Physiological Human initiative projects on cancer. A clinically relevant scenario, addressing brain tumor modeling that illustrates the need for coupling models from different sources and levels of complexity, is described. General approaches to enabling interoperability using XML-based markup languages for biological modeling are reviewed, concluding with a discussion on efforts towards developing cancer-specific XML markup to couple multiple component models for predictive in silico oncology. PMID:23700360

  17. Structural and thermal response of 30 cm diameter ion thruster optics

    NASA Technical Reports Server (NTRS)

    Macrae, G. S.; Zavesky, R. J.; Gooder, S. T.

    1989-01-01

    Tabular and graphical data are presented which are intended for use in calibrating and validating structural and thermal models of ion thruster optics. A 30 cm diameter, two electrode, mercury ion thruster was operated using two different electrode assembly designs. With no beam extraction, the transient and steady state temperature profiles and center electrode gaps were measured for three discharge powers. The data showed that the electrode mount design had little effect on the temperatures, but significantly impacted the motion of the electrode center. Equilibrium electrode gaps increased with one design and decreased with the other. Equilibrium displacements in excess of 0.5 mm and gap changes of 0.08 mm were measured at 450 W discharge power. Variations in equilibrium gaps were also found among assemblies of the same design. The presented data illustrate the necessity for high fidelity ion optics models and development of experimental techniques to allow their validation.

  18. Making Predictions in a Changing World: The Benefits of Individual-Based Ecology

    PubMed Central

    Stillman, Richard A.; Railsback, Steven F.; Giske, Jarl; Berger, Uta; Grimm, Volker

    2014-01-01

    Ecologists urgently need a better ability to predict how environmental change affects biodiversity. We examine individual-based ecology (IBE), a research paradigm that promises a better predictive ability by using individual-based models (IBMs) to represent ecological dynamics as arising from how individuals interact with their environment and with each other. A key advantage of IBMs is that the basis for predictions—fitness maximization by individual organisms—is more general and reliable than the empirical relationships that other models depend on. Case studies illustrate the usefulness and predictive success of long-term IBE programs. The pioneering programs had three phases: conceptualization, implementation, and diversification. Continued validation of models runs throughout these phases. The breakthroughs that make IBE more productive include standards for describing and validating IBMs, improved and standardized theory for individual traits and behavior, software tools, and generalized instead of system-specific IBMs. We provide guidelines for pursuing IBE and a vision for future IBE research. PMID:26955076

  19. What Do You Think You Are Measuring? A Mixed-Methods Procedure for Assessing the Content Validity of Test Items and Theory-Based Scaling

    PubMed Central

    Koller, Ingrid; Levenson, Michael R.; Glück, Judith

    2017-01-01

    The valid measurement of latent constructs is crucial for psychological research. Here, we present a mixed-methods procedure for improving the precision of construct definitions, determining the content validity of items, evaluating the representativeness of items for the target construct, generating test items, and analyzing items on a theoretical basis. To illustrate the mixed-methods content-scaling-structure (CSS) procedure, we analyze the Adult Self-Transcendence Inventory, a self-report measure of wisdom (ASTI, Levenson et al., 2005). A content-validity analysis of the ASTI items was used as the basis of psychometric analyses using multidimensional item response models (N = 1215). We found that the new procedure produced important suggestions concerning five subdimensions of the ASTI that were not identifiable using exploratory methods. The study shows that the application of the suggested procedure leads to a deeper understanding of latent constructs. It also demonstrates the advantages of theory-based item analysis. PMID:28270777

  20. Social Validity: Perceptions of Check and Connect with Early Literacy Support

    ERIC Educational Resources Information Center

    Miltich Lyst, Aimee; Gabriel, Stacey; O'Shaughnessy, Tam E.; Meyers, Joel; Meyers, Barbara

    2005-01-01

    This article underscores the potential advantages of qualitative methods to illustrate the depth and complexity of social validity. This investigation evaluates the social validity of Check and Connect with Early Literacy Support (CCEL), through the perspectives of teachers and caregivers whose children participated in the intervention. Teachers…

  1. Content Validity in Evaluation and Policy-Relevant Research.

    ERIC Educational Resources Information Center

    Mark, Melvin M.; And Others

    1985-01-01

    The role of content validity in policy-relevant research is illustrated in a study contrasting results of surveys concerning public opinion toward gun control. Inadequate content validity threatened inferences about the overall level of support for gun control, but not about opinion difference between sexes or respondents of varying political…

  2. Online Cross-Validation-Based Ensemble Learning

    PubMed Central

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2017-01-01

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. PMID:28474419
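
    The toy sketch below illustrates the online cross-validation idea in its simplest form: each incoming batch first scores every candidate learner (acting as held-out data) and only then updates them. The two candidate learners and the data stream are invented for illustration and are not the estimators studied in the paper.

    ```python
    # Toy online cross-validation selector: score candidates on each new batch
    # before updating them, and track cumulative held-out loss per candidate.
    import numpy as np

    class RunningMean:
        """Predicts the running mean of y seen so far."""
        def __init__(self): self.n, self.mean = 0, 0.0
        def predict(self, X): return np.full(len(X), self.mean)
        def update(self, X, y):
            for v in y:
                self.n += 1
                self.mean += (v - self.mean) / self.n

    class ZeroPredictor:
        """A deliberately poor candidate: always predicts zero."""
        def predict(self, X): return np.zeros(len(X))
        def update(self, X, y): pass

    learners, cum_loss = [RunningMean(), ZeroPredictor()], np.zeros(2)
    rng = np.random.default_rng(1)
    for _ in range(20):                                  # simulated data stream
        X, y = rng.normal(size=(32, 3)), rng.normal(1.0, 0.5, size=32)
        for i, m in enumerate(learners):                 # score before updating
            cum_loss[i] += np.mean((m.predict(X) - y) ** 2)
        for m in learners:
            m.update(X, y)
    print("online-CV selected learner:", type(learners[int(np.argmin(cum_loss))]).__name__)
    ```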

  3. Validation of the instrument of health literacy competencies for Chinese-speaking health professionals.

    PubMed

    Chang, Li-Chun; Chen, Yu-Chi; Liao, Li-Ling; Wu, Fei Ling; Hsieh, Pei-Lin; Chen, Hsiao-Jung

    2017-01-01

    The study aimed to illustrate the constructs and test the psychometric properties of an instrument of health literacy competencies (IOHLC) for health professionals. A multi-phase questionnaire development method was used to develop the scale. The categorization of the knowledge and practice domains achieved consensus through a modified Delphi process. To reduce the number of items, the 92-item IOHLC was psychometrically evaluated through internal consistency, Rasch modeling, and two-stage factor analysis. In total, 736 practitioners, including nurses, nurse practitioners, health educators, case managers, and dieticians completed the 92-item IOHLC online from May 2012 to January 2013. The final version of the IOHLC covered 9 knowledge items and 40 skill items containing 9 dimensions, with good model fit, and explaining 72% of total variance. All domains had acceptable internal consistency and discriminant validity. The tool in this study is the first to verify health literacy competencies rigorously. Moreover, through psychometric testing, the 49-item IOHLC demonstrates adequate reliability and validity. The IOHLC may serve as a reference for the theoretical and in-service training of Chinese-speaking individuals' health literacy competencies.

  4. Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling.

    PubMed

    Wu, Hulin; Lu, Tao; Xue, Hongqi; Liang, Hua

    2014-04-02

    The gene regulation network (GRN) is a high-dimensional complex system, which can be represented by various mathematical or statistical models. The ordinary differential equation (ODE) model is one of the popular dynamic GRN models. High-dimensional linear ODE models have been proposed to identify GRNs, but with a limitation of the linear regulation effect assumption. In this article, we propose a sparse additive ODE (SA-ODE) model, coupled with ODE estimation methods and adaptive group LASSO techniques, to model dynamic GRNs that could flexibly deal with nonlinear regulation effects. The asymptotic properties of the proposed method are established and simulation studies are performed to validate the proposed approach. An application example for identifying the nonlinear dynamic GRN of T-cell activation is used to illustrate the usefulness of the proposed method.

  5. Three validation metrics for automated probabilistic image segmentation of brain tumours

    PubMed Central

    Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.

    2005-01-01

    The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts’ manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions of incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
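
    As a small illustration of one of the three metrics, the sketch below thresholds a probabilistic segmentation and computes the Dice similarity coefficient against a reference mask over a grid of thresholds; the data are randomly generated stand-ins, not the MR cases analysed in the paper.

    ```python
    # Dice similarity coefficient of a thresholded probabilistic segmentation
    # against a (synthetic) reference mask, swept over candidate thresholds.
    import numpy as np

    def dice(binary_seg, reference):
        binary_seg, reference = binary_seg.astype(bool), reference.astype(bool)
        intersection = np.logical_and(binary_seg, reference).sum()
        denom = binary_seg.sum() + reference.sum()
        return 2.0 * intersection / denom if denom else 1.0

    rng = np.random.default_rng(42)
    prob_map = rng.random((64, 64))                     # stand-in probabilistic output
    reference = (prob_map + rng.normal(0, 0.2, prob_map.shape)) > 0.5   # noisy "truth"
    best = max((dice(prob_map > t, reference), t) for t in np.linspace(0.1, 0.9, 17))
    print(f"best Dice {best[0]:.3f} at threshold {best[1]:.2f}")
    ```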

  6. A methodology for ecosystem-scale modeling of selenium

    USGS Publications Warehouse

    Presser, T.S.; Luoma, S.N.

    2010-01-01

    The main route of exposure for selenium (Se) is dietary, yet regulations lack biologically based protocols for evaluations of risk. We propose here an ecosystem-scale model that conceptualizes and quantifies the variables that determine how Se is processed from water through diet to predators. This approach uses biogeochemical and physiological factors from laboratory and field studies and considers loading, speciation, transformation to particulate material, bioavailability, bioaccumulation in invertebrates, and trophic transfer to predators. Validation of the model is through data sets from 29 historic and recent field case studies of Se-exposed sites. The model links Se concentrations across media (water, particulate, tissue of different food web species). It can be used to forecast toxicity under different management or regulatory proposals or as a methodology for translating a fish-tissue (or other predator tissue) Se concentration guideline to a dissolved Se concentration. The model illustrates some critical aspects of implementing a tissue criterion: 1) the choice of fish species determines the food web through which Se should be modeled, 2) the choice of food web is critical because the particulate material to prey kinetics of bioaccumulation differs widely among invertebrates, 3) the characterization of the type and phase of particulate material is important to quantifying Se exposure to prey through the base of the food web, and 4) the metric describing partitioning between particulate material and dissolved Se concentrations allows determination of a site-specific dissolved Se concentration that would be responsible for that fish body burden in the specific environment. The linked approach illustrates that environmentally safe dissolved Se concentrations will differ among ecosystems depending on the ecological pathways and biogeochemical conditions in that system. Uncertainties and model sensitivities can be directly illustrated by varying exposure scenarios based on site-specific knowledge. The model can also be used to facilitate site-specific regulation and to present generic comparisons to illustrate limitations imposed by ecosystem setting and inhabitants. Used optimally, the model provides a tool for framing a site-specific ecological problem or occurrence of Se exposure, quantify exposure within that ecosystem, and narrow uncertainties about how to protect it by understanding the specifics of the underlying system ecology, biogeochemistry, and hydrology. © 2010 SETAC.

  7. A methodology for ecosystem-scale modeling of selenium.

    PubMed

    Presser, Theresa S; Luoma, Samuel N

    2010-10-01

    The main route of exposure for selenium (Se) is dietary, yet regulations lack biologically based protocols for evaluations of risk. We propose here an ecosystem-scale model that conceptualizes and quantifies the variables that determine how Se is processed from water through diet to predators. This approach uses biogeochemical and physiological factors from laboratory and field studies and considers loading, speciation, transformation to particulate material, bioavailability, bioaccumulation in invertebrates, and trophic transfer to predators. Validation of the model is through data sets from 29 historic and recent field case studies of Se-exposed sites. The model links Se concentrations across media (water, particulate, tissue of different food web species). It can be used to forecast toxicity under different management or regulatory proposals or as a methodology for translating a fish-tissue (or other predator tissue) Se concentration guideline to a dissolved Se concentration. The model illustrates some critical aspects of implementing a tissue criterion: 1) the choice of fish species determines the food web through which Se should be modeled, 2) the choice of food web is critical because the particulate material to prey kinetics of bioaccumulation differs widely among invertebrates, 3) the characterization of the type and phase of particulate material is important to quantifying Se exposure to prey through the base of the food web, and 4) the metric describing partitioning between particulate material and dissolved Se concentrations allows determination of a site-specific dissolved Se concentration that would be responsible for that fish body burden in the specific environment. The linked approach illustrates that environmentally safe dissolved Se concentrations will differ among ecosystems depending on the ecological pathways and biogeochemical conditions in that system. Uncertainties and model sensitivities can be directly illustrated by varying exposure scenarios based on site-specific knowledge. The model can also be used to facilitate site-specific regulation and to present generic comparisons to illustrate limitations imposed by ecosystem setting and inhabitants. Used optimally, the model provides a tool for framing a site-specific ecological problem or occurrence of Se exposure, quantify exposure within that ecosystem, and narrow uncertainties about how to protect it by understanding the specifics of the underlying system ecology, biogeochemistry, and hydrology. © 2010 SETAC.
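
    A back-of-the-envelope sketch of the kind of translation the abstract describes (fish-tissue guideline to allowable dissolved concentration) is given below; the partitioning coefficient, trophic transfer factors, and guideline value are placeholders chosen for illustration, not site-specific or published numbers.

    ```python
    # Illustrative back-calculation in the spirit of the ecosystem-scale approach:
    # translate a fish-tissue Se guideline into an allowable dissolved Se
    # concentration through an assumed particulate/dissolved partitioning (Kd)
    # and trophic transfer factors (TTFs). All numeric values are placeholders.
    def allowed_dissolved_se(fish_guideline_ug_g, ttf_fish, ttf_invert, kd_l_per_kg):
        particulate_ug_g = fish_guideline_ug_g / (ttf_fish * ttf_invert)
        # Kd relates particulate (ug/kg) to dissolved (ug/L): Kd = Cpart / Cwater
        return particulate_ug_g * 1000.0 / kd_l_per_kg   # ug/L

    print(f"{allowed_dissolved_se(8.0, ttf_fish=1.1, ttf_invert=2.8, kd_l_per_kg=1000):.2f} ug/L")
    ```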

  8. System identification methods for aircraft flight control development and validation

    NASA Technical Reports Server (NTRS)

    Tischler, Mark B.

    1995-01-01

    System-identification methods compose a mathematical model, or series of models, from measurements of inputs and outputs of dynamic systems. The extracted models allow the characterization of the response of the overall aircraft or component subsystem behavior, such as actuators and on-board signal processing algorithms. This paper discusses the use of frequency-domain system-identification methods for the development and integration of aircraft flight-control systems. The extraction and analysis of models of varying complexity from nonparametric frequency-responses to transfer-functions and high-order state-space representations is illustrated using the Comprehensive Identification from FrEquency Responses (CIFER) system-identification facility. Results are presented for test data of numerous flight and simulation programs at the Ames Research Center including rotorcraft, fixed-wing aircraft, advanced short takeoff and vertical landing (ASTOVL), vertical/short takeoff and landing (V/STOL), tiltrotor aircraft, and rotor experiments in the wind tunnel. Excellent system characterization and dynamic response prediction are achieved for this wide class of systems. Examples illustrate the role of system-identification technology in providing an integrated flow of dynamic response data around the entire life-cycle of aircraft development from initial specifications, through simulation and bench testing, and into flight-test optimization.
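
    The sketch below shows the generic nonparametric first step of frequency-domain identification (estimating a frequency response from input/output records via spectral densities); the first-order system is synthetic, and CIFER's own processing (chirp-z transforms, window selection, composite windowing) is not reproduced here.

    ```python
    # Nonparametric frequency-response estimate H(f) = Gxy(f)/Gxx(f) from
    # simulated input/output records of a first-order "actuator-like" system.
    import numpy as np
    from scipy import signal

    fs = 100.0
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(0)
    u = rng.normal(size=t.size)                       # broadband excitation
    # continuous-time plant 2/(s + 2), discretized with the bilinear transform
    y = signal.lfilter(*signal.bilinear([2.0], [1.0, 2.0], fs), u)
    y += 0.05 * rng.normal(size=t.size)               # measurement noise

    f, Gxy = signal.csd(u, y, fs=fs, nperseg=1024)    # cross-spectral density
    _, Gxx = signal.welch(u, fs=fs, nperseg=1024)     # input auto-spectrum
    H = Gxy / Gxx                                     # frequency-response estimate
    print(f"|H| at ~0.3 Hz: {np.abs(H[np.argmin(np.abs(f - 0.3))]):.2f}")
    ```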

  9. Multivariate normality

    NASA Technical Reports Server (NTRS)

    Crutcher, H. L.; Falls, L. W.

    1976-01-01

    Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
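
    A minimal sketch of a chi-square-based check of multivariate normality is given below: under the multivariate normal model, squared Mahalanobis distances follow (approximately, once the mean and covariance are estimated) a chi-square distribution with p degrees of freedom. The simulated data, sample size, and the Kolmogorov-Smirnov comparison are illustrative choices, not the report's exact procedure.

    ```python
    # Squared Mahalanobis distances compared against chi-square(p) as a rough
    # multivariate-normality check; the data here are simulated, n=101, p=3.
    import numpy as np
    from scipy import stats

    def mahalanobis_sq(X):
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        diff = X - mu
        return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

    rng = np.random.default_rng(0)
    X = rng.multivariate_normal([0.0, 0.0, 0.0], np.eye(3), size=101)
    d2 = mahalanobis_sq(X)
    # Approximate comparison only: estimating mu and cov makes d2 not exactly
    # chi-square; a binned chi-square goodness-of-fit test is an alternative.
    print(stats.kstest(d2, cdf=stats.chi2(df=X.shape[1]).cdf))
    ```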

  10. Requirements for facilities and measurement techniques to support CFD development for hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Sellers, William L., III; Dwoyer, Douglas L.

    1992-01-01

    The design of a hypersonic aircraft poses unique challenges to the engineering community. Problems with duplicating flight conditions in ground based facilities have made performance predictions risky. Computational fluid dynamics (CFD) has been proposed as an additional means of providing design data. At the present time, CFD codes are being validated based on sparse experimental data and then used to predict performance at flight conditions with generally unknown levels of uncertainty. This paper will discuss the facility and measurement techniques that are required to support CFD development for the design of hypersonic aircraft. Illustrations are given of recent success in combining experimental and direct numerical simulation in CFD model development and validation for hypersonic perfect gas flows.

  11. A Unified Fault-Tolerance Protocol

    NASA Technical Reports Server (NTRS)

    Miner, Paul; Gedser, Alfons; Pike, Lee; Maddalon, Jeffrey

    2004-01-01

    Davies and Wakerly show that Byzantine fault tolerance can be achieved by a cascade of broadcasts and middle value select functions. We present an extension of the Davies and Wakerly protocol, the unified protocol, and its proof of correctness. We prove that it satisfies validity and agreement properties for communication of exact values. We then introduce bounded communication error into the model. Inexact communication is inherent for clock synchronization protocols. We prove that validity and agreement properties hold for inexact communication, and that exact communication is a special case. As a running example, we illustrate the unified protocol using the SPIDER family of fault-tolerant architectures. In particular we demonstrate that the SPIDER interactive consistency, distributed diagnosis, and clock synchronization protocols are instances of the unified protocol.
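
    As a concrete picture of the middle-value-select building block mentioned above, the sketch below returns the middle element of the sorted received values, which stays within the range of correct sources when at most f of the values are faulty; this is a generic illustration, not the SPIDER or unified-protocol implementation.

    ```python
    # Generic middle-value-select step: with a bounded number of faulty sources,
    # the middle of the sorted values lies within the range of correct values.
    def middle_value_select(values):
        """Return the middle element of the sorted values (lower median)."""
        ordered = sorted(values)
        return ordered[(len(ordered) - 1) // 2]

    # Example: four relays forward a clock reading, one of them Byzantine.
    readings = [100.2, 100.4, 100.3, 512.0]   # 512.0 is the faulty value
    print(middle_value_select(readings))       # stays within the correct range
    ```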

  12. Strengthening the SDP Relaxation of AC Power Flows with Convex Envelopes, Bound Tightening, and Valid Inequalities

    DOE PAGES

    Coffrin, Carleton James; Hijazi, Hassan L; Van Hentenryck, Pascal R

    2016-12-01

    Here, this work revisits the Semidefinite Programming (SDP) relaxation of the AC power flow equations in light of recent results illustrating the benefits of bounds propagation, valid inequalities, and the Convex Quadratic (QC) relaxation. By integrating all of these results into the SDP model, a new hybrid relaxation is proposed, which combines the benefits of all of these recent works. This strengthened SDP formulation is evaluated on 71 AC Optimal Power Flow test cases from the NESTA archive and is shown to have an optimality gap of less than 1% on 63 cases. This new hybrid relaxation closes 50% of the open cases considered, leaving only 8 for future investigation.

  13. Backstepping Design of Adaptive Neural Fault-Tolerant Control for MIMO Nonlinear Systems.

    PubMed

    Gao, Hui; Song, Yongduan; Wen, Changyun

    In this paper, an adaptive controller is developed for a class of multi-input and multi-output nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in . In our control design, the upper bound of the NN modeling error and the gains of the external disturbance are characterized by unknown upper bounds, which is more rational for establishing stability in adaptive NN control. Filter-based modification terms are used in the update laws of the unknown parameters to improve the transient performance. Finally, fault-tolerant control is developed to accommodate actuator failure. An illustrative example applying the adaptive controller to control a rigid robot arm shows the validation of the proposed controller.

  14. Point of truth calibration for disease prioritisation-A case study of prioritisation of exotic diseases for the pig industry in Australia.

    PubMed

    Brookes, V J; Barry, S C; Hernández-Jover, M; Ward, M P

    2017-04-01

    The objective of this study was to trial point of truth calibration (POTCal) as a novel method for disease prioritisation. To illustrate the application of this method, we used a previously described case-study of prioritisation of exotic diseases for the pig industry in Australia. Disease scenarios were constructed from criteria which described potential impact and pig-producers were asked to score the importance of each scenario. POTCal was used to model participants' estimates of disease importance as a function of the criteria, to derive a predictive model to prioritise a range of exotic diseases. The best validation of producers' estimates was achieved using a model derived from all responses. The highest weighted criteria were attack rate, case fatality rate and market loss, and the highest priority diseases were the vesicular diseases followed by swine fevers and zoonotic encephalitides. Comparison of results with a previous study in which probabilistic inversion was used to prioritise diseases for the same group of producers highlighted differences between disease prioritisation methods. Overall, this study demonstrated that POTCal can be used for disease prioritisation. An advantage of POTCal is that valid models can be developed that reflect decision-makers' heuristics. Specifically, this evaluation of the use of POTCal in animal health illustrates how the judgements of participants can be incorporated into a decision-making process. Further research is needed to investigate the influence of scenarios presented to participants during POTCal evaluations, and the robustness of this approach applied to different disease issues (e.g. exotic versus endemic) and production types (e.g. intensive versus extensive). To our knowledge, this is the first report of the use of POTCal for disease prioritisation. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  15. E&V (Evaluation and Validation) Reference Manual, Version 1.1

    DTIC Science & Technology

    1988-10-20

    E&V. This model will allow the user to arrive at E&V techniques through many different paths, and provides a means to extract useful information...electronically (preferred) to szymansk@ajpo.sei.cmu.edu or by regular mail to Mr. Raymond Szymanski, AFWAL/AAAF, Wright Patterson AFB, OH 45433-6543. ES-2 E&V...1, 1-3 illustrate the types of information to be extracted from each document. Chapter 2 provides a more detailed description of the structure and

  16. A systemic approach for modeling biological evolution using Parallel DEVS.

    PubMed

    Heredia, Daniel; Sanz, Victorino; Urquia, Alfonso; Sandín, Máximo

    2015-08-01

    A new model for studying the evolution of living organisms is proposed in this manuscript. The proposed model is based on a non-neodarwinian systemic approach. The model is focused on considering several controversies and open discussions about modern evolutionary biology. Additionally, a simplification of the proposed model, named EvoDEVS, has been mathematically described using the Parallel DEVS formalism and implemented as a computer program using the DEVSLib Modelica library. EvoDEVS serves as an experimental platform to study different conditions and scenarios by means of computer simulations. Two preliminary case studies are presented to illustrate the behavior of the model and validate its results. EvoDEVS is freely available at http://www.euclides.dia.uned.es. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  17. Nonlinear system modeling based on bilinear Laguerre orthonormal bases.

    PubMed

    Garna, Tarek; Bouzrara, Kais; Ragot, José; Messaoud, Hassani

    2013-05-01

    This paper proposes a new representation of the discrete bilinear model by developing its coefficients associated with the input, the output, and the crossed product on three independent Laguerre orthonormal bases. Compared to the classical bilinear model, the resulting model, entitled the bilinear-Laguerre model, ensures a significant reduction in the number of parameters as well as a simple recursive representation. However, such reduction is still constrained by an optimal choice of the Laguerre pole characterizing each basis. To do so, we develop a pole optimization algorithm which constitutes an extension of that proposed by Tanguy et al. The bilinear-Laguerre model as well as the proposed pole optimization algorithm are illustrated and tested on numerical simulations and validated on the Continuous Stirred Tank Reactor (CSTR) system. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Validation of different pediatric triage systems in the emergency department

    PubMed Central

    Aeimchanbanjong, Kanokwan; Pandee, Uthen

    2017-01-01

    BACKGROUND: Triage in children seems to be more challenging than in adults because of their different responses to physiological and psychosocial stressors. This study aimed to determine the best triage system in the pediatric emergency department. METHODS: This was a prospective observational study divided into two phases. The first phase determined the inter-rater reliability of five triage systems: the Manchester Triage System (MTS), the Emergency Severity Index (ESI) version 4, the Pediatric Canadian Triage and Acuity Scale (CTAS), the Australasian Triage Scale (ATS), and the Ramathibodi Triage System (RTS), as rated by triage nurses and pediatric residents. In the second phase, to analyze the validity of each triage system, patients were categorized into two groups, i.e., high-acuity patients (triage levels 1-2) and low-acuity patients (triage levels 3-5), and triage acuity was compared with actual admission. RESULTS: In phase I, RTS showed almost perfect inter-rater reliability, with a kappa of 1.0 (P<0.01). ESI and CTAS showed good inter-rater reliability, with kappas of 0.8-0.9 (P<0.01), while ATS and MTS showed moderate to good inter-rater reliability, with kappas of 0.5-0.7 (P<0.01). In phase II, we included 1 041 participants with an average age of 4.7±4.2 years, of whom 55% were male and 45% were female; 32% of the participants had underlying diseases, and 123 (11.8%) patients were admitted. ESI showed the most appropriate predictive ability for admission, with a sensitivity of 52%, a specificity of 81%, and an AUC of 0.78 (95%CI 0.74-0.81). CONCLUSION: RTS showed almost perfect inter-rater reliability, while ESI and CTAS showed good inter-rater reliability. ESI showed appropriate validity as a triage system. PMID:28680520
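
    The phase-II validity computation amounts to dichotomizing triage level and cross-tabulating it against admission; the tiny sketch below shows that step with invented example data (the kappa statistics of phase I and the AUC computation are omitted).

    ```python
    # Sensitivity and specificity of a dichotomized triage level (1-2 = high
    # acuity) against actual admission, on a made-up example data set.
    def sens_spec(triage_levels, admitted):
        high = [lvl <= 2 for lvl in triage_levels]
        tp = sum(h and a for h, a in zip(high, admitted))
        fn = sum((not h) and a for h, a in zip(high, admitted))
        tn = sum((not h) and (not a) for h, a in zip(high, admitted))
        fp = sum(h and (not a) for h, a in zip(high, admitted))
        return tp / (tp + fn), tn / (tn + fp)

    levels   = [1, 3, 2, 5, 4, 2, 3, 1]
    admitted = [True, False, True, False, False, False, True, True]
    sens, spec = sens_spec(levels, admitted)
    print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
    ```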

  19. Flood loss modelling with FLF-IT: a new flood loss function for Italian residential structures

    NASA Astrophysics Data System (ADS)

    Hasanzadeh Nafari, Roozbeh; Amadio, Mattia; Ngo, Tuan; Mysiak, Jaroslav

    2017-07-01

    The damage triggered by different flood events costs the Italian economy millions of euros each year. This cost is likely to increase in the future due to climate variability and economic development. In order to avoid or reduce such significant financial losses, risk management requires tools which can provide a reliable estimate of potential flood impacts across the country. Flood loss functions are an internationally accepted method for estimating physical flood damage in urban areas. In this study, we derived a new flood loss function for Italian residential structures (FLF-IT), on the basis of empirical damage data collected from a recent flood event in the region of Emilia-Romagna. The function was developed based on a new Australian approach (FLFA), which represents the confidence limits that exist around the parameterized functional depth-damage relationship. After model calibration, the performance of the model was validated for the prediction of loss ratios and absolute damage values. It was also contrasted with an uncalibrated relative model with frequent usage in Europe. In this regard, a three-fold cross-validation procedure was carried out over the empirical sample to measure the range of uncertainty from the actual damage data. The predictive capability has also been studied for some sub-classes of water depth. The validation procedure shows that the newly derived function performs well (no bias and only 10 % mean absolute error), especially when the water depth is high. Results of these validation tests illustrate the importance of model calibration. The advantages of the FLF-IT model over other Italian models include calibration with empirical data, consideration of the epistemic uncertainty of data, and the ability to change parameters based on building practices across Italy.
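
    In the same spirit as the validation described above, the sketch below cross-validates a depth-damage curve with three folds; the exponential functional form, the synthetic damage sample, and the error metric are assumptions for illustration and are not the FLFA/FLF-IT parameterization.

    ```python
    # Three-fold cross-validation of a simple depth-damage curve fitted to a
    # synthetic sample of (water depth, loss ratio) pairs.
    import numpy as np

    def fit_b(depth, loss):
        """Grid-search fit of loss_ratio = 1 - exp(-b * depth)."""
        bs = np.linspace(0.05, 2.0, 200)
        sse = [np.sum((loss - (1 - np.exp(-b * depth))) ** 2) for b in bs]
        return bs[int(np.argmin(sse))]

    rng = np.random.default_rng(3)
    depth = rng.uniform(0.1, 3.0, 120)                                   # metres
    loss = np.clip(1 - np.exp(-0.6 * depth) + rng.normal(0, 0.05, 120), 0, 1)

    folds, mae = np.array_split(rng.permutation(len(depth)), 3), []
    for k in range(3):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(3) if j != k])
        b = fit_b(depth[train], loss[train])
        mae.append(np.mean(np.abs(loss[test] - (1 - np.exp(-b * depth[test])))))
    print(f"3-fold mean absolute error: {np.mean(mae):.3f}")
    ```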

  20. Review and analysis of the DNW/Model 360 rotor acoustic data base

    NASA Technical Reports Server (NTRS)

    Zinner, R. A.; Boxwell, D. A.; Spencer, R. H.

    1989-01-01

    A comprehensive model rotor aeroacoustic data base was collected in a large anechoic wind tunnel in 1986. Twenty-six microphones were positioned around the azimuth to collect acoustic data for approximately 150 different test conditions. A dynamically scaled, blade-pressure-instrumented model of the forward rotor of the BH360 helicopter simultaneously provided blade pressures for correlation with the acoustic data. High-speed impulsive noise, blade-vortex interaction noise, low-frequency noise, and broadband noise were all captured in this extensive data base. Trends are presented for each noise source, with important parametric variations. The purpose of this paper is to introduce this data base and illustrate its potential for predictive code validation.

  1. Augmented twin-nonlinear two-box behavioral models for multicarrier LTE power amplifiers.

    PubMed

    Hammi, Oualid

    2014-01-01

    A novel class of behavioral models is proposed for LTE-driven Doherty power amplifiers with strong memory effects. The proposed models, labeled augmented twin-nonlinear two-box models, are built by cascading a highly nonlinear memoryless function with a mildly nonlinear memory polynomial with cross terms. Experimental validation on gallium nitride based Doherty power amplifiers illustrates the accuracy enhancement and complexity reduction achieved by the proposed models. When strong memory effects are observed, the augmented twin-nonlinear two-box models can improve the normalized mean square error by up to 3 dB for the same number of coefficients when compared to state-of-the-art twin-nonlinear two-box models. Furthermore, the augmented twin-nonlinear two-box models lead to the same performance as previously reported twin-nonlinear two-box models while requiring up to 80% fewer coefficients.

  2. Dragons and Dinosaurs: Directing Inquiry in Biology Using the Notions of "Milieu" and "Validation"

    ERIC Educational Resources Information Center

    Achiam, Marianne; Solberg, Jan; Evans, Robert

    2013-01-01

    This article describes how inquiry teaching can be directed towards specific content learning goals while allowing for student exploration and validation of hypotheses. Drawing from the Theory of Didactical Situations, the concepts of "milieu" and "validation" are illustrated through two sample biology lessons designed to engage and challenge…

  3. Truth and Evidence in Validity Theory

    ERIC Educational Resources Information Center

    Borsboom, Denny; Markus, Keith A.

    2013-01-01

    According to Kane (this issue), "the validity of a proposed interpretation or use depends on how well the evidence supports" the claims being made. Because truth and evidence are distinct, this means that the validity of a test score interpretation could be high even though the interpretation is false. As an illustration, we discuss the case of…

  4. Pricing end-of-life components

    NASA Astrophysics Data System (ADS)

    Vadde, Srikanth; Kamarthi, Sagar V.; Gupta, Surendra M.

    2005-11-01

    The main objective of a product recovery facility (PRF) is to disassemble end-of-life (EOL) products and sell the reclaimed components for reuse and the recovered materials in second-hand markets. Variability in the inflow of EOL products and fluctuation in demand for reusable components contribute to the volatility in inventory levels. To stay profitable, PRFs must manage their inventory by regulating prices appropriately to minimize holding costs. This work presents two deterministic pricing models for a PRF bounded by environmental regulations. In the first model, the demand is price dependent; in the second, the demand is both price and time dependent. The models are valid for a single component with no inventory replenishment during the selling horizon. Numerical examples are presented to illustrate the models.

  5. Considerations for ex vivo thermal tissue testing exemplified using the fresh porcine longissimus muscle model for endometrial ablation

    NASA Astrophysics Data System (ADS)

    Fugett, James H.; Bennett, Haydon E.; Shrout, Joshua L.; Coad, James E.

    2017-02-01

    Expansions in minimally invasive medical devices and technologies with thermal mechanisms of action are continuing to advance the practice of medicine. These expansions have led to an increasing need for appropriate animal models to validate and quantify device performance. The planning of these studies should take into consideration a variety of parameters, including the appropriate animal model (test system - ex vivo or in vivo; species; tissue type), treatment conditions (test conditions), predicate device selection (as appropriate, control article), study timing (Day 0 acute to more than Day 90 chronic survival studies), and methods of tissue analysis (tissue dissection - staining methods). These considerations are discussed and illustrated using the fresh extirpated porcine longissimus muscle model for endometrial ablation.

  6. A big-data model for multi-modal public transportation with application to macroscopic control and optimisation

    NASA Astrophysics Data System (ADS)

    Faizrahnemoon, Mahsa; Schlote, Arieh; Maggi, Lorenzo; Crisostomi, Emanuele; Shorten, Robert

    2015-11-01

    This paper describes a Markov-chain-based approach to modelling multi-modal transportation networks. An advantage of the model is its ability to accommodate complex dynamics and handle huge amounts of data. The transition matrix of the Markov chain is built and the model is validated using data extracted from a traffic simulator. A realistic test case using multi-modal data from the city of London is given to further support the ability of the proposed methodology to handle large quantities of data. Then, we use the Markov chain as a control tool to improve the overall efficiency of a transportation network, and some practical examples are described to illustrate the potential of the approach.
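
    As a rough sketch of the kind of Markov-chain network model described above (not the authors' implementation), the snippet below builds a row-stochastic transition matrix from hypothetical trip counts between four nodes and computes its stationary distribution, which can be read as a long-run load share per node. All data are invented for illustration.

```python
import numpy as np

# Minimal sketch: a row-stochastic transition matrix estimated from
# hypothetical trip counts between four stops/stations.
trip_counts = np.array([
    [0, 120, 30, 10],
    [80, 0, 90, 40],
    [20, 100, 0, 60],
    [10, 50, 70, 0],
], dtype=float)

P = trip_counts / trip_counts.sum(axis=1, keepdims=True)  # transition probabilities

# Stationary distribution: left eigenvector of P with eigenvalue 1,
# interpretable as the long-run passenger load share per node.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()
print("stationary load share per node:", np.round(pi, 3))
```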

  7. Comprehensive Validation of an Intermittency Transport Model for Transitional Low-Pressure Turbine Flows

    NASA Technical Reports Server (NTRS)

    Suzen, Y. B.; Huang, P. G.

    2005-01-01

    A transport equation for the intermittency factor is employed to predict transitional flows under the effects of pressure gradients, freestream turbulence intensities, Reynolds number variations, flow separation and reattachment, and unsteady wake-blade interactions representing diverse operating conditions encountered in low-pressure turbines. The intermittent behaviour of the transitional flows is taken into account and incorporated into computations by modifying the eddy viscosity, μ_t, with the intermittency factor, γ. Turbulent quantities are predicted by using Menter's two-equation turbulence model (SST). The onset location of transition is obtained from correlations based on boundary-layer momentum thickness, acceleration parameter, and turbulence intensity. The intermittency factor is obtained from a transport model which can produce both the experimentally observed streamwise variation of intermittency and a realistic profile in the cross-stream direction. The intermittency transport model is tested and validated against several well-documented low-pressure turbine experiments ranging from flat plate cases to unsteady wake-blade interaction experiments. Overall, good agreement between the experimental data and computational results is obtained, illustrating the predictive capabilities of the model and the current intermittency transport modelling approach for transitional flow simulations.

  8. Validating a biometric authentication system: sample size requirements.

    PubMed

    Dass, Sarat C; Zhu, Yongfang; Jain, Anil K

    2006-12-01

    Authentication systems based on biometric features (e.g., fingerprint impressions, iris scans, human face images, etc.) are increasingly gaining widespread use and popularity. Often, vendors and owners of these commercial biometric systems claim impressive performance that is estimated based on some proprietary data. In such situations, there is a need to independently validate the claimed performance levels. System performance is typically evaluated by collecting biometric templates from n different subjects, and for convenience, acquiring multiple instances of the biometric for each of the n subjects. Very little work has been done in 1) constructing confidence regions based on the ROC curve for validating the claimed performance levels and 2) determining the required number of biometric samples needed to establish confidence regions of prespecified width for the ROC curve. To simplify the analyses that address these two problems, several previous studies have assumed that multiple acquisitions of the biometric entity are statistically independent. This assumption is too restrictive and generally not valid. We have developed a validation technique based on multivariate copula models for correlated biometric acquisitions. Based on the same model, we also determine the minimum number of samples required to achieve confidence bands of desired width for the ROC curve. We illustrate the estimation of the confidence bands as well as the required number of biometric samples using a fingerprint matching system that is applied on samples collected from a small population.
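
    A hedged illustration of the confidence-band idea: the paper builds its bands from multivariate copula models, whereas the sketch below substitutes a simple subject-level bootstrap that at least respects within-subject correlation across multiple acquisitions. Scores, subject counts, and the FAR grid are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n subjects, m acquisitions each; within-subject correlation via a
# shared subject effect. Scores are hypothetical, not from any real matcher.
n, m = 50, 4
genuine = rng.normal(2.0, 1.0, size=(n, 1)) + rng.normal(0, 0.5, size=(n, m))
impostor = rng.normal(0.0, 1.0, size=(n, 1)) + rng.normal(0, 0.5, size=(n, m))

def roc_at(far_grid, gen, imp):
    # Thresholds chosen from impostor quantiles to hit each target FAR.
    thr = np.quantile(imp, 1 - far_grid)
    return np.array([(gen >= t).mean() for t in thr])  # true accept rates

far_grid = np.array([0.001, 0.01, 0.05, 0.1])
tar_hat = roc_at(far_grid, genuine.ravel(), impostor.ravel())

# Bootstrap over subjects (keeps each subject's acquisitions together).
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    boot.append(roc_at(far_grid, genuine[idx].ravel(), impostor[idx].ravel()))
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for f, t, a, b in zip(far_grid, tar_hat, lo, hi):
    print(f"FAR={f:.3f}: TAR={t:.3f} (95% band {a:.3f}-{b:.3f})")
```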

  9. Definition and Demonstration of a Methodology for Validating Aircraft Trajectory Predictors

    NASA Technical Reports Server (NTRS)

    Vivona, Robert A.; Paglione, Mike M.; Cate, Karen T.; Enea, Gabriele

    2010-01-01

    This paper presents a new methodology for validating an aircraft trajectory predictor, inspired by the lessons learned from a number of field trials, flight tests and simulation experiments for the development of trajectory-predictor-based automation. The methodology introduces new techniques and a new multi-staged approach to reduce the effort in identifying and resolving validation failures, avoiding the potentially large costs associated with failures during a single-stage, pass/fail approach. As a case study, the validation effort performed by the Federal Aviation Administration for its En Route Automation Modernization (ERAM) system is analyzed to illustrate the real-world applicability of this methodology. During this validation effort, ERAM initially failed to achieve six of its eight requirements associated with trajectory prediction and conflict probe. The ERAM validation issues have since been addressed, but to illustrate how the methodology could have benefited the FAA effort, additional techniques are presented that could have been used to resolve some of these issues. Using data from the ERAM validation effort, it is demonstrated that these new techniques could have identified trajectory prediction error sources that contributed to several of the unmet ERAM requirements.

  10. A process model of technology innovation in governmental agencies: Insights from NASA’s science directorate

    NASA Astrophysics Data System (ADS)

    Szajnfarber, Zoe; Weigel, Annalisa L.

    2013-03-01

    This paper investigates the process through which new technical concepts are matured in the NASA innovation ecosystem. We propose an "epoch-shock" conceptualization as an alternative mental model to the traditional stage-gate view. The epoch-shock model is developed inductively, based on detailed empirical observations of the process, and validated, to the extent possible, through expert review. The paper concludes by illustrating how the new epoch-shock conceptualization could provide a useful basis for rethinking feasible interventions to improve innovation management in the space agency context. Where the more traditional stage-gate model leads to an emphasis on centralized flow control, the epoch-shock model acknowledges the decentralized, probabilistic nature of key interactions and highlights which aspects may be influenced.

  11. Flight test trajectory control analysis

    NASA Technical Reports Server (NTRS)

    Walker, R.; Gupta, N.

    1983-01-01

    Recent extensions to optimal control theory applied to meaningful linear models with sufficiently flexible software tools provide powerful techniques for designing flight test trajectory controllers (FTTCs). This report describes the principal steps for systematic development of flight trajectory controllers, which can be summarized as planning, modeling, designing, and validating a trajectory controller. The techniques have been kept as general as possible and should apply to a wide range of problems where quantities must be computed and displayed to a pilot to improve pilot effectiveness and to reduce workload and fatigue. To illustrate the approach, a detailed trajectory guidance law is developed and demonstrated for the F-15 aircraft flying the zoom-and-pushover maneuver.

  12. A parametric study of harmonic rotor hub loads

    NASA Technical Reports Server (NTRS)

    He, Chengjian

    1993-01-01

    A parametric study of vibratory rotor hub loads in a nonrotating system is presented. The study is based on a CAMRAD/JA model constructed for the GBH (Growth Version of Blackhawk Helicopter) Mach-scaled wind tunnel rotor model with high blade twist (-16 deg). The theoretical hub load predictions are validated by correlation with available measured data. Effects of various blade aeroelastic design changes on the harmonic nonrotating frame hub loads at both low and high forward flight speeds are investigated. The study aims to illustrate some of the physical mechanisms for change in the harmonic rotor hub loads due to blade design variations.

  13. Alternative Vocabularies in the Test Validity Literature

    ERIC Educational Resources Information Center

    Markus, Keith A.

    2016-01-01

    Justification of testing practice involves moving from one state of knowledge about the test to another. Theories of test validity can (a) focus on the beginning of the process, (b) focus on the end, or (c) encompass the entire process. Analyses of four case studies test and illustrate three claims: (a) restrictions on validity entail a supplement…

  14. Validation sampling can reduce bias in health care database studies: an illustration using influenza vaccination effectiveness.

    PubMed

    Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L

    2013-08-01

    Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Validation sampling can reduce bias in healthcare database studies: an illustration using influenza vaccination effectiveness

    PubMed Central

    Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael

    2014-01-01

    Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144
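
    The reweighting approach described above can be illustrated with a generic inverse-probability-weighting sketch (not the authors' code): membership in the validation sample is modeled from full-sample covariates, and the richer outcome model is then fitted in the validation sample with the resulting inverse-probability weights. Variable names, effect sizes, and the simulated data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical full sample: administrative covariates only.
N = 5000
age = rng.normal(75, 6, N)
chronic = rng.binomial(1, 0.4, N)
# Validation subsample (richer confounder data), selected with probability
# depending on the administrative covariates.
p_sel = 1 / (1 + np.exp(-(-2.0 + 0.02 * (age - 75) + 0.5 * chronic)))
in_valid = rng.binomial(1, p_sel).astype(bool)

# Step 1: model selection into the validation sample from full-sample covariates.
X_full = np.column_stack([age, chronic])
sel_model = LogisticRegression().fit(X_full, in_valid)
w = 1.0 / sel_model.predict_proba(X_full[in_valid])[:, 1]  # inverse-probability weights

# Step 2: in the validation sample, fit the outcome model using the richer
# confounder (a simulated frailty score) with the weights from step 1, so the
# estimate generalizes back to the full sample.
frailty = rng.normal(0, 1, in_valid.sum())
vaccinated = rng.binomial(1, 0.5, in_valid.sum())
logit = -1.5 - 0.4 * vaccinated + 0.6 * frailty
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))
X_valid = np.column_stack([vaccinated, frailty])
out_model = LogisticRegression().fit(X_valid, outcome, sample_weight=w)
print("weighted log-odds ratio for vaccination:", round(out_model.coef_[0][0], 2))
```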

  16. Statistical Learning Theory for High Dimensional Prediction: Application to Criterion-Keyed Scale Development

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul

    2016-01-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (Supervised Principal Components, Regularization, and Boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach (or perhaps because of them), SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
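
    To make the EPE-minimization idea concrete, the sketch below (one possible illustration, not the authors' analysis) fits an L1-regularized logistic regression to a synthetic item pool and lets five-fold cross-validation pick the penalty strength; the nonzero coefficients define a compact criterion-keyed scale. Item pool size, effect sizes, and the criterion are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)

# Hypothetical item pool: 200 items, 1000 respondents; only a handful of
# items truly relate to the binary criterion.
n, p = 1000, 200
items = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:8] = 0.5
logit = items @ true_beta - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Lasso-penalized logistic regression with the penalty strength chosen by
# cross-validation, i.e., by estimated out-of-sample error rather than the
# within-sample likelihood.
model = LogisticRegressionCV(
    Cs=20, cv=5, penalty="l1", solver="liblinear", scoring="neg_log_loss"
).fit(items, y)

selected = np.flatnonzero(model.coef_[0] != 0)
print(f"{selected.size} items retained for the criterion-keyed scale")
```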

  17. Evaluating Evidence for Conceptually Related Constructs Using Bivariate Correlations

    ERIC Educational Resources Information Center

    Swank, Jacqueline M.; Mullen, Patrick R.

    2017-01-01

    The article serves as a guide for researchers in developing evidence of validity using bivariate correlations, specifically construct validity. The authors outline the steps for calculating and interpreting bivariate correlations. Additionally, they provide an illustrative example and discuss the implications.

  18. Semi-Empirical Validation of the Cross-Band Relative Absorption Technique for the Measurement of Molecular Mixing Ratios

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; Prasad, Narasimha S

    2013-01-01

    Studies were performed to carry out semi-empirical validation of a new measurement approach we propose for the determination of molecular mixing ratios. The approach is based on relative measurements in bands of O2 and other molecules and as such may be best described as cross-band relative absorption (CoBRA). The current validation studies rely upon well-verified and established theoretical and experimental databases, satellite data assimilations, and modeling codes such as HITRAN, the line-by-line radiative transfer model (LBLRTM), and the Modern-Era Retrospective Analysis for Research and Applications (MERRA). The approach holds promise for atmospheric mixing ratio measurements of CO2 and a variety of other molecules currently under investigation for several future satellite lidar missions. One of the advantages of the method is a significant reduction of temperature sensitivity uncertainties, which is illustrated with an application to the ASCENDS mission for the measurement of CO2 mixing ratios (XCO2). Additional advantages of the method include the possibility of closely matching cross-band weighting function combinations, which is harder to achieve using conventional differential absorption techniques, and the potential for additional corrections for water vapor and other interferences without using data from numerical weather prediction (NWP) models.

  19. Fractional viscoelasticity of soft elastomers and auxetic foams

    NASA Astrophysics Data System (ADS)

    Solheim, Hannah; Stanisauskis, Eugenia; Miles, Paul; Oates, William

    2018-03-01

    Dielectric elastomers are commonly implemented in adaptive structures due to their unique capabilities for real-time control of a structure's shape, stiffness, and damping. These active polymers are often used in applications where actuator control or dynamic tunability are important, making an accurate understanding of the viscoelastic behavior critical. This challenge is complicated as these elastomers often operate over a broad range of deformation rates. Whereas research has demonstrated success in applying a nonlinear viscoelastic constitutive model to characterize the behavior of Very High Bond (VHB) 4910, robust prediction of the viscoelastic response over the entire range of time scales remains a significant challenge. An alternative formulation for viscoelastic modeling using fractional order calculus has shown significant improvement in predictive capabilities. While fractional calculus has been explored theoretically in the field of linear viscoelasticity, only limited experimental validation and statistical evaluation of the underlying phenomena have been carried out. In the present study, predictions across several orders of magnitude in deformation rates are validated against data using a single set of model parameters. Moreover, we illustrate that the fractional order is material dependent by running complementary experiments and parameter estimation on the elastomer VHB 4949 as well as an auxetic foam. All results are statistically validated using Bayesian uncertainty methods to obtain posterior densities for the fractional order as well as the hyperelastic parameters.
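
    As a simplified stand-in for the fractional viscoelastic model in the paper, the sketch below fits a single springpot (Scott-Blair) element, whose relaxation modulus follows a power law in time with fractional order alpha, to synthetic data spanning five decades; it is meant only to show why one fractional order can cover a wide range of time scales. The functional form, noise level, and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

# Springpot (Scott-Blair) relaxation modulus: G(t) = c * t**(-alpha) / Gamma(1-alpha),
# a standard single-element fractional viscoelastic form.
def springpot_modulus(t, c, alpha):
    return c * t ** (-alpha) / gamma(1.0 - alpha)

rng = np.random.default_rng(3)
t = np.logspace(-3, 2, 60)                      # five decades of time
true_c, true_alpha = 2.0e5, 0.25
G_obs = springpot_modulus(t, true_c, true_alpha) * (1 + 0.03 * rng.normal(size=t.size))

popt, pcov = curve_fit(springpot_modulus, t, G_obs, p0=[1e5, 0.3])
c_hat, alpha_hat = popt
print(f"estimated fractional order alpha = {alpha_hat:.3f} (true {true_alpha})")
```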

  20. Construct measurement quality improves predictive accuracy in violence risk assessment: an illustration using the personality assessment inventory.

    PubMed

    Hendry, Melissa C; Douglas, Kevin S; Winter, Elizabeth A; Edens, John F

    2013-01-01

    Much of the risk assessment literature has focused on the predictive validity of risk assessment tools. However, these tools often comprise a list of risk factors that are themselves complex constructs, and focusing on the quality of measurement of individual risk factors may improve the predictive validity of the tools. The present study illustrates this concern using the Antisocial Features and Aggression scales of the Personality Assessment Inventory (Morey, 1991). In a sample of 1,545 prison inmates and offenders undergoing treatment for substance abuse (85% male), we evaluated (a) the factorial validity of the ANT and AGG scales, (b) the utility of original ANT and AGG scales and newly derived ANT and AGG scales for predicting antisocial outcomes (recidivism and institutional infractions), and (c) whether items with a stronger relationship to the underlying constructs (higher factor loadings) were in turn more strongly related to antisocial outcomes. Confirmatory factor analyses (CFAs) indicated that ANT and AGG items were not structured optimally in these data in terms of correspondence to the subscale structure identified in the PAI manual. Exploratory factor analyses were conducted on a random split-half of the sample to derive optimized alternative factor structures, and cross-validated in the second split-half using CFA. Four-factor models emerged for both the ANT and AGG scales, and, as predicted, the size of item factor loadings was associated with the strength with which items were associated with institutional infractions and community recidivism. This suggests that the quality by which a construct is measured is associated with its predictive strength. Implications for risk assessment are discussed. Copyright © 2013 John Wiley & Sons, Ltd.

  1. Real-time sea-level gauge observations and operational oceanography.

    PubMed

    Mourre, Baptiste; Crosnier, Laurence; Provost, Christian Le

    2006-04-15

    This paper discusses the contribution to operational oceanography of tide-gauge data, which provide unique monitoring of sea-level variability along the coasts of the world ocean. Two distinct applications, both of which demonstrate the utility of tide-gauge data delivered in real time, are illustrated. The first case details basin-scale model validation of the French Mercator operational system applied to the North Atlantic. The accuracy of model outputs in the South Atlantic Bight at both coastal and offshore locations is evaluated using tide-gauge observations. These data enable assessment of the reliability of the model's nowcasts and forecasts, which is needed before the model boundary conditions can be delivered to other coastal prediction systems. Such real-time validation is possible as long as data are delivered within a delay of a week. In the second application, tide-gauge data are assimilated in a storm surge model of the North Sea and used to control model trajectories in real time. Using an advanced assimilation scheme that takes into account the swift evolution of model error statistics, these observations are shown to be very effective at controlling model error, provided that they can be assimilated very frequently (i.e., available within a few hours).

  2. Coarse Grained Model for Biological Simulations: Recent Refinements and Validation

    PubMed Central

    Vicatos, Spyridon; Rychkova, Anna; Mukherjee, Shayantani; Warshel, Arieh

    2014-01-01

    Exploring the free energy landscape of proteins and modeling the corresponding functional aspects presents a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of various simplified coarse grained (CG) models offers an effective way of sampling the landscape, but most current models are not expected to give a reliable description of protein stability and functional aspects. The main problem is associated with insufficient focus on the electrostatic features of the model. In this respect our recent CG model offers a significant advantage, as it has been refined while focusing on its electrostatic free energy. Here we review the current state of our model, describing recent refinements, extensions, and validation studies while focusing on demonstrating key applications. These include studies of protein stability, extension of the model to include membranes, electrolytes, and electrodes, as well as studies of voltage-activated proteins, protein insertion through the translocon, the action of molecular motors, and even the coupling of the stalled ribosome and the translocon. These examples illustrate the general potential of our approach in overcoming major challenges in studies of structure-function correlation in proteins and large macromolecular complexes. PMID:25050439

  3. Examining construct and predictive validity of the Health-IT Usability Evaluation Scale: confirmatory factor analysis and structural equation modeling results

    PubMed Central

    Yen, Po-Yin; Sousa, Karen H; Bakken, Suzanne

    2014-01-01

    Background In a previous study, we developed the Health Information Technology Usability Evaluation Scale (Health-ITUES), which is designed to support customization at the item level. Such customization matches the specific tasks/expectations of a health IT system while retaining comparability at the construct level; that study provided evidence of factorial validity and internal consistency reliability through exploratory factor analysis. Objective In this study, we advanced the development of Health-ITUES by examining its construct validity and predictive validity. Methods The health IT system studied was a web-based communication system that supported nurse staffing and scheduling. Using Health-ITUES, we conducted a cross-sectional study to evaluate users’ perception of the web-based communication system after system implementation. We examined Health-ITUES's construct validity through first- and second-order confirmatory factor analysis (CFA), and its predictive validity via structural equation modeling (SEM). Results The sample comprised 541 staff nurses in two healthcare organizations. The CFA (n=165) showed that a general usability factor accounted for 78.1%, 93.4%, 51.0%, and 39.9% of the explained variance in ‘Quality of Work Life’, ‘Perceived Usefulness’, ‘Perceived Ease of Use’, and ‘User Control’, respectively. The SEM (n=541) supported the predictive validity of Health-ITUES, explaining 64% of the variance in intention for system use. Conclusions The results of CFA and SEM provide additional evidence for the construct and predictive validity of Health-ITUES. The customizability of Health-ITUES has the potential to support comparisons at the construct level, while allowing variation at the item level. We also illustrate application of Health-ITUES across stages of system development. PMID:24567081

  4. Acoustic Predictions of Manned and Unmanned Rotorcraft Using the Comprehensive Analytical Rotorcraft Model for Acoustics (CARMA) Code System

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.; Burley, Casey L.; Conner, David A.

    2005-01-01

    The Comprehensive Analytical Rotorcraft Model for Acoustics (CARMA) is being developed under the Quiet Aircraft Technology Project within the NASA Vehicle Systems Program. The purpose of CARMA is to provide analysis tools for the design and evaluation of efficient low-noise rotorcraft, as well as support the development of safe, low-noise flight operations. The baseline prediction system of CARMA is presented and current capabilities are illustrated for a model rotor in a wind tunnel, a rotorcraft in flight and for a notional coaxial rotor configuration; however, a complete validation of the CARMA system capabilities with respect to a variety of measured databases is beyond the scope of this work. For the model rotor illustration, predicted rotor airloads and acoustics for a BO-105 model rotor are compared to test data from HART-II. For the flight illustration, acoustic data from an MD-520N helicopter flight test, which was conducted at Eglin Air Force Base in September 2003, are compared with CARMA full vehicle flight predictions. Predicted acoustic metrics at three microphone locations are compared for limited level flight and descent conditions. Initial acoustic predictions using CARMA for a notional coaxial rotor system are made. The effect of increasing the vertical separation between the rotors on the predicted airloads and acoustic results are shown for both aerodynamically non-interacting and aerodynamically interacting rotors. The sensitivity of including the aerodynamic interaction effects of each rotor on the other, especially when the rotors are in close proximity to one another is initially examined. The predicted coaxial rotor noise is compared to that of a conventional single rotor system of equal thrust, where both are of reasonable size for an unmanned aerial vehicle (UAV).

  5. MLFMA-accelerated Nyström method for ultrasonic scattering - Numerical results and experimental validation

    NASA Astrophysics Data System (ADS)

    Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron

    2018-04-01

    Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirements. In particular, we demonstrate the need for higher-order geometry and field approximation in modeling NDE measurements. We also illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.

  6. Augmented Twin-Nonlinear Two-Box Behavioral Models for Multicarrier LTE Power Amplifiers

    PubMed Central

    2014-01-01

    A novel class of behavioral models is proposed for LTE-driven Doherty power amplifiers with strong memory effects. The proposed models, labeled augmented twin-nonlinear two-box models, are built by cascading a highly nonlinear memoryless function with a mildly nonlinear memory polynomial with cross terms. Experimental validation on gallium nitride based Doherty power amplifiers illustrates the accuracy enhancement and complexity reduction achieved by the proposed models. When strong memory effects are observed, the augmented twin-nonlinear two-box models can improve the normalized mean square error by up to 3 dB for the same number of coefficients when compared to state-of-the-art twin-nonlinear two-box models. Furthermore, the augmented twin-nonlinear two-box models lead to the same performance as previously reported twin-nonlinear two-box models while requiring up to 80% fewer coefficients. PMID:24624047

  7. Event-Based Media Enrichment Using an Adaptive Probabilistic Hypergraph Model.

    PubMed

    Liu, Xueliang; Wang, Meng; Yin, Bao-Cai; Huet, Benoit; Li, Xuelong

    2015-11-01

    Nowadays, with the continual development of digital capture technologies and social media services, a vast number of media documents are captured and shared online to help attendees record their experience during events. In this paper, we present a method combining semantic inference and multimodal analysis for automatically finding media content to illustrate events using an adaptive probabilistic hypergraph model. In this model, media items are taken as vertices in the weighted hypergraph and the task of enriching media to illustrate events is formulated as a ranking problem. In our method, each hyperedge is constructed using the K-nearest neighbors of a given media document. We also employ a probabilistic representation, which assigns each vertex to a hyperedge in a probabilistic way, to further exploit the correlation among media data. Furthermore, we optimize the hypergraph weights in a regularization framework, which is solved as a second-order cone problem. The approach is initiated by seed media and then used to rank the media documents using a transductive inference process. The results obtained from validating the approach on an event dataset collected from EventMedia demonstrate the effectiveness of the proposed approach.

  8. A geostatistical extreme-value framework for fast simulation of natural hazard events

    PubMed Central

    Stephenson, David B.

    2016-01-01

    We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768
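
    A much-reduced sketch of the simulation idea (not the published framework, which uses GAM-varying parameters and a Student's t-process): a spatially correlated Gaussian field is transformed through its CDF into generalized Pareto margins to produce one synthetic gust event over a toy site grid. Correlation length, GPD parameters, and the threshold are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Toy site grid and an exponential spatial correlation for the latent field.
coords = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
corr = np.exp(-d / 2.0)

# One simulated "event": correlated latent Gaussian field -> uniform margins
# -> generalized Pareto (GPD) gust excesses over a threshold at each site.
z = rng.multivariate_normal(np.zeros(len(coords)), corr)
u = stats.norm.cdf(z)
xi, sigma, threshold = 0.1, 4.0, 25.0          # GPD shape/scale, gust threshold (m/s)
gusts = threshold + stats.genpareto.ppf(u, c=xi, scale=sigma)
print("simulated peak gusts (m/s), first five sites:", np.round(gusts[:5], 1))
```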

  9. An automated process for building reliable and optimal in vitro/in vivo correlation models based on Monte Carlo simulations.

    PubMed

    Sutton, Steven C; Hu, Mingxiu

    2006-05-05

    Many mathematical models have been proposed for establishing an in vitro/in vivo correlation (IVIVC). The traditional IVIVC model building process consists of 5 steps: deconvolution, model fitting, convolution, prediction error evaluation, and cross-validation. This is a time-consuming process, and typically only a few models at most are tested for any given data set. The objectives of this work were to (1) propose a statistical tool to screen models for further development of an IVIVC, (2) evaluate the performance of each model under different circumstances, and (3) investigate the effectiveness of common statistical model selection criteria for choosing IVIVC models. A computer program was developed to explore which model(s) would be most likely to work well with a random variation from the original formulation. The process used Monte Carlo simulation techniques to build IVIVC models. Data-based model selection criteria (Akaike Information Criterion [AIC], R²) and the probability of passing the Food and Drug Administration "prediction error" requirement were calculated. Several real data sets representing a broad range of release profiles are used to illustrate the process and to demonstrate the advantages of this automated approach over the traditional one. The Hixson-Crowell and Weibull models were often preferred over the linear model. When evaluating whether a Level A IVIVC model was possible, the AIC model selection criterion generally selected the best model. We believe that the approach we propose may be a rapid tool to determine which IVIVC model (if any) is the most applicable.
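
    To illustrate the model-screening step, the sketch below (not the authors' program) fits Weibull and Hixson-Crowell release models to a hypothetical dissolution profile and compares them with the least-squares form of AIC. The data and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical in vitro release profile (time in hours, fraction dissolved).
t = np.array([0.5, 1, 2, 3, 4, 6, 8, 12])
f = np.array([0.12, 0.22, 0.40, 0.54, 0.65, 0.80, 0.88, 0.96])

def weibull(t, td, b):
    return 1.0 - np.exp(-(t / td) ** b)

def hixson_crowell(t, k):
    return 1.0 - np.clip(1.0 - k * t, 0.0, None) ** 3

def aic(y, yhat, n_params):
    # Least-squares AIC: n*ln(RSS/n) + 2k
    rss = np.sum((y - yhat) ** 2)
    n = y.size
    return n * np.log(rss / n) + 2 * n_params

for name, fun, p0 in [("Weibull", weibull, [3.0, 1.0]),
                      ("Hixson-Crowell", hixson_crowell, [0.08])]:
    popt, _ = curve_fit(fun, t, f, p0=p0)
    print(f"{name:15s} AIC = {aic(f, fun(t, *popt), len(popt)):6.1f}")
```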

  10. Computational Simulation of Acoustic Modes in Rocket Combustors

    NASA Technical Reports Server (NTRS)

    Harper, Brent (Technical Monitor); Merkle, C. L.; Sankaran, V.; Ellis, M.

    2004-01-01

    A combination of computational fluid dynamic analysis and analytical solutions is being used to characterize the dominant modes in liquid rocket engines in conjunction with laboratory experiments. The analytical solutions are based on simplified geometries and flow conditions and are used for careful validation of the numerical formulation. The validated computational model is then extended to realistic geometries and flow conditions to test the effects of various parameters on chamber modes, to guide and interpret companion laboratory experiments in simplified combustors, and to scale the measurements to engine operating conditions. In turn, the experiments are used to validate and improve the model. The present paper gives an overview of the numerical and analytical techniques along with comparisons illustrating the accuracy of the computations as a function of grid resolution. A representative parametric study of the effect of combustor mean flow Mach number and combustor aspect ratio on the chamber modes is then presented for both transverse and longitudinal modes. The results show that higher mean flow Mach numbers drive the modes to lower frequencies. Estimates of transverse wave mechanics in a high aspect ratio combustor are then contrasted with longitudinal modes in a long and narrow combustor to provide understanding of potential experimental simulations.

  11. Online cross-validation-based ensemble learning.

    PubMed

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
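
    A minimal sketch of online cross-validation-based selection (not the authors' ensemble estimator): several SGD regressors differing only in learning rate are scored on each incoming batch before they see it, and the learner with the lowest cumulative loss is the current selection. Batch sizes, the data-generating model, and the candidate grid are assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)

# Candidate online learners differing in learning rate (a stand-in for a
# richer library of candidate online estimators).
candidates = {
    f"eta0={eta}": SGDRegressor(learning_rate="constant", eta0=eta, random_state=0)
    for eta in (0.001, 0.01, 0.1)
}
cum_loss = {name: 0.0 for name in candidates}
best = next(iter(candidates))

for batch in range(200):
    X = rng.normal(size=(20, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=20)

    # Online cross-validation: score each candidate on the new batch *before*
    # it sees that batch, then update it.
    for name, model in candidates.items():
        if batch > 0:
            cum_loss[name] += np.mean((model.predict(X) - y) ** 2)
        model.partial_fit(X, y)
    best = min(cum_loss, key=cum_loss.get)

print("selected online learner:", best)
```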

  12. Investigation and modeling of biomass decay rate in the dark and its potential influence on net productivity of solar photobioreactors for microalga Chlamydomonas reinhardtii and cyanobacterium Arthrospira platensis.

    PubMed

    Le Borgne, François; Pruvost, Jérémy

    2013-06-01

    Biomass decay rate (BDR) in the dark was investigated for Chlamydomonas reinhardtii (microalga) and Arthrospira platensis (cyanobacterium). A specific setup based on a torus photobioreactor with online gas analysis was validated, enabling us to follow the time course of the specific BDR using oxygen monitoring and mass balance. Various operating parameters that could limit respiration rates, such as culture temperature and oxygen deprivation, were then investigated. C. reinhardtii was found to present a higher BDR in the dark than A. platensis, illustrating the difference between eukaryotic and prokaryotic cells. In both cases, temperature proved an influential parameter, and the Arrhenius law was found to efficiently relate the specific BDR to culture temperature. The utility of decreasing temperature at night to increase biomass productivity in a solar photobioreactor is also illustrated. Copyright © 2013 Elsevier Ltd. All rights reserved.
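
    The Arrhenius relation mentioned above can be fitted with a simple linearization, ln k = ln A - Ea/(RT); the sketch below does so for hypothetical dark biomass decay rates at several temperatures (the values are invented, not the paper's measurements).

```python
import numpy as np

# Hypothetical specific biomass decay rates (1/h) measured in the dark at
# several culture temperatures (deg C) -- illustrative values only.
T_C = np.array([15.0, 20.0, 25.0, 30.0, 35.0])
bdr = np.array([0.0021, 0.0034, 0.0055, 0.0086, 0.0131])

R = 8.314                      # J / (mol K)
T_K = T_C + 273.15

# Arrhenius law: k = A * exp(-Ea / (R T)); linearize as ln k = ln A - Ea/(R T).
slope, intercept = np.polyfit(1.0 / T_K, np.log(bdr), 1)
Ea = -slope * R                # activation energy, J/mol
A = np.exp(intercept)
print(f"activation energy Ea = {Ea / 1000:.1f} kJ/mol, pre-exponential A = {A:.3g} 1/h")
```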

  13. Formative versus reflective measurement: an illustration using work-family balance.

    PubMed

    Ellwart, Thomas; Konradt, Udo

    2011-01-01

    The aim of this article is to propose the formative measurement approach that can be used in various constructs of applied psychology. To illustrate this approach, the authors will (a) discuss the distinction between commonly used principal-factor (reflective) measures in comparison to the composite (formative) latent variable model, which is often applied in other disciplines such as marketing or engineering, and (b) point out the advantages and limitations of formative specifications using the example of the work-family balance (WFB) construct. Data collected from 2 large cross-sectional field studies confirm the reliability and validity of formative WFB measures as well as its predictive value regarding criteria of WFB (i.e., job satisfaction, family satisfaction, and life satisfaction). Last, the specific informational value of each formative indicator will be demonstrated and discussed in terms of practical implications for the assessment in different psychological fields.

  14. Towards the Development of a Smart Flying Sensor: Illustration in the Field of Precision Agriculture.

    PubMed

    Hernandez, Andres; Murcia, Harold; Copot, Cosmin; De Keyser, Robin

    2015-07-10

    Sensing is an important element to quantify productivity, product quality and to make decisions. Applications, such as mapping, surveillance, exploration and precision agriculture, require a reliable platform for remote sensing. This paper presents the first steps towards the development of a smart flying sensor based on an unmanned aerial vehicle (UAV). The concept of smart remote sensing is illustrated and its performance tested for the task of mapping the volume of grain inside a trailer during forage harvesting. Novelty lies in: (1) the development of a position-estimation method with time delay compensation based on inertial measurement unit (IMU) sensors and image processing; (2) a method to build a 3D map using information obtained from a regular camera; and (3) the design and implementation of a path-following control algorithm using model predictive control (MPC). Experimental results on a lab-scale system validate the effectiveness of the proposed methodology.

  15. Towards the Development of a Smart Flying Sensor: Illustration in the Field of Precision Agriculture

    PubMed Central

    Hernandez, Andres; Murcia, Harold; Copot, Cosmin; De Keyser, Robin

    2015-01-01

    Sensing is an important element to quantify productivity, product quality and to make decisions. Applications, such as mapping, surveillance, exploration and precision agriculture, require a reliable platform for remote sensing. This paper presents the first steps towards the development of a smart flying sensor based on an unmanned aerial vehicle (UAV). The concept of smart remote sensing is illustrated and its performance tested for the task of mapping the volume of grain inside a trailer during forage harvesting. Novelty lies in: (1) the development of a position-estimation method with time delay compensation based on inertial measurement unit (IMU) sensors and image processing; (2) a method to build a 3D map using information obtained from a regular camera; and (3) the design and implementation of a path-following control algorithm using model predictive control (MPC). Experimental results on a lab-scale system validate the effectiveness of the proposed methodology. PMID:26184205
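
    As a hedged sketch of the path-following MPC element described above (the real controller and vehicle dynamics are richer), the snippet below runs a receding-horizon controller for a 2-D point-mass model tracking a sampled reference path, using cvxpy to solve each horizon's quadratic program. Horizon length, weights, and input bounds are assumptions.

```python
import numpy as np
import cvxpy as cp

# Minimal receding-horizon (MPC) sketch with point-mass dynamics
# x_{k+1} = x_k + dt * u_k; not the vehicle model used in the paper.
dt, H, u_max = 0.1, 10, 2.0
s = np.linspace(0, 6, 300)
ref = np.stack([s, np.sin(s)], axis=1)            # sampled reference path

pos = np.array([0.0, 1.0])
for k in range(60):
    target = ref[k + 1:k + 1 + H]                 # next H reference points
    x = cp.Variable((H + 1, 2))
    u = cp.Variable((H, 2))
    cost = cp.sum_squares(x[1:] - target) + 0.1 * cp.sum_squares(u)
    constraints = [x[0] == pos]
    constraints += [x[t + 1] == x[t] + dt * u[t] for t in range(H)]
    constraints += [cp.abs(u) <= u_max]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    pos = pos + dt * u.value[0]                   # apply only the first input

print("final position:", np.round(pos, 2), "nearby reference point:", np.round(ref[61], 2))
```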

  16. Soft-rigid interaction mechanism towards a lobster-inspired hybrid actuator

    NASA Astrophysics Data System (ADS)

    Chen, Yaohui; Wan, Fang; Wu, Tong; Song, Chaoyang

    2018-01-01

    Soft pneumatic actuators (SPAs) are intrinsically light-weight and compliant, and therefore ideal to directly interact with humans and be implemented into wearable robotic devices. However, they also pose new challenges in describing and sensing their continuous deformation. In this paper, we propose a hybrid actuator design, inspired by the lobster, that can generate reconfigurable bending movements through an internal soft chamber interacting with external rigid shells. This design with joint and link structures enables us to exactly track its bending configurations, which previously posed a significant challenge for soft robots. Analytic models are developed to illustrate the soft-rigid interaction mechanism with experimental validation. A robotic glove using hybrid actuators to assist grasping is assembled to illustrate their potential in safe human-robot interaction. Considering all the design merits, our work presents a practical approach to the design of next-generation robots capable of achieving both good accuracy and compliance.

  17. Influence of Deformation Mechanisms on the Mechanical Behavior of Metals and Alloys: Experiments, Constitutive Modeling, and Validation

    NASA Astrophysics Data System (ADS)

    Gray, G. T.; Cerreta, E.; Chen, Shuh Rong; Maudlin, P. J.

    2004-06-01

    Jim Williams has made seminal contributions to the field of structure / property relations and its controlling effects on the mechanical behavior of metals and alloys. This talk will discuss experimental results illustrating the role of interstitial content, grain size, texture, temperature, and strain rate on the operative deformation mechanisms, mechanical behavior, and substructure evolution in titanium, zirconium, hafnium, and rhenium. Increasing grain size is shown to significantly decrease the dynamic flow strength of Ti and Zr while increasing work-hardening rates due to an increased incidence of deformation twinning. Increasing oxygen interstitial content is shown to significantly alter both the constitutive response and α-ω shock-induced phase transition in Zr. The influence of crystallographic texture on the mechanical behavior in Ti, Zr, and Hf is discussed in terms of slip system and deformation twinning activity. An example of the utility of incorporation of operative deformation mechanisms into a polycrystalline plasticity constitutive model and validation using Taylor cylinder impact testing is presented.

  18. A combined LS-SVM & MLR QSAR workflow for predicting the inhibition of CXCR3 receptor by quinazolinone analogs.

    PubMed

    Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Igglessi-Markopoulou, Olga; Kollias, George

    2010-05-01

    A novel QSAR workflow is constructed that combines MLR with LS-SVM classification techniques for the identification of quinazolinone analogs as "active" or "non-active" CXCR3 antagonists. The accuracy of the LS-SVM classification technique was 100% for the training set and 90% for the test set. For the "active" analogs, a validated MLR QSAR model accurately estimates their I-IP10 IC50 inhibition values. The accuracy of the QSAR model (R² = 0.80) is illustrated using various evaluation techniques, such as the leave-one-out procedure (R²(LOO) = 0.67) and validation through an external test set (R²(pred) = 0.78). The key conclusion of this study is that the selected molecular descriptors, namely the Highest Occupied Molecular Orbital energy (HOMO), the Principal Moments of Inertia PMIX and PMIZ, the Polar Surface Area (PSA), the presence of a triple bond (PTrplBnd), and the Kier shape descriptor (¹κ), demonstrate discriminatory and pharmacophore abilities.
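
    The leave-one-out statistic quoted above can be reproduced generically: the sketch below computes a fitted R² and a leave-one-out Q² for an ordinary MLR model on a synthetic descriptor matrix (descriptors, activities, and sample size are invented; this is not the published model).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(5)

# Hypothetical descriptor matrix (6 descriptors, e.g., HOMO, PSA, ...) and
# activity values for 40 "active" analogs.
X = rng.normal(size=(40, 6))
y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.6, -0.2]) + 0.3 * rng.normal(size=40)

mlr = LinearRegression()
y_loo = cross_val_predict(mlr, X, y, cv=LeaveOneOut())

press = np.sum((y - y_loo) ** 2)                  # predictive residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)
q2_loo = 1.0 - press / ss_tot                     # leave-one-out R^2 (Q^2)
r2_fit = mlr.fit(X, y).score(X, y)
print(f"fitted R^2 = {r2_fit:.2f}, leave-one-out Q^2 = {q2_loo:.2f}")
```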

  19. Hypermentalizing, attachment, and epistemic trust in adolescent BPD: Clinical illustrations.

    PubMed

    Bo, Sune; Sharp, Carla; Fonagy, Peter; Kongerslev, Mickey

    2017-04-01

    Borderline personality disorder (BPD) has been shown to be a valid and reliable diagnosis in adolescents and associated with a decrease in both general and social functioning. With evidence linking BPD in adolescents to poor prognosis, it is important to develop a better understanding of factors and mechanisms contributing to the development of BPD. This could potentially enhance our knowledge and facilitate the design of novel treatment programs and interventions for this group. In this paper, we outline a theoretical model of BPD in adolescents linking the original mentalization-based theory of BPD, with recent extensions of the theory that focuses on hypermentalizing and epistemic trust. We then provide clinical case vignettes to illustrate this extended theoretical model of BPD. Furthermore, we suggest a treatment approach to BPD in adolescents that focuses on the reduction of hypermentalizing and epistemic mistrust. We conclude with an integration of theory and practice in the final section of the paper and make recommendations for future work in this area. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Bayesian Inference in the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2008-01-01

    This paper provides an elementary tutorial overview of Bayesian inference and its potential for application in aerospace experimentation in general and wind tunnel testing in particular. Bayes Theorem is reviewed and examples are provided to illustrate how it can be applied to objectively revise prior knowledge by incorporating insights subsequently obtained from additional observations, resulting in new (posterior) knowledge that combines information from both sources. A logical merger of Bayesian methods and certain aspects of Response Surface Modeling is explored. Specific applications to wind tunnel testing, computational code validation, and instrumentation calibration are discussed.
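
    A minimal worked example of the kind of prior-to-posterior revision the tutorial describes, using a conjugate normal-normal update for a force coefficient: prior knowledge from earlier tests is combined with a few new measurements of known noise level. All numbers are hypothetical.

```python
import numpy as np

# Conjugate normal-normal update: prior knowledge about a force coefficient is
# revised with new (hypothetical) wind tunnel measurements of known noise level.
prior_mean, prior_var = 0.520, 0.010**2        # from earlier tests / analysis
sigma_meas = 0.008                             # per-point measurement noise
measurements = np.array([0.531, 0.528, 0.535, 0.526])

n = measurements.size
post_var = 1.0 / (1.0 / prior_var + n / sigma_meas**2)
post_mean = post_var * (prior_mean / prior_var + measurements.sum() / sigma_meas**2)
print(f"posterior: mean = {post_mean:.4f}, std = {np.sqrt(post_var):.4f}")
```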

  1. Utility of correlation techniques in gravity and magnetic interpretation

    NASA Technical Reports Server (NTRS)

    Chandler, V. W.; Koski, J. S.; Braile, L. W.; Hinze, W. J.

    1977-01-01

    Two methods of quantitative combined analysis, internal correspondence and clustering, are presented. Model studies are used to illustrate implementation and interpretation procedures of these methods, particularly internal correspondence. Analysis of the results of applying these methods to data from the midcontinent and a transcontinental profile shows that they can be useful in identifying crustal provinces, providing information on horizontal and vertical variations of physical properties over province-size zones, validating long-wavelength anomalies, and isolating geomagnetic field removal problems. Thus, these techniques are useful in considering regional data acquired by satellites.

  2. Brief Strategic Family Therapy: Engaging Drug Using/Problem Behavior Adolescents and their Families into Treatment

    PubMed Central

    Szapocznik, José; Zarate, Monica; Duff, Johnathan; Muir, Joan

    2013-01-01

    Despite the efficacy of family-based interventions for improving outcomes for adolescent behavior problems such as substance use, engaging and retaining whole families in treatment is one of the greatest challenges therapists confront. This article illustrates how the Brief Strategic Family Therapy® (BSFT®) model, a family-based, empirically validated intervention designed to treat children and adolescents’ problem behaviors, can be used to increase engagement, improve retention, and bring about positive outcomes for families. Research evidence for efficacy and effectiveness is also presented. PMID:23731415

  3. AIRS Retrieval Validation During the EAQUATE

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Smith, William L.; Cuomo, Vincenzo; Taylor, Jonathan P.; Barnet, Christopher D.; DiGirolamo, Paolo; Pappalardo, Gelsomina; Larar, Allen M.; Liu, Xu; Newman, Stuart M.

    2006-01-01

    Atmospheric and surface thermodynamic parameters retrieved with advanced hyperspectral remote sensors of Earth observing satellites are critical for weather prediction and scientific research. The retrieval algorithms and retrieved parameters from satellite sounders must be validated to demonstrate the capability and accuracy of both observation and data processing systems. The European AQUA Thermodynamic Experiment (EAQUATE) was conducted mainly for validation of the Atmospheric InfraRed Sounder (AIRS) on the AQUA satellite, but also for assessment of validation systems of both ground-based and aircraft-based instruments which will be used for other satellite systems such as the Infrared Atmospheric Sounding Interferometer (IASI) on the European MetOp satellite, the Cross-track Infrared Sounder (CrIS) from the NPOESS Preparatory Project and the following NPOESS series of satellites. Detailed inter-comparisons were conducted and presented using different retrieval methodologies: measurements from airborne ultraspectral Fourier transform spectrometers, aircraft in-situ instruments, dedicated dropsondes and radiosondes, and ground based Raman Lidar, as well as from the European Center for Medium range Weather Forecasting (ECMWF) modeled thermal structures. The results of this study not only illustrate the quality of the measurements and retrieval products but also demonstrate the capability of these validation systems which are put in place to validate current and future hyperspectral sounding instruments and their scientific products.

  4. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.

  5. Study of market model describing the contrary behaviors of informed and uninformed agents: Being minority and being majority

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Xia; Liao, Hao; Medo, Matus; Shang, Ming-Sheng; Yeung, Chi Ho

    2016-05-01

    In this paper we analyze the contrary behaviors of informed and uninformed investors, and then construct a competition model with two groups of agents, namely agents who intend to stay in the minority and those who intend to stay in the majority. We find two kinds of competition, inter-group and intra-group. The model exhibits a periodic fluctuation feature. The average distribution of strategies shows a prominent central peak, which is related to the peaked, fat-tailed character of the price change distribution in stock markets. Furthermore, in the modified model a tolerance time parameter makes the agents diversified. Finally, we compare the strategy distribution with the price change distribution in a real stock market, and we conclude that the contrary behavior rules and the tolerance time parameter are indeed valid in the description of the market model.
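
    As a rough illustration of the kind of two-group competition described above, the Python sketch below pits minority-seeking agents against majority-seeking agents. The group sizes, number of rounds and the random choice rule are invented for the example; it does not reproduce the strategy dynamics or the tolerance time parameter of the paper's model.

      import numpy as np

      rng = np.random.default_rng(0)

      # Two groups of agents choose +1 or -1 each round.
      # "Minority" agents score when their choice lies on the minority side of all agents;
      # "majority" agents score when theirs lies on the majority side.
      N_MIN, N_MAJ, ROUNDS = 50, 50, 200
      scores = np.zeros(N_MIN + N_MAJ)
      history = []

      for t in range(ROUNDS):
          choices = rng.choice([-1, 1], size=N_MIN + N_MAJ)
          total = choices.sum()                       # > 0 means +1 is the majority side
          minority_side = -np.sign(total) if total != 0 else 0
          # minority-seeking agents (first N_MIN) gain when they sit on the minority side
          scores[:N_MIN] += (choices[:N_MIN] == minority_side)
          # majority-seeking agents gain when they sit on the majority side
          scores[N_MIN:] += (choices[N_MIN:] == -minority_side)
          history.append(total)

      print("mean attendance imbalance:", np.mean(history))
      print("average score, minority vs majority group:",
            scores[:N_MIN].mean(), scores[N_MIN:].mean())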

  6. Flexible system model reduction and control system design based upon actuator and sensor influence functions

    NASA Technical Reports Server (NTRS)

    Yam, Yeung; Johnson, Timothy L.; Lang, Jeffrey H.

    1987-01-01

    A model reduction technique based on aggregation with respect to sensor and actuator influence functions rather than modes is presented for large systems of coupled second-order differential equations. Perturbation expressions which can predict the effects of spillover on both the reduced-order plant model and the neglected plant model are derived. For the special case of collocated actuators and sensors, these expressions lead to the derivation of constraints on the controller gains that are, given the validity of the perturbation technique, sufficient to guarantee the stability of the closed-loop system. A case study demonstrates the derivation of stabilizing controllers based on the present technique. The use of control and observation synthesis in modifying the dimension of the reduced-order plant model is also discussed. A numerical example is provided for illustration.

  7. Modeling nonlinearities in MEMS oscillators.

    PubMed

    Agrawal, Deepak K; Woodhouse, Jim; Seshia, Ashwin A

    2013-08-01

    We present a mathematical model of a microelectromechanical system (MEMS) oscillator that integrates the nonlinearities of the MEMS resonator and the oscillator circuitry in a single numerical modeling environment. This is achieved by transforming the conventional nonlinear mechanical model into the electrical domain while simultaneously considering the prominent nonlinearities of the resonator. The proposed nonlinear electrical model is validated by comparing the simulated amplitude-frequency response with measurements on an open-loop electrically addressed flexural silicon MEMS resonator driven to large motional amplitudes. Next, the essential nonlinearities in the oscillator circuit are investigated and a mathematical model of a MEMS oscillator is proposed that integrates the nonlinearities of the resonator. The concept is illustrated for MEMS transimpedance-amplifier-based square-wave and sine-wave oscillators. Closed-form expressions of steady-state output power and output frequency are derived for both oscillator models and compared with experimental and simulation results, with a good match in the predicted trends in all three cases.

  8. Mechanics of airflow in the human nasal airways.

    PubMed

    Doorly, D J; Taylor, D J; Schroter, R C

    2008-11-30

    The mechanics of airflow in the human nasal airways is reviewed, drawing on the findings of experimental and computational model studies. Modelling inevitably requires simplifications and assumptions, particularly given the complexity of the nasal airways. The processes entailed in modelling the nasal airways (from defining the model, to its production and, finally, validating the results) are critically examined, both for physical models and for computational simulations. Uncertainty still surrounds the appropriateness of the various assumptions made in modelling, particularly with regard to the nature of flow. New results are presented in which high-speed particle image velocimetry (PIV) and direct numerical simulation are applied to investigate the development of flow instability in the nasal cavity. These illustrate some of the improved capabilities afforded by technological developments for future model studies. The need for further improvements in characterising airway geometry and flow together with promising new methods are briefly discussed.

  9. Uncertainty in modeled upper ocean heat content change

    NASA Astrophysics Data System (ADS)

    Tokmakian, Robin; Challenor, Peter

    2014-02-01

    This paper examines the uncertainty in the change in the heat content in the ocean component of a general circulation model. We describe the design and implementation of our statistical methodology. Using an ensemble of model runs and an emulator, we produce an estimate of the full probability distribution function (PDF) for the change in upper ocean heat in an Atmosphere/Ocean General Circulation Model, the Community Climate System Model v. 3, across a multi-dimensional input space. We show how the emulator of the GCM's heat content change and hence, the PDF, can be validated and how implausible outcomes from the emulator can be identified when compared to observational estimates of the metric. In addition, the paper describes how the emulator outcomes and related uncertainty information might inform estimates of the same metric from a multi-model Coupled Model Intercomparison Project phase 3 ensemble. We illustrate how to (1) construct an ensemble based on experiment design methods, (2) construct and evaluate an emulator for a particular metric of a complex model, (3) validate the emulator using observational estimates and explore the input space with respect to implausible outcomes and (4) contribute to the understanding of uncertainties within a multi-model ensemble. Finally, we estimate the most likely value for heat content change and its uncertainty for the model, with respect to both observations and the uncertainty in the value for the input parameters.
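
    The emulator-plus-implausibility workflow described above can be sketched with an off-the-shelf Gaussian process regressor. The toy simulator, design size, observation value and implausibility cut-off below are invented placeholders for illustration; they do not represent the CCSM3 configuration or the paper's statistical machinery.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(1)

      # Toy stand-in for an expensive climate model: a scalar metric (e.g. heat-content
      # change) as a function of two input parameters.
      def simulator(x):
          return 1.5 * x[:, 0] - 0.8 * x[:, 1] ** 2 + 0.1 * np.sin(5 * x[:, 0])

      X_design = rng.uniform(0, 1, size=(40, 2))   # ensemble design (a Latin hypercube in practice)
      y_design = simulator(X_design)

      gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(1e-4), normalize_y=True)
      gp.fit(X_design, y_design)                   # the emulator

      # Implausibility of candidate inputs given an "observed" value with uncertainty.
      obs, obs_sd = 0.4, 0.05
      X_cand = rng.uniform(0, 1, size=(1000, 2))
      mu, sd = gp.predict(X_cand, return_std=True)
      implausibility = np.abs(mu - obs) / np.sqrt(sd ** 2 + obs_sd ** 2)
      plausible = X_cand[implausibility < 3.0]     # conventional cut-off of 3
      print(f"{len(plausible)} of {len(X_cand)} candidate inputs remain plausible")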

  10. Modelling the pre-assessment learning effects of assessment: evidence in the validity chain

    PubMed Central

    Cilliers, Francois J; Schuwirth, Lambert W T; van der Vleuten, Cees P M

    2012-01-01

    OBJECTIVES We previously developed a model of the pre-assessment learning effects of consequential assessment and started to validate it. The model comprises assessment factors, mechanism factors and learning effects. The purpose of this study was to continue the validation process. For stringency, we focused on a subset of assessment factor–learning effect associations that featured least commonly in a baseline qualitative study. Our aims were to determine whether these uncommon associations were operational in a broader but similar population to that in which the model was initially derived. METHODS A cross-sectional survey of 361 senior medical students at one medical school was undertaken using a purpose-made questionnaire based on a grounded theory and comprising pairs of written situational tests. In each pair, the manifestation of an assessment factor was varied. The frequencies at which learning effects were selected were compared for each item pair, using an adjusted alpha to assign significance. The frequencies at which mechanism factors were selected were calculated. RESULTS There were significant differences in the learning effect selected between the two scenarios of an item pair for 13 of this subset of 21 uncommon associations, even when a p-value of < 0.00625 was considered to indicate significance. Three mechanism factors were operational in most scenarios: agency; response efficacy, and response value. CONCLUSIONS For a subset of uncommon associations in the model, the role of most assessment factor–learning effect associations and the mechanism factors involved were supported in a broader but similar population to that in which the model was derived. Although model validation is an ongoing process, these results move the model one step closer to the stage of usefully informing interventions. Results illustrate how factors not typically included in studies of the learning effects of assessment could confound the results of interventions aimed at using assessment to influence learning. PMID:23078685

  11. Modelling the pre-assessment learning effects of assessment: evidence in the validity chain.

    PubMed

    Cilliers, Francois J; Schuwirth, Lambert W T; van der Vleuten, Cees P M

    2012-11-01

    We previously developed a model of the pre-assessment learning effects of consequential assessment and started to validate it. The model comprises assessment factors, mechanism factors and learning effects. The purpose of this study was to continue the validation process. For stringency, we focused on a subset of assessment factor-learning effect associations that featured least commonly in a baseline qualitative study. Our aims were to determine whether these uncommon associations were operational in a broader but similar population to that in which the model was initially derived. A cross-sectional survey of 361 senior medical students at one medical school was undertaken using a purpose-made questionnaire based on a grounded theory and comprising pairs of written situational tests. In each pair, the manifestation of an assessment factor was varied. The frequencies at which learning effects were selected were compared for each item pair, using an adjusted alpha to assign significance. The frequencies at which mechanism factors were selected were calculated. There were significant differences in the learning effect selected between the two scenarios of an item pair for 13 of this subset of 21 uncommon associations, even when a p-value of < 0.00625 was considered to indicate significance. Three mechanism factors were operational in most scenarios: agency; response efficacy, and response value. For a subset of uncommon associations in the model, the role of most assessment factor-learning effect associations and the mechanism factors involved were supported in a broader but similar population to that in which the model was derived. Although model validation is an ongoing process, these results move the model one step closer to the stage of usefully informing interventions. Results illustrate how factors not typically included in studies of the learning effects of assessment could confound the results of interventions aimed at using assessment to influence learning. © Blackwell Publishing Ltd 2012.
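
    The per-item-pair comparison of selection frequencies under an adjusted alpha can be illustrated with a simple contingency-table test. The counts below are invented, the chi-squared test is only one reasonable choice of test, and the Bonferroni-style reading of the threshold (0.05/8 = 0.00625, numerically matching the value quoted above) is an assumption of this sketch rather than a statement about the paper's analysis.

      from scipy.stats import chi2_contingency

      # Hypothetical counts: how often a given learning effect was selected under the
      # two scenarios of one item pair (selected vs. not selected); numbers invented.
      pairs = {
          "pair_01": [[220, 141], [150, 211]],
          "pair_02": [[180, 181], [172, 189]],
      }

      ALPHA_ADJUSTED = 0.05 / 8   # adjusted alpha of 0.00625

      for name, table in pairs.items():
          chi2, p, dof, _ = chi2_contingency(table)
          verdict = "significant" if p < ALPHA_ADJUSTED else "not significant"
          print(f"{name}: chi2={chi2:.2f}, p={p:.4f} -> {verdict} at alpha={ALPHA_ADJUSTED}")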

  12. An Argument Approach to Observation Protocol Validity

    ERIC Educational Resources Information Center

    Bell, Courtney A.; Gitomer, Drew H.; McCaffrey, Daniel F.; Hamre, Bridget K.; Pianta, Robert C.; Qi, Yi

    2012-01-01

    This article develops a validity argument approach for use on observation protocols currently used to assess teacher quality for high-stakes personnel and professional development decisions. After defining the teaching quality domain, we articulate an interpretive argument for observation protocols. To illustrate the types of evidence that might…

  13. Qualification of the flight-critical AFTI/F-16 digital flight control system. [Advanced Fighter Technology Integration

    NASA Technical Reports Server (NTRS)

    Mackall, D. A.; Ishmael, S. D.; Regenie, V. A.

    1983-01-01

    Qualification considerations for assuring the safety of a life-critical digital flight control system include four major areas: systems interactions, verification, validation, and configuration control. The AFTI/F-16 design, development, and qualification illustrate these considerations. In this paper, qualification concepts, procedures, and methodologies are discussed and illustrated through specific examples.

  14. New Monte Carlo model of cylindrical diffusing fibers illustrates axially heterogeneous fluorescence detection: simulation and experimental validation

    PubMed Central

    Baran, Timothy M.; Foster, Thomas H.

    2011-01-01

    We present a new Monte Carlo model of cylindrical diffusing fibers that is implemented with a graphics processing unit. Unlike previously published models that approximate the diffuser as a linear array of point sources, this model is based on the construction of these fibers. This allows for accurate determination of fluence distributions and modeling of fluorescence generation and collection. We demonstrate that our model generates fluence profiles similar to a linear array of point sources, but reveals axially heterogeneous fluorescence detection. With axially homogeneous excitation fluence, approximately 90% of detected fluorescence is collected by the proximal third of the diffuser for μs'/μa = 8 in the tissue and 70 to 88% is collected in this region for μs'/μa = 80. Increased fluorescence detection by the distal end of the diffuser relative to the center section is also demonstrated. Validation of these results was performed by creating phantoms consisting of layered fluorescent regions. Diffusers were inserted into these layered phantoms and fluorescence spectra were collected. Fits to these spectra show quantitative agreement between simulated fluorescence collection sensitivities and experimental results. These results will be applicable to the use of diffusers as detectors for dosimetry in interstitial photodynamic therapy. PMID:21895311

  15. Multi-scale hydrometeorological observation and modelling for flash flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-09-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where the river routing and flooding processes become important. These observations are part of the HyMeX (HYdrological cycle in the Mediterranean EXperiment) enhanced observation period (EOP), which will last 4 years (2012-2015). In terms of hydrological modelling, the objective is to set up regional-scale models, while addressing small and generally ungauged catchments, which represent the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes on various scales.

  16. Multi-scale hydrometeorological observation and modelling for flash-flood understanding

    NASA Astrophysics Data System (ADS)

    Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.

    2014-02-01

    This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km2) where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2) where the river routing and flooding processes become important. These observations are part of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) Enhanced Observation Period (EOP) and lasts four years (2012-2015). In terms of hydrological modelling the objective is to set up models at the regional scale, while addressing small and generally ungauged catchments, which is the scale of interest for flooding risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses, in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes at various scales.

  17. Modeling and measurement of angle-beam wave propagation in a scatterer-free plate

    NASA Astrophysics Data System (ADS)

    Dawson, Alexander J.; Michaels, Jennifer E.; Michaels, Thomas E.

    2017-02-01

    Wavefield imaging has been shown to be a powerful tool for improving the understanding and characterization of wave propagation and scattering in plates. The complete measurement of surface displacement over a 2-D grid provided by wavefield imaging has the potential to serve as a useful means of validating ultrasonic models. Here, a preliminary study of ultrasonic angle-beam wave propagation in a scatterer-free plate using a combination of wavefield measurements and 2-D finite element models is described. Both wavefield imaging and finite element analysis are used to study the propagation of waves at a refracted angle of 56.8° propagating in a 6.35 mm thick aluminum plate. Wavefield imaging is performed using a laser vibrometer mounted on an XYZ scanning stage, which is programmed to move point-to-point on a rectilinear grid to acquire waveform data. The commercial finite element software package, PZFlex, which is specifically designed to handle large, complex ultrasonic problems, is used to create a 2-D cross-sectional model of the transducer and plate. For model validation, vertical surface displacements from both the wavefield measurements and the PZFlex finite element model are compared and found to be in excellent agreement. The validated PZFlex model is then used to explain the mechanism of Rayleigh wave generation by the angle-beam wedge. Since the wavefield measurements are restricted to the specimen surface, the cross-sectional PZFlex model is able to provide insights the wavefield data cannot. This study illustrates how information obtained from ultrasonic experiments and modeling results can be combined to improve understanding of angle-beam wave generation and propagation.

  18. Integrated tokamak modeling: when physics informs engineering and research planning

    NASA Astrophysics Data System (ADS)

    Poli, Francesca

    2017-10-01

    Simulations that integrate virtually all the relevant engineering and physics aspects of a real tokamak experiment are a powerful tool for experimental interpretation, model validation and planning for both present and future devices. This tutorial walks through the building blocks of an ``integrated'' tokamak simulation, such as magnetic flux diffusion, thermal, momentum and particle transport, external heating and current drive sources, wall particle sources and sinks. Emphasis is given to the connection and interplay between external actuators and plasma response, between the slow time scales of the current diffusion and the fast time scales of transport, and how reduced and high-fidelity models can contribute to simulating a whole device. To illustrate the potential and limitations of integrated tokamak modeling for discharge prediction, a helium plasma scenario for the ITER pre-nuclear phase is taken as an example. This scenario presents challenges because it requires core-edge integration and advanced models for interaction between waves and fast ions, which are subject to a limited experimental database for validation and guidance. Starting from a scenario obtained by re-scaling parameters from the demonstration inductive ``ITER baseline'', it is shown how self-consistent simulations that encompass both core and edge plasma regions, as well as high-fidelity heating and current drive source models, are needed to set constraints on the density, magnetic field and heating scheme. This tutorial aims at demonstrating how integrated modeling, when used with an adequate level of criticism, can not only support the design of operational scenarios, but also help to assess the limitations and gaps in the available models, thus indicating where improved modeling tools are required and how present experiments can help their validation and inform research planning. Work supported by DOE under DE-AC02-09CH1146.

  19. Research on dynamic characteristics of motor vibration isolation system through mechanical impedance method

    NASA Astrophysics Data System (ADS)

    Zhao, Xingqian; Xu, Wei; Shuai, Changgeng; Hu, Zechao

    2017-12-01

    A mechanical impedance model of a coupled motor-shaft-bearing system has been developed to predict the dynamic characteristics and has been partially validated by comparing the computed results with the finite element method (FEM), including comparisons of the displacement amplitudes in the x and z directions at the two ends of the flexible coupling and of the normalized vertical reaction force in the z direction at the bearing pedestals. The results demonstrate that the developed model can precisely predict the dynamic characteristics; the main advantage of such a method is that it can clearly illustrate the vibration properties of the motor subsystem, which plays an important role in the design of the isolation system.

  20. Calibration of the computer model describing flows in the water supply system; example of the application of a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Orłowska-Szostak, Maria; Orłowski, Ryszard

    2017-11-01

    The paper discusses some relevant aspects of the calibration of a computer model describing flows in a water supply system. The authors describe an exemplary water supply system and use it as a practical illustration of calibration. A range of measures is discussed and applied that improve the convergence and effective use of calculations in the calibration process, and also the outcome of such calibration, namely the validity of the results obtained. To process the results of the performed measurements, i.e. to estimate pipe roughnesses, the authors used a genetic algorithm implemented in software developed by the Resan Labs company from Brazil.
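
    A generic genetic-algorithm calibration loop of the kind referred to above can be sketched as follows. The hydraulic solver is replaced by a made-up surrogate (simulate_pressures), and the population size, operators and roughness values are invented, so this is a sketch of the general technique and not the Resan Labs implementation.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical calibration of pipe roughness values: `simulate_pressures` stands in
      # for a real hydraulic solver; here it is a made-up surrogate so the example runs
      # on its own.
      TRUE_ROUGHNESS = np.array([0.8, 1.2, 1.5, 0.9])                  # unknown in practice
      measured = np.array([55.0, 52.5, 49.0, 47.5]) - TRUE_ROUGHNESS   # fake "measurements"

      def simulate_pressures(roughness):
          return np.array([55.0, 52.5, 49.0, 47.5]) - roughness        # surrogate model

      def fitness(roughness):
          return -np.sum((simulate_pressures(roughness) - measured) ** 2)

      POP, GENS, N_PIPES = 40, 60, 4
      population = rng.uniform(0.5, 2.0, size=(POP, N_PIPES))

      for _ in range(GENS):
          scores = np.array([fitness(ind) for ind in population])
          parents = population[np.argsort(scores)[-POP // 2:]]         # selection
          children = []
          while len(children) < POP - len(parents):
              a, b = parents[rng.integers(len(parents), size=2)]
              cut = rng.integers(1, N_PIPES)
              child = np.concatenate([a[:cut], b[cut:]])               # crossover
              child += rng.normal(0, 0.02, size=N_PIPES)               # mutation
              children.append(child)
          population = np.vstack([parents, children])

      best = population[np.argmax([fitness(ind) for ind in population])]
      print("estimated roughness:", np.round(best, 3))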

  1. Validation of a new modal performance measure for flexible controllers design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simo, J.B.; Tahan, S.A.; Kamwa, I.

    1996-05-01

    A new modal performance measure for power system stabilizer (PSS) optimization is proposed in this paper. The new method is based on modifying the square envelopes of oscillating modes in order to take into account their damping ratios while minimizing the performance index. This criterion is applied to the optimal design of flexible controllers on a multi-input-multi-output (MIMO) reduced-order model of a prototype power system. The multivariable model includes four generators, each having one input and one output. Linear time-response simulation and transient stability analysis with a nonlinear package confirm the superiority of the proposed criterion and illustrate its effectiveness in decentralized control.

  2. Cosmic backreaction and Gauss's law

    NASA Astrophysics Data System (ADS)

    Fleury, Pierre

    2017-06-01

    Cosmic backreaction refers to the general question of whether a homogeneous and isotropic cosmological model is able to predict the correct expansion dynamics of our inhomogeneous Universe. One aspect of this issue concerns the validity of the continuous approximation: does a system of point masses expand the same way as a fluid does? This article shows that it is not exactly the case in Newtonian gravity, although the associated corrections vanish in an infinite Universe. It turns out that Gauss's law is a key ingredient for such corrections to vanish. Backreaction, therefore, generically arises in alternative theories of gravitation, which threatens the trustworthiness of their cosmological tests. This phenomenon is illustrated with a toy model of massive gravity.

  3. Improving Stiffness-to-weight Ratio of Spot-welded Structures based upon Nonlinear Finite Element Modelling

    NASA Astrophysics Data System (ADS)

    Zhang, Shengyong

    2017-07-01

    Spot welding has been widely used for vehicle body construction due to its advantages of high speed and adaptability for automation. An approach for increasing the stiffness-to-weight ratio of spot-welded structures is investigated based upon nonlinear finite element analysis. Topology optimization is conducted for reducing weight in the overlapping regions by choosing an appropriate topology. Three spot-welded models (lap, double-hat and T-shape) that approximate “typical” vehicle body components are studied for validating and illustrating the proposed method. It is concluded that removing underutilized material from overlapping regions can result in a significant increase in structural stiffness-to-weight ratio.

  4. Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis

    DTIC Science & Technology

    2017-02-01

    Working Paper: Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis. Paul K. Davis, RAND National Security Research...paper proposes and illustrates an analysis-centric paradigm (model-game-model, or what might be better called model-exercise-model in some cases) for...to involve stakeholders in model development from the outset. The model-game-model paradigm was illustrated in an application to crisis planning

  5. A comparison of zero-order, first-order, and monod biotransformation models

    USGS Publications Warehouse

    Bekins, B.A.; Warren, E.; Godsy, E.M.

    1998-01-01

    Under some conditions, a first-order kinetic model is a poor representation of biodegradation in contaminated aquifers. Although it is well known that the assumption of first-order kinetics is valid only when the substrate concentration, S, is much less than the half-saturation constant, K(s), this assumption is often made without verification of this condition. We present a formal error analysis showing that the relative error in the first-order approximation is S/K(s) and in the zero-order approximation the error is K(s)/S. We then examine the problems that arise when the first-order approximation is used outside the range for which it is valid. A series of numerical simulations comparing results of first- and zero-order rate approximations to Monod kinetics for a real data set illustrates that if concentrations observed in the field are higher than K(s), it may be better to model degradation using a zero-order rate expression. Compared with Monod kinetics, extrapolation of a first-order rate to lower concentrations under-predicts the biotransformation potential, while extrapolation to higher concentrations may grossly over-predict the transformation rate. A summary of solubilities and Monod parameters for aerobic benzene, toluene, and xylene (BTX) degradation shows that the a priori assumption of first-order degradation kinetics at sites contaminated with these compounds is not valid. In particular, out of six published values of K(s) for toluene, only one is greater than 2 mg/L, indicating that when toluene is present in concentrations greater than about a part per million, the assumption of first-order kinetics may be invalid. Finally, we apply an existing analytical solution for steady-state one-dimensional advective transport with Monod degradation kinetics to a field data set.
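
    The regimes discussed above (first-order relative error of order S/K(s), zero-order relative error of order K(s)/S) can be illustrated by integrating the three rate laws side by side. The parameter values below are invented and chosen so that S starts well above K(s), where the zero-order approximation should track Monod kinetics more closely than the first-order one.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Illustrative values only: maximum rate k, half-saturation constant Ks,
      # initial substrate S0 >> Ks.
      k, Ks, S0 = 1.0, 0.5, 10.0

      monod       = lambda t, S: -k * S / (Ks + S)
      first_order = lambda t, S: -(k / Ks) * S         # valid when S << Ks
      zero_order  = lambda t, S: -k * np.ones_like(S)  # valid when S >> Ks

      t_eval = np.linspace(0, 5, 50)
      for name, rhs in [("Monod", monod), ("first-order", first_order), ("zero-order", zero_order)]:
          sol = solve_ivp(rhs, (0, 5), [S0], t_eval=t_eval)
          S_end = max(sol.y[0, -1], 0.0)               # clip the zero-order solution at zero
          print(f"{name:12s} S(t=5) = {S_end:.3f}")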

  6. Estimation of Aerosol Optical Depth at Different Wavelengths by Multiple Regression Method

    NASA Technical Reports Server (NTRS)

    Tan, Fuyi; Lim, Hwee San; Abdullah, Khiruddin; Holben, Brent

    2015-01-01

    This study aims to investigate and establish a suitable model that can help to estimate aerosol optical depth (AOD) in order to monitor aerosol variations, especially during non-retrieval time. The relationship between actual ground measurements (such as air pollution index, visibility, relative humidity, temperature, and pressure) and AOD obtained with a CIMEL sun photometer was determined through a series of statistical procedures to produce an AOD prediction model with reasonable accuracy. The AOD prediction model calibrated for each wavelength has a set of coefficients. The model was validated using a set of statistical tests. The validated model was then employed to calculate AOD at different wavelengths. The results show that the proposed model successfully predicted AOD at each studied wavelength ranging from 340 nm to 1020 nm. To illustrate the application of the model, the aerosol size determined using measured AOD data for Penang was compared with that determined using the model. This was done by examining the curvature in the ln [AOD]-ln [wavelength] plot. Consistency was obtained: it was concluded that Penang was dominated by fine-mode aerosol in 2012 and 2013 using both measured and predicted AOD data. These results indicate that the proposed AOD prediction model using routine measurements as input is a promising tool for the regular monitoring of aerosol variation during non-retrieval time.
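
    A minimal sketch of the two steps described above (a per-wavelength multiple regression of AOD on routine ground measurements, followed by the curvature check in the ln[AOD]-ln[wavelength] plot) is given below. All data and coefficients are synthetic, and the reading of the curvature as a fine/coarse mode indicator is only indicative, not the paper's calibrated result.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic stand-ins for the predictors (API, visibility, relative humidity,
      # temperature, pressure) and for AOD at one wavelength.
      n = 300
      X = np.column_stack([
          rng.uniform(20, 120, n),     # air pollution index
          rng.uniform(5, 20, n),       # visibility (km)
          rng.uniform(40, 95, n),      # relative humidity (%)
          rng.uniform(24, 34, n),      # temperature (deg C)
          rng.uniform(1005, 1015, n),  # pressure (hPa)
      ])
      aod = 0.004 * X[:, 0] - 0.01 * X[:, 1] + 0.002 * X[:, 2] + rng.normal(0, 0.02, n)

      # Ordinary least squares fit (one set of coefficients per wavelength).
      A = np.column_stack([np.ones(n), X])
      coef, *_ = np.linalg.lstsq(A, aod, rcond=None)
      print("intercept and regression coefficients:", np.round(coef, 4))

      # Curvature of the ln(AOD)-ln(wavelength) relation: fit a quadratic in ln(wavelength);
      # the second-order coefficient is the curvature used as a mode indicator.
      wavelengths = np.array([340, 440, 500, 675, 870, 1020.0])
      aod_spectrum = 0.6 * (wavelengths / 500.0) ** -1.4 \
          * np.exp(-0.05 * np.log(wavelengths / 500.0) ** 2)            # synthetic spectrum
      a2, a1, a0 = np.polyfit(np.log(wavelengths), np.log(aod_spectrum), 2)
      print("second-order coefficient (curvature):", round(a2, 4))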

  7. A design space exploration for control of Critical Quality Attributes of mAb.

    PubMed

    Bhatia, Hemlata; Read, Erik; Agarabi, Cyrus; Brorson, Kurt; Lute, Scott; Yoon, Seongkyu

    2016-10-15

    A unique "design space (DSp) exploration strategy," defined as a function of four key scenarios, was successfully integrated and validated to enhance the DSp building exercise, by increasing the accuracy of analyses and interpretation of processed data. The four key scenarios, defining the strategy, were based on cumulative analyses of individual models developed for the Critical Quality Attributes (23 Glycan Profiles) considered for the study. The analyses of the CQA estimates and model performances were interpreted as (1) Inside Specification/Significant Model (2) Inside Specification/Non-significant Model (3) Outside Specification/Significant Model (4) Outside Specification/Non-significant Model. Each scenario was defined and illustrated through individual models of CQA aligning the description. The R(2), Q(2), Model Validity and Model Reproducibility estimates of G2, G2FaGbGN, G0 and G2FaG2, respectively, signified the four scenarios stated above. Through further optimizations, including the estimation of Edge of Failure and Set Point Analysis, wider and accurate DSps were created for each scenario, establishing critical functional relationship between Critical Process Parameters (CPPs) and Critical Quality Attributes (CQAs). A DSp provides the optimal region for systematic evaluation, mechanistic understanding and refining of a QbD approach. DSp exploration strategy will aid the critical process of consistently and reproducibly achieving predefined quality of a product throughout its lifecycle. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. A dynamic multi-scale Markov model based methodology for remaining life prediction

    NASA Astrophysics Data System (ADS)

    Yan, Jihong; Guo, Chaozhong; Wang, Xing

    2011-05-01

    The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by a hard division approach. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model through a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed is designed to validate the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.
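
    The state-division-plus-Markov idea can be sketched as follows. scikit-learn ships no fuzzy C-means, so plain k-means stands in for FCM here, and the synthetic degradation index, number of states and absorbing-chain remaining-life estimate are simplifications of the paper's dynamic multi-scale scheme rather than a reproduction of it.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(4)

      # Synthetic, monotonically increasing degradation index
      # (stand-in for a fused multi-feature index).
      index = np.cumsum(np.abs(rng.normal(0.05, 0.05, 400)))

      # State division: k-means used only as a stand-in for FCM.
      K = 5
      states = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(index.reshape(-1, 1))
      order = np.argsort([index[states == k].mean() for k in range(K)])
      relabel = np.empty(K, dtype=int)
      relabel[order] = np.arange(K)
      states = relabel[states]                      # state 0 = healthy ... K-1 = failed

      # Empirical transition matrix from the observed state sequence.
      P = np.zeros((K, K))
      for a, b in zip(states[:-1], states[1:]):
          P[a, b] += 1
      P = P / P.sum(axis=1, keepdims=True)

      # Expected remaining steps until absorption in the failure state,
      # via the fundamental matrix of the absorbing chain.
      Q = P[:-1, :-1]
      N = np.linalg.inv(np.eye(K - 1) - Q)
      print("expected steps to failure from each non-failed state:", np.round(N.sum(axis=1), 1))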

  9. Parameter recovery, bias and standard errors in the linear ballistic accumulator model.

    PubMed

    Visser, Ingmar; Poessé, Rens

    2017-05-01

    The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.

  10. A highly coarse-grained model to simulate entangled polymer melts.

    PubMed

    Zhu, You-Liang; Liu, Hong; Lu, Zhong-Yuan

    2012-04-14

    We introduce a highly coarse-grained model to simulate entangled polymer melts. In this model, a polymer chain is taken as a single coarse-grained particle, and the creation and annihilation of entanglements are regarded as stochastic events in proper time intervals according to certain rules and probabilities. We build the relationship between the probability of appearance of an entanglement between any pair of neighboring chains in a given time interval and the rate of variation of entanglements, which describes the concurrent birth and death of entanglements. The probability of disappearance of entanglements is tuned to keep the total entanglement number around the target value. This model can reflect many characteristics of entanglements and macroscopic properties of polymer melts. As an illustration, we apply this model to simulate a polyethylene melt of C(1000)H(2002) at 450 K and further validate the model by comparing with experimental data and other simulation results.
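
    The birth-death bookkeeping for entanglements can be illustrated with the toy loop below, in which the creation probability is fixed and the destruction probability is tuned each step to hold the total near a target. All numbers are invented and no chain-level physics is represented; it only shows the tuning idea described above.

      import numpy as np

      rng = np.random.default_rng(5)

      # Entanglements between neighbouring chain pairs appear with a fixed probability
      # per time interval; the removal probability is adjusted so the total stays near
      # a target value.
      N_PAIRS, TARGET, STEPS = 2000, 500, 1000
      P_CREATE = 0.02
      entangled = np.zeros(N_PAIRS, dtype=bool)
      history = []

      for _ in range(STEPS):
          # births among currently un-entangled neighbour pairs
          births = (~entangled) & (rng.random(N_PAIRS) < P_CREATE)
          # tune the death probability so expected births and deaths balance at the target
          n_current = entangled.sum()
          expected_births = P_CREATE * (N_PAIRS - n_current)
          p_destroy = min(1.0, expected_births / max(n_current, 1) * (n_current / TARGET))
          deaths = entangled & (rng.random(N_PAIRS) < p_destroy)
          entangled = (entangled | births) & ~deaths
          history.append(entangled.sum())

      print("mean entanglement number over the last 200 steps:", np.mean(history[-200:]))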

  11. Automated Analysis of Stateflow Models

    NASA Technical Reports Server (NTRS)

    Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsaia, Temesghen; Thirioux, Xavier

    2017-01-01

    Stateflow is a widely used modeling framework for embedded and cyber-physical systems where control software interacts with physical processes. In this work, we present a framework for fully automated safety verification of Stateflow models. Our approach is twofold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open-source toolbox that can be integrated into the existing MathWorks Simulink/Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial-scale Stateflow models.

  12. Degree of coupling and efficiency of energy converters far-from-equilibrium

    NASA Astrophysics Data System (ADS)

    Vroylandt, Hadrien; Lacoste, David; Verley, Gatien

    2018-02-01

    In this paper, we introduce a real symmetric and positive semi-definite matrix, which we call the non-equilibrium conductance matrix, and which generalizes the Onsager response matrix for a system in a non-equilibrium stationary state. We then express the thermodynamic efficiency in terms of the coefficients of this matrix using a parametrization similar to the one used near equilibrium. This framework, which remains valid arbitrarily far from equilibrium, allows the thermodynamic efficiency to be bounded by a universal function depending only on the degree of coupling between input and output currents. It also leads to new general power-efficiency trade-offs valid for macroscopic machines, which are compared to trade-offs previously obtained from uncertainty relations. We illustrate our results on a unicycle heat-to-heat converter and on a discrete model of a molecular motor.

  13. Validity, reliability, and generalizability in qualitative research

    PubMed Central

    Leung, Lawrence

    2015-01-01

    In general practice, qualitative research contributes as significantly as quantitative research, in particular regarding psycho-social aspects of patient care, health services provision, policy setting, and health administration. In contrast to quantitative research, qualitative research as a whole has been constantly critiqued, if not disparaged, for the lack of consensus on assessing its quality and robustness. This article illustrates with five published studies how qualitative research can impact and reshape the discipline of primary care, spiraling out from clinic-based health screening to community-based disease monitoring, evaluation of out-of-hours triage services, a provincial psychiatric care pathways model and, finally, national legislation of core measures for children's healthcare insurance. Fundamental concepts of validity, reliability, and generalizability as applicable to qualitative research are then addressed, with an update on the current views and controversies. PMID:26288766

  14. Statistical learning theory for high dimensional prediction: Application to criterion-keyed scale development.

    PubMed

    Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R

    2016-12-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms (supervised principal components, regularization, and boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach (or perhaps because of them), SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
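
    A minimal sketch of criterion-keyed selection via a regularization-type SLT algorithm is shown below, using L1-penalized logistic regression with the penalty strength chosen by cross-validation; the cross-validation step is what targets expected prediction error rather than within-sample fit. The item pool and outcome are synthetic, and this is only one of the three algorithm families discussed above.

      import numpy as np
      from sklearn.linear_model import LogisticRegressionCV
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(6)

      # Synthetic "item pool": 200 items, of which only a handful truly relate to a
      # binary mortality-like outcome.
      n, p, p_true = 1000, 200, 8
      X = rng.normal(size=(n, p))
      beta = np.zeros(p)
      beta[:p_true] = 0.5
      y = (X @ beta + rng.normal(size=n) > 0).astype(int)

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

      # L1-regularized logistic regression; the penalty is tuned by 5-fold cross-validation.
      model = LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="liblinear", max_iter=5000)
      model.fit(X_train, y_train)

      selected = np.flatnonzero(model.coef_[0])
      print(f"{len(selected)} items retained for the criterion-keyed scale")
      print("held-out accuracy:", round(model.score(X_test, y_test), 3))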

  15. The role of observational reference data for climate downscaling: Insights from the VALUE COST Action

    NASA Astrophysics Data System (ADS)

    Kotlarski, Sven; Gutiérrez, José M.; Boberg, Fredrik; Bosshard, Thomas; Cardoso, Rita M.; Herrera, Sixto; Maraun, Douglas; Mezghani, Abdelkader; Pagé, Christian; Räty, Olle; Stepanek, Petr; Soares, Pedro M. M.; Szabo, Peter

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of downscaling methods. Such assessments can be expected to crucially depend on the existence of accurate and reliable observational reference data. In dynamical downscaling, observational data can influence model development itself and, later on, model evaluation, parameter calibration and added value assessment. In empirical-statistical downscaling, observations serve as predictand data and directly influence model calibration with corresponding effects on downscaled climate change projections. We here present a comprehensive assessment of the influence of uncertainties in observational reference data and of scale-related issues on several of the above-mentioned aspects. First, temperature and precipitation characteristics as simulated by a set of reanalysis-driven EURO-CORDEX RCM experiments are validated against three different gridded reference data products, namely (1) the EOBS dataset (2) the recently developed EURO4M-MESAN regional re-analysis, and (3) several national high-resolution and quality-controlled gridded datasets that recently became available. The analysis reveals a considerable influence of the choice of the reference data on the evaluation results, especially for precipitation. It is also illustrated how differences between the reference data sets influence the ranking of RCMs according to a comprehensive set of performance measures.

  16. Collaborative development of predictive toxicology applications

    PubMed Central

    2010-01-01

    OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals. The OpenTox Framework includes APIs and services for compounds, datasets, features, algorithms, models, ontologies, tasks, validation, and reporting which may be combined into multiple applications satisfying a variety of different user needs. OpenTox applications are based on a set of distributed, interoperable OpenTox API-compliant REST web services. The OpenTox approach to ontology allows for efficient mapping of complementary data coming from different datasets into a unifying structure having a shared terminology and representation. Two initial OpenTox applications are presented as an illustration of the potential impact of OpenTox for high-quality and consistent structure-activity relationship modelling of REACH-relevant endpoints: ToxPredict which predicts and reports on toxicities for endpoints for an input chemical structure, and ToxCreate which builds and validates a predictive toxicity model based on an input toxicology dataset. Because of the extensible nature of the standardised Framework design, barriers of interoperability between applications and content are removed, as the user may combine data, models and validation from multiple sources in a dependable and time-effective way. PMID:20807436

  17. Collaborative development of predictive toxicology applications.

    PubMed

    Hardy, Barry; Douglas, Nicki; Helma, Christoph; Rautenberg, Micha; Jeliazkova, Nina; Jeliazkov, Vedrin; Nikolova, Ivelina; Benigni, Romualdo; Tcheremenskaia, Olga; Kramer, Stefan; Girschick, Tobias; Buchwald, Fabian; Wicker, Joerg; Karwath, Andreas; Gütlein, Martin; Maunz, Andreas; Sarimveis, Haralambos; Melagraki, Georgia; Afantitis, Antreas; Sopasakis, Pantelis; Gallagher, David; Poroikov, Vladimir; Filimonov, Dmitry; Zakharov, Alexey; Lagunin, Alexey; Gloriozova, Tatyana; Novikov, Sergey; Skvortsova, Natalia; Druzhilovsky, Dmitry; Chawla, Sunil; Ghosh, Indira; Ray, Surajit; Patel, Hitesh; Escher, Sylvia

    2010-08-31

    OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals. The OpenTox Framework includes APIs and services for compounds, datasets, features, algorithms, models, ontologies, tasks, validation, and reporting which may be combined into multiple applications satisfying a variety of different user needs. OpenTox applications are based on a set of distributed, interoperable OpenTox API-compliant REST web services. The OpenTox approach to ontology allows for efficient mapping of complementary data coming from different datasets into a unifying structure having a shared terminology and representation. Two initial OpenTox applications are presented as an illustration of the potential impact of OpenTox for high-quality and consistent structure-activity relationship modelling of REACH-relevant endpoints: ToxPredict which predicts and reports on toxicities for endpoints for an input chemical structure, and ToxCreate which builds and validates a predictive toxicity model based on an input toxicology dataset. Because of the extensible nature of the standardised Framework design, barriers of interoperability between applications and content are removed, as the user may combine data, models and validation from multiple sources in a dependable and time-effective way.

  18. The impact of registration accuracy on imaging validation study design: A novel statistical power calculation.

    PubMed

    Gibson, Eli; Fenster, Aaron; Ward, Aaron D

    2013-10-01

    Novel imaging modalities are pushing the boundaries of what is possible in medical imaging, but their signal properties are not always well understood. The evaluation of these novel imaging modalities is critical to achieving their research and clinical potential. Image registration of novel modalities to accepted reference standard modalities is an important part of characterizing the modalities and elucidating the effect of underlying focal disease on the imaging signal. The strengths of the conclusions drawn from these analyses are limited by statistical power. Based on the observation that in this context, statistical power depends in part on uncertainty arising from registration error, we derive a power calculation formula relating registration error, number of subjects, and the minimum detectable difference between normal and pathologic regions on imaging, for an imaging validation study design that accommodates signal correlations within image regions. Monte Carlo simulations were used to evaluate the derived models and test the strength of their assumptions, showing that the model yielded predictions of the power, the number of subjects, and the minimum detectable difference of simulated experiments accurate to within a maximum error of 1% when the assumptions of the derivation were met, and characterizing sensitivities of the model to violations of the assumptions. The use of these formulae is illustrated through a calculation of the number of subjects required for a case study, modeled closely after a prostate cancer imaging validation study currently taking place at our institution. The power calculation formulae address three central questions in the design of imaging validation studies: (1) What is the maximum acceptable registration error? (2) How many subjects are needed? (3) What is the minimum detectable difference between normal and pathologic image regions? Copyright © 2013 Elsevier B.V. All rights reserved.
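
    The paper's closed-form power formula is not reproduced here, but the Monte Carlo logic it was checked against can be sketched as follows: registration error is modeled as a mixing of normal and pathologic signals, and power is estimated as the fraction of simulated studies that detect the difference. All parameter values below are invented for illustration and do not come from the study.

      import numpy as np
      from scipy.stats import ttest_rel

      rng = np.random.default_rng(7)

      def simulated_power(n_subjects, true_diff=0.5, signal_sd=1.0,
                          reg_error_mix=0.2, n_sims=2000, alpha=0.05):
          """Power to detect a normal-vs-pathologic signal difference when registration
          error mixes a fraction of the 'wrong' region into each measurement."""
          hits = 0
          for _ in range(n_sims):
              normal = rng.normal(0.0, signal_sd, n_subjects)
              path = rng.normal(true_diff, signal_sd, n_subjects)
              # registration error blends the two regions' signals
              normal_obs = (1 - reg_error_mix) * normal + reg_error_mix * path
              path_obs = (1 - reg_error_mix) * path + reg_error_mix * normal
              if ttest_rel(path_obs, normal_obs).pvalue < alpha:
                  hits += 1
          return hits / n_sims

      for n in (10, 20, 40):
          print(f"n={n:3d}  estimated power = {simulated_power(n):.2f}")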

  19. Numerical computation of Pop plot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    The Pop plot (distance-of-run to detonation versus initial shock pressure) is a key characterization of shock initiation in a heterogeneous explosive. Reactive burn models for high explosives (HE) must reproduce the experimental Pop plot to have any chance of accurately predicting shock initiation phenomena. This report describes a methodology for automating the computation of a Pop plot for a specific explosive with a given HE model. Illustrative examples of the computation are shown for PBX 9502 with three burn models (SURF, WSD and Forest Fire) utilizing the xRage code, which is the Eulerian ASC hydrocode at LANL. Comparison of the numerical and experimental Pop plot can be the basis for a validation test or an aid in calibrating the burn rate of an HE model. Issues with calibration are discussed.

  20. Using meta-analytic path analysis to test theoretical predictions in health behavior: An illustration based on meta-analyses of the theory of planned behavior.

    PubMed

    Hagger, Martin S; Chan, Derwin K C; Protogerou, Cleo; Chatzisarantis, Nikos L D

    2016-08-01

    Synthesizing research on social cognitive theories applied to health behavior is an important step in the development of an evidence base of psychological factors as targets for effective behavioral interventions. However, few meta-analyses of research on social cognitive theories in health contexts have conducted simultaneous tests of theoretically-stipulated pattern effects using path analysis. We argue that conducting path analyses of meta-analytic effects among constructs from social cognitive theories is important to test nomological validity, account for mediation effects, and evaluate unique effects of theory constructs independent of past behavior. We illustrate our points by conducting new analyses of two meta-analyses of a popular theory applied to health behaviors, the theory of planned behavior. We conducted meta-analytic path analyses of the theory in two behavioral contexts (alcohol and dietary behaviors) using data from the primary studies included in the original meta-analyses augmented to include intercorrelations among constructs and relations with past behavior missing from the original analysis. Findings supported the nomological validity of the theory and its hypotheses for both behaviors, confirmed important model processes through mediation analysis, demonstrated the attenuating effect of past behavior on theory relations, and provided estimates of the unique effects of theory constructs independent of past behavior. Our analysis illustrates the importance of conducting a simultaneous test of theory-stipulated effects in meta-analyses of social cognitive theories applied to health behavior. We recommend researchers adopt this analytic procedure when synthesizing evidence across primary tests of social cognitive theories in health. Copyright © 2016 Elsevier Inc. All rights reserved.
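
    The core computation of meta-analytic path analysis (turning a pooled correlation matrix into standardized path coefficients) can be sketched as below. The correlation values and the particular theory-of-planned-behavior specification are invented placeholders rather than the estimates from the meta-analyses discussed above.

      import numpy as np

      # Pooled correlation matrix among TPB constructs; the numbers are invented.
      vars_ = ["attitude", "norm", "pbc", "past_behavior", "intention", "behavior"]
      R = np.array([
          [1.00, 0.35, 0.30, 0.30, 0.50, 0.30],
          [0.35, 1.00, 0.25, 0.20, 0.35, 0.20],
          [0.30, 0.25, 1.00, 0.25, 0.40, 0.30],
          [0.30, 0.20, 0.25, 1.00, 0.45, 0.55],
          [0.50, 0.35, 0.40, 0.45, 1.00, 0.45],
          [0.30, 0.20, 0.30, 0.55, 0.45, 1.00],
      ])

      def path_coefficients(predictors, outcome):
          """Standardized regression weights implied by the correlation matrix."""
          ix = [vars_.index(v) for v in predictors]
          iy = vars_.index(outcome)
          Rxx = R[np.ix_(ix, ix)]
          rxy = R[ix, iy]
          return dict(zip(predictors, np.linalg.solve(Rxx, rxy).round(3)))

      # Intention regressed on attitude, norm, PBC and past behavior; behavior on
      # intention, PBC and past behavior (one common TPB specification).
      print("intention <-", path_coefficients(["attitude", "norm", "pbc", "past_behavior"], "intention"))
      print("behavior  <-", path_coefficients(["intention", "pbc", "past_behavior"], "behavior"))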

  1. Human eye haptics-based multimedia.

    PubMed

    Velandia, David; Uribe-Quevedo, Alvaro; Perez-Gutierrez, Byron

    2014-01-01

    Immersive and interactive multimedia applications offer complementary study tools in anatomy, as users can explore 3D models while obtaining information about the organ, tissue, or part being explored. Haptics increases the sense of interaction with virtual objects, improving the user experience in a more realistic manner. Common tools for studying the eye are books, illustrations, and assembly models; more recently, these are being complemented by mobile apps whose 3D capabilities, computing power, and user base are growing. The goal of this project is to develop a complementary eye anatomy and pathology study tool that uses deformable models within a multimedia application, offering students the opportunity to explore the eye from up close and from within, together with relevant information. Validation of the tool provided feedback on the potential of the development, along with suggestions for improving haptic feedback and navigation.

  2. Modeling and design for electromagnetic surface wave devices

    NASA Astrophysics Data System (ADS)

    La Spada, Luigi; Haq, Sajad; Hao, Yang

    2017-09-01

    A great deal of interest has recently reemerged in the study of surface waves. The possibility of controlling and manipulating electromagnetic wave propagation at will opens many new research areas and leads to many novel applications in engineering. In this paper, we present a comprehensive modeling and design approach for surface wave cloaks, based on graded-refractive-index materials and the theory of transformation optics. It can also be applied to other forms of surface wave manipulation, in terms of both amplitude and phase. We present a general method illustrating how this can be achieved, from modeling to the final design. The proposed approach is shown to be versatile and to allow ease of manufacturing, thereby demonstrating great potential for practical applications.

  3. A stochastic agent-based model of pathogen propagation in dynamic multi-relational social networks

    PubMed Central

    Khan, Bilal; Dombrowski, Kirk; Saad, Mohamed

    2015-01-01

    We describe a general framework for modeling and stochastic simulation of epidemics in realistic dynamic social networks, which incorporates heterogeneity in the types of individuals, types of interconnecting risk-bearing relationships, and types of pathogens transmitted across them. Dynamism is supported through arrival and departure processes, continuous restructuring of risk relationships, and changes to pathogen infectiousness, as mandated by natural history; dynamism is regulated through constraints on the local agency of individual nodes and their risk behaviors, while simulation trajectories are validated using system-wide metrics. To illustrate its utility, we present a case study that applies the proposed framework towards a simulation of HIV in artificial networks of intravenous drug users (IDUs) modeled using data collected in the Social Factors for HIV Risk survey. PMID:25859056
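    As a hedged, greatly simplified sketch of the kind of simulation the framework supports, the toy model below propagates an infection over a contact network that is partially rewired at each step. The real framework tracks typed agents, typed risk relationships, and pathogen natural history; the graph size, transmission probability, and rewiring fraction here are assumptions for illustration only.

    ```python
    # Toy stochastic agent-based transmission process on a dynamic contact network.
    import random
    import networkx as nx

    random.seed(1)
    G = nx.erdos_renyi_graph(n=200, p=0.03, seed=1)      # initial risk network (assumed)
    infected = {random.randrange(200)}                   # single index case
    beta, rewire_frac, steps = 0.05, 0.02, 50            # illustrative parameters

    for t in range(steps):
        # dynamism: a small fraction of risk relationships are re-drawn each step
        for u, v in random.sample(list(G.edges()), int(rewire_frac * G.number_of_edges())):
            if G.has_edge(u, v):
                G.remove_edge(u, v)
            a, b = random.sample(list(G.nodes()), 2)
            G.add_edge(a, b)
        # transmission along the current risk-bearing edges
        newly = {v for u in infected for v in G.neighbors(u)
                 if v not in infected and random.random() < beta}
        infected |= newly

    print(f"prevalence after {steps} steps: {len(infected) / G.number_of_nodes():.2%}")
    ```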

  4. Modeling Framework and Validation of a Smart Grid and Demand Response System for Wind Power Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broeer, Torsten; Fuller, Jason C.; Tuffner, Francis K.

    2014-01-31

    Electricity generation from wind power and other renewable energy sources is increasing, and their variability introduces new challenges to the power system. The emergence of smart grid technologies in recent years has seen a paradigm shift in redefining the electrical system of the future, in which controlled response of the demand side is used to balance fluctuations and intermittencies from the generation side. This paper presents a modeling framework for an integrated electricity system where loads become an additional resource. The agent-based model represents a smart grid power system integrating generators, transmission, distribution, loads and market. The model incorporates generator and load controllers, allowing suppliers and demanders to bid into a Real-Time Pricing (RTP) electricity market. The modeling framework is applied to represent a physical demonstration project conducted on the Olympic Peninsula, Washington, USA, and validation simulations are performed using actual dynamic data. Wind power is then introduced into the power generation mix illustrating the potential of demand response to mitigate the impact of wind power variability, primarily through thermostatically controlled loads. The results also indicate that effective implementation of Demand Response (DR) to assist integration of variable renewable energy resources requires a diversity of loads to ensure functionality of the overall system.

  5. An Enhanced Engineering Perspective of Global Climate Systems and Statistical Formulation of Terrestrial CO2 Exchanges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Yuanshun; Baek, Seung H.; Garcia-Diza, Alberto

    2012-01-01

    This paper designs a comprehensive approach based on the engineering machine/system concept, to model, analyze, and assess the level of CO2 exchange between the atmosphere and terrestrial ecosystems, which is an important factor in understanding changes in global climate. The focus of this article is on spatial patterns and on the correlation between levels of CO2 fluxes and a variety of influencing factors in eco-environments. The engineering/machine concept used is a system protocol that includes the sequential activities of design, test, observe, and model. This concept is applied to explicitly include various influencing factors and interactions associated with CO2 fluxes. To formulate effective models of a large and complex climate system, this article introduces a modeling technique that will be referred to as Stochastic Filtering Analysis of Variance (SF-ANOVA). The CO2 flux data observed from some sites of AmeriFlux are used to illustrate and validate the analysis, prediction and globalization capabilities of the proposed engineering approach and the SF-ANOVA technology. The SF-ANOVA modeling approach was compared to stepwise regression, ridge regression, and neural networks. The comparison indicated that the proposed approach is a valid and effective tool with similar accuracy and less complexity than the other procedures.

  6. Meteorological and air pollution modeling for an urban airport

    NASA Technical Reports Server (NTRS)

    Swan, P. R.; Lee, I. Y.

    1980-01-01

    Results are presented of numerical experiments modeling meteorology, multiple pollutant sources, and nonlinear photochemical reactions for the case of an airport in a large urban area with complex terrain. A planetary boundary-layer model which predicts the mixing depth and generates wind, moisture, and temperature fields was used; it utilizes only surface and synoptic boundary conditions as input data. A version of the Hecht-Seinfeld-Dodge chemical kinetics model is integrated with a new, rapid numerical technique; both the San Francisco Bay Area Air Quality Management District source inventory and the San Jose Airport aircraft inventory are utilized. The air quality model results are presented in contour plots; the combined results illustrate that the highly nonlinear interactions which are present require that the chemistry and meteorology be considered simultaneously to make a valid assessment of the effects of individual sources on regional air quality.

  7. A Layered Decision Model for Cost-Effective System Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Huaqiang; Alves-Foss, James; Soule, Terry

    System security involves decisions in at least three areas: identification of well-defined security policies, selection of cost-effective defence strategies, and implementation of real-time defence tactics. Although choices made in each of these areas affect the others, existing decision models typically handle these three decision areas in isolation. There is no comprehensive tool that can integrate them to provide a single efficient model for safeguarding a network. In addition, there is no clear way to determine which particular combinations of defence decisions result in cost-effective solutions. To address these problems, this paper introduces a Layered Decision Model (LDM) for use in deciding how to address defence decisions based on their cost-effectiveness. To validate the LDM and illustrate how it is used, we used simulation to test model rationality and applied the LDM to the design of system security for an e-commerce business case.

  8. Assessment and validation of the community radiative transfer model for ice cloud conditions

    NASA Astrophysics Data System (ADS)

    Yi, Bingqi; Yang, Ping; Weng, Fuzhong; Liu, Quanhua

    2014-11-01

    The performance of the Community Radiative Transfer Model (CRTM) under ice cloud conditions is evaluated and improved with the implementation of the MODIS Collection 6 ice cloud optical property model, based on the use of severely roughened solid column aggregates and a modified gamma particle size distribution. New ice cloud bulk scattering properties (namely, the extinction efficiency, single-scattering albedo, asymmetry factor, and scattering phase function) suitable for application to the CRTM are calculated using the most up-to-date ice particle optical property library. CRTM-based simulations illustrate reasonable accuracy in comparison with counterparts derived from a combination of the Discrete Ordinate Radiative Transfer (DISORT) model and the Line-by-line Radiative Transfer Model (LBLRTM). Furthermore, simulations of the top-of-the-atmosphere brightness temperature with CRTM for the Cross-track Infrared Sounder (CrIS) are carried out to further evaluate the updated CRTM ice cloud optical property look-up table.

  9. Identifying fMRI Model Violations with Lagrange Multiplier Tests

    PubMed Central

    Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor

    2013-01-01

    The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665
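    As a hedged illustration of the general Lagrange multiplier (score) testing idea, and not the authors' exact statistic, the sketch below uses the standard auxiliary-regression form: fit the restricted linear model, regress its residuals on the design augmented with the suspected violation term, and compare LM = n·R² against a chi-square reference. The regressor and the quadratic violation are placeholders.

    ```python
    # Generic auxiliary-regression Lagrange multiplier test for an omitted nonlinear term.
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    n = 200
    x = rng.standard_normal(n)                                  # placeholder task regressor
    y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.standard_normal(n)     # data with a quadratic violation

    X_r = np.column_stack([np.ones(n), x])                      # restricted (linear) design
    beta, *_ = np.linalg.lstsq(X_r, y, rcond=None)
    resid = y - X_r @ beta

    X_aux = np.column_stack([X_r, x**2])                        # augmented with suspected term
    g, *_ = np.linalg.lstsq(X_aux, resid, rcond=None)
    fitted = X_aux @ g
    r2 = 1 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)
    lm = n * r2                                                 # df = 1 added regressor
    print(f"LM = {lm:.1f}, p = {chi2.sf(lm, df=1):.3g}")
    ```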

  10. Evaluating the influence of geo-environmental factors on gully erosion in a semi-arid region of Iran: An integrated framework.

    PubMed

    Rahmati, Omid; Tahmasebipour, Naser; Haghizadeh, Ali; Pourghasemi, Hamid Reza; Feizizadeh, Bakhtiar

    2017-02-01

    Despite the importance of soil erosion in sustainable development goals in arid and semi-arid areas, the study of the geo-environmental conditions and factors influencing gully erosion occurrence is rarely undertaken. To address this challenge, the main objective of this study is to apply an integrated approach of Geographic Object-Based Image Analysis (GEOBIA) together with high-spatial-resolution imagery (SPOT-5) for detecting gully erosion features at the Kashkan-Poldokhtar watershed, Iran. We also aimed to apply a Conditional Probability (CP) model for establishing the spatial relationship between gullies and the Geo-Environmental Factors (GEFs). The gully erosion inventory map prepared using GEOBIA and field surveying was randomly partitioned into two subsets: (1) a training subset containing 70% of the gullies, used in the training phase of the CP model; and (2) a validation subset (30%), used to validate the model and confirm its accuracy. Prediction performances of the GEOBIA and CP model were checked by overall accuracy and Receiver Operating Characteristics (ROC) curve methods, respectively. In addition, the influence of all GEFs on gully erosion was evaluated by performing a sensitivity analysis. The validation findings illustrated that the overall accuracy of the GEOBIA approach and the area under the ROC curve for the CP model were 92.4% and 89.9%, respectively. Also, based on the sensitivity analysis, soil texture, drainage density, and lithology have significant effects on gully erosion occurrence. This study has shown that the integrated framework can be successfully used for modeling gully erosion occurrence in a data-poor environment. Copyright © 2016 Elsevier B.V. All rights reserved.
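    A minimal sketch of the 70/30 split-and-validate pattern described above, using a generic probabilistic classifier as a stand-in for the Conditional Probability model and the area under the ROC curve as the validation metric. The feature matrix is synthetic, not the Kashkan-Poldokhtar GEF data.

    ```python
    # 70/30 split, fit on the training subset, validate with ROC AUC on the hold-out subset.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    X = rng.standard_normal((500, 6))                 # 6 geo-environmental factors (placeholder)
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(500) > 0).astype(int)  # gully / no gully

    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(f"validation AUC = {auc:.3f}")
    ```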

  11. Validation of non-stationary precipitation series for site-specific impact assessment: comparison of two statistical downscaling techniques

    NASA Astrophysics Data System (ADS)

    Mullan, Donal; Chen, Jie; Zhang, Xunchang John

    2016-02-01

    Statistical downscaling (SD) methods have become a popular, low-cost and accessible means of bridging the gap between the coarse spatial resolution at which climate models output climate scenarios and the finer spatial scale at which impact modellers require these scenarios, with various different SD techniques used for a wide range of applications across the world. This paper compares the Generator for Point Climate Change (GPCC) model and the Statistical DownScaling Model (SDSM)—two contrasting SD methods—in terms of their ability to generate precipitation series under non-stationary conditions across ten contrasting global climates. The mean, maximum and a selection of distribution statistics as well as the cumulative frequencies of dry and wet spells for four different temporal resolutions were compared between the models and the observed series for a validation period. Results indicate that both methods can generate daily precipitation series that generally closely mirror observed series for a wide range of non-stationary climates. However, GPCC tends to overestimate higher precipitation amounts, whilst SDSM tends to underestimate these. This implies that GPCC is more likely to overestimate the effects of precipitation on a given impact sector, whilst SDSM is likely to underestimate the effects. GPCC performs better than SDSM in reproducing wet and dry day frequency, which is a key advantage for many impact sectors. Overall, the mixed performance of the two methods illustrates the importance of users performing a thorough validation in order to determine the influence of simulated precipitation on their chosen impact sector.
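    A hedged sketch of one of the validation statistics mentioned above: counting the frequency of wet and dry spells of each length in a daily precipitation series. The 1 mm wet-day threshold and the synthetic series are assumptions, not choices taken from the paper.

    ```python
    # Count wet/dry spell lengths in a daily precipitation series.
    import numpy as np
    from itertools import groupby

    def spell_lengths(precip_mm, wet_threshold=1.0):
        """Return dicts mapping spell length -> count for wet and dry runs."""
        is_wet = np.asarray(precip_mm) >= wet_threshold
        wet, dry = {}, {}
        for wet_flag, run in groupby(is_wet):
            n = len(list(run))
            target = wet if wet_flag else dry
            target[n] = target.get(n, 0) + 1
        return wet, dry

    rng = np.random.default_rng(0)
    series = rng.gamma(shape=0.5, scale=4.0, size=365)   # illustrative daily totals (mm)
    wet_spells, dry_spells = spell_lengths(series)
    print("wet spells:", dict(sorted(wet_spells.items())))
    print("dry spells:", dict(sorted(dry_spells.items())))
    ```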

  12. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with an asymmetric distribution for the model errors. To deal with missingness, we employ an informative missing data model. Joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  13. Validity of using photographs to simulate visible qualities of forest recreation environments

    Treesearch

    Robin E. Hoffman; James F. Palmer

    1995-01-01

    Forest recreation managers and researchers interested in conserving and improving the visual quality and recreation opportunities available in forest environments must often resort to simulations as a means of illustrating alternatives for potential users to evaluate. This paper reviews the results of prior research evaluating the validity of using photographic...

  14. Mixing Methods in Randomized Controlled Trials (RCTs): Validation, Contextualization, Triangulation, and Control

    ERIC Educational Resources Information Center

    Spillane, James P.; Pareja, Amber Stitziel; Dorner, Lisa; Barnes, Carol; May, Henry; Huff, Jason; Camburn, Eric

    2010-01-01

    In this paper we described how we mixed research approaches in a Randomized Control Trial (RCT) of a school principal professional development program. Using examples from our study we illustrate how combining qualitative and quantitative data can address some key challenges from validating instruments and measures of mediator variables to…

  15. Disagreement over the Best Way to Use the Word "Validity" and Options for Reaching Consensus

    ERIC Educational Resources Information Center

    Newton, Paul E.; Shaw, Stuart D.

    2016-01-01

    The ability to convey shared meaning with minimal ambiguity is highly desirable for technical terms within disciplines and professions. Unfortunately, there is no widespread professional consensus over the meaning of the word "validity" as it pertains to educational and psychological testing. After illustrating the nature and extent of…

  16. Why We Need Reliable, Valid, and Appropriate Learning Disability Assessments: The Perspective of a Postsecondary Disability Service Provider

    ERIC Educational Resources Information Center

    Wolforth, Joan

    2012-01-01

    This paper discusses issues regarding the validity and reliability of psychoeducational assessments provided to Disability Services Offices at Canadian Universities. Several vignettes illustrate some current issues and the potential consequences when university students are given less than thorough disability evaluations and ascribed diagnoses.…

  17. Criterion-Related Validity: Assessing the Value of Subscores

    ERIC Educational Resources Information Center

    Davison, Mark L.; Davenport, Ernest C., Jr.; Chang, Yu-Feng; Vue, Kory; Su, Shiyang

    2015-01-01

    Criterion-related profile analysis (CPA) can be used to assess whether subscores of a test or test battery account for more criterion variance than does a single total score. Application of CPA to subscore evaluation is described, compared to alternative procedures, and illustrated using SAT data. Considerations other than validity and reliability…

  18. Framework to parameterize and validate APEX to support deployment of the nutrient tracking tool

    USDA-ARS?s Scientific Manuscript database

    Guidelines have been developed to parameterize and validate the Agricultural Policy Environmental eXtender (APEX) to support the Nutrient Tracking Tool (NTT). This follow-up paper presents 1) a case study to illustrate how the developed guidelines are applied in a headwater watershed located in cent...

  19. Developing and Validating Proof Comprehension Tests in Undergraduate Mathematics

    ERIC Educational Resources Information Center

    Mejía-Ramos, Juan Pablo; Lew, Kristen; de la Torre, Jimmy; Weber, Keith

    2017-01-01

    In this article, we describe and illustrate the process by which we developed and validated short, multiple-choice, reliable tests to assess undergraduate students' comprehension of three mathematical proofs. We discuss the purpose for each stage and how it benefited the design of our instruments. We also suggest ways in which this process could…

  20. Evaluating the Unintended Consequences of Assessment Practices: Construct Irrelevance and Construct Underrepresentation

    ERIC Educational Resources Information Center

    Spurgeon, Shawn L.

    2017-01-01

    Construct irrelevance (CI) and construct underrepresentation (CU) are 2 major threats to validity, yet they are rarely discussed within the counseling literature. This article provides information about the relevance of these threats to internal validity. An illustrative case example will be provided to assist counselors in understanding these…

  1. Force Limiting Vibration Tests Evaluated from both Ground Acoustic Tests and FEM Simulations of a Flight Like Vehicle System Assembly

    NASA Technical Reports Server (NTRS)

    Smith, Andrew; LaVerde, Bruce; Waldon, James; Hunt, Ron

    2014-01-01

    Marshall Space Flight Center has conducted a series of ground acoustic tests with the dual goals of informing analytical judgment and validating analytical methods when estimating vibroacoustic responses of launch vehicle subsystems. The process of repeatedly correlating finite element-simulated responses with test-measured responses has assisted in the development of best practices for modeling and post-processing. In recent work, force transducers were integrated to measure interface forces at the base of avionics box equipment. Other force data were indirectly measured using strain gauges. The combination of these direct and indirect force measurements has been used to support and illustrate the advantages of implementing the Force Limiting approach for equipment qualification tests. The comparison of force response from integrated system level tests to measurements at the same locations during component level vibration tests provides an excellent illustration. A second comparison of the measured response cases from the system level acoustic tests to finite element simulations has also produced some principles for assessing the suitability of Finite Element Models (FEMs) for making vibroacoustic estimates. The results indicate that when FEMs are employed to guide force limiting choices, they should include sufficient detail to represent the apparent mass of the system in the frequency range of interest.

  2. Hamiltonian closures in fluid models for plasmas

    NASA Astrophysics Data System (ADS)

    Tassi, Emanuele

    2017-11-01

    This article reviews recent activity on the Hamiltonian formulation of fluid models for plasmas in the non-dissipative limit, with emphasis on the relations between the fluid closures adopted for the different models and the Hamiltonian structures. The review focuses on results obtained during the last decade, but a few classical results are also described, in order to illustrate connections with the most recent developments. With the hope of making the review accessible not only to specialists in the field, an introduction to the mathematical tools applied in the Hamiltonian formalism for continuum models is provided. Subsequently, we review the Hamiltonian formulation of models based on the magnetohydrodynamics description, including those based on the adiabatic and double adiabatic closure. It is shown how Dirac's theory of constrained Hamiltonian systems can be applied to impose the incompressibility closure on a magnetohydrodynamic model and how an extended version of barotropic magnetohydrodynamics, accounting for two-fluid effects, is amenable to a Hamiltonian formulation. Hamiltonian reduced fluid models, valid in the presence of a strong magnetic field, are also reviewed. In particular, reduced magnetohydrodynamics and models assuming cold ions and different closures for the electron fluid are discussed. Hamiltonian models relaxing the cold-ion assumption are then introduced. These include models where finite Larmor radius effects are added by means of the gyromap technique, and gyrofluid models. Numerical simulations of Hamiltonian reduced fluid models investigating the phenomenon of magnetic reconnection are illustrated. The last part of the review concerns recent results based on the derivation of closures preserving a Hamiltonian structure, based on the Hamiltonian structure of parent kinetic models. Identification of such closures for fluid models derived from kinetic systems based on the Vlasov and drift-kinetic equations are presented, and connections with previously discussed fluid models are pointed out.

  3. Deterministic Local Sensitivity Analysis of Augmented Systems - II: Applications to the QUENCH-04 Experiment Using the RELAP5/MOD3.2 Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.

    2005-09-15

    The adjoint sensitivity analysis procedure for augmented systems is illustrated through its application to the RELAP5/MOD3.2 code system. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.

  4. An application in identifying high-risk populations in alternative tobacco product use utilizing logistic regression and CART: a heuristic comparison.

    PubMed

    Lei, Yang; Nollen, Nikki; Ahluwahlia, Jasjit S; Yu, Qing; Mayo, Matthew S

    2015-04-09

    Other forms of tobacco use are increasing in prevalence, yet most tobacco control efforts are aimed at cigarettes. In light of this, it is important to identify individuals who are using both cigarettes and alternative tobacco products (ATPs). Most previous studies have used regression models. We conducted a traditional logistic regression model and a classification and regression tree (CART) model to illustrate and discuss the added advantages of using CART in the setting of identifying high-risk subgroups of ATP users among cigarettes smokers. The data were collected from an online cross-sectional survey administered by Survey Sampling International between July 5, 2012 and August 15, 2012. Eligible participants self-identified as current smokers, African American, White, or Latino (of any race), were English-speaking, and were at least 25 years old. The study sample included 2,376 participants and was divided into independent training and validation samples for a hold out validation. Logistic regression and CART models were used to examine the important predictors of cigarettes + ATP users. The logistic regression model identified nine important factors: gender, age, race, nicotine dependence, buying cigarettes or borrowing, whether the price of cigarettes influences the brand purchased, whether the participants set limits on cigarettes per day, alcohol use scores, and discrimination frequencies. The C-index of the logistic regression model was 0.74, indicating good discriminatory capability. The model performed well in the validation cohort also with good discrimination (c-index = 0.73) and excellent calibration (R-square = 0.96 in the calibration regression). The parsimonious CART model identified gender, age, alcohol use score, race, and discrimination frequencies to be the most important factors. It also revealed interesting partial interactions. The c-index is 0.70 for the training sample and 0.69 for the validation sample. The misclassification rate was 0.342 for the training sample and 0.346 for the validation sample. The CART model was easier to interpret and discovered target populations that possess clinical significance. This study suggests that the non-parametric CART model is parsimonious, potentially easier to interpret, and provides additional information in identifying the subgroups at high risk of ATP use among cigarette smokers.
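    A hedged sketch of the comparison described above: a logistic regression and a CART-style decision tree are fit on a training sample and checked on a hold-out validation sample with the c-index (ROC AUC). The data are synthetic stand-ins, not the Survey Sampling International survey, and the tree settings are illustrative.

    ```python
    # Logistic regression vs. CART on a train/validation split, compared by c-index.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(7)
    X = rng.standard_normal((2376, 9))                 # 9 candidate predictors (placeholder)
    y = (0.8 * X[:, 0] - 0.6 * X[:, 3] + rng.standard_normal(2376) > 0.5).astype(int)

    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=1)
    logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    cart = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50).fit(X_tr, y_tr)

    for name, m in [("logistic", logit), ("CART", cart)]:
        auc = roc_auc_score(y_va, m.predict_proba(X_va)[:, 1])
        print(f"{name}: validation c-index = {auc:.2f}")
    ```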

  5. Some applications of categorical data analysis to epidemiological studies.

    PubMed Central

    Grizzle, J E; Koch, G G

    1979-01-01

    Several examples of categorized data from epidemiological studies are analyzed to illustrate that more informative analyses than tests of independence can be performed by fitting models. All of the analyses fit into a unified conceptual framework that can be performed by weighted least squares. The methods presented show how to calculate point estimates of parameters, asymptotic variances, and asymptotically valid chi-square tests. The examples presented are analysis of relative risks estimated from several 2 x 2 tables, analysis of selected features of life tables, construction of synthetic life tables from cross-sectional studies, and analysis of dose-response curves. PMID:540590

  6. Protein Secondary Structure Prediction Using AutoEncoder Network and Bayes Classifier

    NASA Astrophysics Data System (ADS)

    Wang, Leilei; Cheng, Jinyong

    2018-03-01

    Protein secondary structure prediction belongs to bioinformatics and is an important research area. In this paper, we propose a new approach to protein secondary structure prediction using a Bayes classifier and an autoencoder network. Our experiments cover several algorithms, including the construction of the model and the selection of parameters. The data set is the typical CB513 protein data set. Accuracy is measured as Q3 accuracy under 3-fold cross validation. The results illustrate that the autoencoder network improved the prediction accuracy of protein secondary structure.

  7. Numerical studies of interacting vortices

    NASA Technical Reports Server (NTRS)

    Liu, G. C.; Hsu, C. H.

    1985-01-01

    To get a basic understanding of the physics of flowfields modeled by vortex filaments with finite vortical cores, systematic numerical studies of the interactions of two dimensional vortices and pairs of coaxial axisymmetric circular vortex rings were made. Finite difference solutions of the unsteady incompressible Navier-Stokes equations were carried out using vorticity and stream function as primary variables. Special emphasis was placed on the formulation of appropriate boundary conditions necessary for the calculations in a finite computational domain. Numerical results illustrate the interaction of vortex filaments, demonstrate when and how they merge with each other, and establish the region of validity for an asymptotic analysis.

  8. Instrumented Taylor anvil-on-rod impact tests for validating applicability of standard strength models to transient deformation states

    NASA Astrophysics Data System (ADS)

    Eakins, D. E.; Thadhani, N. N.

    2006-10-01

    Instrumented Taylor anvil-on-rod impact tests have been conducted on oxygen-free electronic copper to validate the accuracy of current strength models for predicting transient states during dynamic deformation events. The experiments coupled the use of high-speed digital photography to record the transient deformation states and laser interferometry to monitor the sample back (free surface) velocity as a measure of the elastic/plastic wave propagation through the sample length. Numerical continuum dynamics simulations of the impact and plastic wave propagation employing the Johnson-Cook [Proceedings of the Seventh International Symposium on Ballistics, 1983, The Netherlands (Am. Def. Prep. Assoc. (ADPA)), pp. 541-547], Zerilli-Armstrong [J. Appl. Phys. C1, 1816 (1987)], and Steinberg-Guinan [J. Appl. Phys. 51, 1498 (1980)] constitutive equations were used to generate transient deformation profiles and the free surface velocity traces. While these simulations showed good correlation with the measured free surface velocity traces and the final deformed sample shape, varying degrees of deviations were observed between the photographed and calculated specimen profiles at intermediate deformation states. The results illustrate the usefulness of the instrumented Taylor anvil-on-rod impact technique for validating constitutive equations that can describe the path-dependent deformation response and can therefore predict the transient and final deformation states.

  9. The Lifecycle of Bayesian Network Models Developed for Multi-Source Signature Assessment of Nuclear Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gastelum, Zoe N.; White, Amanda M.; Whitney, Paul D.

    2013-06-04

    The Multi-Source Signatures for Nuclear Programs project, part of Pacific Northwest National Laboratory’s (PNNL) Signature Discovery Initiative, seeks to computationally capture expert assessment of multi-type information such as text, sensor output, imagery, or audio/video files, to assess nuclear activities through a series of Bayesian network (BN) models. These models incorporate knowledge from a diverse range of information sources in order to help assess a country’s nuclear activities. The models span engineering topic areas, state-level indicators, and facility-specific characteristics. To illustrate the development, calibration, and use of BN models for multi-source assessment, we present a model that predicts a country’s likelihood to participate in the international nuclear nonproliferation regime. We validate this model by examining the extent to which the model assists non-experts in arriving at conclusions similar to those provided by nuclear proliferation experts. We also describe the PNNL-developed software used throughout the lifecycle of the Bayesian network model development.

  10. Modeling the Spatial Dynamics of Regional Land Use: The CLUE-S Model

    NASA Astrophysics Data System (ADS)

    Verburg, Peter H.; Soepboer, Welmoed; Veldkamp, A.; Limpiada, Ramil; Espaldon, Victoria; Mastura, Sharifah S. A.

    2002-09-01

    Land-use change models are important tools for integrated environmental management. Through scenario analysis they can help to identify near-future critical locations in the face of environmental change. A dynamic, spatially explicit, land-use change model is presented for the regional scale: CLUE-S. The model is specifically developed for the analysis of land use in small regions (e.g., a watershed or province) at a fine spatial resolution. The model structure is based on systems theory to allow the integrated analysis of land-use change in relation to socio-economic and biophysical driving factors. The model explicitly addresses the hierarchical organization of land use systems, spatial connectivity between locations and stability. Stability is incorporated by a set of variables that define the relative elasticity of the actual land-use type to conversion. The user can specify these settings based on expert knowledge or survey data. Two applications of the model in the Philippines and Malaysia are used to illustrate the functioning of the model and its validation.

  11. Modeling the spatial dynamics of regional land use: the CLUE-S model.

    PubMed

    Verburg, Peter H; Soepboer, Welmoed; Veldkamp, A; Limpiada, Ramil; Espaldon, Victoria; Mastura, Sharifah S A

    2002-09-01

    Land-use change models are important tools for integrated environmental management. Through scenario analysis they can help to identify near-future critical locations in the face of environmental change. A dynamic, spatially explicit, land-use change model is presented for the regional scale: CLUE-S. The model is specifically developed for the analysis of land use in small regions (e.g., a watershed or province) at a fine spatial resolution. The model structure is based on systems theory to allow the integrated analysis of land-use change in relation to socio-economic and biophysical driving factors. The model explicitly addresses the hierarchical organization of land use systems, spatial connectivity between locations and stability. Stability is incorporated by a set of variables that define the relative elasticity of the actual land-use type to conversion. The user can specify these settings based on expert knowledge or survey data. Two applications of the model in the Philippines and Malaysia are used to illustrate the functioning of the model and its validation.

  12. È VIVO: Virtual eruptions at Vesuvius; A multimedia tool to illustrate numerical modeling to a general public

    NASA Astrophysics Data System (ADS)

    Todesco, Micol; Neri, Augusto; Demaria, Cristina; Marmo, Costantino; Macedonio, Giovanni

    2006-07-01

    Dissemination of scientific results to the general public has become increasingly important in our society. When science deals with natural hazards, public outreach is even more important: on the one hand, it contributes to hazard perception and is a necessary step toward preparedness and risk mitigation; on the other hand, it contributes to establishing a positive link of mutual confidence between the scientific community and the population living at risk. The existence of such a link plays a relevant role in hazard communication, which in turn is essential to mitigate the risk. In this work, we present a tool that we have developed to illustrate our scientific results on pyroclastic flow propagation at Vesuvius. This tool, a CD-ROM developed by joining scientific data with appropriate knowledge from the communication sciences, is meant to be a first prototype that will be used to test the validity of this approach to public outreach. The multimedia guide contains figures, images of real volcanoes and computer animations obtained through numerical modeling of pyroclastic density currents. Explanatory text, kept as short and simple as possible, illustrates both the process and the methodology applied to study this very dangerous natural phenomenon. In this first version, the CD-ROM will be distributed among selected categories of end-users together with a short questionnaire that we have drawn up to test its readability. Future releases will include feedback from the users, further advancement of scientific results as well as a higher degree of interactivity.

  13. Computational method for analysis of polyethylene biodegradation

    NASA Astrophysics Data System (ADS)

    Watanabe, Masaji; Kawai, Fusako; Shibata, Masaru; Yokoyama, Shigeo; Sudate, Yasuhiro

    2003-12-01

    In a previous study concerning the biodegradation of polyethylene, we proposed a mathematical model based on two primary factors: the direct consumption or absorption of small molecules and the successive weight loss of large molecules due to β-oxidation. Our model is an initial value problem consisting of a differential equation whose independent variable is time. Its unknown variable represents the total weight of all the polyethylene molecules that belong to a molecular-weight class specified by a parameter. In this paper, we describe a numerical technique to introduce experimental results into the analysis of our model. We first establish its mathematical foundation in order to guarantee its validity, by showing that the initial value problem associated with the differential equation has a unique solution. Our computational technique is based on a linear system of differential equations derived from the original problem. We introduce some numerical results to illustrate our technique as a practical application of the linear approximation. In particular, we show how to solve the inverse problem to determine the consumption rate and the β-oxidation rate numerically, and illustrate our numerical technique by analyzing the GPC patterns of polyethylene wax obtained before and after 5 weeks of cultivation of a fungus, Aspergillus sp. AK-3. A numerical simulation based on these degradation rates confirms that the primary factors of polyethylene biodegradation posed in the modeling are indeed appropriate.
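    A hedged sketch of a linear system of ODEs of the kind described above: each molecular-weight class loses weight to direct consumption and to β-oxidation, which transfers weight to the next-lower class. The class structure and rate values below are illustrative assumptions, not the calibrated rates for Aspergillus sp. AK-3.

    ```python
    # Schematic weight-class ODE system solved with scipy's initial value solver.
    import numpy as np
    from scipy.integrate import solve_ivp

    n_classes = 10
    consume = np.linspace(0.10, 0.0, n_classes)   # direct consumption, fastest for the smallest class
    beta_ox = np.full(n_classes, 0.05)            # beta-oxidation rate per class

    def rhs(t, w):
        dw = -(consume + beta_ox) * w             # losses from each class
        dw[:-1] += beta_ox[1:] * w[1:]            # weight arriving from the next-larger class
        return dw

    w0 = np.ones(n_classes)                        # initial weight in each class
    sol = solve_ivp(rhs, (0.0, 35.0), w0, t_eval=np.linspace(0, 35, 8))  # ~5 weeks
    print(sol.y[:, -1].round(3))                   # weight distribution at the final time
    ```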

  14. Interval-valued intuitionistic fuzzy matrix games based on Archimedean t-conorm and t-norm

    NASA Astrophysics Data System (ADS)

    Xia, Meimei

    2018-04-01

    Fuzzy game theory has been applied in many decision-making problems. The matrix game with interval-valued intuitionistic fuzzy numbers (IVIFNs) is investigated based on Archimedean t-conorm and t-norm. The existing matrix games with IVIFNs are all based on the Algebraic t-conorm and t-norm, which are special cases of Archimedean t-conorm and t-norm. In this paper, the intuitionistic fuzzy aggregation operators based on Archimedean t-conorm and t-norm are employed to aggregate the payoffs of players. To derive the solution of the matrix game with IVIFNs, several mathematical programming models are developed based on Archimedean t-conorm and t-norm. The proposed models can be transformed into a pair of primal-dual linear programming models, based on which the solution of the matrix game with IVIFNs is obtained. It is proved that the theorems that are valid in the existing matrix game with IVIFNs remain true when the general aggregation operator is used in the proposed matrix game with IVIFNs. The proposed method is an extension of the existing ones and can provide more choices for players. An example is given to illustrate the validity and the applicability of the proposed method.
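    For orientation, a hedged sketch of the crisp special case underlying the models above: an ordinary zero-sum matrix game solved by linear programming. The payoffs here are plain numbers rather than IVIFNs, and the payoff matrix is illustrative.

    ```python
    # Solve a crisp zero-sum matrix game by the classical LP transformation.
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[4.0, 1.0, 3.0],
                  [2.0, 3.0, 1.0],
                  [1.0, 2.0, 4.0]])          # row player's payoffs (all positive)

    # Minimize sum(x) subject to A^T x >= 1, x >= 0; then the game value is
    # v = 1 / sum(x) and the optimal mixed strategy is p = v * x.
    m = A.shape[0]
    res = linprog(c=np.ones(m), A_ub=-A.T, b_ub=-np.ones(A.shape[1]), bounds=[(0, None)] * m)
    v = 1.0 / res.x.sum()
    print(f"game value = {v:.3f}, row strategy = {np.round(v * res.x, 3)}")
    ```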

  15. Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

    PubMed Central

    Rice, John D.; Taylor, Jeremy M. G.

    2016-01-01

    One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
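    A hedged sketch of the idea, not the authors' exact estimator: re-fit a logistic model with kernel weights centered at the classification threshold, so that observations whose preliminary fitted probability lies near the threshold dominate the fit. The bandwidth, threshold, and data are illustrative assumptions, and weighted maximum likelihood is used here as a simplified stand-in for the locally weighted score equations.

    ```python
    # Kernel-weighted logistic re-fit around a classification threshold.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    X = rng.standard_normal((1000, 3))
    y = (X @ np.array([1.0, -0.5, 0.2]) + rng.logistic(size=1000) > 0).astype(int)

    threshold, h = 0.3, 0.15
    pilot = LogisticRegression().fit(X, y)                   # unweighted pilot fit
    p_hat = pilot.predict_proba(X)[:, 1]
    weights = np.exp(-0.5 * ((p_hat - threshold) / h) ** 2)  # Gaussian kernel at the threshold

    local = LogisticRegression().fit(X, y, sample_weight=weights)
    pred_high_risk = (local.predict_proba(X)[:, 1] >= threshold).astype(int)
    print("classified high-risk fraction:", pred_high_risk.mean())
    ```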

  16. The unified acoustic and aerodynamic prediction theory of advanced propellers in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    1984-01-01

    This paper presents some numerical results for the noise of an advanced supersonic propeller based on a formulation published last year. This formulation was derived to overcome some of the practical numerical difficulties associated with other acoustic formulations. The approach is based on the Ffowcs Williams-Hawkings equation and time domain analysis is used. To illustrate the method of solution, a model problem in three dimensions and based on the Laplace equation is solved. A brief sketch of derivation of the acoustic formula is then given. Another model problem is used to verify validity of the acoustic formulation. A recent singular integral equation for aerodynamic applications derived from the acoustic formula is also presented here.

  17. Topological Vulnerability Evaluation Model Based on Fractal Dimension of Complex Networks.

    PubMed

    Gou, Li; Wei, Bo; Sadiq, Rehan; Sadiq, Yong; Deng, Yong

    2016-01-01

    With an increasing emphasis on network security, much more attention has been drawn to the vulnerability of complex networks. In this paper, the fractal dimension, which can reflect the space-filling capacity of networks, is redefined as the origin moment of the edge betweenness to obtain a more reasonable evaluation of vulnerability. The proposed model, which combines multiple evaluation indexes, not only overcomes the shortcoming that average edge betweenness fails to evaluate the vulnerability of some special networks, but also characterizes the topological structure and highlights the space-filling capacity of networks. The applications to six US airline networks illustrate the practicality and effectiveness of our proposed method, and the comparisons with three other commonly used methods further validate its superiority.
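    A minimal sketch of the quantity described above, the origin (raw) moment of the edge-betweenness distribution, computed here for a toy graph rather than the US airline networks of the paper. The moment order q is an assumption for illustration.

    ```python
    # Origin moment of the edge betweenness of a network.
    import networkx as nx
    import numpy as np

    G = nx.karate_club_graph()                       # illustrative network
    eb = np.array(list(nx.edge_betweenness_centrality(G).values()))

    def origin_moment(values, q=2):
        """q-th origin (raw) moment of the edge-betweenness distribution."""
        return np.mean(values ** q)

    print("mean edge betweenness:", eb.mean())
    print("2nd origin moment:", origin_moment(eb, q=2))
    ```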

  18. Derivation of stiffness matrix in constitutive modeling of magnetorheological elastomer

    NASA Astrophysics Data System (ADS)

    Leng, D.; Sun, L.; Sun, J.; Lin, Y.

    2013-02-01

    Magnetorheological elastomers (MREs) are a class of smart materials whose mechanical properties change instantly upon the application of a magnetic field. Based on the specially orthotropic, transversely isotropic stress-strain relationships and an effective permeability model, the stiffness matrix of the constitutive equations for deformable chain-like MREs is considered. To validate the shear modulus components of this stiffness matrix, magnetic-structural simulations with the finite element method (FEM) are presented. Acceptable agreement is illustrated between the analytical equations and the numerical simulations. For a specified magnetic field, sphere particle radius, distance between adjacent particles in the chains, and volume fraction of ferrous particles, this constitutive equation is effective for engineering applications that estimate the elastic behaviour of chain-like MREs in an external magnetic field.
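    For reference, a hedged sketch of the standard transversely isotropic stiffness matrix (Voigt notation, five independent constants) of the kind referred to above, with the particle-chain direction assumed along the x3 axis; the paper's own component values are not reproduced here.

    ```latex
    % Standard transversely isotropic stiffness matrix (Voigt notation),
    % symmetry axis x3 assumed along the particle chains.
    \boldsymbol{\sigma} = \mathbf{C}\,\boldsymbol{\varepsilon}, \qquad
    \mathbf{C} =
    \begin{pmatrix}
    C_{11} & C_{12} & C_{13} & 0 & 0 & 0\\
    C_{12} & C_{11} & C_{13} & 0 & 0 & 0\\
    C_{13} & C_{13} & C_{33} & 0 & 0 & 0\\
    0 & 0 & 0 & C_{44} & 0 & 0\\
    0 & 0 & 0 & 0 & C_{44} & 0\\
    0 & 0 & 0 & 0 & 0 & \tfrac{1}{2}\,(C_{11}-C_{12})
    \end{pmatrix}
    ```

    The field-dependent shear components (C44) are the entries the magnetic-structural FEM simulations are used to validate.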

  19. Examining construct and predictive validity of the Health-IT Usability Evaluation Scale: confirmatory factor analysis and structural equation modeling results.

    PubMed

    Yen, Po-Yin; Sousa, Karen H; Bakken, Suzanne

    2014-10-01

    In a previous study, we developed the Health Information Technology Usability Evaluation Scale (Health-ITUES), which is designed to support customization at the item level. Such customization matches the specific tasks/expectations of a health IT system while retaining comparability at the construct level, and provides evidence of its factorial validity and internal consistency reliability through exploratory factor analysis. In this study, we advanced the development of Health-ITUES to examine its construct validity and predictive validity. The health IT system studied was a web-based communication system that supported nurse staffing and scheduling. Using Health-ITUES, we conducted a cross-sectional study to evaluate users' perception toward the web-based communication system after system implementation. We examined Health-ITUES's construct validity through first and second order confirmatory factor analysis (CFA), and its predictive validity via structural equation modeling (SEM). The sample comprised 541 staff nurses in two healthcare organizations. The CFA (n=165) showed that a general usability factor accounted for 78.1%, 93.4%, 51.0%, and 39.9% of the explained variance in 'Quality of Work Life', 'Perceived Usefulness', 'Perceived Ease of Use', and 'User Control', respectively. The SEM (n=541) supported the predictive validity of Health-ITUES, explaining 64% of the variance in intention for system use. The results of CFA and SEM provide additional evidence for the construct and predictive validity of Health-ITUES. The customizability of Health-ITUES has the potential to support comparisons at the construct level, while allowing variation at the item level. We also illustrate application of Health-ITUES across stages of system development. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  20. Fisher's geometrical model emerges as a property of complex integrated phenotypic networks.

    PubMed

    Martin, Guillaume

    2014-05-01

    Models relating phenotype space to fitness (phenotype-fitness landscapes) have seen important developments recently. They can roughly be divided into mechanistic models (e.g., metabolic networks) and more heuristic models like Fisher's geometrical model. Each has its own drawbacks, but both yield testable predictions on how the context (genomic background or environment) affects the distribution of mutation effects on fitness and thus adaptation. Both have received some empirical validation. This article aims at bridging the gap between these approaches. A derivation of the Fisher model "from first principles" is proposed, where the basic assumptions emerge from a more general model, inspired by mechanistic networks. I start from a general phenotypic network relating unspecified phenotypic traits and fitness. A limited set of qualitative assumptions is then imposed, mostly corresponding to known features of phenotypic networks: a large set of traits is pleiotropically affected by mutations and determines a much smaller set of traits under optimizing selection. Otherwise, the model remains fairly general regarding the phenotypic processes involved or the distribution of mutation effects affecting the network. A statistical treatment and a local approximation close to a fitness optimum yield a landscape that is effectively the isotropic Fisher model or its extension with a single dominant phenotypic direction. The fit of the resulting alternative distributions is illustrated in an empirical data set. These results bear implications on the validity of Fisher's model's assumptions and on which features of mutation fitness effects may vary (or not) across genomic or environmental contexts.
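    For concreteness, the isotropic Fisher geometrical model that emerges in the local approximation can be written in its standard textbook form (Gaussian fitness peak at the optimum); this is the general form, not a result specific to the paper.

    ```latex
    % Isotropic Fisher geometrical model: fitness of phenotype z in R^n with the
    % optimum at the origin, and the selection coefficient of a mutation dz.
    W(\mathbf{z}) = \exp\!\left(-\tfrac{1}{2}\,\lVert \mathbf{z} \rVert^{2}\right),
    \qquad
    s(d\mathbf{z} \mid \mathbf{z})
      = \log W(\mathbf{z}+d\mathbf{z}) - \log W(\mathbf{z})
      = -\,\mathbf{z}\cdot d\mathbf{z} \;-\; \tfrac{1}{2}\,\lVert d\mathbf{z} \rVert^{2}.
    ```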

  1. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    NASA Astrophysics Data System (ADS)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows joint estimation of the unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm to jointly estimate unknown FE model parameters and unknown input excitations.

  2. Modeling multivariate time series on manifolds with skew radial basis functions.

    PubMed

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

    We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied to training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters, in particular the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodology on several example problems, including modeling data on manifolds and the prediction of chaotic time series.

  3. A Software Toolkit to Study Systematic Uncertainties of the Physics Models of the Geant4 Simulation Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genser, Krzysztof; Hatcher, Robert; Perdue, Gabriel

    2016-11-10

    The Geant4 toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models are tuned to cover a large variety of possible applications. This raises the critical question of what uncertainties are associated with the Geant4 physics model, or group of models, involved in a simulation project. To address the challenge, we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Key functionalities of the toolkit are presented in this paper and are illustrated with selected results.

  4. Comparing statistical and machine learning classifiers: alternatives for predictive modeling in human factors research.

    PubMed

    Carnahan, Brian; Meyer, Gérard; Kuntz, Lois-Ann

    2003-01-01

    Multivariate classification models play an increasingly important role in human factors research. In the past, these models have been based primarily on discriminant analysis and logistic regression. Models developed from machine learning research offer the human factors professional a viable alternative to these traditional statistical classification methods. To illustrate this point, two machine learning approaches--genetic programming and decision tree induction--were used to construct classification models designed to predict whether or not a student truck driver would pass his or her commercial driver license (CDL) examination. The models were developed and validated using the curriculum scores and CDL exam performances of 37 student truck drivers who had completed a 320-hr driver training course. Results indicated that the machine learning classification models were superior to discriminant analysis and logistic regression in terms of predictive accuracy. Actual or potential applications of this research include the creation of models that more accurately predict human performance outcomes.

  5. Modeling the clinical and economic implications of obesity using microsimulation.

    PubMed

    Su, W; Huang, J; Chen, F; Iacobucci, W; Mocarski, M; Dall, T M; Perreault, L

    2015-01-01

    The obesity epidemic has raised considerable public health concerns, but there are few validated longitudinal simulation models examining the human and economic cost of obesity. This paper describes a microsimulation model as a comprehensive tool to understand the relationship between body weight, health, and economic outcomes. Patient health and economic outcomes were simulated annually over 10 years using a Markov-based microsimulation model. The obese population examined is nationally representative of obese adults in the US from the 2005-2012 National Health and Nutrition Examination Surveys, while a matched normal weight population was constructed to have demographics similar to those of the obese population during the same period. Prediction equations for onset of obesity-related comorbidities, medical expenditures, economic outcomes, mortality, and quality-of-life came from published trials and studies supplemented with original research. Model validation followed International Society for Pharmacoeconomics and Outcomes Research practice guidelines. Among surviving adults, relative to a matched normal weight population, obese adults averaged $3900 higher medical expenditures in the initial year, growing to $4600 higher expenditures in year 10. Obese adults had higher initial prevalence and higher simulated onset of comorbidities as they aged. Over 10 years, excess medical expenditures attributed to obesity averaged $4280 annually, ranging from $2820 for obese category I to $5100 for obese category II, and $8710 for obese category III. Each excess kilogram of weight contributed to $140 higher annual costs, on average, ranging from $136 (obese I) to $152 (obese III). Poor health associated with obesity increased work absenteeism and mortality, and lowered employment probability, personal income, and quality-of-life. This validated model helps illustrate why obese adults have higher medical and indirect costs relative to normal weight adults, and shows that medical costs for obese adults rise more rapidly with aging than they do for normal weight adults.
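
    The general structure of such an annual-cycle microsimulation can be sketched as below; the transition probabilities and cost figures are illustrative placeholders, not the published prediction equations.

```python
import numpy as np

rng = np.random.default_rng(42)
n_people, n_years = 10_000, 10

# illustrative placeholder inputs (not the published prediction equations)
p_comorbidity_per_year = 0.04      # annual probability of a new comorbidity
p_death_per_year = 0.01            # annual mortality probability
base_cost, comorbidity_cost = 4000.0, 2500.0

alive = np.ones(n_people, dtype=bool)
has_comorbidity = np.zeros(n_people, dtype=bool)
total_cost = np.zeros(n_people)

for year in range(n_years):
    # annual events accrue only for people who are still alive
    new_comorbidity = rng.random(n_people) < p_comorbidity_per_year
    has_comorbidity |= new_comorbidity & alive
    total_cost[alive] += base_cost + comorbidity_cost * has_comorbidity[alive]
    alive &= rng.random(n_people) >= p_death_per_year

print("mean simulated 10-year cost per person:", total_cost.mean().round())
```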

  6. Object-oriented regression for building predictive models with high dimensional omics data from translational studies.

    PubMed

    Zhao, Lue Ping; Bolouri, Hamid

    2016-04-01

    Maturing omics technologies enable researchers to generate high dimension omics data (HDOD) routinely in translational clinical studies. In the field of oncology, The Cancer Genome Atlas (TCGA) provided funding support to researchers to generate different types of omics data on a common set of biospecimens with accompanying clinical data and has made the data available for the research community to mine. One important application, and the focus of this manuscript, is to build predictive models for prognostic outcomes based on HDOD. To complement prevailing regression-based approaches, we propose to use an object-oriented regression (OOR) methodology to identify exemplars specified by HDOD patterns and to assess their associations with prognostic outcome. Through computing patient's similarities to these exemplars, the OOR-based predictive model produces a risk estimate using a patient's HDOD. The primary advantages of OOR are twofold: reducing the penalty of high dimensionality and retaining the interpretability to clinical practitioners. To illustrate its utility, we apply OOR to gene expression data from non-small cell lung cancer patients in TCGA and build a predictive model for prognostic survivorship among stage I patients, i.e., we stratify these patients by their prognostic survival risks beyond histological classifications. Identification of these high-risk patients helps oncologists to develop effective treatment protocols and post-treatment disease management plans. Using the TCGA data, the total sample is divided into training and validation data sets. After building up a predictive model in the training set, we compute risk scores from the predictive model, and validate associations of risk scores with prognostic outcome in the validation data (P-value=0.015). Copyright © 2016 Elsevier Inc. All rights reserved.
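
    A minimal sketch of the exemplar-similarity idea, under the assumption that exemplars can be represented as a few reference expression profiles (chosen here by k-means, which is not necessarily the authors' procedure) and that similarities to them feed a logistic risk model; all data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# synthetic HDOD: 300 patients drawn around 4 latent expression profiles,
# each profile carrying a different event risk
profiles = rng.normal(size=(4, 200))
risk_by_profile = np.array([0.15, 0.35, 0.55, 0.80])
labels = rng.integers(0, 4, size=300)
X = profiles[labels] + rng.normal(scale=0.8, size=(300, 200))
y = (rng.random(300) < risk_by_profile[labels]).astype(int)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# exemplars: here simply k-means centroids of the training HDOD (an assumption)
exemplars = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_tr).cluster_centers_

def similarities(X, exemplars):
    """Negative Euclidean distance to each exemplar, used as model features."""
    return -np.linalg.norm(X[:, None, :] - exemplars[None, :, :], axis=2)

risk_model = LogisticRegression(max_iter=1000).fit(similarities(X_tr, exemplars), y_tr)
scores = risk_model.predict_proba(similarities(X_va, exemplars))[:, 1]
print("validation AUC of the exemplar-similarity risk score:",
      round(roc_auc_score(y_va, scores), 3))
```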

  7. Object-Oriented Regression for Building Predictive Models with High Dimensional Omics Data from Translational Studies

    PubMed Central

    Zhao, Lue Ping; Bolouri, Hamid

    2016-01-01

    Maturing omics technologies enable researchers to generate high dimension omics data (HDOD) routinely in translational clinical studies. In the field of oncology, The Cancer Genome Atlas (TCGA) provided funding support to researchers to generate different types of omics data on a common set of biospecimens with accompanying clinical data and to make the data available for the research community to mine. One important application, and the focus of this manuscript, is to build predictive models for prognostic outcomes based on HDOD. To complement prevailing regression-based approaches, we propose to use an object-oriented regression (OOR) methodology to identify exemplars specified by HDOD patterns and to assess their associations with prognostic outcome. Through computing patient’s similarities to these exemplars, the OOR-based predictive model produces a risk estimate using a patient’s HDOD. The primary advantages of OOR are twofold: reducing the penalty of high dimensionality and retaining the interpretability to clinical practitioners. To illustrate its utility, we apply OOR to gene expression data from non-small cell lung cancer patients in TCGA and build a predictive model for prognostic survivorship among stage I patients, i.e., we stratify these patients by their prognostic survival risks beyond histological classifications. Identification of these high-risk patients helps oncologists to develop effective treatment protocols and post-treatment disease management plans. Using the TCGA data, the total sample is divided into training and validation data sets. After building up a predictive model in the training set, we compute risk scores from the predictive model, and validate associations of risk scores with prognostic outcome in the validation data (p=0.015). PMID:26972839

  8. Application of a Subspace-Based Fault Detection Method to Industrial Structures

    NASA Astrophysics Data System (ADS)

    Mevel, L.; Hermans, L.; van der Auweraer, H.

    1999-11-01

    Early detection and localization of damage improve reliability and safety and reduce maintenance costs. This paper deals with the industrial validation of a technique to monitor the health of a structure in operating conditions (e.g. rotating machinery, civil constructions subject to ambient excitation) and to detect slight deviations in a modal model derived from in-operation measured data. A statistical local approach based on covariance-driven stochastic subspace identification is proposed. The capabilities and limitations of the method with respect to health monitoring and damage detection are discussed, and it is explained how the method can be used in practice in industrial environments. After the successful validation of the proposed method on a few laboratory structures, its application to a sports car is discussed. The example illustrates that the method allows the early detection of a vibration-induced fatigue problem of the car.

  9. Verification methodology for fault-tolerant, fail-safe computers applied to maglev control computer systems. Final report, July 1991-May 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lala, J.H.; Nagle, G.A.; Harper, R.E.

    1993-05-01

    The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev control computer system has been designed using a design-for-validation methodology developed earlier under NASA and SDIO sponsorship for real-time aerospace applications. The present study starts by defining the maglev mission scenario and ends with the definition of a maglev control computer architecture. Key intermediate steps included definitions of functional and dependability requirements, synthesis of two candidate architectures, development of qualitative and quantitative evaluation criteria, and analytical modeling of the dependability characteristics of the two architectures. Finally, the applicability of the design-for-validation methodology was also illustrated by applying it to the German Transrapid TR07 maglev control system.

  10. Assessment of Satellite Radiometry in the Visible Domain

    NASA Technical Reports Server (NTRS)

    Melin, Frederick; Franz, Bryan A.

    2014-01-01

    Marine reflectance and chlorophyll-a concentration are listed among the Essential Climate Variables by the Global Climate Observing System. To contribute to climate research, the satellite ocean color data records resulting from successive missions need to be consistent and well characterized in terms of uncertainties. This chapter reviews various approaches that can be used for the assessment of satellite ocean color data. Good practices for validating satellite products with in situ data and the current status of validation results are illustrated. Model-based approaches and inter-comparison techniques can also contribute to characterize some components of the uncertainty budget, while time series analysis can detect issues with the instrument radiometric characterization and calibration. Satellite data from different missions should also provide a consistent picture in scales of variability, including seasonal and interannual signals. Eventually, the various assessment approaches should be combined to create a fully characterized climate data record from satellite ocean color.

  11. Experimental study of an adaptive elastic metamaterial controlled by electric circuits

    NASA Astrophysics Data System (ADS)

    Zhu, R.; Chen, Y. Y.; Barnhart, M. V.; Hu, G. K.; Sun, C. T.; Huang, G. L.

    2016-01-01

    The ability to control elastic wave propagation at a deep subwavelength scale makes locally resonant elastic metamaterials very relevant. A number of abilities have been demonstrated such as frequency filtering, wave guiding, and negative refraction. Unfortunately, few metamaterials develop into practical devices due to their lack of tunability for specific frequencies. With the help of multi-physics numerical modeling, experimental validation of an adaptive elastic metamaterial integrated with shunted piezoelectric patches has been performed in a deep subwavelength scale. The tunable bandgap capacity, as high as 45%, is physically realized by using both hardening and softening shunted circuits. It is also demonstrated that the effective mass density of the metamaterial can be fully tailored by adjusting parameters of the shunted electric circuits. Finally, to illustrate a practical application, transient wave propagation tests of the adaptive metamaterial subjected to impact loads are conducted to validate their tunable wave mitigation abilities in real-time.

  12. Estimating abundance in the presence of species uncertainty

    USGS Publications Warehouse

    Chambert, Thierry A.; Hossack, Blake R.; Fishback, LeeAnn; Davenport, Jon M.

    2016-01-01

    1. N-mixture models have become a popular method for estimating abundance of free-ranging animals that are not marked or identified individually. These models have been used on count data for single species that can be identified with certainty. However, co-occurring species often look similar during one or more life stages, making it difficult to assign species for all recorded captures. This uncertainty creates problems for estimating species-specific abundance and can limit the life stages about which we can make inferences. 2. We present a new extension of N-mixture models that accounts for species uncertainty. In addition to estimating site-specific abundances and detection probabilities, this model allows estimating the probability of correct assignment of species identity. We implement this hierarchical model in a Bayesian framework and provide all code for running the model in BUGS-language programs. 3. We present an application of the model on count data from two sympatric freshwater fishes, the brook stickleback (Culaea inconstans) and the ninespine stickleback (Pungitius pungitius), and illustrate implementation of covariate effects (habitat characteristics). In addition, we used a simulation study to validate the model and illustrate potential sample size issues. We also compared, for both real and simulated data, estimates provided by our model to those obtained by a simple N-mixture model when captures of unknown species identification were discarded. In the latter case, abundance estimates appeared highly biased and very imprecise, while our new model provided unbiased estimates with higher precision. 4. This extension of the N-mixture model should be useful for a wide variety of studies and taxa, as species uncertainty is a common issue. It should notably help improve investigation of abundance and vital rate characteristics of organisms' early life stages, which are sometimes more difficult to identify than adults.
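
    The data-generating process behind such a simulation study might look like the sketch below (all rates are hypothetical); fitting the hierarchical model itself would be done with the authors' BUGS-language programs or a comparable Bayesian sampler.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sites, n_visits = 50, 4
lam_A, lam_B = 20.0, 12.0   # mean site abundance of the two species (hypothetical)
p_detect = 0.4              # per-individual detection probability
p_correct = 0.8             # probability a detected individual is assigned correctly

# latent site-level abundances
N_A = rng.poisson(lam_A, n_sites)
N_B = rng.poisson(lam_B, n_sites)

# observed counts per visit, split into "identified as A" and "identified as B"
counts_A = np.zeros((n_sites, n_visits), dtype=int)
counts_B = np.zeros((n_sites, n_visits), dtype=int)
for i in range(n_sites):
    for j in range(n_visits):
        det_A = rng.binomial(N_A[i], p_detect)
        det_B = rng.binomial(N_B[i], p_detect)
        # misclassification: some detected A are recorded as B, and vice versa
        a_as_a = rng.binomial(det_A, p_correct)
        b_as_b = rng.binomial(det_B, p_correct)
        counts_A[i, j] = a_as_a + (det_B - b_as_b)
        counts_B[i, j] = b_as_b + (det_A - a_as_a)

print("naive mean count identified as species A:", counts_A.mean())
print("expected detections of true A per visit:", lam_A * p_detect)
```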

  13. The ALADIN System and its canonical model configurations AROME CY41T1 and ALARO CY40T1

    NASA Astrophysics Data System (ADS)

    Termonia, Piet; Fischer, Claude; Bazile, Eric; Bouyssel, François; Brožková, Radmila; Bénard, Pierre; Bochenek, Bogdan; Degrauwe, Daan; Derková, Mariá; El Khatib, Ryad; Hamdi, Rafiq; Mašek, Ján; Pottier, Patricia; Pristov, Neva; Seity, Yann; Smolíková, Petra; Španiel, Oldřich; Tudor, Martina; Wang, Yong; Wittmann, Christoph; Joly, Alain

    2018-01-01

    The ALADIN System is a numerical weather prediction (NWP) system developed by the international ALADIN consortium for operational weather forecasting and research purposes. It is based on a code that is shared with the global model IFS of the ECMWF and the ARPEGE model of Météo-France. Today, this system can be used to provide a multitude of high-resolution limited-area model (LAM) configurations. A few configurations are thoroughly validated and prepared to be used for the operational weather forecasting in the 16 partner institutes of this consortium. These configurations are called the ALADIN canonical model configurations (CMCs). There are currently three CMCs: the ALADIN baseline CMC, the AROME CMC and the ALARO CMC. Other configurations are possible for research, such as process studies and climate simulations. The purpose of this paper is (i) to define the ALADIN System in relation to the global counterparts IFS and ARPEGE, (ii) to explain the notion of the CMCs, (iii) to document their most recent versions, and (iv) to illustrate the process of the validation and the porting of these configurations to the operational forecast suites of the partner institutes of the ALADIN consortium. This paper is restricted to the forecast model only; data assimilation techniques and postprocessing techniques are part of the ALADIN System but they are not discussed here.

  14. An Algorithm and R Program for Fitting and Simulation of Pharmacokinetic and Pharmacodynamic Data.

    PubMed

    Li, Jijie; Yan, Kewei; Hou, Lisha; Du, Xudong; Zhu, Ping; Zheng, Li; Zhu, Cairong

    2017-06-01

    Pharmacokinetic/pharmacodynamic link models are widely used in dose-finding studies. By applying such models, the results of initial pharmacokinetic/pharmacodynamic studies can be used to predict the potential therapeutic dose range. This knowledge can improve the design of later comparative large-scale clinical trials by reducing the number of participants and saving time and resources. However, the modeling process can be challenging, time consuming, and costly, even when using cutting-edge, powerful pharmacological software. Here, we provide a freely available R program for expediently analyzing pharmacokinetic/pharmacodynamic data, including data importation, parameter estimation, simulation, and model diagnostics. First, we explain the theory related to the establishment of the pharmacokinetic/pharmacodynamic link model. Subsequently, we present the algorithms used for parameter estimation and potential therapeutic dose computation. The implementation of the R program is illustrated by a clinical example. The software package is then validated by comparing the model parameters and the goodness-of-fit statistics generated by our R package with those generated by the widely used pharmacological software WinNonlin. The pharmacokinetic and pharmacodynamic parameters as well as the potential recommended therapeutic dose can be acquired with the R package. The validation process shows that the parameters estimated using our package are satisfactory. The R program developed and presented here provides pharmacokinetic researchers with a simple and easy-to-access tool for pharmacokinetic/pharmacodynamic analysis on personal computers.
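
    Although the published package is written in R, the structure of a pharmacokinetic/pharmacodynamic link model can be sketched in a few lines of Python: a one-compartment model with first-order absorption drives an effect compartment, whose concentration enters an Emax model. All parameter values below are illustrative, not estimates from any study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters (not from any published study)
ka, ke, ke0 = 1.2, 0.3, 0.5     # absorption, elimination, effect-site rate constants (1/h)
V, dose = 20.0, 100.0           # volume of distribution (L), oral dose (mg)
Emax, EC50 = 50.0, 2.0          # maximal effect and concentration at half-maximal effect

def pkpd(t, y):
    A_gut, A_central, Ce = y
    C = A_central / V
    return [-ka * A_gut,
            ka * A_gut - ke * A_central,
            ke0 * (C - Ce)]     # first-order effect-compartment link

sol = solve_ivp(pkpd, (0, 24), [dose, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 24, 97)
Ce = sol.sol(t)[2]
effect = Emax * Ce / (EC50 + Ce)   # Emax pharmacodynamic model
print("peak effect:", effect.max().round(1), "at t =", t[effect.argmax()], "h")
```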

  15. Nerve Conduction Through Dendrites via Proton Hopping.

    PubMed

    Kier, Lemont B

    2017-01-01

    In our previous studies of nerve conduction by proton hopping, we have considered the axon, soma, synapse and the nodes of Ranvier, describing the passage of information through each of these units of a typical nerve system. The synapse projects information from the axon to the dendrites and their associated spines. We invoke the passage of protons via a hopping mechanism to illustrate the continuum of the impulse through the system, from the dendrites through the soma. Extending this activity to the dendrites, via proton hopping, completes the model of nerve function. At each step of the way, a water pathway is present and is invoked in the proposed model as the carrier of the message via proton hopping. The importance of the dendrites is evident from the presence of a vast number of spines, each capable of carrying unique messages through the nervous system. With this model of the role of dendrites, functioning by proton hopping, a complete model of the nerve system is presented. The validity of this model remains to be assessed by further studies and models. Copyright © Bentham Science Publishers.

  16. A guided search genetic algorithm using mined rules for optimal affective product design

    NASA Astrophysics Data System (ADS)

    Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.

    2014-08-01

    Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.

  17. Initial transport validation studies using NSTX-U L-mode plasmas

    NASA Astrophysics Data System (ADS)

    Guttenfelder, Walter; Battaglia, D.; Bell, R. E.; Boyer, M. D.; Crocker, N.; Diallo, A.; Ferraro, N.; Gerhardt, S. P.; Kaye, S. M.; Leblanc, B. P.; Liu, D.; Menard, J. E.; Mueller, D.; Myer, C.; Podesta, M.; Raman, R.; Ren, Y.; Sabbagh, S.; Smith, D.

    2016-10-01

    A variety of stationary L-mode plasmas have been successfully developed in NSTX-U for physics validation studies. The plasmas span a range of density (1-4 × 10^19 m^-3), plasma current (0.65-1.0 MA), and neutral beam heating power (≤4 MW), taking advantage of new, more tangential neutral beam sources to vary rotation profiles. Transport analysis (TRANSP) and turbulence measurements (BES, reflectometry) of these plasmas will be illustrated and compared with initial microstability and transport predictions. In particular, the normalized beta of these L-modes ranges between βN = 1-2, providing a valuable bridge in parameter space between (i) H-modes at comparable beta in conventional tokamaks (R/a ≈ 3, βN ≈ 2), where transport models have been largely developed and tested, and (ii) low-aspect-ratio H-modes at higher beta (R/a ≈ 1.5-1.7, βN ≈ 5), where transport models are less tested and challenged by stronger electromagnetic and equilibrium effects. This work is supported by US DOE contract DE-AC02-09CH11466.

  18. A Predictive Approach to Network Reverse-Engineering

    NASA Astrophysics Data System (ADS)

    Wiggins, Chris

    2005-03-01

    A central challenge of systems biology is the "reverse engineering" of transcriptional networks: inferring which genes exert regulatory control over which other genes. Attempting such inference at the genomic scale has only recently become feasible, via data-intensive biological innovations such as DNA microarrays ("DNA chips") and the sequencing of whole genomes. In this talk we present a predictive approach to network reverse-engineering, in which we integrate DNA chip data and sequence data to build a model of the transcriptional network of the yeast S. cerevisiae capable of predicting the response of genes in unseen experiments. The technique can also be used to extract "motifs," sequence elements which act as binding sites for regulatory proteins. We validate by a number of approaches and present comparisons of theoretical predictions vs. experimental data, along with biological interpretations of the resulting model. En route, we will illustrate some basic notions in statistical learning theory (fitting vs. over-fitting; cross-validation; assessing statistical significance), highlighting ways in which physicists can make a unique contribution in data-driven approaches to reverse engineering.

  19. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    PubMed Central

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292

  20. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    NASA Astrophysics Data System (ADS)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to the structural health monitoring, fault detection, vibration control, etc. of operational time-varying structural systems. However, it is a challenging task because no more information is available for identifying a time-varying system than for a time-invariant one. This paper presents a vector time-dependent autoregressive model and least squares support vector machine based modal parameter estimator for linear time-varying structural systems in the case of output-only measurements. To reduce the computational cost, a Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adopted for the proposed estimator to replace time-consuming n-fold cross-validation. A series of numerical examples illustrates the advantages of the proposed modal parameter estimator in suppressing overestimation and handling short data records. A laboratory experiment has further validated the proposed estimator.

  1. Structural models of antibody variable fragments: A method for investigating binding mechanisms

    NASA Astrophysics Data System (ADS)

    Petit, Samuel; Brard, Frédéric; Coquerel, Gérard; Perez, Guy; Tron, François

    1998-03-01

    The value of comparative molecular modeling for elucidating structure-function relationships was demonstrated by analyzing six anti-nucleosome autoantibody variable fragments. Structural models were built using the automated procedure developed in the COMPOSER software, subsequently minimized with the AMBER force field, and validated according to several standard geometric and chemical criteria. Canonical class assignment from Chothia and Lesk's work [Chothia and Lesk, J. Mol. Biol., 196 (1987) 901; Chothia et al., Nature, 342 (1989) 877] was used as a supplementary validation tool for five of the six hypervariable loops. The analysis, based on the hypothesis that antigen binding could occur through electrostatic interactions, reveals a diversity of possible binding mechanisms of anti-nucleosome or anti-histone antibodies to their cognate antigen. These results lead us to postulate that anti-nucleosome autoantibodies could have different origins. Since both anti-DNA and anti-nucleosome autoantibodies are produced during the course of systemic lupus erythematosus, a non-organ-specific autoimmune disease, a comparative structural and electrostatic analysis of the two populations of autoantibodies may constitute a way to elucidate their origin and the role of the antigen in tolerance breakdown. The present study illustrates some of the interests, advantages and limits of a methodology based on the use of comparative modeling and analysis of molecular surface properties.

  2. Conservative Exposure Predictions for Rapid Risk Assessment of Phase-Separated Additives in Medical Device Polymers.

    PubMed

    Chandrasekar, Vaishnavi; Janes, Dustin W; Saylor, David M; Hood, Alan; Bajaj, Akhil; Duncan, Timothy V; Zheng, Jiwen; Isayeva, Irada S; Forrey, Christopher; Casey, Brendan J

    2018-01-01

    A novel approach for rapid risk assessment of targeted leachables in medical device polymers is proposed and validated. Risk evaluation involves understanding the potential of these additives to migrate out of the polymer, and comparing their exposure to a toxicological threshold value. In this study, we propose that a simple diffusive transport model can be used to provide conservative exposure estimates for phase separated color additives in device polymers. This model has been illustrated using a representative phthalocyanine color additive (manganese phthalocyanine, MnPC) and polymer (PEBAX 2533) system. Sorption experiments of MnPC into PEBAX were conducted in order to experimentally determine the diffusion coefficient, D = (1.6 ± 0.5) × 10^-11 cm^2/s, and matrix solubility limit, Cs = 0.089 wt.%, and model-predicted exposure values were validated by extraction experiments. Exposure values for the color additive were compared to a toxicological threshold for a sample risk assessment. Results from this study indicate that a diffusion model-based approach to predict exposure has considerable potential for use as a rapid, screening-level tool to assess the risk of color additives and other small molecule additives in medical device polymers.
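
    A conservative exposure estimate of this kind can be computed from the reported D and Cs with the standard short-time solution for diffusion out of a polymer whose surface holds the additive at its solubility limit. The density and contact area below are placeholders, and this is only a sketch of the general approach, not the paper's model.

```python
import numpy as np

# parameter values quoted in the abstract
D = 1.6e-11            # diffusion coefficient, cm^2/s
Cs_wt = 0.089 / 100.0  # matrix solubility limit as a mass fraction
rho = 1.0              # assumed polymer density, g/cm^3 (placeholder)
area = 10.0            # assumed device surface area in contact, cm^2 (placeholder)

def released_mass(t_seconds):
    """Cumulative mass released per device (grams), using the standard
    short-time solution for diffusion out of a semi-infinite polymer
    with the additive held at its solubility limit at the surface."""
    Cs = Cs_wt * rho                         # g of additive per cm^3 of polymer
    return area * 2.0 * Cs * np.sqrt(D * t_seconds / np.pi)

one_year = 365 * 24 * 3600.0
print(f"conservative 1-year release estimate: {released_mass(one_year)*1e6:.1f} micrograms")
```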

  3. Kinetic modeling of α-hydrogen abstractions from unsaturated and saturated oxygenate compounds by hydrogen atoms.

    PubMed

    Paraskevas, Paschalis D; Sabbe, Maarten K; Reyniers, Marie-Françoise; Papayannakos, Nikos G; Marin, Guy B

    2014-10-09

    Hydrogen-abstraction reactions play a significant role in thermal biomass conversion processes, as well as regular gasification, pyrolysis, or combustion. In this work, a group additivity model is constructed that allows prediction of reaction rates and Arrhenius parameters of hydrogen abstractions by hydrogen atoms from alcohols, ethers, esters, peroxides, ketones, aldehydes, acids, and diketones in a broad temperature range (300-2000 K). A training set of 60 reactions was developed with rate coefficients and Arrhenius parameters calculated by the CBS-QB3 method in the high-pressure limit with tunneling corrections using Eckart tunneling coefficients. From this set of reactions, 15 group additive values were derived for the forward and the reverse reaction, 4 referring to primary and 11 to secondary contributions. The accuracy of the model is validated upon an ab initio and an experimental validation set of 19 and 21 reaction rates, respectively, showing that reaction rates can be predicted with a mean factor of deviation of 2 for the ab initio and 3 for the experimental values. Hence, this work illustrates that the developed group additive model can be reliably applied for the accurate prediction of kinetics of α-hydrogen abstractions by hydrogen atoms from a broad range of oxygenates.
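
    The way a group additivity scheme is typically applied can be sketched as follows: the modified Arrhenius parameters of a target abstraction are taken as a reference reaction's values plus summed group contributions. The group names and numbers below are placeholders, not the published group additive values.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# reference Arrhenius parameters (log10 A, n, Ea in J/mol) and illustrative
# group additivity corrections -- placeholders, not the published values
logA_ref, n_ref, Ea_ref = 8.5, 1.2, 35.0e3
group_corrections = {"primary_OH": (-0.2, 0.0, -4.0e3),
                     "secondary_C=O": (0.1, 0.0, -2.5e3)}

def rate_coefficient(T, groups):
    """Modified Arrhenius rate k(T) = A * T^n * exp(-Ea / RT), with the
    Arrhenius parameters corrected by summed group contributions."""
    dlogA = sum(group_corrections[g][0] for g in groups)
    dn = sum(group_corrections[g][1] for g in groups)
    dEa = sum(group_corrections[g][2] for g in groups)
    A, n, Ea = 10 ** (logA_ref + dlogA), n_ref + dn, Ea_ref + dEa
    return A * T ** n * np.exp(-Ea / (R * T))

for T in (300.0, 1000.0, 2000.0):
    print(T, "K ->", f"{rate_coefficient(T, ['primary_OH']):.3e}")
```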

  4. Reduced-order model based active disturbance rejection control of hydraulic servo system with singular value perturbation theory.

    PubMed

    Wang, Chengwen; Quan, Long; Zhang, Shijie; Meng, Hongjun; Lan, Yuan

    2017-03-01

    Hydraulic servomechanisms are typical mechanical/hydraulic double-dynamics coupling systems with high stiffness control and mismatched uncertainty input problems, which hinder direct application of many advanced control approaches in the hydraulic servo field. In this paper, by introducing singular value perturbation theory, the original double-dynamics coupling model of the hydraulic servomechanism is reduced to an integral chain system, so that the popular ADRC (active disturbance rejection control) technique can be applied directly to the reduced system. In addition, the high stiffness control and mismatched uncertainty input problems are avoided. The validity of the simplified model is analyzed and proven theoretically. The standard linear ADRC algorithm is then developed based on the obtained reduced-order model. Extensive comparative co-simulations and experiments are carried out to illustrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  5. A Poisson approach to the validation of failure time surrogate endpoints in individual patient data meta-analyses.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Burzykowski, Tomasz; Buyse, Marc; Michiels, Stefan

    2017-01-01

    Surrogate endpoints are often used in clinical trials instead of well-established hard endpoints for practical convenience. The meta-analytic approach relies on two measures of surrogacy: one at the individual level and one at the trial level. In the survival data setting, a two-step model based on copulas is commonly used. We present a new approach which employs a bivariate survival model with an individual random effect shared between the two endpoints and correlated treatment-by-trial interactions. We fit this model using auxiliary mixed Poisson models. We study via simulations the operating characteristics of this mixed Poisson approach as compared to the two-step copula approach. We illustrate the application of the methods on two individual patient data meta-analyses in gastric cancer, in the advanced setting (4069 patients from 20 randomized trials) and in the adjuvant setting (3288 patients from 14 randomized trials).

  6. Computational and experimental analysis of DNA shuffling

    PubMed Central

    Maheshri, Narendra; Schaffer, David V.

    2003-01-01

    We describe a computational model of DNA shuffling based on the thermodynamics and kinetics of this process. The model independently tracks a representative ensemble of DNA molecules and records their states at every stage of a shuffling reaction. These data can subsequently be analyzed to yield information on any relevant metric, including reassembly efficiency, crossover number, type and distribution, and DNA sequence length distributions. The predictive ability of the model was validated by comparison to three independent sets of experimental data, and analysis of the simulation results led to several unique insights into the DNA shuffling process. We examine a tradeoff between crossover frequency and reassembly efficiency and illustrate the effects of experimental parameters on this relationship. Furthermore, we discuss conditions that promote the formation of useless “junk” DNA sequences or multimeric sequences containing multiple copies of the reassembled product. This model will therefore aid in the design of optimal shuffling reaction conditions. PMID:12626764

  7. An Application-Based Discussion of Construct Validity and Internal Consistency Reliability.

    ERIC Educational Resources Information Center

    Taylor, Dianne L.; Campbell, Kathleen T.

    Several techniques for conducting studies of measurement integrity are explained and illustrated using a heuristic data set from a study of teachers' participation in decision making (D. L. Taylor, 1991). The sample consisted of 637 teachers. It is emphasized that validity and reliability are characteristics of data, and do not inure to tests as…

  8. Prevalence Estimation and Validation of New Instruments in Psychiatric Research: An Application of Latent Class Analysis and Sensitivity Analysis

    ERIC Educational Resources Information Center

    Pence, Brian Wells; Miller, William C.; Gaynes, Bradley N.

    2009-01-01

    Prevalence and validation studies rely on imperfect reference standard (RS) diagnostic instruments that can bias prevalence and test characteristic estimates. The authors illustrate 2 methods to account for RS misclassification. Latent class analysis (LCA) combines information from multiple imperfect measures of an unmeasurable latent condition to…

  9. Conflicting Discourses in Qualitative Research: The Search for Divergent Data within Cases

    ERIC Educational Resources Information Center

    Antin, Tamar M. J.; Constantine, Norman A.; Hunt, Geoffrey

    2015-01-01

    The search for disconfirming evidence, or negative cases, is often considered a valuable strategy for assessing the credibility or validity of qualitative research claims. This article draws on a multimethod qualitative research project to illustrate how a search for disconfirming evidence evolved from a check on the validity of findings to a…

  10. Program Validation: Four Case Studies. A Brief Report on Four Projects and Their Experiences Before the Joint Dissemination Review Panel.

    ERIC Educational Resources Information Center

    Taylor, Nancy, Ed.

    In an effort to clearly illustrate the most effective approach when seeking Joint Dissemination Review Panel (JDRP) validation, this report describes four different educational programs. A program involving prekindergarten special students, and a project entailing a systems approach for disadvantaged elementary school students, were awarded…

  11. Teaching Validity and Soundness of Arguments Using the Board Game: "The Resistance"

    ERIC Educational Resources Information Center

    Thompson, Derek

    2015-01-01

    The primary goal of this paper is to highlight the possibilities and benefits of incorporating games into college mathematics classrooms. This is illustrated through the personal success of using the board game "The Resistance" to teach validity and soundness of arguments in a discrete mathematics course. Along the way, we will give some…

  12. Measuring the Quality of Life of University Students. Research Monograph Series. Volume 1.

    ERIC Educational Resources Information Center

    Roberts, Lance W.; Clifton, Rodney A.

    This study sought to develop a valid set of scales in the cognitive and affective domains for measuring the quality of life of university students. In addition the study attempted to illustrate the usefulness of Thomas Piazza's procedures for constructing valid scales in educational research. Piazza's method involves a multi-step construction of…

  13. On the Validity of Repeated Assessments in the UMAT, a High-Stakes Admissions Test

    ERIC Educational Resources Information Center

    Andrich, David; Styles, Irene; Mercer, Annette; Puddey, Ian B.

    2017-01-01

    The possibility that the validity of assessment is compromised by repeated sittings of highly competitive and high profile selection tests has been documented and is of concern to stake-holders. An illustrative example is the Undergraduate Medicine and Health Sciences Admission Test (UMAT) used by some medical and dental courses in Australia and…

  14. Computation of incompressible viscous flows through artificial heart devices with moving boundaries

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Rogers, Stuart; Kwak, Dochan; Chang, I.-DEE

    1991-01-01

    The extension of computational fluid dynamics techniques to artificial heart flow simulations is illustrated. Unsteady incompressible Navier-Stokes equations written in 3-D generalized curvilinear coordinates are solved iteratively at each physical time step until the incompressibility condition is satisfied. The solution method is based on the pseudocompressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. The efficiency and robustness of the time-accurate formulation of the algorithm are tested by computing the flow through model geometries. A channel flow with a moving indentation is computed and validated with experimental measurements and other numerical solutions. In order to handle the geometric complexity and the moving boundary problems, a zonal method and an overlapping grid embedding scheme are used, respectively. Steady-state solutions for the flow through a tilting disk heart valve were compared against experimental measurements, and good agreement was obtained. The flow computation during the valve opening and closing is carried out to illustrate the moving boundary capability.

  15. Thermal design and TDM test of the ETS-VI

    NASA Astrophysics Data System (ADS)

    Yoshinaka, T.; Kanamori, K.; Takenaka, N.; Kawashima, J.; Ido, Y.; Kuriyama, Y.

    The Engineering Test Satellite-VI (ETS-VI) thermal design, thermal development model (TDM) test, and evaluation results are described. The allocation of the thermal control materials on the spacecraft is illustrated. The principal design approach is to minimize the interactions between the antenna tower module and the main body, and between the main body and the liquid apogee propulsion system by means of multilayer insulation blankets and low conductance graphite epoxy support structures. The TDM test shows that the thermal control subsystem is capable of maintaining the on-board components within specified temperature limits. The heat pipe network is confirmed to operate properly, and a uniform panel temperature distribution is accomplished. The thermal analytical model is experimentally verified. The validity of the thermal control subsystem design is confirmed by the modified on-orbit analytical model.

  16. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks.

    PubMed

    Khan, Taimoor; De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results.

  17. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks

    PubMed Central

    De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results. PMID:27382616

  18. Data Farming and Defense Applications

    NASA Technical Reports Server (NTRS)

    Horne, Gary; Meyer, Ted

    2011-01-01

    Data farming uses simulation modeling, high performance computing, experimental design and analysis to examine questions of interest with large possibility spaces. This methodology allows for the examination of whole landscapes of potential outcomes and provides the capability of executing enough experiments so that outliers might be captured and examined for insights. It can be used to conduct sensitivity studies, to support validation and verification of models, to iteratively optimize outputs using heuristic search and discovery, and as an aid to decision-makers in understanding complex relationships of factors. In this paper we describe efforts at the Naval Postgraduate School in developing these new and emerging tools. We also discuss data farming in the context of application to questions inherent in military decision-making. The particular application we illustrate here is social network modeling to support the countering of improvised explosive devices.

  19. Uncertainties in ecosystem service maps: a comparison on the European scale.

    PubMed

    Schulp, Catharina J E; Burkhard, Benjamin; Maes, Joachim; Van Vliet, Jasper; Verburg, Peter H

    2014-01-01

    Safeguarding the benefits that ecosystems provide to society is increasingly included as a target in international policies. To support such policies, ecosystem service maps are made. However, there is little attention to the accuracy of these maps. We made a systematic review and quantitative comparison of ecosystem service maps on the European scale to generate insights into the uncertainty of ecosystem service maps and discuss the possibilities for quantitative validation. Maps of climate regulation and recreation were reasonably similar, while large uncertainties among maps of erosion protection and flood regulation were observed. Pollination maps had a moderate similarity. Differences among the maps were caused by differences in indicator definition, level of process understanding, mapping aim, data sources and methodology. The absence of suitable observed data on ecosystem service provisioning hampers independent validation of the maps. Consequently, there are, so far, no accurate measures of ecosystem service map quality. Policy makers and other users need to be cautious when applying ecosystem service maps for decision-making. The results illustrate the need for better process understanding and data acquisition to advance ecosystem service mapping, modelling and validation.

  20. Modal analysis of graphene-based structures for large deformations, contact and material nonlinearities

    NASA Astrophysics Data System (ADS)

    Ghaffari, Reza; Sauer, Roger A.

    2018-06-01

    The nonlinear frequencies of pre-stressed graphene-based structures, such as flat graphene sheets and carbon nanotubes, are calculated. These structures are modeled with a nonlinear hyperelastic shell model. The model is calibrated with quantum mechanics data and is valid for high strains. Analytical solutions of the natural frequencies of various plates are obtained for the Canham bending model by assuming infinitesimal strains. These solutions are used for the verification of the numerical results. The performance of the model is illustrated by means of several examples. Modal analysis is performed for square plates under pure dilatation or uniaxial stretch, circular plates under pure dilatation or under the effects of an adhesive substrate, and carbon nanotubes under uniaxial compression or stretch. The adhesive substrate is modeled with van der Waals interaction (based on the Lennard-Jones potential) and a coarse grained contact model. It is shown that the analytical natural frequencies underestimate the real ones, and this should be considered in the design of devices based on graphene structures.
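
    For reference, the classical Kirchhoff result for a simply supported rectangular plate (side lengths a and b, bending stiffness D, mass per unit area ρh) plays the role of the analytical benchmark mentioned above; mapping the Canham bending stiffness onto D in the infinitesimal-strain limit is an assumption made here, not a statement of the paper's derivation.

```latex
% Classical simply supported Kirchhoff plate frequencies (reference formula):
\[
  \omega_{mn} \;=\; \pi^{2}\!\left[\left(\frac{m}{a}\right)^{2}
                  + \left(\frac{n}{b}\right)^{2}\right]
                  \sqrt{\frac{D}{\rho h}},
  \qquad m,n = 1,2,\dots
\]
```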

  1. School system evaluation by value added analysis under endogeneity.

    PubMed

    Manzi, Jorge; San Martín, Ernesto; Van Bellegem, Sébastien

    2014-01-01

    Value added is a common tool in educational research on effectiveness. It is often modeled as a (prediction of a) random effect in a specific hierarchical linear model. This paper shows that this modeling strategy is not valid when endogeneity is present. Endogeneity stems, for instance, from a correlation between the random effect in the hierarchical model and some of its covariates. This paper shows that this phenomenon is far from exceptional and can even be a generic problem when the covariates contain the prior score attainments, a typical situation in value added modeling. Starting from a general, model-free definition of value added, the paper derives an explicit expression of the value added in an endogeneous hierarchical linear Gaussian model. Inference on value added is proposed using an instrumental variable approach. The impact of endogeneity on the value added and the estimated value added is calculated accurately. This is also illustrated on a large data set of individual scores of about 200,000 students in Chile.
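
    A two-stage least-squares sketch of the instrumental-variable idea on synthetic data (not the Chilean score data); the instrument, coefficients, and sample size are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# synthetic data with an endogenous regressor x (correlated with the error u)
z = rng.normal(size=n)                 # instrument: affects x but not y directly
u = rng.normal(size=n)                 # structural error
x = 0.8 * z + 0.6 * u + rng.normal(size=n)
y = 2.0 + 1.5 * x + u                  # true coefficient on x is 1.5

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

beta_ols = ols(X, y)                   # biased because x is endogenous
x_hat = Z @ ols(Z, x)                  # first stage: project x on the instrument
beta_2sls = ols(np.column_stack([np.ones(n), x_hat]), y)   # second stage

print("OLS estimate of the x coefficient:  ", round(beta_ols[1], 3))
print("2SLS estimate of the x coefficient: ", round(beta_2sls[1], 3))
```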

  2. Developmental framework to validate future designs of ballistic neck protection.

    PubMed

    Breeze, J; Midwinter, M J; Pope, D; Porter, K; Hepper, A E; Clasper, J

    2013-01-01

    The number of neck injuries has increased during the war in Afghanistan, and they have become an appreciable source of mortality and long-term morbidity for UK servicemen. A three-dimensional numerical model of the neck is necessary to allow simulation of penetrating injury from explosive fragments so that the design of body armour can be optimal, and a framework is required to validate and describe the individual components of this program. An interdisciplinary consensus group consisting of military maxillofacial surgeons, and biomedical, physical, and material scientists was convened to generate the components of the framework, and as a result it incorporates the following components: analysis of deaths and long-term morbidity, assessment of critical cervical structures for incorporation into the model, characterisation of explosive fragments, evaluation of the material of which the body armour is made, and mapping of the entry sites of fragments. The resulting numerical model will simulate the wound tract produced by fragments of differing masses and velocities, and illustrate the effects of temporary cavities on cervical neurovascular structures. Using this framework, a new shirt to be worn under body armour that incorporates ballistic cervical protection has been developed for use in Afghanistan. New designs of the collar validated by human factors and assessment of coverage are currently being incorporated into early versions of the numerical model. The aim of this paper is to describe this developmental framework and provide an update on the current progress of its individual components. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  3. Development and external multicenter validation of Chinese Prostate Cancer Consortium prostate cancer risk calculator for initial prostate biopsy.

    PubMed

    Chen, Rui; Xie, Liping; Xue, Wei; Ye, Zhangqun; Ma, Lulin; Gao, Xu; Ren, Shancheng; Wang, Fubo; Zhao, Lin; Xu, Chuanliang; Sun, Yinghao

    2016-09-01

    Substantial differences exist in the relationship of prostate cancer (PCa) detection rate and prostate-specific antigen (PSA) level between Western and Asian populations. Classic Western risk calculators, European Randomized Study for Screening of Prostate Cancer Risk Calculator, and Prostate Cancer Prevention Trial Risk Calculator, were shown to be not applicable in Asian populations. We aimed to develop and validate a risk calculator for predicting the probability of PCa and high-grade PCa (defined as Gleason Score sum 7 or higher) at initial prostate biopsy in Chinese men. Urology outpatients who underwent initial prostate biopsy according to the inclusion criteria were included. The multivariate logistic regression-based Chinese Prostate Cancer Consortium Risk Calculator (CPCC-RC) was constructed with cases from 2 hospitals in Shanghai. Discriminative ability, calibration and decision curve analysis were externally validated in 3 CPCC member hospitals. Of the 1,835 patients involved, PCa was identified in 338/924 (36.6%) and 294/911 (32.3%) men in the development and validation cohort, respectively. Multivariate logistic regression analyses showed that 5 predictors (age, logPSA, logPV, free PSA ratio, and digital rectal examination) were associated with PCa (Model 1) or high-grade PCa (Model 2), respectively. The area under the curve of Model 1 and Model 2 was 0.801 (95% CI: 0.771-0.831) and 0.826 (95% CI: 0.796-0.857), respectively. Both models illustrated good calibration and substantial improvement in decision curve analyses than any single predictors at all threshold probabilities. Higher predicting accuracy, better calibration, and greater clinical benefit were achieved by CPCC-RC, compared with European Randomized Study for Screening of Prostate Cancer Risk Calculator and Prostate Cancer Prevention Trial Risk Calculator in predicting PCa. CPCC-RC performed well in discrimination and calibration and decision curve analysis in external validation compared with Western risk calculators. CPCC-RC may aid in decision-making of prostate biopsy in Chinese or in other Asian populations with similar genetic and environmental backgrounds. Copyright © 2016 Elsevier Inc. All rights reserved.
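
    A sketch of the general recipe, fitting a five-predictor logistic model and reporting discrimination on held-out data; the synthetic predictors and coefficients below only mimic the variable list in the abstract and are not the CPCC-RC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 1800
age = rng.normal(67, 8, n)
log_psa = rng.normal(2.3, 0.8, n)
log_pv = rng.normal(3.7, 0.4, n)            # log prostate volume
free_psa_ratio = rng.uniform(0.05, 0.4, n)
dre_positive = rng.integers(0, 2, n)        # digital rectal examination finding

# synthetic outcome loosely tied to the predictors (illustrative only)
logit = (-3.3 + 0.06 * age + 1.2 * log_psa - 1.0 * log_pv
         - 4.0 * free_psa_ratio + 0.8 * dre_positive)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([age, log_psa, log_pv, free_psa_ratio, dre_positive])
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print("validation AUC:", round(auc, 3))
```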

  4. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaption and stochastic model modification of the deterministic model adaptation. Deterministic model adaption involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.

  5. Predicting arsenic concentrations in groundwater of San Luis Valley, Colorado: implications for individual-level lifetime exposure assessment.

    PubMed

    James, Katherine A; Meliker, Jaymie R; Buttenfield, Barbara E; Byers, Tim; Zerbe, Gary O; Hokanson, John E; Marshall, Julie A

    2014-08-01

    Consumption of inorganic arsenic in drinking water at high levels has been associated with chronic diseases. Risk is less clear at lower levels of arsenic, in part due to difficulties in estimating exposure. Herein we characterize spatial and temporal variability of arsenic concentrations and develop models for predicting aquifer arsenic concentrations in the San Luis Valley, Colorado, an area of moderately elevated arsenic in groundwater. This study included historical water samples with total arsenic concentrations from 595 unique well locations. A longitudinal analysis established temporal stability in arsenic levels in individual wells. The mean arsenic levels for a random sample of 535 wells were incorporated into five kriging models to predict groundwater arsenic concentrations at any point in time. A separate validation dataset (n = 60 wells) was used to identify the model with strongest predictability. Findings indicate that arsenic concentrations are temporally stable (r = 0.88; 95 % CI 0.83-0.92 for samples collected from the same well 15-25 years apart) and the spatial model created using ordinary kriging best predicted arsenic concentrations (ρ = 0.72 between predicted and observed validation data). These findings illustrate the value of geostatistical modeling of arsenic and suggest the San Luis Valley is a good region for conducting epidemiologic studies of groundwater metals because of the ability to accurately predict variation in groundwater arsenic concentrations.
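
    As a sketch of the spatial-prediction step, Gaussian-process regression with a nugget term (used here as a stand-in for ordinary kriging) can be fit to a training set of wells and checked against held-out wells; the coordinates and arsenic values below are synthetic, not the San Luis Valley data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from scipy.stats import spearmanr

rng = np.random.default_rng(11)

# synthetic well locations (km) and log-arsenic values with spatial structure
coords = rng.uniform(0, 100, size=(595, 2))
field = lambda xy: np.sin(xy[:, 0] / 15.0) + np.cos(xy[:, 1] / 20.0)
log_as = field(coords) + 0.3 * rng.normal(size=len(coords))

train, test = coords[:535], coords[535:]
y_train, y_test = log_as[:535], log_as[535:]

# Gaussian-process regression with a nugget, standing in for ordinary kriging
gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(0.1),
                              normalize_y=True).fit(train, y_train)
pred = gp.predict(test)
rho, _ = spearmanr(pred, y_test)
print("correlation between predicted and observed validation values:", round(rho, 2))
```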

  6. Climate change and heat-related mortality in six cities Part 1: model construction and validation

    NASA Astrophysics Data System (ADS)

    Gosling, Simon N.; McGregor, Glenn R.; Páldy, Anna

    2007-08-01

    Heat waves are expected to increase in frequency and magnitude with climate change. The first part of a study to produce projections of the effect of future climate change on heat-related mortality is presented. Separate city-specific empirical statistical models that quantify significant relationships between summer daily maximum temperature (Tmax) and daily heat-related deaths are constructed from historical data for six cities: Boston, Budapest, Dallas, Lisbon, London, and Sydney. ‘Threshold temperatures’ above which heat-related deaths begin to occur are identified. The results demonstrate significantly lower thresholds in ‘cooler’ cities exhibiting lower mean summer temperatures than in ‘warmer’ cities exhibiting higher mean summer temperatures. Analysis of individual ‘heat waves’ illustrates that a greater proportion of mortality is due to mortality displacement in cities with less sensitive temperature-mortality relationships than in those with more sensitive relationships, and that mortality displacement is no longer a feature more than 12 days after the end of the heat wave. Validation techniques through residual and correlation analyses of modelled and observed values and comparisons with other studies indicate that the observed temperature-mortality relationships are represented well by each of the models. The models can therefore be used with confidence to examine future heat-related deaths under various climate change scenarios for the respective cities (presented in Part 2).
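
    The threshold idea can be sketched as a hockey-stick regression: heat-related deaths are modeled as flat below a threshold temperature and linearly increasing above it, with the threshold chosen by scanning candidate values. The data below are simulated, not the study's city series, and a plain least-squares fit is used here for simplicity rather than the study's city-specific empirical models.

```python
import numpy as np

rng = np.random.default_rng(2)
tmax = rng.uniform(15, 40, size=2000)          # summer daily maximum temperatures (deg C)
true_threshold, slope = 29.0, 1.8
mu = 2.0 + slope * np.clip(tmax - true_threshold, 0, None)
deaths = rng.poisson(mu)                       # synthetic daily heat-related deaths

def fit_above_threshold(threshold):
    """Least-squares fit of deaths ~ a + b * max(Tmax - threshold, 0)."""
    x = np.clip(tmax - threshold, 0, None)
    X = np.column_stack([np.ones_like(x), x])
    coef, sse = np.linalg.lstsq(X, deaths, rcond=None)[:2]
    return coef, sse[0]

# choose the threshold that minimizes the residual sum of squares
candidates = np.arange(20.0, 35.0, 0.5)
best = min(candidates, key=lambda th: fit_above_threshold(th)[1])
print("estimated threshold temperature:", best, "deg C")
```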

  7. Pharmacological and Physiological Characterization of the Tremulous Jaw Movement Model of Parkinsonian Tremor: Potential Insights into the Pathophysiology of Tremor

    PubMed Central

    Collins-Praino, Lyndsey E.; Paul, Nicholas E.; Rychalsky, Kristen L.; Hinman, James R.; Chrobak, James J.; Senatus, Patrick B.; Salamone, John D.

    2011-01-01

    Tremor is a cardinal symptom of parkinsonism, occurring early on in the disease course and affecting more than 70% of patients. Parkinsonian resting tremor occurs in a frequency range of 3–7 Hz and can be resistant to available pharmacotherapy. Despite its prevalence, and the significant decrease in quality of life associated with it, the pathophysiology of parkinsonian tremor is poorly understood. The tremulous jaw movement (TJM) model is an extensively validated rodent model of tremor. TJMs are induced by conditions that also lead to parkinsonism in humans (i.e., striatal DA depletion, DA antagonism, and cholinomimetic activity) and reversed by several antiparkinsonian drugs (i.e., DA precursors, DA agonists, anticholinergics, and adenosine A2A antagonists). TJMs occur in the same 3–7 Hz frequency range seen in parkinsonian resting tremor, a range distinct from that of dyskinesia (1–2 Hz), and postural tremor (8–14 Hz). Overall, these drug-induced TJMs share many characteristics with human parkinsonian tremor, but do not closely resemble tardive dyskinesia. The current review discusses recent advances in the validation of the TJM model, and illustrates how this model is being used to develop novel therapeutic strategies, both surgical and pharmacological, for the treatment of parkinsonian resting tremor. PMID:21772815

  8. Establishment and validation for the theoretical model of the vehicle airbag

    NASA Astrophysics Data System (ADS)

    Zhang, Junyuan; Jin, Yang; Xie, Lizhe; Chen, Chao

    2015-05-01

    The design and optimization of the occupant restraint system (ORS) currently rely on numerous physical tests and mathematical simulations. Although these two methods are effective and accurate, they are too time-consuming and complex for the concept design phase of the ORS, so a fast, direct design and optimization method is needed at that stage. Since the airbag system is a crucial part of the ORS, this paper establishes a theoretical model of the vehicle airbag in order to clarify the interaction between occupants and airbags, and builds on it a fast design and optimization method for airbags in the concept design phase. First, a simplified mechanical relationship between the airbag's design parameters and the occupant response is derived from classical mechanics; the momentum theorem and the ideal gas state equation are then used to express this relationship explicitly. Using MATLAB, an iterative algorithm with discrete variables is applied to solve the proposed theoretical model for random inputs within a defined range. Validation against MADYMO simulations confirms the validity and accuracy of the theoretical model for two principal design parameters, the inflated gas mass and the vent diameter, within a typical range. This research contributes a deeper understanding of the occupant-airbag interaction, a fast design and optimization method for the airbag's principal parameters in the concept design phase, and ranges of the airbag's initial design parameters for subsequent CAE simulations and physical tests.
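
    The following toy script, offered only as a hypothetical illustration and not as the authors' MATLAB/MADYMO model, shows how the two principal design parameters named in the abstract (inflated gas mass and vent diameter) enter a crude occupant-airbag calculation combining the ideal gas law, a simple orifice-outflow law for the vent, and the momentum theorem.

```python
# Toy sketch (not the authors' model): occupant decelerated by airbag gauge pressure
# while gas vents through an orifice. All parameter values are hypothetical.
import numpy as np

m_occ, area = 75.0, 0.10          # occupant mass (kg) and effective contact area (m^2)
V, T, R = 0.06, 300.0, 287.0      # bag volume (m^3), gas temperature (K), gas constant (J/kg/K)
m_gas = 0.12                      # inflated gas mass (kg) -- one principal design parameter
d_vent = 0.03                     # vent diameter (m)      -- the other principal design parameter
a_vent = np.pi * (d_vent / 2.0) ** 2
p_atm, cd = 101325.0, 0.6

v, dt, peak_decel = 12.0, 1e-4, 0.0   # occupant speed (m/s), time step (s)
for _ in range(int(0.1 / dt)):        # simulate the first 100 ms of contact
    p = m_gas * R * T / V                                          # ideal gas law (isothermal, fixed volume)
    dp = max(p - p_atm, 0.0)                                       # gauge pressure
    m_gas -= cd * a_vent * np.sqrt(2.0 * (m_gas / V) * dp) * dt    # orifice outflow through the vent
    decel = dp * area / m_occ                                      # momentum theorem: force / mass
    v = max(v - decel * dt, 0.0)
    peak_decel = max(peak_decel, decel)
print(f"peak occupant deceleration: {peak_decel / 9.81:.1f} g, final speed: {v:.1f} m/s")
```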

  9. Predictive Engineering Tools for Injection-Molded Long-Carbon-Fiber Thermoplastic Composites. Topical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Fifield, Leonard S.; Wang, Jin

    2016-06-01

    This project aimed to integrate, optimize, and validate the fiber orientation and length distribution models previously developed and implemented in the Autodesk® Simulation Moldflow® Insight (ASMI) software package for injection-molded long-carbon-fiber (LCF) thermoplastic composite structures. The project was organized into two phases. Phase 1 demonstrated the ability of the advanced ASMI package to predict fiber orientation and length distributions in LCF/polypropylene (PP) and LCF/polyamide-6,6 (PA66) plaques within 15% of experimental results. Phase 2 validated the advanced ASMI package by predicting fiber orientation and length distributions within 15% of experimental results for a complex three-dimensional (3D) Toyota automotive part injection-molded from LCF/PP and LCF/PA66 materials. Work under Phase 2 also included estimates of weight savings and cost impacts for a vehicle system using ASMI and structural analyses of the complex part. The present report summarizes the completion of Phases 1 and 2 work activities and accomplishments achieved by the team comprising Pacific Northwest National Laboratory (PNNL); Purdue University (Purdue); Virginia Polytechnic Institute and State University (Virginia Tech); Autodesk, Inc. (Autodesk); PlastiComp, Inc. (PlastiComp); Toyota Research Institute North America (Toyota); Magna Exteriors and Interiors Corp. (Magna); and University of Illinois. Figure 1 illustrates the technical approach adopted in this project, which progressed from compounding LCF/PP and LCF/PA66 materials, to process model improvement and implementation, to molding and modeling LCF/PP and LCF/PA66 plaques. The lessons learned from the plaque study and the successful validation of improved process models for fiber orientation and length distributions for these plaques enabled the project to go to Phase 2 to mold, model, and optimize the 3D complex part.

  10. Inverse probability weighting to control confounding in an illness-death model for interval-censored data.

    PubMed

    Gillaizeau, Florence; Sénage, Thomas; Le Borgne, Florent; Le Tourneau, Thierry; Roussel, Jean-Christian; Leffondrè, Karen; Porcher, Raphaël; Giraudeau, Bruno; Dantan, Etienne; Foucher, Yohann

    2018-04-15

    Multistate models for interval-censored data, such as the illness-death model, are still used only to a limited extent in medical research, despite a substantial literature demonstrating their advantages over standard survival models. Possible explanations are their limited availability in classical statistical software or, when they are available, limitations related to multivariable modelling for taking confounding into consideration. In this paper, we propose a strategy based on propensity scores that allows population causal effects to be estimated: inverse probability weighting in the illness-death semi-Markov model with interval-censored data. Using simulated data, we validated the performance of the proposed approach. We also illustrate the usefulness of the method with an application evaluating the relationship between an inadequately sized aortic bioprosthesis and its degeneration and/or patient death. We have updated the R package multistate to facilitate future use of this method. Copyright © 2017 John Wiley & Sons, Ltd.
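
    The weighted illness-death machinery itself lives in the authors' R package multistate; the sketch below shows only the generic propensity-score weighting step in Python, with invented confounders and exposure, to make the inverse-probability-weighting idea concrete.

```python
# Minimal sketch (illustrative only): stabilized inverse probability of treatment
# weights from a propensity score; the weighted multistate model is not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                              # hypothetical baseline confounders
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # hypothetical exposure indicator

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]   # propensity score
p_treat = treated.mean()
weights = np.where(treated == 1, p_treat / ps, (1 - p_treat) / (1 - ps))  # stabilized IPW
print("weight range:", weights.min().round(2), "-", weights.max().round(2))
```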

  11. Population-based absolute risk estimation with survey data

    PubMed Central

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
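
    A minimal numerical sketch of the absolute-risk construction with piecewise-constant (piecewise exponential) cause-specific hazards is given below; the hazard values are hypothetical, and the survey weighting and variance estimation described in the abstract are not reproduced.

```python
# Minimal sketch (illustrative only): absolute risk of cause 1 by year 5 in the
# presence of a competing cause, with piecewise-constant cause-specific hazards.
import numpy as np

h1 = np.array([0.010, 0.012, 0.015, 0.020, 0.026])   # cause 1 hazard per year (e.g. CVD death)
h2 = np.array([0.008, 0.009, 0.011, 0.013, 0.016])   # competing hazard (e.g. cancer death)
dt = 1.0                                             # interval length in years

total = h1 + h2
surv_start = np.exp(-np.cumsum(np.r_[0.0, total[:-1]]) * dt)   # overall survival at interval start
# Within each interval the integral of h1(u) * S(u) du has a closed form for constant hazards.
risk_per_interval = surv_start * (h1 / total) * (1.0 - np.exp(-total * dt))
abs_risk_5y = risk_per_interval.sum()
print(f"5-year absolute risk of cause 1: {abs_risk_5y:.3f}")
```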

  12. Computational Modeling in Structural Materials Processing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    High temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in aerospace, automotive, machine tool industries and in high speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates, coating of fibers, inside cavities and on complex objects, and infiltration within preforms called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, ability to optimize processes, and scale-up for large scale manufacturing is limited. In this regard, computational modeling of the processes is valuable since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC, nitride, and boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.

  13. A baseline-free procedure for transformation models under interval censorship.

    PubMed

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

    An important property of the Cox regression model is that the estimation of regression parameters via the partial likelihood does not depend on the baseline survival function; we call such a procedure baseline-free. Using the marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval-censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, for which the estimation procedures available so far involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, supporting asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean-zero martingale is provided.

  14. Validation of an ESL Writing Test in a Malaysian Secondary School Context

    ERIC Educational Resources Information Center

    Zainal, Azlin

    2012-01-01

    The present study was conducted with a twofold purpose. First, I aim to apply the socio-cognitive framework by Shaw and Weir (2007) in order to validate a summative writing test used in a Malaysian ESL secondary school context. Secondly, by applying the framework I also aim to illustrate practical ways in which teachers can gather validity…

  15. Two new species of Paurodontella Husain and Khan, 1968 (Nematoda: Sphaerulariidae) associated with wheat and a diagnostic compendium to the genus

    USDA-ARS?s Scientific Manuscript database

    An identification key to 10 valid species of Paurodontella is given. A compendium of the most important diagnostic characters with illustrations of each species is included as a practical alternative and supplement to the key. The diagnosis of Paurodontella is emended and a list of all valid specie...

  16. Cloud ice: A climate model challenge with signs and expectations of progress

    NASA Astrophysics Data System (ADS)

    Waliser, Duane E.; Li, Jui-Lin F.; Woods, Christopher P.; Austin, Richard T.; Bacmeister, Julio; Chern, Jiundar; Del Genio, Anthony; Jiang, Jonathan H.; Kuang, Zhiming; Meng, Huan; Minnis, Patrick; Platnick, Steve; Rossow, William B.; Stephens, Graeme L.; Sun-Mack, Szedung; Tao, Wei-Kuo; Tompkins, Adrian M.; Vane, Deborah G.; Walker, Christopher; Wu, Dong

    2009-04-01

    Present-day shortcomings in the representation of upper tropospheric ice clouds in general circulation models (GCMs) lead to errors in weather and climate forecasts as well as account for a source of uncertainty in climate change projections. An ongoing challenge in rectifying these shortcomings has been the availability of adequate, high-quality, global observations targeting ice clouds and related precipitating hydrometeors. In addition, the inadequacy of the modeled physics and the often disjointed nature between model representation and the characteristics of the retrieved/observed values have hampered GCM development and validation efforts from making effective use of the measurements that have been available. Thus, even though parameterizations in GCMs accounting for cloud ice processes have, in some cases, become more sophisticated in recent years, this development has largely occurred independently of the global-scale measurements. With the relatively recent addition of satellite-derived products from Aura/Microwave Limb Sounder (MLS) and CloudSat, there are now considerably more resources with new and unique capabilities to evaluate GCMs. In this article, we illustrate the shortcomings evident in model representations of cloud ice through a comparison of the simulations assessed in the Intergovernmental Panel on Climate Change Fourth Assessment Report, briefly discuss the range of global observational resources that are available, and describe the essential components of the model parameterizations that characterize their "cloud" ice and related fields. Using this information as background, we (1) discuss some of the main considerations and cautions that must be taken into account in making model-data comparisons related to cloud ice, (2) illustrate present progress and uncertainties in applying satellite cloud ice (namely from MLS and CloudSat) to model diagnosis, (3) show some indications of model improvements, and finally (4) discuss a number of remaining questions and suggestions for pathways forward.

  17. Models with Men and Women: Representing Gender in Dynamic Modeling of Social Systems.

    PubMed

    Palmer, Erika; Wilson, Benedicte

    2018-04-01

    Dynamic engineering models have yet to be evaluated in the context of feminist engineering ethics. Decision-making concerning gender in dynamic modeling design is a gender and ethical issue that is important to address regardless of the system in which the dynamic modeling is applied. There are many dynamic modeling tools that operationally include the female population; however, there is an important distinction between females and women: the difference between biological sex and the social construct of gender, which is fluid and changes over time and geography. The ethical oversight of failing to represent, or misrepresenting, gender in model design when it is relevant to the model purpose can have implications for model validity and policy model development. This paper highlights this gender issue in the context of feminist engineering ethics using a dynamic population model. Women are often represented in this type of model only in their biological capacity, while lacking their gender identity. This illustrative example also highlights how language, including the naming of variables and communication with decision-makers, plays a role in this gender issue.

  18. An integrated approach to evaluating alternative risk prediction strategies: a case study comparing alternative approaches for preventing invasive fungal disease.

    PubMed

    Sadique, Z; Grieve, R; Harrison, D A; Jit, M; Allen, E; Rowan, K M

    2013-12-01

    This article proposes an integrated approach to the development, validation, and evaluation of new risk prediction models illustrated with the Fungal Infection Risk Evaluation study, which developed risk models to identify non-neutropenic, critically ill adult patients at high risk of invasive fungal disease (IFD). Our decision-analytical model compared alternative strategies for preventing IFD at up to three clinical decision time points (critical care admission, after 24 hours, and end of day 3), followed with antifungal prophylaxis for those judged "high" risk versus "no formal risk assessment." We developed prognostic models to predict the risk of IFD before critical care unit discharge, with data from 35,455 admissions to 70 UK adult, critical care units, and validated the models externally. The decision model was populated with positive predictive values and negative predictive values from the best-fitting risk models. We projected lifetime cost-effectiveness and expected value of partial perfect information for groups of parameters. The risk prediction models performed well in internal and external validation. Risk assessment and prophylaxis at the end of day 3 was the most cost-effective strategy at the 2% and 1% risk threshold. Risk assessment at each time point was the most cost-effective strategy at a 0.5% risk threshold. Expected values of partial perfect information were high for positive predictive values or negative predictive values (£11 million-£13 million) and quality-adjusted life-years (£11 million). It is cost-effective to formally assess the risk of IFD for non-neutropenic, critically ill adult patients. This integrated approach to developing and evaluating risk models is useful for informing clinical practice and future research investment. © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Published by International Society for Pharmacoeconomics and Outcomes Research (ISPOR) All rights reserved.

  19. Nonlinear finite element model updating for damage identification of civil structures using batch Bayesian estimation

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.

    2017-02-01

    This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
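
    As a highly simplified stand-in for the batch estimation idea (not the authors' FE framework), the sketch below jointly estimates two model parameters and the measurement-noise amplitude by minimizing a negative log-likelihood over an entire response time history with a gradient-based optimizer; a toy damped oscillator replaces the nonlinear FE model.

```python
# Minimal sketch (illustrative only): batch maximum-likelihood updating of model
# parameters and a noise standard deviation from a measured response time history.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 5, 500)

def predict(theta):
    k, c = theta                                   # stiffness-like and damping-like parameters
    return np.exp(-c * t) * np.cos(np.sqrt(k) * t)

true_theta = np.array([9.0, 0.3])
rng = np.random.default_rng(3)
y_meas = predict(true_theta) + rng.normal(0, 0.05, t.size)   # synthetic measurements

def neg_log_like(x):
    theta, log_sigma = x[:2], x[2]
    sigma = np.exp(log_sigma)                      # noise amplitude estimated jointly
    r = y_meas - predict(theta)
    return 0.5 * np.sum(r**2) / sigma**2 + t.size * log_sigma

res = minimize(neg_log_like, x0=[6.0, 0.1, np.log(0.1)], method="L-BFGS-B")
print("estimated (k, c, sigma):", res.x[0], res.x[1], np.exp(res.x[2]))
```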

  20. Elastic full waveform inversion based on the homogenization method: theoretical framework and 2-D numerical illustrations

    NASA Astrophysics Data System (ADS)

    Capdeville, Yann; Métivier, Ludovic

    2018-05-01

    Seismic imaging is an efficient tool to investigate the Earth interior. Many of the different imaging techniques currently used, including the so-called full waveform inversion (FWI), are based on limited frequency band data. Such data are not sensitive to the true earth model, but to a smooth version of it. This smooth version can be related to the true model by the homogenization technique. Homogenization for wave propagation in deterministic media with no scale separation, such as geological media, has been recently developed. With such an asymptotic theory, it is possible to compute an effective medium valid for a given frequency band such that effective waveforms and true waveforms are the same up to a controlled error. In this work we make the link between limited frequency band inversion, mainly FWI, and homogenization. We establish the relation between a true model and an FWI result model. This relation is important for a proper interpretation of FWI images. We numerically illustrate, in the 2-D case, that an FWI result is at best the homogenized version of the true model. Moreover, it appears that the homogenized FWI model is quite independent of the FWI parametrization, as long as it has enough degrees of freedom. In particular, inverting for the full elastic tensor is, in each of our tests, always a good choice. We show how the homogenization can help to understand FWI behaviour and help to improve its robustness and convergence by efficiently constraining the solution space of the inverse problem.

  1. CFD gas distribution analysis for different continuous-miner scrubber redirection configurations

    PubMed Central

    Zheng, Y.; Organiscak, J.A.; Zhou, L.; Beck, T.W.; Rider, J.P.

    2018-01-01

    The U.S. National Institute for Occupational Safety and Health (NIOSH)’s Pittsburgh Mining Research Division (PMRD) recently developed a series of models using computational fluid dynamics (CFD) to study gas distribution around a continuous mining machine with various fan-powered flooded bed scrubber discharge configurations in an exhaust curtain working face. CFD models utilizing a species transport model without reactions in FLUENT were constructed to evaluate the redirection of scrubber discharge toward the mining face rather than behind the return curtain. The study illustrates the gas distribution in the slab (second) cut. The following scenarios are considered in this study: 100 percent of the discharge redirected back toward the face on the off-curtain side; 100 percent of the discharge redirected back toward the face, but divided equally to both sides; and 15 percent of the discharge redirected toward the face on the off-curtain side, with 85 percent directed toward the return curtain. These models are compared against a model with a conventional scrubber discharge where air is directed away from the face into the return. The models were validated against experimental data and were shown to accurately predict sulfur hexafluoride (SF6) gas levels at four gas monitoring locations. This study includes a predictive simulation examining a 45° scrubber angle compared with the 23° angle for the 100 percent redirected, equally divided case. This paper describes the validation of the CFD models based on experimental data of the gas distribution results. PMID:29375242

  2. CFD gas distribution analysis for different continuous-miner scrubber redirection configurations.

    PubMed

    Zheng, Y; Organiscak, J A; Zhou, L; Beck, T W; Rider, J P

    2017-01-01

    The U.S. National Institute for Occupational Safety and Health (NIOSH)'s Pittsburgh Mining Research Division (PMRD) recently developed a series of models using computational fluid dynamics (CFD) to study gas distribution around a continuous mining machine with various fan-powered flooded bed scrubber discharge configurations in an exhaust curtain working face. CFD models utilizing a species transport model without reactions in FLUENT were constructed to evaluate the redirection of scrubber discharge toward the mining face rather than behind the return curtain. The study illustrates the gas distribution in the slab (second) cut. The following scenarios are considered in this study: 100 percent of the discharge redirected back toward the face on the off-curtain side; 100 percent of the discharge redirected back toward the face, but divided equally to both sides; and 15 percent of the discharge redirected toward the face on the off-curtain side, with 85 percent directed toward the return curtain. These models are compared against a model with a conventional scrubber discharge where air is directed away from the face into the return. The models were validated against experimental data and were shown to accurately predict sulfur hexafluoride (SF6) gas levels at four gas monitoring locations. This study includes a predictive simulation examining a 45° scrubber angle compared with the 23° angle for the 100 percent redirected, equally divided case. This paper describes the validation of the CFD models based on experimental data of the gas distribution results.

  3. Root zone water quality model (RZWQM2): Model use, calibration and validation

    USGS Publications Warehouse

    Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.

    2012-01-01

    The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.

  4. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  5. Design and Evaluation of Complex Moving HIFU Treatment Protocols

    NASA Astrophysics Data System (ADS)

    Kargl, Steven G.; Andrew, Marilee A.; Kaczkowski, Peter J.; Brayman, Andrew A.; Crum, Lawrence A.

    2005-03-01

    The use of moving high-intensity focused ultrasound (HIFU) treatment protocols is of interest in achieving efficient formation of large-volume thermal lesions in tissue. Judicious protocol design is critical in order to avoid collateral damage to healthy tissues outside the treatment zone. A KZK-BHTE model, extended to simulate multiple, moving scans in tissue, is used to investigate protocol design considerations. Predictions and experimental observations are presented which 1) validate the model, 2) illustrate how to assess the effects of acoustic nonlinearity, and 3) demonstrate how to assess and control collateral damage such as prefocal lesion formation and lesion formation resulting from thermal conduction without direct HIFU exposure. Experimental data consist of linear and circular scan protocols delivered over a range of exposure regimes in ex vivo bovine liver.

  6. Nonlocal elasticity and shear deformation effects on thermal buckling of a CNT embedded in a viscoelastic medium

    NASA Astrophysics Data System (ADS)

    Zenkour, A. M.

    2018-05-01

    The thermal buckling analysis of carbon nanotubes embedded in a visco-Pasternak's medium is investigated. The Eringen's nonlocal elasticity theory, in conjunction with the first-order Donnell's shell theory, is used for this purpose. The surrounding medium is considered as a three-parameter viscoelastic foundation model, Winkler-Pasternak's model as well as a viscous damping coefficient. The governing equilibrium equations are obtained and solved for carbon nanotubes subjected to different thermal and mechanical loads. The effects of nonlocal parameter, radius and length of nanotube, and the three foundation parameters on the thermal buckling of the nanotube are studied. Sample critical buckling loads are reported and graphically illustrated to check the validity of the present results and to present benchmarks for future comparisons.

  7. Insuring wind energy production

    NASA Astrophysics Data System (ADS)

    D'Amico, Guglielmo; Petroni, Filippo; Prattico, Flavio

    2017-02-01

    This paper presents an insurance contract that the supplier of wind energy may subscribe to in order to immunize the production of electricity against the volatility of the wind speed process. The other party of the contract may be any dispatchable energy producer, such as a gas turbine or a hydroelectric generator, which can supply the required energy in case of little or no wind. The adoption of a stochastic wind speed model allows the computation of the fair premium that the wind power supplier has to pay in order to hedge the risk of inadequate output of electricity at any time. Recursive-type equations are obtained for the prospective mathematical reserves of the insurance contract and for their higher-order moments. The model and the validity of the results are illustrated through a numerical example.
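
    A crude Monte Carlo sketch of the pricing idea is given below: the fair premium is approximated as the expected discounted cost of the hourly energy shortfall that the backup producer must cover. A simple AR(1) wind-speed process and a generic power curve stand in for the stochastic wind model of the paper; every parameter is hypothetical.

```python
# Minimal sketch (illustrative only): fair premium as expected discounted shortfall
# cost over a one-month contract, under a toy AR(1) wind-speed process.
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_hours = 2000, 24 * 30                  # one contract month
phi, mu, sigma = 0.9, 8.0, 1.5                    # AR(1) wind-speed parameters (m/s)
contracted_mw, price, rate = 1.5, 60.0, 0.03      # MW, EUR/MWh, annual discount rate

def power_curve(v):
    # crude cubic ramp between cut-in (3 m/s) and rated output (2 MW at 12 m/s)
    return np.clip(2.0 * ((v - 3.0) / 9.0) ** 3, 0.0, 2.0)

discount = np.exp(-rate * np.arange(n_hours) / 8760.0)
premiums = np.zeros(n_paths)
for i in range(n_paths):
    v = np.empty(n_hours)
    v[0] = mu
    eps = rng.normal(0, sigma, n_hours)
    for h in range(1, n_hours):
        v[h] = mu + phi * (v[h - 1] - mu) + eps[h]
    shortfall_mwh = np.maximum(contracted_mw - power_curve(v), 0.0)  # hourly shortfall
    premiums[i] = np.sum(discount * shortfall_mwh * price)

print(f"fair premium (expected discounted shortfall cost): {premiums.mean():.0f} EUR")
```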

  8. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
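
    The sketch below shows a generic residual-bootstrap recipe for prediction intervals around a nonparametric regressor, with points outside the interval flagged as potential anomalies; it illustrates the general idea on synthetic data and is not the authors' exact procedure.

```python
# Minimal sketch (illustrative only): residual-bootstrap prediction intervals for a
# nonparametric regression model, with out-of-interval points flagged as anomalies.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 10, 300))[:, None]
y = np.sin(x[:, 0]) + rng.normal(0, 0.2, 300)

model = KNeighborsRegressor(n_neighbors=15).fit(x, y)
resid = y - model.predict(x)

B = 500
boot_preds = np.empty((B, x.shape[0]))
for b in range(B):
    y_b = model.predict(x) + rng.choice(resid, size=resid.size, replace=True)   # resampled fit
    m_b = KNeighborsRegressor(n_neighbors=15).fit(x, y_b)
    boot_preds[b] = m_b.predict(x) + rng.choice(resid, size=resid.size, replace=True)  # add noise

lo, hi = np.percentile(boot_preds, [2.5, 97.5], axis=0)
anomalous = (y < lo) | (y > hi)
print("flagged as anomalous:", int(anomalous.sum()), "of", y.size)
```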

  9. Performance testing and analysis results of AMTEC cells for space applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borkowski, C.A.; Barkan, A.; Hendricks, T.J.

    1998-01-01

    Testing and analysis have shown that AMTEC (Alkali Metal Thermal to Electric Conversion) (Weber, 1974) cells can reach the performance (power) levels required by a variety of space applications. The performance of an AMTEC cell is highly dependent on the thermal environment to which it is subjected. A guard heater assembly has been designed, fabricated, and used to expose individual AMTEC cells to various thermal environments. The design and operation of the guard heater assembly will be discussed. Performance test results of an AMTEC cell operated under guard heated conditions to simulate an adiabatic cell wall thermal environment are presented. Experimental data and analytic model results are compared to illustrate validation of the model. © 1998 American Institute of Physics.

  10. Entropy Production and Fluctuation Theorems for Active Matter

    NASA Astrophysics Data System (ADS)

    Mandal, Dibyendu; Klymko, Katherine; DeWeese, Michael R.

    2017-12-01

    Active biological systems reside far from equilibrium, dissipating heat even in their steady state, thus requiring an extension of conventional equilibrium thermodynamics and statistical mechanics. In this Letter, we have extended the emerging framework of stochastic thermodynamics to active matter. In particular, for the active Ornstein-Uhlenbeck model, we have provided consistent definitions of thermodynamic quantities such as work, energy, heat, entropy, and entropy production at the level of single, stochastic trajectories and derived related fluctuation relations. We have developed a generalization of the Clausius inequality, which is valid even in the presence of the non-Hamiltonian dynamics underlying active matter systems. We have illustrated our results with explicit numerical studies.

  11. Socio-economic applications of finite state mean field games.

    PubMed

    Gomes, Diogo; Velho, Roberto M; Wolfram, Marie-Therese

    2014-11-13

    In this paper, we present different applications of finite state mean field games to socio-economic sciences. Examples include paradigm shifts in the scientific community or consumer choice behaviour in the free market. The corresponding finite state mean field game models are hyperbolic systems of partial differential equations, for which we present and validate different numerical methods. We illustrate the behaviour of solutions with various numerical experiments, which show interesting phenomena such as shock formation. Hence, we conclude with an investigation of the shock structure in the case of two-state problems. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  12. Coupled NASTRAN/boundary element formulation for acoustic scattering

    NASA Technical Reports Server (NTRS)

    Everstine, Gordon C.; Henderson, Francis M.; Schuetz, Luise S.

    1987-01-01

    A coupled finite element/boundary element capability is described for calculating the sound pressure field scattered by an arbitrary submerged 3-D elastic structure. Structural and fluid impedances are calculated with no approximation other than discretization. The surface fluid pressures and normal velocities are first calculated by coupling a NASTRAN finite element model of the structure with a discretized form of the Helmholtz surface integral equation for the exterior field. Far field pressures are then evaluated from the surface solution using the Helmholtz exterior integral equation. The overall approach is illustrated and validated using a known analytic solution for scattering from submerged spherical shells.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coffrin, Carleton James; Hijazi, Hassan L; Van Hentenryck, Pascal R

    This work revisits the Semidefinite Programming (SDP) relaxation of the AC power flow equations in light of recent results illustrating the benefits of bounds propagation, valid inequalities, and the Convex Quadratic (QC) relaxation. By integrating all of these results into the SDP model, a new hybrid relaxation is proposed, which combines the benefits of all of these recent works. This strengthened SDP formulation is evaluated on 71 AC Optimal Power Flow test cases from the NESTA archive and is shown to have an optimality gap of less than 1% on 63 cases. This new hybrid relaxation closes 50% of the open cases considered, leaving only 8 for future investigation.

  14. Detecting determinism from point processes.

    PubMed

    Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas

    2014-12-01

    The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epiney, A.; Canepa, S.; Zerkak, O.

    The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case, imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.

  16. Establishment of the mathematical model for diagnosing the engine valve faults by genetic programming

    NASA Astrophysics Data System (ADS)

    Yang, Wen-Xian

    2006-05-01

    Available machine fault diagnostic methods perform unsatisfactorily in both on-line and intelligent analyses because their operation involves intensive calculation and is labour intensive. To improve this situation, this paper describes the development of an intelligent approach using the genetic programming (GP) method. Because the constructed mathematical model is simple to evaluate, different kinds of machine faults may be diagnosed correctly and quickly, and human input is significantly reduced in the process of fault diagnosis. The effectiveness of the proposed strategy is validated by an illustrative example, in which three kinds of valve states in a six-cylinder, four-stroke diesel engine, i.e. normal condition, valve-tappet clearance fault and gas leakage fault, are identified. In the example, 22 mathematical functions have been specially designed and 8 easily obtained signal features are used to construct the diagnostic model. Unlike existing GPs, the diagnostic tree used in the algorithm is constructed in an intelligent way by applying a power-weight coefficient to each feature; the power-weight coefficients vary adaptively between 0 and 1 during the evolutionary process. Moreover, different evolutionary strategies are employed for selecting the diagnostic features and functions respectively, so that the mathematical functions are fully utilized while repeated use of signal features is avoided. The experimental results are illustrated diagrammatically in the following sections.

  17. Inferring the Impact of Regulatory Mechanisms that Underpin CD8+ T Cell Control of B16 Tumor Growth In vivo Using Mechanistic Models and Simulation.

    PubMed

    Klinke, David J; Wang, Qing

    2016-01-01

    A major barrier to broadening the efficacy of immunotherapies for cancer is identifying key mechanisms that limit the efficacy of tumor infiltrating lymphocytes. Yet, identifying these mechanisms using human samples and mouse models for cancer remains a challenge. While interactions between cancer and the immune system are dynamic and non-linear, identifying the relative roles that biological components play in regulating anti-tumor immunity commonly relies on human intuition alone, which can be limited by cognitive biases. To assist natural intuition, modeling and simulation play an emerging role in identifying therapeutic mechanisms. To illustrate the approach, we developed a multi-scale mechanistic model to describe the control of tumor growth by a primary response of CD8+ T cells against defined tumor antigens using the B16 C57Bl/6 mouse model for malignant melanoma. The mechanistic model was calibrated to data obtained following adenovirus-based immunization and validated against data obtained following adoptive transfer of transgenic CD8+ T cells. More importantly, we use simulation to test whether the postulated network topology, that is, the modeled biological components and their associated interactions, is sufficient to capture the observed anti-tumor immune response. Given the available data, the simulation results also provided a statistical basis for quantifying the relative importance of different mechanisms that underpin CD8+ T cell control of B16F10 growth. By identifying conditions where the postulated network topology is incomplete, we illustrate how this approach can be used as part of an iterative design-build-test cycle to expand the predictive power of the model.
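
    As a deliberately minimal stand-in for the multi-scale model (which is not reproduced here), the sketch below integrates a generic two-state ODE in which effector CD8+ T cells kill tumor cells; all rate constants are hypothetical and serve only to show how such a mechanistic model is simulated.

```python
# Minimal sketch (illustrative only): a generic two-state ODE in which effector CD8+
# T cells (E) kill tumor cells (N). Parameter values are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r=0.3, K=1e9, k_kill=5e-6, prime=1e4, d_E=0.1):
    N, E = y
    dN = r * N * (1 - N / K) - k_kill * E * N     # logistic growth minus T-cell killing
    dE = prime - d_E * E                          # antigen-driven priming minus decay
    return [dN, dE]

sol = solve_ivp(rhs, (0, 30), [1e5, 0.0], t_eval=np.linspace(0, 30, 301))
print("tumor burden at day 30: %.2e cells" % sol.y[0, -1])
```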

  18. Traditional Arabic & Islamic medicine: validation and empirical assessment of a conceptual model in Qatar.

    PubMed

    AlRawi, Sara N; Khidir, Amal; Elnashar, Maha S; Abdelrahim, Huda A; Killawi, Amal K; Hammoud, Maya M; Fetters, Michael D

    2017-03-14

    Evidence indicates traditional medicine is no longer only used for the healthcare of the poor; its prevalence is also increasing in countries where allopathic medicine is predominant in the healthcare system. While these healing practices have been utilized for thousands of years in the Arabian Gulf, only recently has a theoretical model been developed illustrating the linkages and components of such practices articulated as Traditional Arabic & Islamic Medicine (TAIM). Despite previous theoretical work presenting development of the TAIM model, empirical support has been lacking. The objective of this research is to provide empirical support for the TAIM model and illustrate real world applicability. Using an ethnographic approach, we recruited 84 individuals (43 women and 41 men) who were speakers of one of four common languages in Qatar: Arabic, English, Hindi, and Urdu. Through in-depth interviews, we sought confirming and disconfirming evidence of the model components, namely, health practices, beliefs and philosophy to treat, diagnose, and prevent illnesses and/or maintain well-being, as well as patterns of communication about their TAIM practices with their allopathic providers. Based on our analysis, we find empirical support for all elements of the TAIM model. Participants in this research, visitors to major healthcare centers, mentioned using all elements of the TAIM model: herbal medicines, spiritual therapies, dietary practices, mind-body methods, and manual techniques, applied singularly or in combination. Participants had varying levels of comfort sharing information about TAIM practices with allopathic practitioners. These findings confirm an empirical basis for the elements of the TAIM model. Three elements, namely, spiritual healing, herbal medicine, and dietary practices, were most commonly found. Future research should examine the prevalence of TAIM element use, how it differs among various populations, and its impact on health.

  19. Prototype of an Integrated Hurricane Information System for Research: Description and Illustration of its Use in Evaluating WRF Model Simulations

    NASA Astrophysics Data System (ADS)

    Hristova-Veleva, S.; Chao, Y.; Vane, D.; Lambrigtsen, B.; Li, P. P.; Knosp, B.; Vu, Q. A.; Su, H.; Dang, V.; Fovell, R.; Tanelli, S.; Garay, M.; Willis, J.; Poulsen, W.; Fishbein, E.; Ao, C. O.; Vazquez, J.; Park, K. J.; Callahan, P.; Marcus, S.; Haddad, Z.; Fetzer, E.; Kahn, R.

    2007-12-01

    In spite of recent improvements in hurricane track forecast accuracy, there are still many unanswered questions about the physical processes that determine hurricane genesis, intensity, track and impact on the large-scale environment. Furthermore, a significant amount of work remains to be done in validating hurricane forecast models, understanding their sensitivities and improving their parameterizations. None of this can be accomplished without a comprehensive set of multiparameter observations that are relevant to both the large-scale and the storm-scale processes in the atmosphere and in the ocean. To address this need, we have developed a prototype of a comprehensive hurricane information system of high-resolution satellite, airborne and in-situ observations and model outputs pertaining to: i) the thermodynamic and microphysical structure of the storms; ii) the air-sea interaction processes; iii) the larger-scale environment as depicted by the SST, ocean heat content and the aerosol loading of the environment. Our goal was to create a one-stop place to provide researchers with an extensive set of observed hurricane data, and their graphical representation, together with large-scale and convection-resolving model output, all organized in a way that makes it easy to determine when coincident observations from multiple instruments are available. Analysis tools will be developed in the next step. The analysis tools will be used to determine spatial, temporal and multiparameter covariances that are needed to evaluate model performance, provide information for data assimilation and characterize and compare observations from different platforms. We envision that the developed hurricane information system will help in the validation of the hurricane models, in the systematic understanding of their sensitivities and in the improvement of the physical parameterizations employed by the models. Furthermore, it will help in studying the physical processes that affect hurricane development and impact on the large-scale environment. This talk will describe the developed prototype of the hurricane information system. Furthermore, we will use a set of WRF hurricane simulations and compare simulated to observed structures to illustrate how the information system can be used to discriminate between simulations that employ different physical parameterizations. The work described here was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

  20. System equivalent model mixing

    NASA Astrophysics Data System (ADS)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

    This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM) frequency based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques; namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.

  1. Observed SWE trends and climate analysis for Northwest Pacific North America: validation for future projection of SWE using the CRCM and VIC

    NASA Astrophysics Data System (ADS)

    Bennett, K. E.; Bronaugh, D.; Rodenhuis, D.

    2008-12-01

    Observational databases of snow water equivalent (SWE) have been collected from Alaska, western US states and the Canadian provinces of British Columbia, Alberta, Saskatchewan, and territories of NWT, and the Yukon. These databases were initially validated to remove inconsistencies and errors in the station records, dates or the geographic co-ordinates of the station. The cleaned data was then analysed for historical (1950 to 2006) trend using emerging techniques for trend detection based on (first of the month) estimates for January to June. Analysis of SWE showed spatial variability in the count of records across the six month time period, and this study illustrated differences between Canadian and US (or the north and south) collection. Two different data sets (one gridded and one station) were then used to analyse April 1st records, for which there was the greatest spatial spread of station records for analysis with climate information. Initial results show spatial variability (in both magnitude and direction of trend) for trend results, and climate correlations and principal components indicate different drivers of change in SWE across the western US, Canada and north to Alaska. These results will be used to validate future predictions of SWE that are being undertaken using the Canadian Regional Climate Model (CRCM) and the Variable Infiltration Capacity (VIC) hydrologic model for Western Northern America (CRCM) and British Columbia (VIC).
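
    One common, robust way to quantify such trends in a single April 1 SWE record is a Mann-Kendall-style test with a Theil-Sen slope, as sketched below on synthetic data; this is offered as a generic illustration, since the abstract does not specify the exact trend-detection technique used.

```python
# Minimal sketch (illustrative only): robust trend estimate for an April 1 SWE record
# using Kendall's tau for significance and the Theil-Sen estimator for slope.
import numpy as np
from scipy.stats import kendalltau, theilslopes

years = np.arange(1950, 2007)
rng = np.random.default_rng(6)
swe = 300.0 - 0.8 * (years - 1950) + rng.normal(0, 40, years.size)   # synthetic SWE (mm)

tau, p_value = kendalltau(years, swe)
slope, intercept, lo, hi = theilslopes(swe, years)
print(f"Kendall tau={tau:.2f}, p={p_value:.3f}, Theil-Sen slope={slope:.2f} mm/yr [{lo:.2f}, {hi:.2f}]")
```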

  2. Magnetically coupled flextensional transducer for wideband vibration energy harvesting: Design, modeling and experiments

    NASA Astrophysics Data System (ADS)

    Zou, Hong-Xiang; Zhang, Wen-Ming; Li, Wen-Bo; Wei, Ke-Xiang; Hu, Kai-Ming; Peng, Zhi-Ke; Meng, Guang

    2018-03-01

    The combination of nonlinear bistable and flextensional mechanisms has the advantages of wide operating frequency and high equivalent piezoelectric constant. In this paper, three magnetically coupled flextensional vibration energy harvesters (MF-VEHs) are designed from three magnetically coupled vibration systems which utilize a magnetic repulsion, two symmetrical magnetic attractions and multi-magnetic repulsions, respectively. The coupled dynamic models are developed to describe the electromechanical transitions. Simulations under harmonic excitation and random excitation are carried out to investigate the performance of the MF-VEHs with different parameters. Experimental validations of the MF-VEHs are performed under different excitation levels. The experimental results verify that the developed mathematical models can be used to accurately characterize the MF-VEHs for various magnetic coupling modes. A comparison of three MF-VEHs is provided and the results illustrate that a reasonable arrangement of multiple magnets can reduce the threshold excitation intensity and increase the harvested energy.
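
    A lumped bistable (Duffing-type) piezoelectric harvester with linear electromechanical coupling is a common simplified surrogate for magnetically coupled designs; the sketch below integrates such a model and reports the mean power in the load. Parameters are hypothetical and are not those of the MF-VEHs.

```python
# Minimal sketch (illustrative only): lumped bistable piezoelectric harvester under
# harmonic base excitation, with a resistive load. All parameters are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k1, k3 = 0.01, 0.05, -20.0, 1e5     # mass, damping, negative linear + cubic stiffness
theta, Cp, R = 1e-4, 1e-7, 1e5            # coupling (N/V), capacitance (F), load (ohm)
A, w = 2.0, 30.0                          # base acceleration amplitude (m/s^2), frequency (rad/s)

def rhs(t, y):
    x, xdot, v = y
    force = m * A * np.cos(w * t)
    xddot = (force - c * xdot - k1 * x - k3 * x**3 - theta * v) / m
    vdot = (theta * xdot - v / R) / Cp    # electrical circuit equation
    return [xdot, xddot, vdot]

sol = solve_ivp(rhs, (0, 20), [0.01, 0.0, 0.0], max_step=1e-3)
power = np.mean(sol.y[2] ** 2 / R)        # mean power dissipated in the load
print(f"mean harvested power: {power * 1e6:.1f} microwatts")
```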

  3. Robust Control of a Cable-Driven Soft Exoskeleton Joint for Intrinsic Human-Robot Interaction.

    PubMed

    Jarrett, C; McDaid, A J

    2017-07-01

    A novel, cable-driven soft joint is presented for use in robotic rehabilitation exoskeletons to provide intrinsic, comfortable human-robot interaction. The torque-displacement characteristics of the soft elastomeric core contained within the joint are modeled. This knowledge is used in conjunction with a dynamic system model to derive a sliding mode controller (SMC) to implement low-level torque control of the joint. The SMC controller is experimentally compared with a baseline feedback-linearised proportional-derivative controller across a range of conditions and shown to be robust to un-modeled disturbances. The torque controller is then tested with six healthy subjects while they perform a selection of activities of daily living, which has validated its range of performance. Finally, a case study with a participant with spastic cerebral palsy is presented to illustrate the potential of both the joint and controller to be used in a physiotherapy setting to assist clinical populations.
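
    To make the control idea concrete, the sketch below applies a basic sliding-mode law with a tanh boundary layer to a toy first-order torque-actuator model with an unmodeled disturbance; it illustrates the robustness argument only and is not the authors' controller or joint model.

```python
# Minimal sketch (illustrative only): sliding-mode torque control of a toy first-order
# actuator with an unmodeled disturbance; tanh replaces sign() to limit chattering.
import numpy as np

T_act = 0.05                      # actuator time constant (s)
k_smc, eps = 8.0, 0.05            # switching gain and boundary-layer width
dt, t_end = 1e-3, 2.0
t = np.arange(0.0, t_end, dt)

tau_des = 1.0 * np.sin(2 * np.pi * t)             # desired torque (N m)
dtau_des = 2 * np.pi * np.cos(2 * np.pi * t)

tau, log = 0.0, np.zeros_like(t)
for i, ti in enumerate(t):
    e = tau - tau_des[i]                                          # sliding variable
    u = tau + T_act * (dtau_des[i] - k_smc * np.tanh(e / eps))    # equivalent + switching term
    disturbance = 0.5 * np.sin(15 * ti)                           # unmodeled disturbance
    tau += dt * ((u - tau) / T_act + disturbance)                 # first-order torque dynamics
    log[i] = e

print(f"max |torque error| after 0.5 s: {np.abs(log[t > 0.5]).max():.3f} N m")
```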

  4. Scheduling optimization of design stream line for production research and development projects

    NASA Astrophysics Data System (ADS)

    Liu, Qinming; Geng, Xiuli; Dong, Ming; Lv, Wenyuan; Ye, Chunming

    2017-05-01

    In a development project, efficient design stream line scheduling is difficult and important owing to large design imprecision and the differences in the skills and skill levels of employees. The relative skill levels of employees are denoted as fuzzy numbers. Multiple execution modes are generated by scheduling different employees for design tasks. An optimization model of a design stream line scheduling problem is proposed with the constraints of multiple execution modes, multi-skilled employees and precedence. The model considers the parallel design of multiple projects, different skills of employees, flexible multi-skilled employees and resource constraints. The objective function is to minimize the duration and tardiness of the project. A two-dimensional particle swarm optimization algorithm is then used to find the optimal solution. To illustrate the validity of the proposed method, a case is examined in this article, and the results support the feasibility and effectiveness of the proposed model and algorithm.
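
    The sketch below shows a generic continuous particle swarm optimization loop; in the paper the particles would encode employee/mode assignments and be decoded into schedules, so the placeholder cost function here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(x):
    # Placeholder for "decode particle -> schedule -> duration + tardiness";
    # a simple multimodal surrogate so the PSO loop has something to minimize.
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

n_particles, dim, iters = 30, 8, 200
w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration coefficients
x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    costs = np.array([cost(p) for p in x])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best cost found:", pbest_cost.min())
```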

  5. A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation.

    PubMed

    Breton, S-P; Sumner, J; Sørensen, J N; Hansen, K S; Sarmast, S; Ivanell, S

    2017-04-13

    Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple wake situations is also supplied. Some typical results for wind turbine and wind farm flows are presented to illustrate best practices for carrying out high-fidelity LES of wind farms under various atmospheric and terrain conditions. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).

  6. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    PubMed Central

    Jin, Junchen

    2016-01-01

    The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality. PMID:27436998

  7. A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation

    PubMed Central

    Sumner, J.; Sørensen, J. N.; Hansen, K. S.; Sarmast, S.; Ivanell, S.

    2017-01-01

    Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple wake situations is also supplied. Some typical results for wind turbine and wind farm flows are presented to illustrate best practices for carrying out high-fidelity LES of wind farms under various atmospheric and terrain conditions. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265021

  8. Sojourning with the Homogeneous Poisson Process.

    PubMed

    Liu, Piaomu; Peña, Edsel A

    2016-01-01

    In this pedagogical article, distributional properties, some surprising, pertaining to the homogeneous Poisson process (HPP), when observed over a possibly random window, are presented. Properties of the gap-time that covered the termination time and the correlations among gap-times of the observed events are obtained. Inference procedures, such as estimation and model validation, based on event occurrence data over the observation window, are also presented. We envision that through the results in this paper, a better appreciation of the subtleties involved in the modeling and analysis of recurrent events data will ensue, since the HPP is arguably one of the simplest among recurrent event models. In addition, the use of the theorem of total probability, Bayes theorem, the iterated rules of expectation, variance and covariance, and the renewal equation could be illustrative when teaching distribution theory, mathematical statistics, and stochastic processes at both the undergraduate and graduate levels. This article is targeted towards both instructors and students.
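
    A small simulation sketch, assuming a fixed observation window and no ties to the paper's notation, illustrates two of the points mentioned: rate estimation from event counts and the length-biased gap covering the termination time.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, tau, reps = 2.0, 10.0, 20000

covering_gaps, counts = [], []
for _ in range(reps):
    # Event times of an HPP(lam) on [0, tau]: cumulative sums of Exp(lam) gap times
    times = np.cumsum(rng.exponential(1 / lam, size=int(5 * lam * tau)))
    n = np.searchsorted(times, tau)              # N(tau): events observed in the window
    counts.append(n)
    # Gap that covers the termination time tau (length-biased by the inspection paradox)
    prev = times[n - 1] if n > 0 else 0.0
    covering_gaps.append(times[n] - prev)

print("lambda-hat =", np.mean(counts) / tau)            # close to lam
print("mean covering gap =", np.mean(covering_gaps))    # roughly 2/lam, not 1/lam
```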

  9. LMI-based stability analysis of fuzzy-model-based control systems using approximated polynomial membership functions.

    PubMed

    Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles

    2011-06-01

    Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.
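
    To show the LMI machinery in the simplest possible setting (a plain quadratic Lyapunov condition for one linear subsystem, not the paper's relaxed membership-function-dependent conditions), a cvxpy sketch is given below; it assumes cvxpy with its bundled SDP solver is available.

```python
import numpy as np
import cvxpy as cp

# Basic Lyapunov LMI for x' = A x: find P = P^T > 0 with A^T P + P A < 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # a stable example subsystem

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),
               A.T @ P + P @ A << -eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

print(prob.status)      # feasibility certifies quadratic stability of this subsystem
print(P.value)
```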

  10. State-Space System Realization with Input- and Output-Data Correlation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    1997-01-01

    This paper introduces a general version of the information matrix consisting of the autocorrelation and cross-correlation matrices of the shifted input and output data. Based on the concept of data correlation, a new system realization algorithm is developed to create a model directly from input and output data. The algorithm starts by computing a special type of correlation matrix derived from the information matrix. The special correlation matrix provides information on the system-observability matrix and the state-vector correlation. A system model is then developed from the observability matrix in conjunction with other algebraic manipulations. This approach leads to several different algorithms for computing system matrices for use in representing the system model. The relationship of the new algorithms with other realization algorithms in the time and frequency domains is established with matrix factorization of the information matrix. Several examples are given to illustrate the validity and usefulness of these new algorithms.
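
    As a hedged, simplified companion to the correlation-based algorithms described here, the sketch below implements the classical Eigensystem Realization Algorithm from Markov parameters of a synthetic system; the correlation-based variants replace the pulse-response Hankel matrices with matrices built from input/output correlations.

```python
import numpy as np

# Classical ERA from impulse-response (Markov) parameters of a synthetic SISO system.
A_true = np.array([[0.9, 0.2], [0.0, 0.8]])
B_true = np.array([[1.0], [0.5]])
C_true = np.array([[1.0, 0.0]])
h = [C_true @ np.linalg.matrix_power(A_true, k) @ B_true for k in range(40)]
h = np.array(h).squeeze()                         # Markov parameters h_k = C A^k B

r, s, n = 10, 10, 2                               # Hankel block sizes, model order
H0 = np.array([[h[i + j] for j in range(s)] for i in range(r)])
H1 = np.array([[h[i + j + 1] for j in range(s)] for i in range(r)])

U, S, Vt = np.linalg.svd(H0)
Un, Sn, Vn = U[:, :n], np.diag(S[:n]), Vt[:n, :].T
Sn_half_inv = np.diag(1.0 / np.sqrt(S[:n]))
A_id = Sn_half_inv @ Un.T @ H1 @ Vn @ Sn_half_inv   # identified state matrix
B_id = (np.sqrt(Sn) @ Vn.T)[:, :1]                  # identified input matrix
C_id = (Un @ np.sqrt(Sn))[:1, :]                    # identified output matrix

print("true eigenvalues:      ", np.linalg.eigvals(A_true))
print("identified eigenvalues:", np.linalg.eigvals(A_id))
```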

  11. Statistical Methodologies to Integrate Experimental and Computational Research

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.

  12. Score tests for independence in semiparametric competing risks models.

    PubMed

    Saïd, Mériem; Ghazzali, Nadia; Rivest, Louis-Paul

    2009-12-01

    A popular model for competing risks postulates the existence of a latent unobserved failure time for each risk. Assuming that these underlying failure times are independent is attractive since it allows standard statistical tools for right-censored lifetime data to be used in the analysis. This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions. It assumes that covariates are available, making the model identifiable. The score tests are derived for alternatives that specify that copulas are responsible for a possible dependency between the competing risks. The test statistics are constructed by adding to the partial likelihoods for the individual risks an explanatory variable for the dependency between the risks. A variance estimator is derived by writing the score function and the Fisher information matrix for the marginal models as stochastic integrals. Pitman efficiencies are used to compare test statistics. A simulation study and a numerical example illustrate the methodology proposed in this paper.

  13. Effect of Heterogeneous Interest Similarity on the Spread of Information in Mobile Social Networks

    NASA Astrophysics Data System (ADS)

    Zhao, Narisa; Sui, Guoqin; Yang, Fan

    2018-06-01

    Mobile social networks (MSNs) are important platforms for spreading news. The fact that individuals usually forward information aligned with their own interests inevitably changes the dynamics of information spread. We therefore first present a theoretical model, based on a discrete Markov chain and mean-field theory, to evaluate the effect of interest similarity on information spread in MSNs. Individuals' interests are also heterogeneous and vary with time; these two features result in interest-shift behavior, and both are considered in our model. A simulation study demonstrates the accuracy of our model. Moreover, the basic reproduction number R0 is determined. Further extensive numerical analyses based on the model indicate that interest similarity has a critical impact on information spread at the early spreading stage. Specifically, the information always spreads more quickly and widely if the interest similarity between an individual and the information is higher. Finally, five actual data sets from Sina Weibo illustrate the validity of the model.
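
    The sketch below is a toy discrete-time mean-field iteration in which the forwarding probability is scaled by an interest-similarity factor; the compartment structure, parameters and similarity classes are illustrative assumptions, not the authors' Markov-chain model.

```python
import numpy as np

# Toy mean-field SIR-like spread where the forwarding probability is scaled by
# the interest similarity between an individual and the information item.
beta0, gamma, k_mean, T = 0.3, 0.2, 8, 60       # base rate, recovery, mean degree, steps
similarity = np.linspace(0.1, 1.0, 10)          # 10 interest classes
class_frac = np.full(10, 0.1)                   # fraction of users in each class

S = np.full(10, 0.999)                          # susceptible fraction per class
I = np.full(10, 0.001)                          # spreading fraction per class
R = np.zeros(10)                                # stifler/recovered fraction per class
for _ in range(T):
    i_total = np.sum(class_frac * I)            # overall spreading density
    lam = 1 - (1 - beta0 * similarity * i_total) ** k_mean   # per-class infection prob.
    new_inf = lam * S
    S, I, R = S - new_inf, I + new_inf - gamma * I, R + gamma * I

print("final informed fraction per class:", np.round(I + R, 3))
```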

  14. A long-term validation of the modernised DC-ARC-OES solid-sample method.

    PubMed

    Flórián, K; Hassler, J; Förster, O

    2001-12-01

    A validation procedure based on the ISO 17025 standard has been used to study and illustrate both the long-term stability of the calibration process of the DC-ARC solid-sample spectrometric method and the main validation criteria of the method. When calculating the validation characteristics that depend on linearity (calibration), the fulfilment of prerequisite criteria such as normality and homoscedasticity was also checked. To decide whether there are any trends in the time variation of the analytical signal, the Neumann test of trend was also applied and evaluated. Finally, a comparison with similar validation data for the ETV-ICP-OES method was carried out.

  15. Validation of alternative methods for toxicity testing.

    PubMed Central

    Bruner, L H; Carr, G J; Curren, R D; Chamberlain, M

    1998-01-01

    Before nonanimal toxicity tests may be officially accepted by regulatory agencies, it is generally agreed that the validity of the new methods must be demonstrated in an independent, scientifically sound validation program. Validation has been defined as the demonstration of the reliability and relevance of a test method for a particular purpose. This paper provides a brief review of the development of the theoretical aspects of the validation process and updates current thinking about objectively testing the performance of an alternative method in a validation study. Validation of alternative methods for eye irritation testing is a specific example illustrating important concepts. Although discussion focuses on the validation of alternative methods intended to replace current in vivo toxicity tests, the procedures can be used to assess the performance of alternative methods intended for other uses. PMID:9599695

  16. Why does trigonometric substitution work?

    NASA Astrophysics Data System (ADS)

    Cunningham, Daniel W.

    2018-05-01

    Modern calculus textbooks carefully illustrate how to perform integration by trigonometric substitution. Unfortunately, most of these books do not adequately justify this powerful technique of integration. In this article, we present an accessible proof that establishes the validity of integration by trigonometric substitution. The proof offers calculus instructors a simple argument that can be used to show their students that trigonometric substitution is a valid technique of integration.
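
    A worked instance of the justification in question (an illustration, not the article's proof): the substitution x = a sin θ is restricted to θ ∈ [−π/2, π/2], where it is a bijection with cos θ ≥ 0, so it can be inverted when returning to x.

```latex
% For |x| \le a with a > 0, substitute x = a\sin\theta, \theta \in [-\pi/2, \pi/2],
% so that \theta = \arcsin(x/a), dx = a\cos\theta\,d\theta, and \cos\theta \ge 0.
\int \sqrt{a^2 - x^2}\,dx
  = \int a\cos\theta \cdot a\cos\theta\,d\theta
  = a^2 \int \cos^2\theta\,d\theta
  = \frac{a^2}{2}\bigl(\theta + \sin\theta\cos\theta\bigr) + C
  = \frac{a^2}{2}\arcsin\frac{x}{a} + \frac{x\sqrt{a^2 - x^2}}{2} + C.
```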

  17. Advances and challenges in logical modeling of cell cycle regulation: perspective for multi-scale, integrative yeast cell models

    PubMed Central

    Todd, Robert G.; van der Zee, Lucas

    2016-01-01

    The eukaryotic cell cycle is robustly designed, with interacting molecules organized within a definite topology that ensures temporal precision of its phase transitions. Its underlying dynamics are regulated by molecular switches, for which remarkable insights have been provided by genetic and molecular biology efforts. In a number of cases, this information has been made predictive through computational models. These models have allowed for the identification of novel molecular mechanisms, later validated experimentally. Logical modeling represents one of the youngest approaches to address cell cycle regulation. We summarize the advances that this type of modeling has achieved to reproduce and predict cell cycle dynamics. Furthermore, we present the challenge that this type of modeling is now ready to tackle: its integration with intracellular networks and their formalisms, to understand the crosstalk underlying systems-level properties, the ultimate aim of multi-scale models. Specifically, we discuss and illustrate how such an integration may be realized by integrating a minimal logical model of the cell cycle with a metabolic network. PMID:27993914
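
    As a minimal illustration of the logical (Boolean) formalism discussed, the sketch below runs a synchronous update of a three-node toy network until an attractor is found; the node names and rules are made up and are not the yeast cell-cycle model.

```python
# Toy synchronous Boolean network (rules are illustrative, not a published cell-cycle model).
rules = {
    "CycB":  lambda s: s["Start"] and not s["APC"],
    "APC":   lambda s: s["CycB"],
    "Start": lambda s: not s["CycB"],
}

def step(state):
    """One synchronous update: every node reads the previous state."""
    return {node: f(state) for node, f in rules.items()}

def attractor(state, max_steps=50):
    """Iterate until a previously seen state recurs; return the fixed point or limit cycle."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            return seen[seen.index(state):]
        seen.append(state)
        state = step(state)
    return None

print(attractor({"CycB": False, "APC": False, "Start": True}))
```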

  18. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
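
    The standard Monte Carlo estimator behind CPO and LPML can be sketched in a few lines: the CPO for a site is the harmonic mean of that site's likelihood over posterior samples, and LPML is the sum of log CPOs. The likelihood matrix below is simulated; in practice it would come from per-site likelihoods saved during the MCMC run.

```python
import numpy as np

rng = np.random.default_rng(3)
# lik[s, i] = likelihood of site i under posterior sample s (simulated stand-in here).
n_samples, n_sites = 2000, 100
lik = np.exp(rng.normal(loc=-2.0, scale=0.3, size=(n_samples, n_sites)))

# CPO_i = harmonic mean over posterior samples of the site-i likelihood
cpo = 1.0 / np.mean(1.0 / lik, axis=0)
lpml = np.sum(np.log(cpo))              # overall measure of predictive fit

print("first five CPOs:", np.round(cpo[:5], 4))
print("LPML:", round(lpml, 2))
```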

  19. Statistical modeling of natural backgrounds in hyperspectral LWIR data

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph

    2016-09-01

    Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications including remote material identification and rare target detection. While statistical models for modeling surface reflectance in visible and near-infrared regimes have been well studied, models for the temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data, and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR, may be difficult to separate when imaged in the LWIR.

  20. Numerical simulations of LNG vapor dispersion in Brayton Fire Training Field tests with ANSYS CFX.

    PubMed

    Qi, Ruifeng; Ng, Dedy; Cormier, Benjamin R; Mannan, M Sam

    2010-11-15

    Federal safety regulations require the use of validated consequence models to determine the vapor cloud dispersion exclusion zones for accidental liquefied natural gas (LNG) releases. One tool that is being developed in industry for exclusion zone determination and LNG vapor dispersion modeling is computational fluid dynamics (CFD). This paper uses the ANSYS CFX CFD code to model LNG vapor dispersion in the atmosphere. Discussed are important parameters that are essential inputs to the ANSYS CFX simulations, including the atmospheric conditions, LNG evaporation rate and pool area, turbulence in the source term, ground surface temperature and roughness height, and effects of obstacles. A sensitivity analysis was conducted to illustrate uncertainties in the simulation results arising from the mesh size and source term turbulence intensity. In addition, a set of medium-scale LNG spill tests were performed at the Brayton Fire Training Field to collect data for validating the ANSYS CFX prediction results. A comparison of test data with simulation results demonstrated that CFX was able to describe the dense gas behavior of LNG vapor cloud, and its prediction results of downwind gas concentrations close to ground level were in approximate agreement with the test data. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    PubMed

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite-sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancer. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
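
    Since MCP is not available in scikit-learn, the sketch below uses L1-penalized logistic regression as a stand-in and selects the tuning parameter by cross-validated AUC, which is the CV-AUC idea in miniature; the data are simulated.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# L1-penalized logistic regression as a stand-in for MCP; the tuning parameter C
# is chosen by maximizing the cross-validated AUC (the CV-AUC criterion).
X, y = make_classification(n_samples=200, n_features=500, n_informative=10,
                           random_state=0)

best_c, best_auc = None, -np.inf
for c in np.logspace(-2, 1, 10):
    model = LogisticRegression(penalty="l1", solver="liblinear", C=c)
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    if auc > best_auc:
        best_c, best_auc = c, auc

print(f"selected C = {best_c:.3g}, CV-AUC = {best_auc:.3f}")
```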

  2. A closer look at cross-validation for assessing the accuracy of gene regulatory networks and models.

    PubMed

    Tabe-Bordbar, Shayan; Emad, Amin; Zhao, Sihai Dave; Sinha, Saurabh

    2018-04-26

    Cross-validation (CV) is a technique to assess the generalizability of a model to unseen data. This technique relies on assumptions that may not be satisfied when studying genomics datasets. For example, random CV (RCV) assumes that a randomly selected set of samples, the test set, well represents unseen data. This assumption doesn't hold true where samples are obtained from different experimental conditions, and the goal is to learn regulatory relationships among the genes that generalize beyond the observed conditions. In this study, we investigated how the CV procedure affects the assessment of supervised learning methods used to learn gene regulatory networks (or in other applications). We compared the performance of a regression-based method for gene expression prediction estimated using RCV with that estimated using a clustering-based CV (CCV) procedure. Our analysis illustrates that RCV can produce over-optimistic estimates of the model's generalizability compared to CCV. Next, we defined the 'distinctness' of test set from training set and showed that this measure is predictive of performance of the regression method. Finally, we introduced a simulated annealing method to construct partitions with gradually increasing distinctness and showed that performance of different gene expression prediction methods can be better evaluated using this method.
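
    A minimal contrast between random CV and a clustering-based CV can be sketched with scikit-learn by forming groups with k-means on the features (a rough proxy for experimental condition) and holding out whole groups via GroupKFold; the data, model and clustering choice are illustrative assumptions, not the paper's simulated-annealing partitions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n, p = 300, 50
# Six blocks of samples with shifted means stand in for six "experimental conditions".
X = rng.normal(size=(n, p)) + np.repeat(rng.normal(size=(6, p)), 50, axis=0)
y = X[:, 0] * 2.0 + rng.normal(size=n)

model = Ridge(alpha=1.0)

# Random CV: test folds mix all conditions.
rcv = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0)).mean()

# Clustering-based CV: cluster samples, then hold out whole clusters.
groups = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
ccv = cross_val_score(model, X, y, groups=groups, cv=GroupKFold(n_splits=5)).mean()

print(f"random CV R^2: {rcv:.3f}   clustering-based CV R^2: {ccv:.3f}")
```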

  3. Control Relevant Modeling and Design of Scramjet-Powered Hypersonic Vehicles

    NASA Astrophysics Data System (ADS)

    Dickeson, Jeffrey James

    This report provides an overview of scramjet-powered hypersonic vehicle modeling and control challenges. Such vehicles are characterized by unstable non-minimum phase dynamics with significant coupling and low thrust margins. Recent trends in hypersonic vehicle research are summarized. To illustrate control-relevant design issues and tradeoffs, a generic nonlinear 3DOF longitudinal dynamics model capturing aero-elastic-propulsive interactions for a wedge-shaped vehicle is used. Limitations of the model are discussed and numerous modifications have been made to address control-relevant needs. Two different baseline configurations are examined over a two-stage-to-orbit ascent trajectory. The report highlights how vehicle level-flight static (trim) and dynamic properties change over the trajectory. Thermal choking constraints are imposed on control system design as a direct consequence of having a finite FER margin. The implications of this state-dependent nonlinear FER margin constraint, the right-half-plane (RHP) zero, and lightly damped flexible modes for control system bandwidth (BW) and FPA tracking are discussed. A control methodology is proposed that addresses the above dynamics while providing some robustness to modeling uncertainty. Vehicle closure (the ability to fly a trajectory segment subject to constraints) is provided through a proposed vehicle design methodology. The design method attempts to use open loop metrics whenever possible to design the vehicle. The design method is applied to a vehicle/control law closed loop nonlinear simulation for validation. The 3DOF longitudinal modeling results are validated against a newly released NASA 6DOF code.

  4. Chikungunya Virus: In Vitro Response to Combination Therapy With Ribavirin and Interferon Alfa 2a.

    PubMed

    Gallegos, Karen M; Drusano, George L; D Argenio, David Z; Brown, Ashley N

    2016-10-15

    We evaluated the antiviral activities of ribavirin (RBV) and interferon (IFN) alfa as monotherapy and combination therapy against chikungunya virus (CHIKV). Vero cells were infected with CHIKV in the presence of RBV and/or IFN alfa, and viral production was quantified by plaque assay. A mathematical model was fit to the data to identify drug interactions for effect. We ran simulations using the best-fit model parameters to predict the antiviral activity associated with clinically relevant regimens of RBV and IFN alfa as combination therapy. The model predictions were validated using the hollow fiber infection model (HFIM) system. RBV and IFN alfa were effective against CHIKV as monotherapy at supraphysiological concentrations. However, RBV and IFN alfa were highly synergistic for antiviral effect when administered as combination therapy. Simulations with our mathematical model predicted that a standard clinical regimen of RBV plus IFN alfa would inhibit CHIKV burden by 2.5 log10 following 24 hours of treatment. In the HFIM system, RBV plus IFN alfa at clinical exposures resulted in a 2.1-log10 decrease in the CHIKV burden following 24 hours of therapy. These findings validate the prediction made by the mathematical model. These studies illustrate the promise of RBV plus IFN alfa as a potential therapeutic strategy for the treatment of CHIKV infections. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
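
    As a hedged aside, one simple reference model for judging synergy between two inhibitors is Bliss independence; the sketch below is not the interaction model fitted in the paper, and the numbers are illustrative.

```python
# Bliss independence: for fractional inhibitions fA and fB measured alone, the
# expected combined inhibition under "no interaction" is fA + fB - fA*fB.
# Observed inhibition above that expectation suggests synergy.
def bliss_expected(fa, fb):
    return fa + fb - fa * fb

fa, fb = 0.40, 0.35          # illustrative single-drug inhibition fractions
observed = 0.85              # illustrative combined inhibition
expected = bliss_expected(fa, fb)
print(f"expected {expected:.2f} vs observed {observed:.2f} -> "
      f"{'synergy' if observed > expected else 'no synergy'}")
```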

  5. Environmental Predictors of US County Mortality Patterns on a National Basis.

    PubMed

    Chan, Melissa P L; Weinhold, Robert S; Thomas, Reuben; Gohlke, Julia M; Portier, Christopher J

    2015-01-01

    A growing body of evidence has found that mortality rates are positively correlated with social inequalities, air pollution, elevated ambient temperature, availability of medical care and other factors. This study develops a model to predict the mortality rates for different diseases by county across the US. The model is applied to predict changes in mortality caused by changing environmental factors. A total of 3,110 counties in the US, excluding Alaska and Hawaii, were studied. A subset of 519 counties from the 3,110 counties was chosen using systematic random sampling, and these samples were used to validate the model. Step-wise and linear regression analyses were used to estimate the ability of environmental pollutants, socio-economic factors and other factors to explain variations in county-specific mortality rates for cardiovascular diseases, cancers, chronic obstructive pulmonary disease (COPD), all causes combined and lifespan across five population density groups. The estimated models fit adequately for all mortality outcomes for all population density groups and adequately predicted risks for the 519 validation counties. This study suggests that, at local county levels, average ozone (0.07 ppm) is the most important environmental predictor of mortality. The analysis also illustrates the complex inter-relationships of multiple factors that influence mortality and lifespan, and suggests the need for a better understanding of the pathways through which these factors, mortality, and lifespan are related at the community level.

  6. Environmental Predictors of US County Mortality Patterns on a National Basis

    PubMed Central

    Thomas, Reuben; Gohlke, Julia M.; Portier, Christopher J.

    2015-01-01

    A growing body of evidence has found that mortality rates are positively correlated with social inequalities, air pollution, elevated ambient temperature, availability of medical care and other factors. This study develops a model to predict the mortality rates for different diseases by county across the US. The model is applied to predict changes in mortality caused by changing environmental factors. A total of 3,110 counties in the US, excluding Alaska and Hawaii, were studied. A subset of 519 counties from the 3,110 counties was chosen using systematic random sampling, and these samples were used to validate the model. Step-wise and linear regression analyses were used to estimate the ability of environmental pollutants, socio-economic factors and other factors to explain variations in county-specific mortality rates for cardiovascular diseases, cancers, chronic obstructive pulmonary disease (COPD), all causes combined and lifespan across five population density groups. The estimated models fit adequately for all mortality outcomes for all population density groups and adequately predicted risks for the 519 validation counties. This study suggests that, at local county levels, average ozone (0.07 ppm) is the most important environmental predictor of mortality. The analysis also illustrates the complex inter-relationships of multiple factors that influence mortality and lifespan, and suggests the need for a better understanding of the pathways through which these factors, mortality, and lifespan are related at the community level. PMID:26629706

  7. Design and validation of an ontology-driven animal-free testing strategy for developmental neurotoxicity testing.

    PubMed

    Hessel, Ellen V S; Staal, Yvonne C M; Piersma, Aldert H

    2018-03-13

    Developmental neurotoxicity entails one of the most complex areas in toxicology. Animal studies provide only limited information as to human relevance. A multitude of alternative models have been developed over the years, providing insights into mechanisms of action. We give an overview of fundamental processes in neural tube formation, brain development and neural specification, aiming at illustrating complexity rather than comprehensiveness. We also give a flavor of the wealth of alternative methods in this area. Given the impressive progress in mechanistic knowledge of human biology and toxicology, the time is right for a conceptual approach for designing testing strategies that cover the integral mechanistic landscape of developmental neurotoxicity. The ontology approach provides a framework for defining this landscape, upon which an integral in silico model for predicting toxicity can be built. It subsequently directs the selection of in vitro assays for rate-limiting events in the biological network, to feed parameter tuning in the model, leading to prediction of the toxicological outcome. Validation of such models requires primary attention to coverage of the biological domain, rather than classical predictive value of individual tests. Proofs of concept for such an approach are already available. The challenge is in mining modern biology, toxicology and chemical information to feed intelligent designs, which will define testing strategies for neurodevelopmental toxicity testing. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and it is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop them on a single dataset whose particularities could bias the development. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, ranging from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and present numerous different occluding objects, for example desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.
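
    A plane-extraction step of the kind described can be sketched with Open3D's RANSAC plane fit (assuming a recent Open3D release); the file name, thresholds and loop bound are hypothetical, and this is not the authors' algorithm.

```python
import open3d as o3d

# RANSAC plane segmentation with Open3D as a stand-in for the plane-extraction step.
pcd = o3d.io.read_point_cloud("room_scan.ply")        # hypothetical indoor scan

planes = []
rest = pcd
for _ in range(6):                                    # peel off up to 6 dominant planes
    model, inliers = rest.segment_plane(distance_threshold=0.02,
                                        ransac_n=3,
                                        num_iterations=1000)
    planes.append((model, rest.select_by_index(inliers)))
    rest = rest.select_by_index(inliers, invert=True)
    if len(rest.points) < 1000:
        break

for i, (model, plane) in enumerate(planes):
    a, b, c, d = model
    print(f"plane {i}: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0, "
          f"{len(plane.points)} points")
```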

  9. Coupling of Bayesian Networks with GIS for wildfire risk assessment on natural and agricultural areas of the Mediterranean

    NASA Astrophysics Data System (ADS)

    Scherb, Anke; Papakosta, Panagiota; Straub, Daniel

    2014-05-01

    Wildfires cause severe damage to ecosystems, socio-economic assets, and human lives in the Mediterranean. To facilitate coping with wildfire risks, an understanding of the factors influencing wildfire occurrence and behavior (e.g. human activity, weather conditions, topography, fuel loads) and their interaction is of importance, as is the implementation of this knowledge in improved wildfire hazard and risk prediction systems. In this project, a probabilistic wildfire risk prediction model is developed, with integrated fire occurrence and fire propagation probability and potential impact prediction on natural and cultivated areas. Bayesian Networks (BNs) are used to facilitate the probabilistic modeling. The final BN model is a spatial-temporal prediction system at the meso scale (1 km² spatial and 1 day temporal resolution). The modeled consequences account for potential restoration costs and production losses for forests, agriculture, and (semi-)natural areas. BNs and a geographic information system (GIS) are coupled within this project to support semi-automated BN model parameter learning and the spatial-temporal risk prediction. The coupling also enables the visualization of prediction results by means of daily maps. The BN parameters are learnt for Cyprus with data from 2006-2009. Data from 2010 are used as the validation data set. A special focus is put on the performance evaluation of the BN for fire occurrence, which is modeled as a binary classifier and thus can be validated by means of receiver operating characteristic (ROC) curves. With the final best models, validation AUC values of more than 70% were achieved, which indicates the potential for reliable prediction performance with BNs. Maps of selected days in 2010 are shown to illustrate final prediction results. The resulting system can be easily expanded to predict additional expected damages at the mesoscale (e.g. building and infrastructure damages). The system can support planning of preventive measures (e.g. state resources allocation for wildfire prevention and preparedness) and assist recuperation plans of damaged areas.
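
    The hold-out ROC validation described can be sketched as follows, with Gaussian naive Bayes standing in for the learned fire-occurrence BN and simulated covariates standing in for the Cyprus data; everything in the sketch is an illustrative assumption.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(4)
# Simulated daily cell records: [temperature anomaly, fuel dryness, human activity index]
X = rng.normal(size=(5000, 3))
p_fire = 1 / (1 + np.exp(-(-4 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2])))
y = rng.binomial(1, p_fire)                            # binary fire occurrence

# Temporal split: earlier records for training, later records for validation.
X_tr, y_tr, X_va, y_va = X[:4000], y[:4000], X[4000:], y[4000:]

clf = GaussianNB().fit(X_tr, y_tr)                     # simple probabilistic stand-in
proba = clf.predict_proba(X_va)[:, 1]
fpr, tpr, _ = roc_curve(y_va, proba)                   # points of the ROC curve
print("validation AUC:", round(roc_auc_score(y_va, proba), 3))
```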

  10. A Formal Investigation of Human Spatial Control Skills: Mathematical Formalization, Skill Development, and Skill Assessment

    NASA Astrophysics Data System (ADS)

    Li, Bin

    Spatial control behaviors account for a large proportion of human everyday activities, from normal daily tasks, such as reaching for objects, to specialized tasks, such as driving, surgery, or operating equipment. These behaviors involve intensive interactions within internal processes (i.e. cognitive, perceptual, and motor control) and with the physical world. This dissertation builds on a concept of interaction pattern and a hierarchical functional model. An interaction pattern represents a type of behavioral synergy by which humans coordinate cognitive, perceptual, and motor control processes. It contributes to the construction of the hierarchical functional model that delineates human spatial control behaviors as the coordination of three functional subsystems: planning, guidance, and tracking/pursuit. This dissertation formalizes and validates these two theories and extends them for the investigation of human spatial control skills, encompassing development and assessment. Specifically, this dissertation first presents an overview of studies in human spatial control skills, covering definition, characteristics, development, and assessment, to provide theoretical evidence for the concept of interaction pattern and the hierarchical functional model. Next, the human experiments for collecting motion and gaze data, and the techniques used to register and classify gaze data, are described. This dissertation then elaborates and mathematically formalizes the hierarchical functional model and the concept of interaction pattern. These theories then enable the construction of a succinct simulation model that can reproduce a variety of human performance with a minimal set of hypotheses. This validates the hierarchical functional model as a normative framework for interpreting human spatial control behaviors. The dissertation then investigates human skill development and captures the emergence of interaction patterns. The final part of the dissertation applies the hierarchical functional model for skill assessment and introduces techniques to capture interaction patterns both from the top down, using their geometric features, and from the bottom up, using their dynamical characteristics. The validity and generality of the skill assessment is illustrated using two experiments: remote-control flight and laparoscopic surgical training.

  11. Life Modeling and Design Analysis for Ceramic Matrix Composite Materials

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The primary research efforts focused on characterizing and modeling static failure, environmental durability, and creep-rupture behavior of two classes of ceramic matrix composites (CMC), silicon carbide fibers in a silicon carbide matrix (SiC/SiC) and carbon fibers in a silicon carbide matrix (C/SiC). An engineering life prediction model (Probabilistic Residual Strength model) has been developed specifically for CMCs. The model uses residual strength as the damage metric for evaluating remaining life and is posed probabilistically in order to account for the stochastic nature of the material's response. In support of the modeling effort, extensive testing of C/SiC in partial pressures of oxygen has been performed. This includes creep testing, tensile testing, and half-life and residual tensile strength testing. C/SiC is proposed for airframe and propulsion applications in advanced reusable launch vehicles. Figures 1 and 2 illustrate the model's predictive capabilities as well as the manner in which experimental tests are selected to ensure that sufficient data are available to aid in model validation.

  12. Comparing fluid mechanics models with experimental data.

    PubMed Central

    Spedding, G R

    2003-01-01

    The art of modelling the physical world lies in the appropriate simplification and abstraction of the complete problem. In fluid mechanics, the Navier-Stokes equations provide a model that is valid under most circumstances germane to animal locomotion, but the complexity of solutions provides strong incentive for the development of further, more simplified practical models. When the flow organizes itself so that all shearing motions are collected into localized patches, then various mathematical vortex models have been very successful in predicting and furthering the physical understanding of many flows, particularly in aerodynamics. Experimental models have the significant added convenience that the fluid mechanics can be generated by a real fluid, not a model, provided the appropriate dimensionless groups have similar values. Then, analogous problems can be encountered in making intelligible but independent descriptions of the experimental results. Finally, model predictions and experimental results may be compared if, and only if, numerical estimates of the likely variations in the tested quantities are provided. Examples from recent experimental measurements of wakes behind a fixed wing and behind a bird in free flight are used to illustrate these principles. PMID:14561348

  13. A Software Toolkit to Study Systematic Uncertainties of the Physics Models of the Geant4 Simulation Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genser, Krzysztof; Hatcher, Robert; Kelsey, Michael

    The Geant4 simulation toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models rely on measured cross-sections and phenomenological models with physically motivated parameters that are tuned to cover many application domains. To study what uncertainties are associated with the Geant4 physics models, we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. a flexible run-time configurable workflow, comprehensive bookkeeping, and an easy-to-expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented in this paper and illustrated with selected results.

  14. Isotropically etched radial micropore for cell concentration, immobilization, and picodroplet generation.

    PubMed

    Perroud, Thomas D; Meagher, Robert J; Kanouff, Michael P; Renzi, Ronald F; Wu, Meiye; Singh, Anup K; Patel, Kamlesh D

    2009-02-21

    To enable several on-chip cell handling operations in a fused-silica substrate, small shallow micropores are radially embedded in larger deeper microchannels using an adaptation of single-level isotropic wet etching. By varying the distance between features on the photolithographic mask (mask distance), we can precisely control the overlap between two etch fronts and create a zero-thickness semi-elliptical micropore (e.g. 20 µm wide, 6 µm deep). Geometrical models derived from a hemispherical etch front show that micropore width and depth can be expressed as a function of mask distance and etch depth. These models are experimentally validated at different etch depths (25.03 and 29.78 µm) and for different configurations (point-to-point and point-to-edge). Good reproducibility confirms the validity of this approach to fabricate micropores with a desired size. To illustrate the wide range of cell handling operations enabled by micropores, we present three on-chip functionalities: continuous-flow particle concentration, immobilization of single cells, and picoliter droplet generation. (1) Using pressure differentials, particles are concentrated by removing the carrier fluid successively through a series of 44 shunts terminated by 31 µm wide, 5 µm deep micropores. Theoretical values for the concentration factor determined by a flow circuit model in conjunction with finite volume modeling are experimentally validated. (2) Flowing macrophages are individually trapped in 20 µm wide, 6 µm deep micropores by hydrodynamic confinement. The translocation of transcription factor NF-κB into the nucleus upon lipopolysaccharide stimulation is imaged by fluorescence microscopy. (3) Picoliter-sized droplets are generated at a 20 µm wide, 7 µm deep micropore T-junction in an oil stream for the encapsulation of individual E. coli bacteria cells.

  15. Picture Me Safe

    ERIC Educational Resources Information Center

    Irvin, Daniel W.

    1977-01-01

    The validity of well-written articles can be destroyed by poor illustration, especially when the pictures show unsafe practices. The responsibility lies with the author to provide clear printable pictures showing safe working environments and safe practices. (Editor)

  16. Looking beyond general metrics for model comparison - lessons from an international model intercomparison study

    NASA Astrophysics Data System (ADS)

    de Boer-Euser, Tanja; Bouaziz, Laurène; De Niel, Jan; Brauer, Claudia; Dewals, Benjamin; Drogue, Gilles; Fenicia, Fabrizio; Grelier, Benjamin; Nossent, Jiri; Pereira, Fernando; Savenije, Hubert; Thirel, Guillaume; Willems, Patrick

    2017-01-01

    International collaboration between research institutes and universities is a promising way to reach consensus on hydrological model development. Although model comparison studies are very valuable for international cooperation, they often do not lead to very clear new insights regarding the relevance of the modelled processes. We hypothesise that this is partly caused by model complexity and the comparison methods used, which focus too much on a good overall performance instead of focusing on a variety of specific events. In this study, we use an approach that focuses on the evaluation of specific events and characteristics. Eight international research groups calibrated their hourly model on the Ourthe catchment in Belgium and carried out a validation in time for the Ourthe catchment and a validation in space for nested and neighbouring catchments. The same protocol was followed for each model and an ensemble of best-performing parameter sets was selected. Although the models showed similar performances based on general metrics (i.e. the Nash-Sutcliffe efficiency), clear differences could be observed for specific events. We analysed the hydrographs of these specific events and conducted three types of statistical analyses on the entire time series: cumulative discharges, empirical extreme value distribution of the peak flows and flow duration curves for low flows. The results illustrate the relevance of including a very quick flow reservoir preceding the root zone storage to model peaks during low flows and including a slow reservoir in parallel with the fast reservoir to model the recession for the studied catchments. This intercomparison enhanced the understanding of the hydrological functioning of the catchment, in particular for low flows, and made it possible to identify present knowledge gaps for other parts of the hydrograph. Above all, it helped to evaluate each model against a set of alternative models.
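
    For reference, the general metric mentioned above and one of the event-oriented signatures can be computed in a few lines; the discharge series below are simulated, and the implementation is a sketch, not the study's evaluation code.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 5.0, size=24 * 365)                 # hourly discharge, one year
sim = obs * (1 + rng.normal(0, 0.15, size=obs.size))     # a model run with 15% noise

print("NSE:", round(nse(obs, sim), 3))
print("cumulative volume bias [%]:",
      round(100 * (sim.sum() - obs.sum()) / obs.sum(), 2))
```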

  17. Modeling spin magnetization transport in a spatially varying magnetic field

    NASA Astrophysics Data System (ADS)

    Picone, Rico A. R.; Garbini, Joseph L.; Sidles, John A.

    2015-01-01

    We present a framework for modeling the transport of any number of globally conserved quantities in any spatial configuration and apply it to obtain a model of magnetization transport for spin-systems that is valid in new regimes (including high-polarization). The framework allows an entropy function to define a model that explicitly respects the laws of thermodynamics. Three facets of the model are explored. First, it is expressed as nonlinear partial differential equations that are valid for the new regime of high dipole-energy and polarization. Second, the nonlinear model is explored in the limit of low dipole-energy (semi-linear), from which is derived a physical parameter characterizing separative magnetization transport (SMT). It is shown that the necessary and sufficient condition for SMT to occur is that the parameter is spatially inhomogeneous. Third, the high spin-temperature (linear) limit is shown to be equivalent to the model of nuclear spin transport of Genack and Redfield (1975) [1]. Differences among the three forms of the model are illustrated by numerical solution with parameters corresponding to a magnetic resonance force microscopy (MRFM) experiment (Degen et al., 2009 [2]; Kuehn et al., 2008 [3]; Sidles et al., 2003 [4]; Dougherty et al., 2000 [5]). A family of analytic, steady-state solutions to the nonlinear equation is derived and shown to be the spin-temperature analog of the Langevin paramagnetic equation and Curie's law. Finally, we analyze the separative quality of magnetization transport, and a steady-state solution for the magnetization is shown to be compatible with Fenske's separative mass transport equation (Fenske, 1932 [6]).

  18. Multicoordination Control Strategy Performance in Hybrid Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pezzini, Paolo; Bryden, Kenneth M.; Tucker, David

    This paper evaluates a state-space methodology of a multi-input multi-output (MIMO) control strategy using a 2 × 2 tightly coupled scenario applied to a physical gas turbine fuel cell hybrid power system. A centralized MIMO controller was preferred compared to a decentralized control approach because previous simulation studies showed that the coupling effect identified during the simultaneous control of the turbine speed and cathode airflow was better minimized. The MIMO controller was developed using a state-space dynamic model of the system that was derived using first-order transfer functions empirically obtained through experimental tests. The controller performance was evaluated in terms of disturbance rejection through perturbations in the gas turbine operation, and setpoint tracking maneuver through turbine speed and cathode airflow steps. The experimental results illustrate that a multicoordination control strategy was able to mitigate the coupling of each actuator to each output during the simultaneous control of the system, and improved the overall system performance during transient conditions. On the other hand, the controller showed different performance during validation in simulation environment compared to validation in the physical facility, which will require a better dynamic modeling of the system for the implementation of future multivariable control strategies.

  19. Multicoordination Control Strategy Performance in Hybrid Power Systems

    DOE PAGES

    Pezzini, Paolo; Bryden, Kenneth M.; Tucker, David

    2018-04-11

    This paper evaluates a state-space methodology of a multi-input multi-output (MIMO) control strategy using a 2 × 2 tightly coupled scenario applied to a physical gas turbine fuel cell hybrid power system. A centralized MIMO controller was preferred compared to a decentralized control approach because previous simulation studies showed that the coupling effect identified during the simultaneous control of the turbine speed and cathode airflow was better minimized. The MIMO controller was developed using a state-space dynamic model of the system that was derived using first-order transfer functions empirically obtained through experimental tests. The controller performance was evaluated in terms of disturbance rejection through perturbations in the gas turbine operation, and setpoint tracking maneuver through turbine speed and cathode airflow steps. The experimental results illustrate that a multicoordination control strategy was able to mitigate the coupling of each actuator to each output during the simultaneous control of the system, and improved the overall system performance during transient conditions. On the other hand, the controller showed different performance during validation in simulation environment compared to validation in the physical facility, which will require a better dynamic modeling of the system for the implementation of future multivariable control strategies.

  20. [Classification in medicine. An introductory reflection on its aim and object].

    PubMed

    Giere, W

    2007-07-01

    Human beings are born with the ability to recognize Gestalt and to classify. However, all classifications depend on their circumstances and intentions. There is no ultimate classification, and there is no one correct classification in medicine either. Examples of classifications of diagnoses, symptoms and procedures are discussed. The path to gaining knowledge and the basic difference between collecting data (patient file) and sorting data (register) are illustrated using the BAIK information model. Additionally, the model shows how the doctor can profit from the active electronic patient file, which automatically offers other relevant information for the current decision and saves time. "Without classification no new knowledge, no new knowledge through classification": this paradox is eventually resolved, since a change of paradigms requires overcoming the currently valid classification system in medicine as well. Finally, more precise recommendations are given on how doctors can be freed from the burden of classification and how the whole health system can gain much more valid data through the coordinated use of IT, without limiting doctors' freedom and creativity, while saving money at the same time.

  1. WhiteRef: a new tower-based hyperspectral system for continuous reflectance measurements.

    PubMed

    Sakowska, Karolina; Gianelle, Damiano; Zaldei, Alessandro; MacArthur, Alasdair; Carotenuto, Federico; Miglietta, Franco; Zampedri, Roberto; Cavagna, Mauro; Vescovo, Loris

    2015-01-08

    Proximal sensing is fundamental to monitor the spatial and seasonal dynamics of ecosystems and can be considered as a crucial validation tool to upscale in situ observations to the satellite level. Linking hyperspectral remote sensing with carbon fluxes and biophysical parameters is critical to allow the exploitation of spatial and temporal extensive information for validating model simulations at different scales. In this study, we present the WhiteRef, a new hyperspectral system designed as a direct result of the needs identified during the EUROSPEC ES0903 Cost Action, and developed by Fondazione Edmund Mach and the Institute of Biometeorology, CNR, Italy. The system is based on the ASD FieldSpec Pro spectroradiometer and was designed to acquire continuous radiometric measurements at the Eddy Covariance (EC) towers and to fill a gap in the scientific community: in fact, no system for continuous spectral measurements in the Short Wave Infrared was tested before at the EC sites. The paper illustrates the functioning of the WhiteRef and describes its main advantages and disadvantages. The WhiteRef system, being based on a robust and high quality commercially available instrument, has a clear potential for unattended continuous measurements aiming at the validation of satellites' vegetation products.

  2. Text Mining in Organizational Research

    PubMed Central

    Kobayashi, Vladimer B.; Berkers, Hannah A.; Kismihók, Gábor; Den Hartog, Deanne N.

    2017-01-01

    Despite the ubiquity of textual data, so far few researchers have applied text mining to answer organizational research questions. Text mining, which essentially entails a quantitative approach to the analysis of (usually) voluminous textual data, helps accelerate knowledge discovery by radically increasing the amount of data that can be analyzed. This article aims to acquaint organizational researchers with the fundamental logic underpinning text mining, the analytical stages involved, and contemporary techniques that may be used to achieve different types of objectives. The specific analytical techniques reviewed are (a) dimensionality reduction, (b) distance and similarity computing, (c) clustering, (d) topic modeling, and (e) classification. We describe how text mining may extend contemporary organizational research by allowing the testing of existing or new research questions with data that are likely to be rich, contextualized, and ecologically valid. After an exploration of how evidence for the validity of text mining output may be generated, we conclude the article by illustrating the text mining process in a job analysis setting using a dataset composed of job vacancies. PMID:29881248
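
    One concrete path through the reviewed stages (representation, dimensionality reduction, clustering) can be sketched with scikit-learn; the toy vacancy snippets and parameter choices are illustrative only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = [                                  # toy "job vacancy" snippets
    "data analyst sql reporting dashboards",
    "nurse patient care hospital shifts",
    "machine learning python data pipelines",
    "registered nurse clinical ward experience",
]

tfidf = TfidfVectorizer(stop_words="english")                 # numeric text representation
X = tfidf.fit_transform(docs)
X_red = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)   # dimensionality reduction
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_red)  # clustering

for doc, lab in zip(docs, labels):
    print(lab, doc)
```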

  3. Text Mining in Organizational Research.

    PubMed

    Kobayashi, Vladimer B; Mol, Stefan T; Berkers, Hannah A; Kismihók, Gábor; Den Hartog, Deanne N

    2018-07-01

    Despite the ubiquity of textual data, so far few researchers have applied text mining to answer organizational research questions. Text mining, which essentially entails a quantitative approach to the analysis of (usually) voluminous textual data, helps accelerate knowledge discovery by radically increasing the amount of data that can be analyzed. This article aims to acquaint organizational researchers with the fundamental logic underpinning text mining, the analytical stages involved, and contemporary techniques that may be used to achieve different types of objectives. The specific analytical techniques reviewed are (a) dimensionality reduction, (b) distance and similarity computing, (c) clustering, (d) topic modeling, and (e) classification. We describe how text mining may extend contemporary organizational research by allowing the testing of existing or new research questions with data that are likely to be rich, contextualized, and ecologically valid. After an exploration of how evidence for the validity of text mining output may be generated, we conclude the article by illustrating the text mining process in a job analysis setting using a dataset composed of job vacancies.

  4. On the Prediction of Solar Cell Degradation in Space

    NASA Astrophysics Data System (ADS)

    Bourgoin, J. C.; Boizot, B.; Khirouni, K.; Khorenko, V.

    2014-08-01

    We discuss the validity of the procedure used to predict the End Of Life (EOL) performance of a solar cell in space. The procedure consists of measuring the performance of the cell after it has been irradiated to the EOL fluence over a time ti that is very short compared with the duration tm of the mission in space, i.e. at a considerably larger flux. We show that this procedure is valid only when the defects created by the irradiation do not anneal (thermally or by carrier injection) with a time constant shorter than tm or longer than ti. This can be a common situation, since annealing of irradiation-induced defects occurs in all types of cells, at least under specific conditions (temperature, intensity of illumination, flux and nature of the irradiating particles). Using modeling, we illustrate the effect of injection or thermal annealing on EOL prediction in the case of GaInP, a material at the heart of modern high-efficiency space solar cells.

  5. A new hybrid double divisor ratio spectra method for the analysis of ternary mixtures

    NASA Astrophysics Data System (ADS)

    Youssef, Rasha M.; Maher, Hadir M.

    2008-10-01

    A new spectrophotometric method was developed for the simultaneous determination of ternary mixtures, without prior separation steps. This method is based on convolution of the double divisor ratio spectra, obtained by dividing the absorption spectrum of the ternary mixture by a standard spectrum of two of the three compounds in the mixture, using combined trigonometric Fourier functions. The magnitude of the Fourier function coefficients, at either maximum or minimum points, is related to the concentration of each drug in the mixture. The mathematical explanation of the procedure is illustrated. The method was applied for the assay of a model mixture consisting of isoniazid (ISN), rifampicin (RIF) and pyrazinamide (PYZ) in synthetic mixtures, commercial tablets and human urine samples. The developed method was compared with the double divisor ratio spectra derivative method (DDRD) and derivative ratio spectra-zero-crossing method (DRSZ). Linearity, validation, accuracy, precision, limits of detection, limits of quantitation, and other aspects of analytical validation are included in the text.
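    The arithmetic of the double-divisor step can be sketched briefly. The snippet below is an illustrative assumption rather than the authors' procedure: spectra are NumPy arrays on a common wavelength grid, the mixture spectrum is divided by the sum of the standard spectra of two of the three components, and trigonometric Fourier coefficients of the resulting ratio spectrum are taken (here via an FFT as a stand-in for the combined Fourier functions of the paper). All array names and the calibration comment are hypothetical.

```python
import numpy as np

def double_divisor_ratio(mixture, standard_a, standard_b, eps=1e-12):
    """Divide the mixture spectrum by the summed spectra of two standards.

    All inputs are 1-D absorbance arrays sampled on the same wavelength grid.
    """
    divisor = standard_a + standard_b
    return mixture / (divisor + eps)      # eps guards against division by zero

def fourier_coefficients(ratio_spectrum, n_coeffs=8):
    """Return magnitudes of the leading trigonometric Fourier coefficients."""
    coeffs = np.fft.rfft(ratio_spectrum)
    return np.abs(coeffs[:n_coeffs]) / ratio_spectrum.size

# Hypothetical usage: a selected coefficient is assumed to vary linearly with the
# concentration of the third component, so a least-squares calibration against
# standards of known concentration would follow.
```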

  6. Towards a Consolidated Approach for the Assessment of Evaluation Models of Nuclear Power Reactors

    DOE PAGES

    Epiney, A.; Canepa, S.; Zerkak, O.; ...

    2016-11-02

    The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case, imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.

  7. The landscape model: A model for exploring trade-offs between agricultural production and the environment.

    PubMed

    Coleman, Kevin; Muhammed, Shibu E; Milne, Alice E; Todman, Lindsay C; Dailey, A Gordon; Glendining, Margaret J; Whitmore, Andrew P

    2017-12-31

    We describe a model framework that simulates spatial and temporal interactions in agricultural landscapes and that can be used to explore trade-offs between production and environment, thereby helping to determine solutions to the problems of sustainable food production. Here we focus on models of agricultural production, water movement and nutrient flow in a landscape. We validate these models against data from two long-term experiments (the first a continuous wheat experiment and the other a permanent grassland experiment) and an experiment where water and nutrient flow are measured from isolated catchments. The model simulated wheat yield (RMSE 20.3-28.6%), grain N (RMSE 21.3-42.5%) and P (RMSE 20.2-29%, excluding the nil-N plots), and total soil organic carbon particularly well (RMSE 3.1-13.8%); the simulations of water flow were also reasonable (RMSE 180.36 and 226.02%). We illustrate the use of our model framework to explore trade-offs between production and nutrient losses. Copyright © 2017 Rothamsted Research. Published by Elsevier B.V. All rights reserved.
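    The validation skill reported here is a relative (percent) RMSE. As a small, hedged illustration of how such a figure can be computed from paired observed and simulated values (a generic formulation, not the authors' code; the normalisation by the observed mean is an assumption):

```python
import numpy as np

def relative_rmse(observed, simulated):
    """Root-mean-square error expressed as a percentage of the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return 100.0 * rmse / np.mean(observed)

# Example with made-up wheat yields (t/ha):
obs = [6.1, 7.4, 5.8, 8.0]
sim = [5.7, 7.9, 6.2, 7.5]
print(f"RMSE = {relative_rmse(obs, sim):.1f}%")
```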

  8. Family Environment and Childhood Obesity: A New Framework with Structural Equation Modeling

    PubMed Central

    Huang, Hui; Wan Mohamed Radzi, Che Wan Jasimah bt; Salarzadeh Jenatabadi, Hashem

    2017-01-01

    The main purpose of the current article is to introduce a framework of the complexity of childhood obesity based on the family environment. A conceptual model that quantifies the relationships and interactions among parental socioeconomic status, family food security level, child’s food intake and certain aspects of parental feeding behaviour is presented using the structural equation modeling (SEM) concept. Structural models are analysed in terms of the direct and indirect connections among latent and measurement variables that lead to the child weight indicator. To illustrate the accuracy, fit, reliability and validity of the introduced framework, real data collected from 630 families from Urumqi (Xinjiang, China) were considered. The framework includes two categories of data comprising the normal body mass index (BMI) range and obesity data. The comparison analysis between two models provides some evidence that in obesity modeling, obesity data must be extracted from the dataset and analysis must be done separately from the normal BMI range. This study may be helpful for researchers interested in childhood obesity modeling based on family environment. PMID:28208833

  9. On the Effects of Artificial Feeding on Bee Colony Dynamics: A Mathematical Model

    PubMed Central

    Paiva, Juliana Pereira Lisboa Mohallem; Paiva, Henrique Mohallem; Esposito, Elisa; Morais, Michelle Manfrini

    2016-01-01

    This paper proposes a new mathematical model to evaluate the effects of artificial feeding on bee colony population dynamics. The proposed model is based on a classical framework and contains differential equations that describe the changes in the number of hive bees, forager bees, and brood cells, as a function of amounts of natural and artificial food. The model includes the following elements to characterize the artificial feeding scenario: a function to model the preference of the bees for natural food over artificial food; parameters to quantify the quality and palatability of artificial diets; a function to account for the efficiency of the foragers in gathering food under different environmental conditions; and a function to represent different approaches used by the beekeeper to feed the hive with artificial food. Simulated results are presented to illustrate the main characteristics of the model and its behavior under different scenarios. The model results are validated with experimental data from the literature involving four different artificial diets. A good match between simulated and experimental results was achieved. PMID:27875589
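    The published model is a system of ordinary differential equations coupling hive bees, foragers, brood, and natural and artificial food. The sketch below only shows the general shape of such a formulation and how it can be integrated numerically with SciPy; the state variables, rate expressions, and parameter values are placeholders, not the equations of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def colony_rhs(t, y, eclosion=1500.0, recruit=0.1, forager_death=0.1,
               gather_rate=0.5, artificial_supply=300.0, palatability=0.6,
               consumption=0.05, half_sat=500.0):
    """Toy bee-colony dynamics: hive bees H, foragers F, food stores S (arbitrary units)."""
    H, F, S = y
    dH = eclosion * S / (S + half_sat) - recruit * H      # brood emergence minus recruitment
    dF = recruit * H - forager_death * F                  # recruitment minus forager mortality
    intake = gather_rate * F + palatability * artificial_supply   # natural plus accepted artificial food
    dS = intake - consumption * (H + F)                   # food gathered minus colony consumption
    return [dH, dF, dS]

sol = solve_ivp(colony_rhs, (0.0, 180.0), y0=[10000.0, 4000.0, 2000.0], max_step=1.0)
print(sol.y[:, -1])   # hive bees, foragers, food stores after 180 days
```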

  10. Multi-day activity scheduling reactions to planned activities and future events in a dynamic model of activity-travel behavior

    NASA Astrophysics Data System (ADS)

    Nijland, Linda; Arentze, Theo; Timmermans, Harry

    2014-01-01

    Modeling multi-day planning has received scarce attention in activity-based transport demand modeling so far. However, new dynamic activity-based approaches are being developed at the current moment. The frequency and inflexibility of planned activities and events in activity schedules of individuals indicate the importance of incorporating those pre-planned activities in the new generation of dynamic travel demand models. Elaborating and combining previous work on event-driven activity generation, the aim of this paper is to develop and illustrate an extension of a need-based model of activity generation that takes into account possible influences of pre-planned activities and events. This paper describes the theory and shows the results of simulations of the extension. The simulation was conducted for six different activities, and the parameter values used were consistent with an earlier estimation study. The results show that the model works well and that the influences of the parameters are consistent, logical, and have clear interpretations. These findings offer further evidence of face and construct validity to the suggested modeling approach.

  11. Family Environment and Childhood Obesity: A New Framework with Structural Equation Modeling.

    PubMed

    Huang, Hui; Wan Mohamed Radzi, Che Wan Jasimah Bt; Salarzadeh Jenatabadi, Hashem

    2017-02-13

    The main purpose of the current article is to introduce a framework of the complexity of childhood obesity based on the family environment. A conceptual model that quantifies the relationships and interactions among parental socioeconomic status, family food security level, child's food intake and certain aspects of parental feeding behaviour is presented using the structural equation modeling (SEM) concept. Structural models are analysed in terms of the direct and indirect connections among latent and measurement variables that lead to the child weight indicator. To illustrate the accuracy, fit, reliability and validity of the introduced framework, real data collected from 630 families from Urumqi (Xinjiang, China) were considered. The framework includes two categories of data comprising the normal body mass index (BMI) range and obesity data. The comparison analysis between two models provides some evidence that in obesity modeling, obesity data must be extracted from the dataset and analysis must be done separately from the normal BMI range. This study may be helpful for researchers interested in childhood obesity modeling based on family environment.

  12. Experimental investigation on the infrared refraction and extinction properties of rock dust in tunneling face of coal mine.

    PubMed

    Wang, Wenzheng; Wang, Yanming; Shi, Guoqing

    2015-12-10

    Comprehensive experimental research on the fundamental optical properties of dust pollution in a coal mine is presented. Rock dust generated in a tunneling roadway was sampled and the spectral complex refractive index within an infrared range of 2.5-25 μm was obtained by Fourier transform infrared spectroscopy measurement and Kramers-Kronig relation. Experimental results were validated to be consistent with equivalent optical constants simulated by effective medium theory based on component analysis of x-ray fluorescence, which illustrates that the top three mineral components are SiO2 (62.06%), Al2O3 (21.26%), and Fe2O3 (4.27%). The complex refractive index and the spatial distribution tested by a filter dust and particle size analyzer were involved in the simulation of extinction properties of rock dust along the tunneling roadway solved by the discrete ordinates method and Mie scattering model. The compared results illustrate that transmission is obviously enhanced with the increase of height from the floor but weakened with increasing horizontal distance from the air duct.

  13. Feature-based RNN target recognition

    NASA Astrophysics Data System (ADS)

    Bakircioglu, Hakan; Gelenbe, Erol

    1998-09-01

    Detection and recognition of target signatures in sensory data obtained by synthetic aperture radar (SAR), forward-looking infrared, or laser radar have received considerable attention in the literature. In this paper, we propose a feature-based target classification methodology to detect and classify targets in cluttered SAR images that makes use of selective signature data from sensory data, together with a neural network technique that uses a set of networks based on the Random Neural Network (RNN) model (Gelenbe 89, 90, 91, 93), each trained to act as a matched filter. We propose and investigate radial features of target shapes that are invariant to rotation, translation, and scale to characterize target and clutter signatures. These features are then used to train a set of learning RNNs which can detect targets within clutter with high accuracy and classify the targets or man-made objects from natural clutter. Experimental data from SAR imagery are used to illustrate and validate the proposed method, and to calculate Receiver Operating Characteristics which illustrate the performance of the proposed algorithm.

  14. Modeling the temporal periodicity of growth increments based on harmonic functions

    PubMed Central

    Morales-Bojórquez, Enrique; González-Peláez, Sergio Scarry; Bautista-Romero, J. Jesús; Lluch-Cota, Daniel Bernardo

    2018-01-01

    Age estimation methods based on hard structures require a process of validation to confirm the periodical pattern of growth marks. Among such processes, one of the most used is the marginal increment ratio (MIR), which was stated to follow a sinusoidal cycle in a population. Despite its utility, in most cases, its implementation has lacked robust statistical analysis. Accordingly, we propose a modeling approach for the temporal periodicity of growth increments based on single and second order harmonic functions. For illustrative purposes, the MIR periodicities for two geoduck species (Panopea generosa and Panopea globosa) were modeled to identify the periodical pattern of growth increments in the shell. This model identified an annual periodicity for both species but described different temporal patterns. The proposed procedure can be broadly used to objectively define the timing of the peak, the degree of symmetry, and therefore, the synchrony of band deposition of different species on the basis of MIR data. PMID:29694381
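    A single-order harmonic model for marginal increment ratio data can be fitted by ordinary least squares, since MIR ≈ a0 + a1 cos(2πm/12) + b1 sin(2πm/12) is linear in its coefficients (m is the month); a second-order harmonic simply adds cos(4πm/12) and sin(4πm/12) columns. The sketch below is a generic illustration under that assumption, not the authors' code.

```python
import numpy as np

def fit_harmonic(months, mir, order=1, period=12.0):
    """Least-squares fit of a harmonic (sinusoidal) model to MIR observations."""
    months = np.asarray(months, dtype=float)
    cols = [np.ones_like(months)]
    for k in range(1, order + 1):
        cols.append(np.cos(2.0 * np.pi * k * months / period))
        cols.append(np.sin(2.0 * np.pi * k * months / period))
    X = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(mir, dtype=float), rcond=None)
    return coeffs, X @ coeffs    # fitted coefficients and fitted values
```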

  15. On the importance of controlling for effort in analysis of count survey data: Modeling population change from Christmas Bird Count data

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.; Helbig, Andreas J.; Flade, Martin

    1999-01-01

    Count survey data are commonly used for estimating temporal and spatial patterns of population change. Since count surveys are not censuses, counts can be influenced by 'nuisance factors' related to the probability of detecting animals but unrelated to the actual population size. The effects of systematic changes in these factors can be confounded with patterns of population change. Thus, valid analysis of count survey data requires the identification of nuisance factors and flexible models for their effects. We illustrate using data from the Christmas Bird Count (CBC), a midwinter survey of bird populations in North America. CBC survey effort has substantially increased in recent years, suggesting that unadjusted counts may overstate population growth (or understate declines). We describe a flexible family of models for the effect of effort, that includes models in which increasing effort leads to diminishing returns in terms of the number of birds counted.
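    One common way to let increasing effort yield diminishing returns is to model the expected count as proportional to a power of effort, E[count] = exp(intercept + trend·year)·effort^p with 0 < p < 1 (p = 1 recovers proportional effort, p = 0 ignores effort). The sketch below fits such a Poisson model by maximum likelihood; the covariates, the power-law form, and all numbers are illustrative assumptions, not the specific model family of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, years, effort, counts):
    """Poisson likelihood with a log-linear trend and power-law effort adjustment."""
    intercept, trend, p = params
    log_mu = intercept + trend * years + p * np.log(effort)
    return np.sum(np.exp(log_mu) - counts * log_mu)

years = np.array([0., 1., 2., 3., 4., 5.])
effort = np.array([10., 12., 15., 20., 26., 30.])       # party-hours (made up)
counts = np.array([42, 45, 50, 55, 61, 63])             # birds counted (made up)

fit = minimize(neg_log_lik, x0=[1.0, 0.0, 0.5], args=(years, effort, counts))
intercept, trend, p = fit.x
print(f"effort exponent p = {p:.2f}, effort-adjusted trend = {trend:.3f}")
```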

  16. Numerical modeling of surface wave development under the action of wind

    NASA Astrophysics Data System (ADS)

    Chalikov, Dmitry

    2018-06-01

    The numerical modeling of two-dimensional surface wave development under the action of wind is performed. The model is based on three-dimensional equations of potential motion with a free surface written in a surface-following nonorthogonal curvilinear coordinate system in which depth is counted from a moving surface. A three-dimensional Poisson equation for the velocity potential is solved iteratively. A Fourier transform method, a second-order accuracy approximation of vertical derivatives on a stretched vertical grid and fourth-order Runge-Kutta time stepping are used. Both the input energy to waves and dissipation of wave energy are calculated on the basis of earlier developed and validated algorithms. A one-processor version of the model for PC allows us to simulate an evolution of the wave field with thousands of degrees of freedom over thousands of wave periods. A long-time evolution of a two-dimensional wave structure is illustrated by the spectra of wave surface and the input and output of energy.

  17. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method for the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique has meshing task problems and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take both advantages of FE and EFG methods, the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute Jacobian in the inverse problem. Utilizing 2D circular homogenous models, the numerical results are validated with analytical and experimental results and the performance of the hybrid FE-EFG method compared with the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.
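    The inverse step described here, an iteratively regularized Gauss-Newton update, has a compact generic form: each iteration solves (JᵀJ + λI)δσ = Jᵀ(v_meas − v_sim) for the conductivity update. The sketch below shows that update in isolation, with the forward solver and Jacobian left as user-supplied callables; it is a schematic of the algorithm class, not the hybrid FE-EFG code of the paper.

```python
import numpy as np

def gauss_newton(sigma0, forward, jacobian, v_meas, lam=1e-2, n_iter=10):
    """Regularized Gauss-Newton iteration for an EIT-style inverse problem.

    forward(sigma)  -> simulated boundary voltages (1-D array)
    jacobian(sigma) -> sensitivity matrix d(voltages)/d(sigma)
    """
    sigma = np.array(sigma0, dtype=float)
    for _ in range(n_iter):
        r = v_meas - forward(sigma)                       # data residual
        J = jacobian(sigma)
        lhs = J.T @ J + lam * np.eye(sigma.size)          # Tikhonov-regularized normal equations
        sigma = sigma + np.linalg.solve(lhs, J.T @ r)
    return sigma
```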

  18. Bio-inspired flexible joints with passive feathering for robotic fish pectoral fins.

    PubMed

    Behbahani, Sanaz Bazaz; Tan, Xiaobo

    2016-05-04

    In this paper a novel flexible joint is proposed for robotic fish pectoral fins, which enables a swimming behavior emulating the fin motions of many aquatic animals. In particular, the pectoral fin operates primarily in the rowing mode, while undergoing passive feathering during the recovery stroke to reduce hydrodynamic drag on the fin. The latter enables effective locomotion even with symmetric base actuation during power and recovery strokes. A dynamic model is developed to facilitate the understanding and design of the joint, where blade element theory is used to calculate the hydrodynamic forces on the pectoral fins, and the joint is modeled as a paired torsion spring and damper. Experimental results on a robotic fish prototype are presented to illustrate the effectiveness of the joint mechanism, validate the proposed model, and indicate the utility of the proposed model for the optimal design of joint depth and stiffness in achieving the trade-off between swimming speed and mechanical efficiency.

  19. The Galileo System of Measurement: Preliminary Evidence for Precision, Stability, and Equivalance to Traditional Measures

    ERIC Educational Resources Information Center

    Gillham, James; Woelfel, Joseph

    1977-01-01

    Describes the Galileo system of measurement operations including reliability and validity data. Illustrations of some of the relations between Galileo measures and traditional procedures are provided. (MH)

  20. The Clinical Use of Hypnotic Regression and Progression in Psychotherapy.

    ERIC Educational Resources Information Center

    Goldberg, Bruce

    1990-01-01

    Discusses concept of time in therapy, presenting theoretical and clinical foundations to illustrate the validity of guiding patients into past lives and future lifetimes through hypnosis to resolve self-defeating sequences. (Author/TE)

  1. Overview and technical and practical aspects for use of geostatistics in hazardous-, toxic-, and radioactive-waste-site investigations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bossong, C.R.; Karlinger, M.R.; Troutman, B.M.

    1999-10-01

    Technical and practical aspects of applying geostatistics are developed for individuals involved in investigation at hazardous-, toxic-, and radioactive-waste sites. Important geostatistical concepts, such as variograms and ordinary, universal, and indicator kriging, are described in general terms for introductory purposes and in more detail for practical applications. Variogram modeling using measured ground-water elevation data is described in detail to illustrate principles of stationarity, anisotropy, transformations, and cross validation. Several examples of kriging applications are described using ground-water-level elevations, bedrock elevations, and ground-water-quality data. A review of contemporary literature and selected public domain software associated with geostatistics also is provided, as is a discussion of alternative methods for spatial modeling, including inverse distance weighting, triangulation, splines, trend-surface analysis, and simulation.
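    Two of the building blocks mentioned above, the empirical (semi)variogram and inverse distance weighting, are easy to sketch. The snippet below is a generic illustration of those concepts only (no kriging system is solved, and the binning choices are arbitrary).

```python
import numpy as np

def empirical_variogram(coords, values, n_bins=10):
    """Average semivariance 0.5*(z_i - z_j)^2 binned by pair separation distance."""
    coords, values = np.asarray(coords, float), np.asarray(values, float)
    i, j = np.triu_indices(len(values), k=1)
    d = np.linalg.norm(coords[i] - coords[j], axis=1)
    gamma = 0.5 * (values[i] - values[j]) ** 2
    edges = np.linspace(0.0, d.max(), n_bins + 1)
    which = np.digitize(d, edges) - 1
    lags = 0.5 * (edges[:-1] + edges[1:])
    semivar = np.array([gamma[which == b].mean() if np.any(which == b) else np.nan
                        for b in range(n_bins)])
    return lags, semivar

def idw(coords, values, targets, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation at target locations."""
    d = np.linalg.norm(np.asarray(targets)[:, None, :] - np.asarray(coords)[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * np.asarray(values)).sum(axis=1) / w.sum(axis=1)
```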

  2. HAMP - the microwave package on the High Altitude and LOng range research aircraft (HALO)

    NASA Astrophysics Data System (ADS)

    Mech, M.; Orlandi, E.; Crewell, S.; Ament, F.; Hirsch, L.; Hagen, M.; Peters, G.; Stevens, B.

    2014-12-01

    An advanced package of microwave remote sensing instrumentation has been developed for the operation on the new German High Altitude LOng range research aircraft (HALO). The HALO Microwave Package, HAMP, consists of two nadir-looking instruments: a cloud radar at 36 GHz and a suite of passive microwave radiometers with 26 frequencies in different bands between 22.24 and 183.31 ± 12.5 GHz. We present a description of HAMP's instrumentation together with an illustration of its potential. To demonstrate this potential, synthetic measurements for the implemented passive microwave frequencies and the cloud radar based on cloud-resolving and radiative transfer model calculations were performed. These illustrate the advantage of HAMP's chosen frequency coverage, which allows for improved detection of hydrometeors both via the emission and scattering of radiation. Regression algorithms compare HAMP retrieval with standard satellite instruments from polar orbiters and show its advantages particularly for the lower atmosphere with a root-mean-square error reduced by 5 and 15% for temperature and humidity, respectively. HAMP's main advantage is the high spatial resolution of about 1 km, which is illustrated by first measurements from test flights. Together these qualities make it an exciting tool for gaining a better understanding of cloud processes, testing retrieval algorithms, defining future satellite instrument specifications, and validating platforms after they have been placed in orbit.

  3. HAMP - the microwave package on the High Altitude and LOng range research aircraft HALO

    NASA Astrophysics Data System (ADS)

    Mech, M.; Orlandi, E.; Crewell, S.; Ament, F.; Hirsch, L.; Hagen, M.; Peters, G.; Stevens, B.

    2014-05-01

    An advanced package of microwave remote sensing instrumentation has been developed for operation on the new German High Altitude LOng range research aircraft (HALO). The HALO Microwave Package, HAMP, consists of two nadir-looking instruments: a cloud radar at 36 GHz and a suite of passive microwave radiometers with 26 frequencies in different bands between 22.24 and 183.31 ± 12.5 GHz. We present a description of HAMP's instrumentation together with an illustration of its potential. To demonstrate this potential, synthetic measurements for the implemented passive microwave frequencies and the cloud radar were performed, based on cloud-resolving and radiative transfer model calculations. These illustrate the advantage of HAMP's chosen frequency coverage, which allows for improved detection of hydrometeors both via the emission and scattering of radiation. Regression algorithms compare HAMP retrieval with standard satellite instruments from polar orbiters and show its advantages particularly for the lower atmosphere, with a root-mean-square error reduced by 5 and 15% for temperature and humidity, respectively. HAMP's main advantage is the high spatial resolution of about 1 km, which is illustrated by first measurements from test flights. Together these qualities make it an exciting tool for gaining a better understanding of cloud processes, testing retrieval algorithms, defining future satellite instrument specifications, and validating platforms after they have been placed in orbit.

  4. Moving beyond qualitative evaluations of Bayesian models of cognition.

    PubMed

    Hemmer, Pernille; Tauber, Sean; Steyvers, Mark

    2015-06-01

    Bayesian models of cognition provide a powerful way to understand the behavior and goals of individuals from a computational point of view. Much of the focus in the Bayesian cognitive modeling approach has been on qualitative model evaluations, where predictions from the models are compared to data that is often averaged over individuals. In many cognitive tasks, however, there are pervasive individual differences. We introduce an approach to directly infer individual differences related to subjective mental representations within the framework of Bayesian models of cognition. In this approach, Bayesian data analysis methods are used to estimate cognitive parameters and motivate the inference process within a Bayesian cognitive model. We illustrate this integrative Bayesian approach on a model of memory. We apply the model to behavioral data from a memory experiment involving the recall of heights of people. A cross-validation analysis shows that the Bayesian memory model with inferred subjective priors predicts withheld data better than a Bayesian model where the priors are based on environmental statistics. In addition, the model with inferred priors at the individual subject level led to the best overall generalization performance, suggesting that individual differences are important to consider in Bayesian models of cognition.

  5. Identifying the starting point of a spreading process in complex networks.

    PubMed

    Comin, Cesar Henrique; Costa, Luciano da Fontoura

    2011-11-01

    When dealing with the dissemination of epidemics, one important question that can be asked is the location where the contamination began. In this paper, we analyze three spreading schemes and propose and validate an effective methodology for the identification of the source nodes. The method is based on the calculation of the centrality of the nodes on the sampled network, expressed here by degree, betweenness, closeness, and eigenvector centrality. We show that the source node tends to have the highest measurement values. The potential of the methodology is illustrated with respect to three theoretical complex network models as well as a real-world network, the email network of the University Rovira i Virgili.
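    A minimal version of this idea using NetworkX: compute standard centrality measures on the subgraph of sampled (contaminated) nodes and rank candidate sources by centrality, on the premise that the source tends to have the highest values. The propagation itself is not simulated here, and `infected_nodes` is a placeholder for whatever sampled network the method is applied to.

```python
import networkx as nx

def rank_source_candidates(graph, infected_nodes):
    """Rank nodes of the sampled (infected) subgraph by several centrality measures."""
    sub = graph.subgraph(infected_nodes)
    scores = {
        "degree": nx.degree_centrality(sub),
        "betweenness": nx.betweenness_centrality(sub),
        "closeness": nx.closeness_centrality(sub),
        "eigenvector": nx.eigenvector_centrality(sub, max_iter=1000),
    }
    # For each measure, sort nodes from highest to lowest centrality.
    return {name: sorted(vals, key=vals.get, reverse=True) for name, vals in scores.items()}

# Toy usage on a scale-free graph with an arbitrary "infected" subset.
G = nx.barabasi_albert_graph(200, 3, seed=1)
candidates = rank_source_candidates(G, infected_nodes=list(range(50)))
print({k: v[:3] for k, v in candidates.items()})   # top-3 candidates per measure
```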

  6. Engine Load Path Calculations - Project Neo

    NASA Technical Reports Server (NTRS)

    Fisher, Joseph

    2014-01-01

    A mathematical model of the engine and actuator geometry was developed and used to perform a static force analysis of the system with the engine at different pitch and yaw angles. This analysis yielded the direction and magnitude of the reaction forces at the mounting points of the engine and actuators. These data were used to validate the selection of the actuators installed in the system and to design a new spherical joint to mount the engine on the test fixture. To illustrate the motion of the system and to further interest in the project, a functional 3D printed version of the system was made, featuring the full mobility of the real system.

  7. FPGA implementation of current-sharing strategy for parallel-connected SEPICs

    NASA Astrophysics Data System (ADS)

    Ezhilarasi, A.; Ramaswamy, M.

    2016-01-01

    This work develops an equal current-sharing algorithm for a number of single-ended primary inductance converters (SEPICs) connected in parallel. The methodology involves the development of a state-space model to predict the condition for the existence of a stable equilibrium portrait. A variable structure controller guides the trajectory, with a view to circumventing the circuit non-linearities and arriving at stable performance over the preferred operating range. The design exhibits acceptable servo and regulatory characteristics and the desired time response, and ensures regulation of the load voltage. The simulation results, validated through a field programmable gate array-based prototype, serve to illustrate its suitability for present-day applications.

  8. Data Processing for Atmospheric Phase Interferometers

    NASA Technical Reports Server (NTRS)

    Acosta, Roberto J.; Nessel, James A.; Morabito, David D.

    2009-01-01

    This paper presents a detailed discussion of calibration procedures used to analyze data recorded from a two-element atmospheric phase interferometer (API) deployed at Goldstone, California. In addition, we describe the data products derived from those measurements that can be used for site intercomparison and atmospheric modeling. Simulated data is used to demonstrate the effectiveness of the proposed algorithm and as a means for validating our procedure. A study of the effect of block size filtering is presented to justify our process for isolating atmospheric fluctuation phenomena from other system-induced effects (e.g., satellite motion, thermal drift). A simulated 24 hr interferometer phase data time series is analyzed to illustrate the step-by-step calibration procedure and desired data products.

  9. Berry connection in atom-molecule systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui Fucheng; Wu Biao; International Center for Quantum Materials, Peking University, 100871 Beijing

    2011-08-15

    In the mean-field theory of atom-molecule systems, where bosonic atoms combine to form molecules, there is no usual U(1) symmetry, presenting an apparent hurdle for defining the Berry phase and Berry curvature for these systems. We define a Berry connection for this system, with which the Berry phase and Berry curvature can be naturally computed. We use a three-level atom-molecule system to illustrate our results. In particular, we have computed the mean-field Berry curvature of this system analytically, and compared it to the Berry curvature computed with the second-quantized model of the same system. An excellent agreement is found, indicating the validity of our definition.
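    For readers unfamiliar with the quantities being generalized, the standard parameter-space definitions can be written compactly; the mean-field atom-molecule construction in the paper modifies the connection, but the objects computed from it have this familiar form.

```latex
% Standard definitions of the Berry connection, phase, and curvature
% for a state |\psi(\mathbf{R})\rangle depending on parameters \mathbf{R}.
\begin{align}
  \mathbf{A}(\mathbf{R}) &= i\,\langle \psi(\mathbf{R})\,|\,\nabla_{\mathbf{R}}\,\psi(\mathbf{R})\rangle, \\
  \gamma_C &= \oint_C \mathbf{A}(\mathbf{R})\cdot d\mathbf{R}, \\
  \boldsymbol{\Omega}(\mathbf{R}) &= \nabla_{\mathbf{R}} \times \mathbf{A}(\mathbf{R}).
\end{align}
```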

  10. Truncated Sum Rules and Their Use in Calculating Fundamental Limits of Nonlinear Susceptibilities

    NASA Astrophysics Data System (ADS)

    Kuzyk, Mark G.

    Truncated sum rules have been used to calculate the fundamental limits of the nonlinear susceptibilities and the results have been consistent with all measured molecules. However, given that finite-state models appear to result in inconsistencies in the sum rules, it may seem unclear why the method works. In this paper, the assumptions inherent in the truncation process are discussed and arguments based on physical grounds are presented in support of using truncated sum rules in calculating fundamental limits. The clipped harmonic oscillator is used as an illustration of how the validity of truncation can be tested and several limiting cases are discussed as examples of the nuances inherent in the method.
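    The sum rules being truncated are the Thomas-Reiche-Kuhn relations; keeping only a finite number of states replaces the infinite sum by a finite one, which is the step whose internal consistency the paper examines. Written for the diagonal (ground-state) element, in standard form:

```latex
% Thomas-Reiche-Kuhn sum rule (diagonal element) and its finite-state truncation,
% for N_e electrons of mass m with transition moments x_{0n} and energies E_n.
\begin{align}
  \sum_{n=0}^{\infty} \left(E_n - E_0\right)\,\lvert x_{0n}\rvert^{2}
     &= \frac{\hbar^{2} N_e}{2m}, \\
  \text{truncated:}\qquad
  \sum_{n=0}^{N} \left(E_n - E_0\right)\,\lvert x_{0n}\rvert^{2}
     &\approx \frac{\hbar^{2} N_e}{2m}.
\end{align}
```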

  11. Symmetry Relations in Chemical Kinetics Arising from Microscopic Reversibility

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.

    2006-01-01

    It is shown that the kinetics of time-reversible chemical reactions having the same equilibrium constant but different initial conditions are closely related to one another by a directly measurable symmetry relation analogous to chemical detailed balance. In contrast to detailed balance, however, this relation does not require knowledge of the elementary steps that underlie the reaction, and remains valid in regimes where the concept of rate constants is ill defined, such as at very short times and in the presence of low activation barriers. Numerical simulations of a model of isomerization in solution are provided to illustrate the symmetry under such conditions, and potential applications in protein folding or unfolding are pointed out.

  12. Managing design excellence tools during the development of new orthopaedic implants.

    PubMed

    Défossez, Henri J P; Serhan, Hassan

    2013-11-01

    Design excellence (DEX) tools have been widely used for years in some industries for their potential to facilitate new product development. The medical sector, targeted by cost pressures, has therefore started adopting them. Numerous tools are available; however, only appropriate deployment during the new product development stages can optimize the overall process. The primary study objectives were to describe generic tools and illustrate their implementation and management during the development of new orthopaedic implants, and compile a reference package. Secondary objectives were to present the DEX tool investment costs and savings, since the method can require significant resources for which companies must carefully plan. The publicly available DEX method "Define Measure Analyze Design Verify Validate" was adopted and implemented during the development of a new spinal implant. Several tools proved most successful at developing the correct product, addressing clinical needs, and increasing market penetration potential, while reducing design iterations and manufacturing validations. Cost analysis and a Pugh Matrix, coupled with multi-generation planning, enabled the development of a strong rationale to activate the project and to set the vision and goals. Improved risk management and a product map established a robust technical verification-validation program. Design of experiments and process quantification facilitated design for manufacturing of critical features, as early as the concept phase. Biomechanical testing with analysis of variance provided a validation model with a recognized statistical performance baseline. Within those tools, only certain ones required minimum resources (i.e., business case, multi-generational plan, project value proposition, Pugh Matrix, critical-to-quality process validation techniques), while others required significant investments (i.e., voice of customer, product usage map, improved risk management, design of experiments, biomechanical testing techniques). All used techniques provided savings exceeding investment costs. Some other tools were considered and found less relevant. A matrix summarized the investment costs and generated estimated savings. Globally, all companies can benefit from using DEX by smartly selecting and estimating those tools with the best return on investment at the start of the project. For this, a good understanding of the available company resources, background and development strategy is needed. In conclusion, it was possible to illustrate that appropriate management of design excellence tools can greatly facilitate the development of new orthopaedic implant systems.

  13. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because the data usually consist of nonnegative, skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models, implemented in the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.

  14. Actor groups, related needs, and challenges at the climate downscaling interface

    NASA Astrophysics Data System (ADS)

    Rössler, Ole; Benestad, Rasmus; Diamando, Vlachogannis; Heike, Hübener; Kanamaru, Hideki; Pagé, Christian; Margarida Cardoso, Rita; Soares, Pedro; Maraun, Douglas; Kreienkamp, Frank; Christodoulides, Paul; Fischer, Andreas; Szabo, Peter

    2016-04-01

    At the climate downscaling interface, numerous downscaling techniques and different philosophies compete on being the best method in their specific terms. Thereby, it remains unclear to what extent and for which purpose these downscaling techniques are valid or even the most appropriate choice. A common validation framework that compares all the different available methods was missing so far. The initiative VALUE closes this gap with such a common validation framework. An essential part of a validation framework for downscaling techniques is the definition of appropriate validation measures. The selection of validation measures should consider the needs of the stakeholder: some might need a temporal or spatial average of a certain variable, others might need temporal or spatial distributions of some variables, still others might need extremes for the variables of interest or even inter-variable dependencies. Hence, a close interaction of climate data providers and climate data users is necessary. Thus, the challenge in formulating a common validation framework mirrors also the challenges between the climate data providers and the impact assessment community. This poster elaborates the issues and challenges at the downscaling interface as it is seen within the VALUE community. It suggests three different actor groups: one group consisting of the climate data providers, the other two groups being climate data users (impact modellers and societal users). Hence, the downscaling interface faces classical transdisciplinary challenges. We depict a graphical illustration of actors involved and their interactions. In addition, we identified four different types of issues that need to be considered: i.e. data based, knowledge based, communication based, and structural issues. They all may, individually or jointly, hinder an optimal exchange of data and information between the actor groups at the downscaling interface. Finally, some possible ways to tackle these issues are discussed.

  15. USING DIRICHLET TESSELLATION TO HELP ESTIMATE MICROBIAL BIOMASS CONCENTRATIONS

    EPA Science Inventory

    Dirichlet tessellation was applied to estimate microbial concentrations from microscope well slides. The use of microscopy/Dirichlet tessellation to quantify biomass was illustrated with two species of morphologically distinct cyanobacteria, and validated empirically by compariso...
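    Dirichlet tessellation is the same construction as a Voronoi diagram, so the per-cell areas that would feed a biomass estimate can be sketched with SciPy. The snippet below ignores the unbounded cells at the edge of the field of view (a real analysis would clip them to the image boundary) and uses made-up coordinates.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 100.0, size=(60, 2))   # hypothetical cell centroids (micrometers)

vor = Voronoi(points)
areas = {}
for point_idx, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if -1 in region or len(region) == 0:          # skip unbounded cells at the image edge
        continue
    areas[point_idx] = ConvexHull(vor.vertices[region]).volume   # .volume is area in 2-D

print(f"{len(areas)} bounded cells, median area = {np.median(list(areas.values())):.1f}")
```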

  16. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.

  17. A Comprehensive Snow Density Model for Integrating Lidar-Derived Snow Depth Data into Spatial Snow Modeling

    NASA Astrophysics Data System (ADS)

    Marks, D. G.; Kormos, P.; Johnson, M.; Bormann, K. J.; Hedrick, A. R.; Havens, S.; Robertson, M.; Painter, T. H.

    2017-12-01

    Lidar-derived snow depths when combined with modeled or estimated snow density can provide reliable estimates of the distribution of SWE over large mountain areas. Application of this approach is transforming western snow hydrology. We present a comprehensive approach toward modeling bulk snow density that is reliable over a vast range of weather and snow conditions. The method is applied and evaluated over mountainous regions of California, Idaho, Oregon and Colorado in the western US. Simulated and measured snow density are compared at fourteen validation sites across the western US where measurements of snow mass (SWE) and depth are co-located. Fitting statistics for ten sites from three mountain catchments (two in Idaho, one in California) show an average Nash-Sutcliff model efficiency coefficient of 0.83, and mean bias of 4 kg m-3. Results illustrate issues associated with monitoring snow depth and SWE and show the effectiveness of the model, with a small mean bias across a range of snow and climate conditions in the west.
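    The Nash-Sutcliffe model efficiency quoted here is a standard skill score, NSE = 1 − Σ(sim − obs)² / Σ(obs − mean(obs))²; a value of 1 is a perfect fit and 0 means the model is no better than the observed mean. A generic computation (not the authors' code) is:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency of simulated vs. observed values (1 = perfect)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

def bias(observed, simulated):
    """Mean bias (simulated minus observed), e.g. in kg m-3 for bulk snow density."""
    return float(np.mean(np.asarray(simulated) - np.asarray(observed)))
```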

  18. Experimental and Numerical Modeling of Fluid Flow Processes in Continuous Casting: Results from the LIMMCAST-Project

    NASA Astrophysics Data System (ADS)

    Timmel, K.; Kratzsch, C.; Asad, A.; Schurmann, D.; Schwarze, R.; Eckert, S.

    2017-07-01

    The present paper reports on numerical simulations and model experiments concerned with the fluid flow in the continuous casting process of steel. This work was carried out in the LIMMCAST project in the framework of the Helmholtz alliance LIMTECH. A brief description of the LIMMCAST facilities used for the experimental modeling at HZDR is given here. Ultrasonic and inductive techniques and X-ray radioscopy were employed for flow measurements or visualizations of two-phase flow regimes occurring in the submerged entry nozzle and the mold. Corresponding numerical simulations were performed at TUBAF, taking into account the dimensions and properties of the model experiments. Numerical models were successfully validated using the experimental database. The reasonable and in many cases excellent agreement of numerical with experimental data allows the models to be extrapolated to real casting configurations. Exemplary results are presented here showing the effect of electromagnetic brakes or electromagnetic stirrers on the flow in the mold, or illustrating the properties of two-phase flows resulting from Ar injection through the stopper rod.

  19. Kernel spectral clustering with memory effect

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.

    2013-05-01

    Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks etc. In this framework, performing community detection and analyzing the cluster evolution represents a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as a valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue to achieve good performances. We successfully test the model on four toy problems and on a real world network. We also compare our model with Evolutionary Spectral Clustering, which is a state-of-the-art algorithm for community detection of evolving networks, illustrating that the kernel spectral clustering with memory effect can achieve better or equal performances.

  20. The International Index of Erectile Function: a methodological critique and suggestions for improvement.

    PubMed

    Yule, Morag; Davison, Joyce; Brotto, Lori

    2011-01-01

    The International Index of Erectile Function is a well-worded and psychometrically valid self-report questionnaire widely used as the standard for the evaluation of male sexual function. However, some conceptual and statistical problems arise when using the measure with men who are not sexually active. These problems are illustrated using 2 empirical examples, and the authors provide recommended solutions to further strengthen the efficacy and validity of this measure.

  1. Validating a Method for Enhanced Communications and Situational Awareness at the Incident Command Level

    DTIC Science & Technology

    2006-03-01

    operations, and other applications for the MITOC that are beneficial to national security. It will illustrate how the concept was validated by the...of the potential impact on funding, a concern was noted in discussion among members of the National Emergency Management Association (NEMA) in their...This concept of a “virtual” Homeland Security-focused National Laboratory was comprised of the combined resources of the public and private

  2. High dimensional model representation method for fuzzy structural dynamics

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software package (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
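    The expansion underlying the method has the standard HDMR form, usually truncated after the low-order terms; writing it out makes the polynomial (rather than exponential) scaling argument concrete.

```latex
% High Dimensional Model Representation of a response f of n inputs,
% usually truncated after the first- or second-order component functions.
\begin{equation}
  f(x_1,\dots,x_n) \;=\; f_0
    \;+\; \sum_{i=1}^{n} f_i(x_i)
    \;+\; \sum_{1\le i<j\le n} f_{ij}(x_i,x_j)
    \;+\;\cdots\;+\; f_{12\cdots n}(x_1,\dots,x_n).
\end{equation}
```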

  3. Two-wavelength mid-IR diagnostic for temperature and n-dodecane concentration in an aerosol shock tube

    NASA Astrophysics Data System (ADS)

    Klingbeil, A. E.; Jeffries, J. B.; Davidson, D. F.; Hanson, R. K.

    2008-11-01

    A two-wavelength, mid-IR optical absorption diagnostic is developed for simultaneous temperature and n-dodecane vapor concentration measurements in an aerosol-laden shock tube. FTIR absorption spectra for the temperature range 323 to 773 K are used to select the two wavelengths (3409.0 and 3432.4 nm). Shock-heated mixtures of n-dodecane vapor in argon are then used to extend absorption cross section data at these wavelengths to 1322 K. The sensor is used to validate a model of the post-evaporation temperature and pressure of shock-heated fuel aerosol, which can ultimately be used for the study of the chemistry of low-vapor-pressure compounds and fuel blends. The signal-to-noise ratios of the temperature and concentration measurements are ~20 and ~30, respectively, illustrating the sensitivity of this diagnostic. The good agreement between model and measurement provides confidence in the use of this aerosol shock tube to provide well-known thermodynamic conditions. At high temperatures, pseudo-first-order decomposition rates are extracted from time-resolved concentration measurements, and data from vapor and aerosol shocks are found to be in good agreement. Notably, the n-dodecane concentration measurements exhibit slower decomposition than predicted by models using two published reaction mechanisms, illustrating the need for further kinetic studies of this hydrocarbon. These results demonstrate the potential of multi-wavelength mid-IR laser sensors for hydrocarbon measurements in environments with time-varying temperature and concentration.
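    The two-wavelength inference rests on Beer-Lambert absorption: at each wavelength the absorbance is α_i = σ_i(T)·n·L, so the ratio α_1/α_2 = σ_1(T)/σ_2(T) depends only on temperature and can be inverted from the calibrated cross sections, after which either absorbance gives the number density. The sketch below assumes monotonic tabulated cross sections and made-up values; it illustrates the measurement principle, not the authors' processing code.

```python
import numpy as np

# Hypothetical calibrated absorption cross sections (m^2/molecule) vs temperature (K).
T_grid  = np.array([600.0, 800.0, 1000.0, 1200.0, 1400.0])
sigma_1 = np.array([3.0e-22, 2.6e-22, 2.3e-22, 2.1e-22, 2.0e-22])   # at 3409.0 nm
sigma_2 = np.array([1.0e-22, 1.2e-22, 1.4e-22, 1.6e-22, 1.8e-22])   # at 3432.4 nm

def infer_T_and_n(alpha_1, alpha_2, path_length_m):
    """Invert two measured absorbances for temperature and number density."""
    ratio_grid = sigma_1 / sigma_2            # monotonic in T for this toy table
    measured_ratio = alpha_1 / alpha_2
    # Interpolate temperature from the ratio (flip so the x grid is increasing).
    T = np.interp(measured_ratio, ratio_grid[::-1], T_grid[::-1])
    sigma_at_T = np.interp(T, T_grid, sigma_1)
    n = alpha_1 / (sigma_at_T * path_length_m)
    return T, n

print(infer_T_and_n(alpha_1=0.50, alpha_2=0.35, path_length_m=0.10))
```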

  4. MODIS polarization performance and anomalous four-cycle polarization phenomenon

    NASA Astrophysics Data System (ADS)

    Young, James B.; Knight, Ed; Merrow, Cindy

    1998-10-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) will be one of the primary instruments observing the earth on the Earth Observing System (EOS) scheduled for launch in 1999. MODIS polarization performance characterization was required for the 0.4 to 0.6 micrometer (VIS), 0.6 to 1.0 micrometer (NIR), and 1.0 to 2.3 micrometer (SWIR) regions. A polarized source assembly (PSA) consisting of a collimator with a rotatable Ahrens polarizer was used to illuminate MODIS with a linearly polarized beam. A MODIS signal function having two cycles per 360 degrees of prism rotation was expected. However, some spectral bands had a distinct four-cycle anomalous signal. The expected two-cycle function was present in all regions, with the four-cycle anomaly being limited to the NIR region. Fourier analysis was a very useful tool in determining the cause of the anomaly. A simplified polarization model of the PSA and MODIS was generated using the Mueller matrix-Stokes vector formalism. Parametric modeling illustrated that this anomaly could be produced by energy making multiple passes between the PSA Ahrens prism and the MODIS focal plane filters. Furthermore, the model gave NIR four-cycle magnitudes that were consistent with observations. The VIS and SWIR optical trains had birefringent elements that served to scramble the multiple-pass anomaly. The model validity was demonstrated with an experimental setup that had partial-aperture illumination, which eliminated the possibility of multiple passes. The four-cycle response was eliminated while producing the same two-cycle polarization response. Data will be shown to illustrate the four-cycle phenomenon.
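    The expected two-cycle signature follows directly from Malus's law: an ideal rotating linear polarizer followed by a weakly polarizing instrument gives a detected intensity of the form I(θ) = a0 + a2 cos(2(θ − φ)), so any genuine four-cycle term appears as a cos 4θ Fourier component. The sketch below builds the rotating-polarizer Mueller matrix and extracts the 2θ and 4θ amplitudes from a simulated scan; the instrument diattenuation value is arbitrary, and this toy model contains no multiple-pass term, so the four-cycle amplitude is essentially zero.

```python
import numpy as np

def polarizer_mueller(theta):
    """Mueller matrix of an ideal linear polarizer with its axis at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([[1,     c,     s,  0],
                           [c, c * c, c * s,  0],
                           [s, c * s, s * s,  0],
                           [0,     0,     0,  0]])

# Simple instrument model: a weak partial polarizer (diattenuation d) ahead of the detector.
d = 0.03
instrument = np.array([[1, d, 0, 0],
                       [d, 1, 0, 0],
                       [0, 0, np.sqrt(1 - d**2), 0],
                       [0, 0, 0, np.sqrt(1 - d**2)]])

unpolarized = np.array([1.0, 0.0, 0.0, 0.0])
angles = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
signal = np.array([(instrument @ polarizer_mueller(t) @ unpolarized)[0] for t in angles])

spectrum = np.abs(np.fft.rfft(signal)) / signal.size
print("2-cycle amplitude:", 2 * spectrum[2], " 4-cycle amplitude:", 2 * spectrum[4])
```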

  5. HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.; Lessor, D.L.

    1987-09-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.

  6. Discriminant content validity: a quantitative methodology for assessing content of theory-based measures, with illustrative applications.

    PubMed

    Johnston, Marie; Dixon, Diane; Hart, Jo; Glidewell, Liz; Schröder, Carin; Pollard, Beth

    2014-05-01

    In studies involving theoretical constructs, it is important that measures have good content validity and that there is not contamination of measures by content from other constructs. While reliability and construct validity are routinely reported, to date, there has not been a satisfactory, transparent, and systematic method of assessing and reporting content validity. In this paper, we describe a methodology of discriminant content validity (DCV) and illustrate its application in three studies. Discriminant content validity involves six steps: construct definition, item selection, judge identification, judgement format, single-sample test of content validity, and assessment of discriminant items. In three studies, these steps were applied to a measure of illness perceptions (IPQ-R) and control cognitions. The IPQ-R performed well, with most items being purely related to their target construct, although timeline and consequences had small problems. By contrast, the study of control cognitions identified problems in measuring constructs independently. In the final study, direct estimation response formats for theory of planned behaviour constructs were found to have as good DCV as Likert format. The DCV method allowed quantitative assessment of each item and can therefore inform the content validity of the measures assessed. The methods can be applied to assess content validity before or after collecting data to select the appropriate items to measure theoretical constructs. Further, the data reported for each item in Appendix S1 can be used in item or measure selection. Statement of contribution What is already known on this subject? There are agreed methods of assessing and reporting construct validity of measures of theoretical constructs, but not their content validity. Content validity is rarely reported in a systematic and transparent manner. What does this study add? The paper proposes discriminant content validity (DCV), a systematic and transparent method of assessing and reporting whether items assess the intended theoretical construct and only that construct. In three studies, DCV was applied to measures of illness perceptions, control cognitions, and theory of planned behaviour response formats. Appendix S1 gives content validity indices for each item of each questionnaire investigated. Discriminant content validity is ideally applied while the measure is being developed, before using it to measure the construct(s), but can also be applied after using a measure. © 2014 The British Psychological Society.

  7. How Many Batches Are Needed for Process Validation under the New FDA Guidance?

    PubMed

    Yang, Harry

    2013-01-01

    The newly updated FDA Guidance for Industry on Process Validation: General Principles and Practices ushers in a life cycle approach to process validation. While the guidance no longer considers the use of traditional three-batch validation appropriate, it does not prescribe the number of validation batches for a prospective validation protocol, nor does it provide specific methods to determine it. This potentially could leave manufacturers in a quandary. In this paper, I develop a Bayesian method to address the issue. By combining process knowledge gained from Stage 1 Process Design (PD) with expected outcomes of Stage 2 Process Performance Qualification (PPQ), the number of validation batches for PPQ is determined to provide a high level of assurance that the process will consistently produce future batches meeting quality standards. Several examples based on simulated data are presented to illustrate the use of the Bayesian method in helping manufacturers make risk-based decisions for Stage 2 PPQ, and they highlight the advantages of the method over traditional Frequentist approaches. The discussions in the paper lend support for a life cycle and risk-based approach to process validation recommended in the new FDA guidance.
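    A hedged sketch of the kind of Bayesian calculation described above (not necessarily the paper's exact formulation): place a Beta prior on the batch conformance probability using Stage 1 knowledge, assume all Stage 2 PPQ batches conform, and find the smallest number of batches that yields a stated posterior assurance that the conformance rate exceeds a target. The prior parameters, target rate, and assurance level below are illustrative assumptions.

```python
# Hedged Bayesian sketch for sizing Stage 2 PPQ (illustrative numbers only):
# smallest n such that, if all n PPQ batches conform, the posterior
# probability that the batch conformance rate exceeds 90% is at least 90%.
from scipy.stats import beta

a, b = 18.0, 2.0        # Beta prior from Stage 1 knowledge (assumed, mean ~0.90)
target_rate = 0.90      # required long-run conformance rate
assurance = 0.90        # required posterior probability of exceeding it

for n in range(1, 31):
    posterior = beta(a + n, b)            # posterior after n conforming batches
    prob = posterior.sf(target_rate)      # P(conformance rate > target | data)
    if prob >= assurance:
        print(f"{n} conforming PPQ batches give posterior assurance {prob:.3f}")
        break
```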

  8. A musculoskeletal model for the lumbar spine.

    PubMed

    Christophy, Miguel; Faruk Senan, Nur Adila; Lotz, Jeffrey C; O'Reilly, Oliver M

    2012-01-01

    A new musculoskeletal model for the lumbar spine is described in this paper. This model features a rigid pelvis and sacrum, the five lumbar vertebrae, and a rigid torso consisting of a lumped thoracic spine and ribcage. The motion of the individual lumbar vertebrae was defined as a fraction of the net lumbar movement about the three rotational degrees of freedom: flexion-extension, lateral bending, and axial rotation. Additionally, the eight main muscle groups of the lumbar spine were incorporated using 238 muscle fascicles, with prescriptions for the parameters in the Hill-type muscle models obtained with the help of an extensive literature survey. The features of the model include the abilities to predict joint reactions, muscle forces, and muscle activation patterns. To illustrate the capabilities of the model and validate its physiological similarity, the model's predictions for the moment arms of the muscles are shown for a range of flexion-extension motions of the lower back. The model uses the OpenSim platform and is freely available at https://www.simtk.org/home/lumbarspine to other spinal researchers interested in analyzing the kinematics of the spine. The model can also be integrated with existing OpenSim models to build more comprehensive models of the human body.

  9. Economic modeling of HIV treatments.

    PubMed

    Simpson, Kit N

    2010-05-01

    To review the general literature on microeconomic modeling and key points that must be considered in the general assessment of economic modeling reports; to discuss the evolution of HIV economic models and identify models that illustrate this development over time, as well as examples of current studies; and to recommend improvements in HIV economic modeling. Recent economic modeling studies of HIV include examinations of scaling up antiretroviral (ARV) therapy in South Africa, screening prior to use of abacavir, preexposure prophylaxis, early start of ARV therapy in developing countries, and cost-effectiveness comparisons of specific ARV drugs using data from clinical trials. These studies all used extensively published second-generation Markov models in their analyses. There have been attempts to simplify approaches to cost-effectiveness estimates by using simple decision trees or cost-effectiveness calculations with short time horizons. However, these approaches leave out important cumulative economic effects that will not appear early in a treatment. Many economic modeling studies were identified in the 'gray' literature, but limited descriptions precluded an assessment of their adherence to modeling guidelines, and thus of the validity of their findings. There is a need for developing third-generation models to accommodate new knowledge about adherence, adverse effects, and viral resistance.
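    To make the "second-generation Markov model" structure mentioned above concrete, the sketch below runs a deliberately simplified three-state Markov cohort model (suppressed, failing, dead) and accumulates discounted costs and QALYs. The states, transition probabilities, costs, and utilities are invented for illustration and are not taken from any cited HIV study.

```python
# Minimal Markov cohort sketch (all inputs are invented placeholders).
import numpy as np

P = np.array([[0.92, 0.06, 0.02],   # annual transitions: suppressed, failing, dead
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])
annual_cost = np.array([9000.0, 14000.0, 0.0])   # cost per person-year in each state
utility = np.array([0.85, 0.70, 0.0])            # QALY weight of each state
discount = 0.03

cohort = np.array([1.0, 0.0, 0.0])               # everyone starts virally suppressed
total_cost = total_qaly = 0.0
for year in range(30):
    d = 1.0 / (1.0 + discount) ** year
    total_cost += d * (cohort @ annual_cost)
    total_qaly += d * (cohort @ utility)
    cohort = cohort @ P                          # advance the cohort one annual cycle

print(f"discounted lifetime cost per patient:  {total_cost:,.0f}")
print(f"discounted lifetime QALYs per patient: {total_qaly:.2f}")
```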

  10. Signal Trees: Communicating Attribution of Climate Change Impacts Through Causal Chain Illustrations

    NASA Astrophysics Data System (ADS)

    Cutting, H.

    2016-12-01

    Communicating the attribution of current climate change impacts is a key task for engagement with the general public, news media, and policy makers, particularly as climate events unfold in real time. The IPCC WGII in AR5 validated the use of causal chain illustrations to depict attribution of individual climate change impacts. Climate Signals, an online digital platform for mapping and cataloging climate change impacts (launched in May of 2016), explores the use of such illustrations for communicating attribution. The Climate Signals project has developed semi-automated graphing software to produce custom attribution trees for numerous climate change events. This effort offers lessons for engagement of the general public and policy makers in the attribution of climate change impacts.

  11. Designing Illustrations for CBVE Technical Procedures.

    ERIC Educational Resources Information Center

    Laugen, Ronald C.

    A model was formulated for developing functional illustrations for text-based competency-based vocational education (CBVE) instructional materials. The proposed model contained four prescriptive steps that address the events of instruction to be provided or supported and the locations, content, and learning cues for each illustration. Usefulness…

  12. Atom-type-based AI topological descriptors: application in structure-boiling point correlations of oxo organic compounds.

    PubMed

    Ren, Biye

    2003-01-01

    Structure-boiling point relationships are studied for a series of oxo organic compounds by means of multiple linear regression (MLR) analysis. Excellent MLR models based on the recently introduced Xu index and the atom-type-based AI indices are obtained for the two subsets containing respectively 77 ethers and 107 carbonyl compounds and for a combined set of 184 oxo compounds. The best models are tested using leave-one-out cross-validation and an external test set, respectively. The MLR model produces a correlation coefficient of r = 0.9977 and a standard error of s = 3.99 degrees C for the training set of 184 compounds, r(cv) = 0.9974 and s(cv) = 4.16 degrees C for the cross-validation set, and r(pred) = 0.9949 and s(pred) = 4.38 degrees C for the prediction set of 21 compounds. For the two subsets containing respectively 77 ethers and 107 carbonyl compounds, the quality of the models is further improved: the standard errors are reduced to 3.30 and 3.02 degrees C, respectively. Furthermore, the results obtained from this study indicate that the boiling points of the studied oxo compounds depend predominantly on molecular size and also on individual atom types, especially oxygen heteroatoms, owing to strong polar interactions between molecules. These excellent structure-boiling point models not only provide profound insights into the role of structural features in a molecule but also illustrate the usefulness of these indices in QSPR/QSAR modeling of complex compounds.
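    The fitting-and-validation workflow described above can be sketched generically as multiple linear regression with leave-one-out cross-validation, as below. The descriptor matrix and boiling points are random placeholders; the Xu and AI indices themselves are not computed here.

```python
# Generic MLR + leave-one-out cross-validation sketch (placeholder data,
# not the Xu/AI descriptors or the boiling-point dataset of the study).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(184, 6))                          # stand-in descriptor matrix
coef = np.array([40.0, 8.0, 5.0, 3.0, 2.0, 1.0])
y = 150 + X @ coef + rng.normal(scale=4.0, size=184)   # synthetic boiling points, deg C

model = LinearRegression().fit(X, y)
r = np.corrcoef(model.predict(X), y)[0, 1]
s = np.sqrt(np.mean((model.predict(X) - y) ** 2))

y_cv = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
r_cv = np.corrcoef(y_cv, y)[0, 1]
s_cv = np.sqrt(np.mean((y_cv - y) ** 2))

print(f"training:      r = {r:.4f}, s = {s:.2f} degC")
print(f"leave-one-out: r = {r_cv:.4f}, s = {s_cv:.2f} degC")
```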

  13. On Finding and Using Identifiable Parameter Combinations in Nonlinear Dynamic Systems Biology Models and COMBOS: A Novel Web Implementation

    PubMed Central

    DiStefano, Joseph

    2014-01-01

    Parameter identifiability problems can plague biomodelers when they reach the quantification stage of development, even for relatively simple models. Structural identifiability (SI) is the primary question, usually understood as knowing which of the P unknown biomodel parameters p_1, ..., p_i, ..., p_P are (and which are not) quantifiable in principle from particular input-output (I-O) biodata. It is not widely appreciated that the same database also can provide quantitative information about the structurally unidentifiable (not quantifiable) subset, in the form of explicit algebraic relationships among the unidentifiable p_i. Importantly, this is a first step toward finding what else is needed to quantify particular unidentifiable parameters of interest from new I-O experiments. We further develop, implement, and exemplify novel algorithms that address and solve the SI problem for a practical class of ordinary differential equation (ODE) systems biology models, as a user-friendly and universally accessible web application (app), COMBOS. Users provide the structural ODE and output measurement models in one of two standard forms to a remote server via their web browser. COMBOS provides a list of uniquely and non-uniquely SI model parameters and, importantly, the combinations of parameters that are not individually SI. If non-uniquely SI, it also provides the maximum number of different solutions, with important practical implications. The behind-the-scenes symbolic differential algebra algorithms are based on computing Gröbner bases of model attributes established after some algebraic transformations, using the computer-algebra system Maxima. COMBOS was developed for facile instructional and research use as well as modeling. We use it in the classroom to illustrate SI analysis, and we have simplified complex models of tumor suppressor p53 and hormone regulation based on explicit computation of parameter combinations. It is illustrated and validated here for models of moderate complexity, with and without initial conditions. Built-in examples include unidentifiable 2- to 4-compartment and HIV dynamics models. PMID:25350289

  14. The effect of illustrations on patient comprehension of medication instruction labels.

    PubMed

    Hwang, Stephen W; Tram, Carolyn Q N; Knarr, Nadia

    2005-06-16

    Labels with special instructions regarding how a prescription medication should be taken or its possible side effects are often applied to pill bottles. The goal of this study was to determine whether the addition of illustrations to these labels affects patient comprehension. Study participants (N = 130) were enrolled by approaching patients at three family practice clinics in Toronto, Canada. Participants were asked to interpret two sets of medication instruction labels, the first with text only and the second with the same text accompanied by illustrations. Two investigators coded participants' responses as incorrect, partially correct, or completely correct. Health literacy levels of participants were measured using a validated instrument, the REALM test. All participants gave a completely correct interpretation for three out of five instruction labels, regardless of whether illustrations were present or not. For the two most complex labels, only 34-55% of interpretations of the text-only version were completely correct. The addition of illustrations was associated with improved performance in 5-7% of subjects and worsened performance in 7-9% of subjects. The commonly-used illustrations on the medication labels used in this study were of little or no use in improving patients' comprehension of the accompanying written instructions.

  15. Identifying the behavioural characteristics of clay cliffs using intensive monitoring and geotechnical numerical modelling

    NASA Astrophysics Data System (ADS)

    Quinn, J. D.; Rosser, N. J.; Murphy, W.; Lawrence, J. A.

    2010-08-01

    Coastal monitoring is routinely undertaken to provide an archival record of cliff-line movement that can be used in the development and validation of predictive coast retreat and evolution models. However, coastal monitoring is often purely quantitative in nature, and financial necessity requires deployment over extensive coastal sections. As a result, for local site conditions in particular, only limited geomorphological data are available or included during the development of such predictive models. This has resulted in many current models incorporating a simplistic or generalised representation of cliff behaviour, an approach that progressively loses local credibility when deployed over extensive heterogeneous coastlines. This study addresses this situation at a site of extreme coastline retreat, Holderness, UK, through the application of intensive monitoring of six representative cliff sections nested within a general geomorphological appraisal of the wider coastline as a whole. The data from these surveys have been used to validate a finite difference-based geotechnical modelling assessment of clay cliff stability. Once validated, the geotechnical model was used to simulate a range of scenarios sufficient to represent the range of topographic, hydrogeological, geological, and littoral conditions exhibited throughout the region. Our assessment identified that cliff retreat occurs through the combined influence of direct marine erosion of the cliff and shallow, structurally controlled failures or substantial mass failures. Critically, the predisposition to any one of these failure mechanisms arises principally as a result of initial cliff height. The results of the numerical modelling have been combined into an empirical slope model that derives the rate of landslide-induced retreat that would arise from mass failures under various future scenarios. Results of this study can be used in the selection and development of retreat models at coastlines of similar physiographic setting to that found at Holderness. The results represent a key step in linking material deformation properties to the processes of cliff change and the subsequent range of landforms found on clay cliffs. As such, the results could also be used more generally to illustrate the likely cliff behaviour of other soft rock coastlines.

  16. The Model-Building Process in Introductory College Geography: An Illustrative Example

    ERIC Educational Resources Information Center

    Cadwallader, Martin

    1978-01-01

    Illustrates the five elements of conceptual models by developing a model of consumer behavior in choosing among alternative supermarkets. The elements are: identifying the problem, constructing a conceptual model, translating it into a symbolic model, operationalizing the model, and testing. (Author/AV)

  17. A validated methodology for the prediction of heating and cooling energy demand for buildings within the Urban Heat Island: Case-study of London

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolokotroni, Maria; Bhuiyan, Saiful; Davies, Michael

    2010-12-15

    This paper describes a method for predicting air temperatures within the Urban Heat Island at discrete locations based on input data from one meteorological station for the time the prediction is required and historic measured air temperatures within the city. It uses London as a case-study to describe the method and its applications. The prediction model is based on Artificial Neural Network (ANN) modelling and is termed the London Site Specific Air Temperature (LSSAT) predictor. The temporal and spatial validity of the model was tested using data measured 8 years later than the original dataset; it was found that site-specific hourly air temperature prediction provides acceptable accuracy and improves considerably for average monthly values. It is thus a very reliable tool for use as part of the process of predicting heating and cooling loads for urban buildings. This is illustrated by the computation of Heating Degree Days (HDD) and Cooling Degree Hours (CDH) for a West-East transect within London. The described method could be used for any city for which historic hourly air temperatures are available for a number of locations; for example, air pollution measuring sites, common in many cities, typically measure air temperature on an hourly basis. (author)
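    The Heating Degree Day (HDD) and Cooling Degree Hour (CDH) computation mentioned above can be sketched as follows for a generic hourly air-temperature series. The series here is synthetic rather than an LSSAT prediction, and the base temperatures are common UK conventions assumed for illustration.

```python
# HDD / CDH sketch for an hourly temperature series (synthetic data,
# assumed base temperatures of 15.5 degC for heating and 22 degC for cooling).
import numpy as np

rng = np.random.default_rng(1)
hourly_temp = (12 + 8 * np.sin(np.linspace(0, 2 * np.pi, 24 * 365))
               + rng.normal(scale=3.0, size=24 * 365))       # one synthetic year, degC

hdd_base, cdh_base = 15.5, 22.0

daily_mean = hourly_temp.reshape(365, 24).mean(axis=1)
hdd = np.clip(hdd_base - daily_mean, 0, None).sum()          # degree-days below the heating base
cdh = np.clip(hourly_temp - cdh_base, 0, None).sum()         # degree-hours above the cooling base

print(f"annual HDD: {hdd:.0f} K day, annual CDH: {cdh:.0f} K h")
```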

  18. Human skeletal muscle behavior in vivo: Finite element implementation, experiment, and passive mechanical characterization.

    PubMed

    Clemen, Christof B; Benderoth, Günther E K; Schmidt, Andreas; Hübner, Frank; Vogl, Thomas J; Silber, Gerhard

    2017-01-01

    In this study, useful methods for active human skeletal muscle material parameter determination are provided. First, a straightforward approach to the implementation of a transversely isotropic hyperelastic continuum mechanical material model in an invariant formulation is presented. This procedure is found to be feasible even if the strain energy is formulated in terms of invariants other than those predetermined by the software's requirements. Next, an appropriate experimental setup for the observation of activation-dependent material behavior, corresponding data acquisition, and evaluation is given. Geometry reconstruction based on magnetic resonance imaging of different deformation states is used to generate realistic, subject-specific finite element models of the upper arm. Using the deterministic SIMPLEX optimization strategy, a convenient quasi-static passive-elastic material characterization is pursued; the results of this approach used to characterize the behavior of human biceps in vivo indicate the feasibility of the illustrated methods to identify active material parameters comprising multiple loading modes. A comparison of a contact simulation incorporating the optimized parameters to a reconstructed deformed geometry of an indented upper arm shows the validity of the obtained results regarding deformation scenarios perpendicular to the effective direction of the nonactivated biceps. However, for a valid, activatable, general-purpose material characterization, the material model needs some modifications as well as a multicriteria optimization of the force-displacement data for different loading modes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Evolving Waves and Turbulence in the Outer Corona and Inner Heliosphere: The Accelerating Expanding Box

    NASA Astrophysics Data System (ADS)

    Tenerani, Anna; Velli, Marco

    2017-07-01

    Alfvénic fluctuations in the solar wind display many properties reflecting an ongoing nonlinear cascade, e.g., a well-defined spectrum in frequency, together with some characteristics more commonly associated with the linear propagation of waves from the Sun, such as the variation of fluctuation amplitude with distance, dominated by solar wind expansion effects. Therefore, both nonlinearities and expansion must be included simultaneously in any successful model of solar wind turbulence evolution. Because of the disparate spatial scales involved, direct numerical simulations of turbulence in the solar wind represent an arduous task, especially if one wants to go beyond the incompressible approximation. Indeed, most simulations neglect solar wind expansion effects entirely. Here we develop a numerical model to simulate turbulent fluctuations from the outer corona to 1 au and beyond, including the sub-Alfvénic corona. The accelerating expanding box (AEB) extends the validity of previous expanding box models by taking into account both the acceleration of the solar wind and the inhomogeneity of background density and magnetic field. Our method incorporates a background accelerating wind within a magnetic field that naturally follows the Parker spiral evolution using a two-scale analysis in which the macroscopic spatial effect coupling fluctuations with background gradients becomes a time-dependent coupling term in a homogeneous box. In this paper we describe the AEB model in detail and discuss its main properties, illustrating its validity by studying Alfvén wave propagation across the Alfvén critical point.

  20. Design of novel quinazolinone derivatives as inhibitors for 5HT7 receptor.

    PubMed

    Chitta, Aparna; Jatavath, Mohan Babu; Fatima, Sabiha; Manga, Vijjulatha

    2012-02-01

    To study the pharmacophore properties of quinazolinone derivatives as 5HT(7) inhibitors, 3D QSAR methodologies, namely Comparative Molecular Field Analysis (CoMFA) and Comparative Molecular Similarity Indices Analysis (CoMSIA), were applied; partial least squares (PLS) analysis was performed and QSAR models were generated. The derived models showed good statistical reliability in terms of predicting the 5HT(7) inhibitory activity of the quinazolinone derivatives, based on molecular property fields such as steric, electrostatic, hydrophobic, hydrogen bond donor, and hydrogen bond acceptor fields. This is evident from statistical parameters such as q(2) (cross-validated correlation coefficient) values of 0.642 and 0.602 and r(2) (conventional correlation coefficient) values of 0.937 and 0.908 for CoMFA and CoMSIA, respectively. The predictive ability of the models to determine 5HT(7) antagonistic activity was validated using a test set of 26 molecules that were not included in the training set, and the predictive r(2) values obtained for the test set were 0.512 and 0.541. Further, the results of the derived models are illustrated by means of contour maps, which give an insight into the interaction of the drug with the receptor. The molecular fields so obtained served as the basis for the design of twenty new ligands. In addition, ADME (Absorption, Distribution, Metabolism, and Elimination) properties have been calculated in order to predict the relevant pharmaceutical properties, and the results are in conformity with required drug-like properties.
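    The PLS step behind CoMFA/CoMSIA-style 3D QSAR can be sketched as below: fit a partial least squares model on a field-descriptor matrix and report the conventional r2 alongside the leave-one-out q2. The data and the number of components here are illustrative placeholders, not values from the study.

```python
# PLS regression with conventional r2 and leave-one-out q2 (placeholder data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 200))                          # stand-in molecular field columns
y = X @ rng.normal(size=200) * 0.05 + rng.normal(scale=0.3, size=60)

pls = PLSRegression(n_components=5).fit(X, y)
y_fit = pls.predict(X).ravel()
r2 = 1 - np.sum((y - y_fit) ** 2) / np.sum((y - y.mean()) ** 2)

y_cv = cross_val_predict(PLSRegression(n_components=5), X, y, cv=LeaveOneOut()).ravel()
q2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"conventional r2 = {r2:.3f}, cross-validated q2 = {q2:.3f}")
```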

  1. Fluid-structure interaction including volumetric coupling with homogenised subdomains for modeling respiratory mechanics.

    PubMed

    Yoshihara, Lena; Roth, Christian J; Wall, Wolfgang A

    2017-04-01

    In this article, a novel approach is presented for combining standard fluid-structure interaction with additional volumetric constraints to model fluid flow into and from homogenised solid domains. The proposed algorithm is particularly interesting for investigations in the field of respiratory mechanics as it enables the mutual coupling of airflow in the conducting part and local tissue deformation in the respiratory part of the lung by means of a volume constraint. In combination with a classical monolithic fluid-structure interaction approach, a comprehensive model of the human lung can be established that will be useful to gain new insights into respiratory mechanics in health and disease. To illustrate the validity and versatility of the novel approach, three numerical examples including a patient-specific lung model are presented. The proposed algorithm proves its capability of computing clinically relevant airflow distribution and tissue strain data at a level of detail that is not yet achievable, neither with current imaging techniques nor with existing computational models. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Integration of logistic regression, Markov chain and cellular automata models to simulate urban expansion

    NASA Astrophysics Data System (ADS)

    Jokar Arsanjani, Jamal; Helbich, Marco; Kainz, Wolfgang; Darvishi Boloorani, Ali

    2013-04-01

    This research analyses the suburban expansion in the metropolitan area of Tehran, Iran. A hybrid model consisting of a logistic regression model, Markov chain (MC), and cellular automata (CA) was designed to improve the performance of the standard logistic regression model. Environmental and socio-economic variables dealing with urban sprawl were operationalised to create a probability surface of spatiotemporal states of built-up land use for the years 2006, 2016, and 2026. For validation, the model was evaluated by means of relative operating characteristic values for different sets of variables. The approach was calibrated for 2006 by cross-comparing actual and simulated land use maps. The achieved outcomes represent a match of 89% between the simulated and actual maps of 2006, which was considered satisfactory for the calibration process. Thereafter, the calibrated hybrid approach was implemented for forthcoming years. Finally, future land use maps for 2016 and 2026 were predicted by means of this hybrid approach. The simulated maps illustrate a new wave of suburban development in the vicinity of Tehran at the western border of the metropolis during the next decades.
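    One transition step of the hybrid approach described above can be sketched as follows: a logistic-regression suitability surface is weighted by a neighbourhood effect, and the Markov-chain stage supplies the number of cells allowed to convert in the step. All inputs below are synthetic placeholders, not the Tehran data.

```python
# One cellular-automata conversion step (synthetic inputs only).
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)
built = (rng.random((100, 100)) < 0.15).astype(float)   # current built-up map (1 = built)
suitability = rng.random((100, 100))                    # stand-in logistic-regression probability surface
demand = 400                                            # cells to convert this step (from the Markov chain)

neighbourhood = uniform_filter(built, size=3)           # share of built-up cells in a 3x3 window
score = suitability * neighbourhood * (1 - built)       # only undeveloped cells can convert
threshold = np.sort(score.ravel())[-demand]             # keep the `demand` highest-scoring cells
new_built = np.where(score >= threshold, 1.0, built)

print("newly converted cells:", int(new_built.sum() - built.sum()))
```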

  3. H∞ output tracking control of uncertain and disturbed nonlinear systems based on neural network model

    NASA Astrophysics Data System (ADS)

    Li, Chengcheng; Li, Yuefeng; Wang, Guanglin

    2017-07-01

    The work presented in this paper seeks to address the tracking problem for uncertain continuous nonlinear systems with external disturbances. The objective is to obtain a model that uses a reference-based output feedback tracking control law. The control scheme is based on neural networks and a linear difference inclusion (LDI) model, and a PDC structure and H∞ performance criterion are used to attenuate external disturbances. The stability of the whole closed-loop model is investigated using the well-known quadratic Lyapunov function. The key principles of the proposed approach are as follows: neural networks are first used to approximate nonlinearities, to enable a nonlinear system to then be represented as a linearised LDI model. An LMI (linear matrix inequality) formula is obtained for uncertain and disturbed linear systems. This formula enables a solution to be obtained through an interior point optimisation method for some nonlinear output tracking control problems. Finally, simulations and comparisons are provided on two practical examples to illustrate the validity and effectiveness of the proposed method.

  4. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.

  5. Micro-porous layer stochastic reconstruction and transport parameter determination

    NASA Astrophysics Data System (ADS)

    El Hannach, Mohamed; Singh, Randhir; Djilali, Ned; Kjeang, Erik

    2015-05-01

    The Micro-Porous Layer (MPL) is a porous, thin layer commonly used in fuel cells at the interfaces between the catalyst layers and gas diffusion media. It is generally made from spherical carbon nanoparticles and PTFE, which acts as a hydrophobic agent. The scale and brittle nature of the MPL structure make it challenging to study experimentally. In the present work, a 3D stochastic model is developed to virtually reconstruct the MPL structure. The carbon nanoparticle and PTFE phases are fully distinguished by the algorithm. The model is shown to capture the actual structural morphology of the MPL and is validated by comparing the results to available experimental data. The model shows good capability in generating a realistic MPL using a set of parameters introduced to capture specific morphological features of the MPL. A numerical model that resolves diffusive transport at the pore scale is used to compute the effective transport properties of the reconstructed MPLs. A parametric study is conducted to illustrate the capability of the model as an MPL design tool that can be used to guide and optimize the functionality of the material.

  6. An adaptive state of charge estimation approach for lithium-ion series-connected battery system

    NASA Astrophysics Data System (ADS)

    Peng, Simin; Zhu, Xuelai; Xing, Yinjiao; Shi, Hongbing; Cai, Xu; Pecht, Michael

    2018-07-01

    Due to the incorrect or unknown noise statistics of a battery system and its cell-to-cell variations, state of charge (SOC) estimation of a lithium-ion series-connected battery system is usually inaccurate or even divergent using model-based methods, such as the extended Kalman filter (EKF) and unscented Kalman filter (UKF). To resolve this problem, an adaptive unscented Kalman filter (AUKF) based on a noise statistics estimator and a model parameter regulator is developed to accurately estimate the SOC of a series-connected battery system. An equivalent circuit model is first built based on the model parameter regulator, which illustrates the influence of cell-to-cell variation on the battery system. A noise statistics estimator is then used to adaptively attain the estimated noise statistics for the AUKF when its prior noise statistics are not accurate or exactly Gaussian. The accuracy and effectiveness of the SOC estimation method are validated by comparing the developed AUKF and the UKF when the model and measurement noise statistics are inaccurate. Compared with the UKF and EKF, the developed method shows the highest SOC estimation accuracy.

  7. Acoustic scaling: A re-evaluation of the acoustic model of Manchester Studio 7

    NASA Astrophysics Data System (ADS)

    Walker, R.

    1984-12-01

    The reasons for the reconstruction and re-evaluation of the acoustic scale model of a large music studio are discussed. The design and construction of the model using mechanical and structural considerations, rather than purely acoustic absorption criteria, is described and the results obtained are given. The results confirm that structural elements within the studio gave rise to unexpected and unwanted low-frequency acoustic absorption. The results also show that, at least for the relatively well understood mechanisms of sound energy absorption, physical modelling of the structural and internal components gives an acoustically accurate scale model, within the usual tolerances of acoustic design. The poor reliability of measurements of acoustic absorption coefficients is well illustrated. The conclusion is reached that such acoustic scale modelling is a valid and, for large scale projects, financially justifiable technique for predicting fundamental acoustic effects. It is not appropriate for the prediction of fine details because such small details are unlikely to be reproduced exactly at a different size without extensive measurements of the material's performance at both scales.

  8. Non parametric, self organizing, scalable modeling of spatiotemporal inputs: the sign language paradigm.

    PubMed

    Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S

    2012-12-01

    Modeling and recognizing spatiotemporal, as opposed to static, input is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at the feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with the temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupling manner. Self Organizing Maps (SOMs) model the spatial aspect of the problem, and Markov models capture its temporal counterpart. Incorporation of adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs performed by different native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations, both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Using 3D dynamic cartography and hydrological modelling for linear streamflow mapping

    NASA Astrophysics Data System (ADS)

    Drogue, G.; Pfister, L.; Leviandier, T.; Humbert, J.; Hoffmann, L.; El Idrissi, A.; Iffly, J.-F.

    2002-10-01

    This paper presents a regionalization methodology and an original representation of the downstream variation of daily streamflow using a conceptual rainfall-runoff model (HRM) and the 3D visualization tools of the GIS ArcView. The regionalization of the parameters of the HRM model was obtained by fitting simultaneously the runoff series from five sub-basins of the Alzette river basin (Grand Duchy of Luxembourg) according to the permeability of geological formations. After validating the transposability of the regional parameter values on five test basins, streamflow series were simulated with the model at ungauged sites in one medium-sized, geologically contrasted test basin and interpolated assuming a linear increase of streamflow between modelling points. 3D spatio-temporal cartography of mean annual and high raw and specific discharges is illustrated. During a severe flood, the propagation of the flood waves in the different parts of the stream network shows an important contribution of sub-basins lying on impervious geological formations (direct runoff) compared with those including permeable geological formations, which have a more contrasted hydrological response. The effect of spatial variability of rainfall is clearly perceptible.

  10. Personalized dynamic prediction of death according to tumour progression and high-dimensional genetic factors: Meta-analysis with a joint model.

    PubMed

    Emura, Takeshi; Nakatochi, Masahiro; Matsui, Shigeyuki; Michimae, Hirofumi; Rondeau, Virginie

    2017-01-01

    Developing a personalized risk prediction model of death is fundamental for improving patient care and touches on the realm of personalized medicine. The increasing availability of genomic information and large-scale meta-analytic data sets for clinicians has motivated the extension of traditional survival prediction based on the Cox proportional hazards model. The aim of our paper is to develop a personalized risk prediction formula for death according to genetic factors and dynamic tumour progression status based on meta-analytic data. To this end, we extend the existing joint frailty-copula model to a model allowing for high-dimensional genetic factors. In addition, we propose a dynamic prediction formula to predict death given tumour progression events possibly occurring after treatment or surgery. For clinical use, we implement the computation software of the prediction formula in the joint.Cox R package. We also develop a tool to validate the performance of the prediction formula by assessing the prediction error. We illustrate the method with the meta-analysis of individual patient data on ovarian cancer patients.

  11. Testing multiple statistical hypotheses resulted in spurious associations: a study of astrological signs and health.

    PubMed

    Austin, Peter C; Mamdani, Muhammad M; Juurlink, David N; Hux, Janet E

    2006-09-01

    To illustrate how multiple hypotheses testing can produce associations with no clinical plausibility. We conducted a study of all 10,674,945 residents of Ontario aged between 18 and 100 years in 2000. Residents were randomly assigned to equally sized derivation and validation cohorts and classified according to their astrological sign. Using the derivation cohort, we searched through 223 of the most common diagnoses for hospitalization until we identified two for which subjects born under one astrological sign had a significantly higher probability of hospitalization compared to subjects born under the remaining signs combined (P<0.05). We tested these 24 associations in the independent validation cohort. Residents born under Leo had a higher probability of gastrointestinal hemorrhage (P=0.0447), while Sagittarians had a higher probability of humerus fracture (P=0.0123) compared to all other signs combined. After adjusting the significance level to account for multiple comparisons, none of the identified associations remained significant in either the derivation or validation cohort. Our analyses illustrate how the testing of multiple, non-prespecified hypotheses increases the likelihood of detecting implausible associations. Our findings have important implications for the analysis and interpretation of clinical studies.
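    The multiple-testing point made above is easy to reproduce in simulation, as sketched below: scan many purely random sign-outcome associations in a derivation sample and count how many nominal "discoveries" survive a Bonferroni adjustment. The sample size, number of outcomes, and event rate are illustrative assumptions, not the Ontario data.

```python
# Multiple-testing simulation under the null (illustrative parameters).
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2006)
n, n_outcomes = 20000, 223
signs = rng.integers(0, 12, size=n)                 # 12 astrological signs

p_values = []
for _ in range(n_outcomes):
    outcome = rng.random(n) < 0.02                  # ~2% event rate, independent of sign
    table = np.array([[np.sum((signs == s) & outcome),
                       np.sum((signs == s) & ~outcome)] for s in range(12)])
    p_values.append(chi2_contingency(table)[1])     # chi-square test of sign vs outcome

p_values = np.array(p_values)
print("nominally significant (p < 0.05):", int(np.sum(p_values < 0.05)))
print("significant after Bonferroni:    ", int(np.sum(p_values < 0.05 / n_outcomes)))
```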

  12. In Silico Evaluation of Pharmacokinetic Optimization for Antimitogram-Based Clinical Trials.

    PubMed

    Haviari, Skerdi; You, Benoît; Tod, Michel

    2018-04-01

    Antimitograms are prototype in vitro tests for evaluating chemotherapeutic efficacy using patient-derived primary cancer cells. These tests might help optimize treatment from a pharmacodynamic standpoint by guiding treatment selection. However, they are technically challenging and require refinements and trials to demonstrate benefit to be widely used. In this study, we performed simulations aimed at exploring how to validate antimitograms and how to complement them by pharmacokinetic optimization. A generic model of advanced cancer, including pharmacokinetic-pharmacodynamic monitoring, was used to link dosing schedules with progression-free survival (PFS), as built from previously validated modules. This model was used to explore different possible situations in terms of pharmacokinetic variability, pharmacodynamic variability, and antimitogram performance. The model recapitulated tumor dynamics and standalone therapeutic drug monitoring efficacy consistent with published clinical results. Simulations showed that combining pharmacokinetic and pharmacodynamic optimization should increase PFS in a synergistic fashion. Simulated data were then used to compute required clinical trial sizes, which were 30% to 90% smaller when pharmacokinetic optimization was added to pharmacodynamic optimization. This improvement was observed even when pharmacokinetic optimization alone exhibited only modest benefit. Overall, our work illustrates the synergy derived from combining antimitograms with therapeutic drug monitoring, permitting a disproportionate reduction of the trial size required to prove a benefit on PFS. Accordingly, we suggest that strategies with benefits too small for standalone clinical trials could be validated in combination in a similar manner. Significance: This work offers a method to reduce the number of patients needed for a clinical trial to prove the hypothesized benefit of a drug to progression-free survival, possibly easing opportunities to evaluate combinations. Cancer Res; 78(7); 1873-82. ©2018 AACR . ©2018 American Association for Cancer Research.

  13. Can linear regression modeling help clinicians in the interpretation of genotypic resistance data? An application to derive a lopinavir-score.

    PubMed

    Cozzi-Lepri, Alessandro; Prosperi, Mattia C F; Kjær, Jesper; Dunn, David; Paredes, Roger; Sabin, Caroline A; Lundgren, Jens D; Phillips, Andrew N; Pillay, Deenan

    2011-01-01

    The question of whether a score for a specific antiretroviral (e.g. lopinavir/r in this analysis) that improves prediction of viral load response given by existing expert-based interpretation systems (IS) could be derived by analyzing the correlation between genotypic data and virological response using statistical methods remains largely unanswered. We used the data of the patients from the UK Collaborative HIV Cohort (UK CHIC) Study for whom genotypic data were stored in the UK HIV Drug Resistance Database (UK HDRD) to construct a training/validation dataset of treatment change episodes (TCE). We used the average square error (ASE) on a 10-fold cross-validation and on a test dataset (the EuroSIDA TCE database) to compare the performance of a newly derived lopinavir/r score with that of the 3 most widely used expert-based interpretation rules (ANRS, HIVDB and Rega). Our analysis identified mutations V82A, I54V, K20I and I62V, which were associated with reduced viral response, and mutations I15V and V91S, which determined lopinavir/r hypersensitivity. All models performed equally well (ASE on test ranging between 1.1 and 1.3, p = 0.34). We fully explored the potential of linear regression to construct a simple predictive model for lopinavir/r-based TCE. Although the performance of our proposed score was similar to that of already existing IS, previously unrecognized lopinavir/r-associated mutations were identified. The analysis illustrates an approach to validation of expert-based IS that could be used in the future for other antiretrovirals and in other settings outside HIV research.
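    The average square error (ASE) comparison described above can be sketched generically with a linear model scored by 10-fold cross-validation, as below. The mutation indicators, effect sizes, and viral-load responses are simulated placeholders, not UK CHIC or EuroSIDA data.

```python
# 10-fold cross-validated ASE for a linear genotype-response model
# (simulated data, illustrative effect sizes).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(11)
mutations = (rng.random((500, 6)) < 0.2).astype(float)       # presence/absence of 6 candidate mutations
effects = np.array([-0.6, -0.4, -0.3, -0.3, 0.2, 0.3])       # assumed log10 viral-load change per mutation
response = 1.5 + mutations @ effects + rng.normal(scale=1.0, size=500)

ase = -cross_val_score(LinearRegression(), mutations, response,
                       scoring="neg_mean_squared_error",
                       cv=KFold(n_splits=10, shuffle=True, random_state=0))
print(f"10-fold cross-validated ASE: {ase.mean():.2f} (+/- {ase.std():.2f})")
```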

  14. 4D Model on Assessing Psychomotor Aspect in Continental Food Processing Practice

    NASA Astrophysics Data System (ADS)

    Nurafiati, P.; Ana, A.; Ratnasusanti, H.; Maulana, I.

    2018-02-01

    This research aims to develop, and to determine observers' responses to, an assessment instrument for students' psychomotor performance in continental food processing practice. The study is development research using the 4D (4P) model, confined to the definition, design, and development stages. The data gained during the research were analyzed descriptively. The research product is an assessment rubric consisting of the performance aspects to be assessed and the performance quality, stated as graded scores from 0 to 4, with a performance description accompanied by a picture illustration for every score. The product was validated and evaluated with respect to material, construction, language, objectivity, systematic structure, and practicability. The results show that the developed assessment instrument for students' psychomotor performance in continental food processing practice received a very good response, with a percentage of 84.47%.

  15. An info-gap application to robust design of a prestressed space structure under epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Hot, Aurélien; Weisser, Thomas; Cogan, Scott

    2017-07-01

    Uncertainty quantification is an integral part of the model validation process and is important to take into account during the design of mechanical systems. Sources of uncertainty are diverse but generally fall into two categories: aleatory, due to random processes, and epistemic, resulting from a lack of knowledge. This work focuses on the behavior of solar arrays in their stowed configuration. To avoid impacts during launch, snubbers are used to prestress the panels. Since the mechanical properties of the snubbers and the associated preload configurations are difficult to characterize precisely, an info-gap approach is proposed to investigate the influence of such uncertainties on design configurations obtained for different values of safety factors. This eventually allows the typical values of these factors to be revised and reevaluated with respect to a targeted robustness level. The proposed methodology is illustrated using a simplified finite element model of a solar array.

  16. Quantifying Adventitious Error in a Covariance Structure as a Random Effect

    PubMed Central

    Wu, Hao; Browne, Michael W.

    2017-01-01

    We present an approach to quantifying errors in covariance structures in which adventitious error, identified as the process underlying the discrepancy between the population and the structured model, is explicitly modeled as a random effect with a distribution, and the dispersion parameter of this distribution to be estimated gives a measure of misspecification. Analytical properties of the resultant procedure are investigated and the measure of misspecification is found to be related to the RMSEA. An algorithm is developed for numerical implementation of the procedure. The consistency and asymptotic sampling distributions of the estimators are established under a new asymptotic paradigm and an assumption weaker than the standard Pitman drift assumption. Simulations validate the asymptotic sampling distributions and demonstrate the importance of accounting for the variations in the parameter estimates due to adventitious error. Two examples are also given as illustrations. PMID:25813463

  17. Three-dimensional finite elements for the analysis of soil contamination using a multiple-porosity approach

    NASA Astrophysics Data System (ADS)

    El-Zein, Abbas; Carter, John P.; Airey, David W.

    2006-06-01

    A three-dimensional finite-element model of contaminant migration in fissured clays or contaminated sand which includes multiple sources of non-equilibrium processes is proposed. The conceptual framework can accommodate a regular network of fissures in 1D, 2D or 3D and immobile solutions in the macro-pores of aggregated topsoils, as well as non-equilibrium sorption. A Galerkin weighted-residual statement for the three-dimensional form of the equations in the Laplace domain is formulated. Equations are discretized using linear and quadratic prism elements. The system of algebraic equations is solved in the Laplace domain and solution is inverted to the time domain numerically. The model is validated and its scope is illustrated through the analysis of three problems: a waste repository deeply buried in fissured clay, a storage tank leaking into sand and a sanitary landfill leaching into fissured clay over a sand aquifer.

  18. Temperature corrected-calibration of GRACE's accelerometer

    NASA Astrophysics Data System (ADS)

    Encarnacao, J.; Save, H.; Siemes, C.; Doornbos, E.; Tapley, B. D.

    2017-12-01

    Since April 2011, the thermal control of the accelerometers on board the GRACE satellites has been turned off. The time series of the along-track bias clearly show a drastic change in the behaviour of this parameter, while the calibration model has remained unchanged throughout the entire mission lifetime. In an effort to improve the quality of the gravity field models produced at CSR in future mission-long re-processing of GRACE data, we quantify the added value of different calibration strategies. In one approach, the temperature effects that distort the raw accelerometer measurements collected without thermal control are corrected using the housekeeping temperature readings. In this way, one single calibration strategy can be applied consistently during the whole mission lifetime, since it is valid for the thermal conditions both before and after April 2011. Finally, we illustrate that the resulting calibrated accelerations are suitable for neutral thermospheric density studies.

  19. Computational Fluid Dynamics (CFD): Future role and requirements as viewed by an applied aerodynamicist. [computer systems design

    NASA Technical Reports Server (NTRS)

    Yoshihara, H.

    1978-01-01

    The problem of designing the wing-fuselage configuration of an advanced transonic commercial airliner and the optimization of a supercruiser fighter are sketched, pointing out the essential fluid mechanical phenomena that play an important role. Such problems suggest that for a numerical method to be useful, it must be able to treat highly three dimensional turbulent separations, flows with jet engine exhausts, and complex vehicle configurations. Weaknesses of the two principal tools of the aerodynamicist, the wind tunnel and the computer, suggest a complementing combined use of these tools, which is illustrated by the case of the transonic wing-fuselage design. The anticipated difficulties in developing an adequate turbulent transport model suggest that such an approach may have to suffice for an extended period. On a longer term, experimentation of turbulent transport in meaningful cases must be intensified to provide a data base for both modeling and theory validation purposes.

  20. Bayesian Group Bridge for Bi-level Variable Selection.

    PubMed

    Mallick, Himel; Yi, Nengjun

    2017-06-01

    A Bayesian bi-level variable selection method (BAGB: Bayesian Analysis of Group Bridge) is developed for regularized regression and classification. This new development is motivated by grouped data, where generic variables can be divided into multiple groups, with variables in the same group being mechanistically related or statistically correlated. As an alternative to frequentist group variable selection methods, BAGB incorporates structural information among predictors through a group-wise shrinkage prior. Posterior computation proceeds via an efficient MCMC algorithm. In addition to the usual ease-of-interpretation of hierarchical linear models, the Bayesian formulation produces valid standard errors, a feature that is notably absent in the frequentist framework. Empirical evidence of the attractiveness of the method is illustrated by extensive Monte Carlo simulations and real data analysis. Finally, several extensions of this new approach are presented, providing a unified framework for bi-level variable selection in general models with flexible penalties.
