Sample records for ii model validation

  1. Model validation for control and controller validation in a prediction error identification framework - Part II: illustrations

    Microsoft Academic Search

    Michel Gevers; Xavier Bombois; Brian D. O. Anderson

    2002-01-01

    In this paper, we illustrate our new results on model validation for control and controller validation in a prediction error identification framework, developed in a companion paper (Gevers et al., 2002), through two realistic simulation examples, covering widely different control design applications. The first is the control of a flexible mechanical system (the Landau benchmark example) with a tracking

  2. Model validation for control and controller validation in a prediction error identification framework - Part II: illustrations

    Microsoft Academic Search

    Michel Gevers; Xavier Bombois; Gérard Scorletti; Brian D. O. Anderson

    2003-01-01

    In this paper, we illustrate our new results on model validation for control and controller validation in a prediction error identification framework, developed in a companion paper (Gevers et al., Automatica (2003) 39(3) pii: S0005-1098(02)00234-0), through two realistic simulation examples, covering widely different control design applications. The first is the control of a flexible mechanical system (the Landau benchmark example)

  3. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    SciTech Connect

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more closely than they do the chosen reference data. In aggregate, the simulations of land-surface latent and sensible heat fluxes appear to occupy intermediate positions between these extremes, but the existing large observational uncertainties in these processes make this a provisional assessment. In all selected processes as well, the error statistics are found to be sensitive to season and latitude sector, confirming the need for finer-scale analyses which also are in progress.
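
    A minimal sketch of the aggregation metric described above: an area-weighted root-mean-square error between a simulated field and a reference data set, with the difference between two alternative reference data sets standing in for observational uncertainty. The gridding, weighting, and variable names are assumptions for illustration, not the subproject's actual procedure.

      import numpy as np

      def area_weighted_rmse(sim, ref, lat_deg):
          # RMSE over a (lat, lon) grid, weighted by cos(latitude) as an area proxy
          w = np.cos(np.deg2rad(lat_deg))[:, None] * np.ones_like(sim)
          return np.sqrt(np.sum(w * (sim - ref) ** 2) / np.sum(w))

      lat = np.linspace(-89.0, 89.0, 90)
      sim = np.random.default_rng(0).normal(288.0, 5.0, size=(90, 180))        # simulated air temperature
      ref_a = sim + np.random.default_rng(1).normal(0.0, 1.5, size=sim.shape)  # reference product
      ref_b = sim + np.random.default_rng(2).normal(0.0, 2.0, size=sim.shape)  # alternative product

      print("model-vs-reference RMSE:", area_weighted_rmse(sim, ref_a, lat))
      print("observational spread:   ", area_weighted_rmse(ref_a, ref_b, lat))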

  4. A scattering model for perfectly conducting random surfaces. I - Model development. II - Range of validity

    NASA Technical Reports Server (NTRS)

    Fung, A. K.; Pan, G. W.

    1987-01-01

    The surface current on a perfectly conducting randomly rough surface is estimated by solving iteratively a standard integral equation, and the estimate is then used to compute the far-zone scattered fields and the backscattering coefficients for vertical, horizontal and cross polarizations. The model developed here yields a simple backscattering coefficient expression in terms of the surface parameters. The expression reduces analytically to the Kirchhoff and the first-order small-perturbation model in the high- and low-frequency regions, respectively. The range of validity of the model is determined.

  5. VV&A II: enhancing modeling and simulation accreditation by structuring Verification and Validation results

    Microsoft Academic Search

    Dirk Brade

    2000-01-01

    Model Verification, Validation and Accreditation (VV&A) is as complex as developing a Modeling and Simulation (M&S) application itself. For the purpose of structuring both Verification and Validation (V&V) activities and V&V results, we introduce a refined V&V process. After identification of the major influence factors on applicable V&V, a conceptual approach for subphase-wise organization of V&V activities is presented. Finally

  6. Process Modeling of Composite Materials: Residual Stress Development during Cure. Part II. Experimental Validation

    Microsoft Academic Search

    S. R. White; H. T. Hahn

    1992-01-01

    In a companion paper [1] a process model was developed for investigation of residual stress development during autoclave or hot press processing of thermosetting polymer matrix composites. Several material property characterization studies are required as input to this model. The present paper summarizes the results of the characterization studies required for input to the model and validation of

  7. Development of a livestock odor dispersion model: part II. Evaluation and validation.

    PubMed

    Yu, Zimu; Guo, Huiqing; Laguë, Claude

    2011-03-01

    A livestock odor dispersion model (LODM) was developed to predict odor concentration and odor frequency using routine hourly meteorological data input. The odor concentrations predicted by the LODM were compared with the results obtained from other commercial models (Industrial Source Complex Short-Term model, version 3, CALPUFF) to evaluate its appropriateness. Two sets of field odor plume measurement data were used to validate the model. The model-predicted mean odor concentrations and odor frequencies were compared with those measured. Results show that this model has good performance for predicting odor concentrations and odor frequencies. PMID:21416754

  8. A wheat grazing model for simulating grain and beef production: Part II - model validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Model evaluation is a prerequisite to its adoption and successful application. The objective of this paper is to evaluate the ability of a newly developed wheat grazing model to predict fall-winter forage and grain yields of winter wheat (Triticum aestivum L.) as well as daily weight gains per steer...

  9. SAGE II aerosol data validation based on retrieved aerosol model size distribution from SAGE II aerosol measurements

    NASA Technical Reports Server (NTRS)

    Wang, Pi-Huan; Mccormick, M. P.; Mcmaster, L. R.; Chu, W. P.; Swissler, T. J.; Osborn, M. T.; Russell, P. B.; Oberbeck, V. R.; Livingston, J.; Rosen, J. M.

    1989-01-01

    Consideration is given to aerosol correlative measurements experiments for the Stratospheric Aerosol and Gas Experiment (SAGE) II, conducted between November 1984 and July 1986. The correlative measurements were taken with an impactor/laser probe, a dustsonde, and an airborne 36-cm lidar system. The primary aerosol quantities measured by the ground-based instruments are compared with those calculated from the aerosol size distributions from SAGE II aerosol extinction measurements. Good agreement is found between the two sets of measurements.

  10. SAGE II aerosol data validation based on retrieved aerosol model size distribution from SAGE II aerosol measurements.

    PubMed

    Wang, P H; McCormick, M P; McMaster, L R; Chu, W P; Swissler, T J; Osborn, M T; Russell, P B; Oberbeck, V R; Livingston, J; Rosen, J M; Hofmann, D J; Grams, G W; Fuller, W H; Yue, G K

    1989-06-20

    This paper describes an investigation of the comprehensive aerosol correlative measurement experiments conducted between November 1984 and July 1986 for the satellite measurement program of the Stratospheric Aerosol and Gas Experiment (SAGE II). The correlative sensors involved in the experiments consist of the NASA Ames Research Center impactor/laser probe, the University of Wyoming dustsonde, and the NASA Langley Research Center airborne 14-inch (36 cm) lidar system. The approach of the analysis is to compare the primary aerosol quantities measured by the ground-based instruments with the calculated ones based on the aerosol size distributions retrieved from the SAGE II aerosol extinction measurements. The analysis shows that the aerosol size distributions derived from the SAGE II observations agree qualitatively with the in situ measurements made by the impactor/laser probe. The SAGE II-derived vertical distributions of the ratio N0.15/N0.25 (where Nr is the cumulative aerosol concentration for particle radii greater than r, in micrometers) and the aerosol backscatter profiles at 0.532- and 0.6943-micrometer lidar wavelengths are shown to agree with the dustsonde and the 14-inch (36-cm) lidar observations, with the differences being within the respective uncertainties of the SAGE II and the other instruments. PMID:11539801
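
    A small worked example of the cumulative-concentration ratio compared above, assuming the retrieved size distribution is a single-mode lognormal; the median radius, width, and total number below are illustrative placeholders, not SAGE II retrievals.

      import numpy as np
      from scipy.special import erfc

      def cumulative_number(r, n_total, r_median, sigma_g):
          # Number of particles with radius > r (micrometers) for a lognormal distribution
          z = np.log(r / r_median) / (np.sqrt(2.0) * np.log(sigma_g))
          return 0.5 * n_total * erfc(z)

      n_total, r_median, sigma_g = 10.0, 0.08, 1.8   # cm^-3, um, dimensionless (assumed values)
      n015 = cumulative_number(0.15, n_total, r_median, sigma_g)
      n025 = cumulative_number(0.25, n_total, r_median, sigma_g)
      print("N0.15/N0.25 =", n015 / n025)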

  11. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models

    EPA Science Inventory

    The second phase of the MicroArray Quality Control (MAQC-II) project evaluated common practices for developing and validating microarray-based models aimed at predicting toxicological and clinical endpoints. Thirty-six teams developed classifiers for 13 endpoints - some easy, som...

  12. High-fidelity modeling of MEMS resonators. Part II. Coupled beam-substrate dynamics and validation

    Microsoft Academic Search

    Yong-Hwa Park; K. C. Park

    2004-01-01

    A computational multiphysics model of the coupled beam-substrate-electrostatic actuation dynamics of MEMS resonators has been developed for the model-based prediction of Q-factor and design sensitivity studies of the clamped vibrating beam. The substrate and resonator beam are modeled independently and then integrated by enforcing their interface compatibility condition and the force equilibrium to arrive at the multiphysics model. The present

  13. Assessing Wildlife Habitat Value of New England Salt Marshes: II. Model Testing and Validation

    EPA Science Inventory

    We test a previously described model to assess the wildlife habitat value of New England salt marshes by comparing modeled habitat values and scores with bird abundance and species richness at sixteen salt marshes in Narragansett Bay, Rhode Island USA. Assessment scores ranged f...

  14. Model Validation Status Review

    SciTech Connect

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and engineered barriers, plus the TSPA model itself. Description of the model areas is provided in Section 3, and the documents reviewed are described in Section 4. The responsible manager for the Model Validation Status Review was the Chief Science Officer (CSO) for Bechtel-SAIC Co. (BSC). The team lead was assigned by the CSO. A total of 32 technical specialists were engaged to evaluate model validation status in the 21 model areas. The technical specialists were generally independent of the work reviewed, meeting technical qualifications as discussed in Section 5.

  15. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 3 (Appendices II, sections 2--3 and III)

    Microsoft Academic Search

    T. M. Grace; W. J. Frederick; M. Salcudean; R. A. Wessel

    1998-01-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished

  16. INACTIVATION OF CRYPTOSPORIDIUM OOCYSTS IN A PILOT-SCALE OZONE BUBBLE-DIFFUSER CONTACTOR - II: MODEL VALIDATION AND APPLICATION

    EPA Science Inventory

    The ADR model developed in Part I of this study was successfully validated with experimental data obtained for the inactivation of C. parvum and C. muris oocysts with a pilot-scale ozone-bubble diffuser contactor operated with treated Ohio River water. Kinetic parameters, required...

  17. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 2 (Appendices I, section 5 and II, section 1)

    Microsoft Academic Search

    T. M. Grace; W. J. Frederick; M. Salcudean; R. A. Wessel

    1998-01-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished

  18. Black Liquor Combustion Validated Recovery Boiler Modeling, Final Year Report, Volume 3: Appendix II, Sections 2 & 3 and Appendix III

    Microsoft Academic Search

    T. M. Grace; W. J. Frederick; M. Salcudean; R. A. Wessel

    1998-01-01

    This project was initiated in October 1990 with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. Many of these objectives were accomplished

  19. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data were first used for validation, followed by more complex reacting base flow validation.

  20. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 3 (Appendices II, sections 2--3 and III)

    SciTech Connect

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquor submodels, and fundamental knowledge to the US kraft pulp industry. Volume 3 contains the following appendix sections: Formation and destruction of nitrogen oxides in recovery boilers; Sintering and densification of recovery boiler deposits laboratory data and a rate model; and Experimental data on rates of particulate formation during char bed burning.

  1. Discriminant analysis for predicting dystocia in beef cattle. II. Derivation and validation of a prebreeding prediction model.

    PubMed

    Morrison, D G; Humes, P E; Keith, N K; Godke, R A

    1985-03-01

    Discriminant analysis was utilized to derive and validate a model for predicting dystocia using only data available at the beginning of the breeding season. Data were collected from 211 Chianina crossbred cows (2 to 6 yr old) bred to Chianina bulls. A proportionally stratified sampling procedure divided females into an analysis sample (n = 134) on which the model was derived and a hold-out sample (n = 77) on which the prediction model was validated (tested). Variables available during the derivation stage were cow age, cow weight, pelvic height, pelvic width, pelvic area and calf sire. Dystocia was categorized as either unassisted or assisted. Occurrence of dystocia was 17.2 and 18.2% in the analysis and hold-out samples, respectively. All data were standardized to a mean of zero and a variance of one before statistical analysis. The centroid of cows experiencing dystocia differed (P less than .01) from that of cows calving unassisted in the analysis sample. Significant variables were pelvic area and cow age (standardized coefficients = .56 and .51, respectively). This model correctly classified 85.1% of the cows in the analysis sample. This was 13.5% greater than the proportional chance criterion. For model validation, prediction accuracy was 84.4% in the hold-out group, which was 14.2% greater than the proportional chance criterion. However, only 57.1% of the cows that experienced dystocia were correctly classified. Examination of the data revealed that those cows misclassified were 3 yr of age or older.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:3988638
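
    A schematic Python version of the derive-and-validate workflow in this record, using synthetic data: standardize the predictors, fit a linear discriminant on the analysis sample, classify the hold-out sample, and compare accuracy with the proportional chance criterion. The predictors and sample split mirror the description above only loosely.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.normal(size=(211, 2))                       # e.g., pelvic area, cow age (synthetic)
      y = (0.9 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=211) < -1.6).astype(int)  # 1 = assisted calving

      X_an, y_an = X[:134], y[:134]                       # analysis (derivation) sample
      X_ho, y_ho = X[134:], y[134:]                       # hold-out (validation) sample

      scaler = StandardScaler().fit(X_an)
      lda = LinearDiscriminantAnalysis().fit(scaler.transform(X_an), y_an)
      accuracy = lda.score(scaler.transform(X_ho), y_ho)

      p = y_an.mean()                                     # proportion of assisted calvings
      c_pro = p ** 2 + (1.0 - p) ** 2                     # proportional chance criterion
      print(f"hold-out accuracy {accuracy:.3f} vs proportional chance {c_pro:.3f}")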

  2. Black Liquor Combustion Validated Recovery Boiler Modeling, Final Year Report, Volume 3: Appendix II, Sections 2 & 3 and Appendix III

    SciTech Connect

    T.M. Grace, W.J. Frederick, M. Salcudean, R.A. Wessel

    1998-08-01

    This project was initiated in October 1990 with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. Many of these objectives were accomplished at the end of the first five years and documented in a comprehensive report on that work (DOE/CE/40936-T3, 1996). A critical review of recovery boiler modeling, carried out in 1995, concluded that further enhancements of the model were needed to make reliable predictions of key output variables. In addition, there was a need for sufficient understanding of fouling and plugging processes to allow model outputs to be interpreted in terms of the effect on plugging and fouling. As a result, the project was restructured and reinitiated at the end of October 1995, and was completed in June 1997. The entire project is now complete and this report summarizes all of the work done on the project since it was restructured. The key tasks to be accomplished under the restructured project were to (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes; (2) Validate the enhanced furnace models, so that users can have confidence in the results; (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler; and (4) Facilitate the transfer of codes, black liquor submodels, and fundamental knowledge to the U.S. kraft pulp industry.

  3. Validation of performance assessment models

    SciTech Connect

    Bergeron, M.P.; Kincaid, C.T.

    1991-11-01

    The purpose of model validation in a low-level waste site performance assessment is to increase confidence in predictions of the migration and fate of future releases from the wastes. Unlike the process of computer code verification, model validation is a site-specific process that requires site-specific data. This paper provides an overview of the topic of model validation and describes the general approaches, strategies, and limitations of model validation being considered by various researchers concerned with the subject.

  4. Data Assimilation of Photosynthetic Light-use Efficiency using Multi-angular Satellite Data: II Model Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Hilker, Thomas; Hall, Forest G.; Tucker, J.; Coops, Nicholas C.; Black, T. Andrew; Nichol, Caroline J.; Sellers, Piers J.; Barr, Alan; Hollinger, David Y.; Munger, J. W.

    2012-01-01

    Spatially explicit and temporally continuous estimates of photosynthesis will be of great importance for increasing our understanding of and ultimately closing the terrestrial carbon cycle. Current capabilities to model photosynthesis, however, are limited by the lack of sufficiently accurate representations of the complexity of the underlying biochemical processes and the numerous environmental constraints imposed upon plant primary production. A potentially powerful alternative to model photosynthesis through these indirect observations is the use of multi-angular satellite data to infer light-use efficiency (e) directly from spectral reflectance properties in connection with canopy shadow fractions. Hall et al. (this issue) introduced a new approach for predicting gross ecosystem production that would allow the use of such observations in a data assimilation mode to obtain spatially explicit variations in e from infrequent polar-orbiting satellite observations, while meteorological data are used to account for the more dynamic responses of e to variations in environmental conditions caused by changes in weather and illumination. In this second part of the study we implement and validate the approach of Hall et al. (this issue) across an ecologically diverse array of eight flux-tower sites in North America using data acquired from the Compact High Resolution Imaging Spectroradiometer (CHRIS) and eddy-flux observations. Our results show significantly enhanced estimates of e and therefore cumulative gross ecosystem production (GEP) over the course of one year at all examined sites. We also demonstrate that e is greatly heterogeneous even across small study areas. Data assimilation and direct inference of GEP from space using a new, proposed sensor could therefore be a significant step towards closing the terrestrial carbon cycle.

  5. Hydrogen peroxide metabolism and sensing in human erythrocytes: a validated kinetic model and reappraisal of the role of peroxiredoxin II.

    PubMed

    Benfeitas, Rui; Selvaggio, Gianluca; Antunes, Fernando; Coelho, Pedro M B M; Salvador, Armindo

    2014-09-01

    Hydrogen peroxide (H2O2) metabolism in human erythrocytes has been thoroughly investigated, but unclear points persist. By integrating the available data into a mathematical model that accurately represents the current understanding and comparing computational predictions to observations we sought to (a) identify inconsistencies in present knowledge, (b) propose resolutions, and (c) examine their functional implications. The systematic confrontation of computational predictions with experimental observations of the responses of intact erythrocytes highlighted the following important discrepancy. The high rate constant (10^7-10^8 M^-1 s^-1) for H2O2 reduction determined for purified peroxiredoxin II (Prx2) and the high abundance of this protein indicate that under physiological conditions it consumes practically all the H2O2. However, this is inconsistent with extensive evidence that Prx2's contribution to H2O2 elimination is comparable to that of catalase. Models modified such that Prx2's effective peroxidase activity is just 10^5 M^-1 s^-1 agree near quantitatively with extensive experimental observations. This low effective activity is probably due to a strong but readily reversible inhibition of Prx2's peroxidatic activity in intact cells, implying that the main role of Prx2 in human erythrocytes is not to eliminate peroxide substrates. Simulations of the responses to physiological H2O2 stimuli highlight that a design combining abundant Prx2 with a low effective peroxidase activity spares NADPH while improving potential signaling properties of the Prx2/thioredoxin/thioredoxin reductase system. PMID:24952139
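
    A back-of-envelope illustration of the discrepancy discussed above: under a pseudo-first-order treatment, the share of H2O2 removed by Prx2 versus catalase scales with k times the enzyme concentration. The concentrations and the catalase rate constant below are rough, assumed placeholder values, not parameters taken from the paper's model.

      def prx2_share(k_prx2, prx2_conc, k_catalase, catalase_conc):
          # Fraction of H2O2 consumed by Prx2 when Prx2 and catalase compete for the same pool
          v_prx2 = k_prx2 * prx2_conc
          v_cat = k_catalase * catalase_conc
          return v_prx2 / (v_prx2 + v_cat)

      prx2_conc, catalase_conc = 3.7e-4, 2.4e-5      # M, assumed erythrocyte concentrations
      k_catalase = 1.0e7                             # M^-1 s^-1, assumed

      print(prx2_share(1e8, prx2_conc, k_catalase, catalase_conc))  # ~0.99 with the purified-protein constant
      print(prx2_share(1e5, prx2_conc, k_catalase, catalase_conc))  # ~0.13 with the low effective activity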

  6. Verification, validation and accreditation of simulation models

    Microsoft Academic Search

    Robert G. Sargent

    2000-01-01

    The paper discusses verification, validation, and accreditation of simulation models. The different approaches to deciding model validity are presented; how model verification and validation relate to the model development process are discussed; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are described; ways to document results are given; a recommended procedure is presented;

  7. Ab Initio Structural Modeling of and Experimental Validation for Chlamydia trachomatis Protein CT296 Reveal Structural Similarity to Fe(II) 2-Oxoglutarate-Dependent Enzymes?

    PubMed Central

    Kemege, Kyle E.; Hickey, John M.; Lovell, Scott; Battaile, Kevin P.; Zhang, Yang; Hefty, P. Scott

    2011-01-01

    Chlamydia trachomatis is a medically important pathogen that encodes a relatively high percentage of proteins with unknown function. The three-dimensional structure of a protein can be very informative regarding the protein's functional characteristics; however, determining protein structures experimentally can be very challenging. Computational methods that model protein structures with sufficient accuracy to facilitate functional studies have had notable successes. To evaluate the accuracy and potential impact of computational protein structure modeling of hypothetical proteins encoded by Chlamydia, a successful computational method termed I-TASSER was utilized to model the three-dimensional structure of a hypothetical protein encoded by open reading frame (ORF) CT296. CT296 has been reported to exhibit functional properties of a divalent cation transcription repressor (DcrA), with similarity to the Escherichia coli iron-responsive transcriptional repressor, Fur. Unexpectedly, the I-TASSER model of CT296 exhibited no structural similarity to any DNA-interacting proteins or motifs. To validate the I-TASSER-generated model, the structure of CT296 was solved experimentally using X-ray crystallography. Impressively, the ab initio I-TASSER-generated model closely matched (2.72-Å Cα root mean square deviation [RMSD]) the high-resolution (1.8-Å) crystal structure of CT296. Modeled and experimentally determined structures of CT296 share structural characteristics of non-heme Fe(II) 2-oxoglutarate-dependent enzymes, although key enzymatic residues are not conserved, suggesting a unique biochemical process is likely associated with CT296 function. Additionally, functional analyses did not support prior reports that CT296 has properties shared with divalent cation repressors such as Fur. PMID:21965559

  8. Development of a new analyzing model for quantifying pedestrian slip resistance characteristics: part II—Experiments and validations

    Microsoft Academic Search

    In-Ju Kim

    2004-01-01

    In a companion paper (Development of a new analyzing model for quantifying pedestrian slip resistance characteristics: Part I. Theory and evaluation), a new tribology model capable of quantifying the surface interactions and wear evolutions between shoes and floors was presented. In the current paper, the model is used to analyze the surface interlocking mechanisms and wear developments between the shoe

  9. Predicting germination in semi-arid wildland seedbeds II. Field validation of wet thermal-time models

    Microsoft Academic Search

    Jennifer K. Rawlins; Bruce A. Roundy; Dennis Egget; Nathan Cline

    Accurate prediction of germination for species used for semi-arid land revegetation would support selection of plant materials for specific climatic conditions and sites. Wet thermal-time models predict germination time by summing progress toward germination subpopulation percentages as a function of temperature across intermittent wet periods or within singular wet periods. Wet periods may be defined by any reasonable seedbed water

  10. Model Fe-Al Steel with Exceptional Resistance to High Temperature Coarsening. Part II: Experimental Validation and Applications

    NASA Astrophysics Data System (ADS)

    Zhou, Tihe; Zhang, Peng; O'Malley, Ronald J.; Zurob, Hatem S.; Subramanian, Mani

    2015-01-01

    In order to achieve a fine uniform grain-size distribution using the process of thin slab casting and direct rolling (TSCDR), it is necessary to control the grain-size prior to the onset of thermomechanical processing. In the companion paper, Model Fe-Al Steel with Exceptional Resistance to High Temperature Coarsening. Part I: Coarsening Mechanism and Particle Pinning Effects, a new steel composition which uses a small volume fraction of austenite particles to pin the growth of delta-ferrite grains at high temperature was proposed and grain growth was studied in reheated samples. This paper will focus on the development of a simple laboratory-scale setup to simulate thin-slab casting of the newly developed steel and demonstrate the potential for grain size control under industrial conditions. Steel bars with different diameters are briefly dipped into the molten steel to create a shell of solidified material. These are then cooled down to room temperature at different cooling rates. During cooling, the austenite particles nucleate along the delta-ferrite grain boundaries and greatly retard grain growth. With decreasing temperature, more austenite particles precipitate, and grain growth can be completely arrested in the holding furnace. Additional applications of the model alloy are discussed including grain-size control in the heat affected zone in welds and grain-growth resistance at high temperature.

  11. Probabilistic Methods for Model Validation 

    E-print Network

    Halder, Abhishek

    2014-05-01

    This dissertation develops a probabilistic method for validation and verification (V&V) of uncertain nonlinear systems. Existing systems-control literature on model and controller V&V either deal with linear systems with norm-bounded uncertainties...

  12. MODEL VALIDATION AND UNCERTAINTY QUANTIFICATION.

    SciTech Connect

    Hemez, F.M.; Doebling, S.W.

    2000-10-01

    This session offers an open forum to discuss issues and directions of research in the areas of model updating, predictive quality of computer simulations, model validation and uncertainty quantification. Technical presentations review the state-of-the-art in nonlinear dynamics and model validation for structural dynamics. A panel discussion introduces the discussion on technology needs, future trends and challenges ahead with an emphasis placed on soliciting participation of the audience. One of the goals is to show, through invited contributions, how other scientific communities are approaching and solving difficulties similar to those encountered in structural dynamics. The session also serves the purpose of presenting the on-going organization of technical meetings sponsored by the U.S. Department of Energy and dedicated to health monitoring, damage prognosis, model validation and uncertainty quantification in engineering applications. The session is part of the SD-2000 Forum, a forum to identify research trends, funding opportunities and to discuss the future of structural dynamics.

  13. Black Liquor Combustion Validated Recovery Boiler Modeling, Final Year Report, Volume 2: Appendix I, Section 5, and Appendix II, Section 1

    Microsoft Academic Search

    T. M. Grace; W. J. Frederick; M. Salcudean; R. A. Wessel

    1998-01-01

    This project was initiated in October 1990 with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. Many of these objectives were accomplished

  14. A musculoskeletal model of the equine forelimb for determining surface stresses and strains in the humerus-part II. Experimental testing and model validation.

    PubMed

    Pollock, Sarah; Stover, Susan M; Hull, M L; Galuppo, Larry D

    2008-08-01

    The first objective of this study was to experimentally determine surface bone strain magnitudes and directions at the donor site for bone grafts, the site predisposed to stress fracture, the medial and cranial aspects of the transverse cross section corresponding to the stress fracture site, and the middle of the diaphysis of the humerus of a simplified in vitro laboratory preparation. The second objective was to determine whether computing strains solely in the direction of the longitudinal axis of the humerus in the mathematical model was inherently limited by comparing the strains measured along the longitudinal axis of the bone to the principal strain magnitudes and directions. The final objective was to determine whether the mathematical model formulated in Part I [Pollock et al., 2008, ASME J. Biomech. Eng., 130, p. 041006] is valid for determining the bone surface strains at the various locations on the humerus where experimentally measured longitudinal strains are comparable to principal strains. Triple rosette strain gauges were applied at four locations circumferentially on each of two cross sections of interest using a simplified in vitro laboratory preparation. The muscles included the biceps brachii muscle in addition to loaded shoulder muscles that were predicted active by the mathematical model. Strains from the middle grid of each rosette, aligned along the longitudinal axis of the humerus, were compared with calculated principal strain magnitudes and directions. The results indicated that calculating strains solely in the direction of the longitudinal axis is appropriate at six of eight locations. At the cranial and medial aspects of the middle of the diaphysis, the average minimum principal strain was not comparable to the average experimental longitudinal strain. Further analysis at the remaining six locations indicated that the mathematical model formulated in Part I predicts strains within +/-2 standard deviations of experimental strains at four of these locations and predicts negligible strains at the remaining two locations, which is consistent with experimental strains. Experimentally determined longitudinal strains at the middle of the diaphysis of the humerus indicate that tensile strains occur at the cranial aspect and compressive strains occur at the caudal aspect while the horse is standing, which is useful for fracture fixation. PMID:18601449
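
    A short sketch of the rosette reduction implied above: converting 0/45/90-degree gauge readings into principal strain magnitudes and direction, the quantities compared with the middle-grid longitudinal strain. The rectangular gauge geometry and the sample readings are assumptions for illustration, not the study's instrumentation.

      import math

      def principal_strains(e0, e45, e90):
          # Rectangular (0/45/90 deg) rosette: recover the full in-plane strain state
          ex, ey = e0, e90
          gxy = 2.0 * e45 - e0 - e90                        # engineering shear strain
          mean = 0.5 * (ex + ey)
          radius = math.hypot(0.5 * (ex - ey), 0.5 * gxy)
          theta = 0.5 * math.atan2(gxy, ex - ey)            # angle of e1 from the 0-deg grid
          return mean + radius, mean - radius, math.degrees(theta)

      # Example readings in strain (roughly microstrain-sized values)
      e1, e2, angle = principal_strains(800e-6, 450e-6, -250e-6)
      print(f"e1 = {e1:.1e}, e2 = {e2:.1e}, direction = {angle:.1f} deg")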

  15. MODEL VALIDATION REPORT FOR THE HOUSATONIC RIVER

    EPA Science Inventory

    The Model Validation Report will present a comparison of model validation runs to existing data for the model validation period. The validation period spans a twenty year time span to test the predictive capability of the model over a longer time period, similar to that which wil...

  16. Partial Validation of Multibody Program to Optimize Simulated Trajectories II (POST II) Parachute Simulation With Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Ben; Queen, Eric M.

    2002-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed. This capability uses the Program to Optimize Simulated Trajectories II (POST II). Previously, POST II had the ability to simulate multiple bodies without interacting forces. The current implementation is used for the simulation of parachute trajectories, in which the parachute and suspended bodies can be treated as rigid bodies. An arbitrary set of connecting lines can be included in the model and are treated as massless spring-dampers. This paper discusses details of the connection line modeling and results of several test cases used to validate the capability.

  17. Statistical validation of system models

    SciTech Connect

    Barney, P. [Sandia National Labs., Albuquerque, NM (United States)]; Ferregut, C.; Perez, L.E. [Texas Univ., El Paso, TX (United States)]; Hunter, N.F. [Los Alamos National Lab., NM (United States)]; Paez, T.L. [Sandia National Labs., Albuquerque, NM (United States)]

    1997-01-01

    It is common practice in system analysis to develop mathematical models for system behavior. Frequently, the actual system being modeled is also available for testing and observation, and sometimes the test data are used to help identify the parameters of the mathematical model. However, no general-purpose technique exists for formally, statistically judging the quality of a model. This paper suggests a formal statistical procedure for the validation of mathematical models of systems when data taken during operation of the system are available. The statistical validation procedure is based on the bootstrap, and it seeks to build a framework where a statistical test of hypothesis can be run to determine whether or not a mathematical model is an acceptable model of a system with regard to user-specified measures of system behavior. The approach to model validation developed in this study uses experimental data to estimate the marginal and joint confidence intervals of statistics of interest of the system. These same measures of behavior are estimated for the mathematical model. The statistics of interest from the mathematical model are located relative to the confidence intervals for the statistics obtained from the experimental data. These relative locations are used to judge the accuracy of the mathematical model. An extension of the technique is also suggested, wherein randomness may be included in the mathematical model through the introduction of random variable and random process terms. These terms cause random system behavior that can be compared to the randomness in the bootstrap evaluation of experimental system behavior. In this framework, the stochastic mathematical model can be evaluated. A numerical example is presented to demonstrate the application of the technique.
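
    A compressed sketch of the bootstrap acceptance test outlined above: resample the experimental records, build a confidence interval for a statistic of interest, and check whether the mathematical model's value of that statistic falls inside it. The data, the statistic, and the acceptance rule are simplified placeholders rather than the paper's procedure.

      import numpy as np

      rng = np.random.default_rng(1)
      measured = rng.normal(1.00, 0.20, size=50)      # e.g., measured RMS responses (placeholder)
      model_statistic = 1.05                          # same statistic computed from the model

      boot = np.array([np.mean(rng.choice(measured, measured.size, replace=True))
                       for _ in range(2000)])
      lo, hi = np.percentile(boot, [2.5, 97.5])
      accepted = lo <= model_statistic <= hi
      print(f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]; model accepted: {accepted}")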

  18. Model Validation with Hybrid Dynamic Simulation

    SciTech Connect

    Huang, Zhenyu; Kosterev, Dmitry; Guttromson, Ross T.; Nguyen, Tony B.

    2006-06-18

    Model validation has been one of the central topics in power engineering studies for years. As model validation aims at obtaining reasonable models to represent actual behavior of power system components, it has been essential to validate models against actual measurements or known benchmark behavior. System-wide model simulation results can be compared with actual recordings. However, it is difficult to construct a simulation case for a large power system such as the WECC system and to narrow down to problematic models in a large system. Hybrid dynamic simulation with its capability of injecting external signals into dynamic simulation enables rigorous comparison of measurements and simulation in a small subsystem of interest. This paper presents such a model validation methodology with hybrid dynamic simulation. Two application examples on generator and load model validation are presented to show the validity of this model validation methodology. This methodology is further extended for automatic model validation and dichotomous subsystem model validation.
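
    A toy illustration of the playback idea described above: drive a small component model with a recorded boundary signal and compare its simulated response against the corresponding measurement. The first-order component model and both signals are invented placeholders, not WECC models or recorded power-system data.

      import numpy as np

      def simulate_component(u, dt=0.02, tau=0.5, gain=1.2):
          # First-order response y' = (gain*u - y)/tau driven by the injected signal u
          y = np.zeros_like(u)
          for k in range(1, u.size):
              y[k] = y[k - 1] + dt * (gain * u[k - 1] - y[k - 1]) / tau
          return y

      t = np.arange(0.0, 20.0, 0.02)
      u_measured = 1.0 + 0.05 * np.sin(0.5 * t)            # recorded boundary signal (placeholder)
      y_measured = 1.2 + 0.058 * np.sin(0.5 * t - 0.25)    # recorded component output (placeholder)
      y_simulated = simulate_component(u_measured)
      print("RMS model error:", np.sqrt(np.mean((y_simulated - y_measured) ** 2)))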

  19. Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2014-01-01

    Computational fluid dynamics (CFD) software that solves the Reynolds-averaged Navier-Stokes (RANS) equations has been in routine use for more than a quarter of a century. It is currently employed not only for basic research in fluid dynamics, but also for the analysis and design processes in many industries worldwide, including aerospace, automotive, power generation, chemical manufacturing, polymer processing, and petroleum exploration. A key feature of RANS CFD is the turbulence model. Because the RANS equations are unclosed, a model is necessary to describe the effects of the turbulence on the mean flow, through the Reynolds stress terms. The turbulence model is one of the largest sources of uncertainty in RANS CFD, and most models are known to be flawed in one way or another. Alternative methods such as direct numerical simulations (DNS) and large eddy simulations (LES) rely less on modeling and hence include more physics than RANS. In DNS all turbulent scales are resolved, and in LES the large scales are resolved and the effects of the smallest turbulence scales are modeled. However, both DNS and LES are too expensive for most routine industrial usage on today's computers. Hybrid RANS-LES, which blends RANS near walls with LES away from walls, helps to moderate the cost while still retaining some of the scale-resolving capability of LES, but for some applications it can still be too expensive. Even considering its associated uncertainties, RANS turbulence modeling has proved to be very useful for a wide variety of applications. For example, in the aerospace field, many RANS models are considered to be reliable for computing attached flows. However, existing turbulence models are known to be inaccurate for many flows involving separation. Research has been ongoing for decades in an attempt to improve turbulence models for separated and other nonequilibrium flows. When developing or improving turbulence models, both verification and validation are important steps in the process. Verification insures that the CFD code is solving the equations as intended (no errors in the implementation). This is typically done either through the method of manufactured solutions (MMS) or through careful step-by-step comparisons with other verified codes. After the verification step is concluded, validation is performed to document the ability of the turbulence model to represent different types of flow physics. Validation can involve a large number of test case comparisons with experiments, theory, or DNS. Organized workshops have proved to be valuable resources for the turbulence modeling community in its pursuit of turbulence modeling verification and validation. Workshop contributors using different CFD codes run the same cases, often according to strict guidelines, and compare results. Through these comparisons, it is often possible to (1) identify codes that have likely implementation errors, and (2) gain insight into the capabilities and shortcomings of different turbulence models to predict the flow physics associated with particular types of flows. These are valuable lessons because they help bring consistency to CFD codes by encouraging the correction of faulty programming and facilitating the adoption of better models. They also sometimes point to specific areas needed for improvement in the models. In this paper, several recent workshops are summarized primarily from the point of view of turbulence modeling verification and validation. 
Furthermore, the NASA Langley Turbulence Modeling Resource website is described. The purpose of this site is to provide a central location where RANS turbulence models are documented, and test cases, grids, and data are provided. The goal of this paper is to provide an abbreviated survey of turbulence modeling verification and validation efforts, summarize some of the outcomes, and give some ideas for future endeavors in this area.
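
    As a concrete, if deliberately tiny, illustration of the verification step mentioned above, the sketch below applies the method of manufactured solutions to a 1-D diffusion problem rather than a RANS code: manufacture u(x) = sin(pi x), derive the forcing for -u'' = f, solve on two grids, and confirm the expected second-order convergence.

      import numpy as np

      def mms_error(n):
          # Solve -u'' = pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0 by central differences
          x = np.linspace(0.0, 1.0, n + 1)
          h = 1.0 / n
          f = np.pi ** 2 * np.sin(np.pi * x[1:-1])
          A = (np.diag(2.0 * np.ones(n - 1))
               - np.diag(np.ones(n - 2), 1)
               - np.diag(np.ones(n - 2), -1)) / h ** 2
          u = np.zeros(n + 1)
          u[1:-1] = np.linalg.solve(A, f)
          return np.max(np.abs(u - np.sin(np.pi * x)))     # error against the manufactured solution

      e_coarse, e_fine = mms_error(32), mms_error(64)
      print("observed order of accuracy:", np.log2(e_coarse / e_fine))   # ~2 if the discretization is coded correctly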

  20. Statistical validation of stochastic models

    SciTech Connect

    Hunter, N.F. [Los Alamos National Lab., NM (United States). Engineering Science and Analysis Div.]; Barney, P.; Paez, T.L. [Sandia National Labs., Albuquerque, NM (United States). Experimental Structural Dynamics Dept.]; Ferregut, C.; Perez, L. [Univ. of Texas, El Paso, TX (United States). Dept. of Civil Engineering]

    1996-12-31

    It is common practice in structural dynamics to develop mathematical models for system behavior, and the authors are now capable of developing stochastic models, i.e., models whose parameters are random variables. Such models have random characteristics that are meant to simulate the randomness in characteristics of experimentally observed systems. This paper suggests a formal statistical procedure for the validation of mathematical models of stochastic systems when data taken during operation of the stochastic system are available. The statistical characteristics of the experimental system are obtained using the bootstrap, a technique for the statistical analysis of non-Gaussian data. The authors propose a procedure to determine whether or not a mathematical model is an acceptable model of a stochastic system with regard to user-specified measures of system behavior. A numerical example is presented to demonstrate the application of the technique.

  1. Black Liquor Combustion Validated Recovery Boiler Modeling, Final Year Report, Volume 2: Appendix I, Section 5, and Appendix II, Section 1

    SciTech Connect

    T.M. Grace, W.J. Frederick, M. Salcudean, R.A. Wessel

    1998-08-01

    This project was initiated in October 1990 with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. Many of these objectives were accomplished at the end of the first five years and documented in a comprehensive report on that work (DOE/CE/40936-T3, 1996). A critical review of recovery boiler modeling, carried out in 1995, concluded that further enhancements of the model were needed to make reliable predictions of key output variables. In addition, there was a need for sufficient understanding of fouling and plugging processes to allow model outputs to be interpreted in terms of the effect on plugging and fouling. As a result, the project was restructured and reinitiated at the end of October 1995, and was completed in June 1997. The entire project is now complete and this report summarizes all of the work done on the project since it was restructured. The key tasks to be accomplished under the restructured project were to (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes; (2) Validate the enhanced furnace models, so that users can have confidence in the results; (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler; and (4) Facilitate the transfer of codes, black liquor submodels, and fundamental knowledge to the U.S. kraft pulp industry.

  2. Validating Computational Models Kathleen M. Carley

    E-print Network

    Sadeh, Norman M.

    The use of computational models in the social ... fewer have an understanding of how to validate such models. And while many papers extol the relative

  3. Construct Validation of the Self-Description Questionnaire II with a French Sample

    Microsoft Academic Search

    Florence Guérin; Herbert W. Marsh; Jean-Pierre Famose

    2003-01-01

    This investigation is a French validation of the Self-Description Questionnaire (SDQ) II, an instrument derived from the Marsh and Shavelson model and designed to measure adolescent self-concept. Previous theoretical and methodological considerations in SDQ research provided guidelines for the instrument

  4. Validation of models: statistical techniques and data availability

    Microsoft Academic Search

    Jack P. C. Kleijnen

    1999-01-01

    This paper shows which statistical techniques can be used to validate simulation models, depending on which real-life data are available. Concerning this availability, three situations are distinguished (i) no data, (ii) only output data, and (iii) both input and output data. In case (i) - no real data - the analysts can still experiment with the simulation model to obtain

  5. Ensuring the Validity of the Micro Foundation in DSGE Models

    Microsoft Academic Search

    Martin Møller Andreasen

    2008-01-01

    The presence of i) stochastic trends, ii) deterministic trends, and/or iii) stochastic volatility in DSGE models may imply that the agents' objective functions attain infinite values. We say that such models do not have a valid micro foundation. The paper derives sufficient conditions which ensure that the objective functions of the households and the firms are finite even

  6. Model Validation with Hybrid Dynamic Simulation

    SciTech Connect

    Huang, Zhenyu; Kosterev, Dmitry; Guttromson, Ross T.; Nguyen, Tony B.

    2006-06-22

    Model validation has been one of the central topics in power engineering studies for years. As model validation aims at obtaining reasonable models to represent actual behavior of power system components, it has been essential to validate models against actual measurements or known benchmark behavior. System-wide model simulation results can be compared with actual recordings. However, it is difficult to construct a simulation case for a large power system such as the WECC system and to narrow down to problematic models in a large system. Hybrid dynamic simulation with its capability of injecting external signals into dynamic simulation enables rigorous comparison of measurements and simulation in a small subsystem of interest. This paper presents such a model validation methodology with hybrid dynamic simulation. Two application examples on generator and load model validation are presented to show the validity of this model validation methodology. This methodology is further extended for automatic model validation and dichotomous subsystem model validation. A few methods to define model quality indices have been proposed to quantify model error for model validation criteria development.

  7. Validation of the Curiosity and Exploration Inventory-II (CEI-II) Among Chinese University Students in Hong Kong.

    PubMed

    Ye, Shengquan; Ng, Ting Kin; Yim, Kin Hang; Wang, Jun

    2015-01-01

    This study aimed at validating the Curiosity and Exploration Inventory-II (CEI-II; Kashdan et al., 2009) in a Chinese context. A total of 294 Chinese first-year undergraduate students in Hong Kong completed the CEI-II and measures of satisfaction with university life, the Big Five personality traits, and human values. The results of exploratory structural equation modeling, parallel analysis, and confirmatory factor analysis supported a 1-factor solution and did not replicate the original 2-factor structure. Time invariance of the 1-factor structure was obtained among 242 participants who completed the questionnaires again after 4 months. The latent means and correlation indicated that curiosity as measured by the CEI-II was quite stable over the period of investigation. The CEI-II was found to be positively correlated with satisfaction with university life, extraversion, agreeableness, conscientiousness, openness to experience, and openness to change values, but negatively with neuroticism and conservation values. The results of hierarchical multiple regression analyses showed that the CEI-II score had incremental validity above and beyond the Big Five personality traits in predicting human values and satisfaction with university life. PMID:25774779
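
    A condensed sketch of the incremental-validity check reported above, on synthetic data: regress an outcome on Big Five scores, add the CEI-II score, and inspect the gain in R-squared. The data-generating choices are placeholders, not the study's measures.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 294
      big_five = rng.normal(size=(n, 5))                       # synthetic Big Five scores
      cei = 0.4 * big_five[:, 3] + rng.normal(size=n)          # synthetic CEI-II (curiosity) score
      outcome = big_five @ np.array([0.1, 0.2, 0.1, 0.3, -0.2]) + 0.3 * cei + rng.normal(size=n)

      m_base = sm.OLS(outcome, sm.add_constant(big_five)).fit()
      m_full = sm.OLS(outcome, sm.add_constant(np.column_stack([big_five, cei]))).fit()
      print(f"delta R^2 from adding CEI-II: {m_full.rsquared - m_base.rsquared:.3f}")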

  8. Validation of PEP-II Resonantly Excited Turn-by-Turn BPM Data

    SciTech Connect

    Yan, Yiton T.; Cai, Yunhai; Colocho, William; Decker, Franz-Josef

    2007-06-28

    For optics measurement and modeling of the PEP-II electron (HER) and positron (LER) storage rings, we have been doing well with MIA [1] which requires analyzing turn-by-turn Beam Position Monitor (BPM) data that are resonantly excited at the horizontal, vertical, and longitudinal tunes. However, in anticipation that certain BPM buttons and even pins in the PEP-II IR region would be missing for the run starting in January 2007, we had been developing a data validation process to reduce the effect due to the reduced BPM data accuracy on PEP-II optics measurement and modeling. Besides the routine process for ranking BPM noise level through data correlation among BPMs with a singular-value decomposition (SVD), we could also check BPM data symplecticity by comparing the invariant ratios. Results from PEP-II measurement will be presented.
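
    A rough sketch of the SVD-based noise ranking mentioned above: stack turn-by-turn readings from all BPMs, keep a few dominant coherent modes, and rank BPMs by the fraction of their signal left in the residual. The array shapes and mode count are assumptions, not the MIA configuration.

      import numpy as np

      def rank_bpm_noise(readings, n_modes=4):
          # readings: (n_turns, n_bpms) turn-by-turn data; remove the mean orbit first
          d = readings - readings.mean(axis=0)
          u, s, vt = np.linalg.svd(d, full_matrices=False)
          coherent = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]
          noise_fraction = np.std(d - coherent, axis=0) / np.std(d, axis=0)
          return np.argsort(noise_fraction)[::-1], noise_fraction   # noisiest BPMs first

      turns = np.random.default_rng(2).normal(size=(1024, 300))     # placeholder readings
      order, noise = rank_bpm_noise(turns)
      print("noisiest five BPMs:", order[:5])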

  9. Proposed Modifications to the Conceptual Model of Coaching Efficacy and Additional Validity Evidence for the Coaching Efficacy Scale II-High School Teams

    ERIC Educational Resources Information Center

    Myers, Nicholas; Feltz, Deborah; Chase, Melissa

    2011-01-01

    The purpose of this study was to determine whether theoretically relevant sources of coaching efficacy could predict the measures derived from the Coaching Efficacy Scale II-High School Teams (CES II-HST). Data were collected from head coaches of high school teams in the United States (N = 799). The analytic framework was a multiple-group…

  10. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.

  11. Geochemistry Model Validation Report: External Accumulation Model

    SciTech Connect

    K. Zarrabi

    2001-09-27

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package. Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high-pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce the mass of accumulation, the density of accumulation, and the geometry of the accumulation zone. The density of accumulation and the geometry of the accumulation zone are calculated using a characterization of the fracture system based on field measurements made in the proposed repository (BSC 2001k). The model predicts that accumulation would spread out in a conical accumulation volume. The accumulation volume is represented with layers as shown in Figure 1. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance.

  12. Verifying and validating a simulation model

    Microsoft Academic Search

    Anbin Hu; Ye San; Zicai Wang

    2001-01-01

    This paper presents the verification and validation (V&V) of simulation models, with emphasis on possible model modification. Based on the analysis, a new framework is proposed and new terms are defined. An example is employed to demonstrate how the framework and the related terms are used in verifying and validating an existing model.

  13. Verification and validation of simulation models

    Microsoft Academic Search

    Jack P. C. Kleijnen

    1995-01-01

    This paper surveys verification and validation of models, especially simulation models in operations research. For verification it discusses 1) general good programming practice (such as modular programming), 2) checking intermediate simulation outputs through tracing and statistical testing per module, 3) statistical testing of final simulation outputs against analytical results, and 4) animation. For validation it discusses 1) obtaining real-world data,

  14. NASA GSFC CCMC Recent Model Validation Activities

    NASA Technical Reports Server (NTRS)

    Rastaetter, L.; Pulkkinen, A.; Taktakishvill, A.; Macneice, P.; Shim, J. S.; Zheng, Yihua; Kuznetsova, M. M.; Hesse, M.

    2012-01-01

    The Community Coordinated Modeling Center (CCMC) holds the largest assembly of state-of-the-art physics-based space weather models developed by the international space physics community. In addition to providing the community easy access to these modern space research models to support science research, another primary goal is to test and validate models for transition from research to operations. In this presentation, we provide an overview of the space science models available at CCMC. Then we will focus on the community-wide model validation efforts led by CCMC in all domains of the Sun-Earth system and the internal validation efforts at CCMC to support space weather services/operations provided by its sibling organization, the NASA GSFC Space Weather Center (http://swc.gsfc.nasa.gov). We will also discuss our efforts in operational model validation in collaboration with NOAA/SWPC.

  15. Modeling Input Validation in UML

    Microsoft Academic Search

    Pedram Hayati; Nastaran Jafari; S. Mohammad Rezaei; Saeed Sarenche; Vidyasagar Potdar

    2008-01-01

    Abstract Security is an integral part of most,software systems but it is not considered ,as an ,explicit part in the development,process yet. Input validation is the most critical part of software security that is not covered in the design phase ,of software ,development ,life-cycle resulting,in many ,security ,vulnerabilities. Our

  16. Validation of module assembly physical models

    Microsoft Academic Search

    R. Iannuzzelli

    1990-01-01

    Some typical models used in the assembly of electronic modules are presented along with data establishing the validity of these models. Six cases are examined: two cases of PTH/PWB (plated-through-hole/printed wiring board) model validation; SMT (surface mount technology) reliability prediction using the matrix creep method; prediction of creep rupture times of SMT butt joints; PGA (pin grid array) cracking; and

  17. Computational Modeling and Experimental Validation of Aviation

    E-print Network

    Zhang, Richard "Hao"

    Computational Modeling and Experimental Validation of Aviation Security Procedures, January 2006. Abstract: Security of civil aviation has become a major concern in recent years, leading ... and experimental validation of aviation security combining abstract state machine (ASM) specification techniques

  18. Algorithm for model validation: Theory and applications

    PubMed Central

    Sornette, D.; Davis, A. B.; Ide, K.; Vixie, K. R.; Pisarenko, V.; Kamm, J. R.

    2007-01-01

    Validation is often defined as the process of determining the degree to which a model is an accurate representation of the real world from the perspective of its intended uses. Validation is crucial as industries and governments depend increasingly on predictions by computer models to justify their decisions. We propose to formulate the validation of a given model as an iterative construction process that mimics the often implicit process occurring in the minds of scientists. We offer a formal representation of the progressive build-up of trust in the model. Thus, we replace static claims on the impossibility of validating a given model by a dynamic process of constructive approximation. This approach is better adapted to the fuzzy, coarse-grained nature of validation. Our procedure factors in the degree of redundancy versus novelty of the experiments used for validation as well as the degree to which the model predicts the observations. We illustrate the methodology first with the maturation of quantum mechanics as the arguably best established physics theory and then with several concrete examples drawn from some of our primary scientific interests: a cellular automaton model for earthquakes, a multifractal random walk model for financial time series, an anomalous diffusion model for solar radiation transport in the cloudy atmosphere, and a computational fluid dynamics code for the Richtmyer–Meshkov instability. PMID:17420476

  19. The range of validity of the two-body approximation in models of terrestrial planet accumulation. II - Gravitational cross sections and runaway accretion

    NASA Technical Reports Server (NTRS)

    Wetherill, G. W.; Cox, L. P.

    1985-01-01

    The validity of the two-body approximation in calculating encounters between planetesimals has been evaluated as a function of the ratio of unperturbed planetesimal velocity (with respect to a circular orbit) to mutual escape velocity when their surfaces are in contact (V/V_e). Impact rates as a function of this ratio are calculated to within about 20 percent by numerical integration of the equations of motion. It is found that when the ratio is greater than 0.4 the two-body approximation is a good one. Consequences of reducing the ratio to less than 0.02 are examined. Factors leading to an optimal size for growth of planetesimals from a swarm of given eccentricity and placing a limit on the extent of runaway accretion are derived.
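
    A short numeric sketch of the governing ratio: the mutual escape velocity when the planetesimal surfaces touch, and the resulting V/V_e for a few assumed encounter velocities. The 100-km body size, density, and velocities are illustrative choices, not values taken from the paper.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def mutual_escape_velocity(m1, m2, r1, r2):
    """Escape velocity from the pair when their surfaces are in contact."""
    return math.sqrt(2.0 * G * (m1 + m2) / (r1 + r2))

# Illustrative planetesimals (assumed values): two 100-km rocky bodies.
rho = 3000.0                      # bulk density, kg/m^3
r = 100e3                         # radius, m
m = (4.0 / 3.0) * math.pi * r**3 * rho
v_esc = mutual_escape_velocity(m, m, r, r)

for v_rel in (10.0, 50.0, 200.0):   # assumed unperturbed relative velocities, m/s
    ratio = v_rel / v_esc
    regime = "two-body approximation adequate" if ratio > 0.4 else "three-body effects important"
    print(f"V/V_e = {ratio:.2f} -> {regime}")
```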

  20. Systematic Independent Validation of Inner Heliospheric Models

    NASA Technical Reports Server (NTRS)

    MacNeice, P. J.; Takakishvili, Alexandre

    2008-01-01

    This presentation is the first in a series which will provide independent validation of community models of the outer corona and inner heliosphere. In this work we establish a set of measures to be used in validating this group of models. We use these procedures to generate a comprehensive set of results from the Wang-Sheeley-Arge (WSA) model which will be used as a baseline, or reference, against which to compare all other models. We also run a test of the validation procedures by applying them to a small set of results produced by the ENLIL Magnetohydrodynamic (MHD) model. In future presentations we will validate other models currently hosted by the Community Coordinated Modeling Center (CCMC), including a comprehensive validation of the ENLIL model. The Wang-Sheeley-Arge (WSA) model is widely used to model the Solar wind, and is used by a number of agencies to predict Solar wind conditions at Earth as much as four days into the future. Because it is so important to both the research and space weather forecasting communities, it is essential that its performance be measured systematically, and independently. In this paper we offer just such an independent and systematic validation. We report skill scores for the model's predictions of wind speed and IMF polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests line of sight magnetograms. For this study we generated model results for monthly magnetograms from the National Solar Observatory (SOLIS), Mount Wilson Observatory and the GONG network, spanning the Carrington rotation range from 1650 to 2068. We compare the influence of the different magnetogram sources, performance at quiet and active times, and estimate the effect of different empirical wind speed tunings. We also consider the ability of the WSA model to identify sharp transitions in wind speed from slow to fast wind. These results will serve as a baseline against which to compare future versions of the model.
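
    The study's own skill scores are not reproduced here, but one common way to score a wind-speed prediction against a reference forecast (for example, persistence) is a mean-square-error skill score, and polarity predictions can be summarized by a simple hit rate. The sketch below uses invented numbers and metric choices that are assumptions, not the actual validation procedure.

```python
import numpy as np

def msess(predicted, observed, reference):
    """Mean-square-error skill score relative to a reference forecast:
    1 = perfect, 0 = no better than the reference, negative = worse."""
    mse_model = np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2)
    mse_ref = np.mean((np.asarray(reference) - np.asarray(observed)) ** 2)
    return 1.0 - mse_model / mse_ref

def polarity_hit_rate(predicted_sign, observed_sign):
    """Fraction of samples where the predicted IMF polarity matches the observation."""
    return float(np.mean(np.sign(predicted_sign) == np.sign(observed_sign)))

# Invented wind speeds (km/s); persistence (the previous value, here wrapped
# cyclically for brevity) serves as the reference forecast.
observed = np.array([420., 450., 600., 380., 520., 470.])
predicted = np.array([400., 470., 550., 400., 560., 450.])
persistence = np.roll(observed, 1)
print(f"wind-speed skill vs persistence: {msess(predicted, observed, persistence):.2f}")

bz_pred = np.array([-1, 1, 1, -1, 1, -1])
bz_obs = np.array([-1, 1, -1, -1, 1, 1])
print(f"IMF polarity hit rate: {polarity_hit_rate(bz_pred, bz_obs):.2f}")
```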

  1. Tracer travel time and model validation

    SciTech Connect

    Tsang, Chin-Fu

    1988-01-01

    The performance assessment of a nuclear waste repository demands much more than the safety evaluation of civil constructions such as dams, or the resource evaluation of a petroleum or geothermal reservoir. It involves estimating low-probability (low-concentration) radionuclide transport extrapolated thousands of years into the future. Thus the models used to make these estimates need to be carefully validated. A number of recent efforts have been devoted to the study of this problem. Some general comments on model validation were given by Tsang. The present paper discusses some issues of validation with regard to radionuclide transport. 5 refs.

  2. An approach to validation of thermomechanical models

    SciTech Connect

    Costin, L.S. [Sandia National Labs., Albuquerque, NM (United States); Hardy, M.P.; Brechtel, C.E. [Agapito (J.F.T.) and Associates, Inc., Grand Junction, CO (United States)

    1993-08-01

    Thermomechanical models are being developed to support the design of an Exploratory Studies Facility (ESF) and a potential high-level nuclear waste repository at Yucca Mountain, Nevada. These models are used for preclosure design of underground openings, such as access drifts, emplacement drifts, and waste emplacement boreholes; and in support of postclosure issue resolution relating to waste canister performance, disturbance of the hydrological properties of the host rock, and overall system performance assessment. For both design and performance assessment, the purpose of using models in analyses is to better understand and quantify some phenomenon or process. Therefore, validation is an important process that must be pursued in conjunction with the development and application of models. The Site Characterization Plan (SCP) addressed some general aspects of model validation, but no specific approach has, as yet, been developed for either design or performance assessment models. This paper will discuss a proposed process for thermomechanical model validation and will focus on the use of laboratory and in situ experiments as part of the validation process. The process may be generic enough in nature that it could be applied to the validation of other types of models, for example, models of unsaturated hydrologic flow.

  3. Validation of a synoptic solar wind model

    Microsoft Academic Search

    O. Cohen; I. V. Sokolov; I. I. Roussev; T. I. Gombosi

    2008-01-01

    We present a validation of a three-dimensional magnetohydrodynamic model for the solar corona and the inner heliosphere. We compare the results of the model with long-term satellite data at 1 AU for a 1 year period during solar minimum and another year period of solar maximum. Overall, the model predicts rather well the magnitude of the magnetohydrodynamical variables for solar

  4. Biomass burning emissions over northern Australia constrained by aerosol measurements: II—Model validation, and impacts on air quality and radiative forcing

    Microsoft Academic Search

    Ashok K. Luhar; Ross M. Mitchell; Yi Qin; Susan Campbell; John L. Gras; David Parry

    2008-01-01

    This two-part series investigates the emission and transport of biomass burning aerosol (or particulate matter) across the Top End of the Northern Territory of Australia. In Part I, Meyer et al. [2008. Biomass burning emissions over northern Australia constrained by aerosol measurements: I—Modelling the distribution of hourly emissions. Atmospheric Environment, in press, doi:10.1016/j.atmosenv.2007.10.089.] used a fuel load distribution coupled with

  5. Concepts of Model Verification and Validation

    Microsoft Academic Search

    B. H. Thacker; S. W. Doebling; F. M. Hemez; M. C. Anderson; J. E. Pepin; E. A. Rodriguez

    2004-01-01

    Model verification and validation (V&V) is an enabling methodology for the development of computational models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost, and risk associated with full-scale testing of products, materials, and weapon systems. Quantifying the confidence and predictive accuracy of model

  6. Selected results of a model validation exercise

    NASA Astrophysics Data System (ADS)

    Piringer, M.; Baumann-Stanzer, K.

    2009-04-01

    The concentration fields calculated with three Gaussian models and one Lagrangian dispersion model are validated against a set of SF6 concentration data provided by the German environmental programme BWPLUS. The source was a pig fattening unit in fairly flat terrain. The results reveal that, in flat terrain with steady undisturbed flow, the use of Gauss models is still justified, whereas Lagrangian models should be used whenever the flow is modified by obstacles or topography.

  7. Experimental Validation of IRA Models

    Microsoft Academic Search

    Everett G. Farr; C. Jerald Buchenauer

    We present here data taken on a large tabletop transient antenna range to characterize lens IRAs, reflector IRAs, and TEM horns. Comparisons are made to simple analytical models. Based on these preliminary results, we find good agreement between our experiments and the theory.

  8. Numerical model representation and validation strategies

    SciTech Connect

    Dolin, R.M.; Hefele, J.

    1997-10-01

    This paper describes model representation and validation strategies for use in numerical tools that define models in terms of topology, geometry, or topography. Examples of such tools include Computer-Assisted Engineering (CAE), Computer-Assisted Manufacturing (CAM), Finite Element Analysis (FEA), and Virtual Environment Simulation (VES) tools. These tools represent either physical objects or conceptual ideas using numerical models for the purpose of posing a question, performing a task, or generating information. Dependence on these numerical representations requires that models be precise, consistent across different applications, and verifiable. This paper describes a strategy for ensuring precise, consistent, and verifiable numerical model representations in a topographic framework. The main assertion put forth is that topographic model descriptions are more appropriate for numerical applications than topological or geometrical descriptions. A topographic model verification and validation methodology is presented.

  9. Testing the Validity of Cost-Effectiveness Models

    E-print Network

    Oakley, Jeremy

    Testing the Validity of Cost-Effectiveness Models. Chris McCabe and Simon Dixon, Sheffield Health. [Contents include: Review of Previous Attempts to Establish Validity; Validating Cost-Effectiveness Models.]

  10. Statistical validation of physical system models

    SciTech Connect

    Paez, T.L.; Barney, P. [Sandia National Lab., Albuquerque, NM (United States); Hunter, N.F. [Los Alamos National Lab., NM (United States); Ferregut, C.; Perez, L.E. [Univ. of Texas, El Paso, TX (United States). FAST Center for Structural Integrity of Aerospace Systems

    1996-10-01

    It is common practice in applied mechanics to develop mathematical models for mechanical system behavior. Frequently, the actual physical system being modeled is also available for testing, and sometimes the test data are used to help identify the parameters of the mathematical model. However, no general-purpose technique exists for formally, statistically judging the quality of a model. This paper suggests a formal statistical procedure for the validation of mathematical models of physical systems when data taken during operation of the physical system are available. The statistical validation procedure is based on the bootstrap, and it seeks to build a framework where a statistical test of hypothesis can be run to determine whether or not a mathematical model is an acceptable model of a physical system with regard to user-specified measures of system behavior. The approach to model validation developed in this study uses experimental data to estimate the marginal and joint confidence intervals of statistics of interest of the physical system. These same measures of behavior are estimated for the mathematical model. The statistics of interest from the mathematical model are located relative to the confidence intervals for the statistics obtained from the experimental data. These relative locations are used to judge the accuracy of the mathematical model. A numerical example is presented to demonstrate the application of the technique.
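
    As a minimal sketch of the bootstrap-based idea (assuming a percentile bootstrap and an invented data set, not the paper's actual procedure), the snippet below builds a confidence interval for a statistic of interest from test data and checks whether the mathematical model's prediction falls inside it.

```python
import numpy as np

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of interest
    estimated from operational test data."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    stats = np.array([statistic(rng.choice(data, size=data.size, replace=True))
                      for _ in range(n_boot)])
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Invented measured peak responses and a hypothetical model prediction of the mean.
measured = np.array([1.12, 0.98, 1.05, 1.21, 0.95, 1.10, 1.03, 1.17])
model_prediction = 1.30
lo, hi = bootstrap_ci(measured, np.mean)
verdict = "inside" if lo <= model_prediction <= hi else "outside"
print(f"95% CI for the mean: [{lo:.3f}, {hi:.3f}]; model prediction lies {verdict}")
```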

  11. Structural system identification: Structural dynamics model validation

    SciTech Connect

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  12. Validation of Space Weather Models at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, M. M.; Pulkkinen, A.; Rastaetter, L.; Hesse, M.; Chulaki, A.; Maddox, M.

    2011-01-01

    The Community Coordinated Modeling Center (CCMC) is a multiagency partnership, which aims at the creation of next-generation space weather models. The CCMC's goal is to support the research and developmental work necessary to substantially increase space weather modeling capabilities and to facilitate the deployment of advanced models in forecasting operations. The CCMC conducts unbiased model testing and validation and evaluates model readiness for the operational environment. The presentation will demonstrate recent progress in CCMC metrics and validation activities.

  13. Validation of Hadronic Models in GEANT4

    SciTech Connect

    Koi, Tatsumi; Wright, Dennis H.; /SLAC; Folger, Gunter; Ivanchenko, Vladimir; Kossov, Mikhail; Starkov, Nikolai; /CERN; Heikkinen, Aatos; /Helsinki Inst. of Phys.; Truscott; Lei, Fan; /QinetiQ; Wellisch, Hans-Peter

    2007-09-26

    Geant4 is a software toolkit for the simulation of the passage of particles through matter. It has abundant hadronic models, from thermal neutron interactions to ultra-relativistic hadrons. An overview of validations in Geant4 hadronic physics is presented based on thin target measurements. In most cases, good agreement is found between Monte Carlo predictions and experimental data; however, several problems have been detected which require some improvement in the models.

  14. Verification validation and accreditation of simulation models

    Microsoft Academic Search

    Osman Balci

    1997-01-01

    This paper presents guidelines for conducting verification, validation and accreditation (VV&A) of simulation models. Fifteen guiding principles are introduced to help the researchers, practitioners and managers better comprehend what VV&A is all about. The VV&A activities are described in the modeling and simulation life cycle. A taxonomy of more than 77 V&V techniques is provided to assist simulationists

  15. Calibration and validation of rockfall models

    NASA Astrophysics Data System (ADS)

    Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.

    2013-04-01

    Calibrating and validating landslide models is extremely difficult due to the particular characteristics of landslides: limited recurrence in time, relatively low frequency of the events, short durability of post-event traces, and poor availability of continuous monitoring data, especially for small landslides and rockfalls. For this reason, most of the rockfall models presented in the literature completely lack calibration and validation of the results. In this contribution, we explore different strategies for rockfall model calibration and validation starting from both an historical event and a full-scale field test. The event occurred in 2012 in Courmayeur (Western Alps, Italy), and caused serious damage to quarrying facilities. This event was studied soon after the occurrence through a field campaign aimed at mapping the blocks arrested along the slope, the shape and location of the detachment area, and the traces of scars associated with impacts of blocks on the slope. The full-scale field test was performed by Geovert Ltd in the Christchurch area (New Zealand) after the 2011 earthquake. During the test, a number of large blocks were mobilized from the upper part of the slope and filmed with high velocity cameras from different viewpoints. The movies of each released block were analysed to identify the block shape, the propagation path, the location of impacts, the height of the trajectory and the velocity of the block along the path. Both calibration and validation of rockfall models should be based on the optimization of the agreement between the actual trajectories or locations of arrested blocks and the simulated ones. A measure that describes this agreement is therefore needed. For calibration purposes, this measure should be simple enough to allow trial and error repetitions of the model for parameter optimization. In this contribution we explore different calibration/validation measures: (1) the percentage of simulated blocks arresting within a buffer of the actual blocks, (2) the percentage of trajectories passing through the buffer of the actual rockfall path, (3) the mean distance between the location of arrest of each simulated block and the location of the nearest actual block; (4) the mean distance between the location of detachment of each simulated block and the location of detachment of the actual block located closer to the arrest position. By applying the four measures to the case studies, we observed that all measures are able to represent the model performance for validation purposes. However, the third measure is simpler and more reliable than the others, and seems to be optimal for model calibration, especially when using parameter estimation and optimization modelling software for automated calibration.
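
    Measure (3) above, the mean distance between each simulated arrest position and the nearest mapped block, is straightforward to compute. The sketch below uses invented coordinates and a hypothetical function name; it is an illustration of the measure, not the authors' calibration code.

```python
import numpy as np

def mean_nearest_arrest_distance(simulated_xy, observed_xy):
    """Mean distance from each simulated block's arrest position to the
    nearest observed (field-mapped) block."""
    simulated_xy = np.asarray(simulated_xy, dtype=float)
    observed_xy = np.asarray(observed_xy, dtype=float)
    # Pairwise distances: shape (n_simulated, n_observed).
    d = np.linalg.norm(simulated_xy[:, None, :] - observed_xy[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Illustrative coordinates (metres) of arrested blocks.
observed = np.array([[10., 3.], [14., 5.], [22., 4.]])
simulated = np.array([[11., 4.], [15., 7.], [30., 2.], [13., 5.]])
print(f"mean nearest-block distance: {mean_nearest_arrest_distance(simulated, observed):.2f} m")
```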

  16. Solar Sail Model Validation from Echo Trajectories

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Brickerhoff, Adam T.

    2007-01-01

    The NASA In-Space Propulsion program has been engaged in a project to increase the technology readiness of solar sails. Recently, these efforts came to fruition in the form of several software tools to model solar sail guidance, navigation and control. Furthermore, solar sails are one of five technologies competing for the New Millennium Program Space Technology 9 flight demonstration mission. The historic Echo 1 and Echo 2 balloons were comprised of aluminized Mylar, which is the near-term material of choice for solar sails. Both spacecraft, but particularly Echo 2, were in low Earth orbits with characteristics similar to the proposed Space Technology 9 orbit. Therefore, the Echo balloons are excellent test cases for solar sail model validation. We present the results of studies of Echo trajectories that validate solar sail models of optics, solar radiation pressure, shape and low-thrust orbital dynamics.

  17. Model validation: Cooling-tower performance

    SciTech Connect

    Miller, P.B.; Starnes, G.L. (Pacific Gas and Electric Co., San Ramon, CA (USA). Technical and Ecological Services)

    1989-11-01

    The purpose of the fill performance validation project is to examine the accuracy of the cooling tower computer models and fill performance data that have recently been made available through EPRI. This project compares actual full scale tower performance test results to those predicted by the tower models. The cooling tower models used in this project include: FACTS/FACTR, developed by the Tennessee Valley Authority (TVA); VERA2D-86/VERAT, developed by CHAM of North America; and TEFERI, developed by Electricite de France. All full scale cooling tower performance test data used to validate the models were collected at PG&E's Full Scale Crossflow Testing Facility. Three different fills have been tested in this facility during this project. Test results for one of the fill types (Type T') are directly compared to predictions based on results from the EPRI Small Scale Test Facility. The other full scale test results are used to validate the hybrid fill modeling capabilities of the FACTS computer code. 9 refs., 6 figs., 7 tabs.

  18. Validity evidence based on internal structure of scores on the Spanish version of the Self-Description Questionnaire-II.

    PubMed

    Ingles, Cándido J; Torregrosa, María S; Hidalgo, María D; Nuñez, Jose C; Castejón, Juan L; García-Fernández, Jose M; Valles, Antonio

    2012-03-01

    The aim of this study was to analyze the reliability and validity evidence of scores on the Spanish version of the Self-Description Questionnaire II (SDQ-II). The instrument was administered in a sample of 2022 Spanish students (51.1% boys) from grades 7 to 10. Confirmatory factor analysis (CFA) was used to examine validity evidence based on internal structure drawn from the scores on the SDQ-II. CFA replicated the correlated 11 first-order factor structure. Furthermore, hierarchical confirmatory factor analysis (HCFA) was used to examine the hierarchical ordering of self-concept, as measured by scores on the Spanish version of the SDQ-II. Although a series of HCFA models were tested to assess the organization of academic and non-academic components, support for those hierarchical models was weaker than for the correlated 11 first-order factor structure. Results also indicated that scores on the Spanish version of the SDQ-II had internal consistency and test-retest reliability estimates within an acceptable range. PMID:22379728

  19. O`ahu Grid Study: Validation of Grid Models

    E-print Network

    O`ahu Grid Study: Validation of Grid Models. Prepared for the U.S. Department of Energy. [Contents include: Model Validation; Production Cost Modeling (GE MAPS(TM) Analysis).]

  20. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    USGS Publications Warehouse

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead of promoting passive or self-righteous decisions.

  1. Using Model Checking to Validate AI Planner Domain Models

    NASA Technical Reports Server (NTRS)

    Penix, John; Pecheur, Charles; Havelund, Klaus

    1999-01-01

    This report describes an investigation into using model checking to assist validation of domain models for the HSTS planner. The planner models are specified using a qualitative temporal interval logic with quantitative duration constraints. We conducted several experiments to translate the domain modeling language into the SMV, Spin and Murphi model checkers. This allowed a direct comparison of how the different systems would support specific types of validation tasks. The preliminary results indicate that model checking is useful for finding faults in models that may not be easily identified by generating test plans.

  2. Chromatographic Method Validation: A Review of Current Practices and Procedures. II. Guidelines for Primary Validation Parameters

    Microsoft Academic Search

    Dennis R. Jenke

    1996-01-01

    Validation of analytical methodologies is an important aspect of their development/utilization and is widely required in support of product registration applications. In this manuscript, definitions, procedures and acceptance criteria which appear in the pharmaceutical literature are summarized for the more commonly encountered validation parameters. Parameters examined include accuracy, precision, specificity, linearity and sensitivity limits.

  3. Fundamental Diagram and Validation of Crowd Models

    Microsoft Academic Search

    Armin Seyfried; Andreas Schadschneider

    2008-01-01

    In recent years, several approaches for crowd modeling have been proposed. However, so far not much attention has been paid to their quantitative validation. The fundamental diagram, i.e. the density dependence of the flow or velocity, is probably the most important relation, as it connects the basic parameters describing the dynamics of crowds. But specifications in different handbooks as

  4. Historical validation of an attrition model

    SciTech Connect

    Hartley, D.S. III.

    1990-05-01

    This paper is the third in a series of reports on the breakthrough research in historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based, in part, on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws. None of these algorithms have been validated. Helmbold demonstrated a relationship between the Helmbold ratio, a ratio containing initial force sizes and casualties, and the initial force ratio in a large number of historical battles. It has also been shown that at least two models of warfare could produce these results, a mixed linear-logarithmic Lanchestrian attrition law and a constraint (of battle engagement and termination) model of attrition. This paper examines the distribution statistics of the historical data and determines that the mixed law model is favored. The differential form of the mixed law model that best fits the casualty data is found. This model also provides a parameter to predict the victor. 6 refs., 28 figs., 13 tabs.
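
    The mixed linear-logarithmic law itself is not reproduced here. As a hedged illustration of the Lanchester family of attrition models, the sketch below integrates the classic square law and prints a simple casualty-fraction summary; the coefficients and force sizes are invented, and the Helmbold ratio used in the report has its own specific definition that is not reproduced here.

```python
def lanchester_square(b0, r0, beta, rho, dt=0.01, t_end=50.0):
    """Forward-Euler integration of the classic Lanchester square law:
        dB/dt = -rho * R,   dR/dt = -beta * B
    Stops when one side is annihilated or t_end is reached."""
    b, r, t = float(b0), float(r0), 0.0
    while b > 0.0 and r > 0.0 and t < t_end:
        b, r = b - rho * r * dt, r - beta * b * dt
        t += dt
    return max(b, 0.0), max(r, 0.0), t

b_final, r_final, t = lanchester_square(b0=1000, r0=800, beta=0.05, rho=0.06)
print(f"after t = {t:.1f}: blue = {b_final:.0f}, red = {r_final:.0f}")

# A simple casualty-fraction exchange ratio as one summary of the outcome
# (not the Helmbold ratio itself).
blue_cas, red_cas = 1000 - b_final, 800 - r_final
print(f"casualty-fraction ratio (red/blue): {(red_cas / 800) / (blue_cas / 1000):.2f}")
```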

  5. Morphodynamic model validation for tropical river junctions

    NASA Astrophysics Data System (ADS)

    Dixon, Simon; Nicholas, Andrew; Sambrook Smith, Greg

    2015-04-01

    The use of morphodynamic numerical modelling as an exploratory tool for understanding tropical braided river evolution and processes is well established. However, there remains a challenge in confirming how well complex numerical models are representing reality. Complete validation of morphodynamic models is likely to prove impossible, with confirmation of model predictions inherently partial and validation only ever possible in relative terms. Within these limitations it is still vital for researchers to confirm that models are accurately representing morphodynamic processes and that model output is shown to match a variety of field observations to increase the probability the model is performing correctly. To date the majority of morphodynamic model validation has focused on comparing planform features or statistics from a single time slice. Furthermore, these approaches have also usually only discriminated between "wet" and "dry" parts of the system with no account for vegetation. There is therefore a need for a robust method to compare the morphological evolution of tropical braided rivers to model output. In this presentation we describe a method for extracting land cover classification data from Landsat imagery using a supervised classification system. By generating land cover classifications, including vegetation, for multiple years we are then able to generate areas of erosion and deposition between years. These data allow comparison between the predictions generated by an established morphodynamic model (HSTAR) and field data between time-steps, as well as for individual time steps. This effectively allows the "dynamic" aspect of the morphodynamic model predictions to be compared to observations. We further advance these comparisons by using image analysis techniques to compare the planform, erosional, and depositional shapes generated by the model and from field observations. Using this suite of techniques we are able to dramatically increase the number and detail of our observational data and the robustness of resulting comparisons to model predictions. By increasing our confidence in model output we are able to subsequently use numerical modelling as a heuristic tool to investigate tropical river processes and morphodynamics at large river junctions.
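
    A minimal sketch of deriving erosion and deposition areas from two classified images, assuming an integer land-cover coding (0 = water channel, non-zero = exposed bar or vegetation). The coding, function name, and array sizes are illustrative assumptions, not the HSTAR comparison workflow.

```python
import numpy as np

def change_maps(classes_year1, classes_year2):
    """Boolean maps of cells eroded (land -> water) and deposited (water -> land)
    between two co-registered classified images of the same reach."""
    erosion = (classes_year1 != 0) & (classes_year2 == 0)
    deposition = (classes_year1 == 0) & (classes_year2 != 0)
    return erosion, deposition

# Tiny illustrative grids: 0 = water, 1 = bare sediment, 2 = vegetation.
year1 = np.array([[2, 1, 0],
                  [1, 1, 0],
                  [0, 0, 0]])
year2 = np.array([[2, 0, 0],
                  [1, 0, 1],
                  [0, 1, 1]])
erosion, deposition = change_maps(year1, year2)
print("eroded cells:", int(erosion.sum()), "| deposited cells:", int(deposition.sum()))
```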

  6. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.

  7. Validation of Space Weather Models at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, M. M.; Hesse, M.; Pulkkinen, A.; Maddox, M.; Rastaetter, L.; Berrios, D.; Zheng, Y.; MacNeice, P. J.; Shim, J.; Taktakishvili, A.; Chulaki, A.

    2011-01-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership to support the research and developmental work necessary to substantially increase space weather modeling capabilities and to facilitate advanced models deployment in forecasting operations. Space weather models and coupled model chains hosted at the CCMC range from the solar corona to the Earth's upper atmosphere. CCMC has developed a number of real-time modeling systems, as well as a large number of modeling and data products tailored to address the space weather needs of NASA's robotic missions. The CCMC conducts unbiased model testing and validation and evaluates model readiness for operational environment. CCMC has been leading recent comprehensive modeling challenges under GEM, CEDAR and SHINE programs. The presentation will focus on experience in carrying out comprehensive and systematic validation of large sets of space weather models.

  8. Validity

    NSDL National Science Digital Library

    Edwin P. Christmann

    2008-11-01

    In this chapter, the authors will describe the four types of validity: construct validity, content validity, concurrent validity, and predictive validity. Depending on the test and the rationale or purpose for its administration, and understanding of the

  9. Validation of Computational Models in Biomechanics

    PubMed Central

    Henninger, Heath B.; Reese, Shawn P.; Anderson, Andrew E.; Weiss, Jeffrey A.

    2010-01-01

    The topics of verification and validation (V&V) have increasingly been discussed in the field of computational biomechanics, and many recent articles have applied these concepts in an attempt to build credibility for models of complex biological systems. V&V are evolving techniques that, if used improperly, can lead to false conclusions about a system under study. In basic science these erroneous conclusions may lead to failure of a subsequent hypothesis, but they can have more profound effects if the model is designed to predict patient outcomes. While several authors have reviewed V&V as they pertain to traditional solid and fluid mechanics, it is the intent of this manuscript to present them in the context of computational biomechanics. Specifically, the task of model validation will be discussed with a focus on current techniques. It is hoped that this review will encourage investigators to engage and adopt the V&V process in an effort to increase peer acceptance of computational biomechanics models. PMID:20839648

  10. VALIDITY OF THE STANDARD CROSS-CORRELATION TEST FOR MODEL

    E-print Network

    Van den Hof, Paul

    Validity of the Standard Cross-Correlation Test for Model Structure Validation. Sippe G. Douma. ... uncertainty regions. An essential step in a system identification procedure is the (in)validation ... that this standard test itself is valid only under exactly those assumptions it is meant to verify. As a result

  11. Model Validation of Power System Components Using Hybrid Dynamic Simulation

    SciTech Connect

    Huang, Zhenyu; Nguyen, Tony B.; Kosterev, Dmitry; Guttromson, Ross T.

    2008-05-31

    Hybrid dynamic simulation, with its capability of injecting external signals into dynamic simulation, opens the traditional dynamic simulation loop for interaction with actual field measurements. This simulation technique enables rigorous comparison between simulation results and actual measurements and model validation of individual power system components within a small subsystem. This paper uses a real example of generator model validation to illustrate the procedure and validity of the component model validation methodology using hybrid dynamic simulation. Initial model calibration has also been carried out to show how model validation results would be used to improve component models

  12. Model Validation of Power System Components Using Hybrid Dynamic Simulation

    SciTech Connect

    Huang, Zhenyu; Nguyen, Tony B.; Kosterev, Dmitry; Guttromson, Ross T.

    2006-05-21

    Abstract—Hybrid dynamic simulation, with its capability of injecting external signals into dynamic simulation, opens the traditional dynamic simulation loop for interaction with actual field measurements. This simulation technique enables rigorous comparison between simulation results and actual measurements and model validation of individual power system components within a small subsystem. This paper uses a real example of generator model validation to illustrate the procedure and validity of the component model validation methodology using hybrid dynamic simulation. Initial model calibration has also been carried out to show how model validation results would be used to improve component models.

  13. Computational modeling and validation for hypersonic inlets

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    1990-01-01

    Hypersonic inlet research activity at NASA is reviewed. The basis is the experimental tests performed with three inlets: the NASA Lewis Research Center Mach 5, the McDonnell Douglas Mach 12, and the NASA Langley Mach 18. Both three-dimensional parabolized Navier-Stokes and Navier-Stokes codes were used to compute the flow within the three inlets. Modeling assumptions in the codes involve the turbulence model, the nature of the boundary layer, shock wave boundary layer interaction, and the flow spilled to the outside of the inlet. Use of the codes in conjunction with the experimental data is helping to develop a clearer understanding of the inlet flow physics and to focus on the modeling improvements required in order to arrive at validated codes.

  14. Validation of Kp Estimation and Prediction Models

    NASA Astrophysics Data System (ADS)

    McCollough, J. P., II; Young, S. L.; Frey, W.

    2014-12-01

    Specification and forecast of geomagnetic indices is an important capability for space weather operations. The University Partnering for Operational Support (UPOS) effort at the Applied Physics Laboratory of Johns Hopkins University (JHU/APL) produced many space weather models, including the Kp Predictor and Kp Estimator. We perform a validation of index forecast products against definitive indices computed by the Deutsches GeoForschungsZentrum (GFZ) Potsdam. We compute continuous predictand skill scores, as well as 2x2 contingency tables and associated scalar quantities for different index thresholds. We also compute a skill score against a nowcast persistence model. We discuss various sources of error for the models and how they may potentially be improved.
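
    The 2x2 contingency-table quantities mentioned above can be computed as in the sketch below (probability of detection, false alarm ratio, and Heidke skill score for exceeding a chosen Kp threshold). The forecast and "definitive" values shown are invented for illustration, not UPOS or GFZ data.

```python
import numpy as np

def contingency_scores(predicted, observed, threshold):
    """POD, FAR and Heidke skill score for exceeding a Kp threshold."""
    p = np.asarray(predicted) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(p & o)
    misses = np.sum(~p & o)
    false_alarms = np.sum(p & ~o)
    correct_negatives = np.sum(~p & ~o)
    pod = hits / (hits + misses) if (hits + misses) else np.nan
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else np.nan
    n = hits + misses + false_alarms + correct_negatives
    # Expected number of correct classifications by chance.
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses) * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return pod, far, hss

# Illustrative 3-hourly Kp values: forecast vs. definitive index.
forecast = np.array([2, 3, 5, 6, 4, 2, 7, 5, 3, 4])
definitive = np.array([2, 4, 5, 5, 3, 2, 6, 6, 3, 5])
pod, far, hss = contingency_scores(forecast, definitive, threshold=5)
print(f"POD = {pod:.2f}  FAR = {far:.2f}  HSS = {hss:.2f}")
```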

  15. Integrated Medical Model Verification, Validation, and Credibility

    NASA Technical Reports Server (NTRS)

    Walton, Marlei; Kerstman, Eric; Foy, Millennia; Shah, Ronak; Saile, Lynn; Boley, Lynn; Butler, Doug; Myers, Jerry

    2014-01-01

    The Integrated Medical Model (IMM) was designed to forecast relative changes for a specified set of crew health and mission success risk metrics by using a probabilistic (stochastic process) model based on historical data, cohort data, and subject matter expert opinion. A probabilistic approach is taken since exact (deterministic) results would not appropriately reflect the uncertainty in the IMM inputs. Once the IMM was conceptualized, a plan was needed to rigorously assess input information, framework and code, and output results of the IMM, and ensure that end user requests and requirements were considered during all stages of model development and implementation. METHODS: In 2008, the IMM team developed a comprehensive verification and validation (VV) plan, which specified internal and external review criteria encompassing 1) verification of data and IMM structure to ensure proper implementation of the IMM, 2) several validation techniques to confirm that the simulation capability of the IMM appropriately represents occurrences and consequences of medical conditions during space missions, and 3) credibility processes to develop user confidence in the information derived from the IMM. When the NASA-STD-7009 (7009) was published, the IMM team updated their verification, validation, and credibility (VVC) project plan to meet 7009 requirements and include 7009 tools in reporting VVC status of the IMM. RESULTS: IMM VVC updates are compiled recurrently and include 7009 Compliance and Credibility matrices, IMM VV Plan status, and a synopsis of any changes or updates to the IMM during the reporting period. Reporting tools have evolved over the lifetime of the IMM project to better communicate VVC status. This has included refining original 7009 methodology with augmentation from the NASA-STD-7009 Guidance Document. End user requests and requirements are being satisfied as evidenced by ISS Program acceptance of IMM risk forecasts, transition to an operational model and simulation tool, and completion of service requests from a broad end user consortium including Operations, Science and Technology Planning, and Exploration Planning. CONCLUSIONS: The VVC approach established by the IMM project of combining the IMM VV Plan with 7009 requirements is comprehensive and includes the involvement of end users at every stage in IMM evolution. Methods and techniques used to quantify the VVC status of the IMM have not only received approval from the local NASA community but have also garnered recognition by other federal agencies seeking to develop similar guidelines in the medical modeling community.

  16. Boron-10 Lined Proportional Counter Model Validation

    SciTech Connect

    Lintereur, Azaree T.; Siciliano, Edward R.; Kouzes, Richard T.

    2012-06-30

    The Department of Energy Office of Nuclear Safeguards (NA-241) is supporting the project “Coincidence Counting With Boron-Based Alternative Neutron Detection Technology” at Pacific Northwest National Laboratory (PNNL) for the development of an alternative neutron coincidence counter. The goal of this project is to design, build and demonstrate a boron-lined proportional tube-based alternative system in the configuration of a coincidence counter. This report discusses the validation studies performed to establish the degree of accuracy of the computer modeling methods currently used to simulate the response of boron-lined tubes. This is the precursor to developing models for the uranium neutron coincidence collar under Task 2 of this project.

  17. Validated predictive modelling of the environmental resistome.

    PubMed

    Amos, Gregory Ca; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  18. Validated predictive modelling of the environmental resistome

    PubMed Central

    Amos, Gregory CA; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-01-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  19. A Proposed Model for Simulation Validation Process Maturity

    Microsoft Academic Search

    S. Y. Harmon; Simone M. Youngblood

    2005-01-01

    This paper proposes a model of process maturity for simulation validation. The development of this model begins by recognizing validation as a process that generates information as its sole product and therefore resembles a systematic quest for truth. These characteristics distinguish the simulation validation process from other processes such as those for manufacturing or software engineering. This development then substitutes

  20. Biomechanical Modeling and Sensitivity Analysis of Bipedal Running Ability. II. Extinct Taxa

    E-print Network

    Hutchinson, John

    Biomechanical Modeling and Sensitivity Analysis of Bipedal Running Ability. II. Extinct Taxa. John R. Hutchinson, Biomechanical Engineering Division, Stanford University, Stanford, California 94305-4038. Abstract: Using an inverse dynamics biomechanical analysis that was previously validated for extant bipeds, I

  1. Modeling distributed hybrid systems in Ptolemy II

    Microsoft Academic Search

    Jie Liu; Xiaojun Liu; Edward A. Lee

    2001-01-01

    We present Ptolemy II as a modeling and simulation environment for distributed hybrid systems. In Ptolemy II, a distributed hybrid system is specified as a hierarchy of models: an event-based top level and distributed islands of hybrid systems. Each hybrid system is in turn a hierarchy of continuous-time models and finite state machines. A variety of models of computation was

  2. Validation and application of the SCALP model

    NASA Astrophysics Data System (ADS)

    Smith, D. A. J.; Martin, C. E.; Saunders, C. J.; Smith, D. A.; Stokes, P. H.

    The Satellite Collision Assessment for the UK Licensing Process (SCALP) model was first introduced in a paper presented at IAC 2003. As a follow-on, this paper details the steps taken to validate the model and describes some of its applications. SCALP was developed for the British National Space Centre (BNSC) to support liability assessments as part of the UK's satellite license application process. Specifically, the model determines the collision risk that a satellite will pose to other orbiting objects during both its operational and post-mission phases. To date SCALP has been used to assess several LEO and GEO satellites for BNSC, and subsequently to provide the necessary technical basis for licenses to be issued. SCALP utilises the current population of operational satellites residing in LEO and GEO (extracted from ESA's DISCOS database) as a starting point. Realistic orbital dynamics, including the approximate simulation of generic GEO station-keeping strategies are used to propagate the objects over time. The method takes into account all of the appropriate orbit perturbations for LEO and GEO altitudes and allows rapid run times for multiple objects over time periods of many years. The orbit of a target satellite is also propagated in a similar fashion. During these orbital evolutions, a collision prediction and close approach algorithm assesses the collision risk posed to the satellite population. To validate SCALP, specific cases were set up to enable the comparison of collision risk results with other established models, such as the ESA MASTER model. Additionally, the propagation of operational GEO satellites within SCALP was compared with the expected behaviour of controlled GEO objects. The sensitivity of the model to changing the initial conditions of the target satellite such as semi-major axis and inclination has also been demonstrated. A further study shows the effect of including extra objects from the GTO population (which can pass through the LEO and GEO regimes) in the determination of collision risk. Lastly, the effect of altering the simulation environment, by varying parameters such as the extent of the uncertainty volume used in the GEO collision assessment method has been investigated.

  3. Concurrent Validity of the Online Version of the Keirsey Temperament Sorter II.

    ERIC Educational Resources Information Center

    Kelly, Kevin R.; Jugovic, Heidi

    2001-01-01

    Data from the Keirsey Temperament Sorter II online instrument and Myers Briggs Type Indicator (MBTI) for 203 college freshmen were analyzed. Positive correlations appeared between the concurrent MBTI and Keirsey measures of psychological type, giving preliminary support to the validity of the online version of Keirsey. (Contains 28 references.)…

  4. Validation of the Portuguese version of the RDC/TMD Axis II questionnaire.

    PubMed

    de Lucena, Luciana Barbosa Sousa; Kosminsky, Maurício; da Costa, Lino João; de Góes, Paulo Sávio Angeiras

    2006-01-01

    The present paper aimed at evaluating the validity of the Portuguese version of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Axis II Questionnaire. The sample comprised 155 patients with signs and symptoms of Temporomandibular Disorders (TMD), evaluated at the Orofacial Pain Control Center, School of Dentistry, University of Pernambuco, Brazil, between July 2003 and February 2004. Data collection was performed with the following tools: the RDC/TMD Axis I (clinical evaluation and TMD classification), and Axis II (psychosocial evaluation), as well as specific questionnaires for evaluation of Oral Health Related Quality of Life, namely, Oral Impacts on Daily Performances and the Oral Health Impact Profile-14, considered to be gold standard criteria. Validity evaluation consisted of internal consistency evaluation by the Cronbach's alpha reliability test, reliability and reproducibility estimated by the Kappa test and Spearman's correlation, and concurrent validation through Spearman's correlation. The Portuguese version of the RDC/TMD Axis II questionnaire was considered consistent (Cronbach's alpha = 0.72), reproducible (Kappa values from 0.73 to 0.91, p < 0.01), and valid (p < 0.01). It was concluded that this version showed valid and reproducible results for the Brazilian population, thus paving the way for including Brazil in transcultural epidemiological studies on TMD. PMID:17242791
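
    As a side note on the reliability statistic quoted above, the following is a minimal, hedged sketch (not the study's analysis code) of how Cronbach's alpha is computed from an item-score matrix; the respondent data are invented.

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """items: (n_respondents, n_items) matrix of questionnaire item scores."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)        # per-item variances
            total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
            return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

        # Toy data: 5 respondents answering a 4-item scale (hypothetical values).
        scores = np.array([[3, 4, 3, 4],
                           [2, 2, 3, 2],
                           [4, 5, 4, 5],
                           [1, 2, 1, 2],
                           [3, 3, 4, 3]])
        print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")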

  5. Constructing and Validating a Decadal Prediction Model

    NASA Astrophysics Data System (ADS)

    Foss, I.; Woolf, D. K.; Gagnon, A. S.; Merchant, C. J.

    2010-05-01

    For the purpose of identifying potential sources of predictability of Scottish mean air temperature (SMAT), a redundancy analysis (RA) was carried out to quantitatively assess the predictability of SMAT from North Atlantic SSTs as well as the temporal consistency of this predictability. The RA was performed between the main principal components of North Atlantic SST anomalies and SMAT anomalies for two time periods: 1890-1960 and 1960-2006. The RA models developed using data from the 1890-1960 period were validated using the 1960-2006 period; in a similar way the model developed based on the 1960-2006 period was validated using data from the 1890-1960 period. The results indicate the potential to forecast decadal trends in SMAT for all seasons in the 1960-2006 time period and all seasons with the exception of winter for the period 1890-1960, with the best predictability achieved in summer. The statistical models show the best performance when SST anomalies in the European shelf seas (45°N-65°N, 20°W-20°E) rather than SSTs over the entire North Atlantic (30°N-75°N, 80°W-30°E) were used as a predictor. The results of the RA demonstrated that similar SST modes were responsible for predictions in the first and second half of the 20th century, establishing temporal consistency, though with stronger influence in the more recent half. The SST pattern responsible for explaining the largest amount of variance in SMAT was stronger in the second half of the 20th century and showed increasing influence from the area of the North Sea, possibly due to faster sea-surface warming in that region in comparison with the open North Atlantic. The Wavelet Transform (WT), Cross Wavelet Transform (XWT) and Wavelet Coherence (WTC) techniques were used to analyse RA-model-based forecasts of SMAT in the time-frequency domain. Wavelet-based techniques applied to the predicted and observed time series revealed good performance of the RA models in predicting the frequency variability in the SMAT time series. A better performance was obtained for predicting the SMAT during the period 1960-2006 based on 1890-1960 than vice versa, with the exception of winter 1890-1960. In the same frequency bands and in the same time interval there was high coherence between observed and predicted time series. In particular, winter, spring and summer wavelets in the 8±1.5 year band were highly correlated in both time periods, with higher correlation in 1960-2006 and in summer.

  6. Simultaneous heat and water model: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A discussion of calibration and validation procedures used for the Simultaneous Heat and Water model is presented. Three calibration approaches are presented and compared for simulating soil water content. Approaches included a stepwise local search methodology, trial-and-error calibration, and an...

  7. Validating agent based models through virtual worlds.

    SciTech Connect

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina [Sandia National Laboratories, Livermore, CA; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E. [North Carolina State University, Raleigh, NC; Bernstein, Jeremy Ray Rhythm [Gaikai, Inc., Aliso Viejo, CA

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOG), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social environment where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior. Results from our work indicate that virtual worlds have the potential for serving as a proxy in allocating and populating behaviors that would be used within further agent-based modeling studies.

  8. Document Degradation Models: Parameter Estimation and Model Validation

    Microsoft Academic Search

    Tapas Kanungo; Robert M. Haralick; Henry S. Baird; Werner Stuetzle; David Madigan

    1994-01-01

    Scanned documents are noisy. Recently (KHP93, KHP94, BaiSO), document degradation models were proposed that model the local distortion introduced during the scanning process: (i) the inversion (from foreground to background and vice versa) that occurs independently at each pixel due to light intensity fluctuations and thresholding level, and (ii) the blurring that occurs due to the point-spread function of the optical system.

  9. Validation of CFD Model for Research into Displacement Ventilation

    Microsoft Academic Search

    Zhang Lin; T. T. Chow; Qiuwang Wang; K. F. Fong; L. S. Chan

    2005-01-01

    The use of turbulence models leads to uncertainties in the computed results because the models are not universal. Therefore, it is essential to validate the CFD program by experimental data. A computational fluid dynamics (CFD) program with the Re-Normalization Group (RNG) k-ε model was used for prediction. Validation was conducted by comparing the flow patterns, vertical profiles of temperature, concentration,

  10. Validation of drum boiler models through complete dynamic tests

    Microsoft Academic Search

    Alberto Leva; Claudio Maffezzoni; Giancarlo Benelli

    1999-01-01

    This paper describes the validation of a model library for the simulation of drum boilers on the basis of static and dynamic experimental data obtained from a small-scale plant. All the steps of the validation process are described in detail, with particular reference to the modelling principles, to the trade-off between model complexity and accuracy, to the solution strategy and

  11. Design and Development Research: A Model Validation Case

    ERIC Educational Resources Information Center

    Tracey, Monica W.

    2009-01-01

    This is a report of one case of a design and development research study that aimed to validate an overlay instructional design model incorporating the theory of multiple intelligences into instructional systems design. After design and expert review model validation, the Multiple Intelligence (MI) Design Model, used with an Instructional Systems…

  12. Collaborative Infrastructure for Test-Driven Scientific Model Validation

    E-print Network

    Aldrich, Jonathan

    One of the pillars of the modern scientific method is model validation: comparing a scientific model's predictions against empirical observations. Today, a scientist

  13. What do we mean by validating a prognostic model?

    Microsoft Academic Search

    Douglas G. Altman; Patrick Royston

    2000-01-01

    SUMMARY Prognostic models are used in medicine for investigating patient outcome in relation to patient and disease characteristics. Such models do not always work well in practice, so it is widely recommended that they need to be validated. The idea of validating a prognostic model is generally taken to mean establishing that it works satisfactorily for patients other than those

  14. Empirical data validation for model building

    NASA Astrophysics Data System (ADS)

    Kazarian, Aram

    2008-03-01

    Optical Proximity Correction (OPC) has become an integral and critical part of process development for advanced technologies with challenging k1 requirements. OPC solutions in turn require stable, predictive models to be built that can project the behavior of all structures. These structures must comprehend all geometries that can occur in the layout in order to define the optimal corrections by feature, and thus enable a manufacturing process with acceptable margin. The model is built upon two main component blocks. First is knowledge of the process conditions, which includes the optical parameters (e.g. illumination source, wavelength, lens characteristics, etc) as well as mask definition, resist parameters and process film stack information. Second is the empirical critical dimension (CD) data collected using this process on specific test features, the results of which are used to fit and validate the model and to project resist contours for all allowable feature layouts. The quality of the model therefore is highly dependent on the integrity of the process data collected for this purpose. Since the test pattern suite generally extends to below the resolution limit that the process can support with adequate latitude, the CD measurements collected can often be quite noisy with marginal signal-to-noise ratios. In order for the model to be reliable and a best representation of the process behavior, it is necessary to scrutinize empirical data to ensure that it is not dominated by measurement noise or flyer/outlier points. The primary approach for generating a clean, smooth and dependable empirical data set should be a replicated measurement sampling that can help to statistically reduce measurement noise by averaging. However, it can often be impractical to collect the amount of data needed to ensure a clean data set by this method. An alternate approach is studied in this paper to further smooth the measured data by means of curve fitting to identify remaining questionable measurement points for engineering scrutiny since they may run the risk of incorrectly skewing the model. In addition to purely statistical data curve fitting, another concept also merits investigation, that of using first principle, simulation-based characteristic coherence curves to fit the measured data.
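
    The following is a hedged sketch, not the paper's method: it illustrates the generic idea of averaging replicated CD measurements, fitting a smooth low-order curve across a feature-size sweep, and flagging points with unusually large residuals for engineering scrutiny. All names, sizes and thresholds are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(11)

        drawn_cd = np.linspace(40, 200, 25)                        # drawn feature sizes (nm)
        replicates = drawn_cd[:, None] * 0.95 + 3 + rng.normal(scale=1.5, size=(25, 5))
        replicates[7, :] += 12.0                                   # inject one flyer site

        measured_cd = replicates.mean(axis=1)                      # replicate averaging reduces noise
        coeffs = np.polyfit(drawn_cd, measured_cd, deg=2)          # smooth trend across the sweep
        residuals = measured_cd - np.polyval(coeffs, drawn_cd)

        mad = np.median(np.abs(residuals - np.median(residuals)))  # robust scatter estimate
        flagged = np.where(np.abs(residuals) > 5 * 1.4826 * mad)[0]
        print(f"Measurement indices flagged for review: {flagged.tolist()}")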

  15. EXPERIMENTAL MODELS FOR VALIDATING COMPUTER TECHNOLOGY

    Microsoft Academic Search

    Marvin V. Zelkowitz; Dolores Wallace

    1998-01-01

    Experimentation is important within science for determining the effectiveness of proposed theories and methods. However, computer science has not developed a concise taxonomy of methods applicable for demonstrating the validity of a new technique. In this paper we discuss the methods generally employed to validate an experiment and propose a taxonomy consisting of 12 techniques that can be used to

  16. Parameterisation, calibration and validation of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Refsgaard, Jens Christian

    1997-11-01

    This paper emphasizes the different requirements for calibration and validation of lumped and distributed models. On the basis of a theoretically founded modelling protocol, the different steps in distributed hydrological modelling are illustrated through a case study based on the MIKE SHE code and the 440 km² Karup catchment in Denmark. The importance of a rigorous and purposeful parameterisation is emphasized in order to get as few "free" parameters as possible for which assessments through calibration are required. Calibration and validation using a split-sample procedure were carried out for catchment discharge and piezometric heads at seven selected observation wells. The validated model was then used for two further validation tests. Firstly, model simulations were compared with observations from three additional discharge sites and four additional wells located within the catchment. This internal validation showed significantly poorer results compared to the calibration/validation sites. Secondly, the validated model based on a 500 m model grid was used to generate three additional models with 1000 m, 2000 m and 4000 m grids through interpolation of model parameters. The results from the multi-scale validation suggested that a maximum grid size of 1000 m should be used for simulations of discharge and ground-water heads, while the results deteriorated with coarser model grids.
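
    A minimal sketch of the split-sample idea referred to above, not the MIKE SHE workflow itself: score simulated against observed discharge separately on the calibration and validation periods, here with the Nash-Sutcliffe efficiency and synthetic series.

        import numpy as np

        def nse(obs: np.ndarray, sim: np.ndarray) -> float:
            """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the observed mean."""
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        rng = np.random.default_rng(0)
        t = np.arange(730)                                   # two years of daily values (synthetic)
        obs = 5 + 3 * np.sin(2 * np.pi * t / 365) + rng.normal(scale=0.5, size=t.size)
        sim = 5 + 2.8 * np.sin(2 * np.pi * t / 365)          # stand-in model output

        calib, valid = slice(0, 365), slice(365, 730)        # split-sample periods
        print(f"NSE, calibration period: {nse(obs[calib], sim[calib]):.2f}")
        print(f"NSE, validation period:  {nse(obs[valid], sim[valid]):.2f}")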

  17. Micromachined accelerometer design, modeling and validation

    SciTech Connect

    Davies, B.R.; Bateman, V.I.; Brown, F.A.; Montague, S.; Murray, J.R.; Rey, D.; Smith, J.H.

    1998-04-01

    Micromachining technologies enable the development of low-cost devices capable of sensing motion in a reliable and accurate manner. The development of various surface micromachined accelerometers and gyroscopes to sense motion is an ongoing activity at Sandia National Laboratories. In addition, Sandia has developed a fabrication process for integrating both the micromechanical structures and microelectronics circuitry of Micro-Electro-Mechanical Systems (MEMS) on the same chip. This integrated surface micromachining process provides substantial performance and reliability advantages in the development of MEMS accelerometers and gyros. A Sandia MEMS team developed a single-axis, micromachined silicon accelerometer capable of surviving and measuring very high accelerations, up to 50,000 times the acceleration due to gravity or 50 k-G (actually measured to 46,000 G). The Sandia integrated surface micromachining process was selected for fabrication of the sensor due to the extreme measurement sensitivity potential associated with integrated microelectronics. Measurement electronics capable of measuring attofarad (10^-18 Farad) changes in capacitance were required due to the very small accelerometer proof mass (< 200 × 10^-9 gram) used in this surface micromachining process. The small proof mass corresponded to small sensor deflections which in turn required very sensitive electronics to enable accurate acceleration measurement over a range of 1 to 50 k-G. A prototype sensor, based on a suspended plate mass configuration, was developed and the details of the design, modeling, and validation of the device will be presented in this paper. The device was analyzed using both conventional lumped parameter modeling techniques and finite element analysis tools. The device was tested and performed well over its design range.

  18. Geochemistry Model Validation Report: Material Degradation and Release Model

    SciTech Connect

    H. Stockman

    2001-09-28

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to ''Technical Work Plan for: Waste Package Design Description for LA'' (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U. S . Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, ''Analyses and Models'' (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17).

  19. The SEGUE Stellar Parameter Pipeline. II. Validation with Galactic Globular and Open Clusters

    E-print Network

    Y. S. Lee; T. C. Beers; T. Sivarani; J. A. Johnson; D. An; R. Wilhelm; C. Allende Prieto; L. Koesterke; P. Re Fiorentin; C. A. L. Bailer-Jones; J. E. Norris; B. Yanny; C. M. Rockosi; H. J. Newberg; K. M. Cudworth; K. Pan

    2007-10-31

    We validate the performance and accuracy of the current SEGUE (Sloan Extension for Galactic Understanding and Exploration) Stellar Parameter Pipeline (SSPP), which determines stellar atmospheric parameters (effective temperature, surface gravity, and metallicity) by comparing derived overall metallicities and radial velocities from selected likely members of three globular clusters (M 13, M 15, and M 2) and two open clusters (NGC 2420 and M 67) to the literature values. Spectroscopic and photometric data obtained during the course of the original Sloan Digital Sky Survey (SDSS-I) and its first extension (SDSS-II/SEGUE) are used to determine stellar radial velocities and atmospheric parameter estimates for stars in these clusters. Based on the scatter in the metallicities derived for the members of each cluster, we quantify the typical uncertainty of the SSPP values, sigma([Fe/H]) = 0.13 dex for stars in the range of 4500 K < Teff < 7500 K and 2.0 < log g < 5.0, at least over the metallicity interval spanned by the clusters studied (-2.3 < [Fe/H] < 0). The surface gravities and effective temperatures derived by the SSPP are also compared with those estimated from the comparison of the color-magnitude diagrams with stellar evolution models; we find satisfactory agreement. At present, the SSPP underestimates [Fe/H] for near-solar-metallicity stars, represented by members of M 67 in this study, by about 0.3 dex.

  20. Statistical Validation of Normal Tissue Complication Probability Models

    SciTech Connect

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van't; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)] [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands) [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
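
    A hedged sketch of the kind of analysis described above, not the paper's code: an L1-penalised (LASSO-style) logistic model for a binary complication outcome, scored by cross-validated AUC and by a permutation test of model performance. Repeated cross-validation is used here in place of the paper's full double (nested) cross-validation, and the data are synthetic.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import (RepeatedStratifiedKFold, cross_val_score,
                                             permutation_test_score)

        X, y = make_classification(n_samples=200, n_features=30, n_informative=5, random_state=1)

        lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

        cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=1)
        aucs = cross_val_score(lasso_logit, X, y, scoring="roc_auc", cv=cv)
        print(f"Cross-validated AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")

        score, perm_scores, p_value = permutation_test_score(
            lasso_logit, X, y, scoring="roc_auc", cv=5, n_permutations=200, random_state=1)
        print(f"Permutation-test p-value for AUC {score:.2f}: {p_value:.3f}")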

  1. Petri Net Based Model Validation in Systems Biology

    Microsoft Academic Search

    Monika Heiner; Ina Koch

    2004-01-01

    This paper describes the thriving application of Petri net theory for model validation of different types of molecular biological systems. After a short introduction into systems biology we demonstrate how to develop and validate qualitative models of biological pathways in a systematic manner using the well-established Petri net analysis technique of place and transition invariants. We discuss special

  2. Bayesian-based simulation model validation for spacecraft thermal systems

    E-print Network

    Stout, Kevin Dale

    2015-01-01

    Over the last several decades of space flight, spacecraft thermal system modeling software has advanced significantly, but the model validation process, in general, has changed very little. Although most thermal systems ...

  3. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    1998-01-01

    Given experimental data and a priori assumptions on the nominal model and a linear fractional transformation uncertainty structure, feasible conditions for model validation are given. All unknown but bounded exogenous inputs are assumed to occur at the plant outputs. With the satisfaction of the feasible conditions for model validation, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization can be used as a basis for the development of a systematic way to construct model validating uncertainty models which have specific linear fractional transformation structure for use in robust control design and analysis. The proposed feasible condition (existence) test and the parameterization are computationally attractive as compared to similar tests currently available.

  4. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    SciTech Connect

    Freeman, J.; Whitmore, J.; Kaffine, L.; Blair, N.; Dobos, A. P.

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.
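
    For reference, the annualized prediction error quoted above reduces to a simple signed percent difference between modeled and measured annual energy; the sketch below uses made-up numbers and is not SAM itself.

        def annualized_error_pct(modeled_kwh: float, measured_kwh: float) -> float:
            """Signed percent difference between modeled and measured annual generation."""
            return 100.0 * (modeled_kwh - measured_kwh) / measured_kwh

        # Hypothetical annual totals for one fixed-tilt system.
        print(f"Annualized error: {annualized_error_pct(152_300, 149_800):+.1f}%")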

  5. Modeling and Simulation of the Apache Rotor System in CAMRAD II

    Microsoft Academic Search

    Donald L. Kunz; Henry E. Jones

    The unique capabilities of CAMRAD II are used to develop two enhanced models of the AH-64 Apache rotor system. Based on existing sources of structural and dynamic blade properties, common characteristics of both models are first validated. Then, a single-load- path, kinematic joint model, which includes the exact kinematics of the blade retention system, is developed and validated. A multiple-load-path

  6. Validating UML Models and OCL Constraints

    Microsoft Academic Search

    Mark Richters; Martin Gogolla

    2000-01-01

    The UML has been widely accepted as a standard for modeling software systems and is supported by a great number of CASE tools. However, UML tools often provide only little support for validating models early during the design stage. Also, there is generally no substantial support for constraints written in the Object Constraint Language (OCL). We present an approach for the validation of UML

  7. Multi-terminal Subsystem Model Validation for Pacific DC Intertie

    SciTech Connect

    Yang, Bo; Huang, Zhenyu; Kosterev, Dmitry

    2008-07-20

    This paper proposes to validate the dynamic model of the Pacific DC Intertie using the concept of hybrid simulation, combining simulation with PMU measurements. The Playback function available in GE PSLF is adopted for hybrid simulation. The feasibility of using the Playback function on a multi-terminal subsystem is demonstrated for the first time. Sensitivity studies are also presented for common PMU measurement quality problems, i.e., offset noise and time synchronization. Results indicate that the PDCI model is generally tolerant of these issues. It is recommended that requirements should apply to phasor measurements in model validation work to ensure better analysis. Key parameters are identified based on the impact of parameter-value changes on model behavior. Two events are employed for preliminary model validation with PMU measurements. Suggestions are made for PDCI model validation work in the future.

  8. A Process Improvement Model for Software Verification and Validation

    Microsoft Academic Search

    John Callahan; George Sabolish

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on

  9. Validation of Numerical Shallow Water Models for Tidal Lagoons

    SciTech Connect

    Eliason, D.; Bourgeois, A.

    1999-11-01

    An analytical solution is presented for the case of a stratified, tidally forced lagoon. This solution, especially its energetics, is useful for the validation of numerical shallow water models under stratified, tidally forced conditions. The utility of the analytical solution for validation is demonstrated for a simple finite difference numerical model. A comparison is presented of the energetics of the numerical and analytical solutions in terms of the convergence of model results to the analytical solution with increasing spatial and temporal resolution.
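
    The sketch below, which is generic and not tied to the shallow-water code above, shows the usual mechanics of such a validation: compute the error of a discrete solution against an analytical one at successively finer resolutions and estimate the observed order of convergence from the error ratios. A centred finite-difference derivative of sin(x) stands in for the numerical model.

        import numpy as np

        def rms_error(n: int) -> float:
            """RMS error of a centred-difference derivative of sin(x) on n periodic cells."""
            x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
            h = x[1] - x[0]
            u = np.sin(x)
            dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h)   # periodic stencil
            return np.sqrt(np.mean((dudx - np.cos(x)) ** 2))

        errors = [(n, rms_error(n)) for n in (32, 64, 128, 256)]
        for (n1, e1), (n2, e2) in zip(errors[:-1], errors[1:]):
            order = np.log2(e1 / e2)
            print(f"n={n1:>3} -> n={n2:>3}: error {e1:.2e} -> {e2:.2e}, observed order {order:.2f}")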

  10. Adolescent Personality: A Five-Factor Model Construct Validation

    ERIC Educational Resources Information Center

    Baker, Spencer T.; Victor, James B.; Chambers, Anthony L.; Halverson, Jr., Charles F.

    2004-01-01

    The purpose of this study was to investigate convergent and discriminant validity of the five-factor model of adolescent personality in a school setting using three different raters (methods): self-ratings, peer ratings, and teacher ratings. The authors investigated validity through a multitrait-multimethod matrix and a confirmatory factor…

  11. SAGE III Aerosol Extinction Validation in the Arctic Winter: Comparisons with SAGE II and POAM III

    NASA Technical Reports Server (NTRS)

    Thomason, L. W.; Poole, L. R.; Randall, C. E.

    2007-01-01

    The use of SAGE III multiwavelength aerosol extinction coefficient measurements to infer PSC type is contingent on the robustness of both the extinction magnitude and its spectral variation. Past validation with SAGE II and other similar measurements has shown that the SAGE III extinction coefficient measurements are reliable, though the comparisons have been greatly weighted toward measurements made at mid-latitudes. Some aerosol comparisons made in the Arctic winter as a part of SOLVE II suggested that SAGE III values, particularly at longer wavelengths, are too small with the implication that both the magnitude and the wavelength dependence are not reliable. Comparisons with POAM III have also suggested a similar discrepancy. Herein, we use SAGE II data as a common standard for comparison of SAGE III and POAM III measurements in the Arctic winters of 2002/2003 through 2004/2005. During the winter, SAGE II measurements are made infrequently at the same latitudes as these instruments. We have mitigated this problem through the use of potential vorticity as a spatial coordinate and thus greatly increased the number of coincident events. We find that SAGE II and III extinction coefficient measurements show a high degree of compatibility at both 1020 nm and 450 nm except for a 10-20% bias at both wavelengths. In addition, the 452 to 1020-nm extinction ratio shows a consistent bias of approx. 30% throughout the lower stratosphere. We also find that SAGE II and POAM III are on average consistent, though the comparisons show a much higher variability and larger bias than SAGE II/III comparisons. In addition, we find that the two data sets are not well correlated below 18 km. Overall, we find both the extinction values and the spectral dependence from SAGE III are robust and we find no evidence of a significant defect within the Arctic vortex.

  12. End-to-end modelling of He II flow systems

    NASA Technical Reports Server (NTRS)

    Mord, A. J.; Snyder, H. A.; Newell, D. A.

    1992-01-01

    A practical computer code has been developed which uses the accepted two-fluid model to simulate He II flow in complicated systems. The full set of equations are used, retaining the coupling between the pressure, temperature and velocity fields. This permits modeling He II flow over the full range of conditions, from strongly or weakly driven flow through large pipes, narrow channels and porous media. The system may include most of the components used in modern superfluid flow systems: non-ideal thermomechanical pumps, tapered sections, constrictions, lines with heated side walls and heat exchangers. The model is validated by comparison with published experimental data. It is applied to a complex system to show some of the non-intuitive feedback effects that can occur. This code is ready to be used as a design tool for practical applications of He II. It can also be used for the design of He II experiments and as a tool for comparison of experimental data with the standard two-fluid model.

  13. Systematic approach to verification and validation: High explosive burn models

    Microsoft Academic Search

    Ralph Menikoff; Christina A. Scovel

    2012-01-01

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V

  14. Considerations for the validation of species-habitat models

    Microsoft Academic Search

    Jennifer M. Psyllakis; Michael P. Gillingham

    The multitude of approaches to wildlife-habitat modeling reflect the broad objectives and goals of various research, management, and conservation programs. Validating models is an often overlooked component of using models effectively and confidently to achieve the desired objectives. Statistical models that attempt to predict the presence or absence of a species are often developed with logistic regression. In this paper,

  15. Validating Computational Cognitive Process Models across Multiple Timescales

    NASA Astrophysics Data System (ADS)

    Myers, Christopher; Gluck, Kevin; Gunzelmann, Glenn; Krusmark, Michael

    2010-12-01

    Model comparison is vital to evaluating progress in the fields of artificial general intelligence (AGI) and cognitive architecture. As they mature, AGI and cognitive architectures will become increasingly capable of providing a single model that completes a multitude of tasks, some of which the model was not specifically engineered to perform. These models will be expected to operate for extended periods of time and serve functional roles in real-world contexts. Questions arise regarding how to evaluate such models appropriately, including issues pertaining to model comparison and validation. In this paper, we specifically address model validation across multiple levels of abstraction, using an existing computational process model of unmanned aerial vehicle basic maneuvering to illustrate the relationship between validity and timescales of analysis.

  16. Using virtual reality to validate system models

    SciTech Connect

    Winter, V.L.; Caudell, T.P.

    1999-12-09

    To date, most validation techniques are highly biased towards calculations involving symbolic representations of problems. These calculations are either formal (in the case of consistency and completeness checks), or informal in the case of code inspections. The authors believe that an essential type of evidence of the correctness of the formalization process must be provided by (i.e., must originate from) human-based calculation. They further believe that human calculation can be significantly amplified by shifting from symbolic representations to graphical representations. This paper describes their preliminary efforts in realizing such a representational shift.

  17. Adolescent PersonalityA Five-Factor Model Construct Validation

    Microsoft Academic Search

    Spencer R. Baker; James B. Victor; Anthony L. Chambers; Charles F. Halverson

    2004-01-01

    The purpose of this study was to investigate convergent and discriminant validity of the five-factor model of adolescent personality in a school setting using three different raters (methods): self-ratings, peer ratings, and teacher ratings. The authors investigated validity through a multitrait-multimethod matrix and a confirmatory factor analysis correlated trait, uncorrelated method model. With the exception of Emotional Stability, each analysis

  18. Validation of reduced-order models for control system design

    Microsoft Academic Search

    M. E. Sezer; D. D. Siljak

    1981-01-01

    The concept of suboptimality is applied to testing the validity of reduced-order models in the design of feedback schemes for large-scale systems. Aggregation and singular value decomposition, as model reduction techniques, are interpreted in the expansion-contraction framework, which is suitable for evaluation of suboptimality of closed-loop systems resulting from reduced-order designs. The proposed validation procedure is applied to a control design

  19. ECOLOGICAL MODEL TESTING: VERIFICATION, VALIDATION OR NEITHER?

    EPA Science Inventory

    Consider the need to make a management decision about a declining animal population. Two models are available to help. Before a decision is made based on model results, the astute manager or policy maker may ask, "Do the models work?" Or, "Have the models been verified or validat...

  20. Operational validation and intercomparison of different types of hydrological models

    NASA Astrophysics Data System (ADS)

    Refsgaard, Jens Christian; Knudsen, Jesper

    1996-07-01

    A theoretical framework for model validation, based on the methodology originally proposed by Klemes [1985, 1986], is presented. It includes a hierarchical validation testing scheme for model application to runoff prediction in gauged and ungauged catchments subject to stationary and nonstationary climate conditions. A case study on validation and intercomparison of three different models on three catchments in Zimbabwe is described. The three models represent a lumped conceptual modeling system (NAM), a distributed physically based system (MIKE SHE), and an intermediate approach (WATBAL). It is concluded that all models performed equally well when at least 1 year's data were available for calibration, while the distributed models performed marginally better for cases where no calibration was allowed.

  1. Validation of the Beck Depression Inventory—II in a Low-Income African American Sample of Medical Outpatients

    Microsoft Academic Search

    Karen B. Grothe; Gareth R. Dutton; Glenn N. Jones; Jamie Bodenlos; Martin Ancona; Phillip J. Brantley

    2005-01-01

    The psychometric properties of the Beck Depression Inventory—II (BDI-II) are well established with primarily Caucasian samples. However, little is known about its reliability and validity with minority groups. This study evaluated the psychometric properties of the BDI-II in a sample of low-income African American medical outpatients (N = 220). Reliability was demonstrated with high internal consistency (.90) and good item-total

  2. Data for model validation summary report. A summary of data for validation and benchmarking of recovery boiler models

    Microsoft Academic Search

    T. Grace; S. Lien; W. Schmidl; M. Salcudean; Z. Abdullah

    1997-01-01

    One of the tasks in the project was to obtain data from operating recovery boilers for the purpose of model validation. Another task was to obtain water model data and computer output from University of British Columbia for purposes of benchmarking the UBC model against other codes. In the course of discussions on recovery boiler modeling over the course of

  3. Verified Runtime Validation of Verified CPS Models From Model Checking to Checking Models

    E-print Network

    Clarke, Edmund M.

    ModelPlex: Verified Runtime Validation of Verified CPS Models (From Model Checking to Checking Models). Stefan Mitsch and André Platzer, Computer Science Department, Carnegie Mellon University. Presented at the Clarke Symposium, Sept. 20, 2014; for details, see the ModelPlex paper at RV'14.

  4. Spatial statistical modeling of shallow landslides—Validating predictions for different landslide inventories and rainfall events

    NASA Astrophysics Data System (ADS)

    von Ruette, Jonas; Papritz, Andreas; Lehmann, Peter; Rickli, Christian; Or, Dani

    2011-10-01

    Statistical models that exploit the correlation between landslide occurrence and geomorphic properties are often used to map the spatial occurrence of shallow landslides triggered by heavy rainfalls. In many landslide susceptibility studies, the true predictive power of the statistical model remains unknown because the predictions are not validated with independent data from other events or areas. This study validates statistical susceptibility predictions with independent test data. The spatial incidence of landslides, triggered by an extreme rainfall in a study area, was modeled by logistic regression. The fitted model was then used to generate susceptibility maps for another three study areas, for which event-based landslide inventories were also available. All the study areas lie in the northern foothills of the Swiss Alps. The landslides had been triggered by heavy rainfall either in 2002 or 2005. The validation was designed such that the first validation study area shared the geomorphology and the second the triggering rainfall event with the calibration study area. For the third validation study area, both geomorphology and rainfall were different. All explanatory variables were extracted for the logistic regression analysis from high-resolution digital elevation and surface models (2.5 m grid). The model fitted to the calibration data comprised four explanatory variables: (i) slope angle (effect of gravitational driving forces), (ii) vegetation type (grassland and forest; root reinforcement), (iii) planform curvature (convergent water flow paths), and (iv) contributing area (potential supply of water). The area under the Receiver Operating Characteristic (ROC) curve ( AUC) was used to quantify the predictive performance of the logistic regression model. The AUC values were computed for the susceptibility maps of the three validation study areas (validation AUC), the fitted susceptibility map of the calibration study area (apparent AUC: 0.80) and another susceptibility map obtained for the calibration study area by 20-fold cross-validation (cross-validation AUC: 0.74). The AUC values of the first and second validation study areas (0.72 and 0.69, respectively) and the cross-validation AUC matched fairly well, and all AUC values were distinctly smaller than the apparent AUC. Based on the apparent AUC one would have clearly overrated the predictive performance for the first two validation areas. Rather surprisingly, the AUC value of the third validation study area (0.82) was larger than the apparent AUC. A large part of the third validation study area consists of gentle slopes, and the regression model correctly predicted that no landslides occur in the flat parts. This increased the predictive performance of the model considerably. The predicted susceptibility maps were further validated by summing the predicted susceptibilities for the entire validation areas and by comparing the sums with the observed number of landslides. The sums exceeded the observed counts for all the validation areas. Hence, the logistic regression model generally over-estimated the risk of landslide occurrence. Obviously, a predictive model that is based on static geomorphic properties alone cannot take a full account of the complex and time dependent processes in the subsurface. However, such a model is still capable of distinguishing zones highly or less prone to shallow landslides.
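
    A hedged sketch of the validation design described above, with synthetic grids and hypothetical predictor names rather than the study's data: fit a logistic-regression susceptibility model on a calibration area and report both the apparent AUC there and the AUC on an independent validation area.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)

        def make_area(n_cells: int):
            # Columns: slope angle, forest indicator, planform curvature, log contributing area.
            X = np.column_stack([rng.uniform(0, 45, n_cells),
                                 rng.integers(0, 2, n_cells),
                                 rng.normal(0, 1, n_cells),
                                 rng.normal(2, 1, n_cells)])
            logit = -6 + 0.15 * X[:, 0] - 1.0 * X[:, 1] - 0.5 * X[:, 2] + 0.3 * X[:, 3]
            y = (rng.random(n_cells) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
            return X, y

        X_cal, y_cal = make_area(5000)      # calibration study area
        X_val, y_val = make_area(5000)      # independent validation area

        model = LogisticRegression(max_iter=1000).fit(X_cal, y_cal)
        print(f"Apparent AUC (calibration area):   {roc_auc_score(y_cal, model.predict_proba(X_cal)[:, 1]):.2f}")
        print(f"Validation AUC (independent area): {roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]):.2f}")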

  5. Verification and validation of underwater models

    Microsoft Academic Search

    D. W. Gledhill; J. D. Illgen

    1997-01-01

    As computational power increases and the price of software and hardware decreases, the ability to simulate the effects of oceanic currents on the performance of electronic and electroacoustic systems becomes more achievable. While the modeling of underwater effects will always be orders of magnitude more complex than atmospheric modeling, the increase in technology is causing more underwater models and simulations

  6. Towards better clinical prediction models: seven steps for development and an ABCD for validation.

    PubMed

    Steyerberg, Ewout W; Vergouwe, Yvonne

    2014-08-01

    Clinical prediction models provide risk estimates for the presence of disease (diagnosis) or an event in the future course of disease (prognosis) for individual patients. Although publications that present and evaluate such models are becoming more frequent, the methodology is often suboptimal. We propose that seven steps should be considered in developing prediction models: (i) consideration of the research question and initial data inspection; (ii) coding of predictors; (iii) model specification; (iv) model estimation; (v) evaluation of model performance; (vi) internal validation; and (vii) model presentation. The validity of a prediction model is ideally assessed in fully independent data, where we propose four key measures to evaluate model performance: calibration-in-the-large, or the model intercept (A); calibration slope (B); discrimination, with a concordance statistic (C); and clinical usefulness, with decision-curve analysis (D). As an application, we develop and validate prediction models for 30-day mortality in patients with an acute myocardial infarction. This illustrates the usefulness of the proposed framework to strengthen the methodological rigour and quality for prediction models in cardiovascular research. PMID:24898551
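
    The sketch below, which is illustrative rather than taken from the paper, computes three of the four proposed external-validation measures from predicted risks and observed outcomes: calibration-in-the-large (A), calibration slope (B) and the c-statistic (C); decision-curve analysis (D) is omitted. The validation data are simulated so that the model slightly over-predicts risk.

        import numpy as np
        import statsmodels.api as sm
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)

        # Hypothetical external validation set: predicted 30-day mortality risks and outcomes.
        p_pred = rng.uniform(0.01, 0.6, size=1000)
        y_obs = (rng.random(1000) < 0.8 * p_pred).astype(int)   # true risk is lower than predicted

        logit_p = np.log(p_pred / (1.0 - p_pred))

        # (A) Calibration-in-the-large: intercept of a logistic model with logit(p) as offset.
        a_fit = sm.GLM(y_obs, np.ones((y_obs.size, 1)),
                       family=sm.families.Binomial(), offset=logit_p).fit()
        # (B) Calibration slope: slope of a logistic regression of outcomes on logit(p).
        b_fit = sm.GLM(y_obs, sm.add_constant(logit_p), family=sm.families.Binomial()).fit()

        print(f"A  calibration-in-the-large: {a_fit.params[0]:+.2f}  (0 is ideal)")
        print(f"B  calibration slope:        {b_fit.params[1]:.2f}  (1 is ideal)")
        print(f"C  c-statistic (AUC):        {roc_auc_score(y_obs, p_pred):.2f}")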

  7. Understanding Hypersexuality with an Axis II Model

    Microsoft Academic Search

    Daniel F. Montaldi

    2003-01-01

    Current (descriptive) accounts of hypersexuality are Axis I models in that they explain “out of control” excessive sexual behavior by comparing it to one or more Axis I disorders, e.g., substance addiction, obsessive compulsive disorder. This article presents an Axis II model for a subset of hypersexual patterns that seem more similar in structure to personality disorders. Persons can exhibit

  8. Validation of Model Forecasts of the Ambient Solar Wind

    NASA Technical Reports Server (NTRS)

    Macneice, P. J.; Hesse, M.; Kuznetsova, M. M.; Rastaetter, L.; Taktakishvili, A.

    2009-01-01

    Independent and automated validation is a vital step in the progression of models from the research community into operational forecasting use. In this paper we describe a program in development at the CCMC to provide just such a comprehensive validation for models of the ambient solar wind in the inner heliosphere. We have built upon previous efforts published in the community, sharpened their definitions, and completed a baseline study. We also provide first results from this program of the comparative performance of the MHD models available at the CCMC against that of the Wang-Sheeley-Arge (WSA) model. An important goal of this effort is to provide a consistent validation to all available models. Clearly exposing the relative strengths and weaknesses of the different models will enable forecasters to craft more reliable ensemble forecasting strategies. Models of the ambient solar wind are developing rapidly as a result of improvements in data supply, numerical techniques, and computing resources. It is anticipated that in the next five to ten years, the MHD based models will supplant semi-empirical potential based models such as the WSA model, as the best available forecast models. We anticipate that this validation effort will track this evolution and so assist policy makers in gauging the value of past and future investment in modeling support.

  9. VALIDATION OF ACOUSTIC MODELS FOR TIME HARMONIC DISSIPATIVE SCATTERING PROBLEMS

    E-print Network

    Rodríguez, Rodolfo

    Keywords: Acoustic time-harmonic scattering problem; Porous medium; Allard-Champoux model. This paper studies a time-harmonic scattering problem in a coupled fluid-porous medium system. We consider two different models

  10. Validation of simplified formation models at L2

    Microsoft Academic Search

    Isaac Miller; Mark Campbell

    2005-01-01

    This paper introduces a new method to probabilistically evaluate the validity of dynamics model approximations, and applies the method to simplified models of satellite formation dynamics near the Sun-Earth/Moon L2 libration point. The new method uses a Monte Carlo scheme similar to a sampling importance resampling filter to evolve state probability densities through satellite dynamics models of varying complexity. The

  11. Validation of the Poisson Stochastic Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Zhuravleva, Tatiana; Marshak, Alexander

    2004-01-01

    A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it was shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

  12. Combustion turbine dynamic model validation from tests

    Microsoft Academic Search

    L. N. Hannett; Afzal Khan

    1993-01-01

    Studies have been conducted on the Alaskan Railbelt System to examine the hydrothermal power system response after the hydroelectric power units at Bradley Lake are installed. The models and data for the generating units for the initial studies were not complete. Typical models were used, but their response appeared to be faster than judged by operating experience. A testing program

  13. Validation of Orthorectified Interferometric Radar Imagery and Digital Elevation Models

    NASA Technical Reports Server (NTRS)

    Smith Charles M.

    2004-01-01

    This work was performed under NASA's Verification and Validation (V&V) Program as an independent check of data supplied by EarthWatch, Incorporated, through the Earth Science Enterprise Scientific Data Purchase (SDP) Program. This document serves as the basis of reporting results associated with validation of orthorectified interferometric radar imagery and digital elevation models (DEM). This validation covers all datasets provided under the first campaign (Central America & Virginia Beach) plus three earlier missions (Indonesia, Red River, and Denver) for a total of 13 missions.

  14. On cluster validity for the fuzzy c-means model

    Microsoft Academic Search

    N. R. Pal; J. C. Bezdek

    1995-01-01

    Many functionals have been proposed for validation of partitions of object data produced by the fuzzy c-means (FCM) clustering algorithm. We examine the role a subtle but important parameter, the weighting exponent m of the FCM model, plays in determining the validity of FCM partitions. The functionals considered are the partition coefficient and entropy indexes of Bezdek, the Xie-Beni (1991), and extended
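
    For concreteness, a minimal sketch (not tied to the paper's experiments) of the two classical Bezdek indexes mentioned above, computed directly from a fuzzy membership matrix U whose rows sum to one over the c clusters; the example matrix is invented.

        import numpy as np

        def partition_coefficient(U: np.ndarray) -> float:
            """V_pc = (1/n) * sum_ik u_ik^2; equals 1 for a crisp partition, 1/c when maximally fuzzy."""
            return float(np.mean(np.sum(U ** 2, axis=1)))

        def partition_entropy(U: np.ndarray) -> float:
            """V_pe = -(1/n) * sum_ik u_ik * log(u_ik); equals 0 for a crisp partition."""
            eps = 1e-12
            return float(-np.mean(np.sum(U * np.log(U + eps), axis=1)))

        # Toy membership matrix: n = 4 points, c = 2 clusters (hypothetical values).
        U = np.array([[0.9, 0.1],
                      [0.8, 0.2],
                      [0.3, 0.7],
                      [0.5, 0.5]])
        print(f"Partition coefficient: {partition_coefficient(U):.3f}")
        print(f"Partition entropy:     {partition_entropy(U):.3f}")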

  15. Measurements of Humidity in the Atmosphere and Validation Experiments (Mohave, Mohave II): Results Overview

    NASA Technical Reports Server (NTRS)

    Leblanc, Thierry; McDermid, Iain S.; McGee, Thomas G.; Twigg, Laurence W.; Sumnicht, Grant K.; Whiteman, David N.; Rush, Kurt D.; Cadirola, Martin P.; Venable, Demetrius D.; Connell, R.; Demoz, Belay B.; Vomel, Holger; Miloshevich, L.

    2008-01-01

    The Measurements of Humidity in the Atmosphere and Validation Experiments (MOHAVE, MOHAVE-II) inter-comparison campaigns took place at the Jet Propulsion Laboratory (JPL) Table Mountain Facility (TMF, 34.5°N) in October 2006 and 2007 respectively. Both campaigns aimed at evaluating the capability of three Raman lidars for the measurement of water vapor in the upper troposphere and lower stratosphere (UT/LS). During each campaign, more than 200 hours of lidar measurements were compared to balloon borne measurements obtained from 10 Cryogenic Frost-point Hygrometer (CFH) flights and over 50 Vaisala RS92 radiosonde flights. During MOHAVE, fluorescence in all three lidar receivers was identified, causing a significant wet bias above 10-12 km in the lidar profiles as compared to the CFH. All three lidars were reconfigured after MOHAVE, and no such bias was observed during the MOHAVE-II campaign. The lidar profiles agreed very well with the CFH up to 13-17 km altitude, where the lidar measurements become noise limited. The results from MOHAVE-II have shown that the water vapor Raman lidar will be an appropriate technique for the long-term monitoring of water vapor in the UT/LS given a slight increase in its power-aperture, as well as careful calibration.

  16. Modeling of Alpine Atmospheric Dynamics II

    E-print Network

    Gohm, Alexander

    Modeling of Alpine Atmospheric Dynamics II (707.424, VU 2, SS2005), Unit 7: Model code structure. The tar-file of the model source code is at /mnt/o3800/c707174/rams44-mod/rams44-mod.tar.gz; rams44_code_modifications.txt is a description of the code modifications introduced by M. Fink and A

  17. VERIFICATION AND VALIDATION OF THE SPARC MODEL

    EPA Science Inventory

    Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values--that is, the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships that allow estimation of some ...

  18. Discussion of model calibration and validation for transient dynamics simulation.

    SciTech Connect

    Hemez, F. M. (François M.); Doebling, S. W. (Scott W.); Wilson, A. C. (Amanda C.)

    2001-01-01

    Model calibration refers to a family of inverse problem-solving numerical techniques used to infer the value of parameters from test data sets. The purpose of model calibration is to optimize parametric or non-parametric models in such a way that their predictions match reality. In structural dynamics, an example of calibration is the finite element model updating technology. Our purpose is essentially to discuss calibration in the broader context of model validation. Formal definitions are proposed and the notions of calibration and validation are illustrated using an example of transient structural dynamics that deals with the propagation of a shock wave through a hyper-foam pad. An important distinction that has not been made in finite element model updating and that is introduced here is that parameters of the numerical models or physical tests are categorized into input parameters, calibration variables, controllable and uncontrollable variables. Such classification helps to define model validation goals. Finally, a path forward for validating numerical models is discussed and the relationship with uncertainty assessment is stressed.

  19. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Sievers, Michael; Standley, Shaun

    2012-01-01

    Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system-level V&V using modeling and simulation, and to use scarce hardware testing time to validate models, as has long been the norm for thermal and structural V&V. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated, enabling a more complete set of test cases than possible on flight hardware. SysML simulations provide access and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated that such an approach is possible.

  20. Validating Predictions from Climate Envelope Models

    PubMed Central

    Watling, James I.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Mazzotti, Frank J.; Romañach, Stephanie S.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species. PMID:23717452
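
    The model evaluation described above rests on sensitivity and specificity computed from presence/absence classifications. The sketch below shows one straightforward way such scores can be computed from binary arrays; the toy data are purely illustrative and not taken from the study.

        import numpy as np

        def sensitivity_specificity(observed, predicted):
            # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)
            # for binary presence (1) / absence (0) arrays over the same survey cells.
            obs = np.asarray(observed, dtype=bool)
            pred = np.asarray(predicted, dtype=bool)
            tp = np.sum(obs & pred)
            fn = np.sum(obs & ~pred)
            tn = np.sum(~obs & ~pred)
            fp = np.sum(~obs & pred)
            return tp / (tp + fn), tn / (tn + fp)

        # e.g., t2 survey occurrences vs. model predictions on the same grid cells
        sens, spec = sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])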

  1. Validation of a Model of the Domino Effect?

    E-print Network

    Ron Larham

    2008-03-19

    A recent paper proposing a model of the limiting speed of the domino effect is discussed with reference to its need, and the need of models in general, for validation against experimental data. It is shown that the proposed model diverges significantly from experimentally derived speed estimates over a significant range of domino spacings, using data from the existing literature and this author's own measurements. Hence, had its use had economic importance, its use outside its range of validity could have led to losses of one sort or another for its users.

  2. Psychometric validation of the BDI-II among HIV-positive CHARTER study participants.

    PubMed

    Hobkirk, Andréa L; Starosta, Amy J; De Leo, Joseph A; Marra, Christina M; Heaton, Robert K; Earleywine, Mitch

    2015-06-01

    Rates of depression are high among individuals living with HIV. Accurate assessment of depressive symptoms among this population is important for ensuring proper diagnosis and treatment. The Beck Depression Inventory-II (BDI-II) is a widely used measure for assessing depression; however, its psychometric properties have not yet been investigated for use with HIV-positive populations in the United States. The current study was the first to assess the psychometric properties of the BDI-II among a large cohort of HIV-positive participants sampled at multiple sites across the United States as part of the CNS HIV Antiretroviral Therapy Effects Research (CHARTER) study. The BDI-II test scores showed good internal consistency (α = .93) and adequate test-retest reliability (internal consistency coefficient = 0.83) over a 6-month period. Using a "gold standard" of major depressive disorder determined by the Composite International Diagnostic Interview, sensitivity and specificity were maximized at a total cut-off score of 17, and a receiver operating characteristic analysis confirmed that the BDI-II is an adequate diagnostic measure for the sample (area under the curve = 0.83). The sensitivity and specificity of each score are provided graphically. Confirmatory factor analyses confirmed the best fit for a three-factor model over one-factor and two-factor models and over models with a higher-order factor included. The results suggest that the BDI-II is an adequate measure for assessing depressive symptoms among U.S. HIV-positive patients. Cut-off scores should be adjusted to enhance sensitivity or specificity as needed, and the measure can be differentiated into cognitive, affective, and somatic depressive symptoms. PMID:25419643
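
    As a hedged sketch of the kind of receiver operating characteristic analysis described above (cut-off selection against a diagnostic gold standard), the snippet below uses scikit-learn; the score and diagnosis arrays are hypothetical stand-ins, not CHARTER data, and Youden's J is only one of several reasonable cut-off rules.

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        # bdi_total: BDI-II total scores; mdd: 1 if the interview-based diagnosis is positive
        bdi_total = np.array([5, 12, 17, 22, 30, 8, 19, 25, 3, 16])
        mdd = np.array([0, 0, 1, 1, 1, 0, 1, 1, 0, 0])

        auc = roc_auc_score(mdd, bdi_total)
        fpr, tpr, thresholds = roc_curve(mdd, bdi_total)
        j = tpr - fpr                               # Youden's J = sensitivity + specificity - 1
        best_cutoff = thresholds[np.argmax(j)]
        print(f"AUC = {auc:.2f}, cut-off maximizing J = {best_cutoff}")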

  3. Shuttle Spacesuit: Fabric/LCVG Model Validation

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tweed, J.; Zeitflin, C.; Kim, M.-H.; Anderson, B. M.; Cucinotta, F. A.; Ware, J.; Persans, A. E.

    2001-01-01

    A detailed spacesuit computational model is being developed at the Langley Research Center for radiation exposure evaluation studies. The details of the construction of the spacesuit are critical to estimation of exposures and assessing the risk to the astronaut on EVA. Past evaluations of spacesuit shielding properties assumed the basic fabric lay-up (Thermal Micrometeoroid Garment, fabric restraints, and pressure envelope) and Liquid Cooling and Ventilation Garment (LCVG) could be homogenized as a single layer, overestimating the protective properties over 60 percent of the fabric area. The present spacesuit model represents the inhomogeneous distributions of LCVG materials (mainly the water-filled cooling tubes). An experimental test is performed using a 34-MeV proton beam and high-resolution detectors to compare with model-predicted transmission factors. Some suggestions are made on possible improved construction methods to improve the spacesuit's protection properties.

  4. Theory and Implementation of Nuclear Safety System Codes - Part II: System Code Closure Relations, Validation, and Limitations

    SciTech Connect

    Glenn A Roth; Fatih Aydogan

    2014-09-01

    This is Part II of two articles describing the details of thermal-hydraulic system codes. In this second part of the article series, the system code closure relationships (used to model thermal and mechanical non-equilibrium and the coupling of the phases) for the governing equations are discussed and evaluated. These include several thermal and hydraulic models, such as heat transfer coefficients for various flow regimes, two phase pressure correlations, two phase friction correlations, drag coefficients and interfacial models between the fields. These models are often developed from experimental data. The experiment conditions should be understood to evaluate the efficacy of the closure models. Code verification and validation, including Separate Effects Tests (SETs) and Integral Effects Tests (IETs), is also assessed. It can be shown from the assessments that the test cases cover a significant section of the system code capabilities, but some of the more advanced reactor designs will push the limits of validation for the codes. Lastly, the limitations of the codes are discussed by considering next generation power plants, such as Small Modular Reactors (SMRs), analyzing not only existing nuclear power plants, but also next generation nuclear power plants. The nuclear industry is developing new, innovative reactor designs, such as Small Modular Reactors (SMRs), High-Temperature Gas-cooled Reactors (HTGRs) and others. Sub-types of these reactor designs utilize pebbles, prismatic graphite moderators, helical steam generators, innovative fuel types, and many other design features that may not be fully analyzed by current system codes. This second part completes the series on the comparison and evaluation of the selected reactor system codes by discussing the closure relations, validation and limitations. These two articles indicate areas where the models can be improved to adequately address issues with new reactor design and development.
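
    As one concrete example of the kind of closure relation referred to above, the sketch below evaluates the classical Dittus-Boelter single-phase heat transfer correlation; this is a standard textbook correlation chosen for illustration, not necessarily the one implemented in the codes being compared.

        def dittus_boelter_htc(re, pr, k_fluid, d_hydraulic, heating=True):
            # Nu = 0.023 * Re^0.8 * Pr^n, with n = 0.4 for heating and 0.3 for cooling;
            # h = Nu * k / D_h. Applies only to fully developed turbulent single-phase flow.
            n = 0.4 if heating else 0.3
            nu = 0.023 * re ** 0.8 * pr ** n
            return nu * k_fluid / d_hydraulic

        # e.g., turbulent water flow: Re = 1e5, Pr = 3, k = 0.6 W/(m K), D_h = 0.01 m
        h = dittus_boelter_htc(1.0e5, 3.0, 0.6, 0.01)   # W/(m^2 K)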

  5. CIPP evaluation model scale: development, reliability and validity

    Microsoft Academic Search

    Karatas Hakan; Fer Seval

    2011-01-01

    The purpose of this study was to determine the validity and reliability of the evaluation scale developed by the researcher based on the principles of Stufflebeam's CIPP Evaluation Model (1988), within the context of the evaluation of the English curriculum of Yildiz Technical University. In preparing the scale, the theoretical information and principles of the CIPP Evaluation Model

  6. Validation of a metabolic cotton seedling emergence model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A seedling emergence model based on thermal dependence of enzyme activity in germinating cotton was developed. The model was validated under both laboratory and field conditions with several cotton lines under diverse temperature regimes. Four commercial lines were planted on four dates in Lubbock T...

  7. Validating regional-scale surface energy balance models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the major challenges in developing reliable regional surface flux models is the relative paucity of scale-appropriate validation data. Direct comparisons between coarse-resolution model flux estimates and flux tower data can often be dominated by sub-pixel heterogeneity effects, making it di...

  8. A biomass combustion-gasification model: Validation and sensitivity analysis

    Microsoft Academic Search

    N. Bettagli; D. Fiaschi; U. Desideri

    1995-01-01

    The aim of the present paper is to study the gasification and combustion of biomass and waste materials. A model for the analysis of the chemical kinetics of gasification and combustion processes was developed with the main objective of calculating the gas composition at different operating conditions. The model was validated with experimental data for sawdust gasification. After having set

  9. Nonisothermal Modeling of Polymer Electrolyte Fuel Cells I. Experimental Validation

    E-print Network

    Nonisothermal Modeling of Polymer Electrolyte Fuel Cells I. Experimental Validation Hyunchul Ju, Pennsylvania 16802, USA b Gore Fuel Cell Technologies, W. L. Gore & Associates, Incorporated, Elkton, Maryland 21921, USA A three-dimensional, nonisothermal model of polymer electrolyte fuel cells PEFC is applied

  10. Validating Finite Element Models of Assembled Shell Structures

    NASA Technical Reports Server (NTRS)

    Hoff, Claus

    2006-01-01

    The validation of finite element models of assembled shell elements is presented. The topics include: 1) Problems with membrane rotations in assembled shell models; 2) Penalty stiffness for membrane rotations; 3) Physical stiffness for membrane rotations using shell elements with 6 dof per node; and 4) Connections avoiding rotations.

  11. THE FERNALD DOSIMETRY RECONSTRUCTION PROJECT Environmental Pathways -Models and Validation

    E-print Network

    Contents excerpt: Uncertainties in the Air Transport Model; Validation Exercises; Air Monitoring Data for Modeling the Transport of Airborne Releases; F. The Straight-Line Gaussian Plume and Related Air Transport and Plume Depletion; I. Plume Rise; J. Building Wake Effects; K. Parametric Uncertainty in the Air Transport

  12. Institutional Effectiveness: A Model for Planning, Assessment & Validation.

    ERIC Educational Resources Information Center

    Truckee Meadows Community Coll., Sparks, NV.

    The report presents Truckee Meadows Community College's (Nevada) model for assessing institutional effectiveness and validating the College's mission and vision, and the strategic plan for carrying out the institutional effectiveness model. It also outlines strategic goals for the years 1999-2001. From the system-wide directive that education…

  13. Modeling HIV Immune Response and Validation with Clinical Data

    E-print Network

    Modeling HIV Immune Response and Validation with Clinical Data. H. T. Banks, M. Davidian. ... equations is formulated to describe the pathogenesis of HIV infection, wherein certain important features ... and stimulation by antigens other than HIV. A stability analysis illustrates the capability of this model

  14. Bioaerosol optical sensor model development and initial validation

    Microsoft Academic Search

    Steven D. Campbell; Thomas H. Jeys; Xuan Le Eapen

    2007-01-01

    This paper describes the development and initial validation of a bioaerosol optical sensor model. This model was used to help determine design parameters and estimate performance of a new low-cost optical sensor for detecting bioterrorism agents. In order to estimate sensor performance in detecting biowarfare simulants and rejecting environmental interferents, use was made of a previously reported catalog of EEM

  15. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
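
    A minimal sketch of the max-min estimation idea described above (maximize the smallest requirement-compliance margin over the model parameters) is given below; the two margin functions are placeholders for the actual validation requirements, which are not reproduced here.

        import numpy as np
        from scipy.optimize import minimize

        def margins(p):
            # g_i(p) >= 0 means requirement i is satisfied; both expressions are illustrative only.
            g1 = 0.05 - abs(p[0] - 1.2)          # e.g., a time-domain error bound
            g2 = 0.10 - (p[1] - 0.8) ** 2        # e.g., a frequency-domain error bound
            return np.array([g1, g2])

        # Maximize the smallest margin  <=>  minimize the negative of the minimum margin.
        result = minimize(lambda p: -np.min(margins(p)), x0=np.array([1.0, 1.0]),
                          method="Nelder-Mead")
        p_hat = result.x                          # parameter estimate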

  16. Verification and Validation of Artificial Neural Network Models

    Microsoft Academic Search

    Fei Liu; Ming Yang

    2005-01-01

    The increased dependence on artificial neural network (ANN) models leads to a key question: will the ANN models provide accurate and reliable predictions? However, this important issue has received little systematic study. Thus this paper presents a general study of verification and validation (V&V) of ANN models. Basic problems for V&V of ANN models are explicitly presented, a new V&V

  17. Modeling and Validation of Biased Human Trust

    Microsoft Academic Search

    Mark Hoogendoorn; S. Waqar Jaffry; Peter-Paul van Maanen; Jan Treur

    2011-01-01

    When considering intelligent agents that interact with humans, having an idea of the trust levels of the human, for example in other agents or services, can be of great importance. Most models of human trust that exist, are based on some rationality assumption, and biased behavior is not represented, whereas a vast literature in Cognitive and Social Sciences indicates that

  18. WEPP: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Water Erosion Prediction Project (WEPP) model is a process-based, continuous simulation, distributed parameter, hydrologic and soil erosion prediction system. It has been developed over the past 25 years to allow for easy application to a large number of land management scenarios. Most general o...

  19. Validity Generalization of Holland's Hexagonal Model

    ERIC Educational Resources Information Center

    Toenjes, Carol M.; Borgen, Fred H.

    1974-01-01

    Holland's hexagonal model for six occupational groups was tested with data on estimated occupational rewards. Data on rated occupational reward characteristics were available for 148 occupations. Although the hexagonal shape was distorted, the groups were arrayed in the order postulated by Holland. (Author)

  20. Valid Inference in Partially Unstable GMM Models

    Microsoft Academic Search

    Hong Li; Ulrich K. Mullerz

    2006-01-01

    The paper considers time series GMM models where a subset of the parameters are time varying. The magnitude of the time variation in the unstable parameters is such that efficient tests detect the instability with (possibly high) probability smaller than one, even in the limit. We show that for many forms of the instability and a large class of GMM

  1. Modeling of sensor nets in Ptolemy II

    Microsoft Academic Search

    Philip Baldwin; Sanjeev Kohli; Edward A. Lee; Xiaojun Liu; Yang Zhao

    2004-01-01

    This paper describes a modeling and simulation framework called VisualSense for wireless sensor networks that builds on and leverages Ptolemy II. This framework supports actor-oriented definition of sensor nodes, wireless communication channels, physical media such as acoustic channels, and wired subsystems. The software architecture consists of a set of base classes for defining channels and sensor nodes, a library of

  2. Indistinguishable states II : Imperfect model scenarios

    E-print Network

    Smith, Leonard A

    Indistinguishable states II: Imperfect model scenarios. Kevin Judd, Leonard Smith. July 16, 2001. Abstract: A previous paper [4] considered the problem of estimating the true state of a system ... is that in any situation of state estimation or prediction of nonlinear systems it is essential to take even

  3. Experiments for foam model development and validation.

    SciTech Connect

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F. (Honeywell Federal Manufacturing and Technologies, Kansas City Plant, Kansas City, MO); Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  4. Theoretical models of ultrasonic inspection and their validation

    SciTech Connect

    Birchall, D.; Daniels, W. [AEA Technology, Risley (United Kingdom); Hawker, B.M.; Ramsey, A.T.; Temple, J.A.G. [AEA Technology, Harwell (United Kingdom)

    1994-12-31

    In response to the perception of demands by the public for higher than ever standards of safety, the nuclear industry in Britain embarked on an extensive program of nuclear safety research in support of the safety case for the new Sizewell B pressurized water reactor, which is now approaching completion. A suite of diverse computer models, of various aspects of ultrasonic inspection, is described, ranging from transducer design to ray-tracing in anisotropic stainless steel weldments or complex geometries. These provide aids to inspection design, verification, validation and data analysis, but the models must also be validated.

  5. Validation of DWPF MOG dynamics model -- Phase 1

    SciTech Connect

    Choi, A.S.

    1996-09-23

    The report documents the results of a study to validate the DWPF melter off-gas system dynamics model using the data collected during the Waste Qualification Runs in 1995. The study consisted of: (1) calibration of the model using one set of melter idling data, (2) validation of the calibrated model using three sets of steady feeding and one set of transient data, and (3) application of the validated model to simulate the melter overfeeding incident which took place on 7/5/95. All the controller tuning constants and control logic used in the validated model are identical to those used in the DCS in 1995. However, the model does not reflect any design and/or operational changes made in 1996 to alleviate the glass pouring problem. Based on the results of the overfeeding simulation, it is concluded that the actual feed rates during that incident were about 2.75 times the indicated readings and that the peak concentration of combustible gases remained below 15% of the lower flammable limit during the entire one-hour duration.

  6. Validity of empirical models of exposure in asphalt paving

    PubMed Central

    Burstyn, I; Boffetta, P; Burr, G; Cenni, A; Knecht, U; Sciarra, G; Kromhout, H

    2002-01-01

    Aims: To investigate the validity of empirical models of exposure to bitumen fume and benzo(a)pyrene, developed for a historical cohort study of asphalt paving in Western Europe. Methods: Validity was evaluated using data from the USA, Italy, and Germany not used to develop the original models. Correlation between observed and predicted exposures was examined. Bias and precision were estimated. Results: Models were imprecise. Furthermore, predicted bitumen fume exposures tended to be lower (-70%) than concentrations found during paving in the USA. This apparent bias might be attributed to differences between Western European and USA paving practices. Evaluation of the validity of the benzo(a)pyrene exposure model revealed a similar to expected effect of re-paving and a larger than expected effect of tar use. Overall, benzo(a)pyrene models underestimated exposures by 51%. Conclusions: Possible bias as a result of underestimation of the impact of coal tar on benzo(a)pyrene exposure levels must be explored in sensitivity analysis of the exposure–response relation. Validation of the models, albeit limited, increased our confidence in their applicability to exposure assessment in the historical cohort study of cancer risk among asphalt workers. PMID:12205236
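
    The bias and precision figures quoted above come from comparing predicted with observed exposures. A generic sketch of that comparison is shown below with hypothetical numbers; exposure studies often work on log-transformed concentrations, a detail omitted here.

        import numpy as np

        observed = np.array([0.8, 1.5, 2.2, 0.4, 3.1])    # measured exposures (hypothetical)
        predicted = np.array([0.5, 1.1, 1.4, 0.3, 2.0])   # model-predicted exposures

        correlation = np.corrcoef(observed, predicted)[0, 1]
        residuals = predicted - observed
        bias = residuals.mean()                   # systematic over- or under-prediction
        precision = residuals.std(ddof=1)         # spread of the prediction errors
        relative_bias = bias / observed.mean()    # negative values mean predictions run low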

  7. Validation of community models: 2. Development of a baseline using the Wang-Sheeley-Arge model

    Microsoft Academic Search

    Peter MacNeice

    2009-01-01

    This paper is the second in a series providing independent validation of community models of the outer corona and inner heliosphere. Here I present a comprehensive validation of the Wang-Sheeley-Arge (WSA) model. These results will serve as a baseline against which to compare the next generation of comparable forecasting models. The WSA model is used by a number of agencies

  8. Validation and Calibration in ACE Models: An Investigation on the CATS model.

    E-print Network

    Tesfatsion, Leigh

    Validation and Calibration in ACE Models: An Investigation on the CATS model. Carlo Bianchi ... deal with some validation (and a first calibration) experiments on the CATS model proposed in Gallegati et al. (2003a, 2004b). The CATS model has been intensively used (see, for example, Delli Gatti et

  9. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is therefore a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling indicates the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and applied to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Finding appropriate validation techniques for ABM therefore seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  10. Embedded Effort Indicators on the California Verbal Learning Test – Second Edition (CVLT–II): An Attempted Cross-Validation

    Microsoft Academic Search

    Jacobus Donders; Carrie-Ann H. Strong

    2011-01-01

    This study determined whether the logistic regression method that was recently developed by Wolfe and colleagues (2010) for the detection of invalid effort on the California Verbal Learning Test – Second Edition (CVLT–II) could be cross-validated in an independent sample of 100 consecutively referred patients with traumatic brain injury. Although the CVLT–II logistic regression formula demonstrated a statistically significant level

  11. Analysis of Experimental Data for High Burnup PWR Spent Fuel Isotopic Validation - Vandellos II Reactor

    SciTech Connect

    Ilas, Germina [ORNL; Gauld, Ian C [ORNL

    2011-01-01

    This report is one of the several recent NUREG/CR reports documenting benchmark-quality radiochemical assay data and the use of the data to validate computer code predictions of isotopic composition for spent nuclear fuel, to establish the uncertainty and bias associated with code predictions. The experimental data analyzed in the current report were acquired from a high-burnup fuel program coordinated by Spanish organizations. The measurements included extensive actinide and fission product data of importance to spent fuel safety applications, including burnup credit, decay heat, and radiation source terms. Six unique spent fuel samples from three uranium oxide fuel rods were analyzed. The fuel rods had a 4.5 wt % ²³⁵U initial enrichment and were irradiated in the Vandellos II pressurized water reactor operated in Spain. The burnups of the fuel samples range from 42 to 78 GWd/MTU. The measurements were used to validate the two-dimensional depletion sequence TRITON in the SCALE computer code system.

  12. Tutorial: Building Ptolemy II Models Graphically Edward A. Lee

    E-print Network

    Tutorial: Building Ptolemy II Models Graphically. Authors: Edward A. Lee, Stephen Neuendorffer. 1. Introduction: This tutorial document explains how to build Ptolemy II models using Vergil

  13. Using the split Hopkinson pressure bar to validate material models.

    PubMed

    Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian

    2014-08-28

    This paper gives a discussion of the use of the split-Hopkinson bar with particular reference to the requirements of materials modelling at QinetiQ. This is to deploy validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The split Hopkinson pressure bar (SHPB) is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock wave loading. This means understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals, comparison with an output stress-strain curve is not sufficient, as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from deployed instrumentation, including the raw strain gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer-Chree oscillations and their effect on the specimen and recognize that this is itself also a valuable validation test of the material model. PMID:25071238

  14. Modeling and validation of microwave ablations with internal vaporization.

    PubMed

    Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L

    2015-02-01

    Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this paper, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10, and 20 mm away from the heating zone of the microwave antenna in homogenized ex vivo bovine liver setup. Cross-sectional area of water vapor transport was validated through intraprocedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with a Jaccard index of 0.27, 0.49, 0.61, 0.67, and 0.69 at 1, 2, 3, 4, and 5 min, respectively. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481
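
    The Jaccard index used above to compare CT iso-density contours with simulated vapor-concentration contours is simply the ratio of overlap to union of the two regions. A minimal sketch for binary masks on a common pixel grid follows; the mask names are illustrative, not taken from the study.

        import numpy as np

        def jaccard_index(mask_a, mask_b):
            # |A intersect B| / |A union B| for two boolean masks on the same grid,
            # e.g., a thresholded CT iso-density region and a simulated vapor region.
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            union = np.logical_or(a, b).sum()
            return np.logical_and(a, b).sum() / union if union else 1.0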

  15. Experimental Validation of a Thermoelastic Model for SMA Hybrid Composites

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    2001-01-01

    This study presents results from experimental validation of a recently developed model for predicting the thermomechanical behavior of shape memory alloy hybrid composite (SMAHC) structures, composite structures with an embedded SMA constituent. The model captures the material nonlinearity of the material system with temperature and is capable of modeling constrained, restrained, or free recovery behavior from experimental measurement of fundamental engineering properties. A brief description of the model and analysis procedures is given, followed by an overview of a parallel effort to fabricate and characterize the material system of SMAHC specimens. Static and dynamic experimental configurations for the SMAHC specimens are described and experimental results for thermal post-buckling and random response are presented. Excellent agreement is achieved between the measured and predicted results, fully validating the theoretical model for constrained recovery behavior of SMAHC structures.

  16. Testing the Validity of Cost-Effectiveness Models

    Microsoft Academic Search

    Chris McCabe; Simon Dixon

    2000-01-01

    A growing body of recent work has identified several problems with economic evaluations undertaken alongside controlled trials that can have potentially serious impacts on the ability of decision makers to draw valid conclusions. At the same time, the use of cost-effectiveness models has been drawn into question, due to the alleged arbitrary nature of their construction. This has led researchers

  17. Exploration of Pavement Oxidation Model Applications and Field Validation

    E-print Network

    Cui, Yuanchen

    2014-08-11

    model and validated their accuracies by field data, including one complicated case, a layer-by-layer prediction on a seal coat treated pavement. To better understand the asphalt aging process, the last topic in this dissertation was to study a dynamic...

  18. Validation of Visual Statistical Inference, with Application to Linear Models

    E-print Network

    Validation of Visual Statistical Inference, with Application to Linear Models. Mahbubul Majumder. ... the basic ideas of visual inference; Section 3 applies these ideas to the case of inference for regression ... there were no formal visual methods in place for determining statistical significance of findings

  19. EXPLICIT CONTOUR MODEL FOR VEHICLE TRACKING WITH AUTOMATIC HYPOTHESIS VALIDATION

    E-print Network

    Wong, Kenneth K.Y.

    EXPLICIT CONTOUR MODEL FOR VEHICLE TRACKING WITH AUTOMATIC HYPOTHESIS VALIDATION. Boris Wai-Sing Yiu ... addresses the problem of vehicle tracking under a single static, uncalibrated camera without any constraints ... cues for vehicle tracking, and evaluate the correctness of a target hypothesis, with the infor

  20. Predictive Validation of an Influenza Spread Model Ayaz Hyder1

    E-print Network

    Leung, Brian

    Predictive Validation of an Influenza Spread Model. Ayaz Hyder, David L. Buckeridge, Brian Leung. Affiliations: Department of Biology, McGill University, Montreal, Quebec, Canada; Surveillance Lab, McGill Clinical and Health Informatics, McGill University, Montreal, Quebec, Canada; Department of Epidemiology

  1. Issues with validation of urban flow and dispersion CFD models

    Microsoft Academic Search

    Michael Schatzmann; Bernd Leitl

    2011-01-01

    The paper describes difficulties in the proper evaluation of obstacle-resolving urban CFD models. After a brief description of the evaluation methodology suggested by the European COST action 732, focus is laid on the question of how to obtain validation data that can be regarded as a reliable standard. Data from an entire year of measurements at an urban monitoring station

  2. Toward Validating a Simplified Muscle Activation Model in SMARTMOBILE

    Microsoft Academic Search

    Ekaterina Auer; M. Tandl; D. Strobach; A. Kecskemethy

    2006-01-01

    In this paper, we apply the validated modeling and simulation environment SMARTMOBILE along with two solvers recently added to its core to the problem of the identification of muscle activation in general motor tasks. The identification of muscle activation is one of the important and still open problems in biomechanics which aims at helping physicians assess an individual therapy for

  3. Pear drying: Experimental validation of a mathematical prediction model

    Microsoft Academic Search

    Raquel P. F. Guiné

    2008-01-01

    In the present work, drying experiments were carried out with pears of the variety D. Joaquina for a number of different operating conditions, with the purpose of validating a diffusion-based model previously developed to represent the drying behaviour of pears in a continuous convective drier, which includes the variation over time of the fruit's chemical, physical and thermal properties. The drying

  4. Automated Validation of Software Models Steve Sims Rance Cleaveland

    E-print Network

    Cleaveland, Rance

    Automated Validation of Software Models. Steve Sims, Rance Cleaveland (Reactive Systems, Inc.) ... those in the automotive, aviation and medical-device industries will have similar needs ... Eagle Software (www.neweagle.net, sranville@neweagle.net). Abstract: This paper describes the application

  5. Validation of the Repertory Grid for Use in Modeling Knowledge.

    ERIC Educational Resources Information Center

    Latta, Gail F.; Swigger, Keith

    1992-01-01

    Discusses the application of theories of cognitive modeling to information systems design and describes research that investigated the validity of the repertory grid for incorporation into intelligent front-end interfaces for information storage and retrieval systems. Personal construct theory is discussed and future research is suggested. (67…

  6. PASTIS: Bayesian extrasolar planet validation II. Constraining exoplanet blend scenarios using spectroscopic diagnoses

    E-print Network

    Santerne, A; Almenara, J -M; Bouchy, F; Deleuil, M; Figueira, P; Hébrard, G; Moutou, C; Rodionov, S; Santos, N C

    2015-01-01

    The statistical validation of transiting exoplanets proved to be an efficient technique to secure the nature of small exoplanet signals which cannot be established by purely spectroscopic means. However, the spectroscopic diagnoses provide us with useful constraints on the presence of blended stellar contaminants. In this paper, we present how a contaminating star affects the measurements of the various spectroscopic diagnoses as a function of the parameters of the target and contaminating stars, using the model implemented in the PASTIS planet-validation software. We find particular cases for which a blend might produce a large radial velocity signal but no bisector variation. It might also produce a bisector variation anti-correlated with the radial velocity one, as in the case of stellar spots. In those cases, the full width at half maximum variation provides complementary constraints. These results can be used to constrain blend scenarios for transiting planet candidates or radial velocity planets. We r...

  7. Modeling and validation of full fabric targets under ballistic impact

    Microsoft Academic Search

    Sidney Chocron; Eleonora Figueroa; Nikki King; Trenton Kirchdoerfer; Arthur E. Nicholls; Erick Sagebiel; Carl Weiss; Christopher J. Freitas

    2010-01-01

    The impact of three different projectiles (0.357 Magnum, 9-mm FMJ and 0.30 cal FSP) onto Kevlar® was modeled using a commercial finite-element program. The focus of the research was on simulating full-scale body armor targets, which were modeled at the yarn level, by reducing to a minimum the number of solid elements per yarn. A thorough validation of the impact

  8. Synergistic verification and validation of systems and software engineering models

    Microsoft Academic Search

    Yosr Jarraya; Andrei Soeanu; Luay Alawneh; Mourad Debbabi; Fawzi Hassaïne

    2009-01-01

    In this paper, we present a unified approach for the verification and validation of software and systems engineering design models expressed in UML 2.0 and SysML 1.0. The approach is based on three well-established techniques, namely formal analysis, programme analysis and software engineering (SwE) techniques. More precisely, our contribution consists of the synergistic combination of model checking, static analysis and

  9. Experimental validation of a model of an uncontrolled bicycle

    Microsoft Academic Search

    J. D. G. Kooijman; A. L. Schwab; J. P. Meijaard

    2008-01-01

    In this paper, an experimental validation of some modelling aspects of an uncontrolled bicycle is presented. In numerical models, many physical aspects of the real bicycle are considered negligible, such as the flexibility of the frame and wheels, play in the bearings, and precise tire characteristics. The admissibility of these assumptions has been checked by comparing experimental results with numerical

  10. Concurrent Validity of the WISC-IV and DAS-II in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Kuriakose, Sarah

    2014-01-01

    Cognitive assessments are used for a variety of research and clinical purposes in children with autism spectrum disorder (ASD). This study establishes concurrent validity of the Wechsler Intelligence Scales for Children-fourth edition (WISC-IV) and Differential Ability Scales-second edition (DAS-II) in a sample of children with ASD with a broad…

  11. Finite Element Model Development and Validation for Aircraft Fuselage Structures

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.

    2000-01-01

    The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results. The increased frequency range results in a corresponding increase in the number of modes, modal density and spatial resolution requirements. In this study, conventional modal tests using accelerometers are complemented with Scanning Laser Doppler Velocimetry and Electro-Optic Holography measurements to further resolve the spatial response characteristics. Whenever possible, component and subassembly modal tests are used to validate the finite element models at lower levels of assembly. Normal mode predictions for different finite element representations of components and assemblies are compared with experimental results to assess the most accurate techniques for modeling aircraft fuselage type structures.

  12. Does cross validation provide additional information in the evaluation of regression models?

    Microsoft Academic Search

    Antal Kozak; Robert Kozak

    2003-01-01

    A detailed study using seven data sets, two standing tree volume estimating models, and a height-diameter model showed that fit statistics and lack-of-fit statistics calculated directly from a regression model can be well estimated using simulations of cross validation or double cross validation. These results suggest that cross validation by data splitting and double cross validation
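
    A minimal sketch of cross-validation by data splitting for a regression model is given below using scikit-learn; the synthetic predictors and response merely stand in for the tree volume and height-diameter data used in the study.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import KFold
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        X = rng.uniform(10, 50, size=(100, 2))                     # e.g., diameter and height
        y = 0.02 * X[:, 0] ** 2 * X[:, 1] + rng.normal(0, 5, 100)  # synthetic "volume" response

        fold_mse = []
        for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            model = LinearRegression().fit(X[train], y[train])
            fold_mse.append(mean_squared_error(y[test], model.predict(X[test])))

        cv_rmse = np.sqrt(np.mean(fold_mse))       # cross-validated prediction error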

  13. An Examination of the Validity of the Family Affluence Scale II (FAS II) in a General Adolescent Population of Canada

    ERIC Educational Resources Information Center

    Boudreau, Brock; Poulin, Christiane

    2009-01-01

    This study examined the performance of the FAS II in a general population of 17,545 students in grades 7, 9, 10 and 12 in the Atlantic provinces of Canada. The FAS II was assessed against two other measures of socioeconomic status: mother's highest level of education and family structure. Our study found that the FAS II reduces the likelihood of…

  14. Experimental Validation of Modified Barton's Model for Rock Fractures

    NASA Astrophysics Data System (ADS)

    Asadollahi, Pooyan; Invernizzi, Marco C. A.; Addotto, Simone; Tonon, Fulvio

    2010-09-01

    Among the constitutive models for rock fractures developed over the years, Barton’s empirical model has been widely used. Although Barton’s failure criterion predicts peak shear strength of rock fractures with acceptable precision, it has some limitations in estimating the peak shear displacement, post-peak shear strength, dilation, and surface degradation. The first author modified Barton’s original model in order to address these limitations. In this study, the modified Barton’s model (the peak shear displacement, the shear stress-displacement curve, and the dilation displacement) is validated by conducting a series of direct shear tests.
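
    For reference, Barton's peak shear strength criterion in its commonly cited form is tau = sigma_n * tan(phi_r + JRC * log10(JCS / sigma_n)). The sketch below evaluates that standard form with illustrative input values; it does not include the modifications introduced by the first author and validated in this study.

        import numpy as np

        def barton_peak_shear_strength(sigma_n, jrc, jcs, phi_r_deg):
            # sigma_n: effective normal stress; jcs: joint wall compressive strength (same units);
            # jrc: joint roughness coefficient; phi_r_deg: residual friction angle in degrees.
            angle_deg = phi_r_deg + jrc * np.log10(jcs / sigma_n)
            return sigma_n * np.tan(np.radians(angle_deg))

        # e.g., sigma_n = 2 MPa, JRC = 10, JCS = 100 MPa, phi_r = 30 degrees
        tau_peak = barton_peak_shear_strength(2.0, 10.0, 100.0, 30.0)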

  15. Experimentally validated finite element model of electrocaloric multilayer ceramic structures

    SciTech Connect

    Smith, N. A. S.; Correia, T. M. [National Physical Laboratory, Hampton Road, TW11 0LW Middlesex (United Kingdom)]; Rokosz, M. K. [National Physical Laboratory, Hampton Road, TW11 0LW Middlesex (United Kingdom); Department of Materials, Imperial College London, London SW7 2AZ (United Kingdom)] (E-mail: nadia.smith@npl.co.uk, tatiana.correia@npl.co.uk, maciej.rokosz@npl.co.uk)

    2014-07-28

    A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data, suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.

  16. Experimentally validated finite element model of electrocaloric multilayer ceramic structures

    NASA Astrophysics Data System (ADS)

    Smith, N. A. S.; Rokosz, M. K.; Correia, T. M.

    2014-07-01

    A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data, suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.

  17. Validation of a Model for Teaching Canine Fundoscopy.

    PubMed

    Nibblett, Belle Marie D; Pereira, Mary Mauldin; Williamson, Julie A; Sithole, Fortune

    2015-01-01

    A validated teaching model for canine fundoscopic examination was developed to improve Day One fundoscopy skills while at the same time reducing use of teaching dogs. This novel eye model was created from a hollow plastic ball with a cutout for the pupil, a suspended 20-diopter lens, and paint and paper simulation of relevant eye structures. This eye model was mounted on a wooden stand with canine head landmarks useful in performing fundoscopy. Veterinary educators performed fundoscopy using this model and completed a survey to establish face and content validity. Subsequently, veterinary students were randomly assigned to pre-laboratory training with or without the use of this teaching model. After completion of an ophthalmology laboratory on teaching dogs, student outcome was assessed by measuring students' ability to see a symbol inserted on the simulated retina in the model. Students also completed a survey regarding their experience with the model and the laboratory. Overall, veterinary educators agreed that this eye model was well constructed and useful in teaching good fundoscopic technique. Student performance of fundoscopy was not negatively impacted by the use of the model. This novel canine model shows promise as a teaching and assessment tool for fundoscopy. PMID:25769909

  18. Open-source MFIX-DEM software for gas-solids flows: Part II Validation studies

    SciTech Connect

    Li, Tingwen [National Energy Technology Laboratory (NETL); Garg, Rahul [National Energy Technology Laboratory (NETL); Galvin, Janine [National Energy Technology Laboratory (NETL); Pannala, Sreekanth [ORNL

    2012-01-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  19. Open-Source MFIX-DEM Software for Gas-Solids Flows: Part II - Validation Studies

    SciTech Connect

    Li, Tingwen

    2012-04-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas–solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas–solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  20. Numerical modeling, calibration, and validation of an ultrasonic separator.

    PubMed

    Cappon, Hans; Keesman, Karel J

    2013-03-01

    Our overall goal is to apply acoustic separation technology for the recovery of valuable particulate matter from wastewater in industry. Such large-scale separator systems require detailed design and evaluation to optimize the system performance at the earliest stage possible. Numerical models can facilitate and accelerate the design of this application; therefore, a finite element (FE) model of an ultrasonic particle separator is a prerequisite. In our application, the particle separator consists of a glass resonator chamber with a piezoelectric transducer attached to the glass by means of epoxy adhesive. Separation occurs most efficiently when the system is operated at its main eigenfrequency. The goal of the paper is to calibrate and validate a model of a demonstrator ultrasonic separator, preserving known physical parameters and estimating the remaining unknown or less-certain parameters to allow extrapolation of the model beyond the measured system. A two-step approach was applied to obtain a validated model of the separator. The first step involved the calibration of the piezoelectric transducer. The second step, the subject of this paper, involves the calibration and validation of the entire separator using nonlinear optimization techniques. The results show that the approach led to a fully calibrated 2-D model of the empty separator, which was validated with experiments on a filled separator chamber. The large sensitivity of the separator to small variations indicated that such a system should either be made and operated within tight specifications to obtain the required performance or the operation of the system should be adaptable to cope with a slightly off-spec system, requiring a feedback controller. PMID:23475927
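
    The second calibration step described above amounts to a nonlinear least-squares fit of the unknown model parameters to measured responses. The sketch below shows that pattern with scipy; the toy resonance-curve function is only a placeholder for the finite element model, and all parameter names and values are assumptions.

        import numpy as np
        from scipy.optimize import least_squares

        def model_response(freq, params):
            # Placeholder for the FE model output (e.g., admittance vs. frequency);
            # the real evaluation would be a finite element solve, not this toy curve.
            f0, damping, gain = params
            return gain / np.sqrt((1 - (freq / f0) ** 2) ** 2 + (2 * damping * freq / f0) ** 2)

        freq = np.linspace(1.8e6, 2.2e6, 50)                      # Hz, hypothetical sweep
        measured = model_response(freq, (2.0e6, 0.02, 1.0))       # stand-in for lab data
        measured += np.random.default_rng(1).normal(0, 0.05, freq.size)

        fit = least_squares(lambda p: model_response(freq, p) - measured,
                            x0=np.array([1.9e6, 0.05, 0.8]))
        f0_hat, damping_hat, gain_hat = fit.x                     # calibrated parameters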

  1. The TIGGE Model Validation Portal: An Improvement In Data Interoperability

    NASA Astrophysics Data System (ADS)

    Cram, T.; Schuster, D. C.; Wilcox, H.; Worley, S. J.

    2011-12-01

    The THORPEX Interactive Grand Global Ensemble (TIGGE), a major component of the World Weather Research Programme, was created to help foster and accelerate improvements in the accuracy of 1-day to 2-week high-impact weather forecasts for the benefit of humanity. A key element of this effort is the ability of weather researchers to perform model forecast validation, a statistical procedure by which observational data are used to evaluate how well a numerical model forecast performs as a function of forecast time and model fields. The current methods available for obtaining model forecast verification data can be time-consuming. For example, a user may need to obtain observational, in-situ, and model forecast data from multiple providers and sources in order to carry out the verification process. In most cases, the user is required to download a set of data covering a larger domain and over a longer period of time than is necessary for the user's research. The data preparation challenge is exacerbated if the requested data sets are provided in inconsistent formats, requiring the user to convert the multiple datasets into a preferred common data format. The TIGGE model validation portal, a new product developed for the NCAR Research Data Archive (RDA), strives to solve this data interoperability problem by bringing together observational, in-situ, and model forecast data into a single data package, in a common data format. Developed to help augment TIGGE research and facilitate researchers' ability to validate TIGGE model forecasts, the portal allows users to submit a delayed-mode data request for the observational and model parameters of their choosing. Additionally, users have the option of requesting a temporal and spatial subset from the global dataset to fit their research needs. This convenience saves both time and storage resources, and allows users to focus their efforts on model verification and research.

  2. Model validation and selection based on inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, Thomas; Carvajal González, Sergio; Hanss, Michael

    2012-10-01

    In this work, a method for the validation of models in general, and the selection of the most appropriate model in particular, is presented. As an industrially relevant example, a Finite Element (FE) model of a brake pad is investigated and identified with particular respect to uncertainties. The identification is based on inverse fuzzy arithmetic and consists of two stages. In the first stage, the eigenfrequencies of the brake pad are considered, and for three different material models, a set of fuzzy-valued parameters is identified on the basis of measurement values. Based on these identified parameters and a resimulation of the system with these parameters, a model validation is performed which takes into account both the model uncertainties and the output uncertainties. In the second stage, the most appropriate material model is used in the FE model for the computation of frequency response functions between excitation point and three measurement points. Again, the parameters of the model are identified on the basis of three corresponding measurement signals and a resimulation is conducted.

  3. Robust cross-validation of linear regression QSAR models.

    PubMed

    Konovalov, Dmitry A; Llewellyn, Lyndon E; Vander Heyden, Yvan; Coomans, Danny

    2008-10-01

    A quantitative structure-activity relationship (QSAR) model is typically developed to predict the biochemical activity of untested compounds from the compounds' molecular structures. "The gold standard" of model validation is blindfold prediction, in which the model's predictive power is assessed from how well the model predicts the activity values of compounds that were not considered in any way during the model development/calibration. However, during the development of a QSAR model, it is necessary to obtain some indication of the model's predictive power. This is often done by some form of cross-validation (CV). In this study, the concepts of the predictive power and fitting ability of a multiple linear regression (MLR) QSAR model were examined in the CV context allowing for the presence of outliers. Commonly used predictive power and fitting ability statistics were assessed via Monte Carlo cross-validation when applied to percent human intestinal absorption, blood-brain partition coefficient, and toxicity values of saxitoxin QSAR data sets, as well as three known benchmark data sets with known outlier contamination. It was found that (1) a robust version of MLR should always be preferred over the ordinary-least-squares MLR, regardless of the degree of outlier contamination, and that (2) the model's predictive power should only be assessed via robust statistics. The Matlab and Java source code used in this study is freely available from the QSAR-BENCH section of www.dmitrykonovalov.org for academic use. The Web site also contains the Java-based QSAR-BENCH program, which can be run online via Java Web Start technology (supporting Windows, Mac OSX, Linux/Unix) to reproduce most of the reported results or apply the reported procedures to other data sets. PMID:18826208
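    As an illustration of the kind of comparison described here, the following sketch (not the authors' QSAR-BENCH code) runs Monte Carlo cross-validation on synthetic, outlier-contaminated data and contrasts ordinary least squares with a robust (Huber) regressor using a median-based error statistic; the data set, contamination level, and split settings are assumptions made only for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import median_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # hypothetical molecular descriptors
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.3, size=200)
y[rng.choice(200, 20, replace=False)] += 15.0        # 10% gross outliers in the activities

cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)   # Monte Carlo CV splits
errors = {"OLS": [], "Huber": []}
for train, test in cv.split(X):
    for name, model in (("OLS", LinearRegression()), ("Huber", HuberRegressor())):
        model.fit(X[train], y[train])
        # Robust (median-based) error statistic, less sensitive to the outliers
        errors[name].append(median_absolute_error(y[test], model.predict(X[test])))

for name, errs in errors.items():
    print(f"{name}: median CV error = {np.median(errs):.3f}")
```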

  4. The motivations-attributes-skills-knowledge competency cluster validation model: an empirical study

    E-print Network

    Stevens, Jeffery Allen

    2004-09-30

    ...Motivations-Attributes-Skills-Knowledge Inverted Funnel Validation (MIFV) competency cluster model. The second purpose of this empirical research study was to introduce a new competency cluster validation model (MIFV). This model, if properly developed, should serve as a strong workforce...

  5. A verification and validation process for model-driven engineering

    NASA Astrophysics Data System (ADS)

    Delmas, R.; Pires, A. F.; Polacsek, T.

    2013-12-01

    Model-Driven Engineering practitioners already benefit from many well-established verification tools, for instance for the Object Constraint Language (OCL). Recently, constraint satisfaction techniques have been brought to Model-Driven Engineering (MDE) and have shown promising results on model verification tasks. With all these tools, it becomes possible to provide users with formal support from early model design phases to model instantiation phases. In this paper, a selection of such tools and methods is presented, and an attempt is made to define a verification and validation process for model design and instance creation centered on UML (Unified Modeling Language) class diagrams and declarative constraints, and involving the selected tools. The suggested process is illustrated with a simple example.

  6. Verifying and Validating Proposed Models for FSW Process Optimization

    NASA Technical Reports Server (NTRS)

    Schneider, Judith

    2008-01-01

    This slide presentation reviews Friction Stir Welding (FSW) and the attempts to model the process in order to optimize and improve the process. The studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) Microstructure features, (2) Flow Streamlines, (3) Steady-state Nature, and (4) Grain Refinement Mechanisms

  7. Query Based UML Modeling Validation and Verification of the System Model and

    E-print Network

    Austin, Mark

    Query Based UML Modeling: Validation and Verification of the System Model and Behavior for a Hydraulic Crane. Denny Mathew, ENPM 643 System Validation and Verification; Instructor: Dr. Mark Austin. Contents include: System Description; System Architecture.

  8. Daily validation procedure of chromatographic assay using gaussoexponential modelling.

    PubMed

    Tamisier-Karolak, S L; Tod, M; Bonnardel, P; Czok, M; Cardot, P

    1995-07-01

    High-performance liquid chromatography is one of the most successful analytical methods used for the quantitative determination of drugs in biological samples. However, this method is marked by a lack of performance reproducibility: chromatographic peaks become wider and even asymmetrical as the column ages. These progressive changes in the chromatographic parameters have to be taken into account when evaluating the validation criteria for the method. These criteria change with the ageing process of the column, leading to the need for new estimations to assure the quality of the results. Procedures are proposed for the daily determination of some validation criteria using the exponentially modified Gaussian (EMG) model of the chromatographic peak. This modelling has been studied on simulated chromatographic peaks in order to obtain the relationships between chromatographic measurements and EMG parameters. PMID:8580155
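    For reference, a common closed form of the exponentially modified Gaussian peak (a Gaussian convolved with an exponential decay) is shown in the sketch below, which also fits the model to a simulated tailing peak; the peak parameters and noise level are illustrative and are not taken from the paper.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def emg(t, area, mu, sigma, tau):
    """Exponentially modified Gaussian peak: Gaussian (mu, sigma) convolved
    with an exponential decay of time constant tau; 'area' is the peak area."""
    arg = ((mu - t) / sigma + sigma / tau) / np.sqrt(2.0)
    return (area / (2.0 * tau)) * np.exp((mu - t) / tau + sigma**2 / (2.0 * tau**2)) * erfc(arg)

# Illustrative use: fit an EMG to a simulated tailing peak and recover its parameters
t = np.linspace(0, 20, 400)
y = emg(t, area=1.0, mu=8.0, sigma=0.4, tau=1.2) + np.random.default_rng(1).normal(0, 1e-3, t.size)
popt, _ = curve_fit(emg, t, y, p0=[1.0, 7.5, 0.5, 1.0])
print("fitted (area, mu, sigma, tau):", np.round(popt, 3))
```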

  9. Microelectronics package design using experimentally-validated modeling and simulation.

    SciTech Connect

    Johnson, Jay Dean; Young, Nathan Paul; Ewsuk, Kevin Gregory

    2010-11-01

    Packaging high power radio frequency integrated circuits (RFICs) in low temperature cofired ceramic (LTCC) presents many challenges. Within the constraints of LTCC fabrication, the design must provide the usual electrical isolation and interconnections required to package the IC, with additional consideration given to RF isolation and thermal management. While iterative design and prototyping is an option for developing RFIC packaging, it would be expensive and most likely unsuccessful due to the complexity of the problem. To facilitate and optimize package design, thermal and mechanical simulations were used to understand and control the critical parameters in LTCC package design. The models were validated through comparisons to experimental results. This paper summarizes an experimentally-validated modeling approach to RFIC package design, and presents some results and key findings.

  10. Rationality Validation of a Layered Decision Model for Network Defense

    SciTech Connect

    Wei, Huaqiang; Alves-Foss, James; Zhang, Du; Frincke, Deb

    2007-08-31

    We propose a cost-effective network defense strategy built on three decision layers: security policies, defense strategies, and real-time defense tactics for countering immediate threats. A layered decision model (LDM) can be used to capture this decision process. The LDM helps decision-makers gain insight into the hierarchical relationships among inter-connected entities and decision types, and supports the selection of cost-effective defense mechanisms to safeguard computer networks. To be effective as a business tool, it is first necessary to validate the rationality of the model before applying it to real-world business cases. This paper describes our efforts in validating the LDM rationality through simulation.

  11. USER'S MANUAL FOR THE PLUME VISIBILITY MODEL (PLUVUE II)

    EPA Science Inventory

    This publication contains information about the computer programs for the Plume Visibility Model PLUVUE II. A technical overview of PLUVUE II and the results of model evaluation studies are presented. The source code of PLUVUE II, as well as two sets of input and output data, is ...

  12. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the model's ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it casts the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.
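    A minimal sketch of the interval-predictor idea, under the assumption of a linear-in-parameters polynomial basis: the upper and lower bounding models are obtained from a linear program that minimizes the average spread at the sample points while containing every observation. The basis, data, and solver settings below are illustrative and do not reproduce the authors' formulations.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 60))
y = 2.0 * x + 0.3 * np.sin(6.0 * x) + rng.normal(0.0, 0.1, x.size)   # hypothetical observations

Phi = np.vander(x, 4, increasing=True)          # linear-in-parameter polynomial basis
n, m = Phi.shape

# Decision vector z = [theta_upper, theta_lower]; minimize the average spread
c = np.concatenate([Phi.mean(axis=0), -Phi.mean(axis=0)])
# Containment constraints: Phi @ theta_u >= y  and  Phi @ theta_l <= y
A_ub = np.block([[-Phi, np.zeros((n, m))],
                 [np.zeros((n, m)), Phi]])
b_ub = np.concatenate([-y, y])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
theta_u, theta_l = res.x[:m], res.x[m:]
print("average spread of the interval model:", res.fun)
```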

  13. Optimization and validation of a micellar electrokinetic chromatographic method for the analysis of several angiotensin-II-receptor antagonists.

    PubMed

    Hillaert, S; De Beer, T R M; De Beer, J O; Van den Bossche, W

    2003-01-10

    We have optimized a micellar electrokinetic capillary chromatographic method for the separation of six angiotensin-II-receptor antagonists (ARA-IIs): candesartan, eprosartan mesylate, irbesartan, losartan potassium, telmisartan, and valsartan. A face-centred central composite design was applied to study the effect of the pH, the molarity of the running buffer, and the concentration of the micelle-forming agent on the separation properties. A combination of the studied parameters permitted the separation of the six ARA-IIs, which was best carried out using a 55-mM sodium phosphate buffer solution (pH 6.5) containing 15 mM of sodium dodecyl sulfate. The same system can also be applied for the quantitative determination of these compounds, but only for the more stable ARA-IIs (candesartan, eprosartan mesylate, losartan potassium, and valsartan). Some system parameters (linearity, precision, and accuracy) were validated. PMID:12564683
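    To illustrate the experimental design used here, the sketch below lays out a face-centred central composite design for three factors (pH, buffer molarity, and SDS concentration) in coded and physical units; the factor ranges are assumed for the example and are not the ranges studied in the paper.

```python
import numpy as np
from itertools import product

# Coded levels of a face-centred central composite design (alpha = 1) for 3 factors
corners = np.array(list(product([-1, 1], repeat=3)))           # 8 factorial points
axial = np.array([[s if i == j else 0 for j in range(3)]
                  for i in range(3) for s in (-1, 1)])          # 6 face-centred axial points
centre = np.zeros((3, 3))                                       # replicated centre points
design_coded = np.vstack([corners, axial, centre])

# Map coded levels to physical factor ranges (illustrative ranges, not the paper's)
lows = np.array([5.5, 25.0, 5.0])      # pH, buffer molarity (mM), SDS (mM)
highs = np.array([7.5, 75.0, 25.0])
design_physical = lows + (design_coded + 1.0) / 2.0 * (highs - lows)

for row in design_physical:
    print(f"pH={row[0]:.1f}  buffer={row[1]:.0f} mM  SDS={row[2]:.0f} mM")
```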

  14. Finite element modeling for validation of structural damage identification experimentation.

    SciTech Connect

    Stinemates, D. W. (Daniel W.); Bennett, J. G. (Joel G.)

    2001-01-01

    The project described in this report was performed to couple experimental and analytical techniques in the field of structural health monitoring and damage identification. To do this, a finite element model was constructed of a simulated three-story building used for damage identification experiments. The model was used in conjunction with data from the physical structure to research damage identification algorithms. Of particular interest was modeling slip in joints as a function of bolt torque and predicting the smallest change of torque that could be detected experimentally. After being validated with results from the physical structure, the model was used to produce data to test the capabilities of damage identification algorithms. This report describes the finite element model constructed, the results obtained, and proposed future use of the model.

  15. Pharmacophore modeling studies of type I and type II kinase inhibitors of Tie2.

    PubMed

    Xie, Qing-Qing; Xie, Huan-Zhang; Ren, Ji-Xia; Li, Lin-Li; Yang, Sheng-Yong

    2009-02-01

    In this study, chemical-feature-based pharmacophore models of type I and type II kinase inhibitors of Tie2 have been developed with the aid of the HipHop and HypoRefine modules within the Catalyst program package. The best HipHop pharmacophore model Hypo1_I for type I kinase inhibitors contains one hydrogen-bond acceptor, one hydrogen-bond donor, one general hydrophobic, one hydrophobic aromatic, and one ring aromatic feature. The best HypoRefine model Hypo1_II for type II kinase inhibitors, which was characterized by the best correlation coefficient (0.976032) and the lowest RMSD (0.74204), consists of two hydrogen-bond donors, one hydrophobic aromatic, and two general hydrophobic features, as well as two excluded volumes. These pharmacophore models have been validated by using test set and/or cross-validation methods, which show that both Hypo1_I and Hypo1_II have good predictive ability. The space arrangements of the pharmacophore features in Hypo1_II are consistent with the locations of the three portions making up a typical type II kinase inhibitor, namely, the portion occupying the ATP binding region (ATP-binding-region portion, AP), that occupying the hydrophobic region (hydrophobic-region portion, HP), and that linking AP and HP (bridge portion, BP). Our study also reveals that the ATP-binding-region portion of the type II kinase inhibitors plays an important role in their bioactivity. Structural modifications to this portion should help to further improve the inhibitory potency of type II kinase inhibitors. PMID:19138543

  16. A validated approach for modeling collapse of steel structures

    NASA Astrophysics Data System (ADS)

    Saykin, Vitaliy Victorovich

    A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during a lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state-of-the-art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as a function of triaxiality is employed. The models to determine fracture initiation, softening, and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are shown. The calibration is performed using a particle swarm optimization algorithm to establish accurate parameters when calibrated to circumferentially notched tensile coupons. It is shown that consistent, accurate predictions are attained using the chosen models. The variation of triaxiality in steel material during plastic hardening and softening is reported. The range of triaxiality in steel structures undergoing collapse is investigated in detail and the accuracy of the chosen finite element deletion approaches is discussed. This is done through validation of different structural components and structural frames undergoing severe fracture and collapse.
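    A minimal sketch of a triaxiality-dependent fracture criterion with damage accumulation and element deletion, in the spirit of the approach described above; the Johnson–Cook-type fracture-strain expression and its coefficients are illustrative stand-ins, not the calibrated models from this work.

```python
import numpy as np

def fracture_strain(triaxiality, d1=0.05, d2=0.8, d3=1.5):
    """Illustrative Johnson-Cook-type fracture strain as a function of
    stress triaxiality eta = sigma_mean / sigma_eq (coefficients are made up)."""
    return d1 + d2 * np.exp(-d3 * triaxiality)

def update_damage(damage, delta_eps_p, triaxiality):
    """Accumulate damage at an integration point; the element is flagged for
    deletion once the damage indicator reaches 1."""
    damage += delta_eps_p / fracture_strain(triaxiality)
    return damage, damage >= 1.0

# Example: an integration point loaded in increments at nearly uniaxial tension
damage, deleted = 0.0, False
for step in range(200):
    damage, deleted = update_damage(damage, delta_eps_p=0.005, triaxiality=0.33)
    if deleted:
        print(f"element deleted at step {step}, accumulated damage = {damage:.2f}")
        break
```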

  17. Low-order dynamic modeling of the Experimental Breeder Reactor II

    SciTech Connect

    Berkan, R.C. (Tennessee Univ., Knoxville, TN (USA). Dept. of Nuclear Engineering); Upadhyaya, B.R.; Kisner, R.A. (Oak Ridge National Lab., TN (USA))

    1990-07-01

    This report describes the development of a low-order, linear model of the Experimental Breeder Reactor II (EBR-II), including the primary system, intermediate heat exchanger, and steam generator subsystems. The linear model is developed to represent full-power steady state dynamics for low-level perturbations. Transient simulations are performed using model building and simulation capabilities of the computer software MATRIXx. The inherently safe characteristics of the EBR-II are verified through the simulation studies. The results presented in this report also indicate an agreement between the linear model and the actual dynamics of the plant for several transients. Such models play a major role in understanding and improving nuclear reactor dynamics for control and signal validation studies. This research and development is sponsored by the Advanced Controls Program in the Instrumentation and Controls Division of the Oak Ridge National Laboratory. 17 refs., 67 figs., 15 tabs.

  18. Validation of the WATEQ4 geochemical model for uranium

    SciTech Connect

    Krupka, K.M.; Jenne, E.A.; Deutsch, W.J.

    1983-09-01

    As part of the Geochemical Modeling and Nuclide/Rock/Groundwater Interactions Studies Program, a study was conducted to partially validate the WATEQ4 aqueous speciation-solubility geochemical model for uranium. The solubility controls determined with the WATEQ4 geochemical model were in excellent agreement with those laboratory studies in which the solids schoepite (UO2(OH)2·H2O), UO2(OH)2, and rutherfordine (UO2CO3) were identified as actual solubility controls for uranium. The results of modeling solution analyses from laboratory studies of uranyl phosphate solids, however, identified possible errors in the characterization of solids in the original solubility experiments. As part of this study, significant deficiencies in the WATEQ4 thermodynamic data base for uranium solutes and solids were corrected. Revisions included recalculation of selected uranium reactions. Additionally, thermodynamic data for the hydroxyl complexes of U(VI), including anionic U(VI) species, were evaluated (to the extent permitted by the available data). Vanadium reactions were also added to the thermodynamic data base because uranium-vanadium solids can exist in natural ground-water systems. This study is only a partial validation of the WATEQ4 geochemical model because the available laboratory solubility studies do not cover the range of solid phases, alkaline pH values, and concentrations of inorganic complexing ligands needed to evaluate the potential solubility of uranium in ground waters associated with various proposed nuclear waste repositories. Further validation of this or other geochemical models for uranium will require careful determinations of uraninite solubility over the pH range of 7 to 10 under highly reducing conditions and of uranyl hydroxide and phosphate solubilities over the pH range of 7 to 10 under oxygenated conditions.

  19. Experimental Validation and Applications of a Fluid Infiltration Model

    PubMed Central

    Kao, Cindy S.; Hunt, James R.

    2010-01-01

    Horizontal infiltration experiments were performed to validate a plug flow model that minimizes the number of parameters that must be measured. Water and silicone oil at three different viscosities were infiltrated into glass beads, desert alluvium, and silica powder. Experiments were also performed with negative inlet heads on air-dried silica powder, and with water and oil infiltrating into initially water moist silica powder. Comparisons between the data and model were favorable in most cases, with predictions usually within 40% of the measured data. The model is extended to a line source and small areal source at the ground surface to analytically predict the shape of two-dimensional wetting fronts. Furthermore, a plug flow model for constant flux infiltration agrees well with field data and suggests that the proposed model for a constant-head boundary condition can be effectively used to predict wetting front movement at heterogeneous field sites if averaged parameter values are used. PMID:20428480
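    A minimal sketch of a sharp-front (Green–Ampt-type) plug flow calculation for horizontal infiltration under a constant inlet head, in which the wetting front advances as the square root of time; this is a generic textbook form, not necessarily the exact parameterization validated in the paper, and the property values are illustrative.

```python
import numpy as np

def wetting_front_position(t, Ks, h0, psi_f, d_theta):
    """Sharp-front (plug flow) horizontal infiltration: the wetted zone advances
    so that the Darcy flux Ks*(h0 + psi_f)/x fills the pore space d_theta.
    Integrating d_theta * dx/dt = Ks*(h0 + psi_f)/x gives x(t) ~ sqrt(t)."""
    return np.sqrt(2.0 * Ks * (h0 + psi_f) * t / d_theta)

# Illustrative properties (roughly silica-powder-like, not measured values)
Ks, h0, psi_f, d_theta = 1.0e-6, 0.05, 0.30, 0.35   # m/s, m, m, (-)
for t in (60.0, 600.0, 3600.0):                      # elapsed time in seconds
    print(f"t = {t:6.0f} s  front at {wetting_front_position(t, Ks, h0, psi_f, d_theta):.3f} m")
```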

  20. A validation procedure for a LADAR system radiometric simulation model

    NASA Astrophysics Data System (ADS)

    Leishman, Brad; Budge, Scott; Pack, Robert

    2007-04-01

    The USU LadarSIM software package is a ladar system engineering tool that has recently been enhanced to include the modeling of the radiometry of ladar beam footprints. This paper will discuss our validation of the radiometric model and present a practical approach to future validation work. In order to validate complicated and interrelated factors affecting radiometry, a systematic approach had to be developed. Data for known parameters were first gathered, and then unknown parameters of the system were determined from simulation test scenarios. This was done in a way that isolated as many unknown variables as possible and then built on the previously obtained results. First, the appropriate voltage threshold levels of the discrimination electronics were set by analyzing the number of false alarms seen in actual data sets. With this threshold set, the system noise was then adjusted to achieve the appropriate number of dropouts. Once a suitable noise level was found, the range errors of the simulated and actual data sets were compared and studied. Predicted errors in range measurements were analyzed using two methods: first by examining the range error of a surface with known reflectivity and second by examining the range errors for specific detectors with known responsivities. This provided insight into the discrimination method and receiver electronics used in the actual system.

  1. Model calibration and validation of an impact test simulation

    SciTech Connect

    Hemez, F. M. (François M.); Wilson, A. C. (Amanda C.); Havrilla, G. N. (George N.)

    2001-01-01

    This paper illustrates the methodology being developed at Los Alamos National Laboratory for the validation of numerical simulations for engineering structural dynamics. The application involves the transmission of a shock wave through an assembly that consists of a steel cylinder and a layer of elastomeric (hyper-foam) material. The assembly is mounted on an impact table to generate the shock wave. The input acceleration and three output accelerations are measured. The main objective of the experiment is to develop a finite element representation of the system capable of reproducing the test data with acceptable accuracy. Foam layers of various thicknesses and several drop heights are considered during impact testing. Each experiment is replicated several times to estimate the experimental variability. Instead of focusing on the calibration of input parameters for a single configuration, the numerical model is validated for its ability to predict the response of three different configurations (various combinations of foam thickness and drop height). Design of Experiments is implemented to perform parametric and statistical variance studies. Surrogate models are developed to replace the computationally expensive numerical simulation. Variables of the finite element model are separated into calibration variables and control variables. The models are calibrated to provide numerical simulations that correctly reproduce the statistical variation of the test configurations. The calibration step also provides inference for the parameters of a high strain-rate dependent material model of the hyper-foam. After calibration, the validity of the numerical simulation is assessed through its ability to predict the response of a fourth test setup.

  2. Modeling the Arm II core in MicroCap IV

    Microsoft Academic Search

    Dalton

    1996-01-01

    This paper reports on how an electrical model for the core of the Arm II machine was created and how to use this model. We wanted to get a model for the electrical characteristics of the ARM II core, in order to simulate this machine and to assist in the design of a future machine. We wanted this model to

  3. A turbulence model for iced airfoils and its validation

    NASA Technical Reports Server (NTRS)

    Shin, Jaiwon; Chen, Hsun H.; Cebeci, Tuncer

    1992-01-01

    A turbulence model based on the extension of the algebraic eddy viscosity formulation of Cebeci and Smith developed for two-dimensional flows over smooth and rough surfaces is described for iced airfoils and validated for computed ice shapes obtained for a range of total temperatures varying from 28 to -15 F. The validation is made with an interactive boundary layer method which uses a panel method to compute the inviscid flow and an inverse finite difference boundary layer method to compute the viscous flow. The interaction between inviscid and viscous flows is established by the use of the Hilbert integral. The calculated drag coefficients compare well with recent experimental data taken at the NASA-Lewis Icing Research Tunnel (IRT) and show that, in general, the drag increase due to ice accretion can be predicted well and efficiently.

  4. Validation Analysis of the Shoal Groundwater Flow and Transport Model

    SciTech Connect

    A. Hassan; J. Chapman

    2008-11-01

    Environmental restoration at the Shoal underground nuclear test is following a process prescribed by a Federal Facility Agreement and Consent Order (FFACO) between the U.S. Department of Energy, the U.S. Department of Defense, and the State of Nevada. Characterization of the site included two stages of well drilling and testing in 1996 and 1999, and development and revision of numerical models of groundwater flow and radionuclide transport. Agreement on a contaminant boundary for the site and a corrective action plan was reached in 2006. Later that same year, three wells were installed for the purposes of model validation and site monitoring. The FFACO prescribes a five-year proof-of-concept period for demonstrating that the site groundwater model is capable of producing meaningful results with an acceptable level of uncertainty. The corrective action plan specifies a rigorous seven-step validation process. The accepted groundwater model is evaluated using that process in light of the newly acquired data. The conceptual model of groundwater flow for the Project Shoal Area considers groundwater flow through the fractured granite aquifer comprising the Sand Springs Range. Water enters the system by the infiltration of precipitation directly on the surface of the mountain range. Groundwater leaves the granite aquifer by flowing into alluvial deposits in the adjacent basins of Fourmile Flat and Fairview Valley. A groundwater divide is interpreted as coinciding with the western portion of the Sand Springs Range, west of the underground nuclear test, preventing flow from the test into Fourmile Flat. A very low conductivity shear zone east of the nuclear test roughly parallels the divide. The presence of these lateral boundaries, coupled with a regional discharge area to the northeast, is interpreted in the model as causing groundwater from the site to flow in a northeastward direction into Fairview Valley. Steady-state flow conditions are assumed given the absence of groundwater withdrawal activities in the area. The conceptual and numerical models were developed based upon regional hydrogeologic investigations conducted in the 1960s, site characterization investigations (including ten wells and various geophysical and geologic studies) at Shoal itself prior to and immediately after the test, and two site characterization campaigns in the 1990s for environmental restoration purposes (including eight wells and a year-long tracer test). The new wells are denoted MV-1, MV-2, and MV-3, and are located to the north-northeast of the nuclear test. The groundwater model was generally lacking data in the north-northeastern area; only HC-1 and the abandoned PM-2 wells existed in this area. The wells provide data on fracture orientation and frequency, water levels, hydraulic conductivity, and water chemistry for comparison with the groundwater model. A total of 12 real-number validation targets were available for the validation analysis, including five values of hydraulic head, three hydraulic conductivity measurements, three hydraulic gradient values, and one angle value for the lateral gradient in radians. In addition, the fracture dip and orientation data provide comparisons to the distributions used in the model, and radiochemistry is available for comparison to model output. Goodness-of-fit analysis indicates that some of the model realizations correspond well with the newly acquired conductivity, head, and gradient data, while others do not.
Other tests indicated that additional model realizations may be needed to test if the model input distributions need refinement to improve model performance. This approach (generating additional realizations) was not followed because it was realized that there was a temporal component to the data disconnect: the new head measurements are on the high side of the model distributions, but the heads at the original calibration locations themselves have also increased over time. This indicates that the steady-state assumption of the groundwater model is in error. To test the robustness of the model d

  5. VALIDATION OF COMPUTER MODELS FOR RADIOACTIVE MATERIAL SHIPPING PACKAGES

    SciTech Connect

    Gupta, N; Gene Shine, G; Cary Tuckfield, C

    2007-05-07

    Computer models are abstractions of physical reality and are routinely used for solving practical engineering problems. These models are prepared using large, complex computer codes that are widely used in the industry. Patran/Thermal is such a finite element computer code that is used for solving complex heat transfer problems in the industry. Finite element models of complex problems involve making assumptions and simplifications that depend upon the complexity of the problem and upon the judgment of the analysts. The assumptions involve mesh size, solution methods, convergence criteria, material properties, boundary conditions, etc., which could vary from analyst to analyst. All of these assumptions are, in fact, candidates for a purposeful and intended effort to systematically vary each in connection with the others to determine their relative importance or expected overall effect on the modeled outcome. These kinds of models derive from the methods of statistical science and are based on the principles of experimental designs. These, as all computer models, must be validated to make sure that the output from such an abstraction represents reality [1,2]. A new nuclear material packaging design, called 9977, which is undergoing a certification design review, is used to assess the capability of the Patran/Thermal computer model to simulate 9977 thermal response. The computer model for the 9977 package is validated by comparing its output with the test data collected from an actual thermal test performed on a full-size 9977 package. Inferences are drawn by performing statistical analyses on the residuals (test data minus model predictions).

  6. Validation of models that estimate the cost-effectiveness of improving patient adherence.

    PubMed

    Gandjour, Afschin

    2013-12-01

    This note suggests a test for internal validation of models that estimate the costs and effects of improving patient adherence. We apply the validation test to two published cost-effectiveness models on adherence improvement. PMID:24326171

  7. Multiresolution Analysis of Radiative Transfer through Inhomogeneous Media. Part II: Validation and New Insights.

    NASA Astrophysics Data System (ADS)

    Ferlay, Nicolas; Isaka, Harumi; Gabriel, Philip; Benassi, Albert

    2006-04-01

    The multiresolution radiative transfer equations of Part I of this paper are solved numerically for the case of inhomogeneous model clouds using Meyer’s basis functions. After analyzing the properties of Meyer’s connection coefficients and effective coupling operators (ECOs) for two examples of extinction functions, the present approach is validated by comparisons with Spherical Harmonic Discrete Ordinate Method (SHDOM) and Monte Carlo codes, and a preliminary analysis of the local-scale coupling between the cloud inhomogeneities and the radiance fields is presented. It is demonstrated that the contribution of subpixel-scale cloud inhomogeneities to pixel-scale radiation fields may be very important and that it varies considerably as a function of local cloud inhomogeneities.


  8. Cross-Cultural Validation of the Beck Depression Inventory-II across U.S. and Turkish Samples

    ERIC Educational Resources Information Center

    Canel-Cinarbas, Deniz; Cui, Ying; Lauridsen, Erica

    2011-01-01

    The purpose of this study was to test the Beck Depression Inventory-II (BDI-II) for factorial invariance across Turkish and U.S. college student samples. The results indicated that (a) a two-factor model has an adequate fit for both samples, thus providing evidence of configural invariance, and (b) there is a metric invariance but "no" sufficient…

  9. MODEL VALIDATION FOR A NONINVASIVE ARTERIAL STENOSIS DETECTION PROBLEM

    PubMed Central

    BANKS, H. THOMAS; HU, SHUHUA; KENZ, ZACKARY R.; KRUSE, CAROLA; SHAW, SIMON; WHITEMAN, JOHN; BREWIN, MARK P.; GREENWALD, STEPHEN E.; BIRCH, MALCOLM J.

    2014-01-01

    A current thrust in medical research is the development of a non-invasive method for detection, localization, and characterization of an arterial stenosis (a blockage or partial blockage in an artery). A method has been proposed to detect shear waves in the chest cavity which have been generated by disturbances in the blood flow resulting from a stenosis. In order to develop this methodology further, we use one-dimensional shear wave experimental data from novel acoustic phantoms to validate a corresponding viscoelastic mathematical model. We estimate model parameters which give a good fit (in a sense to be precisely defined) to the experimental data, and use asymptotic error theory to provide confidence intervals for parameter estimates. Finally, since a robust error model is necessary for accurate parameter estimates and confidence analysis, we include a comparison of absolute and relative models for measurement error. PMID:24506547

  10. Validation of plume models statistical methods and criteria

    SciTech Connect

    Bowne, N.E.

    1981-01-01

    The Electric Power Research Institute has undertaken an experimental and analytical project designed to evaluate the performance of power plant plume models. A workshop was held to define the statistical methods and criteria proposed for model evaluation for this project. The workshop was principally concerned with the assessment of model performance in predicting ground-level concentrations, and the use of model predictions as a basis for regulatory decision-making. Characteristics of the concentration pattern of primary concern for validation are the magnitude and location of maximum concentration; the maximum concentration at a given distance; plume width and crosswind-integrated concentration at a given distance; and the correlation of predicted with observed point concentration values. Statistical methods appropriate for comparing simultaneous predicted and observed values of plume characteristics, and for comparing the frequency distribution of predicted values with that of observed values, have been specified. These statistical methods require a data set reasonably free of temporal autocorrelation. Validation analyses should also include an investigation of data limitations, including measurement uncertainty and the estimation of plume characteristics from a finite grid of sampling points.
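    The sketch below computes a few statistics commonly used when comparing paired predicted and observed concentrations (fractional bias, normalized mean square error, and correlation); these are standard air-quality evaluation metrics offered only as an illustration, not necessarily the exact criteria adopted at the workshop, and the data are synthetic.

```python
import numpy as np

def validation_stats(observed, predicted):
    """Paired comparison statistics for predicted vs. observed concentrations."""
    co, cp = np.asarray(observed, float), np.asarray(predicted, float)
    fb = 2.0 * (co.mean() - cp.mean()) / (co.mean() + cp.mean())    # fractional bias
    nmse = np.mean((co - cp) ** 2) / (co.mean() * cp.mean())        # normalized mean square error
    r = np.corrcoef(co, cp)[0, 1]                                   # correlation of paired values
    return {"FB": fb, "NMSE": nmse, "r": r}

# Synthetic ground-level concentrations (arbitrary units)
rng = np.random.default_rng(3)
obs = rng.lognormal(mean=1.0, sigma=0.5, size=50)
pred = obs * rng.lognormal(mean=0.1, sigma=0.3, size=50)            # biased, scattered "model"
print(validation_stats(obs, pred))
```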

  11. Validation of thermal models for a prototypical MEMS thermal actuator.

    SciTech Connect

    Gallis, Michail A.; Torczynski, John Robert; Piekos, Edward Stanley; Serrano, Justin Raymond; Gorby, Allen D.; Phinney, Leslie Mary

    2008-09-01

    This report documents technical work performed to complete the ASC Level 2 Milestone 2841: validation of thermal models for a prototypical MEMS thermal actuator. This effort requires completion of the following task: the comparison between calculated and measured temperature profiles of a heated stationary microbeam in air. Such heated microbeams are prototypical structures in virtually all electrically driven microscale thermal actuators. This task is divided into four major subtasks. (1) Perform validation experiments on prototypical heated stationary microbeams in which material properties such as thermal conductivity and electrical resistivity are measured if not known and temperature profiles along the beams are measured as a function of electrical power and gas pressure. (2) Develop a noncontinuum gas-phase heat-transfer model for typical MEMS situations including effects such as temperature discontinuities at gas-solid interfaces across which heat is flowing, and incorporate this model into the ASC FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (3) Develop a noncontinuum solid-phase heat transfer model for typical MEMS situations including an effective thermal conductivity that depends on device geometry and grain size, and incorporate this model into the FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (4) Perform combined gas-solid heat-transfer simulations using Calore with these models for the experimentally investigated devices, and compare simulation and experimental temperature profiles to assess model accuracy. These subtasks have been completed successfully, thereby completing the milestone task. Model and experimental temperature profiles are found to be in reasonable agreement for all cases examined. Modest systematic differences appear to be related to uncertainties in the geometric dimensions of the test structures and in the thermal conductivity of the polycrystalline silicon test structures, as well as uncontrolled nonuniform changes in this quantity over time and during operation.

  12. Validation of a computational model for the evaluation of fuel-coolant interaction under severe accidental condition in fast breeder reactors

    Microsoft Academic Search

    Tetsuo Sawada; Hisashi Ninokata

    1998-01-01

    A computational model for fuel-coolant interaction has been validated through calculations for a series of THINA experiments. The experiments were intended to simulate a comparatively massive injection of molten core materials into a sodium pool under a core-disruptive accident condition assumed for fast breeder reactors. The calculations by the SIMMER-II code showed that the current models for the

  13. Validity Domains of Beams Behavioural Models: Efficiency and Reduction with Artificial Neural Networks

    Microsoft Academic Search

    Keny Ordaz-Hernandez; Xavier Fischer; Fouad Bennis

    2008-01-01

    In a particular case of behavioural model reduction by ANNs, a validity domain shortening has been found. In mechanics, as in other domains, the notion of validity domain allows the engineer to choose a valid model for a particular analysis or simulation. In the study of mechanical behaviour for a cantilever beam (using linear and non-linear models), Multi-Layer Perceptron

  14. A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.

    ERIC Educational Resources Information Center

    Edmonston, Leon P.; Randall, Robert S.

    A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…

  15. A practical application of recent results in model and controller validation to a ferrosilicon production process

    Microsoft Academic Search

    Benoit Codrons; X. Bombois; M. Gevers; G. Scorletti

    2000-01-01

    This paper presents the application of our recently developed theory on model validation for control and controller validation in a prediction error framework to a realistic industrial case study. The industrial application concerns the control of the silicon concentration in a ferrosilicon production process. Our case study produces findings about the design of the validation experiment (validation in open or

  16. WASTES II model storage requirements benchmark testing

    SciTech Connect

    Shay, M.R.; Walling, R.C.; Altenhofen, M.K.

    1986-09-01

    A study was conducted to benchmark results obtained from using the Waste System Transportation and Economic Simulation - Version II (WASTES II) model against information published in the ''Spent Fuel Storage Requirements'' report (DOE/RL-84-1). The WASTES model was developed by PNL for use in evaluating the spent-fuel storage and transportation requirements and costs for the US Department of Energy (DOE). The ''Spent Fuel Storage Requirements'' report is issued annually by the DOE and provides both historical/projected spent fuel inventory data and storage requirements data based on information supplied directly from utilities. The objective of this study is to compare the total inventory and storage requirements documented in the ''Spent Fuel Storage Requirements'' report with similar data that results from use of the WASTES model. Three differences have been identified as a result of benchmark testing. Two minor differences are present in the total inventory projected and the equivalent metric tons of uranium of spent fuel requiring storage. These differences result from the way reinserted spent fuel is handled and the methods used to calculate mass equivalents. A third difference is found in the storage requirements for the case that uses intra-utility transshipment. This discrepancy is due to the Oyster Creek reactor, which is shown to not require additional storage in the ''Spent Fuel Storage Requirements'' report, even though there is no destination reactor of the same type within its utility. The discrepancy was corrected soon after the 1984 ''Spent Fuel Storage Requirements'' report was issued and does not appear in more recent documents (DOE/RL-85-2).

  17. Optimization and validation of a capillary zone electrophoretic method for the analysis of several angiotensin-II-receptor antagonists.

    PubMed

    Hillaert, S; Van den Bossche, W

    2002-12-01

    We optimized a capillary zone electrophoretic method for separation of six angiotensin-II-receptor antagonists (ARA-IIs): candesartan, eprosartan, irbesartan, losartan potassium, telmisartan, and valsartan. A three-level, full-factorial design was applied to study the effect of the pH and molarity of the running buffer on separation. Combination of the studied parameters permitted the separation of the six ARA-IIs, which was best carried out using 60 mM sodium phosphate buffer (pH 2.5). The same system can also be applied for the quantitative determination of these compounds, but only for the more soluble ones. Some parameters (linearity, precision and accuracy) were validated. PMID:12498264

  18. TMT studies on thermal seeing modeling: mirror seeing model validation

    Microsoft Academic Search

    Konstantinos Vogiatzis; Robert Upton

    2006-01-01

    Mirror and dome seeing are critical effects influencing the optical performance of large ground based telescopes. Computational Fluid Dynamics (CFD) and optical models that simulate mirror seeing in the Thirty Meter Telescope (TMT) are presented. The optical model used to quantify the effects of seeing utilizes the spatially varying refractive index resulting from the expected theoretical flow field, while the

  19. Modeling TCP Throughput: A Simple Model and Its Empirical Validation

    Microsoft Academic Search

    Jitendra Padhye; Victor Firoiu; Donald F. Towsley; James F. Kurose

    1998-01-01

    In this paper we develop a simple analytic characterization of the steady state throughput, as a function of loss rate and round trip time for a bulk transfer TCP flow, i.e., a flow with an unlimited amount of data to send. Unlike the models in [6, 7, 10], our model captures not only the behavior of TCP's fast retransmit mechanism
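    The closed-form approximation usually associated with this line of work expresses steady-state throughput in terms of the loss rate, round-trip time, and retransmission timeout; the sketch below evaluates that widely quoted form, with the caveat that the notation and constants should be checked against the published paper.

```python
import math

def tcp_throughput(p, rtt, t0, mss=1460, b=2):
    """Approximate steady-state TCP bulk-transfer throughput (bytes/s) as a
    function of loss rate p, round-trip time rtt (s), retransmission timeout
    t0 (s), segment size mss (bytes), and b packets acknowledged per ACK."""
    if p <= 0.0:
        return float("inf")                      # loss-free case: the model does not apply
    denom = rtt * math.sqrt(2.0 * b * p / 3.0) \
            + t0 * min(1.0, 3.0 * math.sqrt(3.0 * b * p / 8.0)) * p * (1.0 + 32.0 * p * p)
    return mss / denom

# Example: 2% loss, 100 ms round-trip time, 1 s initial retransmission timeout
print(f"{tcp_throughput(0.02, 0.1, 1.0) / 1e3:.1f} kB/s")
```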

  20. Statistical validation of structured population models for Daphnia magna.

    PubMed

    Adoteye, Kaska; Banks, H T; Cross, Karissa; Eytcheson, Stephanie; Flores, Kevin B; LeBlanc, Gerald A; Nguyen, Timothy; Ross, Chelsea; Smith, Emmaline; Stemkovski, Michael; Stokely, Sarah

    2015-08-01

    In this study we use statistical validation techniques to verify density-dependent mechanisms hypothesized for populations of Daphnia magna. We develop structured population models that exemplify specific mechanisms and use multi-scale experimental data in order to test their importance. We show that fecundity and survival rates are affected by both time-varying density-independent factors, such as age, and density-dependent factors, such as competition. We perform uncertainty analysis and show that our parameters are estimated with a high degree of confidence. Furthermore, we perform a sensitivity analysis to understand how changes in fecundity and survival rates affect population size and age-structure. PMID:26092608
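    A minimal sketch of a density-dependent, age-structured (Leslie-matrix) projection of the general kind used to encode such mechanisms; the age classes, fecundities, survival rates, and competition term are invented for illustration and are not the estimates reported for Daphnia magna.

```python
import numpy as np

n_classes = 5
fecundity = np.array([0.0, 2.0, 6.0, 6.0, 4.0])     # offspring per individual, per age class
survival = np.array([0.8, 0.9, 0.9, 0.7])           # survival to the next age class
K = 500.0                                            # strength of density dependence

def project(pop, steps=60):
    """Project the age distribution forward, scaling fecundity by a crowding factor."""
    history = [pop.copy()]
    for _ in range(steps):
        crowding = 1.0 / (1.0 + pop.sum() / K)       # density-dependent reduction of births
        L = np.zeros((n_classes, n_classes))
        L[0, :] = fecundity * crowding               # density-dependent fecundities (top row)
        L[np.arange(1, n_classes), np.arange(n_classes - 1)] = survival   # subdiagonal survival
        pop = L @ pop
        history.append(pop.copy())
    return np.array(history)

traj = project(np.array([50.0, 0.0, 0.0, 0.0, 0.0]))
print("final total population:", traj[-1].sum().round(1))
```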

  1. Analytical thermal model validation for Cassini radioisotope thermoelectric generator

    SciTech Connect

    Lin, E.I. [California Inst. of Tech., Pasadena, CA (United States). Jet Propulsion Lab.

    1997-12-31

    The Saturn-bound Cassini spacecraft is designed to rely, without precedent, on the waste heat from its three radioisotope thermoelectric generators (RTGs) to warm the propulsion module subsystem, and the RTG end dome temperature is a key determining factor of the amount of waste heat delivered. A previously validated SINDA thermal model of the RTG was the sole guide to understanding its complex thermal behavior, but displayed large discrepancies against some initial thermal development test data. A careful revalidation effort led to significant modifications and adjustments of the model, which result in a doubling of the radiative heat transfer from the heat source support assemblies to the end domes and bring up the end dome and flange temperature predictions to within 2 C of the pertinent test data. The increased inboard end dome temperature has a considerable impact on thermal control of the spacecraft central body. The validation process offers an example of physically-driven analytical model calibration with test data from not only an electrical simulator but also a nuclear-fueled flight unit, and has established the end dome temperatures of a flight RTG where no in-flight or ground-test data existed before.

  2. Multiscale Modeling of Gastrointestinal Electrophysiology and Experimental Validation

    PubMed Central

    Du, Peng; O'Grady, Greg; Davidson, John B.; Cheng, Leo K.; Pullan, Andrew J.

    2011-01-01

    Normal gastrointestinal (GI) motility results from the coordinated interplay of multiple cooperating mechanisms, both intrinsic and extrinsic to the GI tract. A fundamental component of this activity is an omnipresent electrical activity termed slow waves, which is generated and propagated by the interstitial cells of Cajal (ICCs). The role of ICC loss and network degradation in GI motility disorders is a significant area of ongoing research. This review examines recent progress in the multiscale modeling framework for effectively integrating a vast range of experimental data in GI electrophysiology, and outlines the prospect of how modeling can provide new insights into GI function in health and disease. The review begins with an overview of the GI tract and its electrophysiology, and then focuses on recent work on modeling GI electrical activity, spanning from cell to body biophysical scales. Mathematical cell models of the ICCs and smooth muscle cell are presented. The continuum framework of monodomain and bidomain models for tissue and organ models are then considered, and the forward techniques used to model the resultant body surface potential and magnetic field are discussed. The review then outlines recent progress in experimental support and validation of modeling, and concludes with a discussion on potential future research directions in this field. PMID:21133835

  3. A validation study of a stochastic model of human interaction

    NASA Astrophysics Data System (ADS)

    Burchfield, Mitchel Talmadge

    The purpose of this dissertation is to validate a stochastic model of human interactions which is part of a developmentalism paradigm. Incorporating elements of ancient and contemporary philosophy and science, developmentalism defines human development as a progression of increasing competence and utilizes compatible theories of developmental psychology, cognitive psychology, educational psychology, social psychology, curriculum development, neurology, psychophysics, and physics. To validate a stochastic model of human interactions, the study addressed four research questions: (a) Does attitude vary over time? (b) What are the distributional assumptions underlying attitudes? (c) Does the stochastic model, $N\int_{-\infty}^{\infty}\varphi(\chi,\tau)\,\Psi(\tau)\,d\tau$, have utility for the study of attitudinal distributions and dynamics? (d) Are the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein theories applicable to human groups? Approximately 25,000 attitude observations were made using the Semantic Differential Scale. Positions of individuals varied over time, and the logistic model predicted observed distributions with correlations between 0.98 and 1.0, with estimated standard errors significantly less than the magnitudes of the parameters. The results bring into question the applicability of Fisherian research designs (Fisher, 1922, 1928, 1938) for behavioral research, based on the apparent failure of two fundamental assumptions: the noninteractive nature of the objects being studied and the normal distribution of attributes. The findings indicate that individual belief structures are representable in terms of a psychological space which has the same or similar properties as physical space. The psychological space not only has dimension, but individuals interact by force equations similar to those described in theoretical physics models. Nonlinear regression techniques were used to estimate Fermi-Dirac parameters from the data. The model explained a high degree of the variance in each probability distribution. The correlation between predicted and observed probabilities ranged from a low of 0.955 to a high value of 0.998, indicating that humans behave in psychological space as Fermions behave in momentum space.

  4. Quasidegenerate neutrinos in type II seesaw models

    NASA Astrophysics Data System (ADS)

    Das, Mrinal Kumar; Borah, Debasish; Mishra, Rinku

    2012-11-01

    We present an analysis of normal and inverted hierarchical neutrino mass models within the framework of tri-bimaximal mixing. Considering the neutrinos to be quasidegenerate (QDN), we study two different neutrino mass models with mass eigenvalues (m1,-m2,m3) and (m1,m2,m3) for both normal hierarchical and inverted hierarchical cases. Parameterizing the neutrino mass matrix using best-fit oscillation and cosmology data for a QDN scenario, we find the right-handed Majorana mass matrix using the type I seesaw formula for two types of Dirac neutrino mass matrices: charged lepton type and up quark type. Incorporating the presence of the type II seesaw term which arises naturally in generic left-right symmetric models along with the type I term, we compare the predictions for neutrino mass parameters with the experimental values. Within such a framework and incorporating both oscillation as well as cosmology data, we show that a QDN scenario of neutrino masses can still survive in nature with some minor exceptions. A viable extension of the standard model with an Abelian-gauged flavor symmetry is briefly discussed which can give rise to the desired structure of the Dirac and Majorana mass matrices.
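    For orientation, the combined type I plus type II seesaw relation referred to above is m_nu = m_II - m_D M_R^{-1} m_D^T; the sketch below evaluates it numerically for illustrative (hypothetical) mass matrices, not the charged-lepton- or up-quark-type textures analysed in the paper.

```python
import numpy as np

# Illustrative (hypothetical) input matrices in eV, not the textures used in the paper.
mD = np.diag([0.05, 1.0, 10.0]) * 1e9          # Dirac mass matrix (eV)
MR = np.diag([1e20, 5e20, 1e21])               # heavy right-handed Majorana masses (eV)
mII = 0.02 * np.ones((3, 3))                   # type II (triplet-induced) contribution (eV)

# Combined type I + type II seesaw: m_nu = m_II - m_D M_R^{-1} m_D^T
m_nu = mII - mD @ np.linalg.inv(MR) @ mD.T
eigenvalues = np.linalg.eigvalsh((m_nu + m_nu.T) / 2.0)
print("light neutrino mass eigenvalues (eV):", np.round(eigenvalues, 4))
```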

  5. Distributed hydrological modelling of the Senegal River Basin — model construction and validation

    NASA Astrophysics Data System (ADS)

    Andersen, Jens; Refsgaard, Jens C.; Jensen, Karsten H.

    2001-07-01

    A modified version of the physically-based distributed MIKE SHE model code was applied to the 375,000 km2 Senegal River Basin. On the basis of conventional data from meteorological stations and readily accessible databases on topography, soil types, vegetation type, etc., three models with different levels of calibration were constructed and rigorous validation tests conducted. Calibration against one station and internal validation against eight additional stations revealed significant shortcomings for some of the upstream tributaries, particularly in the semi-arid zone of the river basin. Further calibration against additional discharge stations improved the performance levels of the validation for the different subcatchments. Although there may be good reasons to believe that the model, operating on a model grid of 4×4 km2, to a large extent reflects field conditions at a scale smaller than the subcatchment scale, this could not be validated due to lack of spatial data.

  6. Systematic validation of disease models for pharmacoeconomic evaluations. Swiss HIV Cohort Study.

    PubMed

    Sendi, P P; Craig, B A; Pfluger, D; Gafni, A; Bucher, H C

    1999-08-01

    Pharmacoeconomic evaluations are often based on computer models which simulate the course of disease with and without medical interventions. The purpose of this study is to propose and illustrate a rigorous approach for validating such disease models. For illustrative purposes, we applied this approach to a computer-based model we developed to mimic the history of HIV-infected subjects at the greatest risk for Mycobacterium avium complex (MAC) infection in Switzerland. The drugs included as a prophylactic intervention against MAC infection were azithromycin and clarithromycin. We used a homogeneous Markov chain to describe the progression of an HIV-infected patient through six MAC-free states, one MAC state, and death. Probability estimates were extracted from the Swiss HIV Cohort Study database (1993-95) and randomized controlled trials. The model was validated by testing for (1) technical validity, (2) predictive validity, (3) face validity, and (4) modelling process validity. Sensitivity analysis and independent model implementation in DATA (PPS) and self-written Fortran 90 code (BAC) assured technical validity. Agreement between modelled and observed MAC incidence confirmed predictive validity. Modelled MAC prophylaxis at different starting conditions affirmed face validity. Published articles by other authors supported modelling process validity. The proposed validation procedure is a useful approach to improve the validity of the model. PMID:10461580
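
    The structure of such a model can be sketched as a discrete-time Markov chain, as below; the transition probabilities are illustrative placeholders only, not the values estimated from the Swiss HIV Cohort Study.

        import numpy as np

        # States 0-5: MAC-free states, state 6: MAC infection, state 7: death.
        # Rows are "from" states, columns are "to" states; each row sums to 1.
        # All probabilities below are illustrative placeholders.
        n = 8
        P = np.zeros((n, n))
        for i in range(6):
            P[i, i] = 0.85                 # remain in the same MAC-free state
            P[i, min(i + 1, 5)] += 0.08    # progress to the next MAC-free state
            P[i, 6] = 0.05                 # develop MAC infection
            P[i, 7] = 0.02                 # die
        P[6] = [0, 0, 0, 0, 0, 0, 0.80, 0.20]   # MAC state: remain or die
        P[7, 7] = 1.0                           # death is absorbing

        state = np.zeros(n)
        state[0] = 1.0                  # cohort starts in the first MAC-free state
        for cycle in range(24):         # e.g. 24 monthly cycles
            state = state @ P
        print("P(in MAC state) after 24 cycles:", round(state[6], 3))
        print("P(death) after 24 cycles:", round(state[7], 3))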

  7. Metrological validation for 3D modeling of dental plaster casts.

    PubMed

    Brusco, Nicola; Andreetto, Marco; Lucchese, Luca; Carmignato, Simone; Cortelazzo, Guido M

    2007-11-01

    The contribution of this paper is twofold: (1) it presents an automatic 3D modeling technique and (2) it advances a procedure for its metrological evaluation in the context of a medical application, the 3D modeling of dental plaster casts. The motivation for this work is the creation of a "virtual gypsotheque" where cumbersome dental plaster casts can be replaced by numerical 3D models, thereby alleviating storage and access problems and allowing dentists and orthodontists the use of novel and unprecedented software tools for their medical evaluations. Modeling free-form surfaces of anatomical interest is an intriguing mixture of open issues concerning 3D modeling, geometrical metrology, and medicine. Of general interest is both the fact that a widespread use of 3D modeling in non-engineering applications requires automatic procedures of the kind presented in this work and the adopted validation paradigm for free-form surfaces, rather useful for practical purposes. In this latter respect, the metrological analysis we advance is the first seminal attempt in the field of 3D modeling and can be readily extended to contexts other than the medical one discussed in this paper. PMID:17126062

  8. Long-term ELBARA-II Assistance to SMOS Land Product and Algorithm Validation at the Valencia Anchor Station (MELBEX Experiment 2010-2013)

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto; Wigneron, Jean-Pierre; Schwank, Mike; Miernecki, Maciej; Kerr, Yann; Casal, Tania; Delwart, Steven; Fernandez-Moran, Roberto; Mecklenburg, Susanne; Coll Pajaron, M. Amparo; Salgado Hernanz, Paula

    The main activity of the Valencia Anchor Station (VAS) is currently to support the validation of SMOS (Soil Moisture and Ocean Salinity) Level 2 and 3 land products (soil moisture, SM, and vegetation optical depth, TAU). With this aim, the European Space Agency (ESA) has provided the Climatology from Satellites Group of the University of Valencia with an ELBARA-II microwave radiometer under a loan agreement since September 2009. During this time, brightness temperatures (TB) have continuously been acquired, except during normal maintenance or minor repair interruptions. ELBARA-II is an L-band dual-polarization radiometer with two channels (1400-1418 MHz, 1409-1427 MHz). It continuously measures over a vineyard field (El Renegado, Caudete de las Fuentes, Valencia) from a 15 m platform, following a constant protocol for calibration and angular scanning measurements, with the aim of assisting the validation of SMOS land products and the calibration of the L-MEB (L-Band Emission of the Biosphere) model, the basis for the SMOS Level 2 Land Processor, over the VAS validation site. One of the advantages of using the VAS site is the possibility of studying two different environmental conditions over the course of the year. While the vine cycle extends mainly between April and October, during the rest of the year the area remains under bare soil conditions, adequate for the calibration of the soil model. The measurement protocol currently running has proved robust during the whole operation time and will be extended in time as much as possible to continue providing a long-term data set of ELBARA-II TB measurements and retrieved SM and TAU. This data set is also proving useful in support of SMOS scientific activities: the VAS area and, specifically, the ELBARA-II site offer good conditions to monitor the long-term evolution of SMOS Level 2 and Level 3 land products and to interpret possible anomalies that may point to hidden sensor biases. In addition, SM and TAU, which are currently retrieved from the ELBARA-II TB data by inversion of the L-MEB model, can also be compared to the Level 2 and Level 3 SMOS products. L-band ELBARA-II measurements provide area-integrated estimations of SM and TAU that are much more representative of the soil and vegetation conditions at field scale than ground measurements (from capacitive probes for SM and destructive measurements for TAU). For instance, Miernecki et al. (2012) and Wigneron et al. (2012) showed that very good correlations could be obtained between TB data and SM retrievals obtained from both SMOS and ELBARA-II over the 2010-2011 time period. The analysis of the quality of these correlations over a long time period can be very useful to evaluate the SMOS measurements and retrieved products (Level 2 and 3). The present work, which extends the analysis over almost 4 years (2010-2013), emphasizes the need to (i) maintain the long-term record of ELBARA-II measurements and (ii) enhance as much as possible the control over other parameters, especially soil roughness (SR), vegetation water content (VWC) and surface temperature, to interpret the retrieved results obtained from both SMOS and ELBARA-II instruments.

  9. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, the model selection is usually based on some measure of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has widely been adopted in Chemometrics and Econometrics) in RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
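
    The contrast between LOO and MCCV can be illustrated with the short sketch below, which uses ordinary least squares on synthetic data rather than GLSR on flood statistics; the variable names and split settings are illustrative assumptions.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 3))     # e.g. log catchment area, slope, rainfall
        y = X @ np.array([1.2, -0.5, 0.3]) + rng.normal(scale=0.5, size=60)
        model = LinearRegression()

        # Leave-one-out: one fit per site, each leaving out a single observation.
        loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                                     scoring="neg_mean_squared_error")

        # Monte Carlo cross validation: many repeated random 70/30 splits.
        mccv = ShuffleSplit(n_splits=200, test_size=0.3, random_state=0)
        mccv_scores = cross_val_score(model, X, y, cv=mccv,
                                      scoring="neg_mean_squared_error")

        print("LOO  RMSE:", np.sqrt(-loo_scores.mean()))
        print("MCCV RMSE:", np.sqrt(-mccv_scores.mean()))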

  10. Hydrodynamical Models of Type II Plateau Supernovae

    NASA Astrophysics Data System (ADS)

    Bersten, Melina C.; Benvenuto, Omar; Hamuy, Mario

    2011-03-01

    We present bolometric light curves of Type II plateau supernovae obtained using a newly developed, one-dimensional Lagrangian hydrodynamic code with flux-limited radiation diffusion. Using our code we calculate the bolometric light curve and photospheric velocities of SN 1999em, obtaining remarkably good agreement with observations despite the simplifications used in our calculation. The physical parameters used in our calculation are E = 1.25 foe, M = 19 M_sun, R = 800 R_sun, and M_Ni = 0.056 M_sun. We find that extensive mixing of 56Ni is needed in order to reproduce a plateau as flat as that shown by the observations. We also study the possibility of fitting the observations with lower values of the initial mass, consistent with upper limits that have been inferred from pre-supernova imaging of SN 1999em in connection with stellar evolution models. We cannot find a set of physical parameters that reproduces the observations well for models with a pre-supernova mass of <=12 M_sun, although models with 14 M_sun cannot be fully discarded.

  11. ADMS-AIRPORT: MODEL INTER-COMPARISONS AND MODEL VALIDATION

    Microsoft Academic Search

    David Carruthers; Christine McHugh; Stephanie Church; Mark Jackson; Matt Williams; Chetan Lad

    The functionality of ADMS-Airport and details of its use in the Model Inter-comparison Study of the Project for the Sustainable Development of Heathrow Airport (PSDH) have previously been presented (Carruthers et al., 2007). A distinguishing feature is the treatment of jet engine emissions as moving jet sources rather than averaging these emissions into volume sources as is the case in

  12. Ultrasonic transducers for cure monitoring: design, modelling and validation

    NASA Astrophysics Data System (ADS)

    Lionetto, Francesca; Montagna, Francesco; Maffezzoli, Alfonso

    2011-12-01

    The finite element method (FEM) has been applied to simulate the ultrasonic wave propagation in a multilayered transducer, expressly designed for high-frequency dynamic mechanical analysis of polymers. The FEM model includes an electro-acoustic (active element) and some acoustic (passive elements) transmission lines. The simulation of the acoustic propagation accounts for the interaction between the piezoceramic and the materials in the buffer rod and backing, and the coupling between the electric and mechanical properties of the piezoelectric material. As a result of the simulations, the geometry and size of the modelled ultrasonic transducer have been optimized and used for the realization of a prototype transducer for cure monitoring. The transducer performance has been validated by measuring the velocity changes during the polymerization of a thermosetting matrix of composite materials.

  13. Validation of a neural network model using cross application approaches

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet; Lee, Saro; Hyun-Joo, Oh; Buchroithner, Manfred F.

    2010-05-01

    This paper discusses an important component of landslide susceptibility mapping using a back-propagation based artificial neural network model and the cross application of its weights at three study areas in Malaysia, using a Geographic Information System (GIS). Landslide locations were identified in the study areas from the interpretation of aerial photographs, field surveys and inventory reports. A landslide-related spatial database was constructed from topographic, soil, geology and land cover maps. The paper further examines the factors affecting landslides for assessing landslide susceptibility mapping and reviews tools for quantifying the likelihood of occurrence of the scenarios. Different training sites were selected randomly to train the neural network and nine sets of landslide susceptibility maps were prepared. The paper then illustrates the validation of those maps using the Area Under Curve (AUC) method.
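
    The AUC check used for such maps amounts to asking how well the susceptibility scores separate mapped landslide cells from landslide-free cells, as in the brief sketch below (synthetic scores and scikit-learn in place of a GIS package; everything here is illustrative).

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        # 1 = cell contains a mapped landslide, 0 = landslide-free cell.
        landslide = rng.integers(0, 2, size=500)
        # Susceptibility score predicted for each cell; simulated here so that
        # landslide cells tend to receive higher scores.
        susceptibility = 0.6 * landslide + 0.4 * rng.random(500)

        auc = roc_auc_score(landslide, susceptibility)
        print(f"AUC = {auc:.3f}")   # 0.5 = no skill, 1.0 = perfect discrimination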

  14. Derivation and empirical validation of a refined traffic flow model

    NASA Astrophysics Data System (ADS)

    Helbing, Dirk

    1996-02-01

    The gas-kinetic foundation of fluid-dynamic traffic equations suggested in previous papers (D. Helbing, Physica A 219 (1995) 375 and 391) is further refined by applying the theory of dense gases and granular materials to the Boltzmann-like traffic model by Paveri-Fontana. It is shown that, despite the phenomenologically similar behaviour of ordinary and granular fluids, the relations for these cannot directly be transferred to vehicular traffic. The dissipative and anisotropic interactions of vehicles as well as their velocity-dependent space requirements lead to a considerably different structure of the macroscopic traffic equations, also in comparison with the previously suggested traffic flow models. As a consequence, the instability mechanisms of emergent density waves are different. Crucial assumptions are validated by empirical traffic data and essential results are illustrated by figures.

  15. Multicomponent aerosol dynamics model UHMA: model development and validation

    NASA Astrophysics Data System (ADS)

    Korhonen, H.; Lehtinen, K. E. J.; Kulmala, M.

    2004-05-01

    A size-segregated aerosol dynamics model UHMA (University of Helsinki Multicomponent Aerosol model) was developed for studies of multicomponent tropospheric aerosol particles. The model includes major aerosol microphysical processes in the atmosphere with a focus on new particle formation and growth; thus it incorporates particle coagulation and multicomponent condensation, applying a revised treatment of condensation flux onto free molecular regime particles and the activation of nanosized clusters by organic vapours (Nano-Köhler theory), as well as recent parameterizations for binary H2SO4-H2O and ternary H2SO4-NH3-H2O homogeneous nucleation and dry deposition. The representation of particle size distribution can be chosen from three sectional methods: the hybrid method, the moving center method, and the retracking method in which moving sections are retracked to a fixed grid after a certain time interval. All these methods can treat particle emissions and atmospheric transport consistently, and are therefore suitable for use in large scale atmospheric models. In a test simulation against an accurate high resolution solution, all the methods showed reasonable treatment of new particle formation with 20 size sections although the hybrid and the retracking methods suffered from artificial widening of the distribution. The moving center approach, on the other hand, showed extra dents in the particle size distribution and failed to predict the onset of detectable particle formation. In a separate test simulation of an observed nucleation event, the model captured the key qualitative behaviour of the system well. Furthermore, its prediction of the organic volume fraction in newly formed particles, suggesting values as high as 0.5 for 3-4 nm particles and approximately 0.8 for 10 nm particles, agrees with recent indirect composition measurements.

  16. Multicomponent aerosol dynamics model UHMA: model development and validation

    NASA Astrophysics Data System (ADS)

    Korhonen, H.; Lehtinen, K. E. J.; Kulmala, M.

    2004-01-01

    A size-segregated aerosol dynamics model UHMA (University of Helsinki Multicomponent Aerosol model) was developed for studies of multicomponent tropospheric aerosol particles. The model includes major aerosol microphysical processes in the atmosphere with a focus on new particle formation and growth; thus it incorporates particle coagulation and multicomponent condensation, applying a revised treatment of condensation flux onto free molecular regime particles and the activation of nanosized clusters by organic vapours (Nano-Köhler theory), as well as recent parameterizations for binary H2SO4-H2O and ternary H2SO4-NH3-H2O homogeneous nucleation and dry deposition. The representation of particle size distribution can be chosen from three sectional methods: the hybrid method, the moving center method, and the retracking method in which moving sections are retracked to a fixed grid after a certain time interval. All these methods can treat particle emissions and transport consistently, and are therefore suitable for use in large scale atmospheric models. In a test simulation against an accurate high resolution solution, all the methods showed reasonable treatment of new particle formation with 20 size sections although the hybrid and the retracking methods suffered from artificial widening of the distribution. The moving center approach, on the other hand, showed extra dents in the particle size distribution and failed to predict the onset of detectable particle formation. In a separate test simulation of an observed nucleation event, the model captured the key qualitative behaviour of the system well. Furthermore, its prediction of the organic volume fraction in newly formed particles, suggesting values as high as 0.5 for 3-4 nm particles and approximately 0.8 for 10 nm particles, agrees with recent indirect composition measurements.

  17. Ptolemy II: Heterogeneous Concurrent Modeling And Design In Java

    Microsoft Academic Search

    Christopher Hylands; Edward A. Lee; Jie Liu; Xiaojun Liu; Steve Neuendorffer; Yuhong Xiong

    2001-01-01

    This document describes the design and implementation of Ptolemy II 2.0.1. Ptolemy II is a set of Java packages supporting heterogeneous, concurrent modeling and design. The focus is on assembly of concurrent components. The key underlying principle in Ptolemy II is the use of well-defined models of computation that govern the interaction between components. A major problem area that

  18. Assessing uncertainty in pollutant wash-off modelling via model validation.

    PubMed

    Haddad, Khaled; Egodawatta, Prasanna; Rahman, Ataur; Goonetilleke, Ashantha

    2014-11-01

    Stormwater pollution is linked to stream ecosystem degradation. In predicting stormwater pollution, various types of modelling techniques are adopted. The accuracy of predictions provided by these models depends on the data quality, appropriate estimation of model parameters, and the validation undertaken. It is well understood that available water quality datasets in urban areas span only relatively short time scales unlike water quantity data, which limits the applicability of the developed models in engineering and ecological assessment of urban waterways. This paper presents the application of leave-one-out (LOO) and Monte Carlo cross validation (MCCV) procedures in a Monte Carlo framework for the validation and estimation of uncertainty associated with pollutant wash-off when models are developed using a limited dataset. It was found that the application of MCCV is likely to result in a more realistic measure of model coefficients than LOO. Most importantly, MCCV and LOO were found to be effective in model validation when dealing with a small sample size which hinders detailed model validation and can undermine the effectiveness of stormwater quality management strategies. PMID:25169872

  19. An Analysis of Structural Validity of Ternary Relationships in Entity Relationship Modeling

    E-print Network

    Song, Il-Yeol

    An Analysis of Structural Validity of Ternary Relationships in Entity Relationship Modeling ... cardinality constraints yielding a more complete analysis of the structural validity of ternary relationships. This research explores the criteria that contribute to the validity of modeling structures within the entity relationship model.

  20. Development and validation of a liquid composite molding model

    NASA Astrophysics Data System (ADS)

    Bayldon, John Michael

    2007-12-01

    In composite manufacturing, Vacuum Assisted Resin Transfer Molding (VARTM) is becoming increasingly important as a cost effective manufacturing method of structural composites. In this process the dry preform (reinforcement) is placed on a rigid tool and covered by a flexible film to form an airtight vacuum bag. Liquid resin is drawn under vacuum through the preform inside the vacuum bag. Modeling of this process relies on a good understanding of closely coupled phenomena. The resin flow depends on the preform permeability, which in turn depends on the local fluid pressure and the preform compaction behavior. VARTM models for predicting the flow rate in this process do exist, however, they are not able to properly predict the flow for all classes of reinforcement material. In this thesis, the continuity equation used in VARTM models is reexamined and a modified form proposed. In addition, the compaction behavior of the preform in both saturated and dry states is studied in detail and new models are proposed for the compaction behavior. To assess the validity of the proposed models, the shadow moire method was adapted and used to perform full field measurement of the preform thickness during infusion, in addition to the usual measurements of flow front position. A new method was developed and evaluated for the analysis of the moire data related to the VARTM process, however, the method has wider applicability to other full field thickness measurements. The use of this measurement method demonstrated that although the new compaction models work well in the characterization tests, they do not properly describe all the preform features required for modeling the process. In particular the effect of varying saturation on the preform's behavior requires additional study. The flow models developed did, however, improve the prediction of the flow rate for the more compliant preform material tested, and the experimental techniques have shown where additional test methods will be required to further improve the models.

  1. A CARTILAGE GROWTH MIXTURE MODEL WITH COLLAGEN REMODELING: VALIDATION PROTOCOLS

    PubMed Central

    Klisch, Stephen M.; Asanbaeva, Anna; Oungoulian, Sevan R.; Masuda, Koichi; Thonar, Eugene J-MA; Davol, Andrew; Sah, Robert L.

    2009-01-01

    A cartilage growth mixture (CGM) model is proposed to address limitations of a model used in a previous study. New stress constitutive equations for the solid matrix are derived and collagen (COL) remodeling is incorporated into the CGM model by allowing the intrinsic COL material constants to evolve during growth. An analytical validation protocol based on experimental data from a recent in vitro growth study is developed. Available data included measurements of tissue volume, biochemical composition, and tensile modulus for bovine calf articular cartilage (AC) explants harvested at three depths and incubated for 13 days in 20% FBS and 20% FBS + β-aminopropionitrile. The proposed CGM model can match tissue biochemical content and volume exactly while predicting theoretical values of tensile moduli that do not significantly differ from experimental values. Also, theoretical values of a scalar COL remodeling factor are positively correlated with COL crosslink content, and mass growth functions are positively correlated with cell density. The results suggest that the CGM model may help to guide in vitro growth protocols for AC tissue via the a priori prediction of geometric and biomechanical properties. PMID:18532855

  2. Validation of a Global Hydrodynamic Flood Inundation Model

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Smith, A.; Sampson, C. C.; Alfieri, L.; Neal, J. C.

    2014-12-01

    In this work we present first validation results for a hyper-resolution global flood inundation model. We use a true hydrodynamic model (LISFLOOD-FP) to simulate flood inundation at 1km resolution globally and then use downscaling algorithms to determine flood extent and depth at 90m spatial resolution. Terrain data are taken from a custom version of the SRTM data set that has been processed specifically for hydrodynamic modelling. Return periods of flood flows along the entire global river network are determined using: (1) empirical relationships between catchment characteristics and index flood magnitude in different hydroclimatic zones derived from global runoff data; and (2) an index flood growth curve, also empirically derived. Bankful return period flow is then used to set channel width and depth, and flood defence impacts are modelled using empirical relationships between GDP, urbanization and defence standard of protection. The results of these simulations are global flood hazard maps for a number of different return period events from 1 in 5 to 1 in 1000 years. We compare these predictions to flood hazard maps developed by national government agencies in the UK and Germany using similar methods but employing detailed local data, and to observed flood extent at a number of sites including St. Louis, USA and Bangkok in Thailand. Results show that global flood hazard models can have considerable skill given careful treatment to overcome errors in the publicly available data that are used as their input.

  3. Canyon building ventilation system dynamic model -- Parameters and validation

    SciTech Connect

    Moncrief, B.R. (Westinghouse Savannah River Co., Aiken, SC (United States)); Chen, F.F.K. (Bechtel National, Inc., San Francisco, CA (United States))

    1993-01-01

    Plant system simulation crosses many disciplines. At the core is the mimicking of key components in the form of "mathematical models." These component models are functionally integrated to represent the plant. With today's low-cost, high-capacity computers, the whole plant can be truly and effectively reproduced in a computer model. Dynamic simulation has its roots in "single loop" design, which is still a common objective in the employment of simulation. The other common objectives are the ability to preview plant operation, to anticipate problem areas, and to test the impact of design options. As plant system complexity increases and our ability to simulate the entire plant grows, the objective to optimize plant system design becomes practical. This shift in objectives from problem avoidance to total optimization by far offers the most rewarding potential. Even a small reduction in bulk materials and space can sufficiently justify the application of this technology. Furthermore, realizing an optimal plant starts from a tight and disciplined design. We believe the assurance required to execute such a design strategy can partly be derived from a plant model. This paper reports on the application of a dynamic model to evaluate the capacity of an existing production plant ventilation system. This study met the practical objectives of capacity evaluation under present and future conditions, and under normal and accidental situations. More importantly, the description of this application, in its methods and its utility, aims to validate the technology of dynamic simulation in the environment of plant system design and safe operation.

  4. Independent validation of six melanoma risk prediction models.

    PubMed

    Olsen, Catherine M; Neale, Rachel E; Green, Adèle C; Webb, Penelope M; Whiteman, David C

    2015-05-01

    Identifying people at high risk of melanoma is important for targeted prevention activities and surveillance. Several tools have been developed to classify melanoma risk, but few have been independently validated. We assessed the discriminatory performance of six melanoma prediction tools by applying them to individuals from two independent data sets, one comprising 762 melanoma cases and the second a population-based sample of 42,116 people without melanoma. We compared the model predictions with actual melanoma status to measure sensitivity and specificity. The performance of the models was variable with sensitivity ranging from 97.7 to 10.5% and specificity from 99.6 to 1.3%. The ability of all the models to discriminate between cases and controls, however, was generally high. The model developed by MacKie et al. (1989) had higher sensitivity and specificity for men (0.89 and 0.88) than women (0.79 and 0.72). The tool developed by Cho et al. (2005) was highly specific (men, 0.92; women, 0.99) but considerably less sensitive (men, 0.64; women, 0.37). Other models were either highly specific but lacked sensitivity or had low to very low specificity and higher sensitivity. Poor performance was partly attributable to the use of non-standardized assessment items and various differing interpretations of what constitutes "high risk". PMID:25548858

  5. Traveling with Cognitive Tests: Testing the Validity of a KABC-II Adaptation in India

    ERIC Educational Resources Information Center

    Malda, Maike; van de Vijver, Fons J. R.; Srinivasan, Krishnamachari; Transler, Catherine; Sukumar, Prathima

    2010-01-01

    The authors evaluated the adequacy of an extensive adaptation of the American Kaufman Assessment Battery for Children, second edition (KABC-II), for 6- to 10-year-old Kannada-speaking children of low socioeconomic status in Bangalore, South India. The adapted KABC-II was administered to 598 children. Subtests showed high reliabilities, the…

  6. Validating a widely used measure of frailty: are all sub-components necessary? Evidence from the Whitehall II cohort study.

    PubMed

    Bouillon, Kim; Sabia, Severine; Jokela, Markus; Gale, Catharine R; Singh-Manoux, Archana; Shipley, Martin J; Kivimäki, Mika; Batty, G David

    2013-08-01

    There is growing interest in the measurement of frailty in older age. The most widely used measure (Fried) characterizes this syndrome using five components: exhaustion, physical activity, walking speed, grip strength, and weight loss. These components overlap, raising the possibility of using fewer of them and therefore making the measure more time- and cost-efficient. The analytic sample was 5,169 individuals (1,419 women) from the British Whitehall II cohort study, aged 55 to 79 years in 2007-2009. Hospitalization data were accessed through English national records (mean follow-up 15.2 months). Age- and sex-adjusted Cox models showed that all components were significantly associated with hospitalization, the hazard ratios (HR) ranging from 1.18 (95% confidence interval = 0.98, 1.41) for grip strength to 1.60 (1.35, 1.90) for usual walking speed. Some attenuation of these effects was apparent following mutual adjustment for frailty components, but the rank order of the strength of association remained unchanged. We observed a dose-response relationship between the number of frailty components and the risk for hospitalization [1 component: HR = 1.10 (0.96, 1.26); 2 components: HR = 1.52 (1.26, 1.83); 3-5 components: HR = 2.41 (1.84, 3.16); P trend < 0.0001]. A concordance index used to evaluate the predictive power for hospital admissions of individual components and the full scale was modest in magnitude (range 0.57 to 0.58). Our results support the validity of the multi-component frailty measure, but the predictive performance of the measure is poor. PMID:22772579

  7. Using remote sensing for validation of a large scale hydrologic and hydrodynamic model in the Amazon

    NASA Astrophysics Data System (ADS)

    Paiva, R. C.; Bonnet, M.; Buarque, D. C.; Collischonn, W.; Frappart, F.; Mendes, C. B.

    2011-12-01

    We present the validation of the large-scale, catchment-based hydrological MGB-IPH model in the Amazon River basin. In this model, physically-based equations are used to simulate the hydrological processes, such as the Penman-Monteith method to estimate evapotranspiration, or the Moore and Clarke infiltration model. A new feature recently introduced in the model is a 1D hydrodynamic module for river routing. It uses the full Saint-Venant equations and a simple floodplain storage model. River and floodplain geometry parameters are extracted from the SRTM DEM using specially developed GIS algorithms that provide catchment discretization, estimation of river cross-section geometry and water storage volume variations in the floodplains. The model was forced using satellite-derived daily rainfall TRMM 3B42, calibrated against discharge data and first validated using daily discharges and water levels from 111 and 69 stream gauges, respectively. Then, we performed a validation against remote sensing derived hydrological products, including (i) monthly Terrestrial Water Storage (TWS) anomalies derived from GRACE, (ii) river water levels derived from ENVISAT satellite altimetry data (212 virtual stations from Santos da Silva et al., 2010) and (iii) a multi-satellite monthly global inundation extent dataset at ~25 x 25 km spatial resolution (Papa et al., 2010). Validation against river discharges shows good performance of the MGB-IPH model. For 70% of the stream gauges, the Nash-Sutcliffe efficiency index (ENS) is higher than 0.6, and at Óbidos, close to the Amazon River outlet, ENS equals 0.9 and the model bias equals -4.6%. The largest errors are located in drainage areas outside Brazil, and we speculate that this is due to the poor quality of rainfall datasets in these poorly monitored and/or mountainous areas. Validation against water levels shows that the model performs well in the major tributaries. For 60% of the virtual stations, ENS is higher than 0.6. However, the largest errors are again located in drainage areas outside Brazil, mostly the Japurá River, and in the lower Amazon River. In the latter, correlation with observations is high but the model underestimates the amplitude of water levels. We also found a large bias between model and ENVISAT water levels, ranging from -3 to -15 m. The model provided TWS in good accordance with GRACE estimates. The ENS value for TWS over the whole Amazon equals 0.93. We also analyzed results in 21 sub-regions of 4 x 4°. ENS is smaller than 0.8 in only 5 areas, and these are found mostly in the northwest part of the Amazon, possibly due to the same errors reported in the discharge results. Flood extent validation is under development, but a previous analysis in the Brazilian part of the Solimões River basin suggests good model performance. The authors are grateful for the financial and operational support from the Brazilian agencies FINEP, CNPq and ANA and from the French observatories HYBAM and SOERE RBV.
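
    For reference, the Nash-Sutcliffe efficiency (ENS) quoted throughout the abstract can be computed from paired observed and simulated series as in the short sketch below (synthetic discharge values, used only to show the formula).

        import numpy as np

        def nash_sutcliffe(observed, simulated):
            """ENS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            return 1.0 - np.sum((observed - simulated) ** 2) / \
                         np.sum((observed - observed.mean()) ** 2)

        obs = np.array([110.0, 150.0, 300.0, 240.0, 180.0])   # observed discharge (m3/s)
        sim = np.array([120.0, 140.0, 280.0, 250.0, 170.0])   # simulated discharge (m3/s)
        print("ENS =", round(nash_sutcliffe(obs, sim), 3))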

  8. Development and validation of a realistic head model for EEG

    NASA Astrophysics Data System (ADS)

    Bangera, Nitin Bhalchandra

    The utility of extracranial electrical or magnetic field recordings (EEG or MEG) is greatly enhanced if the generators of the bioelectromagnetic fields can be determined accurately from the measured fields. This procedure, known as the 'inverse method,' depends critically on calculations of the projection from generators in the brain to the EEG and MEG sensors. Improving and validating this calculation, known as the 'forward solution,' is the focus of this dissertation. The improvements involve more accurate modeling of the structures of the brain and thus understanding how current flows within the brain as a result of addition of structures in the forward model. Validation compares calculations using different forward models to the experimental results obtained by stimulating with implanted dipole electrodes. The human brain tissue displays inhomogeneity in electrical conductivity and also displays anisotropy, notably in the skull and brain white matter. In this dissertation, a realistic head model has been implemented using the finite element method to calculate the effects of inhomogeneity and anisotropy in the human brain. Accurate segmentation of the brain tissue type is implemented using a semi-automatic method to segment multimodal imaging data from multi-spectral MRI scans (different flip angles) in conjunction with the regular T1-weighted scans and computed x-ray tomography images. The electrical conductivity in the anisotropic white matter tissue is quantified from diffusion tensor MRI. The finite element model is constructed using AMIRA, a commercial segmentation and visualization tool and solved using ABAQUS, a commercial finite element solver. The model is validated using experimental data collected from intracranial stimulation in medically intractable epileptic patients. Depth electrodes are implanted in medically intractable epileptic patients in order to direct surgical therapy when the foci cannot be localized with the scalp EEG. These patients present the unique opportunity to generate sources at known positions in the human brain using the depth electrodes. Known dipolar sources were created inside the human brain at known locations by injecting a weak biphasic current (sub-threshold) between alternate contacts on the depth electrode. The corresponding bioelectric fields (intracranial and scalp EEG) were recorded in patients during the injection of biphasic pulses. The in vivo depth stimulation data provides a direct test of the performance of the forward model. The factors affecting the accuracy of the intracranial measurements are quantified in a precise manner by studying the effects of including different tissue types and anisotropy. The results show that white matter anisotropy is crucial for predicting the electric fields in a precise manner for intracranial locations, thereby affecting the source reconstructions. Accurate modeling of the skull is necessary for predicting accurately the scalp measurements. In sum, with the aid of high-resolution finite element realistic head models it is possible to accurately predict electric fields generated by current sources in the brain and thus in a precise way, understand the relationship between electromagnetic measure and neuronal activity at the voxel-scale.

  9. Geophysical Monitoring for Validation of Transient Permafrost Models (Invited)

    NASA Astrophysics Data System (ADS)

    Hauck, C.; Hilbich, C.; Marmy, A.; Scherler, M.

    2013-12-01

    Permafrost is a widespread phenomenon at high latitudes and high altitudes and describes the permanently frozen state of the subsurface in lithospheric material. In the context of climate change, both new monitoring and modelling techniques are required to observe and predict potential permafrost changes, e.g. the warming and degradation which may lead to the liberation of carbon (Arctic) and the destabilisation of permafrost slopes (mountains). Mountain permafrost occurrences in the European Alps are characterised by temperatures only a few degrees below zero and are therefore particularly sensitive to projected climate changes in the 21st century. Traditional permafrost observation techniques are mainly based on thermal monitoring in the vertical and horizontal dimensions, but they provide only weak indications of physical properties such as ice or liquid water content. Geophysical techniques can be used to characterise permafrost occurrences and to monitor their changes, as the physical properties of frozen and unfrozen ground measured by geophysical techniques are markedly different. In recent years, electromagnetic, seismic and especially electrical methods have been used to continuously monitor permafrost occurrences and to detect long-term changes within the active layer and regarding the ice content within the permafrost layer. On the other hand, coupled transient thermal/hydraulic models are used to predict the evolution of permafrost occurrences under different climate change scenarios. These models rely on suitable validation data for a certain observation period, which is usually restricted to data sets of ground temperature and active layer depth. Very important initialisation and validation data for permafrost models are, however, ground ice content and unfrozen water content in the active layer. In this contribution we will present a geophysical monitoring application to estimate ice and water content and their evolution in time at a permafrost station in the Swiss Alps. These data are then used to validate the coupled mass and energy balance soil model COUP, which is used for long-term projections of the permafrost evolution in the Swiss Alps. For this, we apply the recently developed 4-phase model, which is based on simple petrophysical relationships and which uses geoelectric and seismic tomographic data sets as input data. In addition, we use continuously measured electrical resistivity tomography data sets and soil moisture data in daily resolution to compare modelled ice content changes and geophysical observations in high temporal resolution. The results still show large uncertainties in both model approaches regarding the absolute ice content values, but much smaller uncertainties regarding the changes in ice and unfrozen water content. We conclude that this approach is well suited for the analysis of permafrost changes in both model and monitoring studies, even though more efforts are needed for obtaining in situ ground truth data of ice content and porosity.

  10. Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models

    NASA Astrophysics Data System (ADS)

    Mosher, J.; Guy, J.; Kessler, R.; Astier, P.; Marriner, J.; Betoule, M.; Sako, M.; El-Hage, P.; Biswas, R.; Pain, R.; Kuhlmann, S.; Regnault, N.; Frieman, J. A.; Schneider, D. P.

    2014-09-01

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ~120 low-redshift (z < 0.1) SNe Ia, ~255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ~290 SNLS SNe Ia (z <= 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w input - w recovered) ranging from -0.005 ± 0.012 to -0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is -0.014 ± 0.007.

  11. First principles Candu fuel model and validation experimentation

    SciTech Connect

    Corcoran, E.C.; Kaye, M.H.; Lewis, B.J.; Thompson, W.T. [Royal Military College of Canada, Department of Chemistry and Chemical Engineering, P.O. Box 17000 Station Forces, Kingston, Ontario K7K 7B4 (Canada); Akbari, F. [Atomic Energy of Canada Limited - Chalk River Ontario, Ontario KOJ IJ0 (Canada); Royal Military College of Canada, Department of Chemistry and Chemical Engineering, P.O. Box 17000 Station Forces, Kingston, Ontario K7K 7B4 (Canada); Higgs, J.D. [Atomic Energy of Canada Limited - 430 Bayside Drive, Saint John, NB E2J 1A8 (Canada); Royal Military College of Canada, Department of Chemistry and Chemical Engineering, P.O. Box 17000 Station Forces, Kingston, Ontario K7K 7B4 (Canada); Verrall, R.A.; He, Z.; Mouris, J.F. [Atomic Energy of Canada Limited - Chalk River Laboratories, Chalk River Ontario, Ontario KOJ IJ0 (Canada)

    2007-07-01

    Many modeling projects on nuclear fuel rest on a quantitative understanding of the co-existing phases at various stages of burnup. Since the various fission products have considerably different abilities to chemically associate with oxygen, and the O/M ratio is slowly changing as well, the chemical potential (generally expressed as an equivalent oxygen partial pressure) is a function of burnup. Concurrently, well-recognized small fractions of new phases such as inert gas, noble metals, zirconates, etc. also develop. To further complicate matters, the dominant UO2 fuel phase may be non-stoichiometric and most of the minor phases have a variable composition dependent on temperature and possible contact with the coolant in the event of a sheathing defect. A Thermodynamic Fuel Model to predict the phases in partially burned Candu nuclear fuel containing many major fission products has been under development. This model is capable of handling non-stoichiometry in the UO2 fluorite phase, dilute solution behaviour of significant solute oxides, noble metal inclusions, a second metal solid solution U(Pd-Rh-Ru)3, zirconate and uranate solutions as well as other minor solid phases, and volatile gaseous species. The treatment is a melding of several thermodynamic modeling projects dealing with isolated aspects of this important multi-component system. To simplify the computations, the number of elements has been limited to twenty major representative fission products known to appear in spent fuel. The proportion of elements must first be generated using SCALES-5. Oxygen is inferred from the concentration of the other elements. Provision to study the disposition of very minor fission products is included within the general treatment but these are introduced only on an as-needed basis for a particular purpose. The building blocks of the model are the standard Gibbs energies of formation of the many possible compounds expressed as a function of temperature. To these data are added mixing terms associated with the appearance of the component species in particular phases. To validate the model, coulometric titration experiments relating the chemical potential of oxygen to the moles of oxygen introduced to SIMFUEL are underway. A description of the apparatus to oxidize and reduce samples in a controlled way in a H2/H2O mixture is presented. Existing measurements for irradiated fuel, both defected and non-defected, are also being incorporated into the validation process. (authors)

  12. Validation of Thermospheric Density Models for Drag Specification

    NASA Astrophysics Data System (ADS)

    Boll, N. J.; Ridley, A. J.; Doornbos, E.

    2014-12-01

    The rate of deployment for small satellite constellations into low earth orbit (LEO) is rapidly increasing. At these altitudes, the orbital characteristics of low mass spacecraft are heavily impacted by atmospheric drag. Given that many such satellites do not possess systems capable of applying thrust to correct for these perturbations, the ability to perform station-keeping maneuvers, as well as to adjust and maintain the relative position of each spacecraft within a constellation, is greatly dependent on the ability to accurately model variations in the thermosphere-ionosphere density. This paper uses density data measured along the orbital paths of the Challenging Minisatellite Payload (CHAMP), the Gravity Recovery and Climate Experiment (GRACE), and the Gravity field and steady-state Ocean Circulation Explorer (GOCE) to validate and compare several atmospheric models, including the Global Ionosphere Thermosphere Model (GITM), the US Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Radar (NRLMSISE-00), and the Jacchia-Bowman 2008 empirical thermospheric density model (JB2008), under various geomagnetic activity levels and seasonal conditions.

  13. Solution Verification Linked to Model Validation, Reliability, and Confidence

    Microsoft Academic Search

    R W Logan; C K Nitta

    2004-01-01

    The concepts of Verification and Validation (V&V) can be oversimplified in a succinct manner by saying that 'verification is doing things right' and 'validation is doing the right thing'. In the world of the Finite Element Method (FEM) and computational analysis, it is sometimes said that 'verification means solving the equations right' and 'validation means solving the right equations'. In

  14. Validation of a finite element modelization of shallow waves propagation

    NASA Astrophysics Data System (ADS)

    Eiselt, F.; Shahrour, I.; Tricot, J. C.; Pernod, Ph.; Delannoy, B.

    The detection of shallow underground cavities by means of seismic reflection encounters major difficulties because, in the analysis of the recorded data, the reflection signals induced by the cavities cannot be separated from the ground roll and refractions. In order to overcome these difficulties, we propose to use fundamental analysis which, by giving a thorough understanding of wave propagation in soils, permits an improvement of the instrumentation and a better analysis of the recorded signals. Since it is impossible to resolve the wave propagation problem in soils involving cavities by means of analytical methods, we use the finite element technique, which is at present widely used in the resolution of engineering problems. In this paper, we present the first part of our work, which concerns the validation of the finite element program PECPLAS on laboratory reflection tests carried out on a physical model involving cavities.

  15. Validation of a CFD model for predicting film cooling performance

    NASA Astrophysics Data System (ADS)

    Ward, S. C.; Fricker, D. A.; Lee, R. S.

    1993-06-01

    The validation of a CFD code for predicting supersonic, tangential injection film cooling performance is presented. Three different experimental film cooling studies have been simulated using the MDNS3D CFD code, and results are shown for comparison with the experimental data. MDNS3D is a Reynolds Averaged Navier-Stokes code with fully coupled k-epsilon turbulence and finite rate chemical kinetics models. Free shear layer flow fields with both chemically reacting and nonreacting coolant layers are examined. Test case one simulates nitrogen coolant injection over a recessed window on a 3D interceptor forebody. Test case two involves helium coolant injection into an air freestream. Test case three simulates highly reactive N2O4/NO2 coolant injection over a flat plate with an external arcjet onset flow. The results presented demonstrate the capability of the CFD code to accurately predict film cooling performance for a variety of complex flow configurations.

  16. Modeling and Validating Chronic Pharmacological Manipulation of Circadian Rhythms

    PubMed Central

    Kim, J K; Forger, D B; Marconi, M; Wood, D; Doran, A; Wager, T; Chang, C; Walton, K M

    2013-01-01

    Circadian rhythms can be entrained by a light-dark (LD) cycle and can also be reset pharmacologically, for example, by the CK1δ/ε inhibitor PF-670462. Here, we determine how these two independent signals affect circadian timekeeping from the molecular to the behavioral level. By developing a systems pharmacology model, we predict and experimentally validate that chronic CK1δ/ε inhibition during the earlier hours of a LD cycle can produce a constant stable delay of rhythm. However, chronic dosing later during the day, or in the presence of longer light intervals, is not predicted to yield an entrained rhythm. We also propose a simple method based on phase response curves (PRCs) that predicts the effects of a LD cycle and chronic dosing of a circadian drug. This work indicates that dosing timing and environmental signals must be carefully considered for accurate pharmacological manipulation of circadian phase. PMID:23863866

  17. Utilizing Chamber Data for Developing and Validating Climate Change Models

    NASA Technical Reports Server (NTRS)

    Monje, Oscar

    2012-01-01

    Controlled environment chambers (e.g. growth chambers, SPAR chambers, or open-top chambers) are useful for measuring plant ecosystem responses to climatic variables and CO2 that affect plant water relations. However, data from chambers were found to overestimate responses of C fluxes to CO2 enrichment. Chamber data may be confounded by numerous artifacts (e.g. sidelighting, edge effects, increased temperature and VPD, etc.), and this limits what can be measured accurately. Chambers can be used to measure canopy-level energy balance under controlled conditions, and plant transpiration responses to CO2 concentration can be elucidated. However, these measurements cannot be used directly in model development or validation. The response of stomatal conductance to CO2 will be the same as in the field, but the measured response must be recalculated in such a manner as to account for differences in aerodynamic conductance, temperature and VPD between the chamber and the field.

  18. Development, validation and application of numerical space environment models

    NASA Astrophysics Data System (ADS)

    Honkonen, Ilja

    2013-10-01

    Currently the majority of space-based assets are located inside the Earth's magnetosphere where they must endure the effects of the near-Earth space environment, i.e. space weather, which is driven by the supersonic flow of plasma from the Sun. Space weather refers to the day-to-day changes in the temperature, magnetic field and other parameters of the near-Earth space, similarly to ordinary weather which refers to changes in the atmosphere above ground level. Space weather can also cause adverse effects on the ground, for example, by inducing large direct currents in power transmission systems. The performance of computers has been growing exponentially for many decades and as a result the importance of numerical modeling in science has also increased rapidly. Numerical modeling is especially important in space plasma physics because there are no in-situ observations of space plasmas outside of the heliosphere and it is not feasible to study all aspects of space plasmas in a terrestrial laboratory. With the increasing number of computational cores in supercomputers, the parallel performance of numerical models on distributed memory hardware is also becoming crucial. This thesis consists of an introduction and four peer-reviewed articles, and describes the process of developing numerical space environment/weather models and the use of such models to study the near-Earth space. A complete model development chain is presented, starting from initial planning and design to distributed memory parallelization and optimization, and finally testing, verification and validation of numerical models. A grid library that provides good parallel scalability on distributed memory hardware and several novel features, the distributed Cartesian cell-refinable grid (DCCRG), is designed and developed. DCCRG is presently used in two numerical space weather models being developed at the Finnish Meteorological Institute. The first global magnetospheric test particle simulation based on the Vlasov description of plasma is carried out using the Vlasiator model. The test shows that the Vlasov equation for plasma in six-dimensional phase space is solved correctly by Vlasiator, that results are obtained beyond those of the magnetohydrodynamic (MHD) description of plasma, and that global magnetospheric simulations using a hybrid-Vlasov model are feasible on current hardware. For the first time, four global magnetospheric models using the MHD description of plasma (BATS-R-US, GUMICS, OpenGGCM, LFM) are run with identical solar wind input and the results compared to observations in the ionosphere and outer magnetosphere. Based on the results of the global magnetospheric MHD model GUMICS, a hypothesis is formulated for a new mechanism of plasmoid formation in the Earth's magnetotail.

  19. User's guide for the stock-recruitment model validation program. Environmental Sciences Division Publication No. 1985

    SciTech Connect

    Christensen, S.W.; Kirk, B.L.; Goodyear, C.P.

    1982-06-01

    SRVAL is a FORTRAN IV computer code designed to aid in assessing the validity of curve-fits of the linearized Ricker stock-recruitment model, modified to incorporate multiple-age spawners and to include an environmental variable, to variously processed annual catch-per-unit effort statistics for a fish population. It is sometimes asserted that curve-fits of this kind can be used to determine the sensitivity of fish populations to such man-induced stresses as entrainment and impingement at power plants. The SRVAL code was developed to test such assertions. It was utilized in testimony written in connection with the Hudson River Power Case (US Environmental Protection Agency, Region II). This testimony was recently published as a NUREG report. Here, a user's guide for SRVAL is presented.
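
    In its basic (unmodified) form, the linearized Ricker stock-recruitment relation that SRVAL evaluates can be written as ln(R/S) = a - bS, which reduces the curve-fit to ordinary least squares. The sketch below illustrates such a fit in Python; the spawner and recruit indices are invented placeholders, not Hudson River data, and the sketch omits the multiple-age-spawner and environmental-variable modifications described above.

      # Minimal sketch: least-squares fit of a linearized Ricker model,
      # ln(R/S) = a - b*S, to hypothetical stock (S) and recruitment (R) indices.
      # The numbers are placeholders, not data from the Hudson River Power Case.
      import numpy as np

      S = np.array([120.0, 150.0, 180.0, 210.0, 260.0, 300.0])  # spawner index
      R = np.array([220.0, 240.0, 250.0, 245.0, 230.0, 200.0])  # recruit index

      y = np.log(R / S)                          # linearized response
      X = np.column_stack([np.ones_like(S), S])  # design matrix [1, S]
      (a, slope), *_ = np.linalg.lstsq(X, y, rcond=None)
      print(f"alpha = {np.exp(a):.3f}, beta = {-slope:.5f}")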

  20. Criterion Validity, Severity Cut Scores, and Test-Retest Reliability of the Beck Depression Inventory-II in a University Counseling Center Sample

    ERIC Educational Resources Information Center

    Sprinkle, Stephen D.; Lurie, Daphne; Insko, Stephanie L.; Atkinson, George; Jones, George L.; Logan, Arthur R.; Bissada, Nancy N.

    2002-01-01

    The criterion validity of the Beck Depression Inventory-II (BDI-II; A. T. Beck, R. A. Steer, & G. K. Brown, 1996) was investigated by pairing blind BDI-II administrations with the major depressive episode portion of the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I; M. B. First, R. L. Spitzer, M. Gibbon, & J. B. W. Williams,…

  1. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
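
    As a rough sketch of the bootstrap-style convergence check described above (not the authors' implementation), one can resample the model evaluations, recompute a sensitivity measure on each resample, and treat the width of the resulting confidence interval as the convergence statistic. The example below uses mean absolute elementary effects in the spirit of the method of Morris; all names, numbers, and the convergence tolerance are illustrative assumptions.

      # Minimal sketch: bootstrap confidence-interval width as a convergence
      # criterion for a sampling-based sensitivity index (here, the mean absolute
      # elementary effect of one parameter). Data and tolerance are invented.
      import numpy as np

      rng = np.random.default_rng(0)

      def bootstrap_ci_width(estimate_fn, data, n_boot=500, alpha=0.05):
          """Width of the (1 - alpha) bootstrap confidence interval of estimate_fn(data)."""
          n = len(data)
          stats = np.array([estimate_fn(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
          lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
          return hi - lo

      # toy "elementary effects" for a single parameter; in practice these come
      # from model runs along Morris trajectories
      effects = rng.normal(loc=0.3, scale=0.1, size=200)
      width = bootstrap_ci_width(lambda x: np.abs(x).mean(), effects)
      print("converged" if width < 0.05 else "increase sample size", f"(CI width = {width:.3f})")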

  2. Development and validation of a broad scheme for prediction of HLA class II restricted T cell epitopes.

    PubMed

    Paul, Sinu; Lindestam Arlehamn, Cecilia S; Scriba, Thomas J; Dillon, Myles B C; Oseroff, Carla; Hinz, Denise; McKinney, Denise M; Carrasco Pro, Sebastian; Sidney, John; Peters, Bjoern; Sette, Alessandro

    2015-07-01

    Computational prediction of HLA class II restricted T cell epitopes has great significance in many immunological studies including vaccine discovery. In recent years, prediction of HLA class II binding has improved significantly but a strategy to globally predict the most dominant epitopes has not been rigorously defined. Using human immunogenicity data associated with sets of 15-mer peptides overlapping by 10 residues spanning over 30 different allergens and bacterial antigens, and HLA class II binding prediction tools from the Immune Epitope Database and Analysis Resource (IEDB), we optimized a strategy to predict the top epitopes recognized by human populations. The most effective strategy was to select peptides based on predicted median binding percentiles for a set of seven DRB1 and DRB3/4/5 alleles. These results were validated with predictions on a blind set of 15 new allergens and bacterial antigens. We found that the top 21% predicted peptides (based on the predicted binding to seven DRB1 and DRB3/4/5 alleles) were required to capture 50% of the immune response. This corresponded to an IEDB consensus percentile rank of 20.0, which could be used as a universal prediction threshold. Utilizing actual binding data (as opposed to predicted binding data) did not appreciably change the efficacy of global predictions, suggesting that the imperfect predictive capacity is not due to poor algorithm performance, but intrinsic limitations of HLA class II epitope prediction schema based on HLA binding in genetically diverse human populations. PMID:25862607
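
    A schematic version of the selection rule described above, assuming per-allele percentile predictions are already in hand, ranks each peptide by the median of its predicted binding percentiles over the seven-allele DR panel and keeps those at or below the consensus cutoff of 20. The peptide sequences and percentile values below are invented placeholders, not IEDB output.

      # Minimal sketch of the selection strategy described in the abstract:
      # rank 15-mer peptides by the median of their predicted binding percentiles
      # across a panel of seven DRB1/DRB3/4/5 alleles and keep those at or below
      # a consensus percentile-rank cutoff. Values are illustrative only.
      from statistics import median

      predicted_percentiles = {
          "PEPTIDEAAAAAAAA": [12.0, 35.0, 8.0, 22.0, 40.0, 15.0, 18.0],
          "PEPTIDEBBBBBBBB": [55.0, 60.0, 48.0, 70.0, 52.0, 66.0, 58.0],
          "PEPTIDECCCCCCCC": [5.0, 9.0, 14.0, 3.0, 25.0, 11.0, 7.0],
      }
      CUTOFF = 20.0  # universal percentile-rank threshold suggested by the validation above

      selected = sorted(p for p, pct in predicted_percentiles.items()
                        if median(pct) <= CUTOFF)
      print(selected)  # peptides predicted to capture most of the response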

  3. Operational validation of Gaussian plume models at a plains site. Final report

    Microsoft Academic Search

    S. D. Reynolds; C. Seigneur; T. E. Stoeckenius; G. E. Moore; R. G. Johnson; R. J. Londergan

    1984-01-01

    As part of the Operational Validation Phase of the EPRI Plume Model Validation and Development (PMV and D) Project, three standard Gaussian air quality models have been evaluated by comparing predicted power plant impacts with ground-level measurements. The data used for model evaluation were collected over flat terrain near the Kincaid Generating Station during 1980 and 1981. The models were

  4. Diagnostic validation of plume models at a plains site. Final report

    Microsoft Academic Search

    M. K. Liu; G. E. Moore

    1984-01-01

    The Plume Model Validation and Development study is intended to validate existing plume models and to provide the scientific basis for future model development. The diagnostic phase of the study consists of an analysis of transport and dispersion processes affecting a plume from an elevated source located above flat terrain. Two important objectives are evaluation of existing plume model components

  5. Validity of using Gaussian Schell model for extended beacon studies

    NASA Astrophysics Data System (ADS)

    Basu, Santasri; Cusumano, Salvatore J.; Hyde, Milo W.; Marciniak, Michael A.; Fiorino, Steven T.

    2012-06-01

    In many military applications that use Adaptive Optics (AO) a point source beacon is ideally required at the target to measure and to correct for the wavefront aberrations caused by propagation through the atmosphere. However, it is rarely possible to create a point source beacon at the target. The "extended beacons" that are created instead have intensity profiles with a finite spatial extent and exhibit varying degrees of spatial coherence. The Gaussian Schell model might be a convenient way to model these extended sources because of its analytical tractability. The present work examines the validity of using such a model by evaluating the scattered field from a rough surface target using a full wave electromagnetic solution (method of moments). The full wave electromagnetic calculation improves the fidelity of the analysis by capturing all aspects of laser-target interaction, i.e., shadowing/masking, multiple reflections, etc. A variety of rough surface targets with different roughness statistics have been analyzed. This analysis will ultimately aid in understanding the key parameters of extended beacons and how they impact the Adaptive Optics (AO) system performance.

  6. Circulation Control Model Experimental Database for CFD Validation

    NASA Technical Reports Server (NTRS)

    Paschal, Keith B.; Neuhart, Danny H.; Beeler, George B.; Allan, Brian G.

    2012-01-01

    A 2D circulation control wing was tested in the Basic Aerodynamic Research Tunnel at the NASA Langley Research Center. A traditional circulation control wing employs tangential blowing along the span over a trailing-edge Coanda surface for the purpose of lift augmentation. This model has been tested extensively at the Georgia Tech Research Institute for the purpose of performance documentation at various blowing rates. The current study seeks to expand on the previous work by documenting additional flow-field data needed for validation of computational fluid dynamics. Two jet momentum coefficients were tested during this entry: 0.047 and 0.114. Boundary-layer transition was investigated and turbulent boundary layers were established on both the upper and lower surfaces of the model. Chordwise and spanwise pressure measurements were made, and tunnel sidewall pressure footprints were documented. Laser Doppler Velocimetry measurements were made on both the upper and lower surface of the model at two chordwise locations (x/c = 0.8 and 0.9) to document the state of the boundary layers near the spanwise blowing slot.

  7. Coupling photochemistry with haze formation in Titan's atmosphere, Part II: Results and validation with Cassini\\/Huygens data

    Microsoft Academic Search

    P. P. Lavvas; A. Coustenis; I. M. Vardavas

    2008-01-01

    The new one-dimensional radiative–convective/photochemical/microphysical model described in Part I is applied to the study of Titan's atmospheric processes that lead to haze formation. Our model generates the haze structure from the gaseous species photochemistry. Model results are presented for the species vertical concentration profiles, haze formation and its radiative properties, vertical temperature/density profiles and geometric albedo. These are validated against

  8. Query Based UML Modeling Validation and Verification of the System Model and Behavior for a

    E-print Network

    Austin, Mark

    Query Based UML Modeling: Validation and Verification of the System Model and Behavior for a Hydraulic Crane. Denny Mathew, ENPM 643; instructor: Dr. Mark Austin. Systems Engineering Process for a Hydraulic Crane (Link Belt ATC 3200): requirements, systems structure, system behavior, constraints.

  9. Comparison and validation of combined GRACE/GOCE models of the Earth's gravity field

    NASA Astrophysics Data System (ADS)

    Hashemi Farahani, H.; Ditmar, P.

    2012-04-01

    Accurate global models of the Earth's gravity field are needed in various applications: in geodesy - to facilitate the production of a unified global height system; in oceanography - as a source of information about the reference equipotential surface (geoid); in geophysics - to draw conclusions about the structure and composition of the Earth's interiors, etc. A global and (nearly) homogeneous set of gravimetric measurements is being provided by the dedicated satellite mission Gravity Field and Steady-State Ocean Circulation Explorer (GOCE). In particular, Satellite Gravity Gradiometry (SGG) data acquired by this mission are characterized by an unprecedented accuracy/resolution: according to the mission objectives, they must ensure global geoid modeling with an accuracy of 1 - 2 cm at the spatial scale of 100 km (spherical harmonic degree 200). A number of new models of the Earth's gravity field have been compiled on the basis of GOCE data in the course of the last 1 - 2 years. The best of them take into account also the data from the satellite gravimetry mission Gravity Recovery And Climate Experiment (GRACE), which offers an unbeatable accuracy in the range of relatively low degrees. Such combined models contain state-of-the-art information about the Earth's gravity field up to degree 200 - 250. In the present study, we compare and validate such models, including GOCO02, EIGEN-6S, and a model compiled in-house. In addition, the EGM2008 model produced in the pre-GOCE era is considered as a reference. The validation is based on the ability of the models to: (i) predict GRACE K-Band Ranging (KBR) and GOCE SGG data (not used in the production of the models under consideration), and (ii) synthesize a mean dynamic topography model, which is compared with the CNES-CLS09 model derived from in situ oceanographic data. The results of the analysis demonstrate that the GOCE SGG data lead not only to significant improvements over continental areas with a poor coverage with terrestrial gravimetry measurements (such as Africa, Himalayas, and South America), but also to some improvements over well-studied continental areas (such as North America and Australia). Furthermore, we demonstrate a somewhat higher performance of the model produced in-house compared to the other combined GRACE/GOCE models. At the same time, it is found that the combined models show a relatively high level of noise in the oceanic areas compared to EGM2008. This implies that further efforts are needed in order to suppress high-frequency noise in the combined models in the optimal way.

  10. An approach to model validation and model-based prediction -- polyurethane foam case study.

    SciTech Connect

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the ''philosophy'' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating modifying and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a ''model supplement term'' when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapons response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapons response. 
In the validation analysis it is indicated that the model tends to ''exaggerate'' the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model-based predictions. Several hypothetical prediction problems are created and addressed. Hypothetical problems are used because no guidance was provided concern
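
    As a loose illustration of the ''model supplement term'' idea (point 4 above), a bias correction can be fitted to the residuals between simulation and validation experiments and then added to subsequent model-based predictions. The sketch below assumes a simple linear-in-temperature supplement and uses invented numbers; it is not the report's actual foam-decomposition analysis.

      # Minimal sketch, assuming a linear-in-temperature supplement (bias) term
      # fitted to residuals between observed and simulated results. Data invented.
      import numpy as np

      temp      = np.array([300.0, 400.0, 500.0, 600.0])  # experiment temperatures (K)
      observed  = np.array([0.10, 0.25, 0.52, 0.90])       # measured decomposition fraction
      simulated = np.array([0.08, 0.27, 0.60, 1.05])       # model output at the same conditions

      residual = observed - simulated
      slope, intercept = np.polyfit(temp, residual, 1)     # fitted supplement term delta(T)

      def corrected_prediction(t, raw_model_output):
          """Model prediction plus the fitted supplement (bias-correction) term."""
          return raw_model_output + slope * t + intercept

      print(corrected_prediction(550.0, 0.80))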

  11. Modeling Portfolios that Contain Risky Assets Optimization II: Model-Based Portfolio

    E-print Network

    Maryland at College Park, University of

    Modeling Portfolios that Contain Risky Assets. Optimization II: Model-Based Portfolio Management. Version (c) 2013 Charles David Levermore. Outline: Risk and Reward (I: Introduction; II: Markowitz Portfolios; III: Basic Markowitz Portfolio Theory); Portfolio Models (I: Portfolios with Risk-Free Assets; II: Long

  12. Canyon building ventilation system dynamic model -- Parameters and validation

    SciTech Connect

    Moncrief, B.R. [Westinghouse Savannah River Co., Aiken, SC (United States); Chen, F.F.K. [Bechtel National, Inc., San Francisco, CA (United States)

    1993-02-01

    Plant system simulation crosses many disciplines. At the core is the mimic of key components in the form of mathematical "models." These component models are functionally integrated to represent the plant. With today's low cost high capacity computers, the whole plant can be truly and effectively reproduced in a computer model. Dynamic simulation has its roots in "single loop" design, which is still a common objective in the employment of simulation. The other common objectives are the ability to preview plant operation, to anticipate problem areas, and to test the impact of design options. As plant system complexity increases and our ability to simulate the entire plant grows, the objective to optimize plant system design becomes practical. This shift in objectives from problem avoidance to total optimization by far offers the most rewarding potential. Even a small reduction in bulk materials and space can sufficiently justify the application of this technology. Furthermore, to realize an optimal plant starts from a tight and disciplined design. We believe the assurance required to execute such a design strategy can partly be derived from a plant model. This paper reports on the application of a dynamic model to evaluate the capacity of an existing production plant ventilation system. This study met the practical objectives of capacity evaluation under present and future conditions, and under normal and accidental situations. More importantly, the description of this application, in its methods and its utility, aims to validate the technology of dynamic simulation in the environment of plant system design and safe operation.

  13. Validation of lateral boundary conditions for regional climate models

    NASA Astrophysics Data System (ADS)

    Pignotti, Angela J.

    California boasts a population of more than 34 million and is the tenth largest energy consumer in the world. As such, the California Energy Commission (CEC) is greatly concerned about the environmental impacts of global climate change on energy needs, production and distribution. In order to better understand future energy needs in California, the CEC depends upon international climate scientists who use results from simulations of western U.S. regional climate models (RCMs). High-resolution RCMs are driven by coupled Atmosphere/Ocean General Circulation Model (AOGCM) simulations along lateral surface boundaries outlining the region of interest. For projections of future climate, however, when the RCM is driven by future climate change output from an AOGCM, the performance of an RCM will depend to some degree on the merit of the AOGCM. The objective of this study is to provide tools to assist with model validation of coupled Atmosphere/Ocean General Circulation Model (AOGCM) simulations against present-day observations. A comparison technique frequently utilized by climate scientists is multiple hypothesis testing, which identifies statistically significant regions of difference between spatial fields. In order to use these methods, the AOGCM fields must be interpolated onto the reanalysis grid. In this work, I present an efficient interpolation technique using thin-plate splines. I then compare significant regions of difference identified by the Bonferroni multiple-testing procedure against those identified by the false discovery rate methodology. A major drawback of multiple hypothesis methods is that they do not account for correlation in the spatial field. I introduce and employ measures of comparison, including the Mahalanobis distance measure, that account for anisotropy within the spatial field. Bayesian techniques are applied to calculate comparison measures between the driver-GCM lateral surface boundaries and the NCEP/NCAR and ERA40 reanalysis data sets. I find that the Mahalanobis measure provides a systematic ranking of model performance against present-day observations.
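
    As a sketch of the anisotropy-aware comparison measure mentioned above, the Mahalanobis distance between a model field and a reference field can be computed with a covariance matrix estimated from an ensemble of reference states. The arrays below are random placeholders rather than GCM or reanalysis data.

      # Minimal sketch: Mahalanobis distance between a simulated field and a
      # reference (reanalysis) field, with covariance estimated from an ensemble
      # of reference states. All arrays are illustrative placeholders.
      import numpy as np

      rng = np.random.default_rng(1)
      n_grid = 50

      reference_ensemble = rng.normal(size=(200, n_grid))  # e.g. monthly reanalysis states
      model_field = rng.normal(size=n_grid)                # field from the driving AOGCM
      ref_mean = reference_ensemble.mean(axis=0)

      cov = np.cov(reference_ensemble, rowvar=False) + 1e-6 * np.eye(n_grid)  # regularized
      diff = model_field - ref_mean
      d2 = diff @ np.linalg.solve(cov, diff)   # squared Mahalanobis distance
      print(np.sqrt(d2))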

  14. Validation model for Raman based skin carotenoid detection.

    PubMed

    Ermakov, Igor V; Gellermann, Werner

    2010-12-01

    Raman spectroscopy holds promise as a rapid objective non-invasive optical method for the detection of carotenoid compounds in human tissue in vivo. Carotenoids are of interest due to their functions as antioxidants and/or optical absorbers of phototoxic light at deep blue and near UV wavelengths. In the macular region of the human retina, carotenoids may prevent or delay the onset of age-related tissue degeneration. In human skin, they may help prevent premature skin aging, and are possibly involved in the prevention of certain skin cancers. Furthermore, since carotenoids exist in high concentrations in a wide variety of fruits and vegetables, and are routinely taken up by the human body through the diet, skin carotenoid levels may serve as an objective biomarker for fruit and vegetable intake. Before the Raman method can be accepted as a widespread optical alternative for carotenoid measurements, direct validation studies are needed to compare it with the gold standard of high performance liquid chromatography. This is because the tissue Raman response is in general accompanied by a host of other optical processes which have to be taken into account. In skin, the most prominent is strongly diffusive, non-Raman scattering, leading to relatively shallow light penetration of the blue/green excitation light required for resonant Raman detection of carotenoids. Also, sizable light attenuation exists due to the combined absorption from collagen, porphyrin, hemoglobin, and melanin chromophores, and additional fluorescence is generated by collagen and porphyrins. In this study, we investigate for the first time the direct correlation of in vivo skin tissue carotenoid Raman measurements with subsequent chromatography derived carotenoid concentrations. As tissue site we use heel skin, in which the stratum corneum layer thickness exceeds the light penetration depth, which is free of optically confounding chromophores, which can be easily optically accessed for in vivo RRS measurement, and which can be easily removed for subsequent biochemical measurements. Excellent correlation (coefficient R=0.95) is obtained for this tissue site which could serve as a model site for scaled up future validation studies of large populations. The obtained results provide proof that resonance Raman spectroscopy is a valid non-invasive objective methodology for the quantitative assessment of carotenoid antioxidants in human skin in vivo. PMID:20678465

  15. Modeling short wave radiation and ground surface temperature: a validation experiment in the Western Alps

    NASA Astrophysics Data System (ADS)

    Pogliotti, P.; Cremonese, E.; Dallamico, M.; Gruber, S.; Migliavacca, M.; Morra di Cella, U.

    2009-12-01

    Permafrost distribution in high-mountain areas is influenced by topography (micro-climate) and high variability of ground cover conditions. Its monitoring is very difficult due to logistical problems like accessibility, costs, weather conditions and reliability of instrumentation. For these reasons physically-based modeling of surface rock/ground temperatures (GST) is fundamental for the study of mountain permafrost dynamics. With this awareness a 1D version of the GEOtop model (www.geotop.org) is tested in several high-mountain sites and its accuracy in reproducing GST and incoming short wave radiation (SWin) is evaluated using independent field measurements. In order to describe the influence of topography, both flat and near-vertical sites with different aspects are considered. Since the validation of SWin is difficult on steep rock faces (due to the lack of direct measures) and validation of GST is difficult on flat sites (due to the presence of snow) the two parameters are validated as independent experiments: SWin only on flat morphologies, GST only on the steep ones. The main purpose is to investigate the effect of: (i) distance between driving meteo station location and simulation point location, (ii) cloudiness, (iii) simulation point aspect, (iv) winter/summer period. The temporal duration of model runs is variable from 3 years for the SWin experiment to 8 years for the validation of GST. The model parameterization is constant and tuned for a common massive bedrock of crystalline rock like granite. Ground temperature profile is not initialized because rock temperature is measured at only 10 cm depth. A set of 9 performance measures is used for comparing model predictions and observations (including: fractional mean bias (FB), coefficient of residual mass (CRM), mean absolute error (MAE), modelling efficiency (ME), coefficient of determination (R2)). Results are very encouraging. For both experiments the distance (km) between the location of the driving meteo station and the location of the simulation does not have a significant effect (below 230 km) on ME and R2 values. The incoming short wave radiation on flat sites is very well modeled and only the cloudiness can be a significant source of error in terms of underestimation. The GST on steep sites is also very well modeled and very good values of both ME and R2 are obtained. MAE values are always fairly large (1-5°C), but the fixed parameterization probably plays a strong role in this. Over- and under-estimation occur during winter and summer, respectively, and may result from imperfect modeling of SWin on near-vertical morphologies. In the future, direct validation of SWin on steep sites is needed, together with a validation of snow accumulation/melting on flat sites and an analysis of the effect on the ground thermal regime. This requires very good precipitation datasets in middle-high-mountain areas.
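
    For reference, a few of the performance measures named above (MAE, modelling efficiency ME, and fractional mean bias FB) can be computed from paired observed and modelled values as in the short sketch below. The temperatures are invented and the exact formula conventions may differ from those used in the study.

      # Minimal sketch of a few of the performance measures listed above,
      # computed from paired observed/modelled ground-surface temperatures.
      # Values are invented; formula conventions are one common choice.
      import numpy as np

      obs = np.array([2.1, 4.3, 7.8, 10.2, 6.5, 1.0])   # observed GST (deg C)
      mod = np.array([1.5, 5.0, 9.1, 9.0, 7.4, 0.2])    # modelled GST (deg C)

      mae = np.mean(np.abs(mod - obs))                                   # mean absolute error
      me  = 1.0 - np.sum((mod - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)  # modelling efficiency
      fb  = 2.0 * (mod.mean() - obs.mean()) / (mod.mean() + obs.mean())  # fractional mean bias
      print(f"MAE = {mae:.2f} degC, ME = {me:.2f}, FB = {fb:.2f}")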

  16. A Test of Model Validation from Observed Temperature Trends

    NASA Astrophysics Data System (ADS)

    Singer, S. F.

    2006-12-01

    How much of current warming is due to natural causes and how much is manmade? This requires a comparison of the patterns of observed warming with the best available models that incorporate both anthropogenic (greenhouse gases and aerosols) as well as natural climate forcings (solar and volcanic). Fortunately, we have the just published U.S.-Climate Change Science Program (CCSP) report (www.climatescience.gov/Library/sap/sap1-1/finalreport/default.htm), based on best current information. As seen in Fig. 1.3F of the report, modeled surface temperature trends change little with latitude, except for a stronger warming in the Arctic. The observations, however, show a strong surface warming in the northern hemisphere but not in the southern hemisphere (see Fig. 3.5C and 3.6D). The Antarctic is found to be cooling and Arctic temperatures, while currently rising, were higher in the 1930s than today. Although the Executive Summary of the CCSP report claims "clear evidence" for anthropogenic warming, based on comparing tropospheric and surface temperature trends, the report itself does not confirm this. Greenhouse models indicate that the tropics should provide the most sensitive location for their validation; trends there should increase by 200-300 percent with altitude, peaking at around 10 kilometers. The observations, however, show the opposite: flat or even decreasing tropospheric trend values (see Fig. 3.7 and also Fig. 5.7E). This disparity is demonstrated most strikingly in Fig. 5.4G, which shows the difference between surface and troposphere trends for a collection of models (displayed as a histogram) and for balloon and satellite data. [The disparities are less apparent in the Summary, which displays model results in terms of "range" rather than as histograms.] There may be several possible reasons for the disparity: Instrumental and other effects that exaggerate or otherwise distort observed temperature trends. Or, more likely: Shortcomings in models that result in much reduced values of climate sensitivity; for example, the neglect of important negative feedbacks. Allowing for uncertainties in the data and for imperfect models, there is only one valid conclusion from the failure of greenhouse models to explain the observations: The human contribution to global warming is still quite small, so that natural climate factors are dominant. This may also explain why the climate was cooling from 1940 to 1975 -- even as greenhouse-gas levels increased rapidly. An overall test for climate prediction may soon be possible by measuring the ongoing rise in sea level. According to my estimates, sea level should rise by 1.5 to 2.0 cm per decade (about the same rate as in past millennia); the U.N.-IPCC (4th Assessment Report) predicts 1.4 to 4.3 cm per decade. In the New York Review of Books (July 13, 2006), however, James Hansen suggests 20 feet or more per century -- equivalent to about 60 cm or more per decade.

  17. NAIRAS aircraft radiation model development, dose climatology, and initial validation

    PubMed Central

    Mertens, Christopher J; Meier, Matthias M; Brown, Steven; Norman, Ryan B; Xu, Xiaojing

    2013-01-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis suggests that these single-point differences will be within 30% when a new deterministic pion-initiated electromagnetic cascade code is integrated into NAIRAS, an effort which is currently underway.

  18. Stratospheric Heterogeneous Chemistry and Microphysics: Model Development, Validation and Applications

    NASA Technical Reports Server (NTRS)

    Turco, Richard P.

    1996-01-01

    The objectives of this project are to: define the chemical and physical processes leading to stratospheric ozone change that involve polar stratospheric clouds (PSCs) and the reactions occurring on the surfaces of PSC particles; study the formation processes, and the physical and chemical properties of PSCs, that are relevant to atmospheric chemistry and to the interpretation of field measurements taken during polar stratosphere missions; develop quantitative models describing PSC microphysics and heterogeneous chemical processes; assimilate laboratory and field data into these models; and calculate the extent of chemical processing on PSCs and the impact of specific microphysical processes on polar composition and ozone depletion. During the course of the project, a new coupled microphysics/physical-chemistry/photochemistry model for stratospheric sulfate aerosols and nitric acid and ice PSCs was developed and applied to analyze data collected during NASA's Arctic Airborne Stratospheric Expedition-II (AASE-II) and other missions. In this model, detailed treatments of multicomponent sulfate aerosol physical chemistry, sulfate aerosol microphysics, polar stratospheric cloud microphysics, PSC ice surface chemistry, as well as homogeneous gas-phase chemistry were included for the first time. In recent studies focusing on AASE measurements, the PSC model was used to analyze specific measurements from an aircraft deployment of an aerosol impactor, FSSP, and NO(y) detector. The calculated results are in excellent agreement with observations for particle volumes as well as NO(y) concentrations, thus confirming the importance of supercooled sulfate/nitrate droplets in PSC formation. The same model has been applied to perform a statistical study of PSC properties in the Northern Hemisphere using several hundred high-latitude air parcel trajectories obtained from Goddard. The rates of ozone depletion along trajectories with different meteorological histories are presently being systematically evaluated to identify the principal relationships between ozone loss and aerosol state. Under this project, we formulated a detailed quantitative model that predicts the multicomponent composition of sulfate aerosols under stratospheric conditions, including sulfuric, nitric, hydrochloric, hydrofluoric and hydrobromic acids. This work defined for the first time the behavior of liquid ternary-system type-1b PSCs. The model also allows the compositions and reactivities of sulfate aerosols to be calculated over the entire range of environmental conditions encountered in the stratosphere (and has been incorporated into a trajectory/microphysics model; see above). Important conclusions that derived from this work over the last few years include the following: the HNO3 content of liquid-state aerosols dominate PSCs below about 195 K; the freezing of nitric acid ice from sulfate aerosol solutions is likely to occur within a few degrees K of the water vapor frost point; the uptake and reactions of HCl in liquid aerosols is a critical component of PSC heterogeneous chemistry. In a related application of this work, the inefficiency of chlorine injection into the stratosphere during major volcanic eruptions was explained on the basis of nucleation of sulfuric acid aerosols in rising volcanic plumes leading to the formation of supercooled water droplets on these aerosols, which efficiently scavenges HCl via precipitation.

  19. Validation of a New Rainbow Model Over the Hawaiian Islands

    NASA Astrophysics Data System (ADS)

    Ricard, J. L.; Adams, P. L.; Barckike, J.

    2012-12-01

    A new realistic model of the rainbow has been developed at the CNRM. It is based on the Airy theory. The main entry parameters are the droplet size distribution, the angle of the sun above the horizon, the temperature of the droplets and the wavelength. The island of Hawaii seems to be a perfect place for validating the rainbow model, not only because of its famous rainbows but also because of the convenient ring road along the coast. The older, lower islands offer more frequent viewing opportunities, owing to the proximity of clear sky to heavy rainfall. Both Oahu and Kauai as well as the western part of Maui have coastal roads that offer good access to rainbows. The best time to view rainbows is when the sun angle is lowest, in other words near the winter solstice. Figure 1: Map of mean annual rainfall for the islands of Kauai and Oahu, developed from the new 2011 Rainfall Atlas of Hawaii (base period 1978-2007). Figure 2: Moisture zone map by Gon et al. (1998); blue areas are wet, green areas are mesic, and yellow areas are dry.

  20. Calibration and Validation of Airborne InSAR Geometric Model

    NASA Astrophysics Data System (ADS)

    Chunming, Han; huadong, Guo; Xijuan, Yue; Changyong, Dou; Mingming, Song; Yanbing, Zhang

    2014-03-01

    The image registration or geo-coding is a very important step for many applications of airborne interferometric Synthetic Aperture Radar (InSAR), especially for those involving Digital Surface Model (DSM) generation, which requires an accurate knowledge of the geometry of the InSAR system. However, the trajectory and attitude instabilities of the aircraft introduce severe distortions into the three-dimensional (3-D) geometric model. The 3-D geometrical model of an airborne SAR image depends on the SAR processor itself. When working in squinted mode, i.e., with an offset angle (squint angle) of the radar beam from the broadside direction, aircraft motion instabilities may produce distortions in the airborne InSAR geometric relationship which, if not properly compensated for during SAR imaging, may degrade the image registration. The determination of locations in the SAR image depends on the irradiated topography and on exact knowledge of all signal delays: the range delay and chirp delay (adjusted by the radar operator) and internal delays which are unknown a priori. Hence, in order to obtain reliable results, these parameters must be properly calibrated. An airborne InSAR mapping system has been developed by the Institute of Remote Sensing and Digital Earth (RADI), Chinese Academy of Sciences (CAS) to acquire three-dimensional geo-spatial data with high resolution and accuracy. To test the performance of the InSAR system, a Validation/Calibration (Val/Cal) campaign has been carried out in Sichuan Province, southwest China, whose results will be reported in this paper.

  1. Validation of ADEOS-II GLI ocean color products using in-situ observations

    Microsoft Academic Search

    Hiroshi Murakami; Kosei Sasaoka; Kohtaro Hosoda; Hajime Fukushima; Mitsuhiro Toratani; Robert Frouin; B. Greg Mitchell; Mati Kahru; Pierre-Yves Deschamps; Dennis Clark; Stephanie Flora; Motoaki Kishino; Sei-Ichi Saitoh; Ichio Asanuma; Akihiko Tanaka; Hiroaki Sasaki; Katsumi Yokouchi; Yoko Kiyomoto; Hiroaki Saito; Cécile Dupouy; Absornsuda Siripong; Satsuki Matsumura; Joji Ishizaka

    2006-01-01

    The Global Imager (GLI) aboard the Advanced Earth Observing Satellite-II (ADEOS-II) made global observations from 2 April 2003 to 24 October 2003. In cooperation with several institutes and scientists, we obtained quality controlled match-ups between GLI products and in-situ data, 116 for chlorophyll-a concentration (CHLA), 249 for normalized water-leaving radiance (nLw) at 443 nm, and 201 for aerosol optical thickness

  2. Validation of model based active control of combustion instability

    SciTech Connect

    Fleifil, M.; Ghoneim, Z.; Ghoniem, A.F.

    1998-07-01

    The demand for efficient, compact and clean combustion systems has spurred research into the fundamental mechanisms governing their performance and into means of interactively changing their performance characteristics. Thermoacoustic instability is frequently observed in combustion systems with high power density, when burning close to the lean flammability limit, or when using exhaust gas recirculation to meet more stringent emissions regulations. Its occurrence, and passive means of mitigating it, lead to performance degradation such as reduced combustion efficiency, high local heat transfer rates, an increase in the mixture equivalence ratio, or system failure due to structural damage. This paper reports on a study of the origin of thermoacoustic instability, its dependence on system parameters and the means of actively controlling it. The authors have developed an analytical model of thermoacoustic instability in premixed combustors. The model combines a heat release dynamics model constructed using the kinematics of a premixed flame stabilized behind a perforated plate with the linearized conservation equations governing the system acoustics. This formulation allows model based controller design. In order to test the performance of the analytical model, a numerical solution of the partial differential equations governing the system has been carried out using the principle of harmonic separation and focusing on the dominant unstable mode. This leads to a system of ODEs governing the thermofluid variables. Analytical predictions of the frequency and growth rate of the unstable mode are shown to be in good agreement with the numerical simulations as well as with those obtained using experimental identification techniques when applied to a laboratory combustor. The authors use these results to confirm the validity of the assumptions used in formulating the analytical model. A controller based on the minimization of a cost function using the LQR technique has been designed using the analytical model and implemented on a bench top laboratory combustor. The authors show that the controller is capable of suppressing the pressure oscillations in the combustor with a settling time much shorter than what had been attained before and without exciting secondary peaks.

  3. Validation of Community Models: 2. Development of a Baseline, Using the Wang-Sheeley-Arge Model

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    This paper is the second in a series providing independent validation of community models of the outer corona and inner heliosphere. Here I present a comprehensive validation of the Wang-Sheeley-Arge (WSA) model. These results will serve as a baseline against which to compare the next generation of comparable forecasting models. The WSA model is used by a number of agencies to predict Solar wind conditions at Earth up to 4 days into the future. Given its importance to both the research and forecasting communities, it is essential that its performance be measured systematically and independently. I offer just such an independent and systematic validation. I report skill scores for the model's predictions of wind speed and interplanetary magnetic field (IMF) polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests synoptic line of sight magnetograms. For this study I generated model results for monthly magnetograms from multiple observatories, spanning the Carrington rotation range from 1650 to 2074. I compare the influence of the different magnetogram sources and performance at quiet and active times. I also consider the ability of the WSA model to forecast both sharp transitions in wind speed from slow to fast wind and reversals in the polarity of the radial component of the IMF. These results will serve as a baseline against which to compare future versions of the model as well as the current and future generation of magnetohydrodynamic models under development for forecasting use.
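
    The skill scores reported in such validations are typically of the mean-squared-error form, measured against a simple reference forecast. The sketch below shows one common formulation with invented wind-speed values; it is an assumption for illustration, not the specific metric definition used in this paper.

      # Minimal sketch of a mean-squared-error skill score for solar-wind speed
      # forecasts, relative to a baseline forecast (here, the observed mean).
      # All numbers are illustrative only.
      import numpy as np

      observed  = np.array([380.0, 420.0, 610.0, 550.0, 460.0, 400.0])  # km/s at Earth
      predicted = np.array([400.0, 430.0, 560.0, 520.0, 480.0, 390.0])  # model forecast
      reference = np.full_like(observed, observed.mean())               # baseline forecast

      mse = lambda a, b: np.mean((a - b) ** 2)
      skill = 1.0 - mse(predicted, observed) / mse(reference, observed)
      print(f"skill score = {skill:.2f}")  # 1 = perfect, 0 = no better than baseline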

  4. The African American Acculturation Scale II: Cross-Validation and Short Form.

    ERIC Educational Resources Information Center

    Landrine, Hope; Klonoff, Elizabeth A.

    1995-01-01

    Studied African American culture, using a new, shortened, 33-item African American Acculturation Scale (AAAS-33) to assess the scale's validity and reliability. Comparisons between the original form and AAAS-33 reveal high correlations, however, the longer form may be sensitive to some beliefs, practices, and attitudes not assessed by the short…

  5. Validity of Social, Moral and Emotional Facets of Self-Description Questionnaire II

    ERIC Educational Resources Information Center

    Leung, Kim Chau; Marsh, Herbert W.; Yeung, Alexander Seeshing; Abduljabbar, Adel S.

    2015-01-01

    Studies adopting a construct validity approach can be categorized into within- and between-network studies. Few studies have applied between-network approach and tested the correlations of the social (same-sex relations, opposite-sex relations, parent relations), moral (honesty-trustworthiness), and emotional (emotional stability) facets of the…

  6. Effects of Risperidone on Aberrant Behavior in Persons with Developmental Disabilities: II. Social Validity Measures.

    ERIC Educational Resources Information Center

    McAdam, David B.; Zarcone, Jennifer R.; Hellings, Jessica; Napolitano, Deborah A.; Schroeder, Stephen R.

    2002-01-01

    Consumer satisfaction and social validity were measured during a double-blind, placebo-controlled evaluation of risperidone in treating aberrant behaviors of persons with developmental disabilities. A survey showed all 17 caregivers felt participation was positive. Community members (n=52) also indicated that when on medication, the 5 participants…

  7. Title: Modeling, Validation and Verification of Concurrent Behavior in the Panama Canal

    E-print Network

    Austin, Mark

    ABSTRACT Title: Modeling, Validation and Verification of Concurrent Behavior in the Panama Canal. An architectural model of a canal system, as measured by transportation criteria; scenario-based specifications; system behavioral model; animated verification and validation of the Panama

  8. Alaska North Slope Tundra Travel Model and Validation Study

    SciTech Connect

    Harry R. Bader; Jacynthe Guimond

    2006-03-01

    The Alaska Department of Natural Resources (DNR), Division of Mining, Land, and Water manages cross-country travel, typically associated with hydrocarbon exploration and development, on Alaska's arctic North Slope. This project is intended to provide natural resource managers with objective, quantitative data to assist decision making regarding opening of the tundra to cross-country travel. DNR designed standardized, controlled field trials, with baseline data, to investigate the relationships present between winter exploration vehicle treatments and the independent variables of ground hardness, snow depth, and snow slab thickness, as they relate to the dependent variables of active layer depth, soil moisture, and photosynthetically active radiation (a proxy for plant disturbance). Changes in the dependent variables were used as indicators of tundra disturbance. Two main tundra community types were studied: Coastal Plain (wet graminoid/moist sedge shrub) and Foothills (tussock). DNR constructed four models to address physical soil properties: two models for each main community type, one predicting change in depth of active layer and a second predicting change in soil moisture. DNR also investigated the limited potential management utility in using soil temperature, the amount of photosynthetically active radiation (PAR) absorbed by plants, and changes in microphotography as tools for the identification of disturbance in the field. DNR operated under the assumption that changes in the abiotic factors of active layer depth and soil moisture drive alteration in tundra vegetation structure and composition. Statistically significant differences in depth of active layer, soil moisture at a 15 cm depth, soil temperature at a 15 cm depth, and the absorption of photosynthetically active radiation were found among treatment cells and among treatment types. The models were unable to thoroughly investigate the interacting role between snow depth and disturbance due to a lack of variability in snow depth cover throughout the period of field experimentation. The amount of change in disturbance indicators was greater in the tundra communities of the Foothills than in those of the Coastal Plain. However the overall level of change in both community types was less than expected. In Coastal Plain communities, ground hardness and snow slab thickness were found to play an important role in change in active layer depth and soil moisture as a result of treatment. In the Foothills communities, snow cover had the most influence on active layer depth and soil moisture as a result of treatment. Once certain minimum thresholds for ground hardness, snow slab thickness, and snow depth were attained, it appeared that little or no additive effect was realized regarding increased resistance to disturbance in the tundra communities studied. DNR used the results of this modeling project to set a standard for maximum permissible disturbance of cross-country tundra travel, with the threshold set below the widely accepted standard of Low Disturbance levels (as determined by the U.S. Fish and Wildlife Service). DNR followed the modeling project with a validation study, which seemed to support the field trial conclusions and indicated that the standard set for maximum permissible disturbance exhibits a conservative bias in favor of environmental protection. Finally DNR established a quick and efficient tool for visual estimations of disturbance to determine when investment in field measurements is warranted. 
This Visual Assessment System (VAS) seemed to support the plot disturbance measurements taken during the modeling and validation phases of this project.

  9. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    PubMed

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601

  10. Validation of transport models using additive flux minimization technique

    SciTech Connect

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States)] [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States)] [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States)] [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)] [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
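
    A toy illustration of the additive-flux-minimization idea, under the assumption of a drastically simplified transport response, is sketched below: an additional effective diffusivity is varied until a predicted edge density profile best matches an experimental one. The stand-in profile model and all numbers are invented and bear no relation to FACETS::Core, DAKOTA, or the paleoclassical model.

      # Toy illustration of additive flux minimization: scan an additional
      # effective diffusivity until a simplified predicted edge density profile
      # best matches an "experimental" one. The transport response is invented.
      import numpy as np

      r = np.linspace(0.85, 1.0, 16)            # normalized radius, edge region
      n_exp = 1.0 - 0.6 * (r - 0.85) / 0.15     # toy "experimental" density profile
      d_model = 0.3                             # toy base-model diffusivity

      def predicted_profile(d_extra):
          # toy response: the profile gradient weakens as total diffusivity grows
          grad = 0.9 / (1.0 + d_extra / d_model)
          return 1.0 - grad * (r - 0.85) / 0.15

      candidates = np.linspace(0.0, 1.0, 201)   # scan of additional diffusivities
      mismatch = [np.sum((predicted_profile(d) - n_exp) ** 2) for d in candidates]
      best = candidates[int(np.argmin(mismatch))]
      print(f"optimal additional diffusivity ~ {best:.3f}")  # expect ~0.15 here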

  11. The role of global cloud climatologies in validating numerical models

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN

    1991-01-01

    Reliable estimates of the components of the surface radiation budget are important in studies of ocean-atmosphere interaction, land-atmosphere interaction, ocean circulation and in the validation of radiation schemes used in climate models. The methods currently under consideration must necessarily make certain assumptions regarding both the presence of clouds and their vertical extent. Because of the uncertainties in assumed cloudiness, all these methods involve perhaps unacceptable uncertainties. Here, a theoretical framework that avoids the explicit computation of cloud fraction and the location of cloud base in estimating the surface longwave radiation is presented. Estimates of the global surface downward fluxes and the oceanic surface net upward fluxes were made for four months (April, July, October and January) in 1985 to 1986. These estimates are based on a relationship between cloud radiative forcing at the top of the atmosphere and the surface obtained from a general circulation model. The radiation code is the version used in the UCLA/GLA general circulation model (GCM). The longwave cloud radiative forcing at the top of the atmosphere as obtained from Earth Radiation Budget Experiment (ERBE) measurements is used to compute the forcing at the surface by means of the GCM-derived relationship. This, along with clear-sky fluxes from the computations, yield maps of the downward longwave fluxes and net upward longwave fluxes at the surface. The calculated results are discussed and analyzed. The results are consistent with current meteorological knowledge and explainable on the basis of previous theoretical and observational works; therefore, it can be concluded that this method is applicable as one of the ways to obtain the surface longwave radiation fields from currently available satellite data.

  12. Validation of transport models using additive flux minimization technique

    NASA Astrophysics Data System (ADS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-10-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V&V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V&V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V&V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
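
    The essence of the workflow above is an optimizer (DAKOTA in the paper) that varies an additional diffusivity until a transport solver (FACETS::Core in the paper) reproduces the measured profile. The sketch below mimics that loop with a toy 1-D diffusion solver and a scalar optimizer; the grid, profiles, diffusivities, and boundary conditions are all illustrative and do not represent the actual codes.

      import numpy as np
      from scipy.optimize import minimize_scalar

      x = np.linspace(0.0, 1.0, 101)
      dx = x[1] - x[0]
      n_exp = 1.0 - 0.8 * x**2          # synthetic "experimental" density profile
      D_model = 0.05                     # diffusivity predicted by the transport model (assumed)

      def evolve(D_total, n0, dt=1e-4, steps=2000):
          # Explicit diffusion of the density profile with total diffusivity D_total.
          n = n0.copy()
          for _ in range(steps):
              lap = np.zeros_like(n)
              lap[1:-1] = (n[2:] - 2 * n[1:-1] + n[:-2]) / dx**2
              n += dt * D_total * lap
              n[0], n[-1] = n[1], 0.2    # crude zero-flux core / fixed edge boundary conditions
          return n

      def mismatch(D_add):
          # RMS difference between the predicted and "experimental" profiles.
          n_pred = evolve(D_model + D_add, n0=np.ones_like(x))
          return np.sqrt(np.mean((n_pred - n_exp)**2))

      res = minimize_scalar(mismatch, bounds=(0.0, 0.3), method="bounded")
      print("additional diffusivity needed:", res.x)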

  13. Results of site validation experiments. Volume II. Supporting documents 5 through 14

    SciTech Connect

    Not Available

    1983-01-01

    Volume II contains the following supporting documents: Summary of Geologic Mapping of Underground Investigations; Logging of Vertical Coreholes - "Double Box" Area and Exploratory Drift; WIPP High Precision Gravity Survey; Basic Data Reports for Drillholes; Brine Content of Facility Interval Strata; Mineralogical Content of Facility Interval Strata; Location and Characterization of Interbedded Materials; Characterization of Aquifers at Shaft Locations; and Permeability of Facility Interval Strata.

  14. METASTATES IN DISORDERED MEAN FIELD MODELS II: THE SUPERSTATES Λ

    E-print Network

    Külske, Christof

    METASTATES IN DISORDERED MEAN FIELD MODELS II: THE SUPERSTATES Λ. Christof Külske, Courant ... the concept of 'superstates', as recently proposed by Bovier and Gayrard. We discuss various notions

  15. An independent verification and validation of the Future Theater Level Model conceptual model

    SciTech Connect

    Hartley, D.S. III; Kruse, K.L.; Martellaro, A.J.; Packard, S.L.; Thomas, B. Jr.; Turley, V.K.

    1994-08-01

    This report describes the methodology and results of independent verification and validation performed on a combat model in its design stage. The combat model is the Future Theater Level Model (FTLM), under development by The Joint Staff/J-8. J-8 has undertaken its development to provide an analysis tool that addresses the uncertainties of combat more directly than previous models and yields more rapid study results. The methodology adopted for this verification and validation consisted of document analyses. Included were detailed examination of the FTLM design documents (at all stages of development), the FTLM Mission Needs Statement, and selected documentation for other theater level combat models. These documents were compared to assess the FTLM as to its design stage, its purpose as an analytical combat model, and its capabilities as specified in the Mission Needs Statement. The conceptual design passed those tests. The recommendations included specific modifications as well as a recommendation for continued development. The methodology is significant because independent verification and validation have not previously been reported as being performed on a combat model in its design stage. The results are significant because The Joint Staff/J-8 will be using the recommendations from this study in determining whether to proceed with development of the model.

  16. Gravitropic responses of the Avena coleoptile in space and on clinostats. II. Is reciprocity valid?

    NASA Technical Reports Server (NTRS)

    Johnsson, A.; Brown, A. H.; Chapman, D. K.; Heathcote, D.; Karlsson, C.

    1995-01-01

    Experiments were undertaken to determine if the reciprocity rule is valid for gravitropic responses of oat coleoptiles in the acceleration region below 1 g. The rule predicts that the gravitropic response should be proportional to the product of the applied acceleration and the stimulation time. Seedlings were cultivated on 1 g centrifuges and transferred to test centrifuges to apply a transverse g-stimulation. Since the responses occurred in microgravity, the uncertainties about the validity of clinostat simulation of weightlessness were avoided. Plants at two stages of coleoptile development were tested. Plant responses were obtained using time-lapse video recordings that were analyzed after the flight. Stimulus intensities and durations were varied and ranged from 0.1 to 1.0 g and from 2 to 130 min, respectively. For threshold g-doses the reciprocity rule was obeyed. The threshold dose was of the order of 55 g s and 120 g s, respectively, for the two groups of plants investigated. Reciprocity was also studied for bending responses ranging from just above the detectable level to about 10 degrees. The validity of the rule could not be confirmed for higher g-doses, chiefly because the data were more variable. It was investigated whether the uniformity of the overall response data increased when the gravitropic dose was defined as (g^m x t) with m-values different from unity. This was not the case, and the reciprocity concept is, therefore, valid also in the hypogravity region. The concept of gravitropic dose, the product of the transverse acceleration and the stimulation time, is also well-defined in the acceleration region studied. With the same hardware, tests were done on earth where responses occurred on clinostats. The results did not contradict the reciprocity rule, but the scatter in the data was large.
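
    Because the dose is simply the product of acceleration and stimulation time, the reported threshold doses translate directly into threshold stimulation times at each acceleration. The short sketch below works through that arithmetic using only the 55 and 120 g s figures quoted above.

      # Reciprocity says the gravitropic dose is the product g * t. For the reported
      # threshold doses, the stimulation time needed at a given acceleration follows directly.
      for dose_gs in (55.0, 120.0):        # threshold doses reported for the two plant groups, in g*s
          for g in (0.1, 0.5, 1.0):        # transverse accelerations within the tested 0.1-1.0 g range
              t_s = dose_gs / g            # stimulation time in seconds
              print(f"dose {dose_gs:5.0f} g*s at {g:.1f} g -> {t_s / 60:.1f} min")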

  17. Modeling local paleoclimates and validation in the southwest United States

    SciTech Connect

    Stamm, J.F.

    1992-01-01

    In order to evaluate the spatial and seasonal variations of paleoclimate in the southwest US, a local climate model (LCM) is developed that computes modern and 18,000 yr B.P. (18 ka) monthly temperature and precipitation from a set of independent variables. Independent variables include: terrain elevation, insolation, CO2 concentration, January and July winds, and January and July sea-surface temperatures. Solutions are the product of a canonical regression function which is calibrated using climate data from 641 stations in AZ, CA, CO, NM, NV, and UT in the National Weather Service Cooperative observer network. Validation of the LCM, using climate data at 98 climate stations from the period 1980--1984, indicates no significant departures of LCM solutions from the climate data. LCM solutions of modern and 18 ka climate are computed at a 15 km spacing over a rectangular domain extending 810 km east, 360 km west, 225 km north and 330 km south of the approximate location of Yucca Mt., NV. Solutions indicate mean annual temperature was 5°C cooler at 18 ka and mean annual precipitation increased 68%. The annual cycle of temperature and precipitation at 18 ka was amplified, with summers about 1°C cooler and 71% drier, and winters about 11°C colder and 35% wetter than the modern. Model results compare quite reasonably with proxy paleoclimate estimates from glacial deposits, pluvial lake deposits, pollen records, ostracode records and packrat midden records from the southwest US. However, a bias (+5°C to +10°C) is indicated for LCM solutions of summer temperatures at 18 ka.
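
    The calibrate-on-one-station-set, validate-on-another workflow described above can be sketched with an ordinary regression as below. The predictors, response, and coefficients are synthetic stand-ins; the actual LCM uses a canonical regression on the listed independent variables.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)

      # Synthetic stand-ins for the LCM predictors (elevation, insolation, CO2, winds, SSTs)
      # and a monthly temperature response; all values are illustrative only.
      n_cal, n_val, n_pred = 641, 98, 6
      X_cal = rng.normal(size=(n_cal, n_pred))
      true_coef = rng.normal(size=n_pred)
      y_cal = X_cal @ true_coef + rng.normal(scale=0.5, size=n_cal)

      model = LinearRegression().fit(X_cal, y_cal)        # calibrate on the station network

      X_val = rng.normal(size=(n_val, n_pred))            # independent validation stations
      y_val = X_val @ true_coef + rng.normal(scale=0.5, size=n_val)
      resid = model.predict(X_val) - y_val
      print("validation bias:", resid.mean(), "RMSE:", np.sqrt((resid**2).mean()))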

  18. Modeling the Arm II core in MicroCap IV

    SciTech Connect

    Dalton, A.C.

    1996-11-01

    This paper reports on how an electrical model for the core of the Arm II machine was created and how to use this model. We wanted to get a model for the electrical characteristics of the ARM II core, in order to simulate this machine and to assist in the design of a future machine. We wanted this model to be able to simulate saturation, variable loss, and reset. Using the Hodgdon model and the circuit analysis program MicroCap IV, this was accomplished. This paper is written in such a way as to allow someone not familiar with the project to understand it.

  19. Model of the expansion of H II region RCW 82

    SciTech Connect

    Krasnobaev, K. V.; Kotova, G. Yu. [Lomonosov Moscow State University, Moscow 119991 (Russian Federation); Tagirova, R. R., E-mail: kvk-kras@list.ru, E-mail: gviana2005@gmail.com, E-mail: rtaghirova@gmail.com [Space Research Institute of RAS, Moscow 117997 (Russian Federation)

    2014-05-10

    This paper aims to resolve the problem of formation of young objects observed in the RCW 82 H II region. In the framework of a classical trigger model the estimated time of fragmentation is larger than the estimated age of the H II region. Thus the young objects could not have formed during the dynamical evolution of the H II region. We propose a new model that helps resolve this problem. This model suggests that the H II region RCW 82 is embedded in a cloud of limited size that is denser than the surrounding interstellar medium. According to this model, when the ionization-shock front leaves the cloud it causes the formation of an accelerating dense gas shell. In the accelerated shell, the effects of the Rayleigh-Taylor (R-T) instability dominate and the characteristic time of the growth of perturbations with the observed magnitude of about 3 pc is 0.14 Myr, which is less than the estimated age of the H II region. The total time t {sub ?}, which is the sum of the expansion time of the H II region to the edge of the cloud, the time of the R-T instability growth, and the free fall time, is estimated as 0.44 < t {sub ?} < 0.78 Myr. We conclude that the young objects in the H II region RCW 82 could be formed as a result of the R-T instability with subsequent fragmentation into large-scale condensations.

  20. VALIDATING A QOL WORK/NON-WORK MODEL OF NAVAL RETENTION THROUGH ALTERNATIVE MODEL COMPARISONS

    Microsoft Academic Search

    Tracy L. Kline; Michael J. Schwerin; Murrey G. Olmsted

    With the costs of preparing personnel for militaristic challenges, recognizing the life needs affecting military retention behavior is critical. Data from the 1999 Navy Quality of Life (QOL) Survey was previously utilized to develop a relationship model of work and non-work QOL factors to retention intent (Wilcove, Schwerin, & Wolosin, 2003). Validation analyses of the 1998 USMC QOL Survey support

  1. Sociodemography of borderline personality disorder (PD): a comparison with Axis II PDs and psychiatric symptom disorders convergent validation.

    PubMed

    Taub, J M

    1996-11-01

    A theoretical objective of the present meta-analysis, based upon data derived from a previously reported review (Taub, 1995), was to empirically test two inductive hypotheses regarding educational background and social class across different criteria for the DSM-III diagnosis of borderline personality disorder (PD). A secondary purpose was to determine whether comorbidity of borderline PD with other Axis II PDs would significantly delineate socioeducational variables. Across 7/8 pairwise contrasts representing five studies, the distribution of Hollingshead Redlich (H-R) social classes II-IV in borderline PD (N = 326) significantly exceeded that in 457 diagnostic controls with Axis II PDs and psychiatric symptom disorders. Although average differences, as well as interactions, reflected by values of the H-R two-factor scale attained statistical significance, these were less consistent in magnitude and direction than the outcomes yielded by the distribution of social classes. For the borderline PD diagnosis, the inductive hypotheses were substantiated by findings of significantly advanced scholastic achievement, as well as the younger age of most cohorts versus diagnostic controls with Axis II PDs and psychiatric symptom disorders, and in pairwise contrasts of outpatients with hospitalized cohorts. Comorbidity of the borderline PD diagnosis was associated with significantly lower social class, lower scholastic achievement and, to a lesser extent, more severe psychopathology. Evidence for predominantly convergent validation relative to the socioeducational variables was substantiated by comparisons with (a) cohorts selected by criteria of the DSM-III-R, Gunderson's DIB and the Borderline Personality Scale; (b) Norwegian females admitted to Gaustad Hospital; and (c) patients with the DSM-III diagnosis of borderline PD attending an outpatient clinic in Norway. PMID:9003963

  2. Validation of chemometric models for the determination of deoxynivalenol on maize by mid-infrared spectroscopy

    Microsoft Academic Search

    G. Kos; H. Lohninger; R. Krska

    2003-01-01

    Validation methods for chemometric models are presented, which are a necessity for the evaluation of model performance and prediction ability. Reference methods with known performance can be employed for comparison studies. Other validation methods include test set and cross validation, where some samples are set aside for testing purposes. The choice of the testing method mainly depends on the size

  3. Sound scattering by several zooplankton groups. II. Scattering models

    E-print Network

    Stanton, Tim

    Sound scattering by several zooplankton groups. II. Scattering models Timothy K. Stanton 1996 Mathematical scattering models are derived and compared with data from zooplankton from several gross anatomical groups--fluidlike, elastic shelled, and gas bearing. The models are based upon

  4. The fear-avoidance model of chronic pain: Validation and age analysis using structural equation modeling

    Microsoft Academic Search

    Andrew J. Cook; Peter A. Brawer; Kevin E. Vowles

    2006-01-01

    The cognitive-behavioral, fear-avoidance (FA) model of chronic pain (Vlaeyen JWS, Kole-Snijders AMJ, Boeren RGB, van Eek H. Fear of movement\\/(re)injury in chronic low back pain and its relation to behavioral performance. Pain 1995a;62:363–72) has found broad empirical support, but its multivariate, predictive relationships have not been uniformly validated. Applicability of the model across age groups of chronic pain patients has

  5. Semipermeable Hollow Fiber Phantoms for Development and Validation of Perfusion-Sensitive MR Methods and Signal Models

    PubMed Central

    Anderson, J.R.; Ackerman, J.J.H.; Garbow, J.R.

    2015-01-01

    Two semipermeable, hollow fiber phantoms for the validation of perfusion-sensitive magnetic resonance methods and signal models are described. Semipermeable hollow fibers harvested from a standard commercial hemodialysis cartridge serve to mimic tissue capillary function. Flow of aqueous media through the fiber lumen is achieved with a laboratory-grade peristaltic pump. Diffusion of water and solute species (e.g., Gd-based contrast agent) occurs across the fiber wall, allowing exchange between the lumen and the extralumenal space. Phantom design attributes include: i) small physical size, ii) easy and low-cost construction, iii) definable compartment volumes, and iv) experimental control over media content and flow rate.

  6. MATSIM -The Development and Validation of a Numerical Voxel Model based on the MATROSHKA Phantom

    NASA Astrophysics Data System (ADS)

    Beck, Peter; Rollet, Sofia; Berger, Thomas; Bergmann, Robert; Hajek, Michael; Latocha, Marcin; Vana, Norbert; Zechner, Andrea; Reitz, Guenther

    The AIT Austrian Institute of Technology coordinates the project MATSIM (MATROSHKA Simulation) in collaboration with the Vienna University of Technology and the German Aerospace Center. The aim of the project is to develop a voxel-based model of the MATROSHKA anthropomorphic torso used at the International Space Station (ISS) as a foundation for performing Monte Carlo high-energy particle transport simulations for different irradiation conditions. Funded by the Austrian Space Applications Programme (ASAP), MATSIM is a co-investigation with the European Space Agency (ESA) ELIPS project MATROSHKA, an international collaboration of more than 18 research institutes and space agencies from all over the world, under the science and project lead of the German Aerospace Center. The MATROSHKA facility is designed to determine the radiation exposure of an astronaut onboard ISS and especially during an extravehicular activity. The numerical model developed in the frame of MATSIM is validated by reference measurements. In this report we give an overview of the model development and compare photon and neutron irradiations of the detector-equipped phantom torso with Monte Carlo simulations using FLUKA. Exposure to Co-60 photons was realized in the standard irradiation laboratory at Seibersdorf, while investigations with neutrons were performed at the thermal column of the Vienna TRIGA Mark-II reactor. The phantom was loaded with passive thermoluminescence dosimeters. In addition, first results of the calculated dose distribution within the torso are presented for a simulated exposure in low-Earth orbit.

  7. Quantitative endoscopic imaging elastic scattering spectroscopy: model system/tissue phantom validation

    NASA Astrophysics Data System (ADS)

    Lindsley, E. H.; Farkas, D. L.

    2008-02-01

    We have designed and built an imaging elastic scattering spectroscopy endoscopic instrument for the purpose of detecting cancer in vivo. As part of our testing and validation of the system, known targets representing potential disease states of interest were constructed using polystyrene beads of known average diameter and TiO2 crystals embedded in a two-layer agarose gel. The final construction geometry was verified using a dissection microscope. The phantoms were then imaged using the endoscopic probe at a known incident angle, and the results compared to model predictions. The mathematical model that was used combines classic ray-tracing optics with Mie scattering to predict the images that would be observed by the probe at a given physical distance from a Mie-regime scattering medium. This model was used to generate the expected observed response for a broad range of parameter values, and these results were then used as a library to fit the observed data from the phantoms. Compared against the theoretical library, the best matching signal correlated well with the known phantom material dimensions. These results lead us to believe that imaging elastic scattering can be useful in detection/diagnosis, but further refinement of the device will be necessary to detect the weak signals in a real clinical setting.
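
    The fitting strategy described above amounts to precomputing a library of model responses over a parameter range and selecting the best match to the observation. The sketch below illustrates that lookup with a placeholder forward model; it does not implement the ray-tracing/Mie model used in the paper.

      import numpy as np

      wavelengths = np.linspace(450, 750, 61)          # nm

      def forward_model(diameter_um):
          # Placeholder smooth spectral response depending on scatterer size
          # (stands in for the ray-tracing/Mie prediction).
          return 1.0 + 0.5 * np.cos(wavelengths / (20.0 + diameter_um))

      diameters = np.arange(0.5, 10.5, 0.1)            # candidate bead diameters, micrometres
      library = np.array([forward_model(d) for d in diameters])

      observed = forward_model(4.3) + np.random.default_rng(1).normal(scale=0.02, size=wavelengths.size)
      errors = np.sqrt(((library - observed) ** 2).mean(axis=1))   # RMS misfit per library entry
      best = diameters[np.argmin(errors)]
      print(f"best-matching diameter: {best:.1f} um")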

  8. Modeling and experimental validation of unsteady impinging flames

    SciTech Connect

    Fernandes, E.C.; Leandro, R.E. [Center for Innovation, Technology and Policy Research, Mechanical Engineering Department, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa Codex (Portugal)

    2006-09-15

    This study reports on a joint experimental and analytical study of premixed laminar flames impinging onto a plate at controlled temperature, with special emphasis on the study of periodically oscillating flames. Six types of flame structures were found, based on parametric variations of nozzle-to-plate distance (H), jet velocity (U), and equivalence ratio (f). They were classified as conical, envelope, disc, cool central core, ring, and side-lifted flames. Of these, the disc, cool central core, and envelope flames were found to oscillate periodically, with frequency and sound pressure levels increasing with Re and decreasing with nozzle-to-plate distance. The unsteady behavior of these flames was modeled using the formulation derived by Durox et al. [D. Durox, T. Schuller, S. Candel, Proc. Combust. Inst. 29 (2002) 69-75] for the cool central core flames, where the convergent burner acts as a Helmholtz resonator, driven by an external pressure fluctuation dependent on a velocity fluctuation at the burner mouth after a convective time delay τ. Based on this model, the present work shows that τ = [Re[2j tanh⁻¹((2δω + (1+N)jω² - jω₀²)/(2δω + (1-N)jω² - jω₀²))] + 2πK]/ω, i.e., there is a relation between the oscillation frequency (ω), the burner acoustic characteristics (ω₀, δ), and the time delay τ that is not explicitly dependent on N, the flame-flow normalized interaction coefficient [D. Durox, T. Schuller, S. Candel, Proc. Combust. Inst. 29 (2002) 69-75], because ∂τ/∂N = 0. Based on flame motion and noise analysis, K was found to physically represent the integer number of perturbations on the flame surface, or the number of coherent structures on the impinging jet. Additionally, assuming that τ = βH/U, where H is the nozzle-to-plate distance and U is the mean jet velocity, it is shown that β_Disc = 1.8, β_CCC = 1.03, and β_Env = 1.0. A physical analysis of the proportionality constant β showed that for the disc flames, τ corresponds to the ratio between H and the velocity of the coherent structures. In the case of envelope and cool central core flames, τ corresponds to the ratio between H and the mean jet velocity. The predicted frequency fits the experimental data, supporting the validity of the mathematical modeling, empirical formulation, and assumptions made. (author)
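
    The β values quoted above make the time delay easy to estimate for a given operating point. The sketch below evaluates τ = βH/U for the three oscillating flame types; the nozzle-to-plate distance and jet velocity are assumed values for illustration, not data from the study.

      # Quick use of the reported time-delay scaling tau = beta * H / U.
      # H and U below are assumed operating points, not measurements from the study.
      beta = {"disc": 1.8, "cool central core": 1.03, "envelope": 1.0}  # from the abstract
      H = 0.02   # nozzle-to-plate distance, m (assumed)
      U = 5.0    # mean jet velocity, m/s (assumed)
      for flame, b in beta.items():
          tau = b * H / U
          print(f"{flame:18s} tau = {tau * 1e3:.1f} ms")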

  9. Experimental Validation of the ns-2 Wireless Model using Simulation, Emulation, and Real Network

    E-print Network

    Experimental Validation of the ns-2 Wireless Model using Simulation, Emulation, and Real Network. Wireless network research in the last years is often based on simulation. Ns-2 is a widely used ... the accuracy of the ns-2 wireless model in the literature so far. In this paper we present the validation

  10. Adaptation and Validation of an Agent Model of Functional State and Performance for Individuals

    E-print Network

    Treur, Jan

    functional state model to the individual and validation of the resulting model. First, human experiments have mostly qualitative theories from Psychology, but was not validated yet using human experiments been performed by taking a number of steps. First of all, an experiment with 31 human subjects has been

  11. Verification and Validation of a model dedicated to mode handling of manufacturing systems

    Microsoft Academic Search

    N. Hamani

    2006-01-01

    This paper focuses on verification and validation (V&V) of a model dedicated to mode handling of flexible manufacturing systems. This model, specified using the synchronous formalism Safe State Machines, was proposed in our earlier work. The rigorous semantics that characterize this formalism make it possible to provide formal verification mechanisms ensuring determinism and dependability. A structured framework for verification and validation of

  12. A Process Modelling Framework for Formal Validation of Panama Canal System Operations

    E-print Network

    Austin, Mark

    A Process Modelling Framework for Formal Validation of Panama Canal System Operations. ... develop a process modeling framework for the evaluation and formal validation of the Panama Canal system. The Panama Canal is one of the world's most important waterways. Initially opened for operation in 1914

  13. Validation of computational models in biomechanics H B Henninger1,2

    E-print Network

    Utah, University of

    Validation of computational models in biomechanics. H B Henninger, S P Reese, A E Anderson ... biomechanics, and many recent articles have applied these concepts in an attempt to build credibility ... to present them in the context of computational biomechanics. Specifically, the task of model validation

  14. Reconceptualizing the Learning Transfer Conceptual Framework: Empirical Validation of a New Systemic Model

    ERIC Educational Resources Information Center

    Kontoghiorghes, Constantine

    2004-01-01

    The main purpose of this study was to examine the validity of a new systemic model of learning transfer and thus determine if a more holistic approach to training transfer could better explain the phenomenon. In all, this study confirmed the validity of the new systemic model and suggested that a high performance work system could indeed serve as…

  15. Reconceptualizing the learning transfer conceptual framework: empirical validation of a new systemic model

    Microsoft Academic Search

    Constantine Kontoghiorghes

    2004-01-01

    The main purpose of this study was to examine the validity of a new systemic model of learning transfer and thus determine if a more holistic approach to training transfer could better explain the phenomenon. In all, this study confirmed the validity of the new systemic model and suggested that a high performance work system could indeed serve as a

  16. Heterogeneous Concurrent Modeling and Design in Java (Volume 3: Ptolemy II Domains)

    E-print Network

    Heterogeneous Concurrent Modeling and Design in Java (Volume 3: Ptolemy II Domains). ... Infineon, Microsoft, National Instruments, and Toyota. PTOLEMY II HETEROGENEOUS CONCURRENT MODELING ... Haiyang Zheng. VOLUME 3: PTOLEMY II DOMAINS. Authors: Shuvra S. Bhattacharyya, Christopher Brooks, Elaine

  17. Validation of the integration of CFD and SAS4A/SASSYS-1: Analysis of EBR-II shutdown heat removal test 17

    SciTech Connect

    Thomas, J. W.; Fanning, T. H.; Vilim, R.; Briggs, L. L. [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439-4842 (United States)

    2012-07-01

    Recent analyses have demonstrated the need to model multidimensional phenomena, particularly thermal stratification in outlet plena, during safety analyses of loss-of-flow transients of certain liquid-metal cooled reactor designs. Therefore, Argonne's reactor systems safety code SAS4A/SASSYS-1 is being enhanced by integrating 3D computational fluid dynamics models of the plena. A validation exercise of the new tool is being performed by analyzing the protected loss-of-flow event demonstrated by the EBR-II Shutdown Heat Removal Test 17. In this analysis, the behavior of the coolant in the cold pool is modeled using the CFD code STAR-CCM+, while the remainder of the cooling system and the reactor core are modeled with SAS4A/SASSYS-1. This paper summarizes the code integration strategy and provides the predicted 3D temperature and velocity distributions inside the cold pool during SHRT-17. The results of the coupled analysis should be considered preliminary at this stage, as the exercise pointed to the need to improve the CFD model of the cold pool tank. (authors)

  18. Test cell modeling and optimization for FPD-II

    SciTech Connect

    Haney, S.W.; Fenstermacher, M.E.

    1985-04-10

    The Fusion Power Demonstration, Configuration II (FPD-II), will be a DT-burning tandem mirror facility with thermal barriers, designed as the next-step engineering test reactor (ETR) to follow the tandem mirror ignition test machines. Current plans call for FPD-II to be a multi-purpose device. For approximately the first half of its lifetime, it will operate as a high-Q ignition machine designed to reach or exceed engineering break-even and to demonstrate the technological feasibility of tandem mirror fusion. The second half of its operation will focus on the evaluation of candidate reactor blanket designs using a neutral-beam-driven test cell inserted at the midplane of the 90 m long cell. This machine, called FPD-II+T, uses an insert configuration similar to that used in the MFTF-α+T study. The modeling and optimization of FPD-II+T are the topic of the present paper.

  19. Simplified Risk Model Version II (SRM-II) Structure and Application

    SciTech Connect

    Eide, Steven Arvid; Wierman, Thomas Edward

    1999-08-01

    The Simplified Risk Model Version II (SRM-II) is a quantitative tool for efficiently evaluating the risk from Department of Energy waste management activities. Risks evaluated include human safety and health and environmental impact. Both accidents and normal, incident-free operation are considered. The risk models are simplifications of more detailed risk analyses, such as those found in environmental impact statements, safety analysis reports, and performance assessments. However, wherever possible, conservatisms in such models have been removed to obtain best estimate results. The SRM-II is used to support DOE complex-wide environmental management integration studies. Typically such activities involve risk predictions including such activities as initial storage, handling, treatment, interim storage, transportation, and final disposal.

  20. Simplified Risk Model Version II (SRM-II) Structure and Application

    SciTech Connect

    S. A. Eide; T. E. Wierman

    1999-08-01

    The Simplified Risk Model Version II (SRM-II) is a quantitative tool for efficiently evaluating the risk from Department of Energy waste management activities. Risks evaluated include human safety and health and environmental impact. Both accidents and normal, incident-free operation are considered. The risk models are simplifications of more detailed risk analyses, such as those found in environmental impact statements, safety analysis reports, and performance assessments. However, wherever possible, conservatisms in such models have been removed to obtain best estimate results. The SRM-II is used to support DOE complex-wide environmental management integration studies. Typically such studies involve risk predictions covering the entire waste management program, including such activities as initial storage, handling, treatment, interim storage, transportation, and final disposal.

  1. Testing the validity of the International Atomic Energy Agency (IAEA) safety culture model.

    PubMed

    López de Castro, Borja; Gracia, Francisco J; Peiró, José M; Pietrantoni, Luca; Hernández, Ana

    2013-11-01

    This paper takes the first steps to empirically validate the widely used model of safety culture of the International Atomic Energy Agency (IAEA), composed of five dimensions, further specified by 37 attributes. To do so, three independent and complementary studies are presented. First, 290 students serve to collect evidence about the face validity of the model. Second, 48 experts in organizational behavior judge its content validity. And third, 468 workers in a Spanish nuclear power plant help to reveal how closely the theoretical five-dimensional model can be replicated. Our findings suggest that several attributes of the model may not be related to their corresponding dimensions. According to our results, a one-dimensional structure fits the data better than the five dimensions proposed by the IAEA. Moreover, the IAEA model, as it stands, seems to have rather moderate content validity and low face validity. Practical implications for researchers and practitioners are included. PMID:24076304

  2. DISCRETE EVENT MODELING IN PTOLEMY II

    Microsoft Academic Search

    Lukito Muliadi

    Abstract This report describes the discrete-event semantics and its implementation,in the Ptolemy II software architecture. The discrete-event system representation is appropriate for time-oriented systems such as queueing systems, communication networks, and hardware systems. A key strength in our discrete-event implementation ,is that simultaneous ,events are handled systematically and deterministically. A formal and rigorous treatment of this property is given. One

  3. Hydrodynamical Models of Type II Plateau Supernovae

    Microsoft Academic Search

    Melina C. Bersten; Omar Benvenuto; Mario Hamuy

    2011-01-01

    We present bolometric light curves of Type II plateau supernovae obtained using a newly developed, one-dimensional Lagrangian hydrodynamic code with flux-limited radiation diffusion. Using our code we calculate the bolometric light curve and photospheric velocities of SN 1999em, obtaining a remarkably good agreement with observations despite the simplifications used in our calculation. The physical parameters used in our calculation are

  4. Quantum theory of the Bianchi II model

    E-print Network

    Hervé Bergeron; Orest Hrycyna; Przemysław Małkiewicz; Włodzimierz Piechocki

    2014-08-24

    We describe the quantum evolution of the vacuum Bianchi II universe in terms of the transition amplitude between two asymptotic quantum Kasner-like states. For large values of the momentum variable the classical and quantum calculations give similar results. The difference occurs for small values of this variable due to the Heisenberg uncertainty principle. Our results can be used, to some extent, as a building block of the quantum evolution of the vacuum Bianchi IX universe.

  5. Development of a Hydraulic Manipulator Servoactuator Model: Simulation and Experimental Validation

    E-print Network

    Papadopoulos, Evangelos

    Development of a Hydraulic Manipulator Servoactuator Model: Simulation and Experimental Validation. In this paper, modelling and identification of a hydraulic servoactuator system is presented ... leakage, and load dynamics. System parameters are identified based on a high-performance hydraulic

  6. Using Structural Equation Modeling To Test for Differential Reliability and Validity: An Empirical Demonstration.

    ERIC Educational Resources Information Center

    Raines-Eudy, Ruth

    2000-01-01

    Demonstrates empirically a structural equation modeling technique for group comparison of reliability and validity. Data, which are from a study of 495 mothers' attitudes toward pregnancy, have a one-factor measurement model and three sets of subpopulation comparisons. (SLD)

  7. Importance of Sea Ice for Validating Global Climate Models

    NASA Technical Reports Server (NTRS)

    Geiger, Cathleen A.

    1997-01-01

    Reproduction of current day large-scale physical features and processes is a critical test of global climate model performance. Without this benchmark, prognoses of future climate conditions are at best speculation. A fundamental question relevant to this issue is: which processes and observations are both robust and sensitive enough to be used for model validation, and are they also indicators of the problem at hand? In the case of global climate, one of the problems at hand is to distinguish between anthropogenic and naturally occurring climate responses. The polar regions provide an excellent testing ground to examine this problem because few humans make their livelihood there, such that anthropogenic influences in the polar regions usually spawn from global redistribution of a source originating elsewhere. Concomitantly, polar regions are one of the few places where responses to climate are non-anthropogenic. Thus, if an anthropogenic effect has reached the polar regions (e.g. the case of upper atmospheric ozone sensitivity to CFCs), it has most likely had an impact globally but is more difficult to sort out from local effects in areas where anthropogenic activity is high. Within this context, sea ice has served as both a monitoring platform and sensitivity parameter of polar climate response since the time of Fridtjof Nansen. Sea ice resides in the polar regions at the air-sea interface such that changes in either the global atmospheric or oceanic circulation set up complex non-linear responses in sea ice which are uniquely determined. Sea ice currently covers a maximum of about 7% of the earth's surface but was completely absent during the Jurassic Period and far more extensive during the various ice ages. It is also geophysically very thin (typically <10 m in the Arctic, <3 m in the Antarctic) compared to the troposphere (roughly 10 km) and deep ocean (roughly 3 to 4 km). Because of these unique conditions, polar researchers regard sea ice as one of the more important features to monitor in terms of heat, mass, and momentum transfer between the air and sea and, furthermore, in terms of the impact of such responses on global climate.

  8. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry in order to assure the model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of the country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved to be the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for a client market portfolio align with the repeatability of such catastrophe losses in the country. The validation process also included collaboration between Aon Benfield and its client in order to consider the insurance market penetration in Algeria, estimated at approximately 5%. Thus, we believe that the applied approach led towards the production of an earthquake model for Algeria that is scientifically sound and reliable on one side, and market- and client-oriented on the other.
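
    The rebuilding cost factors quoted above combine with a damage-grade distribution into a mean damage ratio in a straightforward way. The grade probabilities in the sketch below are hypothetical; only the cost factors come from the abstract.

      # Mean damage ratio from an EMS-98 damage-grade distribution and the rebuilding
      # cost factors quoted above (10%, 20%, 35%, 75%, 100% for grades 1-5).
      cost_factor = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}
      # Hypothetical probability of each damage grade for one building class at one intensity:
      p_grade = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.10, 5: 0.05}   # remaining 0.10 -> no damage
      mean_damage_ratio = sum(p_grade[g] * cost_factor[g] for g in cost_factor)
      print(f"mean damage ratio: {mean_damage_ratio:.3f}")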

  9. Large-eddy simulation of flow past urban-like surfaces: A model validation study

    NASA Astrophysics Data System (ADS)

    Cheng, Wai Chi; Porté-Agel, Fernando

    2013-04-01

    Accurate prediction of atmospheric boundary layer (ABL) flow and its interaction with urban surfaces is critical for understanding the transport of momentum and scalars within and above cities. This, in turn, is essential for predicting the local climate and pollutant dispersion patterns in urban areas. Large-eddy simulation (LES) explicitly resolves the large-scale turbulent eddy motions and, therefore, can potentially provide improved understanding and prediction of flows inside and above urban canopies. This study focuses on developing and validating an LES framework to simulate flow past urban-like surfaces. In particular, large-eddy simulations were performed of flow past an infinitely long two-dimensional (2D) building and an array of 3D cubic buildings. An immersed boundary (IB) method was employed to simulate both 2D and 3D buildings. Four subgrid-scale (SGS) models, including (i) the traditional Smagorinsky model, (ii) the Lagrangian dynamic model, (iii) the Lagrangian scale-dependent dynamic model, and (iv) the modulated gradient model, were evaluated using the 2D building case. The simulated velocity streamlines and the vertical profiles of the mean velocities and variances were compared with experimental results. The modulated gradient model shows the best overall agreement with the experimental results among the four SGS models. In particular, the flow recirculation, the reattachment position and the vertical profiles are accurately reproduced with a grid resolution of Nx x Ny x Nz = 160 x 40 x 160 (nx x nz = 13 x 16 covering the block). After validating the LES framework with the 2D building case, it was further applied to simulate a boundary-layer flow past a 3D building array. A regular aligned building array with seven rows of cubic buildings was simulated. The building spacings in the streamwise and spanwise directions were both equal to the building height. A developed turbulent boundary-layer flow was used as the incoming flow. The results were compared with wind tunnel experimental data. Good agreement was observed between the LES results and the experimental data in the vertical profiles of the mean velocities and velocity variances at different positions within the building array.
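
    Of the four SGS closures compared above, the traditional Smagorinsky model is the simplest to write down: an eddy viscosity nu_t = (C_s Delta)^2 |S| built from the resolved strain rate. The snippet below evaluates it on a toy 2-D velocity field; the coefficient, filter width, and field are illustrative assumptions and the snippet is not part of the authors' LES code.

      import numpy as np

      # Minimal sketch of the traditional Smagorinsky eddy viscosity
      #   nu_t = (C_s * Delta)^2 * |S|,   |S| = sqrt(2 S_ij S_ij),
      # evaluated on a uniform grid; the velocity field and C_s are illustrative.
      Cs, delta = 0.16, 0.01                      # Smagorinsky coefficient and filter width (m)
      n = 32
      x = np.linspace(0, 1, n)
      u = np.sin(2 * np.pi * x)[:, None] * np.ones(n)   # toy 2-D velocity component
      v = np.zeros((n, n))

      dudx, dudy = np.gradient(u, x, x)
      dvdx, dvdy = np.gradient(v, x, x)
      S11, S22 = dudx, dvdy
      S12 = 0.5 * (dudy + dvdx)
      S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))
      nu_t = (Cs * delta) ** 2 * S_mag
      print("max eddy viscosity:", nu_t.max())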

  10. The Neurological Outcome Scale for Traumatic Brain Injury (NOS-TBI): II. Reliability and Convergent Validity

    PubMed Central

    Wilde, Elisabeth A.; Kelly, Tara M.; Weyand, Annie M.; Yallampalli, Ragini; Waldron, Eric J.; Pedroza, Claudia; Schnelle, Kathleen P.; Boake, Corwin; Levin, Harvey S.; Moretti, Paolo

    2010-01-01

    A standardized measure of neurological dysfunction specifically designed for TBI currently does not exist, and the lack of assessment of this domain represents a substantial gap. To address this, the Neurological Outcome Scale for Traumatic Brain Injury (NOS-TBI) was developed for TBI outcomes research through the addition to and modification of items specifically relevant to patients with TBI, based on the National Institutes of Health Stroke Scale. In a sample of 50 participants (mean age = 33.3 years, SD = 12.9) ≤18 months (mean = 3.1, SD = 3.2) following moderate (n = 8) to severe (n = 42) TBI, internal consistency of the NOS-TBI was high (Cronbach's alpha = 0.942). Test-retest reliability also was high (ρ = 0.97). Convergent validity was demonstrated through significant Spearman rank-order correlations between the NOS-TBI and the concurrently administered Disability Rating Scale (ρ = 0.75). These results suggest that the NOS-TBI is a valid measure of neurological functioning in patients with moderate to severe TBI. PMID:20210595
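
    The two statistics reported above, Cronbach's alpha for internal consistency and the Spearman rank-order correlation for convergent validity, can be computed as in the sketch below; the item-level data are synthetic and only illustrate the calculations.

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(2)

      # Synthetic item-level ratings used only to illustrate the two statistics.
      items = rng.integers(0, 4, size=(50, 15)).astype(float)   # 50 patients x 15 items
      totals = items.sum(axis=1)

      # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total).
      k = items.shape[1]
      alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / totals.var(ddof=1))

      # Spearman rank-order correlation with a second (synthetic) scale, e.g. a DRS-like score.
      other_scale = totals + rng.normal(scale=5.0, size=totals.size)
      rho, p = spearmanr(totals, other_scale)
      print(f"alpha = {alpha:.3f}, spearman rho = {rho:.2f} (p = {p:.3g})")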

  11. Percolation and sieving segregation patterns: Quantification, mechanistic theory, model development and validation, and application

    NASA Astrophysics Data System (ADS)

    Tang, Pingjun

    The general goal of this research was to study percolation and sieving segregation patterns---quantification, mechanistic theory, model development and validation, and application---for particulate materials. A second generation primary segregation shear cell (PSSC-II) was designed and fabricated to model the sieving and percolation segregation mechanisms of particulate materials. The two test materials used in this research were spherical glass beads (denoted as G) and irregularly shaped mash poultry feed (denoted as F), which are considered representatives of ideal and real-world materials, respectively. The PSSC-II test results showed that there is a linear relationship between normalized segregation rate (NSR) and absolute size or size ratio for GG and FG combinations; whereas, the linear relationship does not hold for FF and GF combinations, although the effects of absolute size and size ratio on NSR were significant (P < 0.001). The NSR is defined as the ratio of collected fines mass to feed fines mass divided by total time. Furthermore, comparisons between these four combinations showed that, compared with coarse particle properties, fine particle properties other than size, including density, surface texture, and electrostatic charge, of a binary mixture play a dominant role in NSR. For instance, the higher density and smoother surface of fine glass beads lead to a much greater NSR for GG and FG combinations compared with the fine feed particles, which have lower density and rough surface texture, for FF and GF combinations. Additionally, the irregularly shaped coarse bed of particles (higher porosity) causes a higher segregation potential of fines compared with spherical coarse particles with lower porosity. A mechanistic theory-based segregation model (denoted as MTB model) for GG and FG combinations was developed using mechanics, dimensional analysis, and linear regression methods. The MTB model, for the first time, successfully correlated the effect of particle size, density, and shape to the segregation potential of binary mixtures in one quantitative equation. Furthermore, the MTB model has the potential to accommodate additional effects such as surface texture and electrostatic charge to generalize the model. Finally, as a case study, the effect of feed particle segregation on bird performance was examined to assess the practical relevance of the research results. The results showed that, due to bird selection behavior and particle segregation, birds did not sufficiently consume those nutrients that are contained in smaller feed particles (<1,180 µm). The results of feed particle size and nutrient analysis verified the above observations. (Abstract shortened by UMI.)
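
    Using the definition quoted above, the normalized segregation rate follows directly from three measured quantities. The numbers in the sketch below are made up and are not PSSC-II measurements.

      # Normalized segregation rate (NSR) as defined above:
      #   NSR = (collected fines mass / feed fines mass) / total time.
      # The numbers are illustrative, not PSSC-II measurements.
      collected_fines_g = 12.0
      feed_fines_g = 100.0
      total_time_min = 5.0
      nsr = (collected_fines_g / feed_fines_g) / total_time_min
      print(f"NSR = {nsr:.3f} per minute")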

  12. Some guidance on preparing validation plans for the DART Full System Models.

    SciTech Connect

    Gray, Genetha Anne; Hough, Patricia Diane; Hills, Richard Guy (Sandia National Laboratories, Albuquerque, NM)

    2009-03-01

    Planning is an important part of computational model verification and validation (V&V) and the requisite planning document is vital for effectively executing the plan. The document provides a means of communicating intent to the typically large group of people, from program management to analysts to test engineers, who must work together to complete the validation activities. This report provides guidelines for writing a validation plan. It describes the components of such a plan and includes important references and resources. While the initial target audience is the DART Full System Model teams in the nuclear weapons program, the guidelines are generally applicable to other modeling efforts. Our goal in writing this document is to provide a framework for consistency in validation plans across weapon systems, different types of models, and different scenarios. Specific details contained in any given validation plan will vary according to application requirements and available resources.

  13. A Formal Algorithm for Verifying the Validity of Clustering Results Based on Model Checking

    PubMed Central

    Huang, Shaobin; Cheng, Yuan; Lang, Dapeng; Chi, Ronghua; Liu, Guofeng

    2014-01-01

    The limitations in general methods to evaluate clustering will remain difficult to overcome if verifying the clustering validity continues to be based on clustering results and evaluation index values. This study focuses on a clustering process to analyze crisp clustering validity. First, we define the properties that must be satisfied by valid clustering processes and model clustering processes based on program graphs and transition systems. We then recast the analysis of clustering validity as the problem of verifying whether the model of clustering processes satisfies the specified properties with model checking. That is, we try to build a bridge between clustering and model checking. Experiments on several datasets indicate the effectiveness and suitability of our algorithms. Compared with traditional evaluation indices, our formal method can not only indicate whether the clustering results are valid but, in the case the results are invalid, can also detect the objects that have led to the invalidity. PMID:24608823

  14. Composing Different Models of Computation in Kepler and Ptolemy II

    Microsoft Academic Search

    Antoon Goderis; Christopher Brooks; Ilkay Altintas; Edward A. Lee; Carole A. Goble

    2007-01-01

    A model of computation (MoC) is a formal abstraction of execution in a computer. There is a need for composing MoCs in e-science. Kepler, which is based on Ptolemy II, is a scientific workflow environment that allows for MoC composition. This paper explains how MoCs are combined in Kepler and Ptolemy II and analyzes which combinations of MoCs are currently

  15. Examining the Reliability and Validity of Clinician Ratings on the Five-Factor Model Score Sheet

    PubMed Central

    Few, Lauren R.; Miller, Joshua D.; Morse, Jennifer Q.; Yaggi, Kirsten E.; Reynolds, Sarah K.; Pilkonis, Paul A.

    2013-01-01

    Despite substantial research use, measures of the five-factor model (FFM) are infrequently used in clinical settings due, in part, to issues related to administration time and a reluctance to use self-report instruments. The current study examines the reliability and validity of the Five-Factor Model Score Sheet (FFMSS), which is a 30-item clinician rating form designed to assess the five domains and 30 facets of one conceptualization of the FFM. Studied in a sample of 130 outpatients, clinical raters demonstrated reasonably good interrater reliability across personality profiles and the domains manifested good internal consistency with the exception of Neuroticism. The FFMSS ratings also evinced expected relations with self-reported personality traits (e.g., FFMSS Extraversion and Schedule for Nonadaptive and Adaptive Personality Positive Temperament) and consensus-rated personality disorder symptoms (e.g., FFMSS Agreeableness and Narcissistic Personality Disorder). Finally, on average, the FFMSS domains were able to account for approximately 50% of the variance in domains of functioning (e.g., occupational, parental) and were even able to account for variance after controlling for Axis I and Axis II pathology. Given these findings, it is believed that the FFMSS holds promise for clinical use. PMID:20519735

  16. The Adriatic Sea ecosystem seasonal cycle: Validation of a three-dimensional numerical model

    Microsoft Academic Search

    L. Polimene; N. Pinardi; M. Zavatarelli; S. Colella

    2006-01-01

    A three-dimensional coupled biogeochemical-circulation numerical model was implemented in the Adriatic Sea. The biogeochemical part of the model is a development of the European Seas Regional Ecosystem Model (ERSEM II), while the circulation model is the Adriatic Sea implementation of the Princeton Ocean Model (POM). The model was run under climatological monthly varying atmospheric and river runoff forcing in order

  17. A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection

    Microsoft Academic Search

    Ron Kohavi

    1995-01-01

    We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report
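
    A minimal sketch of the ten-fold cross-validation estimate discussed above, alongside the more expensive leave-one-out estimate, is shown below using scikit-learn; the dataset and classifier are placeholders.

      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score, LeaveOneOut
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=200, n_features=10, random_state=0)
      clf = DecisionTreeClassifier(random_state=0)

      # Ten-fold cross-validation accuracy estimate.
      acc_10fold = cross_val_score(clf, X, y, cv=10).mean()
      # The more expensive leave-one-out estimate, for comparison.
      acc_loo = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
      print(f"10-fold: {acc_10fold:.3f}, leave-one-out: {acc_loo:.3f}")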

  18. Multiperiod Multiproduct Advertising Budgeting. Part II: Stochastic Optimization Modeling

    E-print Network

    Beltran-Royo, Cesar

    Multiperiod Multiproduct Advertising Budgeting. Part II: Stochastic Optimization Modeling. ... for the Multiperiod Multiproduct Advertising Budgeting problem, so that the expected profit of the advertising ... of standard optimization software. The model has been tested for planning a realistic advertising campaign

  19. H II Galaxies versus Photoionization Models for Evolving Starbursts

    Microsoft Academic Search

    Grazyna Stasinska; Claus Leitherer

    1996-01-01

    We have constructed a grid of models representing an H II region produced by an evolving starburst embedded in a gas cloud of the same metallicity. The models were produced with the spectral energy distribution from a stellar evolutionary synthesis code as input for a photoionization code that computes the emission-line strengths and equivalent widths. Stellar evolution was assumed to

  20. Modeling Heart Rate Regulation--Part II: Parameter Identification and Analysis

    E-print Network

    Olufsen, Mette Sofie

    Modeling Heart Rate Regulation--Part II: Parameter Identification and Analysis. ... of this study we introduced a 17-parameter model that can predict heart rate regulation during postural change ... to adequately represent the observed heart rate response. In part I and in previous work (Olufsen et al. 2006

  1. Validation of Cognitive Structures: A Structural Equation Modeling Approach

    Microsoft Academic Search

    Dimiter M. Dimitrov; Tenko Raykov

    2003-01-01

    Determining sources of item difficulty and using them for selection or development of test items is a bridging task of psychometrics and cognitive psychology. A key problem in this task is the validation of hypothesized cognitive operations required for correct solution of test items. In previous research, the problem has been addressed frequently via use of the linear logistic test

  2. An Approach to Model and Validate Publish/Subscribe Architectures

    Microsoft Academic Search

    Luca Zanolin; Carlo Ghezzi; Luciano Baresi

    2003-01-01

    Distributed applications are increasingly built as federations of components that join and leave the cooperation dynamically. Publish/subscribe middleware is a promising infrastructure to support these applications, but unfortunately complicates the understanding and validation of these systems. It is easy to understand what each component does, but it is hard to understand what the global federation achieves. In

  3. VALIDATION OF A LAGRANGIAN MODEL PLUME RISE SCHEME AGAINST THE KINCAID DATA SET

    Microsoft Academic Search

    Helen N Webster; David J Thomson; Alison L Redington; Derrick B Ryall

    In this paper a new plume rise scheme for the Lagrangian model NAME is described. We use the Kincaid data set to validate the scheme and compare the model with other leading atmospheric dispersion models at short range. NAME is a Lagrangian model in which large numbers of particles are released into the model atmosphere (Maryon,

  4. Computational modeling and validation of intraventricular flow in a simple model of the left ventricle

    NASA Astrophysics Data System (ADS)

    Vedula, Vijay; Fortini, Stefania; Seo, Jung-Hee; Querzoli, Giorgio; Mittal, Rajat

    2014-12-01

    Simulations of flow inside a laboratory model of the left ventricle are validated against experiments. The simulations employ an immersed boundary-based method for flow modeling, and the computational model of the expanding-contracting ventricle is constructed via image segmentation. A quantitative comparison of the phase-averaged velocity and vorticity fields between the simulation and the experiment shows a reasonable agreement, given the inherent uncertainties in the modeling procedure. Simulations also exhibit a good agreement in terms of time-varying net circulation, as well as clinically important metrics such as flow-wave propagation velocity and its ratio with peak early-wave flow velocity. The detailed and critical assessment of this comparison is used to identify and discuss the key challenges that are faced in such a validation study.

  5. Development and validation of a dynamical atmosphere-vegetation-soil HTO transport and OBT formation model.

    PubMed

    Ota, Masakazu; Nagai, Haruyasu

    2011-09-01

    A numerical model simulating the transport of tritiated water (HTO) in the atmosphere-soil-vegetation system and the accumulation of organically bound tritium (OBT) in vegetative leaves was developed. A characteristic of the model is that, for calculating tritium transport, it incorporates a dynamical atmosphere-soil-vegetation model (SOLVEG-II) that calculates the transport of heat and water and the exchange of CO(2). The processes included for calculating tissue free water tritium (TFWT) in leaves are HTO exchange between canopy air and leaf cellular water, root uptake of aqueous HTO in soil, photosynthetic assimilation of TFWT into OBT, and TFWT formation from OBT through respiration. Tritium fluxes from the last two processes are input to a carbohydrate compartment model in leaves that calculates OBT translocation from leaves and allocation in them, by using the photosynthesis and respiration rates in leaves. The developed model was then validated through a simulation of an existing experiment on acute exposure of grape plants to atmospheric HTO. The calculated TFWT concentration in leaves increased soon after the start of the HTO exposure, reaching equilibrium with the atmospheric HTO within a few hours, and then rapidly decreased after the end of the exposure. The calculated non-exchangeable OBT amount in leaves increased linearly during the exposure and, after the exposure, decreased rapidly in the daytime and moderately at night. These variations in the calculated TFWT concentrations and OBT amounts, mainly controlled by HTO exchange between canopy air and leaf cellular water and by carbohydrate translocation from leaves, respectively, agreed with the observations within average errors of a factor of two. PMID:21665337
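
    As a rough illustration of the leaf-level processes listed above (TFWT exchange with canopy-air HTO, assimilation into OBT, and release through respiration), the sketch below integrates a toy two-compartment model over an acute exposure. The rate constants, exposure duration, and structure are purely illustrative and are not taken from SOLVEG-II or the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Toy two-compartment sketch: TFWT relaxes toward the canopy-air HTO level while a
      # small fraction is fixed as OBT via photosynthesis and released again by respiration.
      k_exchange = 2.0     # 1/h, leaf water <-> canopy air HTO exchange (assumed)
      k_photo    = 0.05    # 1/h, TFWT -> OBT, photosynthetic assimilation (assumed)
      k_resp     = 0.01    # 1/h, OBT -> TFWT, respiration (assumed)

      def canopy_hto(t):
          return 1.0 if t < 4.0 else 0.0   # 4-hour acute HTO exposure, then clean air

      def rhs(t, y):
          tfwt, obt = y
          d_tfwt = k_exchange * (canopy_hto(t) - tfwt) - k_photo * tfwt + k_resp * obt
          d_obt = k_photo * tfwt - k_resp * obt
          return [d_tfwt, d_obt]

      sol = solve_ivp(rhs, (0.0, 24.0), [0.0, 0.0], max_step=0.05)
      print("TFWT and OBT at 24 h:", sol.y[0, -1], sol.y[1, -1])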

  6. Capability maturity models support of modeling and simulation verification, validation, and accreditation

    Microsoft Academic Search

    Candace L. Conwell; Rosemary Enright; Marcia A. Stutzman

    2000-01-01

    Both government and industry are involved in the acquisition and development of modeling and simulation (M&S) products. The effectiveness and maturity of an organization's acquisition process directly affect the cost, schedule, and quality of the M&S products that are delivered to the user. When M&S program sponsors implement best practices throughout acquisition, critical verification and validation (V&V) tasks can be

  7. Capability Maturity Models support of modeling and simulation verification, validation, and accreditation

    Microsoft Academic Search

    Candace L. Conwell; R. Enright; M. A. Stutzman

    2000-01-01

    Both government and industry are involved in the acquisition and development of modeling and simulation (M&S) products. The effectiveness and maturity of an organization's acquisition process directly affect the cost, schedule and quality of the M&S products that are delivered to the user. When M&S program sponsors implement best practices throughout acquisition, critical verification and validation (V&V) tasks can be

  8. The 183-WSL fast rain rate retrieval algorithm. Part II: Validation using ground radar measurements

    NASA Astrophysics Data System (ADS)

    Laviola, Sante; Levizzani, Vincenzo; Cattani, Elsa; Kidd, Chris

    2013-12-01

    The Water vapor Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform). It exploits the water vapor absorption line observations centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on NOAA-15/-17 and NOAA-18-19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The "ground truth" dataset consists of 2 years and 7 months of rainfall intensity fields from the NIMROD radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds and 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates.

  9. Dynamic characterization of hysteresis elements in mechanical systems. II. Experimental validation

    NASA Astrophysics Data System (ADS)

    Symens, W.; Al-Bender, F.

    2005-03-01

    The industrial demand for machine tools with ever-increasing speed and accuracy calls for a closer look at the physical phenomena that are present at small movements of those machines' slides. One of these phenomena, and probably the most dominant one, is the dependence of the friction force on displacement, which can be described by a rate-independent hysteresis function with nonlocal memory. The influence of this highly nonlinear effect on the dynamics of the system has been theoretically analyzed in Part I of this paper. This part (II) aims at verifying these theoretical results on three experimental setups. Two setups, consisting of linearly driven rolling element guideways, have been built to specifically study the hysteretic friction behavior. The experiments performed on these specially designed setups are then repeated on one axis of an industrial pick-and-place device, driven by a linear motor and guided by commercial guideways. The results of the experiments on all the setups agree qualitatively well with the theoretically predicted ones and point to the inherent difficulty of accurate quantitative identification of the hysteretic behavior. They further show that the hysteretic friction behavior has a direct bearing on the dynamics of machine tools and its presence should therefore be carefully considered in the dynamic identification process of these systems.

  10. Standard solar model. II - g-modes

    NASA Technical Reports Server (NTRS)

    Guenther, D. B.; Demarque, P.; Pinsonneault, M. H.; Kim, Y.-C.

    1992-01-01

    The paper presents g-mode oscillations for a set of modern solar models. Each solar model is based on a single modification or improvement to the physics of a reference solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The error in the predicted g-mode periods associated with the uncertainties in the model physics is estimated, and the specific sensitivities of the g-mode periods and their period spacings to the different model structures are described. In addition, these models are compared to a sample of published observations. A remarkably good agreement is found between the 'best' solar model and the observations of Hill and Gu (1990).

  11. The Sandia MEMS Passive Shock Sensor : FY08 testing for functionality, model validation, and technology readiness.

    SciTech Connect

    Walraven, Jeremy Allen; Blecke, Jill; Baker, Michael Sean; Clemens, Rebecca C.; Mitchell, John Anthony; Brake, Matthew Robert; Epp, David S.; Wittwer, Jonathan W.

    2008-10-01

    This report summarizes the functional, model validation, and technology readiness testing of the Sandia MEMS Passive Shock Sensor in FY08. Functional testing of a large number of revision 4 parts showed robust and consistent performance. Model validation testing helped tune the models to match data well and identified several areas for future investigation related to high frequency sensitivity and thermal effects. Finally, technology readiness testing demonstrated the integrated elements of the sensor under realistic environments.

  12. A multi-source satellite data approach for modelling Lake Turkana water level: calibration and validation using satellite altimetry data

    NASA Astrophysics Data System (ADS)

    Velpuri, N. M.; Senay, G. B.; Asante, K. O.

    2012-01-01

    Lake Turkana is one of the largest desert lakes in the world and is characterized by high degrees of inter- and intra-annual fluctuations. The hydrology and water balance of this lake have not been well understood due to its remote location and the unavailability of reliable ground truth datasets. Managing surface water resources is a great challenge in areas where in-situ data are either limited or unavailable. In this study, multi-source satellite-driven data such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, and a digital elevation dataset were used to model Lake Turkana water levels from 1998 to 2009. Due to the unavailability of reliable lake level data, an approach is presented to calibrate and validate the water balance model of Lake Turkana using a composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data. Model validation results showed that the satellite-driven water balance model can satisfactorily capture the patterns and seasonal variations of the Lake Turkana water level fluctuations, with a Pearson's correlation coefficient of 0.90 and a Nash-Sutcliffe Coefficient of Efficiency (NSCE) of 0.80 during the validation period (2004-2009). Model error estimates were within 10% of the natural variability of the lake. Our analysis indicated that fluctuations in Lake Turkana water levels are mainly driven by lake inflows and over-the-lake evaporation. Over-the-lake rainfall contributes only up to 30% of the lake evaporative demand. During the modelling period, Lake Turkana showed seasonal variations of 1-2 m, and the lake level fluctuated over a range of up to 4 m between 1998 and 2009. This study demonstrated the usefulness of satellite altimetry data to calibrate and validate the satellite-driven hydrological model for Lake Turkana without using any in-situ data. Furthermore, for Lake Turkana, we identified and outlined opportunities and challenges of using a calibrated satellite-driven water balance model for (i) quantitative assessment of the impact of basin developmental activities on lake levels and (ii) forecasting lake level changes and their impact on fisheries. From this study, we suggest that globally available satellite altimetry data provide a unique opportunity for calibration and validation of hydrologic models in ungauged basins.
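
    As a concrete illustration of the two agreement statistics quoted in this record, the sketch below computes Pearson's correlation coefficient and the Nash-Sutcliffe Coefficient of Efficiency (NSCE) for a pair of observed and simulated lake-level series. The arrays are hypothetical placeholders, not the Lake Turkana data.

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe Coefficient of Efficiency (NSCE): 1 is a perfect fit."""
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

    # Hypothetical altimetry-derived and modelled lake-level anomalies (metres)
    obs = np.array([1.2, 0.8, 0.3, -0.4, -0.9, -0.5, 0.1, 0.7])
    sim = np.array([1.0, 0.9, 0.2, -0.5, -1.1, -0.4, 0.2, 0.6])

    nsce = nash_sutcliffe(obs, sim)
    r = np.corrcoef(obs, sim)[0, 1]   # Pearson's correlation coefficient
    print(f"NSCE = {nsce:.2f}, Pearson r = {r:.2f}")
    ```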

  13. Cross Cultural Validation of the TAT (Pilot Phase of a Project to Test the Atkinson Model).

    ERIC Educational Resources Information Center

    Donleavy, G. D.; Lim, Amanda

    1990-01-01

    The study assessed the cross-cultural validity of the Thematic Apperception Test (TAT) as a gauge of motivation to succeed (as proposed in Atkinson's model of motivation) with 45 Hong Kong students. Although doubts about the TAT's validity were found to be unjustified, the question of whether the test captures the need to achieve remains.…

  14. Derivation, Parameterization and Validation of a Sandy-Clay Material Model for Use

    E-print Network

    Grujicic, Mica

    A material model for sand-based soils with different saturation levels and clay and gravel contents was recently proposed and validated; the same model has been extended in this study to include clay-based soils

  15. Context-Sensitive Benefit Transfer Using Stated Choice Models: Specification and Convergent Validity for Policy Analysis

    Microsoft Academic Search

    Yong Jiang; Stephen K. Swallow; Michael P. Mcgonagle

    2005-01-01

    Benefit transfer has been an important, practical policy tool appealing to government agencies, especially when time or budget is constrained. However, the literature fails to support convergent validity of benefit transfer using the stated-preference method. This empirical study conducts four convergent validity assessments of benefit transfer using the choice modeling method and data from Rhode Island and Massachusetts, regarding coastal

  16. Internet privacy concerns and their antecedents - measurement validity and a regression model

    Microsoft Academic Search

    Tamara Dinev; Paul Hart

    2004-01-01

    This research focuses on the development and validation of an instrument to measure the privacy concerns of individuals who use the Internet and two antecedents, perceived vulnerability and perceived ability to control information. The results of exploratory factor analysis support the validity of the measures developed. In addition, the regression analysis results of a model including the three constructs provide

  17. Trend Validation of a Musculoskeletal Model with a Workstation Design Parameter

    E-print Network

    Pontonnier, Charles; Samani, Afshin; Dumont, Georges; Madeleine, Pascal

    2012-01-01

    The aim of this article is to present the application of a trend validation to validate a simulation model. The workstation parameter used to define the trend is the table height of simulated meat cutting tasks (well known to be related to MSD).

  18. Understanding Student Teachers' Behavioural Intention to Use Technology: Technology Acceptance Model (TAM) Validation and Testing

    ERIC Educational Resources Information Center

    Wong, Kung-Teck; Osman, Rosma bt; Goh, Pauline Swee Choo; Rahmat, Mohd Khairezan

    2013-01-01

    This study sets out to validate and test the Technology Acceptance Model (TAM) in the context of Malaysian student teachers' integration of their technology in teaching and learning. To establish factorial validity, data collected from 302 respondents were tested against the TAM using confirmatory factor analysis (CFA), and structural equation…

  19. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  20. Knowledge Provenance: An Approach to Modeling and Maintaining The Evolution and Validity of Knowledge

    E-print Network

    Fox, Mark S.

    This paper addresses the problem of how to determine the validity and origin of information/knowledge

  1. Density estimation via cross-validation: Model selection point of view

    E-print Network

    Extensively used in practice, cross-validation (CV) remains poorly understood. A theoretical assessment of the CV performance is carried out thanks to two oracle inequalities. Keywords: density estimation, cross-validation, model selection, leave-p-out, random penalty, oracle inequality

  2. A process for translating and validating model of human occupation assessments in the danish context.

    PubMed

    Petersen, Kirsten; Hartvig, Bente

    2008-01-01

    The aim of the study was to validate the Danish translation of The Assessment of Communication and Interaction Skills (ACIS) and The Occupational Self-Assessment (OSA). The validation process followed four research steps in which pilot versions were tested by occupational therapists in practice, peer-reviewed, and back-translated. The result of the study was two validated assessment tools based on the Model of Human Occupation (MOHO) available for Danish occupational therapy practice, education, and research. Future studies should clarify how to maintain the validity and reliability of assessments when translations are made into other languages. PMID:23941380

  3. Validation of models for global irradiance on inclined planes

    Microsoft Academic Search

    D. Feuremann; A. Zemel

    1992-01-01

    The accuracy of models to estimate irradiance on inclined planes is tested by comparing the predictions to measurements taken with four instruments at various tilt and azimuth angles in Sede Boqer, Israel. The three models investigated are: the Perez model, Hay's anisotropic model, and the isotropic model. The Perez model is found to perform significantly better than the other two,

  4. Independent Validation of an Existing Model Enables Prediction of Hearing Loss after Childhood Bacterial Meningitis

    PubMed Central

    Terwee, Caroline B.; Heymans, Martijn W.; Gemke, Reinoud J. B. J.; Koomen, Irene; Spanjaard, Lodewijk; van Furth, A. Marceline

    2013-01-01

    Objective This study aimed at external validation of a previously developed prediction model identifying children at risk for hearing loss after bacterial meningitis (BM). Independent risk factors included in the model are: duration of symptoms prior to admission, petechiae, cerebral spinal fluid (CSF) glucose level, Streptococcus pneumoniae and ataxia. Validation helps to evaluate whether the model has potential in clinical practice. Study design 116 Dutch school-age BM survivors were included in the validation cohort and screened for sensorineural hearing loss (>25 dB). Risk factors were obtained from medical records. The model was applied to the validation cohort and its performance was compared with the development cohort. Validation was performed by applying the model to the validation cohort and by assessing discrimination and goodness of fit. Calibration was evaluated by testing deviations in intercept and slope. Multiple imputation techniques were used to deal with missing values. Results Risk factors were distributed equally between both cohorts. The discriminative ability (Area Under the Curve, AUC) of the model was 0.84 in the development and 0.78 in the validation cohort. The Hosmer-Lemeshow test for goodness of fit was not significant in the validation cohort, implying good agreement between expected and observed cases. There were no significant differences in calibration slope and intercept. Sensitivity and negative predictive value were high, while specificity and positive predictive value were low, comparable with findings in the development cohort. Conclusions Performance of the model remained good in the validation cohort. This prediction model might be used as a screening tool and can help to identify those children that need special attention and a long follow-up period or more frequent auditory testing. PMID:23536814
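
    The discrimination and calibration measures named in this record can be reproduced with a few lines of code. The sketch below computes the area under the ROC curve via the Mann-Whitney statistic and a Hosmer-Lemeshow-style chi-square over risk-ordered groups; the predicted risks and outcomes are synthetic stand-ins, not the meningitis cohort data.

    ```python
    import numpy as np

    def auc_mann_whitney(y_true, y_prob):
        """Area under the ROC curve via the Mann-Whitney U statistic."""
        y_true = np.asarray(y_true)
        y_prob = np.asarray(y_prob, dtype=float)
        pos = y_prob[y_true == 1]
        neg = y_prob[y_true == 0]
        # Probability that a randomly chosen case outranks a randomly chosen control
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (greater + 0.5 * ties) / (len(pos) * len(neg))

    def hosmer_lemeshow(y_true, y_prob, groups=10):
        """Hosmer-Lemeshow chi-square statistic over risk-ordered groups."""
        order = np.argsort(y_prob)
        y_true, y_prob = np.asarray(y_true)[order], np.asarray(y_prob)[order]
        chi2 = 0.0
        for chunk in np.array_split(np.arange(len(y_true)), groups):
            obs = y_true[chunk].sum()          # observed events in the group
            exp = y_prob[chunk].sum()          # expected events in the group
            n = len(chunk)
            p_bar = exp / n
            chi2 += (obs - exp) ** 2 / (n * p_bar * (1.0 - p_bar))
        return chi2

    # Hypothetical predicted risks and observed hearing-loss outcomes (1 = loss)
    rng = np.random.default_rng(0)
    risk = rng.uniform(0.05, 0.6, size=116)
    outcome = (rng.uniform(size=116) < risk).astype(int)

    print(f"AUC = {auc_mann_whitney(outcome, risk):.2f}")
    print(f"HL chi-square = {hosmer_lemeshow(outcome, risk):.2f}")
    ```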

  5. Structuring and validating a cost-effectiveness model of primary asthma prevention amongst children

    PubMed Central

    2011-01-01

    Background Given the rising number of asthma cases and the increasing costs of health care, prevention may be the best cure. Decisions regarding the implementation of prevention programmes in general, and choosing between unifaceted and multifaceted strategies in particular, are urgently needed. Existing trials on the primary prevention of asthma are, however, insufficient on their own to inform the decision of stakeholders regarding the cost-effectiveness of such prevention strategies. Decision analytic modelling synthesises available data for the cost-effectiveness evaluation of strategies in an explicit manner. Published reports on model development should provide the detail and transparency required to increase the acceptability of cost-effectiveness modelling. However, detail on the explicit steps and the involvement of experts in structuring a model is often unevenly reported. In this paper, we describe a procedure to structure and validate a model for the primary prevention of asthma in children. Methods An expert panel was convened for round-table discussions to frame the cost-effectiveness research question and to select and structure a model. The model's structural validity, which indicates how well a model reflects reality, was determined through descriptive and parallel validation. Descriptive validation was performed with the experts. Parallel validation qualitatively compared similarity with other published models addressing different decision problems. Results The multidisciplinary input of experts helped to develop a decision-tree structure which compares the current situation with screening and prevention. The prevention was further divided between multifaceted and unifaceted approaches to analyse the differences. The clinical outcome was diagnosis of asthma. No similar model discussing the same decision problem was found in the literature. Structural validity in terms of descriptive validity was achieved with the experts and was supported by parallel validation. Conclusions A decision-tree model developed with experts in round-table discussions benefits from a systematic and transparent approach and the multidisciplinary contributions of the experts. Parallel validation provides a feasible alternative to validating novel models. The process of structuring and validating a model presented in this paper could be a useful guide to increase transparency, credibility, and acceptability of (future, novel) models when experts are involved. PMID:22070532

  6. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    NASA Technical Reports Server (NTRS)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15-18 and NOAA-19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfall measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter seasonal conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This is verified by observing the discrepancy distribution diagrams, where the 183-WSL performs better during the warm months, while during wintertime the discrepancies with radar measurements tend toward maximum values. A stable behavior of the 183-WSL algorithm is demonstrated over the whole study period, with an overall overestimation for rain rate intensities lower than 1 millimeter per hour. This threshold is crucial especially in wintertime, where the low precipitation regime is difficult to classify.
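
    The categorical (dichotomous) indices and the rain-rate RMSE quoted in this record have simple standard definitions. The sketch below computes POD, FAR, the Hanssen-Kuipers (HK) discriminant, and RMSE from hypothetical collocated satellite and radar rain rates; the values are placeholders, not the NIMROD comparison data.

    ```python
    import numpy as np

    def categorical_scores(sat_rain, radar_rain):
        """Dichotomous (rain / no-rain) verification scores against radar."""
        sat = np.asarray(sat_rain, dtype=bool)
        obs = np.asarray(radar_rain, dtype=bool)
        hits         = np.sum(sat & obs)
        false_alarms = np.sum(sat & ~obs)
        misses       = np.sum(~sat & obs)
        correct_negs = np.sum(~sat & ~obs)
        pod = hits / (hits + misses)                    # probability of detection
        far = false_alarms / (hits + false_alarms)      # false alarm ratio
        pofd = false_alarms / (false_alarms + correct_negs)
        hk = pod - pofd                                 # Hanssen-Kuipers discriminant
        return pod, far, hk

    def rmse(sat_rr, radar_rr):
        """Root-mean-square error of retrieved rain rates (mm/h)."""
        sat_rr, radar_rr = np.asarray(sat_rr, float), np.asarray(radar_rr, float)
        return np.sqrt(np.mean((sat_rr - radar_rr) ** 2))

    # Hypothetical collocated satellite retrievals and radar rain rates (mm/h)
    sat   = np.array([0.0, 0.6, 2.1, 0.0, 4.0, 0.3, 0.0, 1.5])
    radar = np.array([0.1, 0.0, 1.8, 0.0, 3.2, 0.4, 0.2, 1.1])

    pod, far, hk = categorical_scores(sat > 0.1, radar > 0.1)
    print(f"POD={pod:.2f}  FAR={far:.2f}  HK={hk:.2f}  RMSE={rmse(sat, radar):.2f} mm/h")
    ```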

  7. An extended physiological pharmacokinetic model of methadone disposition in the rat: Validation and sensitivity analysis

    Microsoft Academic Search

    Johan L. Gabrielsson; Torgny Groth

    1988-01-01

    An extended physiological model of methadone disposition in the rat was constructed and evaluated in various tests of model validity. A separate circulation model of the fetus was included because of the large tissue concentration differences obtained after a constant rate infusion, and also to propose the use of this type of model for the optimization of toxicological tests. Simulations were

  8. Validation challenges in model composition: The case of adaptive systems

    E-print Network

    Paris-Sud XI, Université de

    Aspect Oriented Modeling (AOM) helps separate concerns that crosscut different models: crosscutting concerns are modelled separately at the model level and later composed into a global model. Research challenges have been widely identified for AOM

  9. Description of a Website Resource for Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Smith, Brian R.; Huang, George P.

    2010-01-01

    The activities of the Turbulence Model Benchmarking Working Group - which is a subcommittee of the American Institute of Aeronautics and Astronautics (AIAA) Fluid Dynamics Technical Committee - are described. The group's main purpose is to establish a web-based repository for Reynolds-averaged Navier-Stokes turbulence model documentation, including verification and validation cases. This turbulence modeling resource has been established based on feedback from a survey on what is needed to achieve consistency and repeatability in turbulence model implementation and usage, and to document and disseminate information on new turbulence models or improvements to existing models. The various components of the website are described in detail: description of turbulence models, turbulence model readiness rating system, verification cases, validation cases, validation databases, and turbulence manufactured solutions. An outline of future plans of the working group is also provided.

  10. On The Modeling of Educational Systems: II

    ERIC Educational Resources Information Center

    Grauer, Robert T.

    1975-01-01

    A unified approach to model building is developed from the separate techniques of regression, simulation, and factorial design. The methodology is applied in the context of a suburban school district. (Author/LS)

  11. Biomarker Discovery and Validation for Proteomics and Genomics: Modeling And Systematic Analysis 

    E-print Network

    Atashpazgargari, Esmaeil

    2014-08-27

    Discovery and validation of protein biomarkers with high specificity is the main challenge of current proteomics studies. Different mass spectrometry models are used as shotgun tools for discovery of biomarkers which is usually done on a small...

  12. BRE large compartment fire tests – characterising post-flashover fires for model validation 

    E-print Network

    Welch, Stephen; Jowsey, Allan; Deeny, Susan; Morgan, Richard; Torero, Jose L

    2007-01-01

    Reliable and comprehensive measurement data from large-scale fire tests is needed for validation of computer fire models, but is subject to various uncertainties, including radiation errors in temperature measurement. Here, a simple method for post...

  13. Climatically Diverse Data Set for Flat-Plate PV Module Model Validations (Presentation)

    SciTech Connect

    Marion, B.

    2013-05-01

    Photovoltaic (PV) module I-V curves were measured at Florida, Colorado, and Oregon locations to provide data for the validation and development of models used for predicting the performance of PV modules.

  14. Validation of Shape Memory Alloys Multiscale Modeling thanks to in-situ X-Rays Diffraction

    E-print Network

    Boyer, Edmond

    Determination of the elastic constants and validation of the model are considered; this second point is addressed here. A texturized nickel-titanium SMA is treated as an inclusion inside an equivalent medium, whose deformation is expressed through the Eshelby tensor.

  15. Validation of atmospheric sounders by correlative measurements.

    PubMed

    Pougatchev, Nikita

    2008-09-10

    A linear mathematical model for the statistical estimate of the bias and noise of satellite sounders and a case study are presented. The model provides the tool for proper comparison of actual performance of the remote sensing system while in orbit to correlative data sets. The model accounts for: (i) noncoincidence in time and space of satellite and validating systems sampling; (ii) different characteristics of the validated and validating systems, e.g., different vertical resolutions and noise levels. In the case study the model is applied to validation of temperature profile retrievals using radiosondes for the reference. PMID:18784779

  16. Anticipating High-Resolution STIS Spectra of Four Multiphase Mg II Absorbers: A Test of Photoionization Models

    Microsoft Academic Search

    Jane C. Charlton; Richard R. Mellon; Jane R. Rigby; Christopher W. Churchill

    2000-01-01

    In this paper we propose a test of the validity of a photoionization modeling technique that is applicable when a combination of high- and low-resolution spectra are available for various chemical transitions. We apply this technique to the four Mg II systems along the line of sight toward the z_em = 1.335 quasar PG 1634+706 to infer the physical conditions in the

  17. Enhancing modeling and simulation accreditation by structuring verification and validation results

    Microsoft Academic Search

    Dirk Brade

    2000-01-01

    Model verification, validation and accreditation (VV&A) is as complex as developing a modeling and simulation (M&S) application itself. For the purpose of structuring both verification and validation (V&V) activities and V&V results, we introduce a refined V&V process. After identification of the major influence factors on applicable V&V, a conceptual approach for subphase-wise organization of V&V activities is presented. Finally

  18. Relap5-3d model validation and benchmark exercises for advanced gas cooled reactor application

    E-print Network

    Moore, Eugene James Thomas

    2006-08-16

    A thesis by Eugene James Thomas Moore, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science (Major Subject: Nuclear Engineering), May 2006.

  19. Predictive validity of the potentiated startle response as a behavioral model for anxiolytic drugs

    Microsoft Academic Search

    T. H. Hijzen; S. W. J. Houtzager; R. J. E. Joordens; B. Olivier; J. L. Slangen

    1995-01-01

    The fear-potentiated startle (PSR) paradigm is a putative behavioral model for the determination of anxiolytic properties of drugs. The present study further investigated the predictive validity of the model. Predictive validity is high when only drugs clinically used as anxiolytics attenuate the PSR dose-dependently. Results showed that startle potentiation decreased dose-dependently after the administration of the anxiolytics CDP (2.5–10

  20. Enhancement and Validation of the IDES Orbital Debris Environment Model

    Microsoft Academic Search

    R. Walker; P. H. Stokes; J. E. Wilkinson; G. G. Swinerd

    1999-01-01

    Orbital debris environment models are essential in predicting the characteristics of the entire debris environment, especially for altitude and size regimes where measurement data is sparse. Most models are also used to assess mission collision risk. The IDES (Integrated Debris Evolution Suite) simulation model has recently been upgraded by including a new sodium–potassium liquid coolant droplet source model and a

  1. A virtual source model for Kilo-voltage cone beam CT: Source characteristics and model validation

    SciTech Connect

    Spezi, E.; Volken, W.; Frei, D.; Fix, M. K. [Department of Medical Physics, Velindre Cancer Centre Cardiff CF14 2TL United Kingdom (United Kingdom); Division of Medical Radiation Physics, Inselspital and University of Bern, CH-3010 Berne (Switzerland)

    2011-09-15

    Purpose: The purpose of this investigation was to study the source characteristics of a clinical kilo-voltage cone beam CT unit and to develop and validate a virtual source model that could be used for treatment planning purposes. Methods: We used a previously commissioned full Monte Carlo model and new bespoke software to study the source characteristics of a clinical kilo-voltage cone beam CT (CBCT) unit. We identified the main particle sources and their spatial, energy, and angular distributions for all the image acquisition presets currently used in our clinical practice. This includes a combination of two energies (100 and 120 kVp), two filters (neutral and bowtie), and eight different x-ray beam apertures. We subsequently built a virtual source model which we validated against full Monte Carlo calculations. Results: We found that the radiation output of the clinical kilo-voltage cone beam CT unit investigated in this study could be reproduced with a virtual model comprising two sources (target and filtration cone) or three sources (target, filtration cone and bowtie filter) when additional filtration was used. With this model, we accounted for more than 97% of the photons exiting the unit. Each source in our model was characterised by an origin distribution in both the X and Y directions, a fluence map, a single energy spectrum for unfiltered beams, and a two-dimensional energy spectrum for bowtie-filtered beams. The percentage dose difference between full Monte Carlo and virtual source model based dose distributions was well within the statistical uncertainty associated with the calculations (±2%, one standard deviation) in all cases studied. Conclusions: The virtual source model that we developed is accurate in calculating the dose delivered from a commercial kilo-voltage cone beam CT unit operating with routine clinical image acquisition settings. Our data have also shown that the target, filtration cone, and bowtie filter sources all needed to be included in the model in order to accurately replicate the dose distribution from the clinical radiation beam.
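
    To make the quoted acceptance criterion concrete, the sketch below compares two dose grids voxel by voxel and reports the fraction of voxels whose percentage difference lies within the ±2% (one standard deviation) statistical uncertainty mentioned in the record. The dose arrays are synthetic placeholders, not output of the actual Monte Carlo or virtual source calculations.

    ```python
    import numpy as np

    # Hypothetical 3D dose grids (arbitrary units): full Monte Carlo result and
    # virtual source model result for the same CBCT acquisition preset.
    rng = np.random.default_rng(1)
    dose_mc  = rng.uniform(0.5, 1.0, size=(40, 40, 40))
    dose_vsm = dose_mc * (1.0 + rng.normal(0.0, 0.01, size=dose_mc.shape))

    # Percentage dose difference, evaluated only where the dose is non-negligible
    mask = dose_mc > 0.1 * dose_mc.max()
    pct_diff = 100.0 * (dose_vsm[mask] - dose_mc[mask]) / dose_mc[mask]

    # Fraction of voxels within the +/-2% (one standard deviation) MC uncertainty
    within = np.mean(np.abs(pct_diff) <= 2.0)
    print(f"mean |diff| = {np.mean(np.abs(pct_diff)):.2f}%, fraction within 2% = {within:.1%}")
    ```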

  2. Model validation protocol for determining the performance of the terrain-responsive atmospheric code against the Rocky Flats Plant Winter Validation Study

    Microsoft Academic Search

    C. R. Hodgin; M. L. Smith

    1992-01-01

    The objective for this Model Validation Protocol is to establish a plan for quantifying the performance (accuracy and precision) of the Terrain-Responsive Atmospheric Code (TRAC) model. The performance will be determined by comparing model predictions against tracer characteristics observed in the free atmosphere. The Protocol will also be applied to other "reference" dispersion models. The performance of the TRAC model

  3. Model validation protocol for determining the performance of the terrain-responsive atmospheric code against the Rocky Flats Plant Winter Validation Study

    Microsoft Academic Search

    C. R. Hodgin; M. L. Smith

    1992-01-01

    The objective for this Model Validation Protocol is to establish a plan for quantifying the performance (accuracy and precision) of the Terrain-Responsive Atmospheric Code (TRAC) model. The performance will be determined by comparing model predictions against tracer characteristics observed in the free atmosphere. The Protocol will also be applied to other "reference" dispersion models. The performance of the TRAC model

  4. Fiber Reinforced Polymer Composite Structures in Fire: Modeling and Validation

    Microsoft Academic Search

    Ziqing Yu; Aixi Zhou

    2012-01-01

    This paper presents a thermomechanical model for predicting the behavior of fiber reinforced polymer (FRP) composite structures subject to simultaneous fire and compressive loading. The model includes a thermal sub-model to calculate the temperature history of the structure and a structural sub-model to predict the mechanical performance of the structure. Both thermal and mechanical properties in the two sub-models are

  5. Implementing the Ecosystem Model: Phase II.

    ERIC Educational Resources Information Center

    Schuh, John H.

    1978-01-01

    The ecosystem model was used to assess student perceptions of certain aspects of residential life at a large university. Over 70 percent of questionnaires were returned. From the data, aspects of the environment were changed according to student recommendations. A great need for more communication of information was found. (RPG)

  6. Bow shock models of ultracompact H II regions

    NASA Technical Reports Server (NTRS)

    Mac Low, Mordecai-Mark; Van Buren, Dave; Wood, Douglas O. S.; Churchwell, ED

    1991-01-01

    This paper presents models of ultracompact H II regions as the bow shocks formed by massive stars, with strong stellar winds, moving supersonically through molecular clouds. The morphologies, sizes and brightnesses of observed objects match the models well. Plausible models are provided for the ultracompact H II regions G12.21 - 0.1, G29.96 - 0.02, G34.26 + 0.15, and G43.89 - 0.78. To do this, the equilibrium shape of the wind-blown shell is calculated, assuming momentum conservation. Then the shell is illuminated with ionizing radiation from the central star, radiative transfer for free-free emission through the shell is performed, and the resulting object is visualized at various angles for comparison with radio continuum maps. The model unifies most of the observed morphologies of ultracompact H II regions, excluding only those objects with spherical shells. Ram pressure confinement greatly lengthens the life of ultracompact H II regions, explaining the large number that exist in the Galaxy despite their low apparent kinematic ages.

  7. Automatic Specialization of Actor-oriented Models in Ptolemy II

    Microsoft Academic Search

    Stephen Neuendorffer; Edward Lee

    This report presents a series of techniques for automatic specialization of generic component specifications. These techniques allow the transformation of generic component specifications into more compact and efficient ones. We have integrated these techniques into a code generator for Ptolemy II, a software framework for actor-oriented design in Java [15]. Combining automatic code generation with actor specialization enables efficient implementation of models without sacrificing design...

  8. METASTATES IN DISORDERED MEAN FIELD MODELS II: THE SUPERSTATES \\Lambda

    E-print Network

    We discuss the concept of 'superstates', as recently proposed by Bovier and Gayrard [BG3], and various related notions.

  9. CheS-Mapper 2.0 for visual validation of (Q)SAR models

    PubMed Central

    2014-01-01

    Background Sound statistical validation is important to evaluate and compare the overall performance of (Q)SAR models. However, classical validation does not support the user in better understanding the properties of the model or the underlying data. Even though a number of visualization tools for analyzing (Q)SAR information in small molecule datasets exist, integrated visualization methods that allow the investigation of model validation results are still lacking. Results We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. The approach applies the 3D viewer CheS-Mapper, an open-source application for the exploration of small molecules in virtual 3D space. The present work describes the new functionalities in CheS-Mapper 2.0 that facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. The approach is generic: it is model-independent and can handle physico-chemical and structural input features as well as quantitative and qualitative endpoints. Conclusions Visual validation with CheS-Mapper enables analyzing (Q)SAR information in the data and indicates how this information is employed by the (Q)SAR model. It reveals if the endpoint is modeled too specifically or too generically, and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org. Graphical abstract: Comparing actual and predicted activity values with CheS-Mapper.

  10. Validating model predictions of MRT measurements on LWIR imaging systems

    NASA Astrophysics Data System (ADS)

    Burks, Stephen D.; Garner, Kenneth; Miller, Stephen; Teaney, Brian P.

    2009-05-01

    The predicted Minimum Resolvable Temperature (MRT) values from five MRT models are compared to the measured MRT values for eighteen long-wave thermal imaging systems. The most accurate model, which is based upon the output of NVTherm IP, has an advantage over the other candidate models because it accounts for performance degradations due to blur and bar sampling. Models based upon the FLIR 92 model tended to predict overly optimistic values for all frequencies. The earliest MRT models for staring arrays did not incorporate advanced eye effects and had the tendency to provide pessimistic estimates as the frequency approached the Nyquist limit.

  11. GCR Environmental Models III: GCR Model Validation and Propagated Uncertainties in Effective Dose

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Xu, Xiaojing; Blattnig, Steve R.; Norman, Ryan B.

    2014-01-01

    This is the last of three papers focused on quantifying the uncertainty associated with galactic cosmic rays (GCR) models used for space radiation shielding applications. In the first paper, it was found that GCR ions with Z>2 and boundary energy below 500 MeV/nucleon induce less than 5% of the total effective dose behind shielding. This is an important finding since GCR model development and validation have been heavily biased toward Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer measurements below 500 MeV/nucleon. Weights were also developed that quantify the relative contribution of defined GCR energy and charge groups to effective dose behind shielding. In the second paper, it was shown that these weights could be used to efficiently propagate GCR model uncertainties into effective dose behind shielding. In this work, uncertainties are quantified for a few commonly used GCR models. A validation metric is developed that accounts for measurements uncertainty, and the metric is coupled to the fast uncertainty propagation method. For this work, the Badhwar-O'Neill (BON) 2010 and 2011 and the Matthia GCR models are compared to an extensive measurement database. It is shown that BON2011 systematically overestimates heavy ion fluxes in the range 0.5-4 GeV/nucleon. The BON2010 and BON2011 also show moderate and large errors in reproducing past solar activity near the 2000 solar maximum and 2010 solar minimum. It is found that all three models induce relative errors in effective dose in the interval [-20%, 20%] at a 68% confidence level. The BON2010 and Matthia models are found to have similar overall uncertainty estimates and are preferred for space radiation shielding applications.
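
    As a rough illustration of the weighting idea described in this record (and not the papers' actual metric), a first-order propagation of group-wise GCR flux errors into effective dose can be sketched as a weighted sum, with the weights representing each charge/energy group's relative contribution to effective dose behind shielding. All numbers below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical weights w_g: relative contribution of each GCR charge/energy
    # group to effective dose behind shielding (they sum to 1), and hypothetical
    # relative flux errors eps_g of a GCR model for the same groups.
    w_g   = np.array([0.35, 0.25, 0.20, 0.12, 0.08])
    eps_g = np.array([0.05, -0.10, 0.15, 0.20, -0.05])   # (model - data) / data

    # First-order propagated relative error in effective dose: dE/E ~ sum_g w_g * eps_g
    rel_err_dose = np.sum(w_g * eps_g)
    print(f"propagated relative error in effective dose ~ {100 * rel_err_dose:+.1f}%")
    ```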

  12. Dynamical dark matter. II. An explicit model

    DOE PAGESBeta

    Dienes, Keith R.; Thomas, Brooks

    2012-04-01

    In a recent paper [K. R. Dienes and B. Thomas, Phys. Rev. D 85, 083523 (2012).], we introduced “dynamical dark matter,” a new framework for dark-matter physics, and outlined its underlying theoretical principles and phenomenological possibilities. Unlike most traditional approaches to the dark-matter problem which hypothesize the existence of one or more stable dark-matter particles, our dynamical dark-matter framework is characterized by the fact that the requirement of stability is replaced by a delicate balancing between cosmological abundances and lifetimes across a vast ensemble of individual dark-matter components. This setup therefore collectively produces a time-varying cosmological dark-matter abundance, and the different dark-matter components can interact and decay throughout the current epoch. While the goal of our previous paper was to introduce the broad theoretical aspects of this framework, the purpose of the current paper is to provide an explicit model of dynamical dark matter and demonstrate that this model satisfies all collider, astrophysical, and cosmological constraints. The results of this paper therefore constitute an “existence proof” of the phenomenological viability of our overall dynamical dark-matter framework, and demonstrate that dynamical dark matter is indeed a viable alternative to the traditional paradigm of dark-matter physics. Dynamical dark matter must therefore be considered alongside other approaches to the dark-matter problem, particularly in scenarios involving large extra dimensions or string theory in which there exist large numbers of particles which are neutral under standard-model symmetries.

  13. Differential validation of the US-TEC model

    NASA Astrophysics Data System (ADS)

    Araujo-Pradere, E. A.; Fuller-Rowell, T. J.; Spencer, P. S. J.; Minter, C. F.

    2007-06-01

    This paper presents a validation and accuracy assessment of the total electron content (TEC) from US-TEC, a new product presented by the Space Environment Center over the contiguous United States (CONUS). US-TEC is a real-time operational implementation of the MAGIC code and provides TEC maps every 15 min and the line-of-sight electron content between any point within the CONUS and all GPS satellites in view. Validation of TEC is difficult since there are no absolute or true values of TEC. All methods of obtaining TEC, for instance, from GPS, ocean surface monitors (TOPEX), and lightning detectors (FORTE), have challenges that limit their accuracy. GPS data have interfrequency biases; TOPEX also has biases, and data are collected only over the oceans; and FORTE can eliminate biases, but because of the lower operating frequency, the signals suffer greater bending on the rays. Because of the difficulty in obtaining an absolute unbiased TEC measurement, a "differential" accuracy estimate has been performed. The method relies on the fact that uninterrupted GPS data along a particular receiver-satellite link with no cycle slips are very precise. The phase difference (scaled to TEC units) from one epoch to the next can be determined with an accuracy of less than 0.01 TEC units. This fact can be utilized to estimate the uncertainty in the US-TEC vertical and slant path maps. By integrating through US-TEC inversion maps at two different times, the difference in the slant TEC can be compared with the direct phase difference in the original RINEX data file for nine receivers not used in the US-TEC calculations. The results of this study, for the period of April-September 2004, showed an average root mean square error of 2.4 TEC units, which is equivalent to less than 40 cm of signal delay at the GPS L1 frequency. The accuracy estimates from this "differential" method are similar to the results from a companion paper utilizing an "absolute" validation method by comparing with FORTE data.
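
    The differential method described in this record can be illustrated with a short sketch: for a single receiver-satellite link, epoch-to-epoch differences of the model-integrated slant TEC are compared with the corresponding carrier-phase differences, which are precise but carry an unknown constant bias that the differencing removes. The values below are hypothetical, and the sketch omits the actual ray integration through the US-TEC maps.

    ```python
    import numpy as np

    # Hypothetical slant TEC (TECU) along one receiver-satellite link, every 15 min:
    # values integrated through the US-TEC maps, and the (biased but precise)
    # carrier-phase-derived TEC from the RINEX file for the same link.
    tec_model = np.array([18.2, 18.9, 20.1, 21.6, 22.4, 22.0, 21.1])
    tec_phase = np.array([30.0, 30.8, 31.9, 33.6, 34.5, 34.0, 32.9])  # arbitrary bias

    # Epoch-to-epoch differences remove the unknown phase bias, so the two series
    # can be compared directly; the RMS of their disagreement is the error measure.
    d_model = np.diff(tec_model)
    d_phase = np.diff(tec_phase)
    rms = np.sqrt(np.mean((d_model - d_phase) ** 2))
    print(f"differential RMS error = {rms:.2f} TECU")
    ```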

  14. Validating timed UML models by simulation and verification

    Microsoft Academic Search

    Iulian Ober; Susanne Graf; Ileana Ober

    2006-01-01

    This paper presents a technique and a tool for model-checking operational (design level) UML models based on a mapping to a model of communicating extended timed automata. The target language of the mapping is the IF format, for which existing model-checking and simulation tools can be used. Our approach takes into consideration most of the structural and behavioural features of

  15. Motivating the Additional Use of External Validity: Examining Transportability in a Model of Glioblastoma Multiforme

    PubMed Central

    Singleton, Kyle W.; Speier, William; Bui, Alex AT; Hsu, William

    2014-01-01

    Despite the growing ubiquity of data in the medical domain, it remains difficult to apply results from experimental and observational studies to additional populations suffering from the same disease. Many methods are employed for testing internal validity, yet limited effort is made in testing generalizability, or external validity. The development of disease models often suffers from this lack of validity testing, and trained models frequently have worse performance on different populations, rendering them ineffective. In this work, we discuss the use of transportability theory, a causal graphical model examination, as a mechanism for determining what elements of a data resource can be shared or moved between a source and target population. A simplified Bayesian model of glioblastoma multiforme serves as the example for discussion and preliminary analysis. Examination across data-collection hospitals in the TCGA dataset demonstrated improved prediction with a transported model over a baseline model. PMID:25954466

  16. Discrete-Time Dataflow Models for Visual Simulation in Ptolemy II

    E-print Network

    The Discrete Time (DT) domain in Ptolemy II is a timed extension of the Synchronous Dataflow (SDF) domain. Although not completely backward

  17. FIELD VALIDATION OF EXPOSURE ASSESSMENT MODELS. VOLUME 2. ANALYSIS

    EPA Science Inventory

    This is the second of two volumes describing a series of dual tracer experiments designed to evaluate the PAL-DS model, a Gaussian diffusion model modified to take into account settling and deposition, as well as three other deposition models. In this volume, an analysis of the d...

  18. Estimation and Q-Matrix Validation for Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Feng, Yuling

    2013-01-01

    Diagnostic classification models (DCMs) are structured latent class models widely discussed in the field of psychometrics. They model subjects' underlying attribute patterns and classify subjects into unobservable groups based on their mastery of attributes required to answer the items correctly. The effective implementation of DCMs depends…

  19. The Validity of Value-Added Models: An Allegory

    ERIC Educational Resources Information Center

    Martineau, Joseph A.

    2010-01-01

    Value-added models have become popular fixes for various accountability schemes aimed at measuring teacher effectiveness. Value-added models may resolve some of the issues in accountability models, but they bring their own set of challenges to the table. Unfortunately, political and emotional considerations sometimes keep one from examining…

  20. Development and validation of a microcontroller model for EMC

    Microsoft Academic Search

    Shaohua Li; Hemant Bishnoi; Jason Whiles; Pius Ng; Haixiao Weng; David Pommerenke; D. Beetner

    2008-01-01

    Models of integrated circuits (ICs) allow printed circuit board (PCB) developers to predict radiated and conducted emissions early in board development and allow IC manufacturers insight into how to build their ICs better for electromagnetic compatibility (EMC). A model of the power delivery network, similar to the ICEM or LECCS model, was developed for a microcontroller running a typical program

  1. Validating a predictive model for computer icon development

    Microsoft Academic Search

    Samantha Wright

    1997-01-01

    A predictive model for icon design was evaluated. Eleven participants gave their interpretations of and rated thirteen icons. Average information content and articulatory distance measures were calculated from the interpretation results. Semantic differential scales resulted in ratings which provided a subjective measure of the icon. All measures were then combined to develop a model to predict icon communication. This model

  2. A validated model of passive muscle in compression

    Microsoft Academic Search

    M. Van Loocke; C. G. Lyons; C. K. Simms

    2006-01-01

    A better characterisation of soft tissues is required to improve the accuracy of human body models used, amongst other applications, for virtual crash modelling. This paper presents a theoretical model and the results of an experimental procedure to characterise the quasi-static, compressive behaviour of skeletal muscle in three dimensions. Uniaxial, unconstrained compression experiments have been conducted on aged and fresh

  3. Validation of an agricultural non-point source model in a watershed in southern Ontario

    Microsoft Academic Search

    L. F. Leon; W. G. Booty; G. S. Bowen; D. C. L. Lam

    2004-01-01

    As part of an integrated watershed management study for watersheds of southern Ontario, the agricultural non-point source (AGNPS) model was interfaced with a decision support system that reduces the time-consuming data input steps and scenario testing. The objectives of this study were to define appropriate model input parameters for the region and to define a protocol for model validation at

  4. Computational fluid dynamics modelling and validation of the temperature distribution in a forced convection oven

    Microsoft Academic Search

    Pieter Verboven; Nico Scheerlinck; Josse De Baerdemaeker

    2000-01-01

    This paper discusses the validation of a Computational Fluid Dynamics (CFD) model to calculate the heat transfer in an industrial electrical forced-convection oven. The CFD model consists of the continuity, momentum and energy equations with the standard k-ε approach to model the flow turbulence. Density effects are accounted for through a weakly compressible formulation. Time-dependent boundary conditions and source terms

  5. Verification and validation for a penetration model using a deterministic and probabilistic design tool

    Microsoft Academic Search

    D. S. Riha; B. H. Thacker; J. B. Pleming; J. D. Walker; S. A. Mullin; C. E. Weiss; E. A. Rodriguez; P. O. Leslie

    2006-01-01

    The Los Alamos National Laboratory Dynamic Experimentation (DynEx) program is designing and validating steel blast containment vessels using limited experiments coupled with computational models. Driven by the need to design portions of the vessel to protect against breaches by projectiles, an analytical model was developed along the lines of the Walker–Anderson penetration model to predict the penetration depth of a

  6. Tsunami Generation by Submarine Mass Failure. I: Modeling, Experimental Validation, and Sensitivity Analyses

    E-print Network

    Grilli, Stéphan T.

    Tsunami generation by submarine mass failure is studied with a two-dimensional (2D) fully nonlinear potential flow (FNPF) model for two idealized cases; the failure is represented with a simple wavemaker formalism and prescribed as a boundary condition in the FNPF model. Tsunami amplitudes

  7. Three-Dimensional Human Head Finite-Element Model Validation Against Two Experimental Impacts

    Microsoft Academic Search

    Remy Willinger; Ho-Sung Kang; Baye Diaw

    1999-01-01

    The impact response of a three-dimensional human head model has been determined by simulating two cadaver tests. The objective of this study was to validate a finite-element human head model under different impact conditions by considering intracranial compressibility. The current University Louis Pasteur model was subjected initially to a direct head impact, of short (6 ms) duration, and the simulation

  8. Sensitivity Analysis, Calibration, and Validations for a Multisite and Multivariable SWAT Model

    Microsoft Academic Search

    Kati L. White; Indrajeet Chaubey

    2005-01-01

    The ability of a watershed model to mimic specified watershed processes is assessed through the calibration and validation process. The Soil and Water Assessment Tool (SWAT) watershed model was implemented in the Beaver Reservoir Watershed of Northwest Arkansas. The objectives were to: (1) provide detailed information on calibrating and applying a multisite and multivariable SWAT model; (2) conduct sensitivity analysis;

  9. An Experimental Validation for Broadband Power-Line Communication (BPLC) Model

    Microsoft Academic Search

    Justinian Anatory; Nelson Theethayi; Rajeev Thottappillil; Mussa M. Kissaka; Nerey H. Mvungi

    2008-01-01

    Recently, different models have been proposed for analyzing broadband power-line communication (BPLC) systems based on transmission-line (TL) theory. In this paper, we make an attempt to validate one such BPLC model with laboratory experiments by comparing the channel transfer functions. A good agreement between the BPLC model based on TL theory and experiments is found for channel frequencies up

  10. Spatial calibration and temporal validation of flow for regional scale hydrologic modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Physically based regional scale hydrologic modeling is gaining importance for planning and management of water resources. Calibration and validation of such regional scale model is necessary before applying it for scenario assessment. However, in most regional scale hydrologic modeling, flow validat...

  11. A computer-based system for validation of thermal models for multichip power modules

    Microsoft Academic Search

    Zharadeen Parrilla; Jose J. Rodriguez; Allen Hefner; Misuel Velez-Reyes; D. Berning

    2002-01-01

    This paper presents a computer-based system for experimental validation and calibration of thermal models for multi-chip power electronic modules. The thermal models under study are based on the thermal component network. The paper describes the basic system features and experimental set up as well as experimental results. Calibration results show good performance for the proposed models.

  12. The Weird World, and Equally Weird Measurement Models: Reactive Indicators and the Validity Revolution

    ERIC Educational Resources Information Center

    Hayduk, Leslie A.; Robinson, Hannah Pazderka; Cummings, Greta G.; Boadu, Kwame; Verbeek, Eric L.; Perks, Thomas A.

    2007-01-01

    Researchers using structural equation modeling (SEM) aspire to learn about the world by seeking models with causal specifications that match the causal forces extant in the world. This quest for a model matching existing worldly causal forces constitutes an ontology that orients, or perhaps reorients, thinking about measurement validity. This…

  13. The Weird World, and Equally Weird Measurement Models: Reactive Indicators and the Validity Revolution

    Microsoft Academic Search

    Leslie A. Hayduk; Hannah Pazderka Robinson; Greta G. Cummings; Kwame Boadu; Eric L. Verbeek; Thomas A. Perks

    2007-01-01

    Researchers using structural equation modeling (SEM) aspire to learn about the world by seeking models with causal specifications that match the causal forces extant in the world. This quest for a model matching existing worldly causal forces constitutes an ontology that orients, or perhaps reorients, thinking about measurement validity. This article illustrates several ways the seemingly innocuous quest for structural

  14. Social Validity of the Critical Incident Stress Management Model for School-Based Crisis Intervention

    ERIC Educational Resources Information Center

    Morrison, Julie Q.

    2007-01-01

    The Critical Incident Stress Management (CISM) model for crisis intervention was developed for use with emergency service personnel. Research regarding the use of the CISM model has been conducted among civilians and high-risk occupation groups with mixed results. The purpose of this study is to examine the social validity of the CISM model for…

  15. The Common Factors, Empirically Validated Treatments, and Recovery Models of Therapeutic Change

    ERIC Educational Resources Information Center

    Reisner, Andrew D.

    2005-01-01

    I review the Common Factors Model, the Empirically Validated Therapy Model, and the Recovery Model of therapeutic change and effectiveness. In general, psychotherapy appears to be effective and common factors account for more of the variance than do specific techniques. However, in some areas, particularly in the treatment of anxiety disorders,…

  16. Modeling the Object-Oriented Space Through Validated Measures

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.

    1996-01-01

    In order to truly understand software and the software development process, software measurement must be better understood. A beginning step toward a better understanding of software measurement is the categorization of the measurements by some meaningful taxonomy. The most meaningful taxonomy would capture the basic nature of the object-oriented (O-O) space. The interesting characteristics of object-oriented software offer a starting point for such a categorization of measures. A taxonomy has been developed based on fourteen characteristics of object-oriented software gathered from the literature. This taxonomy allows us to easily see gaps and redundancies in the O-O measures. The taxonomy also clearly differentiates among taxa so that there is no ambiguity as to the taxon to which a measure belongs. The taxonomy has been populated with thirty-two measures that have been validated in the narrow sense of Fenton, using measurement theory with Zuse's augmentation.

  17. A comprehensive validation toolbox for regional ocean models - Outline, implementation and application to the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Jandt, Simon; Laagemaa, Priidik; Janssen, Frank

    2014-05-01

    The systematic and objective comparison between output from a numerical ocean model and a set of observations, called validation in the context of this presentation, is a beneficial activity at several stages, starting from early steps in model development and ending at the quality control of model-based products delivered to customers. Even though the importance of this kind of validation work is widely acknowledged, it is often not among the most popular tasks in ocean modelling. In order to ease the validation work, a comprehensive toolbox has been developed in the framework of the MyOcean-2 project. The objective of this toolbox is to carry out validation integrating different data sources, e.g. time-series at stations, vertical profiles, surface fields or along track satellite data, with one single program call. The validation toolbox, implemented in MATLAB, features all parts of the validation process - ranging from read-in procedures of datasets to the graphical and numerical output of statistical metrics of the comparison. The basic idea is to have only one well-defined validation schedule for all applications, in which all parts of the validation process are executed. Each part, e.g. read-in procedures, forms a module in which all available functions of this particular part are collected. The interface between the functions, the module and the validation schedule is highly standardized. Functions of a module are set up for certain validation tasks; new functions can be implemented into the appropriate module without affecting the functionality of the toolbox. The functions are assigned for each validation task in user-specific settings, which are externally stored in so-called namelists and gather all information on the datasets used as well as paths and metadata. In the framework of the MyOcean-2 project the toolbox is frequently used to validate the forecast products of the Baltic Sea Marine Forecasting Centre. In this way, the performance of any new product version is compared with that of the previous version. Although the toolbox has so far been tested mainly for the Baltic Sea, it can easily be adapted to different datasets and parameters, regardless of the geographic region. In this presentation the usability of the toolbox is demonstrated along with several results of the validation process.
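
    The "one validation schedule, many modules" design described above can be illustrated with a short sketch. The function, dataset, and metric names below are hypothetical stand-ins for the MATLAB toolbox's read-in and metric modules and namelist settings; this is not the MyOcean-2 code itself.

        import numpy as np

        def bias(obs, mod):
            # Mean difference between model and observations.
            return float(np.mean(mod - obs))

        def rmse(obs, mod):
            # Root-mean-square error of the comparison.
            return float(np.sqrt(np.mean((mod - obs) ** 2)))

        # "Namelist"-style user settings: which metrics to apply to every dataset.
        namelist = {"metrics": {"bias": bias, "rmse": rmse}}

        def run_validation_schedule(datasets, settings):
            # One well-defined schedule: loop over datasets, apply every metric.
            return {name: {m: fn(obs, mod) for m, fn in settings["metrics"].items()}
                    for name, (obs, mod) in datasets.items()}

        # Synthetic stand-in for a read-in module (e.g. a station time series).
        rng = np.random.default_rng(0)
        obs = 10.0 + rng.normal(0.0, 0.5, 100)        # "observed" temperatures
        mod = obs + 0.3 + rng.normal(0.0, 0.2, 100)   # "modelled" values with a warm bias
        print(run_validation_schedule({"station_SST": (obs, mod)}, namelist))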

  18. Chemical kinetics parameters and model validation for the gasification of PCEA nuclear graphite

    NASA Astrophysics Data System (ADS)

    El-Genk, Mohamed S.; Tournier, Jean-Michel P.; Contescu, Cristian I.

    2014-01-01

    A series of gasification experiments, using two right cylinder specimens (∅12.7 × 25.4 mm and 25.4 × 25.4 mm) of PCEA nuclear graphite in ambient airflow, measured the total gasification flux at weight losses up to 41.5% and temperatures (893-1015 K) characteristic of those for in-pores gasification Mode (a) and in-pores diffusion-limited Mode (b). The chemical kinetics parameters for the gasification of PCEA graphite are determined using a multi-parameter optimization algorithm from the measurements of the total gasification rate and transient weight loss in experiments. These parameters are: (i) the pre-exponential rate coefficients and the Gaussian distributions and values of specific activation energies for adsorption of oxygen and desorption of CO gas; (ii) the specific activation energy and pre-exponential rate coefficient for the breakup of stable un-dissociated C(O2) oxygen radicals to form stable (CO) complexes; (iii) the specific activation energy and pre-exponential coefficient for desorption of CO2 gas; and (iv) the initial surface area of reactive free sites per unit mass. This area is consistently 13.5% higher than that for nuclear graphite grades of NBG-25 and IG-110 and decreases in inverse proportion to the square root of the initial mass of the graphite specimens in the experiments. Experimental measurements successfully validate the chemical-reactions kinetics model that calculates continuous Arrhenius curves of the total gasification flux and the production rates of CO and CO2 gases. The model results at different total weight losses agree well with measurements and expand beyond the temperatures in the experiments to the diffusion-limited mode of gasification. Also calculated are the production rates of CO and CO2 gases and their relative contributions to the total gasification rate in the experiments as functions of temperature, for total weight losses of 5% and 10%.
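
    The kinetic quantities listed in items (i)-(iii) fit the usual Arrhenius picture. As a generic illustration (not the authors' exact rate expressions), each elementary step j can be written as

        k_j(T) = A_j \exp\!\left(-\frac{E_j}{R\,T}\right), \qquad
        f(E_j) = \frac{1}{\sigma_j \sqrt{2\pi}}\,\exp\!\left(-\frac{(E_j - \bar{E}_j)^2}{2\sigma_j^2}\right)

    with A_j the pre-exponential coefficient, E_j the specific activation energy (Gaussian-distributed with mean \bar{E}_j and width \sigma_j for the adsorption/desorption steps), R the gas constant, and T the temperature.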

  19. Chemical kinetics parameters and model validation for the gasification of PCEA nuclear graphite

    SciTech Connect

    El-Genk, Mohamed S [University of New Mexico, Albuquerque] [University of New Mexico, Albuquerque; Tournier, Jean-Michel [University of New Mexico, Albuquerque] [University of New Mexico, Albuquerque; Contescu, Cristian I [ORNL] [ORNL

    2014-01-01

    A series of gasification experiments, using two right cylinder specimens (~ 12.7 x 25.4 mm and 25.4 x 25.4 mm) of PCEA nuclear graphite in ambient airflow, measured the total gasification flux at weight losses up to 41.5% and temperatures (893-1015 K) characteristic of those for in-pores gasification Mode (a) and in-pores diffusion-limited Mode (b). The chemical kinetics parameters for the gasification of PCEA graphite are determined using a multi-parameter optimization algorithm from the measurements of the total gasification rate and transient weight loss in experiments. These parameters are: (i) the pre-exponential rate coefficients and the Gaussian distributions and values of specific activation energies for adsorption of oxygen and desorption of CO gas; (ii) the specific activation energy and pre-exponential rate coefficient for the breakup of stable un-dissociated C(O2) oxygen radicals to form stable (CO) complexes; (iii) the specific activation energy and pre-exponential coefficient for desorption of CO2 gas; and (iv) the initial surface area of reactive free sites per unit mass. This area is consistently 13.5% higher than that for nuclear graphite grades of NBG-25 and IG-110 and decreases in inverse proportion to the square root of the initial mass of the graphite specimens in the experiments. Experimental measurements successfully validate the chemical-reactions kinetics model that calculates continuous Arrhenius curves of the total gasification flux and the production rates of CO and CO2 gases. The model results at different total weight losses agree well with measurements and expand beyond the temperatures in the experiments to the diffusion-limited mode of gasification. Also calculated are the production rates of CO and CO2 gases and their relative contributions to the total gasification rate in the experiments as functions of temperature, for total weight losses of 5% and 10%.

  20. Validity of agroecosystem models a comparison of results of different models applied to the same data set

    Microsoft Academic Search

    B. Diekkrüger; D. Söndgerath; K. C. Kersebaum; C. W. McVoy

    1995-01-01

    The simulation results obtained from 19 participants of the workshop “Validation of Agroecosystem Models” were compared and discussed. Although all models were applied to the same data set, the results differ significantly. From the results it can be concluded that the experience of a scientist applying a model is as important as the differences between various model approaches. Only one

  1. Shape memory polymer filled honeycomb model and experimental validation

    NASA Astrophysics Data System (ADS)

    Beblo, R. V.; Puttmann, J. P.; Joo, J. J.; Reich, G. W.

    2015-02-01

    An analytical model predicting the in-plane Young’s and shear moduli of a shape memory polymer filled honeycomb composite is presented. By modeling the composite as a series of rigidly attached beams, the mechanical advantage of the load distributed on each beam by the infill is accounted for. The model is compared to currently available analytical models as well as experimental data. The model correlates extremely well with experimental data for empty honeycomb and when the polymer is above its glass transition temperature. Below the glass transition temperature, rule of mixtures is shown to be more accurate as bending is no longer the dominant mode of deformation. The model is also derived for directions other than the typical x and y allowing interpolation of the stiffness of the composite in any direction.
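
    For the below-glass-transition regime mentioned at the end of the abstract, the standard iso-strain rule of mixtures (quoted here in its generic form, which may differ in detail from the expression the authors compare against) estimates the composite modulus as

        E_c = V_h E_h + (1 - V_h)\,E_p

    where E_h and E_p are the honeycomb and polymer-infill moduli and V_h is the honeycomb volume fraction.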

  2. Radiation model predictions and validation using LDEF data

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1992-01-01

    Radiation dosimetry aboard LDEF, as well as post-flight measurements of the radioactivity induced in numerous LDEF spacecraft components, provide unique data for evaluating the accuracy of current models for predicting both the space radiation environments (trapped proton intensity, spectra, and directionality; cosmic ray fluence) and the radiation environments induced in spacecraft components (fluence, energy spectra, secondary particle, linear energy transfer, etc.). By determining the accuracy of such models using LDEF data, and with model updates where required, improved radiation environment predictions can be made for future missions, which in turn allows improved predictions for specific radiation effects for future spacecraft components (single event upsets of microelectronics, radiation damage to focal plane arrays, noise in sensitive instrumentation, etc.). Herein, the status and results from radiation model predictions and comparisons with LDEF data are given. The calculations are made using radiation transport codes coupled with a 3-D geometry/mass model of LDEF, together with current models of the space radiation environment.

  3. Atmospheric Dispersion Model Validation in Low Wind Conditions

    SciTech Connect

    Sawyer, Patrick

    2007-11-01

    Atmospheric plume dispersion models are used for a variety of purposes including emergency planning and response to hazardous material releases, determining force protection actions in the event of a Weapons of Mass Destruction (WMD) attack and for locating sources of pollution. This study provides a review of previous studies that examine the accuracy of atmospheric plume dispersion models for chemical releases. It considers the principles used to derive air dispersion plume models and looks at three specific models currently in use: Aerial Location of Hazardous Atmospheres (ALOHA), Emergency Prediction Information Code (EPIcode) and Second Order Closure Integrated Puff (SCIPUFF). Results from this study indicate over-prediction bias by the EPIcode and SCIPUFF models and under-prediction bias by the ALOHA model. The experiment parameters were for near field dispersion (less than 100 meters) in low wind speed conditions (less than 2 meters per second).
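
    Over- and under-prediction bias of the kind reported here is commonly quantified with paired metrics such as the fractional bias and the geometric mean bias. The sketch below is illustrative only; the metric choices and the data are assumptions, not taken from the study.

        import numpy as np

        def fractional_bias(obs, pred):
            # FB > 0 indicates under-prediction, FB < 0 over-prediction
            # (with this sign convention).
            return float(2.0 * np.mean(obs - pred) / (np.mean(obs) + np.mean(pred)))

        def geometric_mean_bias(obs, pred):
            # MG > 1 indicates under-prediction on average, MG < 1 over-prediction.
            return float(np.exp(np.mean(np.log(obs) - np.log(pred))))

        obs = np.array([1.2, 0.8, 2.5, 3.1])    # hypothetical measured concentrations
        pred = np.array([1.5, 1.1, 2.2, 4.0])   # hypothetical model predictions
        print(fractional_bias(obs, pred), geometric_mean_bias(obs, pred))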

  4. A FVCOM-based unstructured grid wave, current, sediment transport model, I. Model description and validation

    NASA Astrophysics Data System (ADS)

    Wu, Lunyu; Chen, Changsheng; Guo, Peifang; Shi, Maochong; Qi, Jianhua; Ge, Jianzhong

    2011-03-01

    An effort was made to couple FVCOM (a three-dimensional (3D), unstructured grid, Finite Volume Coastal Ocean Model) and FVCOM-SWAVE (an unstructured grid, finite-volume surface wave model) for the study of nearshore ocean processes such as tides, circulation, storm surge, waves, sediment transport, and morphological evolution. The coupling between FVCOM and FVCOM-SWAVE was achieved through incorporating 3D radiation stress, wave-current-sediment-related bottom boundary layer, sea surface stress parameterizations, and morphology process. FVCOM also includes a 3D sediment transport module. With accurate fitting of irregular coastlines, the model provides a unique tool to study sediment dynamics in coastal ocean, estuaries, and wetlands where local geometries are characterized by inlets, islands, and intertidal marsh zones. The model was validated by two standard benchmark tests: 1) spectral waves approaching a mild sloping beach and 2) morphological changes of seabed in an idealized tidal inlet. In Test 1, model results were compared with both analytical solutions and laboratory experiments. A further comparison was also made with the structured grid Regional Ocean Model System (ROMS), which provides an insight into the performance of the two models with the same open boundary forcing.

  5. Web-page on UrQMD Model Validation

    E-print Network

    A. Galoyan; J. Ritman; V. Uzhinsky

    2006-05-18

    A WEB-page containing materials comparing experimental data and UrQMD model calculations has been designed. The page provides its user with a variety of tasks solved with the help of the model, the accuracy and/or quality of the experimental data description, and so on. The page can be useful for new experimental data analysis or for planning new experimental research. The UrQMD model is cited in more than 272 publications. Only 44 of them present original calculations; their main results for the model are presented on the page.

  6. The Performance of Cross-Validation Indices Used to Select among Competing Covariance Structure Models under Multivariate Nonnormality Conditions

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Stapleton, Laura M.

    2006-01-01

    Cudeck and Browne (1983) proposed using cross-validation as a model selection technique in structural equation modeling. The purpose of this study is to examine the performance of eight cross-validation indices under conditions not yet examined in the relevant literature, such as nonnormality and cross-validation design. The performance of each…

  7. Comparison of occlusal contact areas of class I and class II molar relationships at finishing using three-dimensional digital models

    PubMed Central

    Lee, Hyejoon; Kim, Minji

    2015-01-01

    Objective This study compared occlusal contact areas of ideally planned set-up and accomplished final models against the initial in class I and II molar relationships at finishing. Methods Evaluations were performed for 41 post-orthodontic treatment cases, of which 22 were clinically diagnosed as class I and the remainder were diagnosed as full cusp class II. Class I cases had four first premolars extracted, while class II cases had maxillary first premolars extracted. Occlusal contact areas were measured using a three-dimensional scanner and RapidForm 2004. Independent t-tests were used to validate comparison values between class I and II finishings. Repeated measures analysis of variance was used to compare initial, set up, and final models. Results Molars from cases in the class I finishing for the set-up model showed significantly greater contact areas than those from class II finishing (p < 0.05). The final model class I finishing showed significantly larger contact areas for the second molars (p < 0.05). The first molars of the class I finishing for the final model showed a tendency to have larger contact areas than those of class II finishing, although the difference was not statistically significant (p = 0.078). Conclusions In set-up models, posterior occlusal contact was better in class I than in class II finishing. In final models, class I finishing tended to have larger occlusal contact areas than class II finishing.

  8. Kohlberg's Moral Development Model: Cohort Influences on Validity.

    ERIC Educational Resources Information Center

    Bechtel, Ashleah

    An overview of Kohlberg's theory of moral development is presented; three interviews regarding the theory are reported, and the author's own moral development is compared to the model; finally, a critique of the theory is addressed along with recommendations for future enhancement. Lawrence Kohlberg's model of moral development, also referred to…

  9. A Macro Model of Training and Development: Validation.

    ERIC Educational Resources Information Center

    Al-Khayyat, Ridha M.; Elgamal, Mahmoud A.

    1997-01-01

    A macro model of training and development includes input (training and development climate), process, and output (individual/organizational change) indicators. A test of the model with 387 Kuwaiti bank employees supported these indicators. Managers' perceptions of training and development and the organization's return on investment were…

  10. Atmospheric Dispersion Model Validation in Low Wind Conditions

    Microsoft Academic Search

    Patrick Sawyer

    2007-01-01

    Atmospheric plume dispersion models are used for a variety of purposes including emergency planning and response to hazardous material releases, determining force protection actions in the event of a Weapons of Mass Destruction (WMD) attack, and for locating sources of pollution. This study provides a review of previous studies that examine the accuracy of atmospheric plume dispersion models for chemical

  11. Validation and Verification of LADEE Models and Software

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen

    2013-01-01

    The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a Model-Based Software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight Software requirements are prototyped and refined using the simulated models. After the model is shown to work as desired in this simulation framework, C-code software is automatically generated from the models. The generated software is then tested in real time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational sciences models are used to characterize the spacecraft simulation. A lightweight set of formal methods analysis, static analysis, formal inspection and code coverage analyses are utilized to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.

  12. A Statistical, Nonparametric Methodology for Document Degradation Model Validation

    Microsoft Academic Search

    Tapas Kanungo; Robert M. Haralick; Henry S. Baird; Werner Stuetzle; David Madigan

    2000-01-01

    Printing, photocopying, and scanning processes degrade the image quality of a document. Statistical models of these degradation processes are crucial for document image understanding research. Models allow us to predict system performance, conduct controlled experiments to study the breakdown points of the systems, create large multilingual data sets with groundtruth for training classifiers, design optimal noise removal algorithms, choose values

  13. A Statistical, Nonparametric Methodology for Document Degradation Model Validation

    Microsoft Academic Search

    Tapas Kanungo; Robert M. Haralick; Henry S. Baird; Werner Stuetzle; David Madigan

    1999-01-01

    Printing, photocopying, and scanning processes degrade the image quality of a document. Statistical models of these degradation processes are crucial for document image understanding research. Models allow us to predict system performance, conduct controlled experiments to study the breakdown points of the systems, create large multilingual data sets with groundtruth for training classifiers, design optimal noise removal algorithms, choose values

  14. An unstructured-grid, finite-volume sea ice model: Development, validation, and application

    E-print Network

    Chen, Changsheng

    finite-volume solver. Implementing UG-CICE into the Arctic Ocean finite-volume community ocean model provides a new unstructured-grid, MPI-parallelized model system to resolve the ice-ocean interaction dynamics of the sea ice concentration, ice coverage, and ice drifting in the Arctic Ocean and adjacent coastal regions

  15. Role of weather data in validating air quality models

    Microsoft Academic Search

    J. S. Sudarsan; Deepak Maurya; Ruchi Singh; O. S. Muhammad Feroz

    2010-01-01

    Air quality dispersion models have been used to predict the ground level concentrations (GLC) of air pollutants such as Particulate matter, SO2 and NOx etc. Industrial Source Complex Short Term Version 3 (ISCST3), a dispersion model developed by United States Environment Protection Agency (USEPA) is widely adopted in India to predict the GLC due to emissions from the industries. American

  16. On the validation of cloud parametrization schemes in numerical atmospheric models with satellite data from ISCCP

    NASA Astrophysics Data System (ADS)

    Meinke, I.

    2003-04-01

    A new method is presented to validate cloud parametrization schemes in numerical atmospheric models with satellite data of scanning radiometers. This method is applied to the regional atmospheric model HRM (High Resolution Regional Model) using satellite data from ISCCP (International Satellite Cloud Climatology Project). Due to the limited reliability of former validations there has been a need for developing a new validation method: Up to now differences between simulated and measured cloud properties are mostly declared as deficiencies of the cloud parametrization scheme without further investigation. Other uncertainties connected with the model or with the measurements have not been taken into account. Therefore changes in the cloud parametrization scheme based on such kind of validations might not be realistic. The new method estimates uncertainties of the model and the measurements. Criteria for comparisons of simulated and measured data are derived to localize deficiencies in the model. For a better specification of these deficiencies simulated clouds are classified regarding their parametrization. With this classification the localized model deficiencies are allocated to a certain parametrization scheme. Applying this method to the regional model HRM the quality of forecasting cloud properties is estimated in detail. The overestimation of simulated clouds in low emissivity heights especially during the night is localized as model deficiency. This is caused by subscale cloudiness. As the simulation of subscale clouds in the regional model HRM is described by a relative humidity parametrization these deficiencies are connected with this parameterization.

  17. J-Integral modeling and validation for GTS reservoirs.

    SciTech Connect

    Martinez-Canales, Monica L.; Nibur, Kevin A.; Lindblad, Alex J.; Brown, Arthur A.; Ohashi, Yuki; Zimmerman, Jonathan A.; Huestis, Edwin; Hong, Soonsung; Connelly, Kevin; Margolis, Stephen B.; Somerday, Brian P.; Antoun, Bonnie R.

    2009-01-01

    Non-destructive detection methods can reliably certify that gas transfer system (GTS) reservoirs do not have cracks larger than 5%-10% of the wall thickness. To determine the acceptability of a reservoir design, analysis must show that short cracks will not adversely affect the reservoir behavior. This is commonly done via calculation of the J-Integral, which represents the energetic driving force acting to propagate an existing crack in a continuous medium. J is then compared against a material's fracture toughness (Jc) to determine whether crack propagation will occur. While the quantification of the J-Integral is well established for long cracks, its validity for short cracks is uncertain. This report presents the results from a Sandia National Laboratories project to evaluate a methodology for performing J-Integral evaluations in conjunction with its finite element analysis capabilities. Simulations were performed to verify the operation of a post-processing code (J3D) and to assess the accuracy of this code and our analysis tools against companion fracture experiments for 2- and 3-dimensional geometry specimens. Evaluation is done for specimens composed of 21-6-9 stainless steel, some of which were exposed to a hydrogen environment, for both long and short cracks.
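
    For reference, the contour form of the J-Integral referred to above is the standard Rice definition (the report evaluates it numerically with the J3D post-processing code, so implementation details may differ):

        J = \int_{\Gamma} \left( W\,\mathrm{d}y - T_i\,\frac{\partial u_i}{\partial x}\,\mathrm{d}s \right)

    where W is the strain-energy density, T_i the traction vector, u_i the displacements, and \Gamma a contour surrounding the crack tip; crack propagation is predicted when J \ge Jc.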

  18. Validated biomechanical model for efficiency and speed of rowing.

    PubMed

    Pelz, Peter F; Vergé, Angela

    2014-10-17

    The speed of a competitive rowing crew depends on the number of crew members, their body mass, sex and the type of rowing (sweep rowing or sculling). The time-averaged speed is proportional to the rower's body mass to the 1/36th power, to the number of crew members to the 1/9th power and to the physiological efficiency (accounted for by the rower's sex) to the 1/3rd power. The quality of the rowing shell and propulsion system is captured by one dimensionless parameter that takes the mechanical efficiency, the shape and drag coefficient of the shell and the Froude propulsion efficiency into account. We derive the biomechanical equation for the speed of rowing by two independent methods and further validate it by successfully predicting race times. We derive the theoretical upper limit of the Froude propulsion efficiency for low viscous flows. This upper limit is shown to be a function solely of the velocity ratio of blade to boat speed (i.e., it is completely independent of the blade shape), a result that may also be of interest for other repetitive propulsion systems. PMID:25189093
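
    The scaling stated in the abstract can be written compactly as

        \bar{v} \;\propto\; \eta^{1/3}\, n^{1/9}\, m^{1/36}

    with n the number of crew members, m the rower body mass, and \eta the physiological efficiency; the remaining boat-quality effects enter through the single dimensionless parameter mentioned above.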

  19. On the validity of travel-time based nonlinear bioreactive transport models in steady-state flow.

    PubMed

    Sanz-Prat, Alicia; Lu, Chuanhe; Finkel, Michael; Cirpka, Olaf A

    2015-01-01

    Travel-time based models simplify the description of reactive transport by replacing the spatial coordinates with the groundwater travel time, posing a quasi one-dimensional (1-D) problem and potentially rendering the determination of multidimensional parameter fields unnecessary. While the approach is exact for strictly advective transport in steady-state flow if the reactive properties of the porous medium are uniform, its validity is unclear when local-scale mixing affects the reactive behavior. We compare a two-dimensional (2-D), spatially explicit, bioreactive, advective-dispersive transport model, considered as "virtual truth", with three 1-D travel-time based models which differ in the conceptualization of longitudinal dispersion: (i) neglecting dispersive mixing altogether, (ii) introducing a local-scale longitudinal dispersivity constant in time and space, and (iii) using an effective longitudinal dispersivity that increases linearly with distance. The reactive system considers biodegradation of dissolved organic carbon, which is introduced into a hydraulically heterogeneous domain together with oxygen and nitrate. Aerobic and denitrifying bacteria use the energy of the microbial transformations for growth. We analyze six scenarios differing in the variance of log-hydraulic conductivity and in the inflow boundary conditions (constant versus time-varying concentration). The concentrations of the 1-D models are mapped to the 2-D domain by means of the kinematic (for case i), and mean groundwater age (for cases ii & iii), respectively. The comparison between concentrations of the "virtual truth" and the 1-D approaches indicates extremely good agreement when using an effective, linearly increasing longitudinal dispersivity in the majority of the scenarios, while the other two 1-D approaches reproduce at least the concentration tendencies well. At late times, all 1-D models give valid approximations of two-dimensional transport. We conclude that the conceptualization of nonlinear bioreactive transport in complex multidimensional domains by quasi 1-D travel-time models is valid for steady-state flow fields if the reactants are introduced over a wide cross-section, flow is at quasi steady state, and dispersive mixing is adequately parametrized. PMID:25723340
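
    A generic sketch of the quasi 1-D travel-time formulation compared here, written in a standard form under the assumption that the travel time \tau replaces the spatial coordinate (the authors' exact equations may differ), is

        \frac{\partial c_k}{\partial t} + \frac{\partial c_k}{\partial \tau}
          = \frac{\partial}{\partial \tau}\!\left( D_\tau \frac{\partial c_k}{\partial \tau} \right) + r_k(\mathbf{c})

    where c_k are the reactant concentrations (dissolved organic carbon, oxygen, nitrate), D_\tau the longitudinal dispersion term expressed in travel-time units (zero for case i, constant for case ii, linearly increasing for case iii), and r_k the bioreactive source/sink terms.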

  20. Validation of mathematical models of complex endocrine-metabolic systems. A case study on a model of glucose regulation

    Microsoft Academic Search

    C. Cobelli; A. Mari

    1983-01-01

    The validation process is an essential component of the modelling of in vivo endocrine and metabolic systems. In the paper a validation study of a comprehensive model of the glucose regulation system, previously developed for intravenous testing, is performed on a new data set based on oral glucose tolerance studies. A novel approach based on a ‘partition and input/output inversion’ technique

  1. Validating models of ecosystem response to global change

    SciTech Connect

    Rastetter, E.B. [Marine Biological Lab, Woods Hole, MA (United States)

    1996-03-01

    Models are an essential component of any assessment of ecosystem response to changes in global climate and elevated atmospheric carbon dioxide concentration. The problem with these models is that their long-term predictions are impossible to test unambiguously except by allowing enough time for the full ecosystem response to develop. Unfortunately, when one must assess potentially devastating changes in the global environment, time becomes a luxury. Therefore, confidence in these models has to be built through the accumulation of fairly weak corroborating evidence rather than through a few crucial and unambiguous tests. The criteria employed to judge the value of these models are thus likely to differ greatly from those used to judge finer scale models, which are more amenable to the scientific tradition of hypothesis formulation and testing. This article looks at four categories of tests which could potentially be used to evaluate ERCC (ecosystem response to climate and carbon dioxide concentration) models and illustrates why they cannot be considered crucial tests. The synthesis role of ERCC models is then discussed, along with why they are vital to any assessment of long-term responses of ecosystems to changes in global climate and carbon dioxide concentration. 49 refs., 2 figs.

  2. Nearshore Tsunami Inundation Model Validation: Toward Sediment Transport Applications

    USGS Publications Warehouse

    Apotsos, Alex; Buckley, Mark; Gelfenbaum, Guy; Jaffe, Bruce; Vatvani, Deepak

    2011-01-01

    Model predictions from a numerical model, Delft3D, based on the nonlinear shallow water equations are compared with analytical results and laboratory observations from seven tsunami-like benchmark experiments, and with field observations from the 26 December 2004 Indian Ocean tsunami. The model accurately predicts the magnitude and timing of the measured water levels and flow velocities, as well as the magnitude of the maximum inundation distance and run-up, for both breaking and non-breaking waves. The shock-capturing numerical scheme employed describes well the total decrease in wave height due to breaking, but does not reproduce the observed shoaling near the break point. The maximum water levels observed onshore near Kuala Meurisi, Sumatra, following the 26 December 2004 tsunami are well predicted given the uncertainty in the model setup. The good agreement between the model predictions and the analytical results and observations demonstrates that the numerical solution and wetting and drying methods employed are appropriate for modeling tsunami inundation for breaking and non-breaking long waves. Extension of the model to include sediment transport may be appropriate for long, non-breaking tsunami waves. Using available sediment transport formulations, the sediment deposit thickness at Kuala Meurisi is predicted generally within a factor of 2.

  3. Small scale model for CFD validation in DAF application.

    PubMed

    Hague, J; Ta, C T; Biggs, M J; Sattary, J A

    2001-01-01

    A laboratory model is used to measure the generic flow patterns in dissolved air flotation (DAF). The Perspex model used in this study allows the use of laser Doppler velocimetry (LDV), a non-invasive, high-resolution (+/- 2 mm s-1) laser technique of flow velocity measurement. Measurement of flow velocity in the single-phase situation was first carried out. Air-saturated water was then supplied to the tank and measurements of bubble velocity in the two-phase system were made. Vertical flow re-circulation was observed in the flotation zone. In the bottom of the flotation zone (near the riser) secondary flow re-circulation was observed, but only in the two-phase system. Another phenomenon was the apparent movement of flow across the tank width, which may be due to lateral dispersion of the bubble cloud. Data from preliminary computational fluid dynamics (CFD) models were compared against this measured data in the case of the single-phase system. The CFD model incorporating a k-epsilon model of turbulence was found to give closer agreement with the measured data than the corresponding laminar flow model. The measured velocity data will be used to verify two-phase computational fluid dynamics (CFD) models of DAF. PMID:11394270

  4. A Hardware Model Validation Tool for Use in Complex Space Systems

    NASA Technical Reports Server (NTRS)

    Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.

    2010-01-01

    One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.

  5. Spectral modeling of two inline cylinders with validation in the time domain

    E-print Network

    Oswalt, Aaron Jacob

    1999-01-01

    Spectral Modeling of Two Inline Cylinders with Validation in the Time Domain. A thesis by Aaron Jacob Oswalt, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, May 1999. Major subject: Ocean Engineering.

  6. Validation of road vehicle and traffic emission models - A review and meta-analysis

    NASA Astrophysics Data System (ADS)

    Smit, Robin; Ntziachristos, Leonidas; Boulter, Paul

    2010-08-01

    Road transport is often the main source of air pollution in urban areas, and there is an increasing need to estimate its contribution precisely so that pollution-reduction measures (e.g. emission standards, scrappage programs, traffic management, ITS) are designed and implemented appropriately. This paper presents a meta-analysis of 50 studies dealing with the validation of various types of traffic emission model, including 'average speed', 'traffic situation', 'traffic variable', 'cycle variable', and 'modal' models. The validation studies employ measurements in tunnels, ambient concentration measurements, remote sensing, laboratory tests, and mass-balance techniques. One major finding of the analysis is that several models are only partially validated or not validated at all. The mean prediction errors are generally within a factor of 1.3 of the observed values for CO2, within a factor of 2 for HC and NOx, and within a factor of 3 for CO and PM, although differences as high as a factor of 5 have been reported. A positive mean prediction error for NOx (i.e. overestimation) was established for all model types and practically all validation techniques. In the case of HC, model predictions have been moving from underestimation to overestimation since the 1980s. The large prediction error for PM may be associated with different PM definitions between models and observations (e.g. size, measurement principle, exhaust/non-exhaust contribution). Statistical analyses show that the mean prediction error is generally not significantly different (p < 0.05) when the data are categorised according to model type or validation technique. Thus, there is no conclusive evidence that demonstrates that more complex models systematically perform better in terms of prediction error than less complex models. In fact, less complex models appear to perform better for PM. Moreover, the choice of validation technique does not systematically affect the result, with the exception of a CO underprediction when the validation is based on ambient concentration measurements and inverse modelling. The analysis identified two vital elements currently lacking in traffic emissions modelling: 1) guidance on the allowable error margins for different applications/scales, and 2) estimates of prediction errors. It is recommended that current and future emission models incorporate the capability to quantify prediction errors, and that clear guidelines are developed internationally with respect to expected accuracy.

  7. Modeling and experimental validation of buckling dielectric elastomer actuators

    Microsoft Academic Search

    Rocco Vertechy; Antonio Frisoli; Massimo Bergamasco; Federico Carpi; Gabriele Frediani; Danilo De Rossi

    2012-01-01

    Buckling dielectric elastomer actuators are a special type of electromechanical transducers that exploit electro-elastic instability phenomena to generate large out-of-plane axial-symmetric deformations of circular membranes made of non-conductive rubbery material. In this paper a simplified explicit analytical model and a general monolithic finite element model are described for the coupled electromechanical analysis and simulation of buckling dielectric elastomer membranes which

  8. The Chaboche nonlinear kinematic hardening model: calibration methodology and validation

    Microsoft Academic Search

    Giovanni B. Broggiato; Francesca Campana; Luca Cortese

    2008-01-01

    This work studies how a nonlinear kinematic model aimed at cyclic plasticity could be put into effect and used within a FEM code. A correct modeling of cyclic elasto-plastic behavior can be exploited in low-cycle fatigue life investigation as well as in manufacturing problems related to springback prediction. The chosen formulation has been proposed by Chaboche, and it is implemented
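
    The Chaboche formulation referred to above is commonly written, for a single backstress term, as the evolution law

        \dot{\alpha} = \tfrac{2}{3}\,C\,\dot{\varepsilon}^{p} - \gamma\,\alpha\,\dot{p}

    where \alpha is the backstress tensor, \dot{\varepsilon}^{p} the plastic strain rate, \dot{p} the accumulated plastic strain rate, and (C, \gamma) the material constants obtained from calibration; practical calibrations often superpose several such terms (\alpha = \sum_i \alpha_i). This is the standard textbook form, quoted here for orientation rather than as the paper's exact implementation.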

  9. Criticality and Safety Parameter Studies of a 3MW TRIGA MARK-II Research Reactor and Validation of the Generated Cross-Section Library and Computational Method

    Microsoft Academic Search

    S. I. Bhuiyan; M. A. W. Mondal; M. M. Sarker; M. Rahman; M. S. Shahdatullah; M. Q. Huda; T. K. Chakrobortty; M. J. H Khan

    2000-01-01

    This study deals with the analysis of some neutronics and safety parameters of the current core of a 3-MW TRIGA MARK-II research reactor and validation of the generated macroscopic cross-section library and calculational techniques by benchmarking with experimental, operational, and available Safety Analysis Report (SAR) values. The overall strategy is: (a) generation of the problem-dependent cross-section library from basic Evaluated

  10. Validation of absolute axial neutron flux distribution calculations with MCNP with 197Au(n,γ)198Au reaction rate distribution measurements at the JSI TRIGA Mark II reactor.

    PubMed

    Radulović, Vladimir; Štancar, Žiga; Snoj, Luka; Trkov, Andrej

    2014-02-01

    The calculation of axial neutron flux distributions with the MCNP code at the JSI TRIGA Mark II reactor has been validated with experimental measurements of the (197)Au(n,γ)(198)Au reaction rate. The calculated absolute reaction rate values, scaled according to the reactor power and corrected for the flux redistribution effect, are in good agreement with the experimental results. The effect of different cross-section libraries on the calculations has been investigated and shown to be minor. PMID:24316530

  11. H II Galaxies versus Photoionization Models for Evolving Starbursts

    NASA Astrophysics Data System (ADS)

    Stasińska, Grazyna; Leitherer, Claus

    1996-12-01

    We have constructed a grid of models representing an H II region produced by an evolving starburst embedded in a gas cloud of the same metallicity. The models were produced with the spectral energy distribution from a stellar evolutionary synthesis code as input for a photoionization code that computes the emission-line strengths and equivalent widths. Stellar evolution was assumed to proceed according to the models of Maeder. The radiation field was computed using the Kurucz model atmospheres, supplemented by the expanding non-LTE atmospheres of Schmutz et al. for stellar evolutionary phases with strong winds, making a significant improvement over previous works using classical static, plane-parallel model atmospheres. Models for stellar interiors and atmospheres being still in a phase of continuous improvement, our population synthesis models reflect the state of the art in 1995. The models were used to analyze a sample of 100 H II galaxies for which both the Hβ equivalent widths and the [O III] λ4363 line intensities were available (the latter allowing a direct determination of the oxygen abundances based on measured electron temperatures). Because of these selection criteria, the results of our study are restricted to metal-poor objects with metallicities less than about one-half solar. The confrontation of models with observations is presented in six diagnostic diagrams involving hydrogen and oxygen lines. Our approach is in many respects much more constraining for the models than previous studies on H II regions ionized by evolving starbursts. We found that the standard starburst model (instantaneous burst of star formation with a Salpeter initial mass function and an upper cutoff mass of 100 Msun) reproduces the observational constraints provided by the nebular emission lines extremely well if selection effects are taken into account. Models with a unique initial mass function are consistent with essentially all observational constraints over a metallicity range from ˜0.025 to ˜0.25 Zsun. In contrast, models with a Salpeter-type initial mass function truncated at 50 Msun are not consistent with the observations: they violate the observed distribution of Hβ equivalent widths. The mean effective temperature of the ionizing star cluster declines from about 50,000 to 40,000 K during the time when the line [O III] λ4363 is strong enough to be measurable. Within the framework of our models, and in the abundance range where comparisons were made with observations, there is no significant evidence for a variation of the star cluster mean effective temperature with metallicity, other than the one generated by the metallicity-dependent stellar atmospheric and evolutionary models. A very narrow range in ionization parameters is required to reproduce the observed line ratios. This should set limits on the dynamical evolution of giant H II regions. We find a large fraction of H II galaxies having [O I] λ6300/Hβ ratios larger than 0.02. Even models with the lowest ionization parameters considered do not produce these large ratios. An approximate estimate of the mechanical energy released by winds and supernovae during later phases of the starburst leads to the suggestion that the [O I] λ6300/Hβ ratio, in contrast to other line ratios studied, is significantly affected by shocks.
The small spread in the free parameters necessary to reproduce the emission-line properties of metal-poor H II galaxies allows us to propose a new indicator of the starburst age: the [O III] λ5007 equivalent width is quite robust and can be used up to larger ages than the traditional Hβ equivalent width for high signal-to-noise spectra. This indicator should also prove useful for low signal-to-noise spectra of star-forming galaxies at higher redshift, because of the large value of [O III] λ5007/Hβ in starbursts younger than 5 Myr.

  12. Elasto-dynamic analysis of a gear pump-Part III: Experimental validation procedure and model extension to helical gears

    NASA Astrophysics Data System (ADS)

    Mucchi, E.; Dalpiaz, G.

    2015-01-01

    This work concerns external gear pumps for automotive applications, which operate at high speed and low pressure. In previous works of the authors (Part I and II, [1,2]), a non-linear lumped-parameter kineto-elastodynamic model for the prediction of the dynamic behaviour of external gear pumps was presented. It takes into account the most important phenomena involved in the operation of this kind of machine. The two main sources of noise and vibration are considered: pressure pulsation and gear meshing. The model has been used in order to foresee the influence of working conditions and design modifications on vibration generation. The model's experimental validation is a difficult task. Thus, Part III proposes a novel methodology for the validation carried out by the comparison of simulations and experimental results concerning forces and moments: it deals with the external and inertial components acting on the gears, estimated by the model, and the reactions and inertial components on the pump casing and the test plate, obtained by measurements. The validation is carried out comparing the level of the time synchronous average in the time domain and the waterfall maps in the frequency domain, with particular attention to identifying system resonances. The validation results are satisfactory globally, but discrepancies are still present. Moreover, the assessed model has been properly modified for the application to a new virtual pump prototype with helical gears in order to foresee gear accelerations and dynamic forces. Part IV is focused on improvements in the modelling and analysis of the phenomena bound to the pressure evolution around the gears in order to achieve results closer to the measured values. As a matter of fact, the simulation results have shown that a variable meshing stiffness has a notable contribution to the dynamic behaviour of the pump but this is not as important as the pressure phenomena. As a consequence, the original model was modified with the aim of improving the calculation of pressure forces and torques. The improved pressure formulation includes several phenomena not considered in the previous one, such as the variable pressure evolution at input and output ports, as well as an accurate description of the trapped volume and its connections with high and low pressure chambers. The importance of these improvements is highlighted by comparison with experimental results, showing satisfactory matching.

  13. CFD Modeling Needs and What Makes a Good Supersonic Combustion Validation Experiment

    NASA Technical Reports Server (NTRS)

    Gaffney, Richard L., Jr.; Cutler, Andrew D.

    2005-01-01

    If a CFD code/model developer is asked what experimental data he wants to validate his code or numerical model, his answer will be: "Everything, everywhere, at all times." Since this is not possible, practical, or even reasonable, the developer must understand what can be measured within the limits imposed by the test article, the test location, the test environment and the available diagnostic equipment. At the same time, it is important for the expermentalist/diagnostician to understand what the CFD developer needs (as opposed to wants) in order to conduct a useful CFD validation experiment. If these needs are not known, it is possible to neglect easily measured quantities at locations needed by the developer, rendering the data set useless for validation purposes. It is also important for the experimentalist/diagnostician to understand what the developer is trying to validate so that the experiment can be designed to isolate (as much as possible) the effects of a particular physical phenomena that is associated with the model to be validated. The probability of a successful validation experiment can be greatly increased if the two groups work together, each understanding the needs and limitations of the other.

  14. Validating DICOM content in a remote storage model.

    PubMed

    Mongkolwat, Pattanasak; Bhalodia, Pankit; Gehl, James A; Channin, David S

    2005-03-01

    Verifying the integrity of DICOM files transmitted between separate archives (eg, storage service providers, network attached storage, or storage area networks) is of critical importance. The software application described in this article retrieves a specified number of DICOM studies from two different DICOM storage applications; the primary picture archiving and communication system (PACS) and an off-site long-term archive. The system includes a query/retrieve (Q/R) module, storage service class provider (SCP), a DICOM comparison module, and a graphical user interface. The system checks the two studies for DICOM 3.0 compliance and then verifies that the DICOM data elements and pixel data are identical. Discrepancies in the two data sets are recorded with the data elements (tag number, value representation, value length, and value field) and pixel data (pixel value and pixel location) in question. The system can be operated automatically, in batch mode, and manually to meet a wide variety of use cases. We ran this program on a 15% statistical sample of 50,000 studies (7500 studies examined). We found 2 pixel data mismatches (resolved on retransmission) and 831 header element mismatches. We subsequently ran the program against a smaller batch of 1000 studies, identifying no pixel data mismatches and 958 header element mismatches. Although we did not find significant issues in our limited study, given other incidents that we have experienced when moving images between systems, we conclude that it is vital to maintain an ongoing, automatic, systematic validation of DICOM transfers so as to be proactive in preventing possibly catastrophic data loss. PMID:15645332
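
    A minimal sketch of the header-and-pixel comparison idea, assuming the pydicom library and hypothetical file paths (this is illustrative, not the article's software), could look like the following.

        import numpy as np
        import pydicom

        def compare_dicom(path_a, path_b):
            ds_a = pydicom.dcmread(path_a)
            ds_b = pydicom.dcmread(path_b)

            # Header comparison: record tags whose value differs or that are missing.
            mismatches = []
            tags = set(el.tag for el in ds_a) | set(el.tag for el in ds_b)
            for tag in sorted(tags):
                if tag == pydicom.tag.Tag(0x7FE0, 0x0010):   # handle PixelData separately
                    continue
                a = ds_a.get(tag)
                b = ds_b.get(tag)
                if a is None or b is None or a.value != b.value:
                    mismatches.append((tag, a, b))

            # Pixel comparison: exact match of the decoded pixel arrays.
            pixels_equal = np.array_equal(ds_a.pixel_array, ds_b.pixel_array)
            return mismatches, pixels_equal

        # Hypothetical paths to the same study retrieved from two archives.
        header_diffs, pixels_ok = compare_dicom("pacs/ct_001.dcm", "archive/ct_001.dcm")
        print(len(header_diffs), "header mismatches; pixel data identical:", pixels_ok)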

  15. Root zone water quality model (RZWQM2): Model use, calibration and validation

    USGS Publications Warehouse

    Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.

    2012-01-01

    The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.

  16. Verifying Ptolemy II Discrete-Event Models Using Real-Time Maude

    Microsoft Academic Search

    Kyungmin Bae; Peter Csaba Ölveczky; Thomas Huining Feng; Stavros Tripakis

    2009-01-01

    This paper shows how Ptolemy II discrete-event (DE) models can be formally analyzed using Real-Time Maude. We formalize in\\u000a Real-Time Maude the semantics of a subset of hierarchical Ptolemy II DE models, and explain how the code generation infrastructure\\u000a of Ptolemy II has been used to automatically synthesize a Real-Time Maude verification model from a Ptolemy II design model.\\u000a This

  17. Validating and Applying Numerical Models for Current Energy Capture Devices

    NASA Astrophysics Data System (ADS)

    Hirlinger, C. Y.; James, S. C.; Cardenas, M. P.

    2014-12-01

    With the growing focus on renewable energy, there is increased interest in modeling and optimizing current energy capture (CEC) devices. The interaction of multiple wakes from CEC devices can affect optimal placement strategy, and issues of environmental impacts on sediment transport and large-scale flow should be examined. Numerical models of four flume-scale experiments were built using Sandia National Laboratories' Environmental Fluid Dynamics Code (SNL-EFDC.) Model predictions were calibrated against measured velocities to estimate flow and turbine parameters. The velocity deficit was most sensitive to ?md, the dimensionless Smagorinsky constant related to horizontal momentum diffusion, and to CPB, the dimensionless partial blockage coefficient accounting for the physical displacement of fluid due to turbine blockage. Calibration to four data sets showed ?md ranged from 0.3 to 1.0 while CPB ranged from 40 to 300. Furthermore, results of parameter estimation indicated centerline velocity data were insufficient to uniquely identify the turbulence, flow, and device parameters; cross-channel velocity measurements at multiple locations downstream yielded important calibration information and it is likely that vertical velocity profiles would also be useful to the calibration effort. In addition to flume scale models, a full-scale implementation of a CEC device at Roza Canal in Yakima, WA was developed. The model was analyzed to find an appropriate grid size and to understand the sensitivity of downstream velocity profiles to horizontal momentum diffusion and partial blockage coefficients. Preliminary results generally showed that as CPB increased the wake was enhanced vertically.

  18. Validation of modelled forest biomass in Germany using BETHY/DLR

    NASA Astrophysics Data System (ADS)

    Tum, M.; Buchhorn, M.; Günther, K. P.; Haller, B. C.

    2011-07-01

    We present a new approach to the validation of modelled forest Net Primary Productivity (NPP), using empirical data on the mean annual increment, or MAI, in above-ground forest stock. The dynamic biomass model BETHY/DLR is used to estimate the NPP of forest areas in Germany, driven by remote sensing data from VEGETATION, meteorological data from the European Centre for Medium-Range Weather Forecasts (ECMWF), and additional tree coverage information from the MODIS Vegetation Continuous Field (VCF). The output of BETHY/DLR, Gross Primary Productivity (GPP), is converted to NPP by subtracting the cumulative plant maintenance and growth respiration, and then validated against MAI data derived from German forestry inventories. Validation is conducted for 2000 and 2001 by converting modelled NPP to stem volume at a regional level. Our analysis shows that the presented method fills an important gap in methods for validating modelled NPP against empirically derived data. In addition, we examine theoretical energy potentials calculated from the modelled and validated NPP, assuming sustainable forest management and using species-specific tree heating values. Such estimated forest biomass energy potentials play an important role in the sustainable energy debate.
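
    As a hedged numerical illustration of the conversion chain described above (GPP to NPP by subtracting autotrophic respiration, then NPP to stem volume for comparison with inventory MAI), the sketch below uses invented per-hectare values and generic conversion factors; the actual BETHY/DLR respiration formulation and the species-specific factors from the German inventories are not reproduced here.

        # Illustrative GPP -> NPP -> stem-volume conversion for one hectare of
        # forest. All numbers (GPP, respiration terms, stem allocation, carbon
        # fraction, wood density) are assumptions, not BETHY/DLR values.

        gpp = 16.0            # t C / ha / yr, assumed gross primary productivity
        r_maintenance = 6.5   # t C / ha / yr, assumed maintenance respiration
        r_growth = 2.0        # t C / ha / yr, assumed growth respiration

        npp = gpp - r_maintenance - r_growth        # net primary productivity, t C / ha / yr

        stem_fraction = 0.35      # share of NPP allocated to stem wood (assumed)
        carbon_fraction = 0.50    # t C per t dry biomass (common default)
        wood_density = 0.45       # t dry biomass per m^3 stem wood (assumed, species-specific)

        stem_biomass = npp * stem_fraction / carbon_fraction   # t dry matter / ha / yr
        stem_volume = stem_biomass / wood_density               # m^3 / ha / yr

        print(f"NPP: {npp:.1f} t C/ha/yr")
        print(f"stem volume increment: {stem_volume:.1f} m^3/ha/yr (compare with inventory MAI)")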

  19. Modeling dissolved organic carbon in temperate forest soils: TRIPLEX-DOC model development and validation

    NASA Astrophysics Data System (ADS)

    Wu, H.; Peng, C.; Moore, T. R.; Hua, D.; Li, C.; Zhu, Q.; Peichl, M.; Arain, M. A.; Guo, Z.

    2014-05-01

    Even though dissolved organic carbon (DOC) is the most actively cycled form of carbon (C) in soil organic carbon (SOC) pools, it receives little attention in global C budgets. DOC fluxes are critical inputs to aquatic ecosystems and contribute to the C balance of terrestrial ecosystems, but few ecosystem models have attempted to integrate DOC dynamics into terrestrial C cycling. This study introduces a new process-based model, TRIPLEX-DOC, that is capable of estimating DOC dynamics in forest soils by incorporating both ecological drivers and biogeochemical processes. TRIPLEX-DOC was developed from Forest-DNDC, a biogeochemical model simulating C and nitrogen (N) dynamics, coupled with a new DOC process module that predicts metabolic transformations, sorption/desorption, and DOC leaching in forest soils. The model was validated against field observations of DOC concentrations and fluxes at white pine forest stands located in southern Ontario, Canada. The model was able to simulate the seasonal dynamics and magnitudes of DOC concentrations observed within different soil layers, as well as DOC leaching across the age sequence of these forests. Additionally, TRIPLEX-DOC estimated the effect of forest harvesting on DOC leaching, with a significant increase following harvesting, illustrating that land use change is of critical importance in regulating DOC leaching in temperate forests, an important source of C input to aquatic ecosystems.
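
    A hedged sketch of the kind of processes such a DOC module couples: first-order DOC production from decomposing SOC, linear sorption/desorption partitioning between solution and the mineral phase, and leaching as a drainage flux acting on the dissolved pool. The rate constants, partition coefficient, and soil properties below are invented for illustration and are not the TRIPLEX-DOC parameterization.

        # Toy daily DOC balance for one soil layer: production from SOC
        # decomposition, linear sorption equilibrium, and leaching with drainage.
        # All parameter values are illustrative assumptions, not TRIPLEX-DOC.

        soc = 60000.0        # g C / m^2, soil organic carbon stock
        k_prod = 1e-6        # day^-1, fraction of SOC released as DOC per day (assumed)
        kd = 5.0             # L / kg, linear sorption partition coefficient (assumed)
        bulk_density = 1.3   # kg / L, soil bulk density
        water_content = 0.30 # L water / L soil (volumetric)
        layer_depth = 0.2    # m

        doc_total = 5.0      # g C / m^2, DOC pool (sorbed + dissolved)

        for day in range(1, 366):
            # Production: small first-order release of DOC from the SOC pool.
            doc_total += k_prod * soc

            # Linear sorption: fraction of the DOC pool that stays in solution.
            dissolved_fraction = water_content / (water_content + kd * bulk_density)
            doc_dissolved = doc_total * dissolved_fraction          # g C / m^2

            # Leaching: daily drainage (mm) removes the same fraction of the
            # dissolved pool as it removes of the stored soil water.
            drainage_mm = 1.5                                        # assumed daily drainage
            water_column_mm = water_content * layer_depth * 1000.0   # stored water, mm
            doc_leached = doc_dissolved * min(1.0, drainage_mm / water_column_mm)
            doc_total -= doc_leached

        print(f"end-of-year DOC pool: {doc_total:.2f} g C/m^2")
        print(f"last daily leaching flux: {doc_leached:.4f} g C/m^2/day")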

  20. Modeling dissolved organic carbon in temperate forest soils: TRIPLEX-DOC model development and validation

    NASA Astrophysics Data System (ADS)

    Wu, H.; Peng, C.; Moore, T. R.; Hua, D.; Li, C.; Zhu, Q.; Peichl, M.; Arain, M. A.; Guo, Z.

    2013-06-01

    Even though dissolved organic carbon (DOC) is the most actively cycled form of carbon (C) in soil organic carbon (SOC) pools, it is missing from the global C budget. DOC fluxes are critical inputs to aquatic ecosystems and contribute to the C balance of terrestrial ecosystems. Only a few ecosystem models have attempted to integrate DOC dynamics into terrestrial C cycling. This study introduces a new process-based model, TRIPLEX-DOC, that is capable of estimating DOC dynamics in forest soils by incorporating both ecological drivers and biogeochemical processes. TRIPLEX-DOC was developed from Forest-DNDC, a biogeochemical model simulating C and nitrogen (N) dynamics, coupled with a new DOC process module that predicts metabolic transformations, sorption/desorption, and DOC leaching in forest soils. The model was validated against field observations of DOC concentrations and fluxes at white pine forest stands located in southern Ontario, Canada. The model was able to simulate the seasonal dynamics and magnitudes of DOC concentrations observed within different soil layers, as well as DOC leaching across the age-sequence of these forests. Additionally, TRIPLEX-DOC estimated the effect of forest harvesting on DOC leaching, with a significant increase following harvesting, illustrating that change in land use is of critical importance in regulating DOC leaching in temperate forests, an important source of C input to aquatic ecosystems.