Science.gov

Sample records for ii model validation

  1. Filament winding cylinders. II - Validation of the process model

    NASA Technical Reports Server (NTRS)

    Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.

    1990-01-01

    Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured temperatures and strains, indicating that the model is a useful representation of the winding and curing processes.

  2. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    SciTech Connect

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalyses estimates of precipitation resembling the AMIP II simulations more than the chosen reference data. In aggregate, the simulations of land-surface latent and sensible
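
    The validation metric described above reduces to an aggregated root-mean-square error between simulated and reference fields, with the spread among alternative reference data sets serving as a proxy for observational uncertainty. A minimal sketch in Python, with invented array shapes and error magnitudes:

    ```python
    import numpy as np

    def rmse(simulated, reference, weights=None):
        """Spatio-temporally aggregated root-mean-square error."""
        err2 = (np.asarray(simulated) - np.asarray(reference)) ** 2
        return float(np.sqrt(np.average(err2, weights=weights)))

    # Simulated monthly surface air temperature on a (time, lat, lon) grid.
    rng = np.random.default_rng(0)
    truth = 280 + 10 * rng.standard_normal((12, 4, 8))
    model = truth + rng.normal(0.0, 1.5, truth.shape)    # model error ~1.5 K
    alt_ref = truth + rng.normal(0.0, 0.5, truth.shape)  # second "observation"

    print("model vs reference RMSE:", rmse(model, truth))
    print("observational uncertainty proxy:", rmse(alt_ref, truth))
    ```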

  3. Validated Competing Event Model for the Stage I-II Endometrial Cancer Population

    SciTech Connect

    Carmona, Ruben; Gulaya, Sachin; Murphy, James D.; Rose, Brent S.; Wu, John; Noticewala, Sonal; McHale, Michael T.; Yashar, Catheryn M.; Vaida, Florin; Mell, Loren K.

    2014-07-15

    Purpose/Objective(s): Early-stage endometrial cancer patients are at higher risk of noncancer mortality than of cancer mortality. Competing event models incorporating comorbidity could help identify women most likely to benefit from treatment intensification. Methods and Materials: 67,397 women with stage I-II endometrioid adenocarcinoma after total hysterectomy diagnosed from 1988 to 2009 were identified in Surveillance, Epidemiology, and End Results (SEER) and linked SEER-Medicare databases. Using demographic and clinical information, including comorbidity, we sought to develop and validate a risk score to predict the incidence of competing mortality. Results: In the validation cohort, increasing competing mortality risk score was associated with increased risk of noncancer mortality (subdistribution hazard ratio [SDHR], 1.92; 95% confidence interval [CI], 1.60-2.30) and decreased risk of endometrial cancer mortality (SDHR, 0.61; 95% CI, 0.55-0.78). Controlling for other variables, Charlson Comorbidity Index (CCI) = 1 (SDHR, 1.62; 95% CI, 1.45-1.82) and CCI >1 (SDHR, 3.31; 95% CI, 2.74-4.01) were associated with increased risk of noncancer mortality. The 10-year cumulative incidences of competing mortality within low-, medium-, and high-risk strata were 27.3% (95% CI, 25.2%-29.4%), 34.6% (95% CI, 32.5%-36.7%), and 50.3% (95% CI, 48.2%-52.6%), respectively. With increasing competing mortality risk score, we observed a significant decline in omega (ω), indicating a diminishing likelihood of benefit from treatment intensification. Conclusion: Comorbidity and other factors influence the risk of competing mortality among patients with early-stage endometrial cancer. Competing event models could improve our ability to identify patients likely to benefit from treatment intensification.
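
    The 10-year cumulative incidences quoted above come from competing-risks analysis. As a generic illustration (not the authors' SEER-Medicare analysis), a minimal nonparametric cumulative incidence estimator on synthetic data:

    ```python
    import numpy as np

    def cumulative_incidence(time, event, cause):
        """event: 0=censored, 1,2,...=cause codes; returns times, CIF(cause)."""
        time = np.asarray(time, float)
        event = np.asarray(event, int)
        order = np.argsort(time)
        time, event = time[order], event[order]
        n = len(time)
        times, cif = [], []
        surv = 1.0     # overall event-free survival just before t
        total = 0.0
        for i, t in enumerate(time):
            at_risk = n - i
            if event[i] != 0:
                if event[i] == cause:
                    total += surv / at_risk     # increment CIF for this cause
                surv *= 1.0 - 1.0 / at_risk     # any event depletes survival
            times.append(t)
            cif.append(total)
        return np.array(times), np.array(cif)

    rng = np.random.default_rng(1)
    t = rng.exponential(10, 500)
    e = rng.choice([0, 1, 2], 500, p=[0.2, 0.5, 0.3])   # 1=cancer, 2=competing
    times, cif2 = cumulative_incidence(t, e, cause=2)
    idx = min(np.searchsorted(times, 10.0), len(times) - 1)
    print("CIF(competing mortality) at 10 time units:", round(float(cif2[idx]), 3))
    ```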

  4. Update: Validation, Edits, and Application Processing. Phase II and Error-Prone Model Report.

    ERIC Educational Resources Information Center

    Gray, Susan; And Others

    An update to the Validation, Edits, and Application Processing and Error-Prone Model Report (Section 1, July 3, 1980) is presented. The objective is to present the most current data obtained from the June 1980 Basic Educational Opportunity Grant applicant and recipient files and to determine whether the findings reported in Section 1 of the July…

  5. Fluids with competing interactions. II. Validating a free energy model for equilibrium cluster size

    NASA Astrophysics Data System (ADS)

    Bollinger, Jonathan A.; Truskett, Thomas M.

    2016-08-01

    Using computer simulations, we validate a simple free energy model that can be analytically solved to predict the equilibrium size of self-limiting clusters of particles in the fluid state governed by a combination of short-range attractive and long-range repulsive pair potentials. The model is a semi-empirical adaptation and extension of the canonical free energy-based result due to Groenewold and Kegel [J. Phys. Chem. B 105, 11702-11709 (2001)], where we use new computer simulation data to systematically improve the cluster-size scalings with respect to the strengths of the competing interactions driving aggregation. We find that one can adapt a classical nucleation-like theory for small energetically frustrated aggregates provided one appropriately accounts for a size-dependent, microscopic energy penalty of interface formation, which requires new scaling arguments. This framework is verified in part by considering the extensive scaling of intracluster bonding, where we uncover a superlinear scaling regime distinct from (and located between) the known regimes for small and large aggregates. We validate our model based on comparisons against approximately 100 different simulated systems comprising compact spherical aggregates with characteristic (terminal) sizes between six and sixty monomers, which correspond to wide ranges in experimentally controllable parameters.
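
    The core idea, stripped of the paper's fitted scalings, is that the equilibrium cluster size minimizes a free energy balancing bulk attraction, a size-dependent interface penalty, and accumulated long-range repulsion. A hedged sketch with illustrative coefficients:

    ```python
    import numpy as np

    def cluster_free_energy(n, eps=3.0, gamma=4.0, lam=0.02):
        # bulk attraction ~ -eps*n, interface penalty ~ gamma*n^(2/3),
        # accumulated repulsion ~ lam*n^2 (a Groenewold-Kegel-like charge term)
        return -eps * n + gamma * n ** (2.0 / 3.0) + lam * n ** 2

    n = np.arange(1, 200)
    F = cluster_free_energy(n)
    n_star = n[np.argmin(F)]
    print("equilibrium cluster size n* =", n_star)
    ```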

  6. Assessing the wildlife habitat value of New England salt marshes: II. Model testing and validation.

    PubMed

    McKinney, Richard A; Charpentier, Michael A; Wigand, Cathleen

    2009-07-01

    We tested a previously described model to assess the wildlife habitat value of New England salt marshes by comparing modeled habitat values and scores with bird abundance and species richness at sixteen salt marshes in Narragansett Bay, Rhode Island USA. As a group, wildlife habitat value assessment scores for the marshes ranged from 307-509, or 31-67% of the maximum attainable score. We recorded 6 species of wading birds (Ardeidae; herons, egrets, and bitterns) at the sites during biweekly surveys. Species richness (r²=0.24, F=4.53, p=0.05) and abundance (r²=0.26, F=5.00, p=0.04) of wading birds significantly increased with increasing assessment score. We optimized our assessment model for wading birds by using Akaike information criteria (AIC) to compare a series of models comprised of specific components and categories of our model that best reflect their habitat use. The model incorporating pre-classification, wading bird habitat categories, and natural land surrounding the sites was substantially supported by AIC analysis as the best model. The abundance of wading birds significantly increased with increasing assessment scores generated with the optimized model (r²=0.48, F=12.5, p=0.003), demonstrating that optimizing models can be helpful in improving the accuracy of the assessment for a given species or species assemblage. In addition to validating the assessment model, our results show that in spite of their urban setting, our study marshes provide substantial wildlife habitat value. This suggests that even small wetlands in highly urbanized coastal settings can provide important wildlife habitat value if key habitat attributes (e.g., natural buffers, habitat heterogeneity) are present.
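
    Model selection of the kind described above ranks candidate models by AIC and Akaike weights. A short sketch with invented log-likelihoods and parameter counts (the candidate names only loosely echo the study's model components):

    ```python
    import numpy as np

    def aic(log_lik, k):
        return 2 * k - 2 * log_lik

    candidates = {                       # name: (max log-likelihood, n params)
        "full model": (-52.1, 6),
        "wading-bird categories + natural buffer": (-53.0, 3),
        "pre-classification only": (-58.4, 2),
    }
    scores = {m: aic(ll, k) for m, (ll, k) in candidates.items()}
    delta = {m: s - min(scores.values()) for m, s in scores.items()}
    w = np.exp([-0.5 * d for d in delta.values()])
    weights = dict(zip(delta, w / w.sum()))
    for m in scores:
        print(f"{m}: AIC={scores[m]:.1f}  dAIC={delta[m]:.1f}  weight={weights[m]:.2f}")
    ```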

  7. SAGE II aerosol data validation based on retrieved aerosol model size distribution from SAGE II aerosol measurements

    NASA Technical Reports Server (NTRS)

    Wang, Pi-Huan; Mccormick, M. P.; Mcmaster, L. R.; Chu, W. P.; Swissler, T. J.; Osborn, M. T.; Russell, P. B.; Oberbeck, V. R.; Livingston, J.; Rosen, J. M.

    1989-01-01

    Consideration is given to aerosol correlative measurements experiments for the Stratospheric Aerosol and Gas Experiment (SAGE) II, conducted between November 1984 and July 1986. The correlative measurements were taken with an impactor/laser probe, a dustsonde, and an airborne 36-cm lidar system. The primary aerosol quantities measured by the ground-based instruments are compared with those calculated from the aerosol size distributions from SAGE II aerosol extinction measurements. Good agreement is found between the two sets of measurements.

  8. Predictive modeling of infrared radiative heating in tomato dry-peeling process: Part II. Model validation and sensitivity analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double-sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...

  9. Person Heterogeneity of the BDI-II-C and Its Effects on Dimensionality and Construct Validity: Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Wu, Pei-Chen; Huang, Tsai-Wei

    2010-01-01

    This study applied the mixed Rasch model to investigate person heterogeneity of the Beck Depression Inventory-II-Chinese version (BDI-II-C) and its effects on dimensionality and construct validity. Person heterogeneity was reflected by two latent classes that differ qualitatively. Additionally, person heterogeneity adversely affected the…

  10. A new 3D finite element model of the IEC 60318-1 artificial ear: II. Experimental and numerical validation

    NASA Astrophysics Data System (ADS)

    Bravo, Agustín; Barham, Richard; Ruiz, Mariano; López, Juan Manuel; De Arcas, Guillermo; Alonso, Jesus

    2012-12-01

    In part I, the feasibility of using three-dimensional (3D) finite elements (FEs) to model the acoustic behaviour of the IEC 60318-1 artificial ear was studied and the numerical approach compared with classical lumped elements modelling. It was shown that by using a more complex acoustic model that took account of thermo-viscous effects, geometric shapes and dimensions, it was possible to develop a realistic model. This model then had clear advantages in comparison with the models based on equivalent circuits using lumped parameters. In fact, results from FE modelling produce a better understanding of the physical phenomena produced inside ear simulator couplers, facilitating spatial and temporal visualization of the sound fields produced. The objective of this study (part II) is to extend the investigation by validating the numerical calculations against measurements on an ear simulator conforming to IEC 60318-1. For this purpose, an appropriate commercially available device is taken and a complete 3D FE model developed for it. The numerical model is based on key dimensional data obtained with a non-destructive x-ray inspection technique. Measurements of the acoustic transfer impedance have been carried out on the same device at a national measurement institute using the method embodied in IEC 60318-1. Having accounted for the actual device dimensions, the thermo-viscous effects inside narrow slots and holes and environmental conditions, the results of the numerical modelling were found to be in good agreement with the measured values.

  11. Validating the Serpent Model of FiR 1 Triga Mk-II Reactor by Means of Reactor Dosimetry

    NASA Astrophysics Data System (ADS)

    Viitanen, Tuomas; Leppänen, Jaakko

    2016-02-01

    A model of the FiR 1 Triga Mk-II reactor has been previously generated for the Serpent Monte Carlo reactor physics and burnup calculation code. In the current article, this model is validated by comparing the predicted reaction rates of nickel and manganese at 9 different positions in the reactor to measurements. In addition, track-length estimators are implemented in Serpent 2.1.18 to increase its performance in dosimetry calculations. The usage of the track-length estimators is found to decrease the reaction rate calculation times by a factor of 7-8 compared to the standard estimator type in Serpent, the collision estimator. The differences in the reaction rates between the calculation and the measurement are below 20%.
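
    The difference between the two tally types mentioned above can be seen in a toy Monte Carlo: both the track-length and collision estimators are unbiased for the cell-integrated flux, but they score differently and converge at different rates. This is a generic illustration, not Serpent's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    sigma_t = 0.5      # total cross-section (1/cm), purely absorbing medium
    width = 2.0        # 1D slab tally cell; source at x=0, flight in +x
    n = 200_000

    d = rng.exponential(1.0 / sigma_t, n)   # distance to first collision

    # Track-length estimator: every history scores its path length in the cell.
    track = np.minimum(d, width)
    # Collision estimator: scores 1/sigma_t only if the collision is inside.
    coll = (d < width) / sigma_t

    for name, s in [("track-length", track), ("collision", coll)]:
        print(f"{name:12s} flux*V = {s.mean():.4f} "
              f"+/- {s.std(ddof=1) / np.sqrt(n):.4f}")
    ```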

  12. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 2 (Appendices I, section 5 and II, section 1)

    SciTech Connect

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquor submodels, and fundamental knowledge to the US kraft pulp industry. Volume 2 contains the last section of Appendix I, Radiative heat transfer in kraft recovery boilers, and the first section of Appendix II, The effect of temperature and residence time on the distribution of carbon, sulfur, and nitrogen between gaseous and condensed phase products from low temperature pyrolysis of kraft black liquor.

  13. Assessing Wildlife Habitat Value of New England Salt Marshes: II. Model Testing and Validation

    EPA Science Inventory

    We test a previously described model to assess the wildlife habitat value of New England salt marshes by comparing modeled habitat values and scores with bird abundance and species richness at sixteen salt marshes in Narragansett Bay, Rhode Island USA. Assessment scores ranged f...

  14. Modeling the influence of cyclodextrins on oral absorption of low solubility drugs: II. Experimental validation.

    PubMed

    Gamsiz, Ece Dilber; Miller, Lee; Thombre, Avinash G; Ahmed, Imran; Carrier, Rebecca Lyn

    2010-02-01

    A model was developed for predicting the influence of cyclodextrins (CDs) delivered as a physical mixture with drug on oral absorption. CDs are cyclic oligosaccharides which form inclusion complexes with many drugs and are often used as solubilizing agents. The purpose of this work is to compare the simulation predictions with in vitro as well as in vivo experimental results to test the model's ability to capture the influence of CD on key processes in the gastrointestinal (GI) tract environment. Dissolution and absorption kinetics of low solubility drugs (Naproxen and Nifedipine) were tested in the presence and absence of CD in a simulated gastrointestinal environment. Model predictions were also compared with in vivo experimental results (Glibenclamide and Carbamazepine) from the literature to demonstrate the model's ability to predict oral bioavailability. Comparisons of simulation and experimental results indicate that a model incorporating the influence of CD (delivered as a physical mixture) on dissolution kinetics and binding of neutral drug can predict trends in the influence of CD on bioavailability. Overall, a minimal effect of CD dosed as a physical mixture was observed and predicted. Modeling may aid in enabling rational design of CD containing formulations.
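
    A hedged sketch of this model class (not the published model itself): Noyes-Whitney dissolution with the apparent solubility raised by 1:1 cyclodextrin complexation, and first-order absorption of the free drug only. All parameters are invented:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    K = 5.0       # assumed 1:1 binding constant (1/mM)
    cd = 10.0     # free CD concentration, held constant (mM)
    s0 = 0.05     # intrinsic drug solubility (mM)
    kd = 0.5      # dissolution rate constant (1/h)
    ka = 1.0      # absorption rate constant of free drug (1/h)

    s_app = s0 * (1 + K * cd)      # apparent solubility with CD present

    def rhs(t, y):
        solid, dissolved = y
        free = dissolved / (1 + K * cd)   # fast 1:1 complexation equilibrium
        diss = kd * solid * max(1 - dissolved / s_app, 0.0)
        return [-diss, diss - ka * free]

    sol = solve_ivp(rhs, (0, 12), [1.0, 0.0])   # dose = 1 (normalized)
    absorbed = 1.0 - sol.y[0][-1] - sol.y[1][-1]
    print(f"fraction absorbed by 12 h: {absorbed:.2f}")
    ```

    The sketch exposes the trade-off the study probes: CD speeds dissolution by raising apparent solubility, but binding lowers the free-drug fraction available for absorption.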

  15. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models

    EPA Science Inventory

    The second phase of the MicroArray Quality Control (MAQC-II) project evaluated common practices for developing and validating microarray-based models aimed at predicting toxicological and clinical endpoints. Thirty-six teams developed classifiers for 13 endpoints - some easy, som...

  16. TAMDAR Sensor Validation in 2003 AIRS II

    NASA Technical Reports Server (NTRS)

    Daniels, Taumi S.; Murray, John J.; Anderson, Mark V.; Mulally, Daniel J.; Jensen, Kristopher R.; Grainger, Cedric A.; Delene, David J.

    2005-01-01

    This study entails an assessment of TAMDAR in situ temperature, relative humidity, and wind sensor data from seven flights of the UND Citation II. These data are undergoing rigorous assessment to determine their viability to significantly augment domestic Meteorological Data Communications Reporting System (MDCRS) and the international Aircraft Meteorological Data Reporting (AMDAR) system observational databases to improve the performance of regional and global numerical weather prediction models. NASA Langley Research Center participated in the Second Alliance Icing Research Study from November 17 to December 17, 2003. TAMDAR data taken during this period are compared with validation data from the UND Citation. The data indicate acceptable performance of the TAMDAR sensor when compared to measurements from the UND Citation research instruments.

  17. INACTIVATION OF CRYPTOSPORIDIUM OOCYSTS IN A PILOT-SCALE OZONE BUBBLE-DIFFUSER CONTACTOR - II: MODEL VALIDATION AND APPLICATION

    EPA Science Inventory

    The ADR model developed in Part I of this study was successfully validated with experimental data obtained for the inactivation of C. parvum and C. muris oocysts with a pilot-scale ozone-bubble diffuser contactor operated with treated Ohio River water. Kinetic parameters, required...

  18. Model Validation Status Review

    SciTech Connect

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  19. SOSS ICN Model Validation

    NASA Technical Reports Server (NTRS)

    Zhu, Zhifan

    2016-01-01

    Under the NASA-KAIA-KARI ATM research collaboration agreement, the SOSS ICN Model has been developed for Incheon International Airport. This presentation describes the model validation work in the project and presents the results and analysis of the validation.

  20. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 3 (Appendices II, sections 2--3 and III)

    SciTech Connect

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquor submodels, and fundamental knowledge to the US kraft pulp industry. Volume 3 contains the following appendix sections: Formation and destruction of nitrogen oxides in recovery boilers; Sintering and densification of recovery boiler deposits laboratory data and a rate model; and Experimental data on rates of particulate formation during char bed burning.

  1. Discriminant analysis for predicting dystocia in beef cattle. II. Derivation and validation of a prebreeding prediction model.

    PubMed

    Morrison, D G; Humes, P E; Keith, N K; Godke, R A

    1985-03-01

    Discriminant analysis was utilized to derive and validate a model for predicting dystocia using only data available at the beginning of the breeding season. Data were collected from 211 Chianina crossbred cows (2 to 6 yr old) bred to Chianina bulls. A proportionally stratified sampling procedure divided females into an analysis sample (n = 134) on which the model was derived and a hold-out sample (n = 77) on which the prediction model was validated (tested). Variables available during the derivation stage were cow age, cow weight, pelvic height, pelvic width, pelvic area and calf sire. Dystocia was categorized as either unassisted or assisted. Occurrence of dystocia was 17.2 and 18.2% in the analysis and hold-out samples, respectively. All data were standardized to a mean of zero and a variance of one before statistical analysis. The centroid of cows experiencing dystocia differed (P < .01) from that of cows calving unassisted in the analysis sample. Significant variables were pelvic area and cow age (standardized coefficients = .56 and .51, respectively). This model correctly classified 85.1% of the cows in the analysis sample. This was 13.5% greater than the proportional chance criterion. For model validation, prediction accuracy was 84.4% in the hold-out group, which was 14.2% greater than the proportional chance criterion. However, only 57.1% of the cows that experienced dystocia were correctly classified. Examination of the data revealed that those cows misclassified were 3 yr of age or older.(ABSTRACT TRUNCATED AT 250 WORDS)
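
    The validation design, generically: fit a linear discriminant on the analysis sample, score the hold-out sample, and compare accuracy with the proportional chance criterion (p² + q² for a two-group problem). A sketch on synthetic stand-ins for pelvic area and cow age, not the original records:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 211
    X = np.column_stack([rng.normal(230, 25, n),    # pelvic area (cm^2)
                         rng.normal(3.5, 1.2, n)])  # cow age (yr)
    # synthetic rule: dystocia more likely with small pelvic area / young age
    p = 1 / (1 + np.exp(0.04 * (X[:, 0] - 210) + 0.8 * (X[:, 1] - 3)))
    y = rng.random(n) < p

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=77 / 211,
                                              stratify=y, random_state=0)
    acc = LinearDiscriminantAnalysis().fit(X_tr, y_tr).score(X_te, y_te)
    chance = np.mean(y_te) ** 2 + np.mean(~y_te) ** 2   # proportional chance
    print(f"hold-out accuracy {acc:.3f} vs chance criterion {chance:.3f}")
    ```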

  2. Macrotransport-solidification kinetics modeling of equiaxed dendritic growth: Part II. Computation problems and validation on INCONEL 718 superalloy castings

    NASA Astrophysics Data System (ADS)

    Nastac, L.; Stefanescu, D. M.

    1996-12-01

    In Part I of the article, a new analytical model that describes solidification of equiaxed dendrites was presented. In this part of the article, the model is used to simulate the solidification of INCONEL 718 superalloy castings. The model was incorporated into a commercial finite-element code, PROCAST. A special procedure called microlatent heat method (MLHM) was used for coupling between macroscopic heat flow and microscopic growth kinetics. A criterion for time-stepping selection in microscopic modeling has been derived in conjunction with MLHM. Reductions in computational (CPU) time up to 90% over the classic latent heat method were found by adopting this coupling. Validation of the model was performed against experimental data for an INCONEL 718 superalloy casting. In the present calculations, the model for globulitic dendrite was used. The evolution of fraction of solid calculated with the present model was compared with Scheil’s model and experiments. An important feature in solidification of INCONEL 718 is the detrimental Laves phase. Laves phase content is directly related to the intensity of microsegregation of niobium, which is very sensitive to the evolution of the fraction of solid. It was found that there is a critical cooling rate at which the amount of Laves phase is maximum. The critical cooling rate is not a function of material parameters (diffusivity, partition coefficient, etc.). It depends only on the grain size and solidification time. The predictions generated with the present model are shown to agree very well with experiments.
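
    The Scheil model used as the comparison baseline above has a closed form for fraction solid versus temperature when the liquidus is linear and the partition coefficient k is constant. A sketch with generic parameter values (not INCONEL 718 data):

    ```python
    import numpy as np

    def scheil_fraction_solid(T, T_liq, T_melt, k):
        """fs(T) from Scheil: C_L = C_0 * (1 - fs)^(k - 1),
        combined with a linear liquidus T = T_melt - m * C_L."""
        fs = 1.0 - ((T_melt - T) / (T_melt - T_liq)) ** (1.0 / (k - 1.0))
        return np.clip(fs, 0.0, 1.0)

    # generic values: alloy liquidus 1336 C, solvent melting point 1430 C, k=0.48
    for Ti in np.linspace(1336, 1260, 5):
        fs = scheil_fraction_solid(Ti, T_liq=1336, T_melt=1430, k=0.48)
        print(f"T = {Ti:7.1f} C   fs = {fs:.3f}")
    ```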

  3. Groundwater Model Validation

    SciTech Connect

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation
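
    A minimal sketch of the first step in such a hierarchy, under assumed acceptance criteria: score each stochastic realization against validation data and count how many pass before moving down the decision tree. Data and threshold below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    obs = rng.normal(10.0, 1.0, 8)                   # heads at validation wells
    realizations = rng.normal(10.0, 1.5, (100, 8))   # 100 stochastic model runs

    rmse = np.sqrt(((realizations - obs) ** 2).mean(axis=1))
    acceptable = rmse < 2.0                          # assumed acceptance criterion
    print(f"{acceptable.sum()} of {len(rmse)} realizations pass;",
          "sufficient" if acceptable.mean() >= 0.5 else "revise model")
    ```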

  4. Validation of SAGE II NO2 measurements

    NASA Technical Reports Server (NTRS)

    Cunnold, D. M.; Zawodny, J. M.; Chu, W. P.; Mccormick, M. P.; Pommereau, J. P.; Goutail, F.

    1991-01-01

    The validity of NO2 measurements from the Stratospheric Aerosol and Gas Experiment (SAGE) II is examined by comparing the data with climatological distributions of NO2 and by examining the consistency of the observations themselves. The precision at high altitudes is found to be 5 percent, which is also the case at specific low altitudes for certain latitudes where the mixing ratio is 4 ppbv, and the precision is 0.2 ppbv at low altitudes. The autocorrelation distance of the smoothed profile measurement noise is 3-5 km and 10 km for 1-km and 5-km smoothing, respectively. The SAGE II measurements agree with spectroscopic measurements to within 10 percent, and the SAGE measurements are about 20 percent smaller than average limb monitor measurements at the mixing ratio peak. SAGE I and SAGE II measurements are slightly different, but the difference is not attributed to changes in atmospheric NO2.

  5. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part II: Benchmark comparisons of PUMA core parameters with MCNP5 and improvements due to a simple cell heterogeneity correction

    SciTech Connect

    Grant, C.; Mollerach, R.; Leszczynski, F.; Serra, O.; Marconi, J.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which has been progressing slowly during the last ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design located in Argentina. It has a pressure vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO₂ rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons of core parameters of a slightly idealized model of the Atucha-I core obtained with the PUMA reactor code with MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, more symmetric than Atucha-II, and has some experimental data available. To validate the new models, benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which improves significantly the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)

  6. Resolving the mass-anisotropy degeneracy of the spherically symmetric Jeans equation - II. Optimum smoothing and model validation

    NASA Astrophysics Data System (ADS)

    Diakogiannis, Foivos I.; Lewis, Geraint F.; Ibata, Rodrigo A.

    2014-09-01

    The spherical Jeans equation is widely used to estimate the mass content of stellar systems with apparent spherical symmetry. However, this method suffers from a degeneracy between the assumed mass density and the kinematic anisotropy profile, β(r). In a previous work, we laid the theoretical foundations for an algorithm that combines smoothing B splines with equations from dynamics to remove this degeneracy. Specifically, our method reconstructs a unique kinematic profile of σ_rr² and σ_tt² for an assumed free functional form of the potential and mass density (Φ, ρ) and given a set of observed line-of-sight velocity dispersion measurements, σ_los². In Paper I, we demonstrated the efficiency of our algorithm with a very simple example and we commented on the need for optimum smoothing of the B-spline representation; this is in order to avoid unphysical variational behaviour when we have large uncertainty in our data. In the current contribution, we present a process of finding the optimum smoothing for a given data set by using information of the behaviour from known ideal theoretical models. Markov Chain Monte Carlo methods are used to explore the degeneracy in the dynamical modelling process. We validate our model through applications to synthetic data for systems with constant or variable mass-to-light ratio Υ. In all cases, we recover excellent fits of theoretical functions to observables and unique solutions. Our algorithm is a robust method for the removal of the mass-anisotropy degeneracy of the spherically symmetric Jeans equation for an assumed functional form of the mass density.
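
    The optimum-smoothing idea can be illustrated with a generic smoothing spline: choose the smoothing parameter so the weighted residuals are statistically consistent with the known measurement errors (chi-square per point near one). This greatly simplifies the paper's B-spline machinery:

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(4)
    r = np.linspace(0.1, 10, 60)
    sigma_true = 5.0 * np.exp(-r / 6.0)       # invented dispersion profile
    err = 0.3 * np.ones_like(r)
    data = sigma_true + rng.normal(0, err)

    # UnivariateSpline's s bounds the sum of squared weighted residuals;
    # with w = 1/err, setting s = N targets chi-square per point ~ 1,
    # i.e. neither over-smoothing nor fitting the noise.
    spl = UnivariateSpline(r, data, w=1.0 / err, s=len(r))
    chi2 = np.sum(((spl(r) - data) / err) ** 2)
    print(f"chi^2/N = {chi2 / len(r):.2f}")
    ```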

  7. Data Assimilation of Photosynthetic Light-use Efficiency using Multi-angular Satellite Data: II Model Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Hilker, Thomas; Hall, Forest G.; Tucker, J.; Coops, Nicholas C.; Black, T. Andrew; Nichol, Caroline J.; Sellers, Piers J.; Barr, Alan; Hollinger, David Y.; Munger, J. W.

    2012-01-01

    Spatially explicit and temporally continuous estimates of photosynthesis will be of great importance for increasing our understanding of and ultimately closing the terrestrial carbon cycle. Current capabilities to model photosynthesis, however, are limited by the difficulty of representing, with sufficient accuracy, the complexity of the underlying biochemical processes and the numerous environmental constraints imposed upon plant primary production. A potentially powerful alternative is the use of multi-angular satellite data to infer light-use efficiency (e) directly from spectral reflectance properties in connection with canopy shadow fractions. Hall et al. (this issue) introduced a new approach for predicting gross ecosystem production that would allow the use of such observations in a data assimilation mode to obtain spatially explicit variations in e from infrequent polar-orbiting satellite observations, while meteorological data are used to account for the more dynamic responses of e to variations in environmental conditions caused by changes in weather and illumination. In this second part of the study we implement and validate the approach of Hall et al. (this issue) across an ecologically diverse array of eight flux-tower sites in North America using data acquired from the Compact High Resolution Imaging Spectroradiometer (CHRIS) and eddy-flux observations. Our results show significantly enhanced estimates of e and therefore cumulative gross ecosystem production (GEP) over the course of one year at all examined sites. We also demonstrate that e is greatly heterogeneous even across small study areas. Data assimilation and direct inference of GEP from space using a new, proposed sensor could therefore be a significant step towards closing the terrestrial carbon cycle.

  8. Validation Studies for the Diet History Questionnaire II

    Cancer.gov

    Data show that the DHQ I instrument provides reasonable nutrient estimates, and three studies were conducted to assess its validity/calibration. There have been no such validation studies with the DHQ II.

  9. Ab initio structural modeling of and experimental validation for Chlamydia trachomatis protein CT296 reveal structural similarity to Fe(II) 2-oxoglutarate-dependent enzymes

    SciTech Connect

    Kemege, Kyle E.; Hickey, John M.; Lovell, Scott; Battaile, Kevin P.; Zhang, Yang; Hefty, P. Scott

    2012-02-13

    Chlamydia trachomatis is a medically important pathogen that encodes a relatively high percentage of proteins with unknown function. The three-dimensional structure of a protein can be very informative regarding the protein's functional characteristics; however, determining protein structures experimentally can be very challenging. Computational methods that model protein structures with sufficient accuracy to facilitate functional studies have had notable successes. To evaluate the accuracy and potential impact of computational protein structure modeling of hypothetical proteins encoded by Chlamydia, a successful computational method termed I-TASSER was utilized to model the three-dimensional structure of a hypothetical protein encoded by open reading frame (ORF) CT296. CT296 has been reported to exhibit functional properties of a divalent cation transcription repressor (DcrA), with similarity to the Escherichia coli iron-responsive transcriptional repressor, Fur. Unexpectedly, the I-TASSER model of CT296 exhibited no structural similarity to any DNA-interacting proteins or motifs. To validate the I-TASSER-generated model, the structure of CT296 was solved experimentally using X-ray crystallography. Impressively, the ab initio I-TASSER-generated model closely matched (2.72-Å Cα root mean square deviation [RMSD]) the high-resolution (1.8-Å) crystal structure of CT296. Modeled and experimentally determined structures of CT296 share structural characteristics of non-heme Fe(II) 2-oxoglutarate-dependent enzymes, although key enzymatic residues are not conserved, suggesting a unique biochemical process is likely associated with CT296 function. Additionally, functional analyses did not support prior reports that CT296 has properties shared with divalent cation repressors such as Fur.

  10. Ab Initio Structural Modeling of and Experimental Validation for Chlamydia trachomatis Protein CT296 Reveal Structural Similarity to Fe(II) 2-Oxoglutarate-Dependent Enzymes

    PubMed Central

    Kemege, Kyle E.; Hickey, John M.; Lovell, Scott; Battaile, Kevin P.; Zhang, Yang; Hefty, P. Scott

    2011-01-01

    Chlamydia trachomatis is a medically important pathogen that encodes a relatively high percentage of proteins with unknown function. The three-dimensional structure of a protein can be very informative regarding the protein's functional characteristics; however, determining protein structures experimentally can be very challenging. Computational methods that model protein structures with sufficient accuracy to facilitate functional studies have had notable successes. To evaluate the accuracy and potential impact of computational protein structure modeling of hypothetical proteins encoded by Chlamydia, a successful computational method termed I-TASSER was utilized to model the three-dimensional structure of a hypothetical protein encoded by open reading frame (ORF) CT296. CT296 has been reported to exhibit functional properties of a divalent cation transcription repressor (DcrA), with similarity to the Escherichia coli iron-responsive transcriptional repressor, Fur. Unexpectedly, the I-TASSER model of CT296 exhibited no structural similarity to any DNA-interacting proteins or motifs. To validate the I-TASSER-generated model, the structure of CT296 was solved experimentally using X-ray crystallography. Impressively, the ab initio I-TASSER-generated model closely matched (2.72-Å Cα root mean square deviation [RMSD]) the high-resolution (1.8-Å) crystal structure of CT296. Modeled and experimentally determined structures of CT296 share structural characteristics of non-heme Fe(II) 2-oxoglutarate-dependent enzymes, although key enzymatic residues are not conserved, suggesting a unique biochemical process is likely associated with CT296 function. Additionally, functional analyses did not support prior reports that CT296 has properties shared with divalent cation repressors such as Fur. PMID:21965559

  11. Thermospheric dynamics during September 18-19, 1984. II - Validation of the NCAR thermospheric general circulation model

    NASA Technical Reports Server (NTRS)

    Crowley, G.; Emery, B. A.; Roble, R. G.; Carlson, H. C., Jr.; Salah, J. E.

    1989-01-01

    The winds, temperatures, and densities predicted by the thermospheric GCM are compared with measurements from the Equinox Transition Study of September 17-24, 1984. Agreement between predictions and observation is good in many respects. The quiet day observations contain a strong semidiurnal wind variation which is mainly due to upward-propagating tides. The storm day wind behavior is significantly different and includes a surge of equatorward winds due to a global propagating disturbance associated with the storm onset. A quantitative statistical comparison of the predicted and measured winds indicates that the equatorward winds in the model are weaker than the observed winds, particularly during storm times. A quiet day phase anomaly in the measured F region winds which is not reproduced by the model suggests the occurrence of an important unmodeled interaction between upward propagating semidiurnal tides and high-latitude effects.

  12. Model Fe-Al Steel with Exceptional Resistance to High Temperature Coarsening. Part II: Experimental Validation and Applications

    NASA Astrophysics Data System (ADS)

    Zhou, Tihe; Zhang, Peng; O'Malley, Ronald J.; Zurob, Hatem S.; Subramanian, Mani

    2015-01-01

    In order to achieve a fine uniform grain-size distribution using the process of thin slab casting and direct rolling (TSCDR), it is necessary to control the grain-size prior to the onset of thermomechanical processing. In the companion paper, Model Fe-Al Steel with Exceptional Resistance to High Temperature Coarsening. Part I: Coarsening Mechanism and Particle Pinning Effects, a new steel composition which uses a small volume fraction of austenite particles to pin the growth of delta-ferrite grains at high temperature was proposed and grain growth was studied in reheated samples. This paper will focus on the development of a simple laboratory-scale setup to simulate thin-slab casting of the newly developed steel and demonstrate the potential for grain size control under industrial conditions. Steel bars with different diameters are briefly dipped into the molten steel to create a shell of solidified material. These are then cooled down to room temperature at different cooling rates. During cooling, the austenite particles nucleate along the delta-ferrite grain boundaries and greatly retard grain growth. With decreasing temperature, more austenite particles precipitate, and grain growth can be completely arrested in the holding furnace. Additional applications of the model alloy are discussed including grain-size control in the heat affected zone in welds and grain-growth resistance at high temperature.

  13. Modeling lipid accumulation in oleaginous fungi in chemostat cultures. II: Validation of the chemostat model using yeast culture data from literature.

    PubMed

    Meeuwse, Petra; Tramper, Johannes; Rinzema, Arjen

    2011-10-01

    A model that predicts cell growth, lipid accumulation and substrate consumption of oleaginous fungi in chemostat cultures (Meeuwse et al., Bioproc Biosyst Eng, doi:10.1007/s00449-011-0545-8, 2011) was validated using 12 published data sets for chemostat cultures of oleaginous yeasts and one published data set for a poly-hydroxyalkanoate accumulating bacterial species. The model could describe all data sets well with only minor modifications that do not affect the key assumptions, i.e. (1) oleaginous yeasts and fungi give the highest priority to C-source utilization for maintenance, second priority to growth and third priority to lipid accumulation, and (2) oleaginous yeasts and fungi have a growth rate independent maximum specific lipid production rate. The analysis of all data showed that the maximum specific lipid production rate is in most cases very close to the specific production rate of membrane and other functional lipids for cells growing at their maximum specific growth rate. The limiting factor suggested by Ykema et al. (Biotechnol Bioeng 34:1268-1276, 1989), i.e. the maximum glucose uptake rate, did not give good predictions of the maximum lipid production rate.
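
    The model's priority cascade, as summarized above, can be written as a few lines of bookkeeping: substrate goes first to maintenance, then to growth up to the maximum growth rate, and any remainder to lipid up to a growth-rate-independent maximum. Yields and rates below are placeholders, not the paper's fitted values:

    ```python
    def allocate_substrate(q_s, m_s=0.02, y_xs=0.5, mu_max=0.25,
                           y_ls=0.3, q_l_max=0.05):
        """q_s: specific substrate uptake (g/g/h) -> (mu, q_lipid)."""
        q_s = max(q_s - m_s, 0.0)              # 1st priority: maintenance
        mu = min(q_s * y_xs, mu_max)           # 2nd priority: growth
        q_s -= mu / y_xs
        q_lip = min(q_s * y_ls, q_l_max)       # 3rd priority: lipid storage
        return mu, q_lip

    for q in (0.05, 0.3, 0.8):
        mu, ql = allocate_substrate(q)
        print(f"q_s={q:.2f}: mu={mu:.3f} 1/h, q_lipid={ql:.3f} g/g/h")
    ```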

  14. Validation of SAGE II ozone measurements

    NASA Technical Reports Server (NTRS)

    Cunnold, D. M.; Chu, W. P.; Mccormick, M. P.; Veiga, R. E.; Barnes, R. A.

    1989-01-01

    Five ozone profiles from the Stratospheric Aerosol and Gas Experiment (SAGE) II are compared with coincident ozonesonde measurements obtained at Natal, Brazil, and Wallops Island, Virginia. It is shown that the mean difference between all of the measurements is about 1 percent and that the agreement is within 7 percent at altitudes between 20 and 53 km. Good agreement is also found for ozone mixing ratios on pressure surfaces. It is concluded that the SAGE II profiles provide useful ozone information up to about 60 km altitude.

  15. Empirical agreement in model validation.

    PubMed

    Jebeile, Julie; Barberousse, Anouk

    2016-04-01

    Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion as a model can be so adjusted as to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation.

  16. SAGE II aerosol validation - Selected altitude measurements, including particle micromeasurements

    NASA Technical Reports Server (NTRS)

    Oberbeck, Verne R.; Russell, Philip B.; Pueschel, Rudolf F.; Snetsinger, Kenneth G.; Ferry, Guy V.; Livingston, John M.; Rosen, James N.; Osborn, Mary T.; Kritz, Mark A.

    1989-01-01

    The validity of particulate extinction coefficients derived from limb path solar radiance measurements obtained during the Stratospheric Aerosol and Gas Experiment (SAGE) II is tested. The SAGE II measurements are compared with correlative aerosol measurements taken during January 1985, August 1985, and July 1986 with impactors, laser spectrometers, and filter samplers on a U-2 aircraft, an upward pointing lidar on a P-3 aircraft, and balloon-borne optical particle counters. The data for July 29, 1986 are discussed in detail. The aerosol measurements taken on this day at an altitude of 20.5 km produce particulate extinction values which validate the SAGE II values for similar wavelengths.
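
    The comparison described above hinges on converting a measured size distribution into a particulate extinction coefficient. A rough sketch, with a constant extinction efficiency Q_ext ≈ 2 (large-particle limit) standing in for a full Mie calculation and an invented lognormal distribution:

    ```python
    import numpy as np

    r = np.logspace(-2, 1, 400)           # particle radius (um)
    # lognormal number distribution dN/dlnr; N0 in particles/cm^3 (invented)
    N0, r_m, s = 10.0, 0.2, 1.8
    dN_dlnr = (N0 / (np.sqrt(2 * np.pi) * np.log(s))
               * np.exp(-0.5 * (np.log(r / r_m) / np.log(s)) ** 2))

    Q_ext = 2.0
    # extinction: integrate Q*pi*r^2 * dN/dlnr over lnr;
    # um^2/cm^3 converts to 1/km with a factor of 1e-3
    k_ext = np.trapz(Q_ext * np.pi * r ** 2 * dN_dlnr, np.log(r)) * 1e-3
    print(f"particulate extinction ~ {k_ext:.2e} 1/km")
    ```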

  17. Testing and validating environmental models

    USGS Publications Warehouse

    Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.

    1996-01-01

    Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series

  18. Validation of the Sexual Assault Symptom Scale II (SASS II) using a panel research design.

    PubMed

    Ruch, Libby O; Wang, Chang-Hwai

    2006-11-01

    To examine the utility of a self-report scale of sexual assault trauma, 223 female victims were interviewed with the 43-item Sexual Assault Symptom Scale II (SASS II) at 1, 3, 7, 11, and 15 months postassault. Factor analyses using principal-components extraction with an oblimin rotation yielded 7 common factors with 31 items. The internal consistency was high for 4 factors and moderate for 2 factors. The multitrait-multimethod matrix, correlating the factor subscale scores of self-reported trauma and clinical assessment ratings, demonstrated both convergent and discriminant validity, indicating that the SASS II has construct validity. Correlations between the SASS II subscales and the intrusion subscale of the Impact of Events Scale also indicated the convergent and discriminant validity of the SASS II. Significant positive correlations between current and prior trauma levels further evidence the validity of the SASS.
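
    The internal-consistency figures reported above are conventionally Cronbach's alpha. A small self-contained implementation on synthetic item scores (the SASS II items themselves are not reproduced here):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, n_items) array of scores."""
        items = np.asarray(items, float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    rng = np.random.default_rng(5)
    latent = rng.normal(size=(223, 1))                    # shared trauma factor
    scores = latent + rng.normal(0, 0.8, size=(223, 6))   # 6-item subscale
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```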

  19. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models.

    PubMed

    Shi, Leming; Campbell, Gregory; Jones, Wendell D; Campagne, Fabien; Wen, Zhining; Walker, Stephen J; Su, Zhenqiang; Chu, Tzu-Ming; Goodsaid, Federico M; Pusztai, Lajos; Shaughnessy, John D; Oberthuer, André; Thomas, Russell S; Paules, Richard S; Fielden, Mark; Barlogie, Bart; Chen, Weijie; Du, Pan; Fischer, Matthias; Furlanello, Cesare; Gallas, Brandon D; Ge, Xijin; Megherbi, Dalila B; Symmans, W Fraser; Wang, May D; Zhang, John; Bitter, Hans; Brors, Benedikt; Bushel, Pierre R; Bylesjo, Max; Chen, Minjun; Cheng, Jie; Cheng, Jing; Chou, Jeff; Davison, Timothy S; Delorenzi, Mauro; Deng, Youping; Devanarayan, Viswanath; Dix, David J; Dopazo, Joaquin; Dorff, Kevin C; Elloumi, Fathi; Fan, Jianqing; Fan, Shicai; Fan, Xiaohui; Fang, Hong; Gonzaludo, Nina; Hess, Kenneth R; Hong, Huixiao; Huan, Jun; Irizarry, Rafael A; Judson, Richard; Juraeva, Dilafruz; Lababidi, Samir; Lambert, Christophe G; Li, Li; Li, Yanen; Li, Zhen; Lin, Simon M; Liu, Guozhen; Lobenhofer, Edward K; Luo, Jun; Luo, Wen; McCall, Matthew N; Nikolsky, Yuri; Pennello, Gene A; Perkins, Roger G; Philip, Reena; Popovici, Vlad; Price, Nathan D; Qian, Feng; Scherer, Andreas; Shi, Tieliu; Shi, Weiwei; Sung, Jaeyun; Thierry-Mieg, Danielle; Thierry-Mieg, Jean; Thodima, Venkata; Trygg, Johan; Vishnuvajjala, Lakshmi; Wang, Sue Jane; Wu, Jianping; Wu, Yichao; Xie, Qian; Yousef, Waleed A; Zhang, Liang; Zhang, Xuegong; Zhong, Sheng; Zhou, Yiming; Zhu, Sheng; Arasappan, Dhivya; Bao, Wenjun; Lucas, Anne Bergstrom; Berthold, Frank; Brennan, Richard J; Buness, Andreas; Catalano, Jennifer G; Chang, Chang; Chen, Rong; Cheng, Yiyu; Cui, Jian; Czika, Wendy; Demichelis, Francesca; Deng, Xutao; Dosymbekov, Damir; Eils, Roland; Feng, Yang; Fostel, Jennifer; Fulmer-Smentek, Stephanie; Fuscoe, James C; Gatto, Laurent; Ge, Weigong; Goldstein, Darlene R; Guo, Li; Halbert, Donald N; Han, Jing; Harris, Stephen C; Hatzis, Christos; Herman, Damir; Huang, Jianping; Jensen, Roderick V; Jiang, Rui; Johnson, Charles D; Jurman, Giuseppe; Kahlert, Yvonne; Khuder, Sadik A; Kohl, Matthias; Li, Jianying; Li, Li; Li, Menglong; Li, Quan-Zhen; Li, Shao; Li, Zhiguang; Liu, Jie; Liu, Ying; Liu, Zhichao; Meng, Lu; Madera, Manuel; Martinez-Murillo, Francisco; Medina, Ignacio; Meehan, Joseph; Miclaus, Kelci; Moffitt, Richard A; Montaner, David; Mukherjee, Piali; Mulligan, George J; Neville, Padraic; Nikolskaya, Tatiana; Ning, Baitang; Page, Grier P; Parker, Joel; Parry, R Mitchell; Peng, Xuejun; Peterson, Ron L; Phan, John H; Quanz, Brian; Ren, Yi; Riccadonna, Samantha; Roter, Alan H; Samuelson, Frank W; Schumacher, Martin M; Shambaugh, Joseph D; Shi, Qiang; Shippy, Richard; Si, Shengzhu; Smalter, Aaron; Sotiriou, Christos; Soukup, Mat; Staedtler, Frank; Steiner, Guido; Stokes, Todd H; Sun, Qinglan; Tan, Pei-Yi; Tang, Rong; Tezak, Zivana; Thorn, Brett; Tsyganova, Marina; Turpaz, Yaron; Vega, Silvia C; Visintainer, Roberto; von Frese, Juergen; Wang, Charles; Wang, Eric; Wang, Junwei; Wang, Wei; Westermann, Frank; Willey, James C; Woods, Matthew; Wu, Shujian; Xiao, Nianqing; Xu, Joshua; Xu, Lei; Yang, Lun; Zeng, Xiao; Zhang, Jialu; Zhang, Li; Zhang, Min; Zhao, Chen; Puri, Raj K; Scherf, Uwe; Tong, Weida; Wolfinger, Russell D

    2010-08-01

    Gene expression data from microarrays are being applied to predict preclinical and clinical endpoints, but the reliability of these predictions has not been established. In the MAQC-II project, 36 independent teams analyzed six microarray data sets to generate predictive models for classifying a sample with respect to one of 13 endpoints indicative of lung or liver toxicity in rodents, or of breast cancer, multiple myeloma or neuroblastoma in humans. In total, >30,000 models were built using many combinations of analytical methods. The teams generated predictive models without knowing the biological meaning of some of the endpoints and, to mimic clinical reality, tested the models on data that had not been used for training. We found that model performance depended largely on the endpoint and team proficiency and that different approaches generated models of similar performance. The conclusions and recommendations from MAQC-II should be useful for regulatory agencies, study committees and independent investigators that evaluate methods for global gene expression analysis.
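
    The central safeguard in MAQC-II, as stated above, is that models are tuned only on development data and scored once on data never used for training. A generic sketch with an arbitrary classifier and synthetic data (Matthews correlation was one of the project's performance measures):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.metrics import matthews_corrcoef

    X, y = make_classification(n_samples=400, n_features=200,
                               n_informative=10, random_state=0)
    X_dev, X_ext, y_dev, y_ext = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
    # model selection sees only the development set (internal CV)
    cv = GridSearchCV(LogisticRegression(penalty="l2", max_iter=5000),
                      {"C": [0.01, 0.1, 1.0]}, cv=5).fit(X_dev, y_dev)
    print("internal CV accuracy:", round(cv.best_score_, 3))
    # one-shot external validation on data never used for training
    print("external validation MCC:",
          round(matthews_corrcoef(y_ext, cv.predict(X_ext)), 3))
    ```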

  1. Partial Validation of Multibody Program to Optimize Simulated Trajectories II (POST II) Parachute Simulation With Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Ben; Queen, Eric M.

    2002-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed. This capability uses the Program to Optimize Simulated Trajectories II (POST II). Previously, POST II had the ability to simulate multiple bodies without interacting forces. The current implementation is used for the simulation of parachute trajectories, in which the parachute and suspended bodies can be treated as rigid bodies. An arbitrary set of connecting lines can be included in the model and are treated as massless spring-dampers. This paper discusses details of the connection line modeling and results of several test cases used to validate the capability.
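
    The connection-line treatment lends itself to a compact illustration. The function below is a hedged stand-in for such a model, not the POST II implementation: it returns the tension-only force exerted on one body by a massless spring-damper line, with all constants chosen arbitrarily.

      # Illustrative massless spring-damper line force; not POST II code.
      import numpy as np

      def line_force(r1, r2, v1, v2, L0, k, c):
          """Force on body 1 from a line to body 2 (tension only).

          r1, r2: attach points (m); v1, v2: velocities (m/s)
          L0: unstretched length (m); k: stiffness (N/m); c: damping (N*s/m)
          """
          d = r2 - r1
          L = np.linalg.norm(d)
          if L < 1e-12:
              return np.zeros(3)
          u = d / L                      # unit vector along the line
          stretch = L - L0
          if stretch <= 0.0:             # a slack line carries no load
              return np.zeros(3)
          rate = np.dot(v2 - v1, u)      # elongation rate
          return (k * stretch + c * rate) * u

      # Example: a 10 m line stretched to 10.5 m pulls body 1 toward body 2.
      f = line_force(np.zeros(3), np.array([0.0, 0.0, 10.5]),
                     np.zeros(3), np.zeros(3), L0=10.0, k=2.0e4, c=50.0)
      print(f)   # ~[0, 0, 10000] N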

  2. SAGE II aerosol validation: selected altitude measurements, including particle micromeasurements.

    PubMed

    Oberbeck, V R; Livingston, J M; Russell, P B; Pueschel, R F; Rosen, J N; Osborn, M T; Kritz, M A; Snetsinger, K G; Ferry, G V

    1989-06-20

    Correlative aerosol measurements taken at a limited number of altitudes during coordinated field experiments are used to test the validity of particulate extinction coefficients derived from limb path solar radiance measurements taken by the Stratospheric Aerosol and Gas Experiment (SAGE) II Sun photometer. In particular, results are presented from correlative measurement missions that were conducted during January 1985, August 1985, and July 1986. Correlative sensors included impactors, laser spectrometers, and filter samplers aboard a U-2 airplane, an upward pointing lidar aboard a P-3 airplane, and balloon-borne optical particle counters (dustsondes). The main body of this paper focuses on the July 29, 1986, validation experiment, which minimized the many difficulties (e.g., spatial and temporal inhomogeneities, imperfect coincidences) that can complicate the validation process. On this day, correlative aerosol measurements taken at an altitude of 20.5 km agreed with each other within their respective uncertainties, and particulate extinction values calculated at SAGE II wavelengths from these measurements validated corresponding SAGE II values. Additional validation efforts on days when measurement and logistical conditions were much less favorable for validation are discussed in an appendix.

  3. Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2014-01-01

    Computational fluid dynamics (CFD) software that solves the Reynolds-averaged Navier-Stokes (RANS) equations has been in routine use for more than a quarter of a century. It is currently employed not only for basic research in fluid dynamics, but also for the analysis and design processes in many industries worldwide, including aerospace, automotive, power generation, chemical manufacturing, polymer processing, and petroleum exploration. A key feature of RANS CFD is the turbulence model. Because the RANS equations are unclosed, a model is necessary to describe the effects of the turbulence on the mean flow, through the Reynolds stress terms. The turbulence model is one of the largest sources of uncertainty in RANS CFD, and most models are known to be flawed in one way or another. Alternative methods such as direct numerical simulations (DNS) and large eddy simulations (LES) rely less on modeling and hence include more physics than RANS. In DNS all turbulent scales are resolved, and in LES the large scales are resolved and the effects of the smallest turbulence scales are modeled. However, both DNS and LES are too expensive for most routine industrial usage on today's computers. Hybrid RANS-LES, which blends RANS near walls with LES away from walls, helps to moderate the cost while still retaining some of the scale-resolving capability of LES, but for some applications it can still be too expensive. Even considering its associated uncertainties, RANS turbulence modeling has proved to be very useful for a wide variety of applications. For example, in the aerospace field, many RANS models are considered to be reliable for computing attached flows. However, existing turbulence models are known to be inaccurate for many flows involving separation. Research has been ongoing for decades in an attempt to improve turbulence models for separated and other nonequilibrium flows. When developing or improving turbulence models, both verification and validation are important

  4. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part I: Benchmark comparisons of WIMS-D5 and DRAGON cell and control rod parameters with MCNP5

    SciTech Connect

    Mollerach, R.; Leszczynski, F.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO{sub 2} rods with an active length of 530 cm. For the reactor physics area, a revision and update of the calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. As a validation of the new models, some benchmark comparisons were made against Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems, based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes against MCNP5 results. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium. (authors)

  5. Proposed Modifications to the Conceptual Model of Coaching Efficacy and Additional Validity Evidence for the Coaching Efficacy Scale II-High School Teams

    ERIC Educational Resources Information Center

    Myers, Nicholas; Feltz, Deborah; Chase, Melissa

    2011-01-01

    The purpose of this study was to determine whether theoretically relevant sources of coaching efficacy could predict the measures derived from the Coaching Efficacy Scale II-High School Teams (CES II-HST). Data were collected from head coaches of high school teams in the United States (N = 799). The analytic framework was a multiple-group…

  6. Validation for a recirculation model.

    PubMed

    LaPuma, P T

    2001-04-01

    Recent Clean Air Act regulations designed to reduce volatile organic compound (VOC) emissions have placed new restrictions on painting operations. Treating large volumes of air which contain dilute quantities of VOCs can be expensive. Recirculating some fraction of the air allows an operator to comply with environmental regulations at reduced cost. However, there is a potential impact on employee safety because indoor pollutants will inevitably increase when air is recirculated. A computer model was developed, written in Microsoft Excel 97, to predict compliance costs and indoor air concentration changes with respect to changes in the level of recirculation for a given facility. The model predicts indoor air concentrations based on product usage and mass balance equations. This article validates the recirculation model using data collected from a C-130 aircraft painting facility at Hill Air Force Base, Utah. Air sampling data and air control cost quotes from vendors were collected for the Hill AFB painting facility and compared to the model's predictions. The model's predictions for strontium chromate and isocyanate air concentrations were generally between the maximum and minimum air sampling points with a tendency to predict near the maximum sampling points. The model's capital cost predictions for a thermal VOC control device ranged from a 14 percent underestimate to a 50 percent overestimate of the average cost quotes. A sensitivity analysis of the variables is also included. The model is demonstrated to be a good evaluation tool in understanding the impact of recirculation.
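
    The steady-state core of such a model is a one-line mass balance. The sketch below is a simplified illustration of that balance, not LaPuma's Excel model, and every number in it is an assumed placeholder: with generation rate G in a well-mixed space, total supply Q, recirculated fraction R, and control-device removal efficiency eta, the indoor concentration rises as R grows.

      # Well-mixed steady-state balance: G = Q*C*(1 - R*(1 - eta)).
      def steady_state_conc(G, Q, R, eta):
          """Indoor concentration (mg/m^3).

          G: generation, mg/min; Q: supply airflow, m^3/min
          R: recirculated fraction (0..1); eta: removal efficiency (0..1)
          """
          return G / (Q * (1.0 - R * (1.0 - eta)))

      once_through = steady_state_conc(G=50.0, Q=1000.0, R=0.0, eta=0.9)
      recirculated = steady_state_conc(G=50.0, Q=1000.0, R=0.7, eta=0.9)
      print(once_through, recirculated)   # 0.050 vs. ~0.054 mg/m^3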

  7. Assimilation of Geosat Altimetric Data in a Nonlinear Shallow-Water Model of the Indian Ocean by Adjoint Approach. Part II: Some Validation and Interpretation of the Assimilated Results

    NASA Technical Reports Server (NTRS)

    Greiner, Eric; Perigaud, Claire

    1996-01-01

    This paper examines the results of assimilating Geosat sea level variations, relative to the November 1986-November 1988 mean reference, in a nonlinear reduced-gravity model of the Indian Ocean. Data have been assimilated during one year starting in November 1986 with the objective of optimizing the initial conditions and the yearly averaged reference surface. The thermocline slope simulated by the model with or without assimilation is validated by comparison with the signal, which can be derived from expendable bathythermograph measurements performed in the Indian Ocean at that time. The topography simulated with assimilation on November 1986 is in very good agreement with the hydrographic data. The slopes corresponding to the South Equatorial Current and to the South Equatorial Countercurrent are better reproduced with assimilation than without during the first nine months. The whole circulation of the cyclonic gyre south of the equator is then strongly intensified by assimilation. Another assimilation experiment is run over the following year starting in November 1987. The difference between the two yearly mean surfaces simulated with assimilation is in excellent agreement with Geosat. In the southeastern Indian Ocean, the correction to the yearly mean dynamic topography due to assimilation over the second year is negatively correlated to the one the year before. This correction is also in agreement with hydrographic data. It is likely that the signal corrected by assimilation is not only due to wind error, because simulations driven by various wind forcings present the same features over the two years. Model simulations run with a prescribed throughflow transport anomaly indicate that assimilation is rather correcting in the interior of the model domain for inadequate boundary conditions with the Pacific.

  8. Validation of Magnetospheric Magnetohydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Curtis, Brian

    Magnetospheric magnetohydrodynamic (MHD) models are commonly used for both prediction and modeling of Earth's magnetosphere. To date, very little validation has been performed to determine their limits, uncertainties, and differences. In this work, we applied several validation techniques commonly used in the atmospheric sciences to MHD-based models of Earth's magnetosphere for the first time. The validation techniques of parameter variability/sensitivity analysis and comparison to other models were used on the OpenGGCM, BATS-R-US, and SWMF magnetospheric MHD models to answer several questions about how these models compare. The questions include: (1) the difference between the models' predictions prior to and following a reversal of Bz in the upstream interplanetary magnetic field (IMF) from positive to negative, (2) the influence of the preconditioning duration, and (3) the differences between models under extreme solar wind conditions. A differencing visualization tool was developed and used to address these three questions. We find: (1) For a reversal in IMF Bz from positive to negative, the OpenGGCM magnetopause is closest to Earth as it has the weakest magnetic pressure near-Earth. The differences in magnetopause positions between BATS-R-US and SWMF are explained by the influence of the ring current, which is included in SWMF. Densities are highest for SWMF and lowest for OpenGGCM. The OpenGGCM tail currents differ significantly from BATS-R-US and SWMF; (2) A longer preconditioning time allowed the magnetosphere to relax more, giving different positions for the magnetopause with all three models before the IMF Bz reversal. There were differences greater than 100% for all three models before the IMF Bz reversal. The differences in the current sheet region for the OpenGGCM were small after the IMF Bz reversal. The BATS-R-US and SWMF differences decreased after the IMF Bz reversal to near zero; (3) For extreme conditions in the solar

  9. SRVAL. Stock-Recruitment Model VALidation Code

    SciTech Connect

    Christensen, S.W.

    1989-12-07

    SRVAL is a computer simulation model of the Hudson River striped bass population. It was designed to aid in assessing the validity of curve-fits of the linearized Ricker stock-recruitment model, modified to incorporate multiple-age spawners and to include an environmental variable, to variously processed annual catch-per-unit-effort (CPUE) statistics for a fish population. It is sometimes asserted that curve-fits of this kind can be used to determine the sensitivity of fish populations to such man-induced stresses as entrainment and impingement at power plants. SRVAL was developed to test such assertions and was utilized in testimony written in connection with the Hudson River Power Case (U. S. Environmental Protection Agency, Region II).
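
    For readers unfamiliar with the model being tested, the basic linearized Ricker fit is compact: taking logs of R = a*S*exp(-b*S) gives ln(R/S) = ln(a) - b*S, a straight line in S. The sketch below fits that line to fabricated numbers; it illustrates the curve-fit under scrutiny and is neither SRVAL itself nor Hudson River data.

      # Linearized Ricker stock-recruitment fit on made-up data.
      import numpy as np

      S = np.array([10., 25., 40., 60., 80.])    # spawner index (e.g., CPUE)
      R = np.array([30., 55., 60., 55., 45.])    # recruit index

      slope, intercept = np.polyfit(S, np.log(R / S), 1)
      a, b = np.exp(intercept), -slope
      print(f"a = {a:.2f}, b = {b:.4f}")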

  10. Software Validation via Model Animation

    NASA Technical Reports Server (NTRS)

    Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.

    2015-01-01

    This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
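
    The comparison step amounts to a small harness: evaluate the animated formal model and the compiled implementation on the same sampled inputs and require agreement within a tolerance. The sketch below illustrates the idea with two placeholder functions standing in for PVSio output and the actual software.

      # Toy output-comparison harness; the two functions are placeholders.
      import math
      import random

      def model_output(x):        # stands in for the animated formal model
          return math.sqrt(x * x + 1.0)

      def software_output(x):     # stands in for the software implementation
          return (x * x + 1.0) ** 0.5

      random.seed(42)
      tol = 1e-9
      for _ in range(10_000):
          x = random.uniform(-1e3, 1e3)
          assert abs(model_output(x) - software_output(x)) <= tol, x
      print("all sampled inputs agree within tolerance")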

  11. Preparation of Power Distribution System for High Penetration of Renewable Energy Part I. Dynamic Voltage Restorer for Voltage Regulation Part II. Distribution Circuit Modeling and Validation

    NASA Astrophysics Data System (ADS)

    Khoshkbar Sadigh, Arash

    and the power rating for loads, is presented to prioritize which loads, lines and cables the meters should be installed at to have the most effect on model validation.

  12. Verifying and Validating Simulation Models

    SciTech Connect

    Hemez, Francois M.

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
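
    As a concrete instance of the sampling idea, the sketch below propagates assumed input variability through a simple natural-frequency formula by Monte Carlo; the model and the distributions are illustrative, not drawn from the presentation.

      # Monte Carlo propagation of input variability through f = sqrt(k/m)/(2*pi).
      import numpy as np

      rng = np.random.default_rng(7)
      n = 100_000
      k = rng.normal(1.0e4, 5.0e2, n)     # stiffness, N/m, with variability
      m = rng.normal(2.0, 0.05, n)        # mass, kg, with variability

      f = np.sqrt(k / m) / (2.0 * np.pi)  # predicted natural frequency, Hz
      print(f"mean = {f.mean():.2f} Hz, std = {f.std():.2f} Hz")
      print("95% interval:", np.percentile(f, [2.5, 97.5]))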

  13. Validation of PEP-II Resonantly Excited Turn-by-Turn BPM Data

    SciTech Connect

    Yan, Yiton T.; Cai, Yunhai; Colocho, William; Decker, Franz-Josef

    2007-06-28

    For optics measurement and modeling of the PEP-II electron (HER) and positron (LER) storage rings, we have been doing well with MIA [1], which requires analyzing turn-by-turn Beam Position Monitor (BPM) data that are resonantly excited at the horizontal, vertical, and longitudinal tunes. However, in anticipation that certain BPM buttons and even pins in the PEP-II IR region would be missing for the run starting in January 2007, we had been developing a data validation process to reduce the effect of reduced BPM data accuracy on PEP-II optics measurement and modeling. Besides the routine process for ranking BPM noise level through data correlation among BPMs with a singular-value decomposition (SVD), we could also check BPM data symplecticity by comparing the invariant ratios. Results from PEP-II measurement will be presented.
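
    The SVD-based noise ranking rests on the fact that coherent beam motion is low-rank across monitors. The sketch below reproduces that idea on synthetic data (it is not the PEP-II analysis code): after removing the dominant singular vectors, the per-BPM residual flags the noisy channel.

      # SVD noise ranking on a synthetic turn-by-turn matrix (rows=turns, cols=BPMs).
      import numpy as np

      rng = np.random.default_rng(3)
      turns, nbpm, tune = 1024, 20, 0.18
      phase = rng.uniform(0, 2 * np.pi, nbpm)
      signal = np.cos(2 * np.pi * tune * np.arange(turns)[:, None] + phase)

      noise = np.full(nbpm, 0.01)
      noise[7] = 0.5                                   # one bad BPM
      data = signal + rng.normal(size=(turns, nbpm)) * noise

      U, s, Vt = np.linalg.svd(data, full_matrices=False)
      clean = (U[:, :2] * s[:2]) @ Vt[:2, :]           # keep 2 coherent modes
      rms_residual = np.sqrt(((data - clean) ** 2).mean(axis=0))
      print("noisiest BPM:", int(np.argmax(rms_residual)))   # flags BPM 7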

  14. Inert doublet model and LEP II limits

    SciTech Connect

    Lundstroem, Erik; Gustafsson, Michael; Edsjoe, Joakim

    2009-02-01

    The inert doublet model is a minimal extension of the standard model introducing an additional SU(2) doublet with new scalar particles that could be produced at accelerators. While there exists no LEP II analysis dedicated to these inert scalars, the absence of a signal within searches for supersymmetric neutralinos can be used to constrain the inert doublet model. This translation however requires some care because of the different properties of the inert scalars and the neutralinos. We investigate what restrictions an existing DELPHI Collaboration study of neutralino pair production can put on the inert scalars and discuss the result in connection with dark matter. We find that although an important part of the inert doublet model parameter space can be excluded by the LEP II data, the lightest inert particle still constitutes a valid dark matter candidate.

  15. SAGE II aerosol data validation and initial data use - An introduction and overview

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Mccormick, M. P.

    1989-01-01

    The process of validating data from the Stratospheric Aerosol and Gas Experiment (SAGE) II and the initial use of the validated data are reviewed. The instruments developed for the SAGE II, the influence of the eruption of El Chichon on the global stratospheric aerosol, and various data validation experiments are discussed. Consideration is given to methods for deriving aerosol physical and optical properties from SAGE II extinction data and for inferring particle size distribution moments from SAGE II spectral extinction values.

  16. Validation studies of a computational model for molten material freezing

    SciTech Connect

    Sawada, Tetsuo; Ninokata, Hisashi; Shimizu, Akinao

    1996-02-01

    Validation studies are described of a computational model for the freezing of molten core materials under core disruptive accident conditions of fast breeder reactors. A series of out-of-pile experiments named SIMBATH, performed at Forschungszentrum Karlsruhe in Germany, has already been analyzed with the SIMMER-II code. In the current study, TRAN simulation tests in the SIMBATH facility are analyzed with SIMMER-II to validate its modeling of molten material freezing. The original TRAN experiments were performed at Sandia National Laboratories to examine the freezing behavior of molten UO{sub 2} injected into an annular channel. In the TRAN simulation experiments of the SIMBATH series, similar freezing phenomena are investigated for molten thermite, a mixture of Al{sub 2}O{sub 3} and iron, instead of UO{sub 2}. Two typical TRAN simulation tests are analyzed to clarify the applicability of the code to the freezing process during the experiments. The distribution of molten materials deposited in the test section is compared between the experimental measurements and the SIMMER-II calculations. These studies confirm that the conduction-limited freezing model, combined with the rudimentary bulk freezing (particle-jamming) model of SIMMER-II, could be used to reproduce the TRAN simulation experiments satisfactorily. This finding encourages the extrapolation of the results of previous validation research for SIMMER-II, based on other SIMBATH tests, to reactor case analyses. The calculations by SIMMER-II suggest that further improvements of the model, such as freezing on a convex surface of pin cladding and the scraping of crusts, would make possible more accurate simulation of freezing phenomena.

  17. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.

  18. Factorial validity and measurement invariance across intelligence levels and gender of the overexcitabilities questionnaire-II (OEQ-II).

    PubMed

    Van den Broeck, Wim; Hofmans, Joeri; Cooremans, Sven; Staels, Eva

    2014-03-01

    The concept of overexcitability, derived from Dabrowski's theory of personality development, offers a promising approach for the study of the developmental dynamics of giftedness. The present study aimed at (a) examining the factorial structure of the Overexcitabilities Questionnaire-II scores (OEQ-II) and (b) testing measurement invariance of these scores across intelligence and gender. A sample of 641 Dutch-speaking adolescents from 11 to 15 years old, 363 girls and 278 boys, participated in this study. Results showed that a model without cross-loadings did not fit the data well (using confirmatory factor analysis), whereas a factor model in which all cross-loadings were included yielded fit statistics that were in support of the factorial structure of the OEQ-II scores (using exploratory structural equation modeling). Furthermore, our findings supported the assumption of (partial) strict measurement invariance of the OEQ-II scores across intelligence levels and across gender. Such levels of measurement invariance allow valid comparisons between factor means and factor relationships across groups. In particular, the gifted group scored significantly higher on intellectual and sensual overexcitability (OE) than the nongifted group, girls scored higher on emotional and sensual OE than boys, and boys scored higher on intellectual and psychomotor OE than girls.

  19. Preparation of Power Distribution System for High Penetration of Renewable Energy Part I. Dynamic Voltage Restorer for Voltage Regulation Part II. Distribution Circuit Modeling and Validation

    NASA Astrophysics Data System (ADS)

    Khoshkbar Sadigh, Arash

    by simulation and experimental tests under various conditions considering all possible cases such as different amounts of voltage sag depth (VSD), different amounts of point-on-wave (POW) at which voltage sag occurs, harmonic distortion, line frequency variation, and phase jump (PJ). Furthermore, the ripple amount of the fundamental voltage amplitude calculated by the proposed method and its error are analyzed considering line frequency variation together with harmonic distortion. The best and worst detection times of the proposed method were measured as 1 ms and 8.8 ms, respectively. Finally, the proposed method has been compared with other voltage sag detection methods available in the literature. Part 2: Power System Modeling for Renewable Energy Integration: As power distribution systems are evolving into more complex networks, electrical engineers have to rely on software tools to perform circuit analysis. There are dozens of powerful software tools available in the market to perform power system studies. Although their main functions are similar, there are differences in features and formatting structures to suit specific applications. This creates challenges for transferring power system circuit model data (PSCMD) between different software packages and rebuilding the same circuit in the second software environment. The objective of this part of the thesis is to develop a Unified Platform (UP) to facilitate transferring PSCMD among different software packages and relieve the challenges of the circuit model conversion process. UP uses a commonly available spreadsheet file with a defined format, for any source software to write data to and for any destination software to read data from, via a script-based application called the PSCMD transfer application. The main considerations in developing the UP are to minimize manual intervention and to import a one-line diagram into the destination software or export it from the source software, with all details to allow load flow, short circuit and

  20. NASA GSFC CCMC Recent Model Validation Activities

    NASA Technical Reports Server (NTRS)

    Rastaetter, L.; Pulkkinen, A.; Taktakishvili, A.; MacNeice, P.; Shim, J. S.; Zheng, Yihua; Kuznetsova, M. M.; Hesse, M.

    2012-01-01

    The Community Coordinated Modeling Center (CCMC) holds the largest assembly of state-of-the-art physics-based space weather models developed by the international space physics community. In addition to providing the community easy access to these modern space research models to support science research, another primary goal is to test and validate models for transition from research to operations. In this presentation, we provide an overview of the space science models available at the CCMC. We then focus on the community-wide model validation efforts led by the CCMC in all domains of the Sun-Earth system, and on the internal validation efforts at the CCMC to support the space weather services/operations provided by its sibling organization, the NASA GSFC Space Weather Center (http://swc.gsfc.nasa.gov). We will also discuss our efforts in operational model validation in collaboration with NOAA/SWPC.

  1. Algorithm for model validation: Theory and applications

    PubMed Central

    Sornette, D.; Davis, A. B.; Ide, K.; Vixie, K. R.; Pisarenko, V.; Kamm, J. R.

    2007-01-01

    Validation is often defined as the process of determining the degree to which a model is an accurate representation of the real world from the perspective of its intended uses. Validation is crucial as industries and governments depend increasingly on predictions by computer models to justify their decisions. We propose to formulate the validation of a given model as an iterative construction process that mimics the often implicit process occurring in the minds of scientists. We offer a formal representation of the progressive build-up of trust in the model. Thus, we replace static claims on the impossibility of validating a given model by a dynamic process of constructive approximation. This approach is better adapted to the fuzzy, coarse-grained nature of validation. Our procedure factors in the degree of redundancy versus novelty of the experiments used for validation as well as the degree to which the model predicts the observations. We illustrate the methodology first with the maturation of quantum mechanics as the arguably best established physics theory and then with several concrete examples drawn from some of our primary scientific interests: a cellular automaton model for earthquakes, a multifractal random walk model for financial time series, an anomalous diffusion model for solar radiation transport in the cloudy atmosphere, and a computational fluid dynamics code for the Richtmyer–Meshkov instability. PMID:17420476

  2. Ground-water models cannot be validated

    USGS Publications Warehouse

    Konikow, L.F.; Bredehoeft, J.D.

    1992-01-01

    Ground-water models are embodiments of scientific hypotheses. As such, the models cannot be proven or validated, but only tested and invalidated. However, model testing and the evaluation of predictive errors lead to improved models and a better understanding of the problem at hand. In applying ground-water models to field problems, errors arise from conceptual deficiencies, numerical errors, and inadequate parameter estimation. Case histories of model applications to the Dakota Aquifer, South Dakota, to bedded salts in New Mexico, and to the upper Coachella Valley, California, illustrate that calibration produces a nonunique solution and that validation, per se, is a futile objective. Although models are definitely valuable tools for analyzing ground-water systems, their predictive accuracy is limited. The terms validation and verification are misleading and their use in ground-water science should be abandoned in favor of more meaningful model-assessment descriptors.

  3. Solvation models: theory and validation.

    PubMed

    Purisima, Enrico O; Sulea, Traian

    2014-01-01

    Water plays an active role in many fundamental phenomena in cellular systems such as molecular recognition, folding and conformational equilibria, reaction kinetics and phase partitioning. Hence, our ability to account for the energetics of these processes is highly dependent on the models we use for calculating solvation effects. For example, theoretical prediction of protein-ligand binding modes (i.e., docking) and binding affinities (i.e., scoring) requires an accurate description of the change in hydration that accompanies solute binding. In this review, we discuss the challenges of constructing solvation models that capture these effects, with an emphasis on continuum models and on more recent developments in the field. In our discussion of methods, relatively greater attention will be given to boundary element solutions to the Poisson equation and to nonpolar solvation models, two areas that have become increasingly important but are likely to be less familiar to many readers. The other focus will be upon the trending efforts for evaluating solvation models in order to uncover limitations, biases, and potentially attractive directions for their improvement and applicability. The prospective and retrospective performance of a variety of solvation models in the SAMPL blind challenges will be discussed in detail. After just a few years, these benchmarking exercises have already had a tangible effect in guiding the improvement of solvation models.

  4. Validity of the Sleep Subscale of the Diagnostic Assessment for the Severely Handicapped-II (DASH-II)

    ERIC Educational Resources Information Center

    Matson, Johnny L.; Malone, Carrie J.

    2006-01-01

    Currently there are no available sleep disorder measures for individuals with severe and profound intellectual disability. We, therefore, attempted to establish the external validity of the "Diagnostic Assessment for the Severely Handicapped-II" (DASH-II) sleep subscale by comparing daily observational sleep data with the responses of direct care…

  5. Reliability and Validity of the Beck Depression Inventory--II with Adolescent Psychiatric Inpatients

    ERIC Educational Resources Information Center

    Osman, Augustine; Kopper, Beverly A; Barrios, Frank; Gutierrez, Peter M.; Bagge, Courtney L.

    2004-01-01

    This investigation was conducted to validate the Beck Depression Inventory--II (BDI-II; A. T. Beck, R. A. Steer, & G. K. Brown, 1996) in samples of adolescent psychiatric inpatients. The sample in each substudy was primarily Caucasian. In Study 1, expert raters (N=7) and adolescent psychiatric inpatients (N=13) evaluated the BDI-II items to assess…

  6. Empirical assessment of model validity

    SciTech Connect

    Wolfe, R.R.

    1991-05-01

    The metabolism of amino acids is far more complicated than a 1- to 2-pool model. Yet, these simple models have been extensively used with many different isotopically labeled tracers to study protein metabolism. A tracer of leucine and measurement of leucine kinetics has been a favorite choice for following protein metabolism. However, administering a leucine tracer and following it in blood will not adequately reflect the complex multi-pool nature of the leucine system. Using the tracer enrichment of the ketoacid metabolite of leucine, alpha-ketoisocaproate (KIC), to reflect intracellular events of leucine metabolism was an important improvement. Whether this approach is adequate to accurately follow leucine metabolism in vivo has not been tested. From data obtained using simultaneous administration of leucine and KIC tracers, we developed a 10-pool model of the in vivo leucine-KIC and bicarbonate kinetic system. Data from this model were compared with conventional measurements of leucine kinetics. The results from the 10-pool model agreed best with the simplified approach using a leucine tracer and measurement of KIC enrichment.
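
    For orientation, compartmental tracer kinetics of this kind are systems of linear ODEs. The sketch below integrates a generic two-pool system, far simpler than the 10-pool model described above, with rate constants invented purely for illustration.

      # Generic two-pool tracer model; constants are illustrative, not Wolfe's.
      import numpy as np
      from scipy.integrate import solve_ivp

      k12, k21, k01 = 0.15, 0.10, 0.05   # 1/min transfer and loss constants
      infusion = 1.0                     # tracer infusion into pool 1, umol/min

      def rhs(t, q):
          q1, q2 = q                     # tracer amounts in pools 1 and 2
          dq1 = infusion - (k21 + k01) * q1 + k12 * q2
          dq2 = k21 * q1 - k12 * q2
          return [dq1, dq2]

      sol = solve_ivp(rhs, (0.0, 180.0), [0.0, 0.0], t_eval=np.linspace(0, 180, 7))
      print(sol.y[0])   # pool-1 tracer rising toward an isotopic plateau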

  7. Validation of the Hot Strip Mill Model

    SciTech Connect

    Richard Shulkosky; David Rosberg; Jerrud Chapman

    2005-03-30

    The Hot Strip Mill Model (HSMM) is an off-line, PC-based software package originally developed by the University of British Columbia (UBC) and the National Institute of Standards and Technology (NIST) under the AISI/DOE Advanced Process Control Program. The HSMM was developed to predict the temperatures, deformations, microstructure evolution and mechanical properties of steel strip or plate rolled in a hot mill. INTEG Process Group Inc. undertook the current task of enhancing and validating the technology. With the support of five North American steel producers, INTEG Process Group tested and validated the model using actual operating data from the steel plants and enhanced the model to improve prediction results.

  8. Ground-water models: Validate or invalidate

    USGS Publications Warehouse

    Bredehoeft, J.D.; Konikow, L.F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.

  9. Numerical model representation and validation strategies

    SciTech Connect

    Dolin, R.M.; Hefele, J.

    1997-10-01

    This paper describes model representation and validation strategies for use in numerical tools that define models in terms of topology, geometry, or topography. Examples of such tools include Computer-Assisted Engineering (CAE), Computer-Assisted Manufacturing (CAM), Finite Element Analysis (FEA), and Virtual Environment Simulation (VES) tools. These tools represent either physical objects or conceptual ideas using numerical models for the purpose of posing a question, performing a task, or generating information. Dependence on these numerical representations requires that models be precise, consistent across different applications, and verifiable. This paper describes a strategy for ensuring precise, consistent, and verifiable numerical model representations in a topographic framework. The main assertion put forth is that topographic model descriptions are more appropriate for numerical applications than topological or geometrical descriptions. A topographic model verification and validation methodology is presented.

  10. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    SciTech Connect

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.
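
    The Volume III strategy of checking the numerics against problems with known solutions can be miniaturized: for 1D steady conduction with fixed end temperatures, the exact profile is linear, so a finite-difference solution should match it to round-off. The sketch below is that toy exercise only, unrelated to the HYDRA-II code.

      # Verify a 1D steady-conduction finite-difference solve against the exact answer.
      import numpy as np

      n, L, T0, T1 = 21, 1.0, 300.0, 400.0
      x = np.linspace(0.0, L, n)

      A = np.zeros((n, n))
      b = np.zeros(n)
      A[0, 0] = A[-1, -1] = 1.0          # Dirichlet ends
      b[0], b[-1] = T0, T1
      for i in range(1, n - 1):          # interior: T[i-1] - 2*T[i] + T[i+1] = 0
          A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

      T = np.linalg.solve(A, b)
      exact = T0 + (T1 - T0) * x / L
      print("max |error| =", np.abs(T - exact).max())   # ~ machine precision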

  11. Oil spill impact modeling: development and validation.

    PubMed

    French-McCay, Deborah P

    2004-10-01

    A coupled oil fate and effects model has been developed for the estimation of impacts to habitats, wildlife, and aquatic organisms resulting from acute exposure to spilled oil. The physical fates model estimates the distribution of oil (as mass and concentrations) on the water surface, on shorelines, in the water column, and in the sediments, accounting for spreading, evaporation, transport, dispersion, emulsification, entrainment, dissolution, volatilization, partitioning, sedimentation, and degradation. The biological effects model estimates exposure of biota of various behavior types to floating oil and subsurface contamination, resulting percent mortality, and sublethal effects on production (somatic growth). Impacts are summarized as areas or volumes affected, percent of populations lost, and production foregone because of a spill's effects. This paper summarizes existing information and data used to develop the model, model algorithms and assumptions, validation studies, and research needs. Simulation of the Exxon Valdez oil spill is presented as a case study and validation of the model. PMID:15511105
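
    At its crudest, the fate bookkeeping can be caricatured as first-order processes draining a surface slick. The loop below is only that caricature, with invented rate constants; the model described above additionally resolves transport, spreading, emulsification, dissolution and the other processes listed.

      # Toy surface-oil mass budget with first-order evaporation and entrainment.
      dt, hours = 0.1, 72.0
      k_evap, k_entr = 0.02, 0.005                 # 1/h, invented rates
      m_surf, m_evap, m_water = 1000.0, 0.0, 0.0   # tonnes

      for _ in range(int(hours / dt)):
          evap = k_evap * m_surf * dt
          entr = k_entr * m_surf * dt
          m_surf -= evap + entr
          m_evap += evap
          m_water += entr

      print(f"after {hours:.0f} h: surface={m_surf:.0f} t, "
            f"evaporated={m_evap:.0f} t, water column={m_water:.0f} t")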

  12. Structural system identification: Structural dynamics model validation

    SciTech Connect

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  13. Feature extraction for structural dynamics model validation

    SciTech Connect

    Hemez, Francois; Farrar, Charles; Park, Gyuhae; Nishio, Mayuko; Worden, Keith; Takeda, Nobuo

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing those response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered include sensitivity, dimensionality, type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a comparison method of multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must not only consider the sensitivity of the features being used, but also the correlation of the parameters being compared.
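
    The outlier-detection step can be written down directly: each candidate feature vector is scored by its Mahalanobis distance from the cloud of experimentally derived features. The sketch below uses synthetic placeholder data.

      # Mahalanobis-distance scoring of candidate model runs; synthetic data.
      import numpy as np

      rng = np.random.default_rng(11)
      exp_features = rng.normal([1.0, 5.0, 0.2], [0.1, 0.5, 0.02], size=(50, 3))

      mu = exp_features.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(exp_features, rowvar=False))

      def mahalanobis(v):
          d = v - mu
          return float(np.sqrt(d @ cov_inv @ d))

      good_run = np.array([1.02, 5.1, 0.21])   # consistent with experiments
      bad_run = np.array([1.50, 3.0, 0.30])    # inconsistent parameter set
      print(mahalanobis(good_run), mahalanobis(bad_run))   # small vs. large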

  14. Evaluation (not validation) of quantitative models.

    PubMed

    Oreskes, N

    1998-12-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  15. Validation of Space Weather Models at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, M. M.; Pulkkinen, A.; Rastaetter, L.; Hesse, M.; Chulaki, A.; Maddox, M.

    2011-01-01

    The Community Coordinated Modeling Center (CCMC) is a multiagency partnership which aims at the creation of next-generation space weather models. The CCMC's goal is to support the research and developmental work necessary to substantially increase space weather modeling capabilities and to facilitate the deployment of advanced models in forecasting operations. The CCMC conducts unbiased model testing and validation and evaluates model readiness for the operational environment. The presentation will demonstrate the recent progress in CCMC metrics and validation activities.

  16. VALIDATION OF IMPROVED 3D ATR MODEL

    SciTech Connect

    Soon Sam Kim; Bruce G. Schnitzler

    2005-11-01

    A full-core Monte Carlo based 3D model of the Advanced Test Reactor (ATR) was previously developed. [1] An improved 3D model has been developed by the International Criticality Safety Benchmark Evaluation Project (ICSBEP) to eliminate the homogenization of fuel plates in the old model, to incorporate core changes into the new model, and to validate against a newer, more complicated core configuration. This new 3D model adds capability for fuel loading design and azimuthal power peaking studies of the ATR fuel elements.

  17. Heat transfer to foods: Modelling and validation

    NASA Astrophysics Data System (ADS)

    Cox, P. W.; Fryer, P. J.

    2002-11-01

    The food industry uses a wide variety of processes which are not well understood. Current modelling and measurement approaches are reviewed, with specific reference to work at Birmingham on positron emission particle tracking (PEPT) and the potential of temperature-time indicators in process validation.

  1. Validating the Beck Depression Inventory-II for Hong Kong Community Adolescents

    ERIC Educational Resources Information Center

    Byrne, Barbara M.; Stewart, Sunita M.; Lee, Peter W. H.

    2004-01-01

    The primary purpose of this study was to test for the validity of a Chinese version of the Beck Depression Inventory-II (C-BDI-II) for use with Hong Kong community (i.e., nonclinical) adolescents. Based on a randomized triadic split of the data (N = 1460), we conducted exploratory factor analysis on Group 1 (n = 486) and confirmatory factor…

  2. Regimes of validity for balanced models

    NASA Astrophysics Data System (ADS)

    Gent, Peter R.; McWilliams, James C.

    1983-07-01

    Scaling analyses are presented which delineate the atmospheric and oceanic regimes of validity for the family of balanced models described in Gent and McWilliams (1983a). The analyses follow and extend the classical work of Charney (1948) and others. The analyses use three non-dimensional parameters which represent the flow scale relative to the Earth's radius, the dominance of turbulent or wave-like processes, and the dominant component of the potential vorticity. For each regime, the models that are accurate both at leading order and through at least one higher order of accuracy in the appropriate small parameter are then identified. In particular, it is found that members of the balanced family are the appropriate models of higher-order accuracy over a broad range of parameter regimes. Examples are also given of particular atmospheric and oceanic phenomena which are in the regimes of validity for the different balanced models.

  3. Validation of Hadronic Models in Geant4

    SciTech Connect

    Koi, Tatsumi; Wright, Dennis H.; Folger, Gunter; Ivantchenko, Vladimir; Kossov, Mikhail; Starkov, Nikolai; Heikkinen, Aatos; Truscott, Pete; Lei, Fan; Wellisch, Hans-Peter

    2007-03-19

    Geant4 is a software toolkit for the simulation of the passage of particles through matter. It has abundant hadronic models from thermal neutron interactions to ultra relativistic hadrons. An overview of validations in Geant4 hadronic physics is presented based on thin-target measurements. In most cases, good agreement is available between Monte Carlo prediction and experimental data; however, several problems have been detected which require some improvement in the models.

  4. Validation of Hadronic Models in GEANT4

    SciTech Connect

    Koi, Tatsumi; Wright, Dennis H.; Folger, Gunter; Ivanchenko, Vladimir; Kossov, Mikhail; Starkov, Nikolai; Heikkinen, Aatos; Truscott, Peter; Lei, Fan; Wellisch, Hans-Peter

    2007-09-26

    Geant4 is a software toolkit for the simulation of the passage of particles through matter. It has abundant hadronic models from thermal neutron interactions to ultra relativistic hadrons. An overview of validations in Geant4 hadronic physics is presented based on thin target measurements. In most cases, good agreement is available between Monte Carlo prediction and experimental data; however, several problems have been detected which require some improvement in the models.
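
    A typical thin-target comparison boils down to a goodness-of-fit score between the simulated spectrum and measured points with uncertainties. The sketch below computes a chi-square per degree of freedom on invented numbers; it is not part of the Geant4 validation suite itself.

      # Chi-square agreement between a Monte Carlo spectrum and data; invented numbers.
      import numpy as np

      data = np.array([120.0, 95.0, 60.0, 30.0, 12.0])   # measured counts per bin
      sigma = np.sqrt(data)                              # Poisson uncertainties
      mc = np.array([118.0, 99.0, 55.0, 33.0, 10.0])     # simulated prediction

      chi2 = float(np.sum(((data - mc) / sigma) ** 2))
      print(f"chi2/ndf = {chi2 / data.size:.2f}")        # ~1 means good agreement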

  5. Validity evidence based on internal structure of scores on the Spanish version of the Self-Description Questionnaire-II.

    PubMed

    Ingles, Cándido J; Torregrosa, María S; Hidalgo, María D; Nuñez, Jose C; Castejón, Juan L; García-Fernández, Jose M; Valles, Antonio

    2012-03-01

    The aim of this study was to analyze the reliability and validity evidence of scores on the Spanish version of the Self-Description Questionnaire II (SDQ-II). The instrument was administered to a sample of 2022 Spanish students (51.1% boys) from grades 7 to 10. Confirmatory factor analysis (CFA) was used to examine validity evidence based on the internal structure drawn from the scores on the SDQ-II. CFA replicated the correlated 11 first-order factor structure. Furthermore, hierarchical confirmatory factor analysis (HCFA) was used to examine the hierarchical ordering of self-concept, as measured by scores on the Spanish version of the SDQ-II. Although a series of HCFA models were tested to assess the organization of academic and non-academic components, support for those hierarchical models was weaker than for the correlated 11 first-order factor structure. Results also indicated that scores on the Spanish version of the SDQ-II had internal consistency and test-retest reliability estimates within an acceptable range.

  6. A Hierarchical Systems Approach to Model Validation

    NASA Astrophysics Data System (ADS)

    Easterbrook, S. M.

    2011-12-01

    Existing approaches to the question of how climate models should be evaluated tend to rely on either philosophical arguments about the status of models as scientific tools, or on empirical arguments about how well runs from a given model match observational data. These have led to quantitative approaches expressed in terms of model bias or forecast skill, and ensemble approaches where models are assessed according to the extent to which the ensemble brackets the observational data. Unfortunately, such approaches focus the evaluation on models per se (or more specifically, on the simulation runs they produce) as though the models can be isolated from their context. Such an approach may overlook a number of important aspects of the use of climate models: - the process by which models are selected and configured for a given scientific question. - the process by which model outputs are selected, aggregated and interpreted by a community of expertise in climatology. - the software fidelity of the models (i.e. whether the running code is actually doing what the modellers think it's doing). - the (often convoluted) history that begat a given model, along with the modelling choices long embedded in the code. - variability in the scientific maturity of different model components within a coupled system. These omissions mean that quantitative approaches cannot assess whether a model produces the right results for the wrong reasons, or conversely, the wrong results for the right reasons (where, say, the observational data is problematic, or the model is configured to be unlike the earth system for a specific reason). Hence, we argue that it is a mistake to think that validation is a post-hoc process to be applied to an individual "finished" model, to ensure it meets some criteria for fidelity to the real world. We are therefore developing a framework for model validation that extends current approaches down into the detailed codebase and the processes by which the code is built

  7. Calibration and validation of rockfall models

    NASA Astrophysics Data System (ADS)

    Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.

    2013-04-01

    Calibrating and validating landslide models is extremely difficult due to the particular characteristics of landslides: limited recurrence in time, relatively low frequency of events, short durability of post-event traces, and poor availability of continuous monitoring data, especially for small landslides and rockfalls. For this reason, most of the rockfall models presented in the literature completely lack calibration and validation of the results. In this contribution, we explore different strategies for rockfall model calibration and validation, starting from both a historical event and a full-scale field test. The event occurred in 2012 in Courmayeur (Western Alps, Italy) and caused serious damage to quarrying facilities. This event was studied soon after its occurrence through a field campaign aimed at mapping the blocks arrested along the slope, the shape and location of the detachment area, and the traces of scars associated with impacts of blocks on the slope. The full-scale field test was performed by Geovert Ltd in the Christchurch area (New Zealand) after the 2011 earthquake. During the test, a number of large blocks were mobilized from the upper part of the slope and filmed with high-velocity cameras from different viewpoints. The movies of each released block were analysed to identify the block shape, the propagation path, the location of impacts, the height of the trajectory and the velocity of the block along the path. Both calibration and validation of rockfall models should be based on optimizing the agreement between the actual trajectories or locations of arrested blocks and the simulated ones. A measure that describes this agreement is therefore needed. For calibration purposes, this measure should be simple enough to allow trial-and-error repetitions of the model for parameter optimization. In this contribution we explore different calibration/validation measures: (1) the percentage of simulated blocks arresting within a buffer of the

  8. Crystallographic Model Validation: from Diagnosis to Healing

    PubMed Central

    Richardson, Jane S.; Prisant, Michael G.; Richardson, David C.

    2013-01-01

    Model validation has evolved from a passive final gatekeeping step to an ongoing diagnosis and healing process that enables significant improvement of accuracy. A recent phase of active development was spurred by the worldwide Protein Data Bank requiring data deposition and establishing Validation Task Force committees, by strong growth in high-quality reference data, by new speed and ease of computations, and by an upswing of interest in large molecular machines and structural ensembles. Progress includes automated correction methods, concise and user-friendly validation reports for referees and on the PDB websites, extension of error correction to RNA and error diagnosis to ligands, carbohydrates, and membrane proteins, and a good start on better methods for low resolution and for multiple conformations. PMID:24064406

  9. Predicting Backdrafting and Spillage for Natural-Draft Gas Combustion Appliances: Validating VENT-II

    SciTech Connect

    Rapp, Vi H.; Pastor-Perez, Albert; Singer, Brett C.; Wray, Craig P.

    2013-04-01

    VENT-II is a computer program designed to provide detailed analysis of natural-draft and induced-draft combustion appliance vent systems (i.e., for a furnace or water heater). The program is capable of predicting house depressurization thresholds that lead to backdrafting and spillage of combustion appliances; however, validation reports of the program being applied for this purpose are not readily available. The purpose of this report is to assess VENT-II's ability to predict combustion gas spillage events due to house depressurization by comparing VENT-II simulated results with experimental data for four appliance configurations. The results show that VENT-II correctly predicts depressurizations resulting in spillage for natural draft appliances operating in cold and mild outdoor conditions, but not in hot conditions. In the latter case, the predicted depressurizations depend on whether the vent section is defined as part of the vent connector or the common vent when setting up the model. Overall, the VENT-II solver requires further investigation before it can be used reliably to predict spillage caused by depressurization over a full year of weather conditions, especially where hot conditions occur.

  10. Solar Sail Model Validation from Echo Trajectories

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Brickerhoff, Adam T.

    2007-01-01

    The NASA In-Space Propulsion program has been engaged in a project to increase the technology readiness of solar sails. Recently, these efforts came to fruition in the form of several software tools to model solar sail guidance, navigation and control. Furthermore, solar sails are one of five technologies competing for the New Millennium Program Space Technology 9 flight demonstration mission. The historic Echo 1 and Echo 2 balloons were made of aluminized Mylar, which is the near-term material of choice for solar sails. Both spacecraft, but particularly Echo 2, were in low Earth orbits with characteristics similar to the proposed Space Technology 9 orbit. Therefore, the Echo balloons are excellent test cases for solar sail model validation. We present the results of studies of Echo trajectories that validate solar sail models of optics, solar radiation pressure, shape and low-thrust orbital dynamics.

  11. Using Model Checking to Validate AI Planner Domain Models

    NASA Technical Reports Server (NTRS)

    Penix, John; Pecheur, Charles; Havelund, Klaus

    1999-01-01

    This report describes an investigation into using model checking to assist validation of domain models for the HSTS planner. The planner models are specified using a qualitative temporal interval logic with quantitative duration constraints. We conducted several experiments to translate the domain modeling language into the SMV, Spin and Murphi model checkers. This allowed a direct comparison of how the different systems would support specific types of validation tasks. The preliminary results indicate that model checking is useful for finding faults in models that may not be easily identified by generating test plans.

  12. SPR Hydrostatic Column Model Verification and Validation.

    SciTech Connect

    Bettin, Giorgia; Lord, David; Rudeen, David Keith

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for extended periods of time. This report describes the HCM, its functional requirements, the model structure, and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
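
    To make the hydrostatic bookkeeping concrete, here is a minimal illustrative sketch (not the HCM code; the densities, segment heights, and cavern pressure below are invented) of how a wellhead pressure follows from a stack of fluid segments:

    ```python
    # Illustrative sketch (not the SPR HCM): wellhead pressure implied by a
    # static column of stacked fluid segments, P_well = P_cavern - sum(rho_i*g*h_i).
    G = 9.81  # gravitational acceleration, m/s^2

    def wellhead_pressure(p_cavern_pa, segments):
        """segments: list of (density_kg_m3, height_m) from cavern up to wellhead."""
        p = p_cavern_pa
        for rho, h in segments:
            p -= rho * G * h  # pressure drops across each overlying fluid column
        return p

    # Example: brine column topped by crude oil and a nitrogen cap (made-up values).
    column = [(1200.0, 300.0),   # brine
              (850.0, 400.0),    # crude oil
              (180.0, 600.0)]    # compressed nitrogen, treated as constant density
    print(wellhead_pressure(20e6, column) / 1e6, "MPa at the wellhead")
    ```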

  13. Historical validation of an attrition model

    SciTech Connect

    Hartley, D.S. III.

    1990-05-01

    This paper is the third in a series of reports on the breakthrough research in historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based, in part, on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws. None of these algorithms has been validated. Helmbold demonstrated a relationship between the Helmbold ratio, a ratio containing initial force sizes and casualties, and the initial force ratio in a large number of historical battles. It has also been shown that at least two models of warfare could produce these results: a mixed linear-logarithmic Lanchestrian attrition law and a constraint (of battle engagement and termination) model of attrition. This paper examines the distribution statistics of the historical data and determines that the mixed-law model is favored. The differential form of the mixed-law model that best fits the casualty data is found. This model also provides a parameter to predict the victor. 6 refs., 28 figs., 13 tabs.
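
    For readers unfamiliar with the laws under validation, the following minimal sketch integrates Lanchester's square law (dx/dt = -b·y, dy/dt = -a·x) with a simple Euler step; the kill rates and force sizes are invented for illustration:

    ```python
    # Minimal sketch of Lanchester's square law, dx/dt = -b*y, dy/dt = -a*x,
    # integrated with a simple Euler step; a and b are per-unit kill rates (made up).
    def lanchester_square(x0, y0, a, b, dt=0.01, t_max=10.0):
        x, y, t = x0, y0, 0.0
        while x > 0 and y > 0 and t < t_max:
            x, y = x - b * y * dt, y - a * x * dt  # simultaneous update from old values
            t += dt
        return x, y, t  # surviving strengths and engagement time

    # Square-law invariant: a*x^2 - b*y^2 is (approximately) conserved.
    x, y, t = lanchester_square(1000, 800, a=0.02, b=0.03)
    print(f"after t={t:.2f}: x={x:.0f}, y={y:.0f}")
    ```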

  14. Hierarchical Model Validation of Symbolic Performance Models of Scientific Kernels

    SciTech Connect

    Alam, Sadaf R; Vetter, Jeffrey S

    2006-08-01

    Multi-resolution validation of hierarchical performance models of scientific applications is critical primarily for two reasons. First, the step-by-step validation determines the correctness of all essential components or phases in a science simulation. Second, a model that is validated at multiple resolution levels is the very first step toward generating predictive performance models, not only for existing systems but also for emerging systems and future problem sizes. We present the design and validation of hierarchical performance models of two scientific benchmarks using a new technique called modeling assertions (MA). Our MA prototype framework generates symbolic performance models that can be evaluated efficiently by generating the equivalent model representations in Octave and MATLAB. The multi-resolution modeling and validation is conducted on two contemporary massively parallel systems, the XT3 and the Blue Gene/L. The workload distribution and growth-rate predictions generated by the MA models are confirmed by the experimental data collected on the MPP platforms. In addition, the physical memory requirements that are generated by the MA models are verified by the runtime values on the Blue Gene/L system, which has 512 MBytes and 256 MBytes of physical memory capacity in its two unique execution modes.

  15. Validation of the Sexual Assault Symptom Scale II (SASS II) Using a Panel Research Design

    ERIC Educational Resources Information Center

    Ruch, Libby O.; Wang, Chang-Hwai

    2006-01-01

    To examine the utility of a self-report scale of sexual assault trauma, 223 female victims were interviewed with the 43-item Sexual Assault Symptom Scale II (SASS II) at 1, 3, 7, 11, and 15 months postassault. Factor analyses using principal-components extraction with an oblimin rotation yielded 7 common factors with 31 items. The internal…

  16. ExodusII Finite Element Data Model

    SciTech Connect

    2005-05-14

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as code-to-code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface. (EXODUS II is based on netCDF.)

  17. Predictive Validation of an Influenza Spread Model

    PubMed Central

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive
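
    The validation workflow described above (fit to one season, perturb a parameter, score the forecast against a later season) can be sketched with a simple SIR model standing in for the individual-based model; all parameter values below are invented:

    ```python
    import numpy as np

    # Sketch of predictive validation for an epidemic model: a compartmental SIR
    # model stands in for the individual-based model in the abstract. Fit-season
    # and perturbed-season parameters are invented for illustration.
    def sir_curve(beta, gamma, n=1e6, i0=10, days=200):
        s, i, r = n - i0, float(i0), 0.0
        incidence = []
        for _ in range(days):                      # daily Euler step
            new_inf = beta * s * i / n
            s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
            incidence.append(new_inf)
        return np.array(incidence)

    observed = sir_curve(beta=0.30, gamma=0.2)     # stands in for the fitted season
    forecast = sir_curve(beta=0.30 * 0.9, gamma=0.2)  # perturbed (e.g. vaccination)
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    peak_shift = int(np.argmax(forecast) - np.argmax(observed))
    print(f"RMSE = {rmse:.1f} cases/day, peak shifted by {peak_shift} days")
    ```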

  18. Validation of Space Weather Models at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, M. M.; Hesse, M.; Pulkkinen, A.; Maddox, M.; Rastaetter, L.; Berrios, D.; Zheng, Y.; MacNeice, P. J.; Shim, J.; Taktakishvili, A.; Chulaki, A.

    2011-01-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership to support the research and developmental work necessary to substantially increase space weather modeling capabilities and to facilitate the deployment of advanced models in forecasting operations. Space weather models and coupled model chains hosted at the CCMC range from the solar corona to the Earth's upper atmosphere. CCMC has developed a number of real-time modeling systems, as well as a large number of modeling and data products tailored to address the space weather needs of NASA's robotic missions. The CCMC conducts unbiased model testing and validation and evaluates model readiness for the operational environment. CCMC has been leading recent comprehensive modeling challenges under the GEM, CEDAR and SHINE programs. The presentation will focus on experience in carrying out comprehensive and systematic validation of large sets of space weather models.

  19. Validation of the Korean version Moorehead-Ardelt quality of life questionnaire II

    PubMed Central

    Lee, Yeon Ji; Song, Hyun Jin; Oh, Sung-Hee; Kwon, Jin Won; Moon, Kon-Hak; Park, Joong-Min; Lee, Sang Kuon

    2014-01-01

    Purpose Disease-specific quality of life (QoL) instruments are important for investigating weight-loss effects with high sensitivity. The Moorehead-Ardelt quality of life questionnaire II (MA-II) is widely used because it is simple and has been validated in several languages. The aims of the present study were to translate the MA-II into Korean and to validate it against the EuroQol-5 dimension (EQ-5D), the obesity-related problems scale (OP-scale), and the impact of weight on quality of life-lite (IWQoL-Lite). Methods The study was a multicenter, cross-sectional survey of postoperative patients. The validation procedure comprised a translation-back-translation procedure, a pilot study, and a field study. The instruments measuring QoL included the MA-II, EQ-5D, OP-scale, and IWQoL-Lite. Reliability was checked through internal consistency using Cronbach alpha coefficients. Construct validity was assessed with Spearman rank correlations between the 6 domains of the MA-II and the EQ-5D, OP-scale, and 5 domains of the IWQoL-Lite. Results The Cronbach alpha of the MA-II was 0.763, confirming internal consistency. The total score of the MA-II was significantly correlated with all other instruments: the IWQoL-Lite showed the strongest correlation (ρ = 0.623, P < 0.001), followed by the OP-scale (ρ = 0.588, P < 0.001) and the EQ-5D (ρ = 0.378, P < 0.01). Conclusion The Korean version of the MA-II is a valid instrument for measuring obesity-specific QoL. The present study confirmed that the MA-II has good reliability and validity and is simple to administer; it can therefore provide a sensitive and accurate estimate of QoL in obese patients. PMID:25368853

  20. Turbulence Modeling Validation, Testing, and Development

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.; Huang, P. G.; Coakley, T. J.

    1997-01-01

    The primary objective of this work is to provide accurate numerical solutions for selected flow fields and to compare and evaluate the performance of selected turbulence models against experimental results. Four popular turbulence models have been tested and validated against experimental data for ten turbulent flows. The models are: (1) the two-equation k-omega model of Wilcox, (2) the two-equation k-epsilon model of Launder and Sharma, (3) the two-equation k-omega/k-epsilon SST model of Menter, and (4) the one-equation model of Spalart and Allmaras. The flows investigated are five free shear flows, consisting of a mixing layer, a round jet, a plane jet, a plane wake, and a compressible mixing layer; and five boundary layer flows, consisting of an incompressible flat plate, a Mach 5 adiabatic flat plate, a separated boundary layer, an axisymmetric shock-wave/boundary-layer interaction, and an RAE 2822 transonic airfoil. The experimental data for these flows are well established and have been extensively used in model development. The results are presented in four parts: Part A describes the equations of motion and boundary conditions; Part B describes the model equations, constants, parameters, boundary conditions, and numerical implementation; and Parts C and D describe the experimental data and the performance of the models in the free-shear flows and the boundary layer flows, respectively.

  1. Validation of Kp Estimation and Prediction Models

    NASA Astrophysics Data System (ADS)

    McCollough, J. P., II; Young, S. L.; Frey, W.

    2014-12-01

    Specification and forecasting of geomagnetic indices is an important capability for space weather operations. The University Partnering for Operational Support (UPOS) effort at the Applied Physics Laboratory of Johns Hopkins University (JHU/APL) produced many space weather models, including the Kp Predictor and Kp Estimator. We perform a validation of index forecast products against definitive indices computed by the Deutsches GeoForschungsZentrum Potsdam (GFZ). We compute continuous predictand skill scores, as well as 2x2 contingency tables and associated scalar quantities for different index thresholds. We also compute a skill score against a nowcast persistence model. We discuss various sources of error for the models and how they may potentially be improved.
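
    A 2x2 contingency-table verification of the kind described above can be sketched in a few lines; this is a generic illustrative implementation, not the UPOS code, and the Kp series below are invented:

    ```python
    import numpy as np

    # Sketch of threshold-based forecast verification: an "event" is Kp at or
    # above a chosen threshold, and the 2x2 table yields scalar skill measures.
    def contingency_scores(forecast, observed, threshold):
        f = np.asarray(forecast) >= threshold
        o = np.asarray(observed) >= threshold
        hits = np.sum(f & o)
        misses = np.sum(~f & o)
        false_alarms = np.sum(f & ~o)
        correct_negatives = np.sum(~f & ~o)
        pod = hits / (hits + misses)                 # probability of detection
        far = false_alarms / (hits + false_alarms)   # false alarm ratio
        return hits, misses, false_alarms, correct_negatives, pod, far

    # Toy series: predicted and definitive Kp (values invented for illustration).
    print(contingency_scores([2, 5, 6, 3, 7], [3, 4, 6, 2, 8], threshold=5))
    ```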

  2. Poisson validity for orbital debris: II. Combinatorics and simulation

    NASA Astrophysics Data System (ADS)

    Fudge, Michael L.; Maclay, Timothy D.

    1997-10-01

    The International Space Station (ISS) will be at risk from orbital debris and micrometeorite impact (i.e., an impact that penetrates a critical component, possibly leading to loss of life). In support of ISS, last year the authors examined a fundamental assumption upon which the modeling of risk is based; namely, the assertion that the orbital collision problem can be modeled using a Poisson distribution. The assumption was found to be appropriate based upon the Poisson's general use as an approximation for the binomial distribution and the fact that it is proper to physically model exposure to the orbital debris flux environment using the binomial. This paper examines another fundamental issue in the expression of risk posed to space structures: the methodology by which individual incremental collision probabilities are combined to express an overall collision probability. The specific situation of ISS in this regard is that the determination of the level of safety for ISS is made via a single overall expression of critical component penetration risk. This paper details the combinatorial mathematical methods for calculating and expressing individual component (or incremental) penetration risks, utilizing component risk probabilities to produce an overall station penetration risk probability, and calculating an expected probability of loss from estimates for the loss of life given a penetration. Additionally, the paper examines whether the statistical Poissonian answer to the orbital collision problem compares favorably with the results of a Monte Carlo simulation.
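
    Under the usual independence assumption, the combination of incremental risks the paper describes reduces to a simple product rule; a minimal sketch (the component probabilities and loss-given-penetration value are invented):

    ```python
    import math

    # Sketch of combining incremental risks under independence: if component i is
    # penetrated with probability p_i, the probability of at least one penetration
    # is 1 - prod(1 - p_i).
    def overall_risk(component_risks):
        return 1.0 - math.prod(1.0 - p for p in component_risks)

    def expected_loss_probability(component_risks, p_loss_given_penetration):
        # First-order estimate; inputs are illustrative, not ISS values.
        return overall_risk(component_risks) * p_loss_given_penetration

    p_components = [0.001, 0.0005, 0.002]          # made-up incremental risks
    print(overall_risk(p_components))               # ~0.0035
    print(expected_loss_probability(p_components, 0.1))
    ```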

  3. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
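
    The idea can be sketched as a small simulation: hide the real residual plot among residual plots of data simulated under the fitted model, so the viewer builds "instant experience" of what plots look like when the assumptions hold. A minimal sketch with synthetic data (not the paper's code):

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic data from a linear normal model, so the assumptions truly hold.
    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, 100)
    y = 2.0 + 0.5 * x + rng.normal(0, 1, 100)

    slope, intercept = np.polyfit(x, y, 1)        # ordinary least-squares fit
    resid = y - (intercept + slope * x)
    sigma = resid.std(ddof=2)

    # Lineup: one panel shows the real residuals, the rest are simulated nulls.
    fig, axes = plt.subplots(2, 2, figsize=(8, 6))
    real_panel = rng.integers(4)                  # hide the real plot among nulls
    for i, ax in enumerate(axes.flat):
        r = resid if i == real_panel else rng.normal(0, sigma, x.size)
        ax.scatter(x, r, s=8)
        ax.axhline(0.0, lw=1)
    plt.show()  # if the real panel cannot be singled out, assumptions look fine
    ```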

  4. Concepts of Model Verification and Validation

    SciTech Connect

    Thacker, B. H.; Doebling, S. W.; Hemez, F. M.; Anderson, M. C.; Pepin, J. E.; Rodriguez, E. A.

    2004-10-30

    Model verification and validation (V&V) is an enabling methodology for the development of computational models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost, and risk associated with full-scale testing of products, materials, and weapon systems. Quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions. The development of guidelines and procedures for conducting a model V&V program are currently being defined by a broad spectrum of researchers. This report reviews the concepts involved in such a program. Model V&V is a current topic of great interest to both government and industry. In response to a ban on the production of new strategic weapons and nuclear testing, the Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship Program (SSP). An objective of the SSP is to maintain a high level of confidence in the safety, reliability, and performance of the existing nuclear weapons stockpile in the absence of nuclear testing. This objective has challenged the national laboratories to develop high-confidence tools and methods that can be used to provide credible models needed for stockpile certification via numerical simulation. There has been a significant increase in activity recently to define V&V methods and procedures. The U.S. Department of Defense (DoD) Modeling and Simulation Office (DMSO) is working to develop fundamental concepts and terminology for V&V applied to high-level systems such as ballistic missile defense and battle management simulations. The American Society of Mechanical Engineers (ASME) has recently formed a Standards Committee for the development of V&V procedures for computational solid mechanics models. The Defense Nuclear Facilities Safety Board (DNFSB) has been a proponent of model V&V for all

  5. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    USGS Publications Warehouse

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  6. [Development and validation of a dynamic model of the knee].

    PubMed

    Mabit, C; Geais, L; Blanchard, B; Elbaroudi, F; Guingand, O

    2007-10-01

    The authors report the methodology of the construction of a multibody model of the knee and the validation of the kinematics of the modelled knee. The construction of the model includes: the rigid bodies, represented by the osseous components (femur, tibia, fibula, patella); the ligamentous structures (collateral ligaments, patellar ligament, cruciate ligaments); and the muscular part, represented by the quadriceps. Morphological data were acquired through 3D CT scans for the bones and a biometrical study of the ligaments (insertions, orientation, length, section). Ligament biomechanics was modelled with bilinear springs (in compression the force is null; in tension it is a function of the length, section and Young's modulus of elasticity). The quadriceps was modelled as a sliding channel with a translatory servocommand. Contacts at the interfaces (femur/patella; femur/tibia) were evaluated according to an index of penetration (distance D) between the two bodies, with a contact force of D x 10(5) N/mm(2). The model was tested by simulating symmetrical kneeling (800 N body weight) and required a ground link modelled as a ball-and-socket joint. The model was developed under the ADAMS software. The kinematics of the modelled knee was validated against the data of Wilson et al., who showed that (i) in normal knees, internal/external rotation, abduction/adduction and all three components of translation are coupled to the flexion angle both in passive flexion and extension, and (ii) the tibia rotates internally as the knee is flexed. The consistency of the coupled motions supports the model's premise that passive knee motion is guided by isometric fascicles in the anterior and posterior cruciate ligaments, by the medial collateral ligament, and by articular contact in the medial and lateral compartments. The main curves (internal/external rotation; posterior/anterior translation) of the model conform to the framework of Wilson.
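
    The bilinear ligament spring described above is easy to sketch; the following illustrative function (parameter values invented, not from the paper) returns zero force in compression and a linear elastic force in tension:

    ```python
    # Sketch of a bilinear ligament spring: a slack or compressed ligament carries
    # no load; in tension the force grows linearly with strain (values invented).
    def ligament_force(length, rest_length, section_mm2, young_modulus_mpa):
        strain = (length - rest_length) / rest_length
        if strain <= 0.0:
            return 0.0                  # compression: no force
        # MPa * mm^2 gives newtons, so the product below is a force in N.
        return young_modulus_mpa * section_mm2 * strain

    print(ligament_force(length=42.0, rest_length=40.0,
                         section_mm2=30.0, young_modulus_mpa=300.0))  # 450 N
    ```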

  7. Validated predictive modelling of the environmental resistome.

    PubMed

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  8. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Such methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work is instead model-based: it identifies the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).

  9. Integrated Medical Model Verification, Validation, and Credibility

    NASA Technical Reports Server (NTRS)

    Walton, Marlei; Kerstman, Eric; Foy, Millennia; Shah, Ronak; Saile, Lynn; Boley, Lynn; Butler, Doug; Myers, Jerry

    2014-01-01

    The Integrated Medical Model (IMM) was designed to forecast relative changes for a specified set of crew health and mission success risk metrics by using a probabilistic (stochastic process) model based on historical data, cohort data, and subject matter expert opinion. A probabilistic approach is taken since exact (deterministic) results would not appropriately reflect the uncertainty in the IMM inputs. Once the IMM was conceptualized, a plan was needed to rigorously assess input information, framework and code, and output results of the IMM, and ensure that end user requests and requirements were considered during all stages of model development and implementation. METHODS: In 2008, the IMM team developed a comprehensive verification and validation (V&V) plan, which specified internal and external review criteria encompassing 1) verification of data and IMM structure to ensure proper implementation of the IMM, 2) several validation techniques to confirm that the simulation capability of the IMM appropriately represents occurrences and consequences of medical conditions during space missions, and 3) credibility processes to develop user confidence in the information derived from the IMM. When the NASA-STD-7009 (7009) was published, the IMM team updated their verification, validation, and credibility (VVC) project plan to meet 7009 requirements and include 7009 tools in reporting VVC status of the IMM. RESULTS: IMM VVC updates are compiled recurrently and include 7009 Compliance and Credibility matrices, IMM V&V Plan status, and a synopsis of any changes or updates to the IMM during the reporting period. Reporting tools have evolved over the lifetime of the IMM project to better communicate VVC status. This has included refining original 7009 methodology with augmentation from the NASA-STD-7009 Guidance Document. End user requests and requirements are being satisfied as evidenced by ISS Program acceptance of IMM risk forecasts, transition to an operational model and

  10. Doubtful outcome of the validation of the Rome II questionnaire: validation of a symptom based diagnostic tool

    PubMed Central

    2009-01-01

    Background Questionnaires are used in research and clinical practice. For gastrointestinal complaints the Rome II questionnaire is internationally known but not validated. The aim of this study was to validate a printed and a computerized version of Rome II, translated into Swedish. Results from various analyses are reported. Methods Volunteers from a population based colonoscopy study were included (n = 1011), together with patients seeking general practice (n = 45) and patients visiting a gastrointestinal specialists' clinic (n = 67). The questionnaire consists of 38 questions concerning gastrointestinal symptoms and complaints. Diagnoses are made according to a special code. Our validation included analyses of the translation, feasibility, predictability, reproducibility and reliability. Kappa values and overall agreement were measured. The factor structures were confirmed using a principal component analysis and Cronbach's alpha was used to test the internal consistency. Results and Discussion Translation and back translation showed good agreement. The questionnaire was easy to understand and use. The reproducibility test showed kappa values of 0.60 for GERS, 0.52 for FD, and 0.47 for IBS. Kappa values and overall agreement for the predictability, when the diagnoses by the questionnaire were compared to the diagnoses by the clinician, were 0.26 and 90% for GERS, 0.18 and 85% for FD, and 0.49 and 86% for IBS. Corresponding figures for the agreement between the printed and the digital version were 0.50 and 92% for GERS, 0.64 and 95% for FD, and 0.76 and 95% for IBS. Cronbach's alpha coefficient for GERS was 0.75 with a span per item of 0.71 to 0.76. For FD the figures were 0.68 and 0.54 to 0.70, and for IBS 0.61 and 0.56 to 0.66. The Rome II questionnaire has never been thoroughly validated before, even though diagnoses made by the Rome criteria have been compared to diagnoses made in clinical practice. Conclusion The accuracy of the Swedish version of the Rome II is of
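
    For reference, the kappa statistic reported above can be computed from raw paired diagnoses as follows; this is a generic illustrative implementation with invented labels, not the study's code:

    ```python
    # Sketch of Cohen's kappa for two binary raters (questionnaire vs. clinician):
    # chance-corrected agreement, kappa = (p_observed - p_chance) / (1 - p_chance).
    def cohens_kappa(a, b):
        n = len(a)
        p_observed = sum(x == y for x, y in zip(a, b)) / n
        labels = set(a) | set(b)
        p_chance = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
        return (p_observed - p_chance) / (1.0 - p_chance)

    # Toy data: 1 = diagnosis present, 0 = absent (values invented).
    questionnaire = [1, 0, 1, 1, 0, 0, 1, 0]
    clinician     = [1, 0, 0, 1, 0, 0, 1, 1]
    print(round(cohens_kappa(questionnaire, clinician), 2))  # 0.5
    ```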

  11. Boron-10 Lined Proportional Counter Model Validation

    SciTech Connect

    Lintereur, Azaree T.; Siciliano, Edward R.; Kouzes, Richard T.

    2012-06-30

    The Department of Energy Office of Nuclear Safeguards (NA-241) is supporting the project “Coincidence Counting With Boron-Based Alternative Neutron Detection Technology” at Pacific Northwest National Laboratory (PNNL) for the development of an alternative neutron coincidence counter. The goal of this project is to design, build and demonstrate a boron-lined proportional tube-based alternative system in the configuration of a coincidence counter. This report discusses the validation studies performed to establish the degree of accuracy of the computer modeling methods currently used to simulate the response of boron-lined tubes. This is the precursor to developing models for the uranium neutron coincidence collar under Task 2 of this project.

  12. Concurrent Validity of the Online Version of the Keirsey Temperament Sorter II.

    ERIC Educational Resources Information Center

    Kelly, Kevin R.; Jugovic, Heidi

    2001-01-01

    Data from the Keirsey Temperament Sorter II online instrument and Myers Briggs Type Indicator (MBTI) for 203 college freshmen were analyzed. Positive correlations appeared between the concurrent MBTI and Keirsey measures of psychological type, giving preliminary support to the validity of the online version of Keirsey. (Contains 28 references.)…

  13. Unit testing, model validation, and biological simulation.

    PubMed

    Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software that expresses scientific models. PMID:27635225
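
    A model validation test in this sense checks a scientific property of the model rather than code plumbing. A minimal sketch using Python's unittest (the decaying-membrane model below is a generic stand-in, not OpenWorm code):

    ```python
    import unittest
    from math import exp

    # Stand-in model: exponential relaxation of a membrane potential toward zero.
    def membrane_decay(v0, tau, t):
        return v0 * exp(-t / tau)

    class TestMembraneModel(unittest.TestCase):
        def test_decays_toward_zero(self):
            # Scientific property: after 5 time constants, <1% of v0 remains.
            self.assertLess(membrane_decay(-70.0, 10.0, 50.0) / -70.0, 0.01)

        def test_initial_condition(self):
            # Scientific property: the model reproduces its initial condition.
            self.assertAlmostEqual(membrane_decay(-70.0, 10.0, 0.0), -70.0)

    if __name__ == "__main__":
        unittest.main()
    ```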

  14. Unit testing, model validation, and biological simulation

    PubMed Central

    Watts, Mark D.; Ghayoomie, S. Vahid; Larson, Stephen D.; Gerkin, Richard C.

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software that expresses scientific models. PMID:27635225

  15. Validation and application of the SCALP model

    NASA Astrophysics Data System (ADS)

    Smith, D. A. J.; Martin, C. E.; Saunders, C. J.; Smith, D. A.; Stokes, P. H.

    The Satellite Collision Assessment for the UK Licensing Process (SCALP) model was first introduced in a paper presented at IAC 2003. As a follow-on, this paper details the steps taken to validate the model and describes some of its applications. SCALP was developed for the British National Space Centre (BNSC) to support liability assessments as part of the UK's satellite license application process. Specifically, the model determines the collision risk that a satellite will pose to other orbiting objects during both its operational and post-mission phases. To date SCALP has been used to assess several LEO and GEO satellites for BNSC, and subsequently to provide the necessary technical basis for licenses to be issued. SCALP utilises the current population of operational satellites residing in LEO and GEO (extracted from ESA's DISCOS database) as a starting point. Realistic orbital dynamics, including the approximate simulation of generic GEO station-keeping strategies, are used to propagate the objects over time. The method takes into account all of the appropriate orbit perturbations for LEO and GEO altitudes and allows rapid run times for multiple objects over time periods of many years. The orbit of a target satellite is also propagated in a similar fashion. During these orbital evolutions, a collision prediction and close approach algorithm assesses the collision risk posed to the satellite population. To validate SCALP, specific cases were set up to enable the comparison of collision risk results with other established models, such as the ESA MASTER model. Additionally, the propagation of operational GEO satellites within SCALP was compared with the expected behaviour of controlled GEO objects. The sensitivity of the model to changing the initial conditions of the target satellite, such as semi-major axis and inclination, has also been demonstrated. A further study shows the effect of including extra objects from the GTO population (which can pass through the LEO
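
    The Poisson collision estimate that underlies tools of this kind (a generic textbook formula, not necessarily SCALP's internal algorithm) can be sketched directly; the flux, cross-section, and mission duration below are invented:

    ```python
    from math import exp

    # Kinetic-theory sketch: for debris flux F (impacts per m^2 per year),
    # collision cross-section A and duration T, the expected number of impacts
    # is N = F*A*T, and under the Poisson assumption P(collision) = 1 - exp(-N).
    def collision_probability(flux_per_m2_yr, area_m2, years):
        n_expected = flux_per_m2_yr * area_m2 * years
        return 1.0 - exp(-n_expected)

    # Invented numbers for a LEO satellite over a 7-year mission.
    print(collision_probability(flux_per_m2_yr=1e-5, area_m2=12.0, years=7.0))
    ```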

  16. EXODUS II: A finite element data model

    SciTech Connect

    Schoof, L.A.; Yarberry, V.R.

    1994-09-01

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as code to code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).

  17. Kinetic modeling of light limitation and sulfur deprivation effects in the induction of hydrogen production with Chlamydomonas reinhardtii. Part II: Definition of model-based protocols and experimental validation.

    PubMed

    Degrenne, B; Pruvost, J; Titica, M; Takache, H; Legrand, J

    2011-10-01

    Photosynthetic hydrogen production under light by the green microalga Chlamydomonas reinhardtii was investigated in a torus-shaped PBR in sulfur-deprived conditions. Culture conditions, represented by the dry biomass concentration of the inoculum, sulfate concentration, and incident photon flux density (PFD), were optimized based on a previously published model (Fouchard et al., 2009. Biotechnol Bioeng 102:232-245). This allowed a strictly autotrophic production, whereas the sulfur-deprived protocol is usually applied in photoheterotrophic conditions. Experimental results combined with additional information from kinetic simulations emphasize the effects of sulfur deprivation and light attenuation in the PBR in inducing anoxia and hydrogen production. A broad range of PFD was tested (up to 500 µmol photons m(-2) s(-1)). Maximum hydrogen productivities were 1.0 ± 0.2 mL H₂/h/L (or 25 ± 5 mL H₂/m(2) h) and 3.1 ± 0.4 mL H₂/h/L (or 77.5 ± 10 mL H₂/m(2) h) at 110 and 500 µmol photons m(-2) s(-1), respectively. These values approached a maximum specific productivity of approximately 1.9 ± 0.4 mL H₂/h/g of biomass dry weight, clearly indicative of a limitation in cell capacity to produce hydrogen. The efficiency of the process and further optimizations are discussed.

  18. Full-Scale Cookoff Model Validation Experiments

    SciTech Connect

    McClelland, M A; Rattanapote, M K; Heimdahl, E R; Erikson, W E; Curran, P O; Atwood, A I

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of the unreacted explosive was recovered in the end-heated experiment and less than 30 percent in the side-heated test.

  1. Some useful statistical methods for model validation.

    PubMed Central

    Marcus, A H; Elias, R W

    1998-01-01

    Although formal hypothesis tests provide a convenient framework for displaying the statistical results of empirical comparisons, standard tests should not be used without consideration of underlying measurement error structure. As part of the validation process, predictions of individual blood lead concentrations from models with site-specific input parameters are often compared with blood lead concentrations measured in field studies that also report lead concentrations in environmental media (soil, dust, water, paint) as surrogates for exposure. Measurements of these environmental media are subject to several sources of variability, including temporal and spatial sampling, sample preparation and chemical analysis, and data entry or recording. Adjustments for measurement error must be made before statistical tests can be used to empirically compare environmental data with model predictions. This report illustrates the effect of measurement error correction using a real dataset of child blood lead concentrations for an undisclosed midwestern community. We illustrate both the apparent failure of some standard regression tests and the success of adjustment of such tests for measurement error using the SIMEX (simulation-extrapolation) procedure. This procedure adds simulated measurement error to model predictions and then subtracts the total measurement error, analogous to the method of standard additions used by analytical chemists. PMID:9860913
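
    The SIMEX idea can be sketched in a few lines: refit the model with increasing amounts of added simulated measurement error and extrapolate the coefficient back to zero error (lambda = -1). A minimal sketch with synthetic data, not the report's analysis:

    ```python
    import numpy as np

    # Synthetic regression with a noisy predictor: the naive slope is attenuated
    # toward zero by measurement error of known standard deviation sigma_u.
    rng = np.random.default_rng(0)
    n, beta, sigma_u = 500, 1.0, 0.5
    x_true = rng.normal(0, 1, n)
    x_obs = x_true + rng.normal(0, sigma_u, n)      # error-contaminated exposure
    y = beta * x_true + rng.normal(0, 0.3, n)

    # SIMEX: add extra error at levels lambda, average slopes over replicates.
    lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    slopes = []
    for lam in lambdas:
        extra = rng.normal(0, np.sqrt(lam) * sigma_u, (50, n))  # 50 replicates
        fits = [np.polyfit(x_obs + e, y, 1)[0] for e in extra]
        slopes.append(np.mean(fits))

    coef = np.polyfit(lambdas, slopes, 2)           # quadratic in lambda
    print("naive slope:", slopes[0], "SIMEX slope:", np.polyval(coef, -1.0))
    ```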

  2. Diurnal ocean surface layer model validation

    NASA Technical Reports Server (NTRS)

    Hawkins, Jeffrey D.; May, Douglas A.; Abell, Fred, Jr.

    1990-01-01

    The diurnal ocean surface layer (DOSL) model at the Fleet Numerical Oceanography Center forecasts the 24-hour change in global sea surface temperature (SST). Validating the DOSL model is a difficult task due to the huge areas involved and the lack of in situ measurements. Therefore, this report details the use of satellite infrared multichannel SST imagery to provide day and night SSTs that can be directly compared to DOSL products. This water-vapor-corrected imagery has the advantages of high thermal sensitivity (0.12 C), large synoptic coverage (nearly 3000 km across), and high spatial resolution that enables diurnal heating events to be readily located and mapped. Several case studies in the subtropical North Atlantic readily show that DOSL results during extreme heating periods agree very well with satellite-imagery-derived values in terms of the pattern of diurnal warming. The low wind and cloud-free conditions necessary for these events to occur lend themselves well to observation via infrared imagery. Thus, the normally cloud-limited aspects of satellite imagery do not come into play for these particular environmental conditions. The fact that the DOSL model does well in extreme events is beneficial from the standpoint that these cases can be associated with the destruction of the surface acoustic duct. This so-called afternoon effect happens as the afternoon warming of the mixed layer disrupts the sound channel and the propagation of acoustic energy.

  3. Jacobian conditioning analysis for model validation.

    PubMed

    Rivals, Isabelle; Personnaz, Léon

    2004-02-01

    Our aim is to stress the importance of Jacobian matrix conditioning for model validation. We also comment on Monari and Dreyfus (2002), where, following Rivals and Personnaz (2000), it is proposed to discard neural candidates that are likely to overfit and/or for which quantities of interest such as confidence intervals cannot be computed accurately. In Rivals and Personnaz (2000), we argued that such models are to be discarded on the basis of the condition number of their Jacobian matrix. But Monari and Dreyfus (2002) suggest making the decision on the basis of the computed values of the leverages, the diagonal elements of the projection matrix on the range of the Jacobian, or "hat" matrix: they propose to discard a model if the computed leverages are outside some theoretical bounds, pretending that it is a symptom of Jacobian rank deficiency. We question this proposition because, theoretically, the hat matrix is defined whatever the rank of the Jacobian and because, in practice, the computed leverages of very ill-conditioned networks may respect their theoretical bounds while confidence intervals cannot be estimated accurately enough, two facts that have escaped Monari and Dreyfus's attention. We also recall the most accurate way to estimate the leverages and the properties of these estimations. Finally, we make an additional comment concerning the performance estimation in Monari and Dreyfus (2002). PMID:15006102
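
    Both diagnostics discussed above, the condition number of the Jacobian and the leverages (the diagonal of the hat matrix), can be computed stably from matrix factorizations; a minimal sketch on a random stand-in Jacobian:

    ```python
    import numpy as np

    # Sketch: condition number of a Jacobian Z from its singular values, and
    # leverages, diag(H) with H = Z (Z^T Z)^{-1} Z^T, via a thin QR (H = Q Q^T).
    def conditioning_and_leverages(Z):
        s = np.linalg.svd(Z, compute_uv=False)
        cond = s[0] / s[-1]               # large => ill-conditioned Jacobian
        q, _ = np.linalg.qr(Z)            # thin QR factor spanning range(Z)
        leverages = np.sum(q * q, axis=1) # diag(H); each in [0, 1], sum = rank
        return cond, leverages

    rng = np.random.default_rng(3)
    Z = rng.normal(size=(20, 4))          # stand-in for a network's Jacobian
    cond, lev = conditioning_and_leverages(Z)
    print(f"cond(Z) = {cond:.1f}, leverage range [{lev.min():.2f}, {lev.max():.2f}]")
    ```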

  4. Validation of HEDR models. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid.

  5. Validating agent based models through virtual worlds.

    SciTech Connect

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOG), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social environment where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior.

  6. The SEGUE Stellar Parameter Pipeline. II. Validation with Galactic Globular and Open Clusters

    SciTech Connect

    Lee, Y.S.; Beers, T.C.; Sivarani, T.; Johnson, J.A.; An, D.; Wilhelm, R.; Prieto, C.Allende; Koesterke, L.; Re Fiorentin, P.; Bailer-Jones, C.A.L.; Norris, J.E.

    2007-10-01

    The authors validate the performance and accuracy of the current SEGUE (Sloan Extension for Galactic Understanding and Exploration) Stellar Parameter Pipeline (SSPP), which determines stellar atmospheric parameters (effective temperature, surface gravity, and metallicity) by comparing derived overall metallicities and radial velocities from selected likely members of three globular clusters (M 13, M 15, and M 2) and two open clusters (NGC 2420 and M 67) to the literature values. Spectroscopic and photometric data obtained during the course of the original Sloan Digital Sky Survey (SDSS-I) and its first extension (SDSS-II/SEGUE) are used to determine stellar radial velocities and atmospheric parameter estimates for stars in these clusters. Based on the scatter in the metallicities derived for the members of each cluster, they quantify the typical uncertainty of the SSPP values, σ([Fe/H]) = 0.13 dex for stars in the range of 4500 K ≤ T_eff ≤ 7500 K and 2.0 ≤ log g ≤ 5.0, at least over the metallicity interval spanned by the clusters studied (-2.3 ≤ [Fe/H] < 0). The surface gravities and effective temperatures derived by the SSPP are also compared with those estimated from the comparison of the color-magnitude diagrams with stellar evolution models; they find satisfactory agreement. At present, the SSPP underestimates [Fe/H] for near-solar-metallicity stars, represented by members of M 67 in this study, by ≈ 0.3 dex.

  7. Empirical data validation for model building

    NASA Astrophysics Data System (ADS)

    Kazarian, Aram

    2008-03-01

    Optical Proximity Correction (OPC) has become an integral and critical part of process development for advanced technologies with challenging k1 requirements. OPC solutions in turn require stable, predictive models to be built that can project the behavior of all structures. These structures must comprehend all geometries that can occur in the layout in order to define the optimal corrections by feature, and thus enable a manufacturing process with acceptable margin. The model is built upon two main component blocks. First is knowledge of the process conditions, which includes the optical parameters (e.g., illumination source, wavelength, lens characteristics, etc.) as well as mask definition, resist parameters and process film stack information. Second is the empirical critical dimension (CD) data collected using this process on specific test features, the results of which are used to fit and validate the model and to project resist contours for all allowable feature layouts. The quality of the model therefore is highly dependent on the integrity of the process data collected for this purpose. Since the test pattern suite generally extends to below the resolution limit that the process can support with adequate latitude, the CD measurements collected can often be quite noisy with marginal signal-to-noise ratios. In order for the model to be reliable and a best representation of the process behavior, it is necessary to scrutinize empirical data to ensure that it is not dominated by measurement noise or flyer/outlier points. The primary approach for generating a clean, smooth and dependable empirical data set should be a replicated measurement sampling that can help to statistically reduce measurement noise by averaging. However, it can often be impractical to collect the amount of data needed to ensure a clean data set by this method. An alternate approach is studied in this paper to further smooth the measured data by means of curve fitting to identify remaining
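
    As a minimal illustration of the replicated-sampling approach described above: averaging n replicates shrinks random measurement noise roughly as 1/sqrt(n), and a median-absolute-deviation screen can flag flyer points before averaging. The threshold and the data are illustrative, not from the paper.

    ```python
    import numpy as np

    def clean_mean(replicates, k=3.5):
        """Average replicated CD measurements after a MAD-based outlier screen."""
        r = np.asarray(replicates, dtype=float)
        med = np.median(r)
        mad = np.median(np.abs(r - med)) or 1e-12       # guard against zero MAD
        keep = np.abs(r - med) / (1.4826 * mad) <= k    # approximate k-sigma screen
        return r[keep].mean(), int(keep.sum())

    cds = [45.1, 44.8, 45.3, 45.0, 52.9]   # nm; the last value is a flyer
    mean_cd, n_used = clean_mean(cds)
    print(f"cleaned mean CD = {mean_cd:.2f} nm from {n_used} of {len(cds)} replicates")
    ```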

  8. Cloud-chemistry interactions modeling and validation

    NASA Astrophysics Data System (ADS)

    Kristjansson, J.; Storelvmo, T.; Iversen, T.

    2006-12-01

    explicit calculation of heterogeneous freezing, while homogeneous freezing is assumed to take place spontaneously at temperatures below -35°C. Finally, when the aerosols are allowed to influence the state of the climate system, interesting interactions take place between climate change and the chemical processes. Results from such simulations will be presented, as well as results from simulations investigating the sensitivity to parameterization assumptions. Where appropriate, validation of model results against observations, in particular satellite retrievals (e.g., MODIS) will be presented.

  9. Micromachined accelerometer design, modeling and validation

    SciTech Connect

    Davies, B.R.; Bateman, V.I.; Brown, F.A.; Montague, S.; Murray, J.R.; Rey, D.; Smith, J.H.

    1998-04-01

    Micromachining technologies enable the development of low-cost devices capable of sensing motion in a reliable and accurate manner. The development of various surface micromachined accelerometers and gyroscopes to sense motion is an ongoing activity at Sandia National Laboratories. In addition, Sandia has developed a fabrication process for integrating both the micromechanical structures and microelectronics circuitry of Micro-Electro-Mechanical Systems (MEMS) on the same chip. This integrated surface micromachining process provides substantial performance and reliability advantages in the development of MEMS accelerometers and gyros. A Sandia MEMS team developed a single-axis, micromachined silicon accelerometer capable of surviving and measuring very high accelerations, up to 50,000 times the acceleration due to gravity or 50 k-G (actually measured to 46,000 G). The Sandia integrated surface micromachining process was selected for fabrication of the sensor due to the extreme measurement sensitivity potential associated with integrated microelectronics. Measurement electronics capable of measuring attofarad (10^-18 Farad) changes in capacitance were required due to the very small accelerometer proof mass (< 200 × 10^-9 gram) used in this surface micromachining process. The small proof mass corresponded to small sensor deflections which in turn required very sensitive electronics to enable accurate acceleration measurement over a range of 1 to 50 k-G. A prototype sensor, based on a suspended plate mass configuration, was developed and the details of the design, modeling, and validation of the device will be presented in this paper. The device was analyzed using both conventional lumped parameter modeling techniques and finite element analysis tools. The device was tested and performed well over its design range.

  10. Design and Development Research: A Model Validation Case

    ERIC Educational Resources Information Center

    Tracey, Monica W.

    2009-01-01

    This is a report of one case of a design and development research study that aimed to validate an overlay instructional design model incorporating the theory of multiple intelligences into instructional systems design. After design and expert review model validation, The Multiple Intelligence (MI) Design Model, used with an Instructional Systems…

  11. The Use of Spectral Analysis To Validate Planning Models

    ERIC Educational Resources Information Center

    Fitzsimmons, James A.

    1974-01-01

    Statistical fit of model predictions to empirical evidence is found to be an insufficient condition for establishing the validity of a planning model where the dynamic behavior is of particular importance. Describes a spectral analysis statistical test that can be used to validate the structure of a planning model by comparing the time series…

  12. Techniques and Issues in Agent-Based Modeling Validation

    SciTech Connect

    Pullum, Laura L; Cui, Xiaohui

    2012-01-01

    Validation of simulation models is extremely important. It ensures that the right model has been built and lends confidence to the use of that model to inform critical decisions. Agent-based models (ABMs) have been widely deployed in different fields for studying the collective behavior of large numbers of interacting agents. However, researchers have only recently started to consider the issues of validation. Compared to other simulation models, ABMs differ in model development, usage, and validation. An ABM is inherently easier to build than a classical simulation, but more difficult to describe formally since it is closer to human cognition. Using multi-agent models to study complex systems has attracted criticism because of the challenges involved in their validation [1]. In this report, we describe the challenge of ABM validation and present a novel approach we recently developed for an ABM system.

  13. Validation of EuroSCORE II on a single-centre 3800 patient cohort

    PubMed Central

    Carnero-Alcázar, Manuel; Silva Guisasola, Jacobo Alberto; Reguillo Lacruz, Fernando José; Maroto Castellanos, Luis Carlos; Cobiella Carnicer, Javier; Villagrán Medinilla, Enrique; Tejerina Sánchez, Teresa; Rodríguez Hernández, José Enrique

    2013-01-01

    OBJECTIVES: To compare and validate the new European System for Cardiac Operative Risk Evaluation (EuroSCORE) II with EuroSCORE at our institution. METHODS: The logistic EuroSCORE and EuroSCORE II were calculated for the entire patient cohort undergoing major cardiac surgery at our centre between January 2005 and December 2010. The goodness of fit was compared by means of the Hosmer–Lemeshow (HL) chi-squared test and the area under the curve (AUC) of the receiver operating characteristic curves of both scales applied to the same sample of patients. These analyses were repeated and stratified by the type of surgery. RESULTS: Mortality of 5.66% was observed, with estimated mortalities according to logistic EuroSCORE and EuroSCORE II of 9% and 4.46%, respectively. The AUC for EuroSCORE (0.82, 95% confidence interval [CI] 0.79–0.85) was lower than that for EuroSCORE II (0.85, 95% CI 0.83–0.87), without the differences being statistically significant (P = 0.056). Both scales showed a good discriminative capacity for all the pathology subgroups. The two scales showed poor calibration in the sample: EuroSCORE (χ² = 39.3, P_HL < 0.001) and EuroSCORE II (χ² = 86.69, P_HL < 0.001). The calibration of EuroSCORE was poor in the groups of patients undergoing coronary (P_HL = 0.01), valve (P_HL = 0.01) and combined coronary-valve surgery (P_HL = 0.012); and that of EuroSCORE II in the groups of coronary (P_HL = 0.001) and valve surgery (P_HL < 0.001) patients. CONCLUSIONS: EuroSCORE II demonstrated good discriminative capacity and poor calibration in the patients undergoing major cardiac surgery at our centre. PMID:23178391
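
    The two metrics used above have compact standard implementations: discrimination via the rank-based AUC and calibration via the Hosmer-Lemeshow statistic over risk deciles. A sketch with simulated predicted risks (not the study's data):

    ```python
    import numpy as np
    from scipy import stats

    def hosmer_lemeshow(p, y, groups=10):
        """HL chi-squared over risk deciles; df = groups - 2."""
        chi2 = 0.0
        for g in np.array_split(np.argsort(p), groups):
            obs, exp = y[g].sum(), p[g].sum()
            n, pbar = len(g), p[g].mean()
            chi2 += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
        return chi2, stats.chi2.sf(chi2, groups - 2)

    def auc(p, y):
        """Rank-based AUC, equivalent to the Mann-Whitney U statistic."""
        r = stats.rankdata(p)
        n1 = y.sum()
        return (r[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * (len(y) - n1))

    rng = np.random.default_rng(2)
    risk = rng.uniform(0.01, 0.4, 2000)    # predicted operative mortality
    died = rng.binomial(1, risk)           # simulated outcomes
    print("AUC:", round(auc(risk, died), 3), "HL:", hosmer_lemeshow(risk, died))
    ```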

  14. SAGE III Aerosol Extinction Validation in the Arctic Winter: Comparisons with SAGE II and POAM III

    NASA Technical Reports Server (NTRS)

    Thomason, L. W.; Poole, L. R.; Randall, C. E.

    2007-01-01

    The use of SAGE III multiwavelength aerosol extinction coefficient measurements to infer PSC type is contingent on the robustness of both the extinction magnitude and its spectral variation. Past validation with SAGE II and other similar measurements has shown that the SAGE III extinction coefficient measurements are reliable though the comparisons have been greatly weighted toward measurements made at mid-latitudes. Some aerosol comparisons made in the Arctic winter as a part of SOLVE II suggested that SAGE III values, particularly at longer wavelengths, are too small with the implication that both the magnitude and the wavelength dependence are not reliable. Comparisons with POAM III have also suggested a similar discrepancy. Herein, we use SAGE II data as a common standard for comparison of SAGE III and POAM III measurements in the Arctic winters of 2002/2003 through 2004/2005. During the winter, SAGE II measurements are made infrequently at the same latitudes as these instruments. We have mitigated this problem through the use of potential vorticity as a spatial coordinate and thus greatly increased the number of coincident events. We find that SAGE II and III extinction coefficient measurements show a high degree of compatibility at both 1020 nm and 450 nm except for a 10-20% bias at both wavelengths. In addition, the 452 to 1020-nm extinction ratio shows a consistent bias of approx. 30% throughout the lower stratosphere. We also find that SAGE II and POAM III are on average consistent though the comparisons show a much higher variability and larger bias than SAGE II/III comparisons. In addition, we find that the two data sets are not well correlated below 18 km. Overall, we find both the extinction values and the spectral dependence from SAGE III are robust and we find no evidence of a significant defect within the Arctic vortex.

  15. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    SciTech Connect

    Freeman, J.; Whitmore, J.; Kaffine, L.; Blair, N.; Dobos, A. P.

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.
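
    The annualized prediction error quoted above is, in the usual formulation, modeled minus measured annual energy relative to measured annual energy (the report's exact definition may differ). A short sketch with hypothetical numbers:

    ```python
    import numpy as np

    measured = np.array([118.2, 95.7, 130.4])   # annual MWh for three hypothetical systems
    modeled = np.array([121.0, 93.1, 128.0])    # corresponding model predictions

    annualized_error = (modeled - measured) / measured
    print(np.round(100 * annualized_error, 2))  # percent error per system
    ```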

  16. Development and Validation of the Juvenile Sexual Offense Recidivism Risk Assessment Tool-II.

    PubMed

    Epperson, Douglas L; Ralston, Christopher A

    2015-12-01

    This article describes the development and initial validation of the Juvenile Sexual Offense Recidivism Risk Assessment Tool-II (JSORRAT-II). Potential predictor variables were extracted from case file information for an exhaustive sample of 636 juveniles in Utah who sexually offended between 1990 and 1992. Simultaneous and hierarchical logistic regression analyses were used to identify the group of variables that was most predictive of subsequent juvenile sexual recidivism. A simple categorical scoring system was applied to these variables without meaningful loss of accuracy in the development sample for any sexual (area under the curve [AUC] = .89) and sexually violent (AUC = .89) juvenile recidivism. The JSORRAT-II was cross-validated on an exhaustive sample of 566 juveniles who had sexually offended in Utah in 1996 and 1997. Reliability of scoring the tool across five coders was quite high (intraclass correlation coefficient [ICC] = .96). Relative to the development sample, however, there was considerable shrinkage in the indices of predictive accuracy for any sexual (AUC = .65) and sexually violent (AUC = .65) juvenile recidivism. The reduced level of accuracy was not explained by severity of the index sexual offense, time at risk, or missing data. Capitalization on chance and other explanations for the possible reduction in predictive accuracy are explored, and potential uses and limitations of the tool are discussed. PMID:24492618
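
    The shrinkage reported above (AUC = .89 at development versus .65 at cross-validation) is the generic pattern when a score is selected and fit on one sample and then applied to a fresh one; it is easy to reproduce schematically. A synthetic sketch (not the JSORRAT-II items or data):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    X_dev = rng.normal(size=(636, 12))     # candidate predictors (pure noise here)
    y_dev = rng.binomial(1, 0.1, 636)      # recidivism indicator (synthetic)
    X_val = rng.normal(size=(566, 12))
    y_val = rng.binomial(1, 0.1, 566)

    model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    print("development AUC:", roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]))
    print("validation AUC: ", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
    # With pure-noise predictors the development AUC exceeds 0.5 by construction,
    # while the validation AUC hovers near 0.5: capitalization on chance.
    ```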

  17. An innovative education program: the peer competency validator model.

    PubMed

    Ringerman, Eileen; Flint, Lenora L; Hughes, DiAnn E

    2006-01-01

    This article describes the development, implementation, and evaluation of a creative peer competency validation model leading to successful outcomes including a more proficient and motivated staff, the replacement of annual skill labs with ongoing competency validation, and significant cost savings. Trained staff assessed competencies of their coworkers directly in the practice setting. Registered nurses, licensed vocational nurses, and medical assistants recruited from patient care staff comprise the validator group. The model is applicable to any practice setting. PMID:16760770

  18. Statistical Validation of Normal Tissue Complication Probability Models

    SciTech Connect

    Xu Chengjian; Schaaf, Arjen van der; Veld, Aart A. van't; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
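
    A compact sketch of the two recommended procedures, using scikit-learn's L1-penalized logistic regression as a stand-in for the paper's LASSO NTCP fit: an outer cross-validation loop wrapped around the inner model-selection loop (double cross-validation), plus a permutation test for the statistical significance of the performance.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import (GridSearchCV, cross_val_score,
                                         permutation_test_score)

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 30))         # candidate dosimetric predictors (synthetic)
    y = rng.binomial(1, 0.3, 200)          # xerostomia indicator (synthetic)

    lasso = GridSearchCV(
        LogisticRegression(penalty="l1", solver="liblinear"),
        {"C": np.logspace(-2, 2, 9)}, cv=5, scoring="roc_auc")

    # Outer CV around the inner model-selection loop = double cross-validation.
    outer_auc = cross_val_score(lasso, X, y, cv=5, scoring="roc_auc")
    score, _, pval = permutation_test_score(
        lasso, X, y, cv=5, scoring="roc_auc", n_permutations=100, random_state=0)
    print(f"double-CV AUC = {outer_auc.mean():.2f}, permutation p-value = {pval:.2f}")
    ```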

  1. Wake Vortex Encounter Model Validation Experiments

    NASA Technical Reports Server (NTRS)

    Vicroy, Dan; Brandon, Jay; Greene, George C.; Rivers, Robert; Shah, Gautam; Stewart, Eric; Stuever, Robert; Rossow, Vernon

    1997-01-01

    The goal of this current research is to establish a database to validate/calibrate wake encounter analysis methods for fleet-wide application and to measure/document atmospheric effects on wake decay. Two kinds of experiments, wind tunnel experiments and flight experiments, are performed. This paper discusses the different types of tests and compares their wake velocity measurements.

  2. Three Dimensional Vapor Intrusion Modeling: Model Validation and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Akbariyeh, S.; Patterson, B.; Rakoczy, A.; Li, Y.

    2013-12-01

    Volatile organic chemicals (VOCs), such as chlorinated solvents and petroleum hydrocarbons, are prevalent groundwater contaminants due to their improper disposal and accidental spillage. In addition to contaminating groundwater, VOCs may partition into the overlying vadose zone and enter buildings through gaps and cracks in foundation slabs or basement walls, a process termed vapor intrusion. Vapor intrusion of VOCs has been recognized as a detrimental source of human exposure to potentially carcinogenic or toxic compounds. The simulation of vapor intrusion from a subsurface source has been the focus of many studies to better understand the process and guide field investigation. While multiple analytical and numerical models were developed to simulate the vapor intrusion process, detailed validation of these models against well-controlled experiments is still lacking, due to the complexity and uncertainties associated with site characterization and soil gas flux and indoor air concentration measurement. In this work, we present an effort to validate a three-dimensional vapor intrusion model based on a well-controlled experimental quantification of the vapor intrusion pathways into a slab-on-ground building under varying environmental conditions. Finally, a probabilistic approach based on Monte Carlo simulations is implemented to determine the probability distribution of indoor air concentration based on the most uncertain input parameters.
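
    The probabilistic step described above can be sketched generically: draw the uncertain inputs from assumed distributions, push each draw through the model, and summarize the resulting indoor-concentration distribution. The one-line attenuation factor below is a placeholder for the authors' three-dimensional model, and every distribution is hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000

    # Uncertain inputs (all distributions hypothetical).
    c_source = rng.lognormal(np.log(100.0), 0.5, n)     # source soil-gas conc., ug/m3
    soil_perm = rng.lognormal(np.log(1e-11), 1.0, n)    # soil air permeability, m2
    crack_ratio = rng.uniform(1e-4, 1e-3, n)            # foundation crack area fraction

    # Placeholder attenuation factor: more permeable soil and larger cracks
    # give more intrusion (functional form illustrative only).
    alpha = 1e-3 * (soil_perm / 1e-11) ** 0.3 * (crack_ratio / 5e-4) ** 0.5
    c_indoor = alpha * c_source

    print("median indoor concentration:", np.median(c_indoor))
    print("95th percentile:            ", np.percentile(c_indoor, 95))
    ```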

  3. Toward Validation of the Diagnostic-Prescriptive Model

    ERIC Educational Resources Information Center

    Ysseldyke, James E.; Sabatino, David A.

    1973-01-01

    Criticized are recent research efforts to validate the diagnostic-prescriptive model of remediating learning disabilities, and proposed is a 6-step psychoeducational model designed to ascertain links between behavioral differences and instructional outcomes. (DB)

  4. Validation subset selections for extrapolation oriented QSPAR models.

    PubMed

    Szántai-Kis, Csaba; Kövesdi, István; Kéri, György; Orfi, László

    2003-01-01

    One of the most important features of QSPAR models is their predictive ability. The predictive ability of QSPAR models should be checked by external validation. In this work we examined three different types of external validation set selection methods for their usefulness in in-silico screening. The usefulness of the selection methods was studied in such a way that: 1) We generated thousands of QSPR models and stored them in 'model banks'. 2) We selected a final top model from the model banks based on three different validation set selection methods. 3) We predicted large data sets, which we called 'chemical universe sets', and calculated the corresponding SEPs. The models were generated from small fractions of the available water solubility data during a GA Variable Subset Selection procedure. The external validation sets were constructed by random selections, uniformly distributed selections or by perimeter-oriented selections. We found that the best performing models on the perimeter-oriented external validation sets usually gave the best validation results when the remaining part of the available data was overwhelmingly large, i.e., when the model had to make a lot of extrapolations. We also compared the top final models obtained from external validation set selection methods in three independent and different sizes of 'chemical universe sets'.
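
    The three selection schemes can be sketched as follows; the perimeter rule here (points farthest from the descriptor-space centroid) is one plausible reading of "perimeter-oriented", not necessarily the authors' exact construction.

    ```python
    import numpy as np

    def select_validation(X, k, method="random", rng=None):
        """Return indices of k validation compounds from descriptor matrix X."""
        rng = rng or np.random.default_rng()
        n = len(X)
        if method == "random":
            return rng.choice(n, size=k, replace=False)
        if method == "uniform":
            # Spread along the first descriptor: evenly spaced order statistics.
            order = np.argsort(X[:, 0])
            return order[np.linspace(0, n - 1, k).astype(int)]
        if method == "perimeter":
            # Points farthest from the centroid of descriptor space.
            d = np.linalg.norm(X - X.mean(axis=0), axis=1)
            return np.argsort(d)[-k:]
        raise ValueError(method)

    X = np.random.default_rng(6).normal(size=(200, 5))
    for m in ("random", "uniform", "perimeter"):
        print(m, sorted(select_validation(X, 20, m))[:5], "...")
    ```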

  5. Translation, adaptation and validation of a Portuguese version of the Moorehead-Ardelt Quality of Life Questionnaire II.

    PubMed

    Maciel, João; Infante, Paulo; Ribeiro, Susana; Ferreira, André; Silva, Artur C; Caravana, Jorge; Carvalho, Manuel G

    2014-11-01

    The prevalence of obesity has increased worldwide. An assessment of the impact of obesity on health-related quality of life (HRQoL) requires specific instruments. The Moorehead-Ardelt Quality of Life Questionnaire II (MA-II) is a widely used instrument to assess HRQoL in morbidly obese patients. The objective of this study was to translate and validate a Portuguese version of the MA-II. The study included forward and backward translations of the original MA-II. The reliability of the Portuguese MA-II was estimated using the internal consistency and test-retest methods. For validation purposes, the Spearman's rank correlation coefficient was used to evaluate the correlation between the Portuguese MA-II and the Portuguese versions of two other questionnaires, the 36-item Short Form Health Survey (SF-36) and the Impact of Weight on Quality of Life-Lite (IWQOL-Lite). One hundred and fifty morbidly obese patients were randomly assigned to test the reliability and validity of the Portuguese MA-II. Good internal consistency was demonstrated by a Cronbach's alpha coefficient of 0.80, and a very good agreement in terms of test-retest reliability was recorded, with an overall intraclass correlation coefficient (ICC) of 0.88. The total sums of MA-II scores and each item of MA-II were significantly correlated with all domains of SF-36 and IWQOL-Lite. A statistically significant negative correlation was found between the MA-II total score and BMI. Moreover, age, gender and surgical status were independent predictors of MA-II total score. A reliable and valid Portuguese version of the MA-II was produced, thus enabling the routine use of MA-II in the morbidly obese Portuguese population. PMID:24817428
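
    Both reliability statistics reported above have simple closed forms: Cronbach's alpha from a subjects-by-items score matrix, and a one-way random-effects ICC from repeated total scores. A sketch with standard formulas and hypothetical data:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (subjects x items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    def icc_oneway(scores):
        """One-way random-effects ICC(1,1); scores: (subjects x repeats)."""
        s = np.asarray(scores, dtype=float)
        n, k = s.shape
        msb = k * ((s.mean(axis=1) - s.mean()) ** 2).sum() / (n - 1)            # between
        msw = ((s - s.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within
        return (msb - msw) / (msb + (k - 1) * msw)

    rng = np.random.default_rng(7)
    true_score = rng.normal(50, 10, 150)
    test = true_score + rng.normal(0, 3, 150)      # test administration
    retest = true_score + rng.normal(0, 3, 150)    # retest administration
    items = true_score[:, None] / 10 + rng.normal(0, 1, (150, 6))  # 6 correlated items
    print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
    print("test-retest ICC:", round(icc_oneway(np.column_stack([test, retest])), 2))
    ```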

  6. A framework for biodynamic feedthrough analysis--part II: validation and application.

    PubMed

    Venrooij, Joost; van Paassen, Marinus M; Mulder, Mark; Abbink, David A; Mulder, Max; van der Helm, Frans C T; Bülthoff, Heinrich H

    2014-09-01

    Biodynamic feedthrough (BDFT) is a complex phenomenon that has been studied for several decades. However, there is little consensus on how to approach the BDFT problem in terms of definitions, nomenclature, and mathematical descriptions. In this paper, the framework for BDFT analysis, as presented in Part I of this dual publication, is validated and applied. The goal of this framework is twofold. First, it provides some common ground among the seemingly large range of different approaches existing in the BDFT literature. Second, the framework itself allows for gaining new insights into BDFT phenomena. Using recently obtained measurement data, parts of the framework that were not already addressed elsewhere are validated. As an example of a practical application of the framework, it will be demonstrated how the effects of control device dynamics on BDFT can be understood and accurately predicted. Other ways of employing the framework are illustrated by interpreting the results of three selected studies from the literature using the BDFT framework. The presentation of the BDFT framework is divided into two parts. This paper, Part II, addresses the validation and application of the framework. Part I, which is also published in this journal issue, addresses the theoretical foundations of the framework. The work is presented in two separate papers to allow for a detailed discussion of both the framework's theoretical background and its validation.

  7. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  8. Validation of Numerical Shallow Water Models for Tidal Lagoons

    SciTech Connect

    Eliason, D.; Bourgeois, A.

    1999-11-01

    An analytical solution is presented for the case of a stratified, tidally forced lagoon. This solution, especially its energetics, is useful for the validation of numerical shallow water models under stratified, tidally forced conditions. The utility of the analytical solution for validation is demonstrated for a simple finite difference numerical model. A comparison is presented of the energetics of the numerical and analytical solutions in terms of the convergence of model results to the analytical solution with increasing spatial and temporal resolution.
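
    Convergence to the analytical solution with increasing resolution is usually quantified by the slope of the error norm versus grid spacing on log-log axes; a minimal sketch with hypothetical error norms:

    ```python
    import numpy as np

    dx = np.array([1.0, 0.5, 0.25, 0.125])            # grid spacings
    err = np.array([3.1e-2, 8.2e-3, 2.1e-3, 5.3e-4])  # ||numerical - analytical||

    # Observed order of accuracy = slope of log(err) versus log(dx).
    order = np.polyfit(np.log(dx), np.log(err), 1)[0]
    print(f"observed convergence order ~ {order:.2f}")  # ~2 for a second-order scheme
    ```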

  9. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction: External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods: We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results: The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion: The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect
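
    One schematic reading of such a permutation test: pool the development and validation sets, repeatedly re-split them at random into sets of the original sizes, and compare the observed drop in c-statistic against the re-split distribution. A simplified sketch, not the exact published procedure:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def cstat(X_fit, y_fit, X_eval, y_eval):
        m = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
        return roc_auc_score(y_eval, m.predict_proba(X_eval)[:, 1])

    def permutation_test(Xd, yd, Xv, yv, n_perm=200, seed=0):
        rng = np.random.default_rng(seed)
        observed = cstat(Xd, yd, Xd, yd) - cstat(Xd, yd, Xv, yv)  # drop at validation
        X, y, nd = np.vstack([Xd, Xv]), np.concatenate([yd, yv]), len(yd)
        deltas = []
        for _ in range(n_perm):
            idx = rng.permutation(len(y))
            d, v = idx[:nd], idx[nd:]
            deltas.append(cstat(X[d], y[d], X[d], y[d]) - cstat(X[d], y[d], X[v], y[v]))
        return float(np.mean(np.abs(deltas) >= abs(observed)))    # permutation p-value

    rng = np.random.default_rng(8)
    Xd, yd = rng.normal(size=(300, 5)), rng.integers(0, 2, 300)
    Xv, yv = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)
    print("p-value:", permutation_test(Xd, yd, Xv, yv, n_perm=50))
    ```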

  10. Validation of the Millon Clinical Multiaxial Inventory for Axis II disorders: does it meet the Daubert standard?

    PubMed

    Rogers, R; Salekin, R T; Sewell, K W

    1999-08-01

    Relevant to forensic practice, the Supreme Court in Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993) established the boundaries for the admissibility of scientific evidence that take into account its trustworthiness as assessed via evidentiary reliability. In conducting forensic evaluations, psychologists and other mental health professionals must be able to offer valid diagnoses, including Axis II disorders. The most widely available measure of personality disorders is the Millon Clinical Multiaxial Inventory (MCMI) and its subsequent revisions (MCMI-II and MCMI-III). We address the critical question, "Do the MCMI-II and MCMI-III meet the requirements of Daubert?" Fundamental problems in the scientific validity and error rates for MCMI-III appear to preclude its admissibility under Daubert for the assessment of Axis II disorders. We address the construct validity for the MCMI and MCMI-II via a meta-analysis of 33 studies. The resulting multitrait-multimethod approach allowed us to address their convergent and discriminant validity through method effects (Marsh, 1990). With reference to Daubert, the results suggest a circumscribed use for the MCMI-II with good evidence of construct validity for Avoidant, Schizotypal, and Borderline personality disorders. PMID:10439726

  11. Validation of vehicle dynamics simulation models - a review

    NASA Astrophysics Data System (ADS)

    Kutluay, Emir; Winner, Hermann

    2014-02-01

    In this work, a literature survey on the validation of vehicle dynamics simulation models is presented. Estimating the dynamic responses of existing or proposed vehicles has a wide array of applications in the development of vehicle technologies, e.g. active suspensions, controller design, driver assistance systems, etc. Although simulation environments, measurement tools and mathematical theories on vehicle dynamics are well established, the methodical link between the experimental test data and validity analysis of the simulation model is still lacking. This report presents different views on the definition of validation, and its usage in vehicle dynamics simulation models.

  12. Teacher Change Beliefs: Validating a Scale with Structural Equation Modelling

    ERIC Educational Resources Information Center

    Kin, Tai Mei; Abdull Kareem, Omar; Nordin, Mohamad Sahari; Wai Bing, Khuan

    2015-01-01

    The objectives of the study were to validate a substantiated Teacher Change Beliefs Model (TCBM) and an instrument to identify critical components of teacher change beliefs (TCB) in Malaysian secondary schools. Five different pilot test approaches were applied to ensure the validity and reliability of the instrument. A total of 936 teachers from…

  13. Adolescent Personality: A Five-Factor Model Construct Validation

    ERIC Educational Resources Information Center

    Baker, Spencer T.; Victor, James B.; Chambers, Anthony L.; Halverson, Jr., Charles F.

    2004-01-01

    The purpose of this study was to investigate convergent and discriminant validity of the five-factor model of adolescent personality in a school setting using three different raters (methods): self-ratings, peer ratings, and teacher ratings. The authors investigated validity through a multitrait-multimethod matrix and a confirmatory factor…

  14. Effects of Mg II and Ca II ionization on ab-initio solar chromosphere models

    NASA Technical Reports Server (NTRS)

    Rammacher, W.; Cuntz, M.

    1991-01-01

    Acoustically heated solar chromosphere models are computed considering radiation damping by (non-LTE) emission from H(-) and by Mg II and Ca II emission lines. The radiative transfer equations for the Mg II k and Ca II K emission lines are solved using the core-saturation method with complete redistribution. The Mg II k and Ca II K cooling rates are compared with the VAL model C. Several substantial improvements over the work of Ulmschneider et al. (1987) are included. It is found that the rapid temperature rises caused by the ionization of Mg II are not formed in the middle chromosphere, but occur at larger atmospheric heights. These models represent the temperature structure of the 'real' solar chromosphere much better. This result is a major precondition for the study of ab-initio models for solar flux tubes based on MHD wave propagation and also for ab-initio models for the solar transition layer.

  15. Measurements of Humidity in the Atmosphere and Validation Experiments (Mohave, Mohave II): Results Overview

    NASA Technical Reports Server (NTRS)

    Leblanc, Thierry; McDermid, Iain S.; McGee, Thomas G.; Twigg, Laurence W.; Sumnicht, Grant K.; Whiteman, David N.; Rush, Kurt D.; Cadirola, Martin P.; Venable, Demetrius D.; Connell, R.; Demoz, Belay B.; Vomel, Holger; Miloshevich, L.

    2008-01-01

    The Measurements of Humidity in the Atmosphere and Validation Experiments (MOHAVE, MOHAVE-II) inter-comparison campaigns took place at the Jet Propulsion Laboratory (JPL) Table Mountain Facility (TMF, 34.5(sup o)N) in October 2006 and 2007 respectively. Both campaigns aimed at evaluating the capability of three Raman lidars for the measurement of water vapor in the upper troposphere and lower stratosphere (UT/LS). During each campaign, more than 200 hours of lidar measurements were compared to balloon borne measurements obtained from 10 Cryogenic Frost-point Hygrometer (CFH) flights and over 50 Vaisala RS92 radiosonde flights. During MOHAVE, fluorescence in all three lidar receivers was identified, causing a significant wet bias above 10-12 km in the lidar profiles as compared to the CFH. All three lidars were reconfigured after MOHAVE, and no such bias was observed during the MOHAVE-II campaign. The lidar profiles agreed very well with the CFH up to 13-17 km altitude, where the lidar measurements become noise limited. The results from MOHAVE-II have shown that the water vapor Raman lidar will be an appropriate technique for the long-term monitoring of water vapor in the UT/LS given a slight increase in its power-aperture, as well as careful calibration.

  16. Photon number conserving models of H II bubbles during reionization

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Choudhury, T. Roy; Padmanabhan, Hamsa

    2016-08-01

    Traditional excursion-set-based models of H II bubble growth during the epoch of reionization are known to violate photon number conservation, in the sense that the mass fraction in ionized bubbles in these models does not equal the ratio of the number of ionizing photons produced by sources and the number of hydrogen atoms in the intergalactic medium. E.g. for a Planck13 cosmology with electron scattering optical depth τ ≃ 0.066, the discrepancy is ~15 per cent for x_{H II}=0.1 and ~5 per cent for x_{H II}=0.5. We demonstrate that this problem arises from a fundamental conceptual shortcoming of the excursion-set approach (already recognized in the literature on this formalism) which only tracks average mass fractions instead of the exact, stochastic source counts. With this insight, we build an approximately photon number conserving Monte Carlo model of bubble growth based on partitioning regions of dark matter into haloes. Our model, which is formally valid for white noise initial conditions (ICs), shows dramatic improvements in photon number conservation, as well as substantial differences in the bubble size distribution, as compared to traditional models. We explore the trends obtained on applying our algorithm to more realistic ICs, finding that these improvements are robust to changes in the ICs. Since currently popular seminumerical schemes of bubble growth also violate photon number conservation, we argue that it will be worthwhile to pursue new, explicitly photon number conserving approaches. Along the way, we clarify some misconceptions regarding this problem that have appeared in the literature.
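
    For reference, the conservation statement being tested can be written compactly (our paraphrase, with ζ an ionizing efficiency per unit collapsed mass):

    ```latex
    % Photon number conservation during reionization (schematic): the mean
    % ionized fraction must equal the ratio of ionizing photons produced to
    % hydrogen atoms, i.e. an efficiency times the collapsed mass fraction.
    \bar{x}_{\mathrm{H\,II}} = \frac{n_\gamma}{n_{\mathrm{H}}} = \zeta\, f_{\mathrm{coll}}
    ```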

  17. Gear Windage Modeling Progress - Experimental Validation Status

    NASA Technical Reports Server (NTRS)

    Kunz, Rob; Handschuh, Robert F.

    2008-01-01

    In the Subsonics Rotary Wing (SRW) Project being funded for propulsion work at NASA Glenn Research Center, performance of the propulsion system is of high importance. In current rotorcraft drive systems many gearing components operate at high rotational speed (pitch line velocity > 24,000 ft/min). In our testing of high-speed helical gear trains at NASA Glenn we have found that the work done on the air-oil mist within the gearbox can become a significant part of the power loss of the system. This loss mechanism is referred to as windage. The effort described in this presentation is to understand the variables that affect windage and to develop a good experimental database to validate the analytical project being conducted at Penn State University by Dr. Rob Kunz under a NASA SRW NRA. The presentation provides an update to the status of these efforts.

  18. Economic analysis of model validation for a challenge problem

    DOE PAGES

    Paez, Paul J.; Paez, Thomas L.; Hasselman, Timothy K.

    2016-02-19

    It is now commonplace for engineers to build mathematical models of the systems they are designing, building, or testing. And, it is nearly universally accepted that phenomenological models of physical systems must be validated prior to use for prediction in consequential scenarios. Yet, there are certain situations in which testing only or no testing and no modeling may be economically viable alternatives to modeling and its associated testing. This paper develops an economic framework within which benefit–cost can be evaluated for modeling and model validation relative to other options. The development is presented in terms of a challenge problem. As a result, we provide a numerical example that quantifies when modeling, calibration, and validation yield higher benefit–cost than a testing-only or no-modeling-and-no-testing option.

  19. Validating Computational Cognitive Process Models across Multiple Timescales

    NASA Astrophysics Data System (ADS)

    Myers, Christopher; Gluck, Kevin; Gunzelmann, Glenn; Krusmark, Michael

    2010-12-01

    Model comparison is vital to evaluating progress in the fields of artificial general intelligence (AGI) and cognitive architecture. As they mature, AGI and cognitive architectures will become increasingly capable of providing a single model that completes a multitude of tasks, some of which the model was not specifically engineered to perform. These models will be expected to operate for extended periods of time and serve functional roles in real-world contexts. Questions arise regarding how to evaluate such models appropriately, including issues pertaining to model comparison and validation. In this paper, we specifically address model validation across multiple levels of abstraction, using an existing computational process model of unmanned aerial vehicle basic maneuvering to illustrate the relationship between validity and timescales of analysis.

  1. Exploring the Validity of Valproic Acid Animal Model of Autism

    PubMed Central

    Mabunga, Darine Froy N.; Gonzales, Edson Luck T.; Kim, Ji-woon; Kim, Ki Chan

    2015-01-01

    The valproic acid (VPA) animal model of autism spectrum disorder (ASD) is one of the most widely used animal models in the field. Like any other disease model, it cannot capture the totality of the features seen in autism. Is it then valid to model autism? This model demonstrates many of the structural and behavioral features that can be observed in individuals with autism. These similarities enable the model to define relevant pathways of developmental dysregulation resulting from environmental manipulation. The uncovering of these complex pathways has resulted in a growing pool of potential therapeutic candidates addressing the core symptoms of ASD. Here, we summarize the validity points of VPA that may or may not qualify it as a valid animal model of ASD. PMID:26713077

  2. Validating the Mexican American Intergenerational Caregiving Model

    ERIC Educational Resources Information Center

    Escandon, Socorro

    2011-01-01

    The purpose of this study was to substantiate and further develop a previously formulated conceptual model of Role Acceptance in Mexican American family caregivers by exploring the theoretical strengths of the model. The sample consisted of women older than 21 years of age who self-identified as Hispanic, were related through consanguinal or…

  3. VERIFICATION AND VALIDATION OF THE SPARC MODEL

    EPA Science Inventory

    Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values--that is, the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships that allow estimation of some ...

  4. Highlights of Transient Plume Impingement Model Validation and Applications

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael

    2011-01-01

    This paper describes highlights of an ongoing validation effort conducted to assess the viability of applying a set of analytic point source transient free molecule equations to model behavior ranging from molecular effusion to rocket plumes. The validation effort includes encouraging comparisons to both steady and transient studies involving experimental data and direct simulation Monte Carlo results. Finally, this model is applied to describe features of two exotic transient scenarios involving NASA Goddard Space Flight Center satellite programs.

  5. HEDR model validation plan. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  6. Validating predictions from climate envelope models

    USGS Publications Warehouse

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
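
    Sensitivity and specificity as used above are the standard presence-absence classification rates (correctly classified presences and absences, respectively); a two-function sketch:

    ```python
    import numpy as np

    def sens_spec(observed, predicted):
        """observed, predicted: boolean arrays of species presence."""
        observed, predicted = np.asarray(observed), np.asarray(predicted)
        sens = (observed & predicted).sum() / observed.sum()        # true presences
        spec = (~observed & ~predicted).sum() / (~observed).sum()   # true absences
        return sens, spec

    obs = np.array([1, 1, 0, 0, 1, 0], dtype=bool)    # surveyed occurrences
    pred = np.array([1, 0, 0, 1, 1, 0], dtype=bool)   # model-predicted occurrences
    print(sens_spec(obs, pred))                       # ~ (0.667, 0.667)
    ```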

  7. Development and validation of model for sand

    NASA Astrophysics Data System (ADS)

    Church, P.; Ingamells, V.; Wood, A.; Gould, P.; Perry, J.; Jardine, A.; Tyas, A.

    2015-09-01

    There is a growing requirement within QinetiQ to develop models for assessments when there is very little experimental data. A theoretical approach to developing equations of state for geological materials has been developed using Quantitative Structure Property Modelling based on the Porter-Gould model approach. This has been applied to well-controlled sand with different moisture contents and particle shapes. The Porter-Gould model describes an elastic response and gives good agreement at high impact pressures with experiment indicating that the response under these conditions is dominated by the molecular response. However at lower pressures the compaction behaviour is dominated by a micro-mechanical response which drives the need for additional theoretical tools and experiments to separate the volumetric and shear compaction behaviour. The constitutive response is fitted to existing triaxial cell data and Quasi-Static (QS) compaction data. This data is then used to construct a model in the hydrocode. The model shows great promise in predicting plate impact, Hopkinson bar, fragment penetration and residual velocity of fragments through a finite thickness of sand.

  9. Angular Distribution Models for Top-of-Atmosphere Radiative Flux Estimation from the Clouds and the Earth's Radiant Energy System Instrument on the Tropical Rainfall Measuring Mission Satellite. Part II; Validation

    NASA Technical Reports Server (NTRS)

    Loeb, N. G.; Loukachine, K.; Wielicki, B. A.; Young, D. F.

    2003-01-01

    Top-of-atmosphere (TOA) radiative fluxes from the Clouds and the Earth's Radiant Energy System (CERES) are estimated from empirical angular distribution models (ADMs) that convert instantaneous radiance measurements to TOA fluxes. This paper evaluates the accuracy of CERES TOA fluxes obtained from a new set of ADMs developed for the CERES instrument onboard the Tropical Rainfall Measuring Mission (TRMM). The uncertainty in regional monthly mean reflected shortwave (SW) and emitted longwave (LW) TOA fluxes is less than 0.5 W/sq m, based on comparisons with TOA fluxes evaluated by direct integration of the measured radiances. When stratified by viewing geometry, TOA fluxes from different angles are consistent to within 2% in the SW and 0.7% (or 2 W/sq m) in the LW. In contrast, TOA fluxes based on ADMs from the Earth Radiation Budget Experiment (ERBE) applied to the same CERES radiance measurements show a 10% relative increase with viewing zenith angle in the SW and a 3.5% (9 W/sq m) decrease with viewing zenith angle in the LW. Based on multiangle CERES radiance measurements, 1° regional instantaneous TOA flux errors from the new CERES ADMs are estimated to be <10 W/sq m in the SW and <3.5 W/sq m in the LW. The errors show little or no dependence on cloud phase, cloud optical depth, and cloud infrared emissivity. An analysis of cloud radiative forcing (CRF) sensitivity to differences between ERBE and CERES TRMM ADMs, scene identification, and directional models of albedo as a function of solar zenith angle shows that ADM and clear-sky scene identification differences can lead to an 8 W/sq m root-mean-square (rms) difference in 1° daily mean SW CRF and a 4 W/sq m rms difference in LW CRF. In contrast, monthly mean SW and LW CRF differences reach 3 W/sq m. CRF is found to be relatively insensitive to differences between the ERBE and CERES TRMM directional models.
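
    For reference, the ADM conversion underlying these flux estimates expresses the TOA flux in terms of the measured radiance and an empirical anisotropic factor for the identified scene type; in standard notation (ours, not quoted from the paper):

        F(\theta_0) = \frac{\pi \, I(\theta_0, \theta, \phi)}{R(\theta_0, \theta, \phi)}

    where \theta_0 is the solar zenith angle, \theta the viewing zenith angle, \phi the relative azimuth, I the measured radiance, and R the scene-type-dependent ADM anisotropic factor.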

  10. Shuttle Spacesuit: Fabric/LCVG Model Validation

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tweed, J.; Zeitlin, C.; Kim, M.-H. Y.; Anderson, B. M.; Cucinotta, F. A.; Ware, J.; Persans, A. E.

    2001-01-01

    A detailed spacesuit computational model is being developed at the Langley Research Center for radiation exposure evaluation studies. The details of the construction of the spacesuit are critical to estimation of exposures and assessing the risk to the astronaut on EVA. Past evaluations of spacesuit shielding properties assumed the basic fabric lay-up (Thermal Micrometeoroid Garment, fabric restraints, and pressure envelope) and Liquid Cooling and Ventilation Garment (LCVG) could be homogenized as a single layer, overestimating the protective properties over 60 percent of the fabric area. The present spacesuit model represents the inhomogeneous distributions of LCVG materials (mainly the water filled cooling tubes). An experimental test is performed using a 34-MeV proton beam and high-resolution detectors to compare with model-predicted transmission factors. Some suggestions are made on possible improved construction methods to improve the spacesuit's protection properties.

  11. A test of a proposed method for estimating validity of a multivariate composite predictor: extending the job component validity model.

    PubMed

    Hoffman, Calvin C; Morris, David; Luck, Gypsi

    2009-12-01

    In this study, a proposed extension to the job component validity model from the Position Analysis Questionnaire was tested. Job component validity, a form of synthetic validation, allows researchers to select useful predictors and to estimate the criterion-related validity of tests based on a job analysis that includes the Position Analysis Questionnaire. Morris and colleagues described a method for estimating the multiple correlation of a test battery assembled via job component validity estimates. In the current study, job component validity estimates, derived from the multiple correlation procedure proposed by Morris et al., were compared to unit-weighted validity estimates obtained in a criterion-related validity study of six job progressions. The multivariate job component validity estimates were comparable to unit-weighted validity coefficients obtained using supervisory ratings as criteria. Multivariate job component validity estimates were conservative compared to corrected unit-weighted validity coefficients.
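
    The unit-weighted composite validity referred to here follows from the standard formula for the correlation of a sum of predictors with a criterion; a minimal sketch with made-up correlations (not the study's data):

        import numpy as np

        def unit_weighted_validity(r_xy, R_xx):
            """Correlation of a unit-weighted predictor composite with the criterion.

            r_xy : predictor-criterion correlations
            R_xx : predictor intercorrelation matrix (ones on the diagonal)
            """
            r_xy = np.asarray(r_xy, dtype=float)
            R_xx = np.asarray(R_xx, dtype=float)
            return r_xy.sum() / np.sqrt(R_xx.sum())

        # toy battery: three tests with validities .30/.25/.35, intercorrelations .40
        R = np.array([[1.0, 0.4, 0.4],
                      [0.4, 1.0, 0.4],
                      [0.4, 0.4, 1.0]])
        print(round(unit_weighted_validity([0.30, 0.25, 0.35], R), 3))  # ~0.387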

  12. Validation of Model Forecasts of the Ambient Solar Wind

    NASA Technical Reports Server (NTRS)

    Macneice, P. J.; Hesse, M.; Kuznetsova, M. M.; Rastaetter, L.; Taktakishvili, A.

    2009-01-01

    Independent and automated validation is a vital step in the progression of models from the research community into operational forecasting use. In this paper we describe a program in development at the CCMC to provide just such a comprehensive validation for models of the ambient solar wind in the inner heliosphere. We have built upon previous efforts published in the community, sharpened their definitions, and completed a baseline study. We also provide first results from this program of the comparative performance of the MHD models available at the CCMC against that of the Wang-Sheeley-Arge (WSA) model. An important goal of this effort is to provide a consistent validation to all available models. Clearly exposing the relative strengths and weaknesses of the different models will enable forecasters to craft more reliable ensemble forecasting strategies. Models of the ambient solar wind are developing rapidly as a result of improvements in data supply, numerical techniques, and computing resources. It is anticipated that in the next five to ten years, the MHD based models will supplant semi-empirical potential based models such as the WSA model, as the best available forecast models. We anticipate that this validation effort will track this evolution and so assist policy makers in gauging the value of past and future investment in modeling support.
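
    Comparative performance against a baseline forecast, of the kind described here for the MHD models versus WSA, is often summarized with a mean-square-error skill score; the sketch below is a generic illustration (our notation and toy data, not the CCMC's definitions):

        import numpy as np

        def skill_score(obs, model, baseline):
            """MSE skill: 1 = perfect, 0 = no better than baseline, <0 = worse."""
            obs, model, baseline = map(np.asarray, (obs, model, baseline))
            mse_model = np.mean((model - obs) ** 2)
            mse_base = np.mean((baseline - obs) ** 2)
            return 1.0 - mse_model / mse_base

        # toy solar wind speed series (km/s)
        obs = np.array([420.0, 450.0, 500.0, 480.0, 430.0])
        mhd = np.array([430.0, 445.0, 490.0, 500.0, 425.0])  # hypothetical MHD forecast
        wsa = np.array([400.0, 470.0, 520.0, 450.0, 460.0])  # hypothetical WSA forecast
        print(skill_score(obs, mhd, wsa))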

  13. WEPP: Model use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Water Erosion Prediction Project (WEPP) model is a process-based, continuous simulation, distributed parameter, hydrologic and soil erosion prediction system. It has been developed over the past 25 years to allow for easy application to a large number of land management scenarios. Most general o...

  14. Theory and Implementation of Nuclear Safety System Codes - Part II: System Code Closure Relations, Validation, and Limitations

    SciTech Connect

    Glenn A Roth; Fatih Aydogan

    2014-09-01

    This is Part II of two articles describing the details of thermal-hydraulic system codes. In this second part of the article series, the system code closure relationships (used to model thermal and mechanical non-equilibrium and the coupling of the phases) for the governing equations are discussed and evaluated. These include several thermal and hydraulic models, such as heat transfer coefficients for various flow regimes, two-phase pressure correlations, two-phase friction correlations, drag coefficients and interfacial models between the fields. These models are often developed from experimental data, so the experiment conditions should be understood in order to evaluate the efficacy of the closure models. Code verification and validation, including Separate Effects Tests (SETs) and Integral Effects Tests (IETs), is also assessed. The assessments show that the test cases cover a significant section of the system code capabilities, but some of the more advanced reactor designs will push the limits of validation for the codes. Lastly, the limitations of the codes are discussed by considering next-generation power plants, analyzing not only existing nuclear power plants but also next-generation designs. The nuclear industry is developing new, innovative reactor designs, such as Small Modular Reactors (SMRs), High-Temperature Gas-cooled Reactors (HTGRs) and others. Sub-types of these reactor designs utilize pebbles, prismatic graphite moderators, helical steam generators, innovative fuel types, and many other design features that may not be fully analyzed by current system codes. This second part completes the series on the comparison and evaluation of the selected reactor system codes by discussing the closure relations, validation and limitations. These two articles indicate areas where the models can be improved to adequately address issues with new reactor design and development.
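
    As a concrete instance of the closure relationships discussed, single-phase turbulent heat transfer coefficients in system codes are commonly supplied by correlations of the Dittus-Boelter form; the sketch below is a generic illustration, not code from any of the reviewed systems:

        def dittus_boelter_htc(re, pr, k, d_h, heating=True):
            """Single-phase turbulent heat transfer coefficient, W/(m^2 K).

            Nu = 0.023 Re^0.8 Pr^n with n = 0.4 (heating) or 0.3 (cooling);
            applicable roughly for Re > 1e4, 0.7 < Pr < 160, L/D > 10.
            """
            n = 0.4 if heating else 0.3
            nu = 0.023 * re ** 0.8 * pr ** n  # Nusselt number
            return nu * k / d_h               # h = Nu * k / D_h

        # water-like conditions in a 1 cm hydraulic-diameter channel
        print(dittus_boelter_htc(re=5.0e4, pr=1.5, k=0.6, d_h=0.01))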

  15. Validation of the Poisson Stochastic Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Zhuravleva, Tatiana; Marshak, Alexander

    2004-01-01

    A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it is shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

  16. Development and validation of a two-phase, three-dimensional model for PEM fuel cells.

    SciTech Connect

    Chen, Ken Shuang

    2010-04-01

    The objectives of this presentation are: (1) To develop and validate a two-phase, three-dimensional transport model for simulating PEM fuel cell performance under a wide range of operating conditions; (2) To apply the validated PEM fuel cell model to improve fundamental understanding of key phenomena involved and to identify rate-limiting steps and develop recommendations for improvements so as to accelerate the commercialization of fuel cell technology; (3) The validated PEMFC model can be employed to improve and optimize PEM fuel cell operation. Consequently, the project helps: (i) address the technical barriers on performance, cost, and durability; and (ii) achieve DOE's near-term technical targets on performance, cost, and durability in automotive and stationary applications.

  17. Experiments for foam model development and validation.

    SciTech Connect

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F.; Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  18. Analysis of Experimental Data for High Burnup PWR Spent Fuel Isotopic Validation - Vandellos II Reactor

    SciTech Connect

    Ilas, Germina; Gauld, Ian C

    2011-01-01

    This report is one of several recent NUREG/CR reports documenting benchmark-quality radiochemical assay data and the use of the data to validate computer code predictions of isotopic composition for spent nuclear fuel, in order to establish the uncertainty and bias associated with code predictions. The experimental data analyzed in the current report were acquired from a high-burnup fuel program coordinated by Spanish organizations. The measurements included extensive actinide and fission product data of importance to spent fuel safety applications, including burnup credit, decay heat, and radiation source terms. Six unique spent fuel samples from three uranium oxide fuel rods were analyzed. The fuel rods had a 4.5 wt% 235U initial enrichment and were irradiated in the Vandellos II pressurized water reactor operated in Spain. The burnups of the fuel samples range from 42 to 78 GWd/MTU. The measurements were used to validate the two-dimensional depletion sequence TRITON in the SCALE computer code system.
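
    Isotopic validation studies of this kind typically summarize code bias and uncertainty through calculated-to-experimental (C/E) ratios; a minimal sketch of that bookkeeping (the numbers are illustrative, not the report's data):

        import numpy as np

        def ce_statistics(calculated, measured):
            """Mean bias and sample standard deviation of C/E ratios."""
            ce = np.asarray(calculated, float) / np.asarray(measured, float)
            bias = ce.mean() - 1.0   # > 0 means the code overpredicts on average
            spread = ce.std(ddof=1)  # sample standard deviation of C/E
            return bias, spread

        # hypothetical normalized nuclide concentrations for six fuel samples
        calc = [1.02, 0.97, 1.05, 0.99, 1.01, 0.96]
        meas = [1.00, 1.00, 1.00, 1.00, 1.00, 1.00]
        print(ce_statistics(calc, meas))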

  19. Validation of the Serpent 2 code on TRIGA Mark II benchmark experiments.

    PubMed

    Ćalić, Dušan; Žerovnik, Gašper; Trkov, Andrej; Snoj, Luka

    2016-01-01

    The main aim of this paper is the development and validation of a 3D computational model of the TRIGA research reactor using the Serpent 2 code. The calculated parameters were compared to the experimental results and to calculations performed with the MCNP code. The results show that the calculated normalized reaction rates and flux distribution within the core are in good agreement with MCNP and experiment, while in the reflector the flux distribution differs by up to 3% from the measurements. PMID:26516989

  1. Validating Requirements for Fault Tolerant Systems Using Model Checking

    NASA Technical Reports Server (NTRS)

    Schneider, Francis; Easterbrook, Steve M.; Callahan, John R.; Holzmann, Gerard J.

    1997-01-01

    Model checking is shown to be an effective tool in validating the behavior of a fault tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched, allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves by definition a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system: one is an error in the detailed requirements, and the other two are missing/ambiguous requirements. Because the method allows validation of partial specifications, it is also an effective methodology for maintaining fidelity between a co-evolving specification and an implementation.

  2. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Sievers, Michael; Standley, Shaun

    2012-01-01

    Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system level V&V using modeling and simulation, and to use scarce hardware testing time to validate models; the norm for thermal and structural V&V for some time. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated enabling a more complete set of test cases than possible on flight hardware. SysML simulations provide access and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated such an approach is possible.

  3. Model validation - A connection between robust control and identification

    NASA Technical Reports Server (NTRS)

    Smith, Roy S.; Doyle, John C.

    1992-01-01

    The gap between the models used in control synthesis and those obtained from identification experiments is considered by investigating the connection between uncertain models and data. The model validation problem addressed is: given experimental data and a model with both additive noise and norm-bounded perturbations, is it possible that the model could produce the observed input-output data? This problem is studied for the standard H-infinity/mu framework models. A necessary condition for such a model to describe an experimental datum is obtained. For a large class of models in the robust control framework, this condition is computable as the solution of a quadratic optimization problem.

  4. Preliminary validation of the Spanish version of the Multiple Stimulus Types Ambiguity Tolerance Scale (MSTAT-II).

    PubMed

    Arquero, José L; McLain, David L

    2010-05-01

    Despite widespread interest in ambiguity tolerance and other information-related individual differences, existing measures are conceptually dispersed and psychometrically weak. This paper presents the Spanish version of MSTAT-II, a short, stimulus-oriented, and psychometrically improved measure of an individual's orientation toward ambiguous stimuli. Results obtained reveal adequate reliability, validity, and temporal stability. These results support the use of MSTAT-II as an adequate measure of ambiguity tolerance.

  5. Functional state modelling approach validation for yeast and bacteria cultivations

    PubMed Central

    Roeva, Olympia; Pencheva, Tania

    2014-01-01

    In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for the different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This is a strong structural verification of the functional state modelling theory not only for a set of yeast cultivations, but also for a bacterial cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778

  6. Paediatric bed fall computer simulation model development and validation.

    PubMed

    Thompson, Angela K; Bertocci, Gina E

    2013-01-01

    Falls from beds and other household furniture are common scenarios stated to conceal child abuse. Knowledge of the biomechanics associated with short-distance falls may aid clinicians in distinguishing between abusive and accidental injuries. Computer simulation is a useful tool to investigate injury-producing events and to study the effect of altering event parameters on injury risk. In this study, a paediatric bed fall computer simulation model was developed and validated. The simulation was created using Mathematical Dynamic Modeling® software with a child restraint air bag interaction (CRABI) 12-month-old anthropomorphic test device (ATD) representing the fall victim. The model was validated using data from physical fall experiments of the same scenario with an instrumented CRABI ATD. Validation was conducted using both observational and statistical comparisons. Future parametric sensitivity studies using this model will lead to an improved understanding of relationships between child (fall victim) parameters, fall environment parameters and injury potential.

  7. Dynamic Model Validation with Governor Deadband on the Eastern Interconnection

    SciTech Connect

    Kou, Gefei; Hadley, Stanton W; Liu, Yilu

    2014-04-01

    This report documents the efforts to perform dynamic model validation on the Eastern Interconnection (EI) by modeling governor deadband. An on-peak EI dynamic model is modified to represent governor deadband characteristics. Simulation results are compared with synchrophasor measurements collected by the Frequency Monitoring Network (FNET/GridEye). The comparison shows that by modeling governor deadband the simulated frequency response can closely align with the actual system response.
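
    A governor deadband is a nonlinearity that suppresses the turbine governor's response to small frequency deviations; a minimal sketch of such a characteristic (the parameter values are illustrative and do not reproduce the report's modeling details):

        def governor_response(freq_dev_hz, deadband_hz=0.036, gain=20.0):
            """Governor output for a frequency deviation with a +/- deadband.

            Deviations inside the band produce no response; outside it, the
            response is proportional to the excursion beyond the band edge.
            """
            if abs(freq_dev_hz) <= deadband_hz:
                return 0.0
            excursion = abs(freq_dev_hz) - deadband_hz
            sign = 1.0 if freq_dev_hz > 0 else -1.0
            return -sign * gain * excursion  # acts to oppose the deviation

        for dev in (0.01, 0.036, 0.05, -0.08):
            print(dev, governor_response(dev))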

  8. Testing the Testing: Validity of a State Growth Model

    ERIC Educational Resources Information Center

    Brown, Kim Trask

    2008-01-01

    Possible threats to the validity of North Carolina's accountability model used to predict academic growth were investigated in two ways: the state's regression equations were replicated but updated to utilize current testing data and not that from years past as in the state's current model; and the updated equations were expanded to include…

  9. Validating Physics-based Space Weather Models for Operational Use

    NASA Astrophysics Data System (ADS)

    Gombosi, Tamas; Singer, Howard; Millward, George; Toth, Gabor; Welling, Daniel

    2016-07-01

    The Geospace components of the Space Weather Modeling Framework developed at the University of Michigan are presently being transitioned to operational use by the NOAA Space Weather Prediction Center. This talk will discuss the various ways the model is validated and skill scores are calculated.

  10. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
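
    The estimation strategy described, choosing the parameter realization for which the smallest requirement-compliance margin is as large as possible, is a max-min optimization; a toy sketch (the model and the two requirements below are invented for illustration):

        import numpy as np
        from scipy.optimize import minimize

        # toy empirical model: y(t) = p0 * exp(-p1 * t); "data" from known parameters
        t = np.linspace(0.0, 2.0, 21)
        y_data = 2.0 * np.exp(-1.3 * t)

        def margins(p):
            """Compliance margins (positive = requirement satisfied)."""
            y = p[0] * np.exp(-p[1] * t)
            m1 = 0.05 - np.max(np.abs(y - y_data))  # fit error below 0.05
            m2 = 0.5 - abs(p[0] - 2.0)              # initial value within 2.0 +/- 0.5
            return np.array([m1, m2])

        # maximizing the smallest margin = minimizing its negative
        res = minimize(lambda p: -margins(p).min(), x0=[1.0, 1.0], method="Nelder-Mead")
        print(res.x, margins(res.x))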

  11. Institutional Effectiveness: A Model for Planning, Assessment & Validation.

    ERIC Educational Resources Information Center

    Truckee Meadows Community Coll., Sparks, NV.

    The report presents Truckee Meadows Community College's (Nevada) model for assessing institutional effectiveness and validating the College's mission and vision, and the strategic plan for carrying out the institutional effectiveness model. It also outlines strategic goals for the years 1999-2001. From the system-wide directive that education…

  12. Validation of 1-D transport and sawtooth models for ITER

    SciTech Connect

    Connor, J.W.; Turner, M.F.; Attenberger, S.E.; Houlberg, W.A.

    1996-12-31

    In this paper the authors describe progress on validating a number of local transport models by comparing their predictions with relevant experimental data from a range of tokamaks in the ITER profile database. This database, the testing procedure and results are discussed. In addition a model for sawtooth oscillations is used to investigate their effect in an ITER plasma with alpha-particles.

  13. Validating a Technology Enhanced Student-Centered Learning Model

    ERIC Educational Resources Information Center

    Kang, Myunghee; Hahn, Jungsun; Chung, Warren

    2015-01-01

    The Technology Enhanced Student Centered Learning (TESCL) Model in this study presents the core factors that ensure the quality of learning in a technology-supported environment. Although the model was conceptually constructed using a student-centered learning framework and drawing upon previous studies, it should be validated through real-world…

  14. Model validation for karst flow using sandbox experiments

    NASA Astrophysics Data System (ADS)

    Ye, M.; Pacheco Castro, R. B.; Tao, X.; Zhao, J.

    2015-12-01

    The study of flow in karst is complex due to the high heterogeneity of the porous media. Several approaches have been proposed in the literature to overcome the natural complexity of karst. Some of these methods are the single continuum, the double continuum and the discrete network of conduits coupled with the single continuum. Several mathematical and computing models are available in the literature for each approach. In this study one computer model has been selected for each category to validate its usefulness for modelling flow in karst using a sandbox experiment. The models chosen are: Modflow 2005, Modflow CFPV1 and Modflow CFPV2. A sandbox experiment was implemented in such a way that all the parameters required for each model can be measured. The sandbox experiment was repeated several times under different conditions. The model validation will be carried out by comparing the results of the model simulation and the real data. This model validation will allow us to compare the accuracy of each model and its applicability in karst. We will also be able to evaluate whether the results of the complex models improve much on those of the simple models, especially because some models require complex parameters that are difficult to measure in the real world.

  15. Predicting the ungauged basin: Model validation and realism assessment

    NASA Astrophysics Data System (ADS)

    van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert

    2015-10-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this paper we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. This paper does not present a generic approach that can be transferred to other ungauged catchments, but it aims to show how clever model design and alternative data acquisition can result in a valuable hydrological model for an ungauged catchment.

  16. Rigorous valid ranges for optimally reduced kinetic models

    SciTech Connect

    Oluwole, Oluwayemisi O.; Bhattacharjee, Binita; Tolsma, John E.; Barton, Paul I.; Green, William H.

    2006-07-15

    Reduced chemical kinetic models are often used in place of a detailed mechanism because of the computational expense of solving the complete set of equations describing the reacting system. Mathematical methods for model reduction are usually associated with a nominal set of reaction conditions for which the model is reduced. The important effects of variability in these nominal conditions are often ignored because there is no convenient way to deal with them. In this work, we introduce a method to identify rigorous valid ranges for reduced models; i.e., the reduced models are guaranteed to replicate the full model to within an error tolerance under all conditions in the identified valid range. Previous methods have estimated valid ranges using a limited set of variables (usually temperature and a few species compositions) and cannot guarantee that the reduced model is accurate at all points in the estimated range. The new method is demonstrated by identifying valid ranges for models reduced from the GRI-Mech 3.0 mechanism with 53 species and 325 reactions, and a truncated propane mechanism with 94 species and 505 reactions based on the comprehensive mechanism of Marinov et al. A library of reduced models is also generated for several prespecified ranges composing a desired state space. The use of these reduced models with error control in reacting flow simulations is demonstrated through an Adaptive Chemistry example. By using the reduced models in the simulation only when they are valid, the Adaptive Chemistry solution matches the solution obtained using the detailed mechanism.

  17. A model for the separation of cloud and aerosol in SAGE II occultation data

    NASA Technical Reports Server (NTRS)

    Kent, G. S.; Winker, D. M.; Osborn, M. T.; Skeens, K. M.

    1993-01-01

    The Stratospheric Aerosol and Gas Experiment (SAGE) II satellite experiment measures the extinction due to aerosols and thin cloud, at wavelengths of 0.525 and 1.02 micrometers, down to an altitude of 6 km. The wavelength dependence of the extinction due to aerosols differs from that of the extinction due to cloud and is used as the basis of a model for separating these two components. The model is presented and its validation using airborne lidar data, obtained coincident with SAGE II observations, is described. This comparison shows that smaller SAGE II cloud extinction values correspond to the presence of subvisible cirrus cloud in the lidar record. Examples of aerosol and cloud data products obtained using this model to interpret SAGE II upper tropospheric and lower stratospheric data are also shown.
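
    The separation rests on aerosol extinction falling off between 0.525 and 1.02 micrometers while cloud extinction is nearly wavelength-neutral; a schematic sketch of a ratio-based classification (the threshold and data are illustrative, not the actual SAGE II algorithm values):

        import numpy as np

        def classify_extinction(ext_525, ext_1020, ratio_threshold=2.0):
            """Label each sample 'aerosol' or 'cloud' from the two-wavelength ratio.

            Small aerosol particles extinguish much more strongly at 0.525 um
            than at 1.02 um; cloud droplets are large enough to be nearly grey.
            """
            ratio = np.asarray(ext_525, float) / np.asarray(ext_1020, float)
            return np.where(ratio > ratio_threshold, "aerosol", "cloud")

        # illustrative extinction values (1/km) at two altitudes
        print(classify_extinction([3.0e-3, 1.1e-3], [0.8e-3, 1.0e-3]))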

  18. On the development and validation of QSAR models.

    PubMed

    Gramatica, Paola

    2013-01-01

    The fundamental and more critical steps that are necessary for the development and validation of QSAR models are presented in this chapter as best practices in the field. These procedures are discussed in the context of predictive QSAR modelling that is focused on achieving models of the highest statistical quality and with external predictive power. The most important and most used statistical parameters needed to verify the real performances of QSAR models (of both linear regression and classification) are presented. Special emphasis is placed on the validation of models, both internally and externally, as well as on the need to define model applicability domains, which should be done when models are employed for the prediction of new external compounds.
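
    Among the external-validation statistics referred to here, a widely used one is the external predictive squared correlation; a minimal sketch under one common definition (training-set mean in the denominator; the numbers are made up):

        import numpy as np

        def q2_external(y_ext, y_pred, y_train_mean):
            """External predictive squared correlation (one common Q2_ext form)."""
            y_ext = np.asarray(y_ext, dtype=float)
            y_pred = np.asarray(y_pred, dtype=float)
            press = np.sum((y_ext - y_pred) ** 2)     # prediction error sum of squares
            ss = np.sum((y_ext - y_train_mean) ** 2)  # deviation from training mean
            return 1.0 - press / ss

        # toy external set of measured vs. predicted activities
        print(q2_external([5.1, 6.0, 4.2], [5.0, 5.8, 4.5], y_train_mean=5.2))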

  19. Modeling Topaz-II system performance

    SciTech Connect

    Lee, H.H.; Klein, A.C. )

    1993-01-01

    The US acquisition of the Topaz-II in-core thermionic space reactor test system from Russia provides a good opportunity to perform a comparison of the Russian reported data and the results from computer codes such as MCNP (Ref. 3) and TFEHX (Ref. 4). The comparison study includes both neutronic and thermionic performance analyses. The Topaz-II thermionic reactor is modeled with MCNP using actual Russian dimensions and parameters. The computation of the neutronic performance considers several important aspects such as the fuel enrichment and location of the thermionic fuel elements (TFEs) in the reactor core. The neutronic analysis included the calculation of both radial and axial power distributions, which are then used in the TFEHX code for electrical performance. The reactor modeled consists of 37 single-cell TFEs distributed in a 13-cm-radius zirconium hydride block surrounded by 8 cm of beryllium metal reflector. The TFEs use 90% enriched 235U and molybdenum coated with a thin layer of 184W for the emitter surface. Electrons emitted are captured by a collector surface with a gap filled with cesium vapor between the collector and emitter surfaces. The collector surface is electrically insulated with alumina. Liquid NaK provides the cooling system for the TFEs. The axial thermal power distribution is obtained by dividing the TFE into 40 axial nodes. Comparison of the true axial power distribution with that produced by electrical heaters was also performed.

  20. Analysis of the absorptive behavior of photopolymer materials. Part II. Experimental validation

    NASA Astrophysics Data System (ADS)

    Li, Haoyu; Qi, Yue; Tolstik, Elen; Guo, Jinxin; Sheridan, John T.

    2015-01-01

    In the first part of this paper, a model describing photopolymer materials, which incorporates both the physical electromagnetic and photochemical effects taking place, was developed. This model is now validated by applying it to fit experimental data for two different types of photopolymer materials. The first photopolymer material, acrylamide/polyvinyl alcohol, is studied when four photosensitizers are used, i.e. Erythrosine B, Eosin Y, Phloxine B and Rose Bengal. The second type of photopolymer material involves phenanthrenequinone in a polymethylmethacrylate matrix. Using our model, the values of the physical parameters are extracted by numerically fitting experimentally obtained normalized transmittance growth curves. Experimental data sets for different exposure intensities, dye concentrations, and exposure geometries are studied. The advantages of our approach are demonstrated and it is shown that the parameters proposed by us to quantify the absorptive behavior in our model are both physical and can be estimated.

  1. A prediction model for ocular damage - Experimental validation.

    PubMed

    Heussner, Nico; Vagos, Márcia; Spitzer, Martin S; Stork, Wilhelm

    2015-08-01

    With the increasing number of laser applications in medicine and technology, accidental as well as intentional exposure of the human eye to laser sources has become a major concern. Therefore, a prediction model for ocular damage (PMOD) is presented within this work and validated for long-term exposure. This model is a combination of a raytracing model with a thermodynamical model of the human eye and an application which determines the thermal damage by the implementation of the Arrhenius integral. The model is based on our earlier work and is here validated against temperature measurements taken with porcine eye samples. For this validation, three different powers were used: 50 mW, 100 mW and 200 mW with a spot size of 1.9 mm. Also, the measurements were taken with two different sensing systems, an infrared camera and a fibre optic probe placed within the tissue. The temperatures were measured up to 60 s and then compared against simulations. The measured temperatures were found to be in good agreement with the values predicted by the PMOD model. To the best of our knowledge, this is the first model which is validated for both short-term and long-term irradiations in terms of temperature and thus demonstrates that temperatures can be accurately predicted within the thermal damage regime. PMID:26267496
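
    The Arrhenius integral mentioned here accumulates thermal damage as Omega = integral of A * exp(-Ea / (R * T(t))) dt, with Omega >= 1 commonly taken as the damage threshold; a minimal numerical sketch (the rate parameters are illustrative literature-style values, not those of the PMOD):

        import numpy as np

        R_GAS = 8.314  # J/(mol K)
        A = 3.1e99     # 1/s, frequency factor (illustrative)
        E_A = 6.28e5   # J/mol, activation energy (illustrative)

        def arrhenius_omega(temps_kelvin, dt):
            """Accumulated damage integral for a sampled temperature history."""
            temps = np.asarray(temps_kelvin, dtype=float)
            rates = A * np.exp(-E_A / (R_GAS * temps))
            return np.sum(rates) * dt  # rectangle-rule time integration

        # 60 s exposure holding the tissue near 325 K (about 52 degrees C)
        t_history = np.full(600, 325.0)
        print(arrhenius_omega(t_history, dt=0.1))  # Omega >= 1 indicates damage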

  2. Pharmacophore modeling studies of type I and type II kinase inhibitors of Tie2.

    PubMed

    Xie, Qing-Qing; Xie, Huan-Zhang; Ren, Ji-Xia; Li, Lin-Li; Yang, Sheng-Yong

    2009-02-01

    In this study, chemical feature based pharmacophore models of type I and type II kinase inhibitors of Tie2 have been developed with the aid of HipHop and HypoRefine modules within Catalyst program package. The best HipHop pharmacophore model Hypo1_I for type I kinase inhibitors contains one hydrogen-bond acceptor, one hydrogen-bond donor, one general hydrophobic, one hydrophobic aromatic, and one ring aromatic feature. And the best HypoRefine model Hypo1_II for type II kinase inhibitors, which was characterized by the best correlation coefficient (0.976032) and the lowest RMSD (0.74204), consists of two hydrogen-bond donors, one hydrophobic aromatic, and two general hydrophobic features, as well as two excluded volumes. These pharmacophore models have been validated by using either or both test set and cross validation methods, which shows that both the Hypo1_I and Hypo1_II have a good predictive ability. The space arrangements of the pharmacophore features in Hypo1_II are consistent with the locations of the three portions making up a typical type II kinase inhibitor, namely, the portion occupying the ATP binding region (ATP-binding-region portion, AP), that occupying the hydrophobic region (hydrophobic-region portion, HP), and that linking AP and HP (bridge portion, BP). Our study also reveals that the ATP-binding-region portion of the type II kinase inhibitors plays an important role to the bioactivity of the type II kinase inhibitors. Structural modifications on this portion should be helpful to further improve the inhibitory potency of type II kinase inhibitors. PMID:19138543

  3. The hypothetical world of CoMFA and model validation

    SciTech Connect

    Oprea, T.I.

    1996-12-31

    CoMFA is a technique used to establish the three-dimensional similarity of molecular structures, in relationship to a target property. Because the risk of chance correlation is high, validation is required for all CoMFA models. The following validation steps should be performed: the choice of alignment rules (superimposition and conformer criteria) has to use experimental data when available, or different (alternate) hypotheses; statistical methods (e.g., cross-validation with randomized groups), have to emphasize simplicity, robustness, predictivity and explanatory power. When several CoMFA-QSAR models on similar targets and/or structures are available, qualitative lateral validation can be applied. This meta-analysis for CoMFA models offers a broader perspective on the similarities and differences between compared biological targets, with potential applications in rational drug design [e.g., selectivity, efficacy] and environmental toxicology. Examples that focus on validation of CoMFA models include the following steroid-binding proteins: aromatase, the estrogen and the androgen receptors, a monoclonal antibody against progesterone and two steroid binding globulins.

  4. The Validation of Climate Models: The Development of Essential Practice

    NASA Astrophysics Data System (ADS)

    Rood, R. B.

    2011-12-01

    It is possible from both a scientific and philosophical perspective to state that climate models cannot be validated. However, with the realization that the scientific investigation of climate change is as much a subject of politics as of science, maintaining this formal notion of "validation" has significant consequences. For example, it relegates the bulk of work of many climate scientists to an exercise of model evaluation that can be construed as ill-posed. Even within the science community this motivates criticism of climate modeling as an exercise of weak scientific practice. Stepping outside of the science community, statements that validation is impossible are used in political arguments to discredit the scientific investigation of climate, to maintain doubt about projections of climate change, and hence, to prohibit the development of public policy to regulate the emissions of greenhouse gases. With the acceptance of the impossibility of validation, scientists often state that the credibility of models can be established through an evaluation process. A robust evaluation process leads to the quantitative description of the modeling system against a standard set of measures. If this process is standardized as institutional practice, then this provides a measure of model performance from one modeling release to the next. It is argued here that such a robust and standardized evaluation of climate models can be structured and quantified as "validation." Arguments about the nuanced meaning of validation and evaluation are a subject about which the climate modeling community needs to develop a standard. It does injustice to a body of science-based knowledge to maintain that validation is "impossible." Rather than following such a premise, which immediately devalues the knowledge base, it is more useful to develop a systematic, standardized approach to robust, appropriate validation. This stands to represent the complexity of the Earth's climate and its

  5. Validation of the Health-Promoting Lifestyle Profile II for Hispanic male truck drivers in the Southwest.

    PubMed

    Mullins, Iris L; O'Day, Trish; Kan, Tsz Yin

    2013-08-01

    The aims of the study were to validate the English and Spanish Versions of the Health-Promoting Lifestyle Profile II (HPLP II) with Hispanic male truck drivers and to determine if there were any differences in drivers' responses based on driving responsibility. The methods included a descriptive correlation design, the HPLP II (English and Spanish versions), and a demographic questionnaire. Fifty-two Hispanic drivers participated in the study. There were no significant differences in long haul and short haul drivers' responses to the HPLP II. Cronbach's alpha for the Spanish version was .97 and the subscales alphas ranged from .74 to .94. The English version alpha was .92 and the subscales ranged from .68 to .84. Findings suggest the subscales of Health Responsibility, Physical Activities, Nutrition, and Spirituality Growth on the HPLP II Spanish and English versions may not adequately assess health-promoting behaviors and cultural influences for the Hispanic male population in the southwestern border region.
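
    Cronbach's alpha, the reliability statistic reported for the scale and its subscales, is computed from the item variances and the variance of the total score; a minimal sketch with made-up responses:

        import numpy as np

        def cronbach_alpha(items):
            """items: 2-D array, rows = respondents, columns = scale items."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
            total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
            return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

        # toy responses: five respondents by four Likert items
        data = [[4, 4, 3, 4],
                [2, 3, 2, 2],
                [3, 3, 3, 4],
                [1, 2, 1, 1],
                [4, 3, 4, 4]]
        print(round(cronbach_alpha(data), 2))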

  6. Comparison with CLPX II airborne data using DMRT model

    USGS Publications Warehouse

    Xu, X.; Liang, D.; Andreadis, K.M.; Tsang, L.; Josberger, E.G.

    2009-01-01

    In this paper, we considered a physically based model which uses numerical solutions of Maxwell's equations in three-dimensional simulations within dense media radiative transfer (DMRT) theory. The model is validated against two specific datasets from the second Cold Land Processes Experiment (CLPX II) in Alaska and Colorado. The data were all obtained from Ku-band (13.95 GHz) observations using the airborne imaging polarimetric scatterometer (POLSCAT). Snow is a densely packed medium. To take into account both collective and incoherent scattering, the analytical Quasi-Crystalline Approximation (QCA) and the Numerical Maxwell Equation Method in 3-D simulation (NMM3D) are used to calculate the extinction coefficient and phase matrix. The DMRT equations were solved by an iterative solution up to second order for the case of small optical thickness, and a full multiple-scattering solution, obtained by decomposing the diffuse intensities into Fourier series, was used when the optical thickness exceeded unity. It is shown that the model predictions agree with the field experiment for both co-polarization and cross-polarization. For the Alaska region, the input snow structure data were obtained from in situ ground observations, while for the Colorado region we used the VIC model to get the snow profile. © 2009 IEEE.

  7. The GEMMA Crustal Model: First Validation and Data Distribution

    NASA Astrophysics Data System (ADS)

    Sampietro, D.; Reguzzoni, M.; Negretti, M.

    2013-12-01

    In the GEMMA project, funded by ESA-STSE and ASI, a new crustal model constrained by GOCE gravity field observations has been developed. This model has a resolution of 0.5°×0.5° and is composed of seven layers describing the geometry and density of oceans, ice sheets, upper, medium and lower sediments, crystalline crust and upper mantle. In the present work the GEMMA model is validated against other global and regional models, showing good consistency where the validation data are reliable. In addition, the development of a WPS (Web Processing Service) for the distribution of the GEMMA model is presented. The service gives the possibility to download, interpolate and display the whole crustal model, providing for each layer the depth of its upper and lower boundaries, its density, and its gravitational effect in terms of the second radial derivative of the gravitational potential at GOCE altitude.

  8. Sub-nanometer Level Model Validation of the SIM Interferometer

    NASA Technical Reports Server (NTRS)

    Korechoff, Robert P.; Hoppe, Daniel; Wang, Xu

    2004-01-01

    The Space Interferometer Mission (SIM) flight instrument will not undergo a full performance, end-to-end system test on the ground due to a number of constraints. Thus, analysis and physics-based models will play a significant role in providing confidence that SIM will meet its science goals on orbit. The various models themselves are validated against the experimental results obtained from the MicroArcsecond Metrology (MAM) testbed and the Diffraction testbed (DTB). The metric for validation is provided by the SIM astrometric error budget.

  9. Low-order dynamic modeling of the Experimental Breeder Reactor II

    SciTech Connect

    Berkan, R.C. . Dept. of Nuclear Engineering); Upadhyaya, B.R.; Kisner, R.A. )

    1990-07-01

    This report describes the development of a low-order, linear model of the Experimental Breeder Reactor II (EBR-II), including the primary system, intermediate heat exchanger, and steam generator subsystems. The linear model is developed to represent full-power steady state dynamics for low-level perturbations. Transient simulations are performed using model building and simulation capabilities of the computer software MATRIXx. The inherently safe characteristics of the EBR-II are verified through the simulation studies. The results presented in this report also indicate an agreement between the linear model and the actual dynamics of the plant for several transients. Such models play a major role in the learning and in the improvement of nuclear reactor dynamics for control and signal validation studies. This research and development is sponsored by the Advanced Controls Program in the Instrumentation and Controls Division of the Oak Ridge National Laboratory. 17 refs., 67 figs., 15 tabs.

  10. Human surrogate models of neuropathic pain: validity and limitations.

    PubMed

    Binder, Andreas

    2016-02-01

    Human surrogate models of neuropathic pain in healthy subjects are used to study symptoms, signs, and the hypothesized underlying mechanisms. Although different models are available and different spontaneous and evoked symptoms and signs are inducible, two key questions need to be answered: are human surrogate models conceptually valid, ie, do they share the sensory phenotype of neuropathic pain states, and are they sufficiently reliable to allow consistent translational research?

  11. Open-source MFIX-DEM software for gas-solids flows: Part II - Validation studies

    SciTech Connect

    Li, Tingwen; Garg, Rahul; Galvin, Janine; Pannala, Sreekanth

    2012-01-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas-solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented, and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas-solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  12. Validity of NBME Parts I and II for the Selection of Residents: The Case of Orthopaedic Surgery.

    ERIC Educational Resources Information Center

    Case, Susan M.

    The predictive validity of scores on the National Board of Medical Examiners (NBME) Part I and Part II examinations for the selection of residents in orthopaedic surgery was investigated. Use of NBME scores has been criticized because of the time lag between taking Part I and entering residency and because Part I content is not directly linked to…

  13. Validating the Thinking Styles Inventory-Revised II among Chinese University Students with Hearing Impairment through Test Accommodations

    ERIC Educational Resources Information Center

    Cheng, Sanyin; Zhang, Li-Fang

    2014-01-01

    The present study pioneered in adopting test accommodations to validate the Thinking Styles Inventory-Revised II (TSI-R2; Sternberg, Wagner, & Zhang, 2007) among Chinese university students with hearing impairment. A series of three studies were conducted that drew their samples from the same two universities, in which accommodating test…

  14. Concurrent Validity of the WISC-IV and DAS-II in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Kuriakose, Sarah

    2014-01-01

    Cognitive assessments are used for a variety of research and clinical purposes in children with autism spectrum disorder (ASD). This study establishes concurrent validity of the Wechsler Intelligence Scales for Children-fourth edition (WISC-IV) and Differential Ability Scales-second edition (DAS-II) in a sample of children with ASD with a broad…

  15. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge with ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Therefore, finding appropriate validation techniques for ABM is necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  18. Data for model validation summary report. A summary of data for validation and benchmarking of recovery boiler models

    SciTech Connect

    Grace, T.; Lien, S.; Schmidl, W.; Salcudean, M.; Abdullah, Z.

    1997-07-01

    One of the tasks in the project was to obtain data from operating recovery boilers for the purpose of model validation. Another task was to obtain water model data and computer output from the University of British Columbia for purposes of benchmarking the UBC model against other codes. In the course of discussions on recovery boiler modeling during this project, it became evident that there would be value in having some common cases for carrying out benchmarking exercises with different recovery boiler models. In order to facilitate such a benchmarking exercise, the data obtained on this project for validation and benchmarking purposes have been brought together in a single, separate report. The intent is to make these data available to anyone who may want to use them for model validation. The report contains data from three different cases. Case 1 is an ABBCE recovery boiler which was used for model validation; the data are for a single set of operating conditions. Case 2 is a Babcock & Wilcox recovery boiler that was modified by Tampella; in this data set, several different operating conditions were employed. The third case is water flow data supplied by UBC, along with computational output using the UBC code, for benchmarking purposes.

  19. Modeling and validation of microwave ablations with internal vaporization.

    PubMed

    Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L

    2015-02-01

    Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this paper, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10, and 20 mm away from the heating zone of the microwave antenna in a homogenized ex vivo bovine liver setup. Cross-sectional area of water vapor transport was validated through intraprocedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with a Jaccard index of 0.27, 0.49, 0.61, 0.67, and 0.69 at 1, 2, 3, 4, and 5 min, respectively. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481
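
    The Jaccard index used above is straightforward to compute for two segmented regions. The sketch below shows one way to do it with NumPy on binary masks; the circular "ablation zones" are synthetic stand-ins, not the paper's CT or simulation data.

        import numpy as np

        def jaccard_index(mask_a, mask_b):
            """Jaccard index |A intersect B| / |A union B| of two boolean masks."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            union = np.logical_or(a, b).sum()
            return float(np.logical_and(a, b).sum() / union) if union else 1.0

        # Hypothetical 2D regions: CT iso-density zone vs. simulated vapor zone.
        yy, xx = np.mgrid[-50:51, -50:51]           # say, 1 px = 1 mm
        ct_region = xx ** 2 + yy ** 2 <= 20 ** 2    # 20 mm radius
        sim_region = (xx - 3) ** 2 + yy ** 2 <= 22 ** 2
        print(f"Jaccard index = {jaccard_index(ct_region, sim_region):.2f}")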

  20. Using the split Hopkinson pressure bar to validate material models.

    PubMed

    Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian

    2014-08-28

    This paper gives a discussion of the use of the split-Hopkinson bar with particular reference to the requirements of materials modelling at QinetiQ. This is to deploy validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The split Hopkinson pressure bar (SHPB) is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock wave loading. This means understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals, comparison with an output stress–strain curve is not sufficient, as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from deployed instrumentation, including the raw strain gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer-Chree oscillations and their effect on the specimen and recognize that this is itself also a valuable validation test of the material model. PMID:25071238
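
    For reference, the classical one-wave SHPB analysis that the authors argue is insufficient for non-metals reduces to two formulas: specimen strain rate from the reflected pulse and specimen stress from the transmitted pulse. A minimal sketch, with sign conventions and all numbers purely illustrative:

        import numpy as np

        def classical_shpb(eps_r, eps_t, dt, c0, L_s, E_bar, A_bar, A_s):
            """Classical (one-wave) SHPB analysis from bar strain-gauge signals.

            eps_r, eps_t : reflected and transmitted strain histories
            dt           : sample interval [s];  c0 : bar wave speed [m/s]
            L_s          : specimen gauge length [m];  E_bar : bar modulus [Pa]
            A_bar, A_s   : bar and specimen cross-sectional areas [m^2]

            Assumes stress equilibrium in the specimen -- exactly the
            assumption the paper warns may be violated for non-metals.
            """
            strain_rate = -2.0 * c0 * np.asarray(eps_r) / L_s
            strain = np.cumsum(strain_rate) * dt          # rectangle-rule integral
            stress = E_bar * (A_bar / A_s) * np.asarray(eps_t)
            return strain, stress, strain_rate

        # Tiny synthetic example: constant reflected/transmitted pulses.
        t = np.linspace(0.0, 100e-6, 1001)
        dt = t[1] - t[0]
        eps_r = np.full_like(t, -4e-4)   # compression taken negative here
        eps_t = np.full_like(t, 8e-4)
        strain, stress, rate = classical_shpb(eps_r, eps_t, dt, c0=5000.0,
                                              L_s=0.005, E_bar=200e9,
                                              A_bar=2.0e-4, A_s=0.8e-4)
        print(f"final strain {strain[-1]:.3%}, stress {stress[-1] / 1e6:.0f} MPa")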

  2. Modeling and Validation of Microwave Ablations with Internal Vaporization

    PubMed Central

    Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L.

    2014-01-01

    Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this work, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10 and 20 mm away from the heating zone of the microwave antenna in a homogenized ex vivo bovine liver setup. Cross-sectional area of water vapor transport was validated through intra-procedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard Index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with a Jaccard Index of 0.27, 0.49, 0.61, 0.67 and 0.69 at 1, 2, 3, 4, and 5 minutes. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481

  3. Using the split Hopkinson pressure bar to validate material models

    PubMed Central

    Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian

    2014-01-01

    This paper gives a discussion of the use of the split-Hopkinson bar with particular reference to the requirements of materials modelling at QinetiQ. This is to deploy validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The split Hopkinson pressure bar (SHPB) is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock wave loading. This means understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals, comparison with an output stress–strain curve is not sufficient, as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from deployed instrumentation, including the raw strain gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer–Chree oscillations and their effect on the specimen and recognize that this is itself also a valuable validation test of the material model. PMID:25071238

  4. Experimental Validation of a Thermoelastic Model for SMA Hybrid Composites

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    2001-01-01

    This study presents results from experimental validation of a recently developed model for predicting the thermomechanical behavior of shape memory alloy hybrid composite (SMAHC) structures, composite structures with an embedded SMA constituent. The model captures the nonlinearity of the material system with temperature and is capable of modeling constrained, restrained, or free recovery behavior from experimental measurement of fundamental engineering properties. A brief description of the model and analysis procedures is given, followed by an overview of a parallel effort to fabricate and characterize the material system of SMAHC specimens. Static and dynamic experimental configurations for the SMAHC specimens are described and experimental results for thermal post-buckling and random response are presented. Excellent agreement is achieved between the measured and predicted results, fully validating the theoretical model for constrained recovery behavior of SMAHC structures.

  5. Progress in Geant4 Electromagnetic Physics Modelling and Validation

    NASA Astrophysics Data System (ADS)

    Apostolakis, J.; Asai, M.; Bagulya, A.; Brown, J. M. C.; Burkhardt, H.; Chikuma, N.; Cortes-Giraldo, M. A.; Elles, S.; Grichine, V.; Guatelli, S.; Incerti, S.; Ivanchenko, V. N.; Jacquemier, J.; Kadri, O.; Maire, M.; Pandola, L.; Sawkey, D.; Toshito, T.; Urban, L.; Yamashita, T.

    2015-12-01

    In this work we report on recent improvements in the electromagnetic (EM) physics models of Geant4 and new validations of EM physics. Improvements have been made in models of the photoelectric effect, Compton scattering, gamma conversion to electron and muon pairs, fluctuations of energy loss, multiple scattering, synchrotron radiation, and high energy positron annihilation. The results of these developments are included in the new Geant4 version 10.1 and in patches to previous versions 9.6 and 10.0 that are planned to be used for production for run-2 at LHC. The Geant4 validation suite for EM physics has been extended and new validation results are shown in this work. In particular, the effect of gamma-nuclear interactions on EM shower shape at LHC energies is discussed.

  6. Electro-thermal modelling of a supercapacitor and experimental validation

    NASA Astrophysics Data System (ADS)

    Berrueta, Alberto; San Martín, Idoia; Hernández, Andoni; Ursúa, Alfredo; Sanchis, Pablo

    2014-08-01

    This paper reports on the electro-thermal modelling of a Maxwell supercapacitor (SC), model BMOD0083, with a rated capacitance of 83 F and a rated voltage of 48 V. One electrical equivalent circuit was used to model the electrical behaviour, whilst another served to simulate the thermal behaviour. The models were designed to predict the SC operating voltage and temperature, taking the electric current and ambient temperature as input variables. A five-stage iterative method, applied to three experiments, served to obtain the parameter values for each model. The models were implemented in MATLAB-Simulink®, where they interacted, reciprocally exchanging information. These models were then validated through a number of tests, subjecting the SC to different current and frequency profiles. These tests included the validation of a bank of supercapacitors integrated into an electric microgrid, in a real operating environment. Satisfactory results were obtained from the electric and thermal models, with RMSE values of less than 0.65 V in all validations.
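
    As a rough idea of what such a coupled electro-thermal model pair looks like, the sketch below couples a first-order series-resistance electrical model with a single-node thermal model. The structure is generic and every parameter value is invented for the example; the paper's actual equivalent circuits and five-stage parameter identification are not reproduced here.

        import numpy as np

        def simulate_sc(current, dt, C=83.0, R=0.02, R_th=3.0, C_th=600.0,
                        v0=24.0, T_amb=25.0):
            """First-order electro-thermal sketch of a supercapacitor.

            Electrical: series resistance R in front of an ideal capacitor C.
            Thermal: ohmic losses feed a single R_th/C_th node.
            Parameter values are illustrative, not Maxwell datasheet values.
            """
            v_c, T = v0, T_amb
            volts, temps = [], []
            for i in current:
                v_c += i * dt / C                        # capacitor charge balance
                T += dt * (i * i * R - (T - T_amb) / R_th) / C_th
                volts.append(v_c + i * R)                # terminal voltage
                temps.append(T)
            return np.array(volts), np.array(temps)

        # Example: 10 A charge pulse for 60 s, sampled at 0.1 s.
        dt = 0.1
        current = np.full(600, 10.0)
        v, T = simulate_sc(current, dt)
        print(f"final voltage {v[-1]:.2f} V, final temperature {T[-1]:.2f} °C")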

  7. Development and Validation of a Mass Casualty Conceptual Model

    PubMed Central

    Culley, Joan M.; Effken, Judith A.

    2012-01-01

    Purpose To develop and validate a conceptual model that provides a framework for the development and evaluation of information systems for mass casualty events. Design The model was designed based on extant literature and existing theoretical models. A purposeful sample of 18 experts validated the model. Open-ended questions, as well as a 7-point Likert scale, were used to measure expert consensus on the importance of each construct and its relationship in the model and the usefulness of the model to future research. Methods Computer-mediated applications were used to facilitate a modified Delphi technique through which a panel of experts provided validation for the conceptual model. Rounds of questions continued until consensus was reached, as measured by an interquartile range (no more than 1 scale point for each item); stability (change in the distribution of responses less than 15% between rounds); and percent agreement (70% or greater) for indicator questions. Findings Two rounds of the Delphi process were needed to satisfy the criteria for consensus or stability related to the constructs, relationships, and indicators in the model. The panel reached consensus or sufficient stability to retain all 10 constructs, 9 relationships, and 39 of 44 indicators. Experts viewed the model as useful (mean of 5.3 on a 7-point scale). Conclusions Validation of the model provides the first step in understanding the context in which mass casualty events take place and identifying variables that impact outcomes of care. Clinical Relevance This study provides a foundation for understanding the complexity of mass casualty care, the roles that nurses play in mass casualty events, and factors that must be considered in designing and evaluating information-communication systems to support effective triage under these conditions. PMID:20487188
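
    The three consensus criteria quoted above lend themselves to direct computation. Below is one plausible operationalization for a single Likert item; how the original study measured "change in the distribution" and which ratings counted as "agreement" are assumptions here (total variation distance and top-two ratings, respectively).

        import numpy as np

        def consensus_reached(prev_round, this_round, agree_levels=(6, 7)):
            """Check the three consensus/stability criteria from the abstract.

            prev_round, this_round : 7-point Likert ratings, one per expert.
            Returns (iqr_ok, stable, agree_ok).
            """
            r = np.asarray(this_round)
            q1, q3 = np.percentile(r, [25, 75])
            iqr_ok = (q3 - q1) <= 1.0                    # IQR within 1 scale point

            # Stability: change in response distribution < 15% between rounds
            # (total variation distance is an assumed operationalization).
            bins = np.arange(0.5, 8.5)
            p_prev, _ = np.histogram(prev_round, bins=bins, density=True)
            p_this, _ = np.histogram(this_round, bins=bins, density=True)
            stable = 0.5 * np.abs(p_prev - p_this).sum() < 0.15

            agree_ok = np.isin(r, agree_levels).mean() >= 0.70   # >= 70% agreement
            return iqr_ok, stable, agree_ok

        prev = [5, 6, 6, 7, 6, 5, 7, 6, 6, 7, 6, 6, 5, 7, 6, 6, 7, 6]
        curr = [6, 6, 6, 7, 6, 6, 7, 6, 6, 7, 6, 6, 6, 7, 6, 6, 7, 6]
        print(consensus_reached(prev, curr))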

  8. Radiative transfer model validations during the First ISLSCP Field Experiment

    NASA Technical Reports Server (NTRS)

    Frouin, Robert; Breon, Francois-Marie; Gautier, Catherine

    1990-01-01

    Two simple radiative transfer models, the 5S model based on Tanre et al. (1985, 1986) and the wide-band model of Morcrette (1984), are validated by comparing their outputs with concomitant radiosonde, aerosol turbidity, and radiation measurements and sky photographs obtained during the First ISLSCP Field Experiment. The results showed that the 5S model overestimated the short-wave irradiance by 13.2 W/sq m, whereas the Morcrette model underestimated the long-wave irradiance by 7.4 W/sq m.

  9. Verification and Validation of Model-Based Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles; Koga, Dennis (Technical Monitor)

    2001-01-01

    This paper presents a three-year project (FY99 to FY01) on the verification and validation of model-based autonomous systems. The topics include: 1) Project Profile; 2) Model-Based Autonomy; 3) The Livingstone MIR; 4) MPL2SMV; 5) Livingstone to SMV Translation; 6) Symbolic Model Checking; 7) From Livingstone Models to SMV Models; 8) Application: In-Situ Propellant Production; 9) Closed-Loop Verification Principle; 10) Livingstone PathFinder (LPF); 11) Publications and Presentations; and 12) Future Directions. This paper is presented in viewgraph form.

  10. External model validation of binary clinical risk prediction models in cardiovascular and thoracic surgery.

    PubMed

    Hickey, Graeme L; Blackstone, Eugene H

    2016-08-01

    Clinical risk-prediction models serve an important role in healthcare. They are used for clinical decision-making and measuring the performance of healthcare providers. To establish confidence in a model, external model validation is imperative. When designing such an external model validation study, thought must be given to patient selection, risk factor and outcome definitions, missing data, and the transparent reporting of the analysis. In addition, there are a number of statistical methods available for external model validation. Execution of a rigorous external validation study rests in proper study design, application of suitable statistical methods, and transparent reporting.

  11. Climate Model Datasets on Earth System Grid II (ESG II)

    DOE Data Explorer

    Earth System Grid (ESG) is a project that combines the power and capacity of supercomputers, sophisticated analysis servers, and datasets on the scale of petabytes. The goal is to provide a seamless distributed environment that allows scientists in many locations to work with large-scale data, perform climate change modeling and simulation, and share results in innovative ways. Though ESG is more about the computing environment than the data, several catalogs of data are available at the web site and can be browsed or searched. Most of the datasets are restricted to registered users, but several are open to any access.

  12. Validation of a 3-D hemispheric nested air pollution model

    NASA Astrophysics Data System (ADS)

    Frohn, L. M.; Christensen, J. H.; Brandt, J.; Geels, C.; Hansen, K. M.

    2003-07-01

    Several air pollution transport models have been developed at the National Environmental Research Institute in Denmark over the last decade (DREAM, DEHM, ACDEP and DEOM). A new 3-D nested Eulerian transport-chemistry model, the REGIonal high resolutioN Air pollution model (REGINA), is based on modules and parameterisations from these models as well as on new methods. The model covers the majority of the Northern Hemisphere, with currently one nest implemented. The horizontal resolution in the mother domain is 150 km × 150 km, and the nesting factor is three. A chemical scheme (originally 51 species) has been extended with a detailed description of the ammonia chemistry and implemented in the model. The mesoscale numerical weather prediction model MM5v2 is used as the meteorological driver for the model. The concentrations of air pollutants, such as sulphur and nitrogen in various forms, have been calculated, applying zero nesting and one nest. The model setup is currently being validated by comparing calculated concentrations to measurements from approximately 100 stations included in the European Monitoring and Evaluation Programme (EMEP). The present paper describes the physical processes and parameterisations of the model together with the modifications of the chemical scheme. A validation of the model calculations by comparison to EMEP measurements for a summer and a winter month is shown and discussed. Furthermore, results from a sensitivity study of the model performance with respect to the resolution of the emission and meteorology input data are presented. Finally, the future prospects of the model are discussed. The overall validation shows that the model performs well with respect to correlation for both monthly and daily mean values.

  13. Validating Work Discrimination and Coping Strategy Models for Sexual Minorities

    ERIC Educational Resources Information Center

    Chung, Y. Barry; Williams, Wendi; Dispenza, Franco

    2009-01-01

    The purpose of this study was to validate and expand on Y. B. Chung's (2001) models of work discrimination and coping strategies among lesbian, gay, and bisexual persons. In semistructured individual interviews, 17 lesbians and gay men reported 35 discrimination incidents and their related coping strategies. Responses were coded based on Chung's…

  14. Validation of Geant4 hadronic physics models at intermediate energies

    NASA Astrophysics Data System (ADS)

    Banerjee, Sunanda; Geant4 Hadronic Group

    2010-04-01

    GEANT4 provides a number of physics models at intermediate energies (corresponding to incident momenta in the range 1-20 GeV/c). Recently, these models have been validated against existing data from a number of experiments: (1) inclusive proton and neutron production with a variety of beams (π-, π+, p) at different energies between 1 and 9 GeV/c on a number of nuclear targets (from beryllium to uranium); (2) inclusive pion/kaon/proton production from 14.6 GeV/c proton beams on nuclear targets (from beryllium to gold); (3) inclusive pion production from pion beams between 3-13 GeV/c on a number of nuclear targets (from beryllium to lead). The results of the simulation/data comparisons for different GEANT4 models are discussed in the context of validating the models and determining their usage in physics lists for high-energy applications. Due to the increasing number of validations becoming available, and the requirement that they be done at regular intervals corresponding to the GEANT4 release schedule, automated methods of validation are being developed.

  15. Hydrologic and water quality models: Use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper introduces a special collection of 22 research articles that present and discuss calibration and validation concepts in detail for hydrologic and water quality models by their developers and presents a broad framework for developing the American Society of Agricultural and Biological Engi...

  16. A Model for Investigating Predictive Validity at Highly Selective Institutions.

    ERIC Educational Resources Information Center

    Gross, Alan L.; And Others

    A statistical model for investigating predictive validity at highly selective institutions is described. When the selection ratio is small, one must typically deal with a data set containing relatively large amounts of missing data on both criterion and predictor variables. Standard statistical approaches are based on the strong assumption that…

  17. ID Model Construction and Validation: A Multiple Intelligences Case

    ERIC Educational Resources Information Center

    Tracey, Monica W.; Richey, Rita C.

    2007-01-01

    This is a report of a developmental research study that aimed to construct and validate an instructional design (ID) model that incorporates the theory and practice of multiple intelligences (MI). The study consisted of three phases. In phase one, the theoretical foundations of multiple Intelligences and ID were examined to guide the development…

  18. Bibliometric Modeling Processes and the Empirical Validity of Lotka's Law.

    ERIC Educational Resources Information Center

    Nicholls, Paul Travis

    1989-01-01

    Examines the elements involved in fitting a bibliometric model to empirical data, proposes a consistent methodology for applying Lotka's law, and presents the results of an empirical test of the methodology. The results are discussed in terms of the validity of Lotka's law and the suitability of the proposed methodology. (49 references) (CLB)

  19. MODELS FOR SUBMARINE OUTFALL - VALIDATION AND PREDICTION UNCERTAINTIES

    EPA Science Inventory

    This address reports on some efforts to verify and validate dilution models, including those found in Visual Plumes. This is done in the context of problem experience: a range of problems, including different pollutants such as bacteria; scales, including near-field and far-field...

  20. Linear Model to Assess the Scale's Validity of a Test

    ERIC Educational Resources Information Center

    Tristan, Agustin; Vidal, Rafael

    2007-01-01

    Wright and Stone had proposed three features to assess the quality of the distribution of the items difficulties in a test, on the so called "most probable response map": line, stack and gap. Once a line is accepted as a design model for a test, gaps and stacks are practically eliminated, producing an evidence of the "scale validity" of the test.…

  1. Solar swimming pool heating: Description of a validated model

    SciTech Connect

    Haaf, W.; Luboschik, U.; Tesche, B. )

    1994-07-01

    In the framework of a European Demonstration Programme, co-financed by the CEC and national bodies, a model was elaborated and validated for open-air swimming pools having a minimal surface of 100 m² and a minimal depth of 0.5 m. The model consists of two parts, the energy balance of the pool and the solar plant. The theoretical background of the energy balance of an open-air swimming pool was found to be poor. Special monitoring campaigns were used to validate the dynamic model using mathematical parameter identification methods. The final model was simplified in order to shorten calculation time and to improve user-friendliness by reducing the input values to the most important ones. The programme is commercially available; however, it requires the hourly meteorological data of a test reference year (TRY) as input. The users are mainly design engineers.

  2. Dynamic modelling and experimental validation of three wheeled tilting vehicles

    NASA Astrophysics Data System (ADS)

    Amati, Nicola; Festini, Andrea; Pelizza, Luigi; Tonoli, Andrea

    2011-06-01

    The present paper describes the study of the straight-running stability of a three-wheeled tilting vehicle for urban and sub-urban mobility. The analysis was carried out by developing a multibody model in the Matlab/Simulink SimMechanics environment. An Adams-Motorcycle model and an equivalent analytical model were developed for cross-validation and for highlighting the similarities with the lateral dynamics of motorcycles. Field tests were carried out to validate the model and identify some critical parameters, such as the damping of the steering system. The stability analysis demonstrates that the lateral dynamic motions are characterised by vibration modes that are similar to those of a motorcycle. Additionally, it shows that the wobble mode is significantly affected by the castor trail, whereas it is only slightly affected by the dynamics of the front suspension. For the present case study, the frame compliance also has no influence on the weave and wobble modes.

  3. Development, Selection, and Validation of Tumor Growth Models

    NASA Astrophysics Data System (ADS)

    Shahmoradi, Amir; Lima, Ernesto; Oden, J. Tinsley

    In recent years, a multitude of different mathematical approaches have been taken to develop multiscale models of solid tumor growth. Prime successful examples include the lattice-based, agent-based (off-lattice), and phase-field approaches, or hybrids of these models applied to multiple scales of tumor, from the subcellular to the tissue level. Of overriding importance is the predictive power of these models, particularly in the presence of uncertainties. This presentation describes our attempt at developing lattice-based, agent-based and phase-field models of tumor growth and assessing their predictive power through new adaptive algorithms for model selection and model validation embodied in the Occam Plausibility Algorithm (OPAL), which brings together model calibration, determination of the sensitivities of outputs to parameter variances, and calculation of model plausibilities for model selection.

  4. Finite Element Model Development and Validation for Aircraft Fuselage Structures

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.

    2000-01-01

    The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results. The increased frequency range results in a corresponding increase in the number of modes, modal density and spatial resolution requirements. In this study, conventional modal tests using accelerometers are complemented with Scanning Laser Doppler Velocimetry and Electro-Optic Holography measurements to further resolve the spatial response characteristics. Whenever possible, component and subassembly modal tests are used to validate the finite element models at lower levels of assembly. Normal mode predictions for different finite element representations of components and assemblies are compared with experimental results to assess the most accurate techniques for modeling aircraft fuselage type structures.

  5. Validation and upgrading of physically based mathematical models

    NASA Technical Reports Server (NTRS)

    Duval, Ronald

    1992-01-01

    The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.

  6. Validating the BHR RANS model for variable density turbulence

    SciTech Connect

    Israel, Daniel M; Gore, Robert A; Stalsberg - Zarling, Krista L

    2009-01-01

    The BHR RANS model is a turbulence model for multi-fluid flows in which density variation plays a strong role in the turbulence processes. In this paper, the usefulness of BHR is demonstrated over a wide range of flows that include the effects of shear, buoyancy, and shocks. The results are in good agreement with experimental and DNS data across the entire set of validation cases, with no need to retune model coefficients between cases. The model has potential application to a number of aerospace-related flow problems.

  7. Experimentally validated finite element model of electrocaloric multilayer ceramic structures

    SciTech Connect

    Smith, N. A. S.; Correia, T. M.; Rokosz, M. K.

    2014-07-28

    A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under realistic environmental and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from direct imaging of the MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.

  8. Propeller aircraft interior noise model utilization study and validation

    NASA Astrophysics Data System (ADS)

    Pope, L. D.

    1984-09-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  9. Propeller aircraft interior noise model utilization study and validation

    NASA Technical Reports Server (NTRS)

    Pope, L. D.

    1984-01-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  10. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and Space Station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  12. Attempted validation of ICRP 30 and ICRP 66 respiratory models.

    PubMed

    Harley, N H; Fisenne, I M; Robbins, E S

    2012-11-01

    The validation of human biological models for inhaled radionuclides is nearly impossible. The requirements for validation are (1) measurement of the relevant human tissue data and (2) valid exposure measurements over the interval known to apply to tissue uptake. Two lung models, ICRP 30(1) and ICRP 66(2), are widely used to estimate lung doses following acute occupational or environmental exposure. Both the ICRP 30 and ICRP 66 lung models are structured to estimate acute rather than chronic exposure. Two sets of human tissue measurements are available: (210)Po accumulated in tissue from inhaled cigarette smoke and dietary intake, and airborne global fallout (239,240)Pu accumulated in the lungs from inhalation. The human tissue measurements include pulmonary and bronchial tissue from smokers, ex-smokers and non-smokers analysed radiochemically for (210)Po, and pulmonary, bronchial and lymph node tissue analysed for (239,240)Pu, collected by the New York City Medical Examiner from 1972 to 1974. Both the ICRP 30 and 66 models were included in a programme to accommodate chronic uptake. Neither lung model accurately described the estimated tissue concentrations, but both were within a factor of 2 of the measurements. ICRP 66 was the exception: it consistently overestimated the bronchial concentrations, probably because of its assumption of an overly long 23-d clearance half-time in the bronchi and bronchioles. PMID:22923255
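
    One standard way to adapt an acute-intake lung model to chronic exposure, plausibly what "included in a programme to accommodate chronic uptake" amounts to, is to convolve the intake rate with the model's acute retention function. A sketch with a single-exponential retention: the (210)Po radioactive half-life is the physical value, but the biological clearance half-time and intake rate are illustrative assumptions, not ICRP parameters.

        import numpy as np

        dt = 1.0                                        # time step [days]
        t = np.arange(0, 1000) * dt
        # Effective clearance: radioactive decay (138.4 d half-life) plus an
        # assumed 50 d biological clearance half-time (illustrative only).
        lam = np.log(2) / 138.4 + np.log(2) / 50.0
        retention = np.exp(-lam * t)                    # acute-intake response
        intake = np.full_like(t, 0.1)                   # constant 0.1 Bq/day (made up)

        # Chronic burden(t) = sum over past intakes weighted by retention.
        burden = np.convolve(intake, retention)[: len(t)] * dt
        print(f"equilibrium burden = {burden[-1]:.2f} Bq "
              f"(continuous-time analytic {0.1 / lam:.2f} Bq)")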

  13. Validation of a finite element model of the human metacarpal.

    PubMed

    Barker, D S; Netherway, D J; Krishnan, J; Hearn, T C

    2005-03-01

    Implant loosening and mechanical failure of components are frequently reported following metacarpophalangeal (MCP) joint replacement. Studies of the mechanical environment of the MCP implant-bone construct are rare. The objective of this study was to evaluate the predictive ability of a finite element model of the intact second human metacarpal to provide a validated baseline for further mechanical studies. A right index human metacarpal was subjected to torsion and combined axial/bending loading using strain gauge (SG) and 3D finite element (FE) analysis. Four different representations of bone material properties were considered. Regression analyses were performed comparing maximum and minimum principal surface strains taken from the SG and FE models. Regression slopes close to unity and high correlation coefficients were found when the diaphyseal cortical shell was modelled as anisotropic and cancellous bone properties were derived from quantitative computed tomography. The inclusion of anisotropy for cortical bone was strongly influential in producing high model validity whereas variation in methods of assigning stiffness to cancellous bone had only a minor influence. The validated FE model provides a tool for future investigations of current and novel MCP joint prostheses.
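
    The slope-and-correlation check described above is a few lines of NumPy. The microstrain pairs below are invented to show the mechanics; a regression slope near unity and a high correlation coefficient are the validation criteria the abstract describes.

        import numpy as np

        def validation_regression(strain_sg, strain_fe):
            """Least-squares slope/intercept and correlation between measured
            (strain gauge) and predicted (finite element) principal strains."""
            x = np.asarray(strain_sg)
            y = np.asarray(strain_fe)
            slope, intercept = np.polyfit(x, y, 1)
            r = np.corrcoef(x, y)[0, 1]
            return slope, intercept, r

        # Hypothetical microstrain pairs from several gauge sites and load cases.
        sg = np.array([-820, -410, -150, 130, 390, 760, 1120])
        fe = np.array([-805, -430, -140, 150, 370, 790, 1095])
        slope, intercept, r = validation_regression(sg, fe)
        print(f"slope = {slope:.3f}, intercept = {intercept:.1f} µε, r = {r:.4f}")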

  14. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns.

    PubMed

    Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain

    2015-01-01

    Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model's predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917
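
    The core of a novelty-search loop is small, even though the paper's Pattern Space Exploration method wraps it in a full evolutionary algorithm. The toy sketch below samples a one-dimensional parameter, maps it to a scalar pattern descriptor, and archives only sufficiently novel patterns; the stand-in model, descriptor, and threshold are all placeholders rather than the paper's method.

        import random

        def novelty(candidate, archive, k=5):
            """Mean distance to the k nearest neighbours in the archive."""
            if not archive:
                return float("inf")
            dists = sorted(abs(candidate - a) for a in archive)
            return sum(dists[:k]) / min(k, len(dists))

        def pattern_space_search(simulate, n_iter=200, threshold=0.05, seed=0):
            """Toy novelty-search loop over a 1-D pattern descriptor.

            `simulate` maps a parameter value to a scalar pattern descriptor
            (e.g. some hierarchisation index of the model output).
            """
            rng = random.Random(seed)
            archive = []
            for _ in range(n_iter):
                theta = rng.uniform(-1.0, 1.0)        # sample a parameter
                pattern = simulate(theta)             # run the model
                if novelty(pattern, archive) > threshold:
                    archive.append(pattern)           # keep only novel patterns
            return archive

        archive = pattern_space_search(lambda th: th ** 3)  # stand-in model
        print(f"{len(archive)} distinct patterns found")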

  15. Potential of Ceilometer Networks for Validation of models

    NASA Astrophysics Data System (ADS)

    Wagner, Frank; Mattis, Ina; Flentje, Harald

    2016-04-01

    There exist various models that can treat aerosol particles. Owing to the limited availability of high-quality profiles of particle properties, most models are validated only against ground-based particle measurements and/or columnar particle amounts, e.g. the aerosol optical depth derived from satellites. Modern ceilometers are capable of providing aerosol vertical profiles, they are not too expensive, and hence several national weather services operate networks of ceilometers. The Deutscher Wetterdienst currently operates a ceilometer network of about 75 devices providing aerosol profiles. Within the next few years the number of instruments will double. Each station always has several neighboring stations within 100 km distance. Recently, automated routines for quality checks and calibration of the devices were developed and implemented. Such automated tools, together with the good spatial coverage, make the DWD ceilometer network an excellent tool for model validation with respect to aerosol particle properties. The Copernicus Atmosphere Monitoring Service provides operational forecasts of five aerosol species (sea salt, dust, sulphate, as well as organic and black carbon, which are summarized as biomass burning aerosol) and of the boundary layer height. These parameters can be compared with the outcome of ceilometer measurements, and consequently the model can be validated. Especially long-range transported aerosol particles above the boundary layer can be investigated. At the conference the network will be presented, the validation strategy for the CAMS models using ceilometer measurements will be explained, and results will be shown. An outlook to international measuring networks will be given.

  16. Predicting the ungauged basin: model validation and realism assessment

    NASA Astrophysics Data System (ADS)

    van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert

    2016-04-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) [1] led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this study [2] we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. We do not present a generic approach that can be transferred to other ungauged catchments, but we aim to show how clever model design and alternative data acquisition can result in a valuable hydrological model for ungauged catchments. [1] Sivapalan, M., Takeuchi, K., Franks, S., Gupta, V., Karambiri, H., Lakshmi, V., et al. (2003). IAHS decade on predictions in ungauged basins (PUB), 2003-2012: shaping an exciting future for the hydrological sciences. Hydrol. Sci. J. 48, 857-880. doi: 10.1623/hysj.48.6.857.51421 [2] van Emmerik, T., Mulder, G., Eilander, D., Piet, M. and Savenije, H. (2015). Predicting the ungauged basin: model validation and realism assessment

  17. A verification and validation process for model-driven engineering

    NASA Astrophysics Data System (ADS)

    Delmas, R.; Pires, A. F.; Polacsek, T.

    2013-12-01

    Model-Driven Engineering (MDE) practitioners already benefit from many well-established verification tools, for the Object Constraint Language (OCL), for instance. Recently, constraint satisfaction techniques have been brought to MDE and have shown promising results on model verification tasks. With all these tools, it becomes possible to provide users with formal support from early model design phases to model instantiation phases. In this paper, a selection of such tools and methods is presented, and an attempt is made to define a verification and validation process for model design and instance creation, centered on UML (Unified Modeling Language) class diagrams and declarative constraints, and involving the selected tools. The suggested process is illustrated with a simple example.

  18. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns

    PubMed Central

    Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain

    2015-01-01

    Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917

  19. KINEROS2-AGWA: Model Use, Calibration, and Validation

    NASA Technical Reports Server (NTRS)

    Goodrich, D C.; Burns, I. S.; Unkrich, C. L.; Semmens, D. J.; Guertin, D. P.; Hernandez, M.; Yatheendradas, S.; Kennedy, J. R.; Levick, L. R..

    2013-01-01

    KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.
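
    At the heart of K2's overland-flow component is the kinematic wave equation coupled to infiltration. The sketch below solves a single plane with an explicit upwind finite-difference scheme; the friction parameters, boundary treatment, and constant rainfall excess are illustrative and far simpler than KINEROS2's actual formulation.

        import numpy as np

        def kinematic_overland_flow(rain_excess, L=100.0, nx=50, t_end=1800.0,
                                    alpha=1.0, m=5.0 / 3.0):
            """Explicit upwind solution of dh/dt + d(alpha*h^m)/dx = r on one plane.

            alpha = sqrt(S0)/n for Manning friction; all values here are
            illustrative, not KINEROS2 defaults.
            """
            dx = L / nx
            h = np.zeros(nx + 1)                      # flow depth [m]
            t = 0.0
            while t < t_end:
                c = alpha * m * np.maximum(h, 1e-9) ** (m - 1.0)  # kinematic celerity
                dt = min(0.5 * dx / max(c.max(), 1e-6), 1.0, t_end - t)  # CFL step
                q = alpha * h ** m                    # discharge per unit width
                h[1:] += dt * (rain_excess - (q[1:] - q[:-1]) / dx)
                h[0] = 0.0                            # simple upstream boundary
                t += dt
            return h

        h = kinematic_overland_flow(rain_excess=1e-5)  # ~36 mm/h of rainfall excess
        print(f"outlet flow depth after 30 min = {h[-1] * 1000:.1f} mm")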

  20. Development of a Validated Model of Ground Coupling

    SciTech Connect

    Metz, P. D.

    1980-01-01

    A research program at Brookhaven National Laboratory (BNL) studies ground coupling, the use of the earth as a heat source/sink or storage element for solar heat pump space conditioning systems. This paper outlines the analytical and experimental research to date toward the development of an experimentally validated model of ground coupling and, based on experimental results from December 1978 to September 1979, explores the sensitivity of present model predictions to variations in thermal conductivity and other factors. Ways in which the model can be further refined are discussed.

  1. Verifying and Validating Proposed Models for FSW Process Optimization

    NASA Technical Reports Server (NTRS)

    Schneider, Judith

    2008-01-01

    This slide presentation reviews Friction Stir Welding (FSW) and attempts to model the process in order to optimize and improve it. Studies are ongoing to validate and refine the model of metal flow in the FSW process. Slides show the conventional FSW process, a couple of weld tool designs, and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) microstructure features, (2) flow streamlines, (3) steady-state nature, and (4) grain refinement mechanisms.

  2. Development and validation of a building design waste reduction model.

    PubMed

    Llatas, C; Osmani, M

    2016-10-01

    Reduction in construction waste is a pressing need in many countries. The design of building elements is considered a pivotal process for achieving waste reduction at source, as it enables an informed prediction of their wastage reduction levels. However, the lack of quantitative methods linking design strategies to waste reduction hinders designing out waste in building projects. Therefore, this paper addresses this knowledge gap through the design and validation of a Building Design Waste Reduction Strategies (Waste ReSt) model that aims to investigate the relationships between design variables and their impact on onsite waste reduction. The Waste ReSt model was validated in a real-world case study involving 20 residential buildings in Spain. The validation process comprised three stages. Firstly, design waste causes were analyzed. Secondly, design strategies were applied, leading to several alternative low-waste building elements. Finally, their potential source reduction levels were quantified and discussed within the context of the literature. The Waste ReSt model could serve as an instrumental tool to simulate designing-out strategies in building projects. The knowledge provided by the model could help project stakeholders to better understand the correlation between the design process and waste sources and subsequently implement design practices for low-waste buildings.

  3. Validation results of wind diesel simulation model TKKMOD

    NASA Astrophysics Data System (ADS)

    Manninen, L. M.

    The document summarizes the results of the TKKMOD validation procedure. TKKMOD is a simulation model developed at Helsinki University of Technology for a specific wind-diesel system layout; it has been included in the European wind-diesel modelling software package WDLTOOLS under the CEC JOULE project Engineering Design Tools for Wind-Diesel Systems (JOUR-0078). The simulation model is used to calculate the long-term performance of the reference system configuration for given wind and load conditions. The main results are energy flows, energy losses in the system components, diesel fuel consumption, and the number of diesel engine starts. The work has been funded through the Finnish Advanced Energy System R&D Programme (NEMO). The validation has been performed using data from EFI (Norwegian Electric Power Institute), since data from the Finnish reference system are not yet available. The EFI system has a slightly different configuration with similar overall operating principles and approximately the same battery capacity. The validation data set, 394 hours of measured data, is from the first prototype wind-diesel system on the island FROYA off the Norwegian coast.
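
    To give a flavour of the quantities such a model tallies (energy flows, fuel consumption, diesel starts), here is a toy hourly dispatch loop for a wind-diesel-battery system. The dispatch rules, battery reserve threshold, and affine fuel curve are invented for the example and are not the TKKMOD algorithm.

        def wind_diesel_dispatch(wind, load, dt=1.0, batt_max=200.0,
                                 batt=100.0, diesel_rated=50.0):
            """Toy wind-diesel-battery dispatch; returns fuel use [litres],
            number of diesel starts, and final battery state [kWh]."""
            fuel, starts, diesel_on = 0.0, 0, False
            for w, l in zip(wind, load):
                surplus = (w - l) * dt                    # kWh this step
                if surplus >= 0.0:
                    batt = min(batt_max, batt + surplus)  # store the excess
                    diesel_on = False
                elif batt + surplus > 0.2 * batt_max:     # battery above reserve
                    batt += surplus
                    diesel_on = False
                else:                                     # run the diesel
                    if not diesel_on:
                        starts += 1
                        diesel_on = True
                    p = min(diesel_rated, l - w)          # diesel output [kW]
                    fuel += (0.08 * diesel_rated + 0.25 * p) * dt  # affine fuel curve
                    batt = max(0.0, min(batt_max, batt + (p - (l - w)) * dt))
            return fuel, starts, batt

        wind = [30, 45, 10, 5, 0, 20, 60, 55]             # kW, hourly means
        load = [25, 25, 30, 35, 35, 30, 25, 25]           # kW
        print(wind_diesel_dispatch(wind, load))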

  5. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several, possibly dissimilar, validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead, it casts the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications that depend on the same parameters beyond the validation domain.
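
    A drastically simplified example of the interval-predictor idea: fit a central line, then shift it up and down by the extreme residuals so the resulting band contains every observation. The paper computes minimal-spread IPMs by direct optimization; this two-step construction merely illustrates the "interval containing all observations" notion on made-up data.

        import numpy as np

        def linear_ipm(x, y):
            """A least-squares line shifted up and down just enough to
            contain every observation (a crude interval predictor)."""
            slope, intercept = np.polyfit(x, y, 1)
            resid = y - (slope * x + intercept)

            def lower(t):
                return slope * t + intercept + resid.min()

            def upper(t):
                return slope * t + intercept + resid.max()

            return lower, upper

        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 10.0, 40)
        y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)  # synthetic observations
        lo, hi = linear_ipm(x, y)
        print(f"interval at x=5: [{lo(5.0):.2f}, {hi(5.0):.2f}]")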

  6. Finite element modeling for validation of structural damage identification experimentation.

    SciTech Connect

    Stinemates, D. W.; Bennett, J. G.

    2001-01-01

    The project described in this report was performed to couple experimental and analytical techniques in the field of structural health monitoring and damage identification. To do this, a finite element model was constructed of a simulated three-story building used for damage identification experiments. The model was used in conjunction with data from the physical structure to research damage identification algorithms. Of particular interest was modeling slip in joints as a function of bolt torque and predicting the smallest change of torque that could be detected experimentally. After being validated with results from the physical structure, the model was used to produce data to test the capabilities of damage identification algorithms. This report describes the finite element model constructed, the results obtained, and proposed future use of the model.

  7. A benchmark for the validation of solidification modelling algorithms

    NASA Astrophysics Data System (ADS)

    Kaschnitz, E.; Heugenhauser, S.; Schumacher, P.

    2015-06-01

    This work presents two three-dimensional solidification models, which were solved by several widely used solvers (MAGMASOFT, FLOW-3D, ProCAST, WinCast, ANSYS, and OpenFOAM). Surprisingly, the results show noticeable differences. The results are analyzed in a manner similar to a round-robin test procedure to obtain reference values for temperatures and their uncertainties at selected positions in the model. The first model is similar to an adiabatic calorimeter, with an aluminum alloy solidifying in a copper block. For this model, an analytical solution for the overall temperature at steady state can be calculated. The second model adds heat transfer boundary conditions at the outer faces. The geometry of the models, the initial and boundary conditions, as well as the material properties, are kept as simple as possible but nevertheless close to a realistic solidification situation. The resulting temperature data can be used to validate self-written solidification solvers and to check the accuracy of commercial solidification programs.
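
    For the calorimeter-style first model, a steady-state reference temperature follows from a lumped energy balance between the solidifying alloy and the copper block. The sketch below assumes complete solidification, constant properties, and no losses; the masses, temperatures, and property values are illustrative placeholders, not the benchmark's actual data.

        # Heat released by the alloy (cooling + latent heat) equals heat
        # absorbed by the copper block (adiabatic, lumped, constant c_p).
        m_al, c_al, L_al, T_al = 1.0, 900.0, 3.9e5, 700.0   # kg, J/(kg K), J/kg, deg C
        m_cu, c_cu, T_cu = 10.0, 385.0, 20.0                # kg, J/(kg K), deg C

        # m_al*(c_al*(T_al - T_eq) + L_al) = m_cu*c_cu*(T_eq - T_cu)
        T_eq = (m_al * c_al * T_al + m_al * L_al + m_cu * c_cu * T_cu) \
               / (m_al * c_al + m_cu * c_cu)
        print(f"steady-state temperature ~ {T_eq:.0f} deg C")

    A real check would also verify that T_eq lies below the solidus; otherwise solidification is only partial and the balance must carry the solid fraction as an unknown.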

  8. Rationality Validation of a Layered Decision Model for Network Defense

    SciTech Connect

    Wei, Huaqiang; Alves-Foss, James; Zhang, Du; Frincke, Deb

    2007-08-31

    We propose a cost-effective network defense strategy built on three decision layers: security policies, defense strategies, and real-time defense tactics for countering immediate threats. A layered decision model (LDM) can be used to capture this decision process. The LDM helps decision-makers gain insight into the hierarchical relationships among interconnected entities and decision types, and supports the selection of cost-effective defense mechanisms to safeguard computer networks. To be effective as a business tool, it is first necessary to validate the rationality of the model before applying it to real-world business cases. This paper describes our efforts in validating the LDM rationality through simulation.

  9. Approaches to Validation of Models for Low Gravity Fluid Behavior

    NASA Technical Reports Server (NTRS)

    Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad

    2005-01-01

    This paper details the authors' experiences with the validation of computer models to predict low gravity fluid behavior. It reviews the literature of low gravity fluid behavior as a starting point for developing a baseline set of test cases. It examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues seem to be that: most of the data are described by empirical correlation rather than fundamental relation; detailed measurements of the flow field have not been made; free surface shapes are observed, but through thick plastic cylinders, and are therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days, while the zero-gravity time available has been only seconds.

  10. Tucker core consistency for validation of restricted Tucker3 models.

    PubMed

    Kompany-Zareh, Mohsen; Akhlaghi, Yousef; Bro, Rasmus

    2012-04-20

    In Tucker3 analysis of a three-way data array obtained from a chemical or biological system, it is sometimes possible to use a priori knowledge about the system to specify what is called a restricted Tucker3 model. Often, the restricted Tucker3 model is characterized by having some elements of the core forced to zero. As a simple example, an F-component PARAFAC model can be seen as a restricted (F, F, F) Tucker3 model in which only superdiagonal elements of the core are allowed to be nonzero. The core consistency diagnostic was previously introduced by Bro and Kiers for determining the proper number of components in PARAFAC analysis. In the current study, this diagnostic is extended to other restricted Tucker3 models to validate the appropriateness of the applied constraints. The new diagnostic is named Tucker core consistency (TuckCorCon). When the dimensionality and the pattern of the restricted core are valid, the simple core of the restricted Tucker3 model and a corresponding unrestricted core will be similar, and in this case the TuckCorCon will be close to its maximum (100%). A simulated chemical equilibrium data set and two experimental data sets were used to evaluate the applicability of the TuckCorCon for deciding on the appropriateness of the dimensionality and the pattern of the core nonzero elements in restricted Tucker3 models.
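
    A minimal numpy sketch of the diagnostic's central computation is given below: the unrestricted least-squares Tucker core for fixed loadings is compared against the restricted core pattern. The function names are mine and the normalization follows the classic core consistency form, so the paper's exact definition may differ in detail.

        import numpy as np

        def ls_tucker_core(X, A, B, C):
            # Unrestricted least-squares core for fixed loadings:
            # G = X  x1 pinv(A)  x2 pinv(B)  x3 pinv(C)
            Ap, Bp, Cp = np.linalg.pinv(A), np.linalg.pinv(B), np.linalg.pinv(C)
            return np.einsum('pi,qj,rk,ijk->pqr', Ap, Bp, Cp, X)

        def tucker_core_consistency(X, A, B, C, G_restricted):
            # ~100% when the unrestricted core matches the restricted pattern.
            G = ls_tucker_core(X, A, B, C)
            return 100.0 * (1.0 - ((G - G_restricted) ** 2).sum()
                            / (G_restricted ** 2).sum())

        # PARAFAC as a special case: the restricted core is superdiagonal ones.
        F, (I, J, K) = 3, (10, 8, 6)
        rng = np.random.default_rng(1)
        A, B, C = (rng.normal(size=(n, F)) for n in (I, J, K))
        X = np.einsum('if,jf,kf->ijk', A, B, C)   # noise-free trilinear data
        T = np.zeros((F, F, F))
        T[np.arange(F), np.arange(F), np.arange(F)] = 1.0
        print(tucker_core_consistency(X, A, B, C, T))   # close to 100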

  11. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    SciTech Connect

    D.M. Jolley

    2001-12-18

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and in the laboratory, and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  12. In-Drift Microbial Communities Model Validation Calculations

    SciTech Connect

    D. M. Jolley

    2001-09-24

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and in the laboratory, and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  13. In-Drift Microbial Communities Model Validation Calculation

    SciTech Connect

    D. M. Jolley

    2001-10-31

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and in the laboratory, and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  14. ESEEM Analysis of Multi-Histidine Cu(II)-Coordination in Model Complexes, Peptides, and Amyloid-β

    PubMed Central

    2015-01-01

    We validate the use of ESEEM to predict the number of 14N nuclei coupled to a Cu(II) ion by the use of model complexes and two small peptides with well-known Cu(II) coordination. We apply this method to gain new insight into less explored aspects of Cu(II) coordination in amyloid-β (Aβ). Aβ has two coordination modes of Cu(II) at physiological pH. A controversy has existed regarding the number of histidine residues coordinated to the Cu(II) ion in component II, which is dominant at high pH values (∼8.7). Importantly, with an excess amount of Zn(II) ions, as is the case in brain tissues affected by Alzheimer's disease, component II becomes the dominant coordination mode, as Zn(II) selectively substitutes component I bound to Cu(II). We confirm that component II only contains single histidine coordination, using ESEEM and a set of model complexes. The ESEEM experiments carried out on systematically 15N-labeled peptides reveal that, in component II, His 13 and His 14 are favored as equatorial ligands over His 6. Revealing molecular level details of subcomponents in metal ion coordination is critical in understanding the role of metal ions in Alzheimer's disease etiology. PMID:25014537

  15. ESEEM analysis of multi-histidine Cu(II)-coordination in model complexes, peptides, and amyloid-β.

    PubMed

    Silva, K Ishara; Michael, Brian C; Geib, Steven J; Saxena, Sunil

    2014-07-31

    We validate the use of ESEEM to predict the number of (14)N nuclei coupled to a Cu(II) ion by the use of model complexes and two small peptides with well-known Cu(II) coordination. We apply this method to gain new insight into less explored aspects of Cu(II) coordination in amyloid-β (Aβ). Aβ has two coordination modes of Cu(II) at physiological pH. A controversy has existed regarding the number of histidine residues coordinated to the Cu(II) ion in component II, which is dominant at high pH values (∼8.7). Importantly, with an excess amount of Zn(II) ions, as is the case in brain tissues affected by Alzheimer's disease, component II becomes the dominant coordination mode, as Zn(II) selectively substitutes component I bound to Cu(II). We confirm that component II only contains single histidine coordination, using ESEEM and a set of model complexes. The ESEEM experiments carried out on systematically (15)N-labeled peptides reveal that, in component II, His 13 and His 14 are favored as equatorial ligands over His 6. Revealing molecular level details of subcomponents in metal ion coordination is critical in understanding the role of metal ions in Alzheimer's disease etiology.

  16. PARALLEL MEASUREMENT AND MODELING OF TRANSPORT IN THE DARHT II BEAMLINE ON ETA II

    SciTech Connect

    Chambers, F W; Raymond, B A; Falabella, S; Lee, B S; Richardson, R A; Weir, J T; Davis, H A; Schultze, M E

    2005-05-31

    Successfully tuning the DARHT II transport beamline requires close coupling between a model of the beam transport and measurement of the beam observables as the beam conditions and magnet settings are varied. For the ETA II experiment using the DARHT II beamline components, this was achieved using the SUICIDE (Simple User Interface Connecting to an Integrated Data Environment) data analysis environment and the FITS (Fully Integrated Transport Simulation) model. The SUICIDE environment has direct access to the experimental beam transport data at acquisition and to the FITS predictions of the transport for immediate comparison. The FITS model is coupled into the control system, where it can read magnet current settings for real-time modeling. We find this integrated coupling is essential for model verification and for the successful development of a tuning aid enabling efficient convergence on a usable tune. We show real-time comparisons of simulation and experiment and explore the successes and limitations of this close-coupled approach.

  17. Methods for Geometric Data Validation of 3d City Models

    NASA Astrophysics Data System (ADS)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point is correct geometry in accordance with ISO 19107: a valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges, no gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closedness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
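
    As a sketch of what such polygon-level checks can look like in practice, the snippet below assumes a polygon supplied as an ordered vertex array and tests ring closedness and planarity against explicit tolerances. Function names and tolerance values are illustrative; they are not CityDoctor's actual API.

        import numpy as np

        def ring_is_closed(pts, tol=1e-6):
            # The bounding linear ring must end where it starts.
            return np.linalg.norm(pts[0] - pts[-1]) <= tol

        def planarity_deviation(pts):
            # Maximum vertex distance from the least-squares plane (via SVD:
            # the normal is the right singular vector of smallest singular value).
            centered = pts - pts.mean(axis=0)
            normal = np.linalg.svd(centered)[2][-1]
            return float(np.abs(centered @ normal).max())

        # A slightly non-planar quad; the last vertex repeats the first.
        poly = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0.002],
                         [0, 1, 0], [0, 0, 0]], dtype=float)
        valid = ring_is_closed(poly) and planarity_deviation(poly[:-1]) <= 0.01
        print("polygon passes:", valid)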

  18. Estimating the predictive validity of diabetic animal models in rosiglitazone studies.

    PubMed

    Varga, O E; Zsíros, N; Olsson, I A S

    2015-06-01

    For therapeutic studies, the predictive validity of animal models - arguably their most important feature in terms of human relevance - can be calculated retrospectively by obtaining data on treatment efficacy from human and animal trials. Using rosiglitazone as a case study, we aim to determine the predictive validity of animal models of diabetes by analysing which models perform most similarly to humans during rosiglitazone treatment in terms of changes in standard diabetes diagnosis parameters (glycosylated haemoglobin [HbA1c] and fasting glucose levels). A further objective of this paper was to explore the impact of four covariates on the predictive capacity: (i) diabetes induction method; (ii) drug administration route; (iii) sex of animals; and (iv) diet during the experiments. Despite the variable consistency of animal species-based models with the human reference for glucose and HbA1c treatment effects, our results show that glucose and HbA1c treatment effects in rats agreed better with the expected values based on human data than in other species. Induction method was also found to be a substantial factor affecting animal model performance. The study concluded that regular reassessment of animal models can help to identify the human relevance of each model and to adapt research designs to actual research goals.

  19. Full-scale validation of a model of algal productivity.

    PubMed

    Béchet, Quentin; Shilton, Andy; Guieysse, Benoit

    2014-12-01

    While modeling algal productivity outdoors is crucial to assess the economic and environmental performance of full-scale cultivation, most of the models hitherto developed for this purpose have not been validated under fully relevant conditions, especially with regard to temperature variations. The objective of this study was to independently validate a model of algal biomass productivity accounting for both light and temperature, constructed using parameters derived experimentally from short-term indoor experiments. To do this, the accuracy of a model developed for Chlorella vulgaris was assessed against data collected from photobioreactors operated outdoors (New Zealand) over different seasons, years, and operating conditions (temperature control/no temperature control, batch, and fed-batch regimes). The model accurately predicted experimental productivities under all conditions tested, yielding an overall accuracy of ±8.4% over 148 days of cultivation. For the purpose of assessing the feasibility of full-scale algal cultivation, the use of the productivity model was therefore shown to markedly reduce uncertainty in the cost of biofuel production, while also eliminating uncertainties in water demand, a critical element of environmental impact assessments. Simulations at five climatic locations demonstrated that temperature control in outdoor photobioreactors would require tremendous amounts of energy without a considerable increase in algal biomass. Prior assessments neglecting the impact of temperature variations on algal productivity in photobioreactors may therefore be erroneous.
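
    A generic light-and-temperature productivity model of the kind validated here can be sketched as a saturating light response scaled by a cardinal temperature response (CTMI). The functional forms and every parameter value below are illustrative assumptions, not the paper's calibrated Chlorella vulgaris model.

        def ctmi(T, Tmin=5.0, Topt=27.0, Tmax=42.0):
            # Cardinal temperature model with inflexion (Rosso et al.);
            # zero growth outside [Tmin, Tmax], maximum 1.0 at Topt.
            if T <= Tmin or T >= Tmax:
                return 0.0
            num = (T - Tmax) * (T - Tmin) ** 2
            den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                                   - (Topt - Tmax) * (Topt + Tmin - 2.0 * T))
            return num / den

        def productivity(I, T, P_max=1.2, K_I=120.0):
            # Monod-type light saturation times the temperature factor.
            return P_max * I / (I + K_I) * ctmi(T)

        print(productivity(I=400.0, T=15.0))   # illustrative units, e.g. g/m2/day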

  20. Validation of an Urban Parameterization in a Mesoscale Model

    SciTech Connect

    Leach, M.J.; Chin, H.

    2001-07-19

    The Atmospheric Science Division at Lawrence Livermore National Laboratory uses the Naval Research Laboratory's Coupled Ocean-Atmosphere Mesoscale Prediction System (COAMPS) for both operations and research. COAMPS is a non-hydrostatic model, designed as a multi-scale simulation system ranging from synoptic down to meso, storm, and local terrain scales. As model resolution increases, the forcing due to small-scale complex terrain features, including urban structures and surfaces, intensifies. An urban parameterization has been added to COAMPS. The parameterization attempts to incorporate the effects of buildings and urban surfaces without explicitly resolving them, and includes modeling the mean flow to turbulence energy exchange, radiative transfer, the surface energy budget, and the addition of anthropogenic heat. The Chemical and Biological National Security Program's (CBNP) URBAN field experiment was designed to collect data to validate numerical models over a range of length and time scales. The experiment was conducted in Salt Lake City in October 2000. The scales ranged from circulation around single buildings to flow in the entire Salt Lake basin. Data from the field experiment include tracer data as well as observations of mean and turbulence atmospheric parameters. Wind and turbulence predictions from COAMPS are used to drive a Lagrangian particle model, the Livermore Operational Dispersion Integrator (LODI). Simulations with COAMPS and LODI are used to test the sensitivity to the urban parameterization. Data from the field experiment, including the tracer data and the atmospheric parameters, are also used to validate the urban parameterization.

  1. Validation of a transparent decision model to rate drug interactions

    PubMed Central

    2012-01-01

    Background Multiple databases provide ratings of drug-drug interactions. The ratings are often based on different criteria and lack background information on the decision making process. User acceptance of rating systems could be improved by providing a transparent decision path for each category. Methods We rated 200 randomly selected potential drug-drug interactions using a transparent decision model developed by our team. The cases were generated from ward round observations and physicians' queries from an outpatient setting. We compared our ratings to those assigned by a senior clinical pharmacologist and by a standard interaction database, and thus validated the model. Results The decision model rated consistently with the standard database and the pharmacologist in 94 and 156 cases, respectively. In two cases the model decision required correction. Following removal of systematic model construction differences, the decision model was fully consistent with the other rating systems. Conclusion The decision model reproducibly rates interactions and elucidates systematic differences. We propose to supply validated decision paths alongside the interaction rating to improve comprehensibility and to enable physicians to interpret the ratings in a clinical context. PMID:22950884

  2. Validation of detailed thermal hydraulic models used for LMR safety and for improvement of technical specifications

    SciTech Connect

    Dunn, F.E.

    1995-12-31

    Detailed steady-state and transient coolant temperatures and flow rates from an operating reactor have been used to validate the multiple pin model in the SASSYS-1 liquid metal reactor systems analysis code. This multiple pin capability can be used for explicit calculations of axial and lateral temperature distributions within individual subassemblies. Thermocouples at a number of axial locations and in a number of different coolant subchannels in the XX09 instrumented subassembly in the EBR-II reactor provided temperature data from the Shutdown Heat Removal Test (SHRT) series. Flow meter data for XX09 and for the overall system are also available from these tests. Results of consistent SASSYS-1 multiple pin analyses for both the SHRT-45 loss-of-flow-without-scram test and the SHRT-17 protected loss-of-flow test agree well with the experimental data, providing validation of the SASSYS-1 code over a wide range of conditions.

  3. Multilevel spatiotemporal validation of snow/ice mass balance and runoff modeling in glacierized catchments

    NASA Astrophysics Data System (ADS)

    Hanzer, Florian; Helfricht, Kay; Marke, Thomas; Strasser, Ulrich

    2016-08-01

    In this study, the fully distributed, physically based hydroclimatological model AMUNDSEN is set up for catchments in the highly glacierized Ötztal Alps (Austria, 558 km2 in total). The model is applied for the period 1997-2013, using a spatial resolution of 50 m and a temporal resolution of 1 h. A novel parameterization for lateral snow redistribution based on topographic openness is presented to account for the highly heterogeneous snow accumulation patterns in the complex topography of the study region. Multilevel spatiotemporal validation is introduced as a systematic, independent, complete, and redundant validation procedure based on the observation scales of temporal and spatial support, spacing, and extent. This new approach is demonstrated using a comprehensive set of eight independent validation sources: (i) mean areal precipitation over the period 1997-2006 derived by conserving mass in the closure of the water balance, (ii) time series of snow depth recordings at the plot scale, (iii-iv) multitemporal snow extent maps derived from Landsat and MODIS satellite data products, (v) the snow accumulation distribution for the winter season 2010/2011 derived from airborne laser scanning data, (vi) specific surface mass balances for three glaciers in the study area, (vii) spatially distributed glacier surface elevation changes for the entire area over the period 1997-2006, and (viii) runoff recordings for several subcatchments. The results indicate a high overall model skill and especially demonstrate the benefit of the new validation approach. The method can serve as a guideline for systematically validating the coupled components in integrated snow-hydrological and glacio-hydrological models.

  4. Validation of the WATEQ4 geochemical model for uranium

    SciTech Connect

    Krupka, K.M.; Jenne, E.A.; Deutsch, W.J.

    1983-09-01

    As part of the Geochemical Modeling and Nuclide/Rock/Groundwater Interactions Studies Program, a study was conducted to partially validate the WATEQ4 aqueous speciation-solubility geochemical model for uranium. The solubility controls determined with the WATEQ4 geochemical model were in excellent agreement with those laboratory studies in which the solids schoepite (UO₂(OH)₂·H₂O), UO₂(OH)₂, and rutherfordine (UO₂CO₃) were identified as actual solubility controls for uranium. The results of modeling solution analyses from laboratory studies of uranyl phosphate solids, however, identified possible errors in the characterization of solids in the original solubility experiments. As part of this study, significant deficiencies in the WATEQ4 thermodynamic data base for uranium solutes and solids were corrected. Revisions included recalculation of selected uranium reactions. Additionally, thermodynamic data for the hydroxyl complexes of U(VI), including anionic U(VI) species, were evaluated (to the extent permitted by the available data). Vanadium reactions were also added to the thermodynamic data base because uranium-vanadium solids can exist in natural ground-water systems. This study is only a partial validation of the WATEQ4 geochemical model because the available laboratory solubility studies do not cover the range of solid phases, alkaline pH values, and concentrations of inorganic complexing ligands needed to evaluate the potential solubility of uranium in ground waters associated with various proposed nuclear waste repositories. Further validation of this or other geochemical models for uranium will require careful determinations of uraninite solubility over the pH range of 7 to 10 under highly reducing conditions and of uranyl hydroxide and phosphate solubilities over the pH range of 7 to 10 under oxygenated conditions.

  5. Experimental Validation and Applications of a Fluid Infiltration Model

    PubMed Central

    Kao, Cindy S.; Hunt, James R.

    2010-01-01

    Horizontal infiltration experiments were performed to validate a plug flow model that minimizes the number of parameters that must be measured. Water and silicone oil at three different viscosities were infiltrated into glass beads, desert alluvium, and silica powder. Experiments were also performed with negative inlet heads on air-dried silica powder, and with water and oil infiltrating into initially water moist silica powder. Comparisons between the data and model were favorable in most cases, with predictions usually within 40% of the measured data. The model is extended to a line source and small areal source at the ground surface to analytically predict the shape of two-dimensional wetting fronts. Furthermore, a plug flow model for constant flux infiltration agrees well with field data and suggests that the proposed model for a constant-head boundary condition can be effectively used to predict wetting front movement at heterogeneous field sites if averaged parameter values are used. PMID:20428480
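
    A plug flow model of this type admits a compact closed form for horizontal infiltration under a constant inlet head. The derivation below is a standard sharp-front (Green-Ampt-style) argument offered as a plausible reading of the model, not the paper's verbatim equations; h_0 denotes the effective driving head (inlet head plus any suction at the front) and Δθ the moisture deficit behind the front.

        % Darcy flux across the wetted length x_f drives the sharp front:
        %   \Delta\theta \,\frac{dx_f}{dt} = K\,\frac{h_0}{x_f},
        % which, with x_f(0) = 0, integrates to
        \[
          x_f(t) \;=\; \sqrt{\frac{2\,K\,h_0\,t}{\Delta\theta}},
          \qquad
          I(t) \;=\; \Delta\theta\, x_f(t) \;\propto\; \sqrt{t}.
        \]

    The square-root-of-time scaling is what makes such a model easy to check against horizontal infiltration data.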

  6. Model selection, identification and validation in anaerobic digestion: a review.

    PubMed

    Donoso-Bravo, Andres; Mailier, Johan; Martin, Cristina; Rodríguez, Jorge; Aceves-Lara, César Arturo; Vande Wouwer, Alain

    2011-11-01

    Anaerobic digestion enables waste (water) treatment and energy production in the form of biogas. The successful implementation of this process has led to an increasing interest worldwide. However, anaerobic digestion is a complex biological process, where hundreds of microbial populations are involved, and whose start-up and operation are delicate issues. In order to better understand the process dynamics and to optimize the operating conditions, the availability of dynamic models is of paramount importance. Such models have to be inferred from prior knowledge and experimental data collected from real plants. Modeling and parameter identification are vast subjects, offering a realm of approaches and methods, which can be difficult to fully understand by scientists and engineers dedicated to plant operation and improvement. This review article discusses existing modeling frameworks and methodologies for parameter estimation and model validation in the field of anaerobic digestion processes. The point of view is pragmatic, intentionally focusing on simple but efficient methods. PMID:21920578

  7. Validation of a Hertzian contact model with nonlinear damping

    NASA Astrophysics Data System (ADS)

    Sierakowski, Adam

    2015-11-01

    Due to limited spatial resolution, most disperse particle simulation methods rely on simplified models for incorporating short-range particle interactions. In this presentation, we introduce a contact model that combines the Hertz elastic restoring force with a nonlinear damping force, requiring only material properties and no tunable parameters. We have implemented the model in a resolved-particle flow solver based on the Physalis method, which accurately captures hydrodynamic interactions by analytically enforcing the no-slip condition on the particle surface. We summarize the results of a few numerical studies that suggest the validity of the contact model over a range of particle interaction intensities (i.e., collision Stokes numbers) when compared with experimental data. This work was supported by the National Science Foundation under Grant Number CBET1335965 and the Johns Hopkins University Modeling Complex Systems IGERT program.
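
    One common way to write such a contact law is a Hertzian restoring force plus a damping term that vanishes with vanishing overlap, e.g. a Kuwabara-Kono/Tsuji-type form F = k*d^(3/2) + c*sqrt(d)*d'. The snippet below integrates a single normal impact with that form; the force expression and all numerical values are assumptions for illustration, not necessarily the presenter's exact model.

        import numpy as np
        from scipy.integrate import solve_ivp

        m, k, c = 1.0e-3, 1.0e7, 5.0   # mass [kg], Hertz stiffness [N/m^1.5], damping (illustrative)

        def rhs(t, y):
            delta, ddelta = y          # overlap [m] and overlap rate [m/s]
            if delta > 0.0:            # force acts only while in contact
                f = max(0.0, k * delta ** 1.5 + c * np.sqrt(delta) * ddelta)  # no tensile force
            else:
                f = 0.0
            return [ddelta, -f / m]

        # Particle arrives at 0.5 m/s; integrate through contact and rebound.
        sol = solve_ivp(rhs, (0.0, 2.0e-3), [0.0, 0.5], max_step=1.0e-6)
        e = -sol.y[1, -1] / 0.5        # effective coefficient of restitution
        print(f"restitution ~ {e:.2f}")

    Comparing predicted restitution coefficients of this kind against impact experiments over a range of collision Stokes numbers is the sort of check the abstract describes.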

  8. Experimental validation of flexible robot arm modeling and control

    NASA Technical Reports Server (NTRS)

    Ulsoy, A. Galip

    1989-01-01

    Flexibility is important for high speed, high precision operation of lightweight manipulators. Accurate dynamic modeling of flexible robot arms is needed. Previous work has mostly been based on linear elasticity with prescribed rigid body motions (i.e., no effect of flexible motion on rigid body motion). Little or no experimental validation of dynamic models for flexible arms is available. Experimental results are also limited for flexible arm control. Researchers include the effects of prismatic as well as revolute joints. They investigate the effect of full coupling between the rigid and flexible motions, and of axial shortening, and consider the control of flexible arms using only additional sensors.

  9. Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models

    SciTech Connect

    Mosher, J.; Guy, J.; Kessler, R.; Astier, P.; Marriner, J.; Betoule, M.; Sako, M.; El-Hage, P.; Biswas, R.; Pain, R.; Kuhlmann, S.; Regnault, N.; Frieman, J. A.; Schneider, D. P.

    2014-08-29

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ~120 low-redshift (z < 0.1) SNe Ia, ~255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ~290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w (input) – w (recovered)) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  10. Cosmological parameter uncertainties from SALT-II type Ia supernova light curve models

    SciTech Connect

    Mosher, J.; Sako, M.; Guy, J.; Astier, P.; Betoule, M.; El-Hage, P.; Pain, R.; Regnault, N.; Marriner, J.; Biswas, R.; Kuhlmann, S.; Schneider, D. P.

    2014-09-20

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ∼120 low-redshift (z < 0.1) SNe Ia, ∼255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ∼290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input – w_recovered) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  11. Validation Analysis of the Shoal Groundwater Flow and Transport Model

    SciTech Connect

    A. Hassan; J. Chapman

    2008-11-01

    Environmental restoration at the Shoal underground nuclear test is following a process prescribed by a Federal Facility Agreement and Consent Order (FFACO) between the U.S. Department of Energy, the U.S. Department of Defense, and the State of Nevada. Characterization of the site included two stages of well drilling and testing in 1996 and 1999, and development and revision of numerical models of groundwater flow and radionuclide transport. Agreement on a contaminant boundary for the site and a corrective action plan was reached in 2006. Later that same year, three wells were installed for the purposes of model validation and site monitoring. The FFACO prescribes a five-year proof-of-concept period for demonstrating that the site groundwater model is capable of producing meaningful results with an acceptable level of uncertainty. The corrective action plan specifies a rigorous seven step validation process. The accepted groundwater model is evaluated using that process in light of the newly acquired data. The conceptual model of groundwater flow for the Project Shoal Area considers groundwater flow through the fractured granite aquifer comprising the Sand Springs Range. Water enters the system by the infiltration of precipitation directly on the surface of the mountain range. Groundwater leaves the granite aquifer by flowing into alluvial deposits in the adjacent basins of Fourmile Flat and Fairview Valley. A groundwater divide is interpreted as coinciding with the western portion of the Sand Springs Range, west of the underground nuclear test, preventing flow from the test into Fourmile Flat. A very low conductivity shear zone east of the nuclear test roughly parallels the divide. The presence of these lateral boundaries, coupled with a regional discharge area to the northeast, is interpreted in the model as causing groundwater from the site to flow in a northeastward direction into Fairview Valley. Steady-state flow conditions are assumed given the absence of

  12. Packed bed heat storage: Continuum mechanics model and validation

    NASA Astrophysics Data System (ADS)

    Knödler, Philipp; Dreißigacker, Volker; Zunft, Stefan

    2016-05-01

    Thermal energy storage (TES) systems are key elements for various types of new power plant concepts. As a possible cost-effective storage inventory option, packed beds of miscellaneous materials come into consideration. However, high technical risks arise due to thermal expansion and shrinking of the packed bed's particles during cyclic thermal operation, possibly leading to material failure. Therefore, suitable tools for designing the heat storage system are mandatory. While particle-discrete models offer detailed simulation results, their computing time for large-scale applications is prohibitive. In contrast, continuous models offer time-efficient simulation results but require effective packed bed parameters. This work focuses on providing insight into basic methods and tools for obtaining such parameters and on how they are implemented into a continuum model. In this context, a particle-discrete model as well as a test rig for carrying out uniaxial compression tests (UCT) is introduced. Experimental validation tests indicate good agreement with simulated UCT results. In this process, effective parameters required for a continuous packed bed model were identified and used for continuum simulation. This approach is validated by comparing the simulated results with experimental data from another test rig. The presented method significantly simplifies subsequent design studies.

  13. Higgs potential in the type II seesaw model

    SciTech Connect

    Arhrib, A.; Benbrik, R.; Chabab, M.; Rahili, L.; Ramadan, J.; Moultaka, G.; Peyranere, M. C.

    2011-11-01

    The standard model Higgs sector, extended by one weak gauge triplet of scalar fields with a very small vacuum expectation value, is a very promising setting to account for neutrino masses through the so-called type II seesaw mechanism. In this paper we consider the general renormalizable doublet/triplet Higgs potential of this model. We perform a detailed study of its main dynamical features that depend on five dimensionless couplings and two mass parameters after spontaneous symmetry breaking, and highlight the implications for the Higgs phenomenology. In particular, we determine (i) the complete set of tree-level unitarity constraints on the couplings of the potential and (ii) the exact tree-level boundedness from below constraints on these couplings, valid for all directions. When combined, these constraints delineate precisely the theoretically allowed parameter space domain within our perturbative approximation. Among the seven physical Higgs states of this model, the mass of the lighter (heavier) CP-even state h⁰ (H⁰) will always satisfy a theoretical upper (lower) bound that is reached for a critical value μ_c of μ (the mass parameter controlling triple couplings among the doublet/triplet Higgses). Saturating the unitarity bounds, we find an upper bound on m_h⁰ and consider the two regimes μ ≳ μ_c and μ ≲ μ_c. In the first regime the Higgs sector is typically very heavy, and only h⁰, which becomes SM-like, could be accessible to the LHC. In contrast, in the second regime, somewhat overlooked in the literature, most of the Higgs sector is light. In particular, the heaviest state H⁰ becomes SM-like, the lighter states being the CP-odd Higgs, the (doubly) charged Higgses, and a decoupled h⁰, possibly
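
    For orientation, a commonly used parameterization of the renormalizable doublet-triplet potential is written below; conventions and normalizations vary between papers, so this is a sketch of the generic structure rather than the authors' exact form.

        \begin{align*}
        V(H,\Delta) ={}& -m_H^2\, H^\dagger H
          + \tfrac{\lambda}{4}\,\big(H^\dagger H\big)^2
          + M_\Delta^2\, \mathrm{Tr}\big(\Delta^\dagger \Delta\big)
          + \Big[\mu\, H^{T}\, i\sigma^2\, \Delta^\dagger H + \mathrm{h.c.}\Big] \\
        &+ \lambda_1\, \big(H^\dagger H\big)\,\mathrm{Tr}\big(\Delta^\dagger \Delta\big)
          + \lambda_2\, \Big[\mathrm{Tr}\big(\Delta^\dagger \Delta\big)\Big]^2
          + \lambda_3\, \mathrm{Tr}\Big[\big(\Delta^\dagger \Delta\big)^2\Big]
          + \lambda_4\, H^\dagger \Delta \Delta^\dagger H,
        \end{align*}

    with five dimensionless couplings (λ, λ₁..λ₄), two mass parameters (m_H, M_Δ), and μ the triple coupling whose critical value μ_c separates the two regimes discussed above.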

  14. Model of the Expansion of H II Region RCW 82

    NASA Astrophysics Data System (ADS)

    Krasnobaev, K. V.; Tagirova, R. R.; Kotova, G. Yu.

    2014-05-01

    This paper aims to resolve the problem of the formation of young objects observed in the RCW 82 H II region. In the framework of a classical trigger model the estimated time of fragmentation is larger than the estimated age of the H II region, so the young objects could not have formed during the dynamical evolution of the H II region. We propose a new model that helps resolve this problem. This model suggests that the H II region RCW 82 is embedded in a cloud of limited size that is denser than the surrounding interstellar medium. According to this model, when the ionization-shock front leaves the cloud it causes the formation of an accelerating dense gas shell. In the accelerated shell, the effects of the Rayleigh-Taylor (R-T) instability dominate, and the characteristic time of the growth of perturbations with the observed magnitude of about 3 pc is 0.14 Myr, which is less than the estimated age of the H II region. The total time t_Σ, which is the sum of the expansion time of the H II region to the edge of the cloud, the time of the R-T instability growth, and the free fall time, is estimated as 0.44 < t_Σ < 0.78 Myr. We conclude that the young objects in the H II region RCW 82 could have formed as a result of the R-T instability with subsequent fragmentation into large-scale condensations.

  15. Robust design and model validation of nonlinear compliant micromechanisms.

    SciTech Connect

    Howell, Larry L.; Baker, Michael Sean; Wittwer, Jonathan W.

    2005-02-01

    Although the use of compliance or elastic flexibility in microelectromechanical systems (MEMS) helps eliminate friction, wear, and backlash, compliant MEMS are known to be sensitive to variations in material properties and feature geometry, resulting in large uncertainties in performance. This paper proposes an approach for design stage uncertainty analysis, model validation, and robust optimization of nonlinear MEMS to account for critical process uncertainties including residual stress, layer thicknesses, edge bias, and material stiffness. A fully compliant bistable micromechanism (FCBM) is used as an example, demonstrating that the approach can be used to handle complex devices involving nonlinear finite element models. The general shape of the force-displacement curve is validated by comparing the uncertainty predictions to measurements obtained from in situ force gauges. A robust design is presented, where simulations show that the estimated force variation at the point of interest may be reduced from ±47 μN to ±3 μN. The reduced sensitivity to process variations is experimentally validated by measuring the second stable position at multiple locations on a wafer.

  16. A 2-stage phase II design with direct assignment option in stage II for initial marker validation.

    PubMed

    An, Ming-Wen; Mandrekar, Sumithra J; Sargent, Daniel J

    2012-08-15

    Biomarkers are critical to targeted therapies, as they may identify patients more likely to benefit from a treatment. Several prospective designs for biomarker-directed therapy have been previously proposed, differing primarily in the study population, randomization scheme, or both. Recognizing the need for randomization, yet acknowledging the possibility of promising but inconclusive results after a stage I cohort of randomized patients, we propose a 2-stage phase II design on marker-positive patients that allows for direct assignment in a stage II cohort. In stage I, marker-positive patients are equally randomized to receive experimental treatment or control. Stage II has the option to adopt "direct assignment" whereby all patients receive experimental treatment. Through simulation, we studied the power and type I error rate of our design compared with a balanced randomized two-stage design, and conducted sensitivity analyses to study the effect of timing of stage I analysis, population shift effects, and unbalanced randomization. Our proposed design has minimal loss in power (<1.8%) and increased type I error rate (<2.1%) compared with a balanced randomized design. The maximum increase in type I error rate in the presence of a population shift was between 3.1% and 5%, and the loss in power across possible timings of stage I analysis was less than 1.2%. Our proposed design has desirable statistical properties with potential appeal in practice. The direct assignment option, if adopted, provides for an "extended confirmation phase" as an alternative to stopping the trial early for evidence of efficacy in stage I.
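
    The operating characteristics of such a design are naturally studied by Monte Carlo simulation. The toy version below uses a single interim z-test to decide between stopping for futility and continuing, with stage II either randomized or direct-assignment, followed by a pooled final z-test. Sample sizes, thresholds, and the analysis rule are all invented for illustration and are not the authors' design parameters.

        import numpy as np

        rng = np.random.default_rng(2)

        def two_prop_z(x1, n1, x2, n2):
            # Pooled two-proportion z statistic.
            p = (x1 + x2) / (n1 + n2)
            se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
            return ((x1 / n1) - (x2 / n2)) / se if se > 0 else 0.0

        def one_trial(p_t, p_c, n1=50, n2=50, z_go=0.5, z_final=1.96, direct=True):
            # Stage I: marker-positive patients randomized 1:1.
            t1, c1 = rng.binomial(n1, p_t), rng.binomial(n1, p_c)
            if two_prop_z(t1, n1, c1, n1) < z_go:
                return False                       # stop early for futility
            if direct:                             # stage II: direct assignment
                t2, nt2, c2, nc2 = rng.binomial(n2, p_t), n2, 0, 0
            else:                                  # balanced randomization throughout
                t2, c2 = rng.binomial(n2 // 2, p_t), rng.binomial(n2 // 2, p_c)
                nt2 = nc2 = n2 // 2
            return two_prop_z(t1 + t2, n1 + nt2, c1 + c2, n1 + nc2) > z_final

        reps = 20000
        type1 = np.mean([one_trial(0.3, 0.3) for _ in range(reps)])  # null case
        power = np.mean([one_trial(0.5, 0.3) for _ in range(reps)])
        print(f"type I error ~ {type1:.3f}, power ~ {power:.3f}")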

  17. Modeling the Arm II core in MicroCap IV

    SciTech Connect

    Dalton, A.C.

    1996-11-01

    This paper reports how an electrical model for the core of the ARM II machine was created and how to use the model. We wanted a model of the electrical characteristics of the ARM II core in order to simulate the machine and to assist in the design of a future machine. We wanted the model to be able to simulate saturation, variable loss, and reset. Using the Hodgdon model and the circuit analysis program MicroCap IV, this was accomplished. This paper is written in such a way as to allow someone not familiar with the project to understand it.

  18. Organic acid modeling and model validation: Workshop summary

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9--10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts entitled ``Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources.'' The workshop was attended by a team of internationally-recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically-selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  19. Organic acid modeling and model validation: Workshop summary. Final report

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9--10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts entitled ``Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources.'' The workshop was attended by a team of internationally-recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically-selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  20. A turbulence model for iced airfoils and its validation

    NASA Technical Reports Server (NTRS)

    Shin, Jaiwon; Chen, Hsun H.; Cebeci, Tuncer

    1992-01-01

    A turbulence model for iced airfoils, based on an extension of the algebraic eddy viscosity formulation of Cebeci and Smith for two-dimensional flows over smooth and rough surfaces, is described and validated for computed ice shapes obtained for a range of total temperatures from 28 to -15 F. The validation is made with an interactive boundary layer method which uses a panel method to compute the inviscid flow and an inverse finite difference boundary layer method to compute the viscous flow. The interaction between inviscid and viscous flows is established by the use of the Hilbert integral. The calculated drag coefficients compare well with recent experimental data taken at the NASA-Lewis Icing Research Tunnel (IRT) and show that, in general, the drag increase due to ice accretion can be predicted well and efficiently.

  1. Validating and Verifying Biomathematical Models of Human Fatigue

    NASA Technical Reports Server (NTRS)

    Martinez, Siera Brooke; Quintero, Luis Ortiz; Flynn-Evans, Erin

    2015-01-01

    Airline pilots experience acute and chronic sleep deprivation, sleep inertia, and circadian desynchrony due to the need to schedule flight operations around the clock. This sleep loss and circadian desynchrony give rise to cognitive impairments, reduced vigilance, and inconsistent performance. Several biomathematical models, based principally on patterns observed in circadian rhythms and homeostatic drive, have been developed to predict a pilot's level of fatigue or alertness. These models allow the Federal Aviation Administration (FAA) and commercial airlines to make decisions about pilot capabilities and flight schedules. Although these models have been validated in a laboratory setting, they have not been thoroughly tested in operational environments, where uncontrolled factors, such as environmental sleep disrupters, caffeine use, and napping, may impact actual pilot alertness and performance. We will compare the predictions of three prominent biomathematical fatigue models (the McCauley Model, the Harvard Model, and the privately sold SAFTE-FAST Model) to actual measures of alertness and performance. We collected sleep logs, movement and light recordings, psychomotor vigilance task (PVT) data, and urinary melatonin (a marker of circadian phase) from 44 pilots in a short-haul commercial airline over one month. We will statistically compare the model predictions to lapses on the PVT and to circadian phase. We will calculate the sensitivity and specificity of each model's predictions under different scheduling conditions. Our findings will aid operational decision-makers in determining the reliability of each model under real-world scheduling situations.
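
    Models of this family typically combine a homeostatic sleep-pressure process with a circadian oscillation, in the spirit of Borbely's two-process model. The sketch below is a generic illustration with invented constants; it is not the McCauley, Harvard, or SAFTE-FAST model.

        import numpy as np

        def sleep_pressure(hours_awake, S0=0.1, tau=18.2):
            # Homeostatic process S: saturating build-up while awake.
            return 1.0 - (1.0 - S0) * np.exp(-hours_awake / tau)

        def circadian(clock_hour, peak_hour=17.0, amplitude=0.25):
            # Circadian process C: ~24 h oscillation peaking in late afternoon.
            return amplitude * np.cos(2.0 * np.pi * (clock_hour - peak_hour) / 24.0)

        def alertness(clock_hour, hours_awake):
            # Higher = more alert; predictions would be compared to PVT lapses.
            return (1.0 - sleep_pressure(hours_awake)) + circadian(clock_hour)

        # A pilot awake since 06:00, evaluated at 02:00 the next morning:
        print(f"alertness ~ {alertness(clock_hour=2.0, hours_awake=20.0):.2f}")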

  2. Validation of coupled atmosphere-fire behavior models

    SciTech Connect

    Bossert, J.E.; Reisner, J.M.; Linn, R.R.; Winterkamp, J.L.; Schaub, R.; Riggan, P.J.

    1998-12-31

    Recent advances in numerical modeling and computer power have made it feasible to simulate the dynamical interaction and feedback between the heat and turbulence induced by wildfires and the local atmospheric wind and temperature fields. At Los Alamos National Laboratory, the authors have developed a modeling system that includes this interaction by coupling a high resolution atmospheric dynamics model, HIGRAD, with a fire behavior model, BEHAVE, to predict the spread of wildfires. The HIGRAD/BEHAVE model is run at very high resolution to properly resolve the fire/atmosphere interaction. At present, these coupled wildfire model simulations are computationally intensive. The additional complexity of these models requires sophisticated methods for assuring their reliability in real world applications. With this in mind, a substantial part of the research effort is directed at model validation. Several instrumented prescribed fires have been conducted with multi-agency support and participation from chaparral, marsh, and scrub environments in coastal areas of Florida and inland California. In this paper, the authors first describe the data required to initialize the components of the wildfire modeling system. Then they present results from one of the Florida fires, and discuss a strategy for further testing and improvement of coupled weather/wildfire models.

  3. On the validity of the nonholonomic model of the rattleback

    NASA Astrophysics Data System (ADS)

    Kuznetsov, S. P.

    2015-12-01

    In connection with the problem of a convex-shaped solid body on a rough horizontal plane (the rattleback or Celtic stone), the paper discusses the validity of the nonholonomic model which postulates that the contact point has zero velocity and, hence, friction performs no mechanical work. While abstract, this model is undoubtedly constructive, similar to many idealizations commonly used in science. Despite its energy-conserving nature, the model does not obey Liouville's theorem on phase volume conservation, thus allowing the occurrence in the phase space of objects characteristic of dissipative dynamics (attractors) and thereby leading to phenomena like the spontaneous reversal of rotations. Nonholonomic models, intermediate between conservative and dissipative systems, should take their deserved place in the general picture of the modern theory of dynamical systems.

  4. Validation of high displacement piezoelectric actuator finite element models

    NASA Astrophysics Data System (ADS)

    Taleghani, Barmac K.

    2000-08-01

    The paper presents the results obtained by using NASTRAN and ANSYS finite element codes to predict doming of the THUNDER piezoelectric actuators during the manufacturing process and subsequent straining due to an applied input voltage. To effectively use such devices in engineering applications, modeling and characterization are essential. Length, width, dome height, and thickness are important parameters for users of such devices. Therefore, finite element models were used to assess the effects of these parameters. NASTRAN and ANSYS used different methods for modeling piezoelectric effects. In NASTRAN, a thermal analogy was used to represent voltage at nodes as equivalent temperatures, while ANSYS processed the voltage directly using piezoelectric finite elements. The results of finite element models were validated by using the experimental results.

  5. Validation of High Displacement Piezoelectric Actuator Finite Element Models

    NASA Technical Reports Server (NTRS)

    Taleghani, B. K.

    2000-01-01

    The paper presents the results obtained by using NASTRAN® and ANSYS® finite element codes to predict doming of the THUNDER piezoelectric actuators during the manufacturing process and subsequent straining due to an applied input voltage. To effectively use such devices in engineering applications, modeling and characterization are essential. Length, width, dome height, and thickness are important parameters for users of such devices. Therefore, finite element models were used to assess the effects of these parameters. NASTRAN® and ANSYS® used different methods for modeling piezoelectric effects. In NASTRAN®, a thermal analogy was used to represent voltage at nodes as equivalent temperatures, while ANSYS® processed the voltage directly using piezoelectric finite elements. The results of finite element models were validated by using the experimental results.

  6. Coupled Thermal-Chemical-Mechanical Modeling of Validation Cookoff Experiments

    SciTech Connect

    ERIKSON,WILLIAM W.; SCHMITT,ROBERT G.; ATWOOD,A.I.; CURRAN,P.D.

    2000-11-27

    The cookoff of energetic materials involves the combined effects of several physical and chemical processes. These processes include heat transfer, chemical decomposition, and mechanical response. The interaction and coupling between these processes influence both the time-to-event and the violence of reaction. The prediction of the behavior of explosives during cookoff, particularly with respect to reaction violence, is a challenging task. To this end, a joint DoD/DOE program has been initiated to develop models for cookoff, and to perform experiments to validate those models. In this paper, a series of cookoff analyses are presented and compared with data from a number of experiments for the aluminized, RDX-based, Navy explosive PBXN-109. The traditional thermal-chemical analysis is used to calculate time-to-event and characterize the heat transfer and boundary conditions. A reaction mechanism based on Tarver and McGuire's work on RDX [2] was adjusted to match the spherical one-dimensional time-to-explosion data. The predicted time-to-event using this reaction mechanism compares favorably with the validation tests. Coupled thermal-chemical-mechanical analysis is used to calculate the mechanical response of the confinement and the energetic material state prior to ignition. The predicted state of the material includes the temperature, stress-field, porosity, and extent of reaction. There is little experimental data for comparison to these calculations. The hoop strain in the confining steel tube gives an estimation of the radial stress in the explosive. The inferred pressure from the measured hoop strain and calculated radial stress agree qualitatively. However, validation of the mechanical response model and the chemical reaction mechanism requires more data. A post-ignition burn dynamics model was applied to calculate the confinement dynamics. The burn dynamics calculations suffer from a lack of characterization of the confinement for the flaw

  7. Modeling, Robust Control, and Experimental Validation of a Supercavitating Vehicle

    NASA Astrophysics Data System (ADS)

    Escobar Sanabria, David

    This dissertation considers the mathematical modeling, control under uncertainty, and experimental validation of an underwater supercavitating vehicle. By traveling inside a gas cavity, a supercavitating vehicle reduces hydrodynamic drag, increases speed, and minimizes power consumption. The attainable speed and power efficiency make these vehicles attractive for undersea exploration, high-speed transportation, and defense. However, the benefits of traveling inside a cavity come with difficulties in controlling the vehicle dynamics. The main challenge is the nonlinear force that arises when the back-end of the vehicle pierces the cavity. This force, referred to as planing, leads to oscillatory motion and instability. Control technologies that are robust to planing and suited for practical implementation need to be developed. To enable these technologies, a low-order vehicle model that accounts for inaccuracy in the characterization of planing is required. Additionally, an experimental method to evaluate possible pitfalls in the models and controllers is necessary before undersea testing. The major contribution of this dissertation is a unified framework for mathematical modeling, robust control synthesis, and experimental validation of a supercavitating vehicle. First, we introduce affordable experimental methods for mathematical modeling and controller testing under planing and realistic flow conditions. Then, using experimental observations and physical principles, we create a low-order nonlinear model of the longitudinal vehicle motion. This model quantifies the planing uncertainty and is suitable for robust controller synthesis. Next, based on the vehicle model, we develop automated tools for synthesizing controllers that deliver a certificate of performance in the face of nonlinear and uncertain planing forces. We demonstrate theoretically and experimentally that the proposed controllers ensure higher performance when the uncertain planing dynamics are

  8. Validation of thermal models for a prototypical MEMS thermal actuator.

    SciTech Connect

    Gallis, Michail A.; Torczynski, John Robert; Piekos, Edward Stanley; Serrano, Justin Raymond; Gorby, Allen D.; Phinney, Leslie Mary

    2008-09-01

    This report documents technical work performed to complete the ASC Level 2 Milestone 2841: validation of thermal models for a prototypical MEMS thermal actuator. This effort requires completion of the following task: the comparison between calculated and measured temperature profiles of a heated stationary microbeam in air. Such heated microbeams are prototypical structures in virtually all electrically driven microscale thermal actuators. This task is divided into four major subtasks. (1) Perform validation experiments on prototypical heated stationary microbeams in which material properties such as thermal conductivity and electrical resistivity are measured if not known and temperature profiles along the beams are measured as a function of electrical power and gas pressure. (2) Develop a noncontinuum gas-phase heat-transfer model for typical MEMS situations including effects such as temperature discontinuities at gas-solid interfaces across which heat is flowing, and incorporate this model into the ASC FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (3) Develop a noncontinuum solid-phase heat transfer model for typical MEMS situations including an effective thermal conductivity that depends on device geometry and grain size, and incorporate this model into the FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (4) Perform combined gas-solid heat-transfer simulations using Calore with these models for the experimentally investigated devices, and compare simulation and experimental temperature profiles to assess model accuracy. These subtasks have been completed successfully, thereby completing the milestone task. Model and experimental temperature profiles are found to be in reasonable agreement for all cases examined. Modest systematic differences appear to be related to uncertainties in the geometric dimensions of the test structures and in the thermal conductivity of the

  9. Multicomponent aerosol dynamics model UHMA: model development and validation

    NASA Astrophysics Data System (ADS)

    Korhonen, H.; Lehtinen, K. E. J.; Kulmala, M.

    2004-05-01

    A size-segregated aerosol dynamics model UHMA (University of Helsinki Multicomponent Aerosol model) was developed for studies of multicomponent tropospheric aerosol particles. The model includes major aerosol microphysical processes in the atmosphere with a focus on new particle formation and growth; thus it incorporates particle coagulation and multicomponent condensation, applying a revised treatment of condensation flux onto free molecular regime particles and the activation of nanosized clusters by organic vapours (Nano-Köhler theory), as well as recent parameterizations for binary H2SO4-H2O and ternary H2SO4-NH3-H2O homogeneous nucleation and dry deposition. The representation of particle size distribution can be chosen from three sectional methods: the hybrid method, the moving center method, and the retracking method in which moving sections are retracked to a fixed grid after a certain time interval. All these methods can treat particle emissions and atmospheric transport consistently, and are therefore suitable for use in large scale atmospheric models. In a test simulation against an accurate high resolution solution, all the methods showed reasonable treatment of new particle formation with 20 size sections although the hybrid and the retracking methods suffered from artificial widening of the distribution. The moving center approach, on the other hand, showed extra dents in the particle size distribution and failed to predict the onset of detectable particle formation. In a separate test simulation of an observed nucleation event, the model captured the key qualitative behaviour of the system well. Furthermore, its prediction of the organic volume fraction in newly formed particles, suggesting values as high as 0.5 for 3-4 nm particles and approximately 0.8 for 10 nm particles, agrees with recent indirect composition measurements.

  10. Instructional Support System--Occupational Education II. ISSOE Automotive Mechanics Content Validation.

    ERIC Educational Resources Information Center

    Abramson, Theodore

    A study was conducted to validate the Instructional Support System-Occupational Education (ISSOE) automotive mechanics curriculum. The following four steps were undertaken: (1) review of the ISSOE materials in terms of their "validity" as task statements; (2) a comparison of the ISSOE tasks to the tasks included in the V-TECS Automotive Mechanics…

  11. Validated models for predicting skin penetration from different vehicles.

    PubMed

    Ghafourian, Taravat; Samaras, Eleftherios G; Brooks, James D; Riviere, Jim E

    2010-12-23

    The permeability of a penetrant through skin is controlled by the properties of the penetrant and the mixture components, which in turn relate to the molecular structures. Despite the well-investigated models for compound permeation through skin, the effect of vehicles and mixture components has not received much attention. The aim of this Quantitative Structure Activity Relationship (QSAR) study was to develop a statistically validated model for the prediction of skin permeability coefficients of compounds dissolved in different vehicles. Furthermore, the model can help with the elucidation of the mechanisms involved in the permeation process. With this goal in mind, the skin permeability of four different penetrants, each blended in 24 different solvent mixtures, was determined from diffusion cell studies using porcine skin. The resulting 96 kp values were combined with a previous dataset of 288 kp data for QSAR analysis. Stepwise regression analysis was used for the selection of the most significant molecular descriptors and development of several regression models. The selected QSAR employed two penetrant descriptors, the Wiener topological index and the total lipole moment, together with the boiling point of the solvent and the difference between the melting point of the penetrant and the melting point of the solvent. The QSAR was validated internally, using a leave-many-out procedure, giving a mean absolute error of 0.454 for the log kp value of the test set.
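
    A minimal sketch of the workflow named in this record: multiple linear regression on molecular descriptors with leave-many-out internal validation, here done with scikit-learn on synthetic placeholder data. The descriptor values, coefficients, and noise level are invented for illustration and are not the study's dataset.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import ShuffleSplit, cross_val_score

      rng = np.random.default_rng(0)
      n = 96
      X = np.column_stack([
          rng.uniform(100, 4000, n),   # Wiener topological index (placeholder)
          rng.uniform(0, 30, n),       # total lipole moment (placeholder)
          rng.uniform(330, 560, n),    # solvent boiling point, K (placeholder)
          rng.uniform(-150, 250, n),   # penetrant minus solvent melting point, K
      ])
      # synthetic response: a linear combination plus noise, for illustration only
      log_kp = X @ np.array([-4e-4, -0.02, -0.003, 0.001]) + rng.normal(0, 0.3, n)

      # "leave-many-out" internal validation: repeated random train/test splits
      cv = ShuffleSplit(n_splits=50, test_size=0.25, random_state=1)
      mae = -cross_val_score(LinearRegression(), X, log_kp, cv=cv,
                             scoring="neg_mean_absolute_error")
      print(f"mean absolute error in log kp: {mae.mean():.3f}")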

  12. Validation of two air quality models for Indian mining conditions.

    PubMed

    Chaulya, S K; Ahmad, M; Singh, R S; Bandopadhyay, L K; Bondyopadhay, C; Mondal, G C

    2003-02-01

    All major mining activity, particularly opencast mining, contributes to the problem of suspended particulate matter (SPM) directly or indirectly. Therefore, assessment and prediction are required to prevent and minimize the deterioration of air quality due to various opencast mining operations. Determination of the SPM emission rate for these activities and validation of air quality models are the first and foremost concerns. In view of the above, the study was taken up to determine SPM emission rates for various opencast mining activities and to validate two commonly used air quality models for Indian mining conditions. To achieve these objectives, eight coal and three iron ore mining sites were selected to generate site-specific emission data by considering type of mining, method of working, geographical location, accessibility and, above all, resource availability. The study covers various mining activities and locations including drilling, overburden loading and unloading, coal/mineral loading and unloading, coal handling or screening plant, exposed overburden dump, stock yard, workshop, exposed pit surface, transport road and haul road. Validation was carried out for the Fugitive Dust Model (FDM) and the Point, Area and Line sources model (PAL2) by assigning the measured emission rate for each mining activity, meteorological data and other details of the respective mine as input to the models. Both models were run separately on the same set of input data for each mine to obtain the predicted SPM concentration at three receptor locations per mine. The receptor locations were selected to coincide with the places where the actual field measurements of SPM concentration were carried out. Statistical analysis was carried out to assess the performance of the models based on the sets of measured and predicted SPM concentration data. The value of the coefficient of correlation for PAL2 and FDM was calculated to be 0.990-0.994 and 0
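
    The model-performance statistics described here reduce to comparing measured and predicted concentrations at the receptor locations; a minimal sketch with illustrative numbers (not the study's data):

      import numpy as np

      measured = np.array([410.0, 385.0, 512.0, 298.0, 450.0, 367.0])   # ug/m3
      predicted = np.array([402.0, 396.0, 498.0, 310.0, 441.0, 355.0])  # ug/m3

      r = np.corrcoef(measured, predicted)[0, 1]          # coefficient of correlation
      rmse = np.sqrt(np.mean((measured - predicted) ** 2))
      bias = np.mean(predicted - measured)
      print(f"r = {r:.3f}, RMSE = {rmse:.1f} ug/m3, bias = {bias:.1f} ug/m3")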

  13. Vibrations inside buildings due to subway railway traffic. Experimental validation of a comprehensive prediction model.

    PubMed

    Lopes, Patrícia; Ruiz, Jésus Fernández; Alves Costa, Pedro; Medina Rodríguez, L; Cardoso, António Silva

    2016-10-15

    The present paper focuses on the experimental validation of a numerical approach previously proposed by the authors for the prediction of vibrations inside buildings due to railway traffic in tunnels. The numerical model is based on the concept of dynamic substructuring and is composed of three autonomous models that simulate the main parts of the problem: i) generation of vibrations (train-track interaction); ii) propagation of vibrations (track-tunnel-ground system); iii) reception of vibrations (building coupled to the ground). The experimental validation consists in the comparison between the results predicted by the proposed numerical model and the measurements performed inside a building due to railway traffic in a shallow tunnel located in Madrid. Apart from a brief description of the numerical model and of the case study, the main options and simplifications adopted in the numerical modeling strategy are discussed. The balance struck between accuracy and simplicity of the numerical approach proved to be a path worth following in order to transfer knowledge to engineering practice. Finally, the comparison between numerical and experimental results showed good agreement, which supports the ability of the proposed modeling strategy to deal with real engineering problems.

  14. Long-term ELBARA-II Assistance to SMOS Land Product and Algorithm Validation at the Valencia Anchor Station (MELBEX Experiment 2010-2013)

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto; Wigneron, Jean-Pierre; Schwank, Mike; Miernecki, Maciej; Kerr, Yann; Casal, Tania; Delwart, Steven; Fernandez-Moran, Roberto; Mecklenburg, Susanne; Coll Pajaron, M. Amparo; Salgado Hernanz, Paula

    The main activity of the Valencia Anchor Station (VAS) is currently to support the validation of SMOS (Soil Moisture and Ocean Salinity) Level 2 and 3 land products (soil moisture, SM, and vegetation optical depth, TAU). With this aim, the European Space Agency (ESA) has provided the Climatology from Satellites Group of the University of Valencia with an ELBARA-II microwave radiometer under a loan agreement since September 2009. During this time, brightness temperatures (TB) have been acquired continuously, except during normal maintenance or minor repair interruptions. ELBARA-II is an L-band dual-polarization radiometer with two channels (1400-1418 MHz, 1409-1427 MHz). It measures continuously over a vineyard field (El Renegado, Caudete de las Fuentes, Valencia) from a 15 m platform, following a constant protocol for calibration and angular scanning measurements, with the aim of assisting the validation of SMOS land products and the calibration of the L-MEB (L-Band Emission of the Biosphere) model, the basis for the SMOS Level 2 Land Processor, over the VAS validation site. One of the advantages of the VAS site is the possibility of studying two different environmental conditions over the year: while the vine cycle extends mainly between April and October, during the rest of the year the area remains under bare soil conditions, adequate for the calibration of the soil model. The measurement protocol currently running has proven robust during the whole operation time and will be extended as long as possible to continue providing a long-term data set of ELBARA-II TB measurements and retrieved SM and TAU. This data set is also proving useful in support of SMOS scientific activities: the VAS area and, specifically, the ELBARA-II site offer good conditions to monitor the long-term evolution of SMOS Level 2 and Level 3 land products and to interpret eventual anomalies that may obscure hidden sensor biases. In addition, SM and TAU that are currently

  15. Model validation in aquatic toxicity testing: implications for regulatory practice.

    PubMed

    McCarty, L S

    2012-08-01

    Toxicity test validity is contingent on whether models and assumptions are appropriate and sufficient. A quality control evaluation of the acute toxicity testing protocol using the U.S. EPA fathead minnow database focused on three key assumptions that ensure results represent valid toxicological metrics: (1) it must be possible to estimate steady-state LC50s; (2) LC50s should occur at equivalent exposure durations; (3) all substantive toxicity modifying factors should be adequately controlled. About 8% of the tests failed the first assumption and are invalid and unusable. Examination of the remaining data indicated that variance from unquantified effects of toxicity modifying factors remained in LC50s, thereby failing the third assumption. Such flaws in toxicity data generated via recommended LC50 testing protocols mean that the resultant data do not represent consistent, comparable measures of relative toxicity. Current regulations employing LC50 testing data are acceptable due to the use of semiquantitative, policy-driven development guidance that considers such data uncertainty. Quantitative applications such as QSARs, mixture toxicity, and regulatory chemical grouping can be compromised. These validation failures justify a formal quality control review of the LC50 toxicity testing protocol. Interim improvements in the design, execution, interpretation, and regulatory applications of LC50 and related protocols using exposure-based dose surrogates are warranted.

  16. Contaminant transport model validation: The Oak Ridge Reservation

    SciTech Connect

    Lee, R.R.; Ketelle, R.H.

    1988-09-01

    In the complex geologic setting on the Oak Ridge Reservation, hydraulic conductivity is anisotropic and flow is strongly influenced by an extensive and largely discontinuous fracture network. Difficulties in describing and modeling the aquifer system prompted a study to obtain aquifer property data to be used in a groundwater flow model validation experiment. Characterization studies included the performance of an extensive suite of aquifer tests within a 600-square-meter area to obtain aquifer property values describing the flow field in detail. Following aquifer testing, a groundwater tracer test was performed under ambient conditions to verify the aquifer analysis. Tracer migration data in the near-field were used in model calibration to predict tracer arrival time and concentration in the far-field. Despite the extensive aquifer testing, initial modeling inaccurately predicted tracer migration direction. Initial tracer migration rates were consistent with those predicted by the model; however, changing environmental conditions resulted in an unanticipated decay in tracer movement. Evaluation of the predictive accuracy of groundwater flow and contaminant transport models on the Oak Ridge Reservation depends on defining the resolution required, followed by field testing and model grid definition at compatible scales. The use of tracer tests, both as a characterization method and to verify model results, provides the highest level of resolution of groundwater flow characteristics. 3 refs., 4 figs.

  17. Validation of landsurface processes in the AMIP models

    SciTech Connect

    Phillips, T J

    1999-10-01

    The Atmospheric Model Intercomparison Project (AMIP) is a commonly accepted protocol for testing the performance of the world's atmospheric general circulation models (AGCMs) under common specifications of radiative forcings (in solar constant and carbon dioxide concentration) and observed ocean boundary conditions (Gates 1992, Gates et al. 1999). From the standpoint of land-surface specialists, the AMIP affords an opportunity to investigate the behaviors of a wide variety of land-surface schemes (LSS) that are coupled to their "native" AGCMs (Phillips et al. 1995, Phillips 1999). In principle, therefore, the AMIP permits consideration of an overarching question: "To what extent does an AGCM's performance in simulating continental climate depend on the representations of land-surface processes by the embedded LSS?" There are, of course, some formidable obstacles to satisfactorily addressing this question. First, there is the dilemma of how to effectively validate simulation performance, given the present dearth of global land-surface data sets. Even if this data problem were to be alleviated, some inherent methodological difficulties would remain: in the context of the AMIP, it is not possible to validate a given LSS per se, since the associated land-surface climate simulation is a product of the coupled AGCM/LSS system. Moreover, aside from the intrinsic differences in LSS across the AMIP models, the varied representations of land-surface characteristics (e.g. vegetation properties, surface albedos and roughnesses, etc.) and related variations in land-surface forcings further complicate such an attribution process. Nevertheless, it may be possible to develop validation methodologies/statistics that are sufficiently penetrating to reveal "signatures" of particular LSS representations (e.g. "bucket" vs more complex parameterizations of hydrology) in the AMIP land-surface simulations.
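
    A minimal sketch of the kind of aggregated validation statistic such studies rely on: an area-weighted root-mean-square error between a simulated land-surface field and a reference data set. The arrays below are synthetic placeholders; the cosine-latitude weighting is the standard convention for regular latitude-longitude grids.

      import numpy as np

      lat = np.linspace(-89.0, 89.0, 90)               # cell-center latitudes
      rng = np.random.default_rng(0)
      sim = rng.normal(288.0, 10.0, (90, 180))         # simulated surface air T, K
      ref = sim + rng.normal(0.0, 1.5, (90, 180))      # reference data set, K

      w = np.cos(np.radians(lat))[:, None] * np.ones((90, 180))  # area weights
      rmse = np.sqrt(np.sum(w * (sim - ref) ** 2) / np.sum(w))
      print(f"area-weighted RMSE: {rmse:.2f} K")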

  18. [Development of the Portuguese version of MOS SF-36. Part II --Validation tests].

    PubMed

    Ferreira, P L

    2000-01-01

    This paper describes the study aimed at validating the Portuguese version of the MOS SF-36 instrument of assessment. It starts by presenting the results of the implementation of this instrument in a sample of 930 pregnant women and the results of scaling tests, including the values of internal consistency and reliability. However, since a reliable instrument is not necessarily a valid one, the results of several validity tests are also presented. Finally, this paper ends by recommending the use of the Portuguese version of the SF-36 instrument of assessment.

  19. Validated numerical simulation model of a dielectric elastomer generator

    NASA Astrophysics Data System (ADS)

    Foerster, Florentine; Moessinger, Holger; Schlaak, Helmut F.

    2013-04-01

    Dielectric elastomer generators (DEG) produce electrical energy by converting mechanical into electrical energy. Efficient operation requires homogeneous deformation of each single layer. However, due to different internal and external influences, such as supports or the shape of a DEG, the deformation will be inhomogeneous and hence negatively affect the amount of generated electrical energy. Optimization of the deformation behavior leads to improved efficiency of the DEG and consequently to higher energy gain. In this work a numerical simulation model of a multilayer dielectric elastomer generator is developed using the FEM software ANSYS. The analyzed multilayer DEG consists of 49 active dielectric layers with layer thicknesses of 50 μm. The elastomer is silicone (PDMS) while the compliant electrodes are made of graphite powder. The simulation includes the real material parameters of the PDMS and the graphite electrodes. Therefore, the mechanical and electrical material parameters of the PDMS are determined by experimental investigations of test samples, while the electrode parameters are determined by numerical simulations of test samples. The numerical simulation of the DEG is carried out as a coupled electro-mechanical simulation for the constant-voltage energy harvesting cycle. Finally, the derived numerical simulation model is validated by comparison with analytical calculations and further simulated DEG configurations. The comparison of the results shows good agreement with regard to the deformation of the DEG. Based on the validated model it is now possible to optimize the DEG layout for improved deformation behavior with further simulations.
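
    For the constant-voltage harvesting cycle named in this record, an ideal lossless analysis gives a mechanical-to-electrical conversion of 0.5 V^2 (C_max - C_min) per cycle: as the relaxing elastomer's capacitance falls at fixed voltage, half of the energy returned to the supply comes from the stored field and half from mechanical work. The geometry and material numbers in this back-of-the-envelope sketch are rough assumptions, not the paper's data.

      EPS0 = 8.854e-12          # vacuum permittivity, F/m
      eps_r = 2.8               # relative permittivity of PDMS (typical value)
      n_layers = 49
      area_stretched = 1.2e-3   # active electrode area when stretched, m^2 (assumed)
      area_relaxed = 1.0e-3     # m^2 (assumed)
      t_stretched = 42e-6       # layer thickness when stretched, m (roughly volume-conserving)
      t_relaxed = 50e-6         # nominal 50 um layer, m
      V = 1500.0                # harvesting voltage, V (assumed)

      def capacitance(area, thickness):
          """Parallel-plate capacitance of the n-layer stack."""
          return n_layers * EPS0 * eps_r * area / thickness

      c_max = capacitance(area_stretched, t_stretched)
      c_min = capacitance(area_relaxed, t_relaxed)
      energy_per_cycle = 0.5 * V**2 * (c_max - c_min)
      print(f"C_max = {c_max*1e9:.1f} nF, C_min = {c_min*1e9:.1f} nF")
      print(f"converted energy per ideal cycle ~ {energy_per_cycle*1e3:.1f} mJ")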

  20. Modeling and Validation of Damped Plexiglas Windows for Noise Control

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Gibbs, Gary P.; Klos, Jacob; Mazur, Marina

    2003-01-01

    Windows are a significant path for structure-borne and air-borne noise transmission in general aviation aircraft. In this paper, numerical and experimental results are used to evaluate damped plexiglas windows for the reduction of structure-borne and air-borne noise transmitted into the interior of an aircraft. In contrast to conventional homogeneous windows, the damped plexiglas windows were fabricated using two or three layers of plexiglas with transparent viscoelastic damping material sandwiched between the layers. Transmission loss and radiated sound power measurements were used to compare different layups of the damped plexiglas windows with uniform windows of the same nominal thickness. This vibro-acoustic test data was also used for the verification and validation of finite element and boundary element models of the damped plexiglas windows. Numerical models are presented for the prediction of radiated sound power for a point force excitation and transmission loss for diffuse acoustic excitation. Radiated sound power and transmission loss predictions are in good agreement with experimental data. Once validated, the numerical models were used to perform a parametric study to determine the optimum configuration of the damped plexiglas windows for reducing the radiated sound power for a point force excitation.

  1. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology

    PubMed Central

    Krishnamoorthi, Shankarjee; Perotti, Luigi E.; Borgstrom, Nils P.; Ajijola, Olujimi A.; Frid, Anna; Ponnaluri, Aditya V.; Weiss, James N.; Qu, Zhilin; Klug, William S.; Ennis, Daniel B.; Garfinkel, Alan

    2014-01-01

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations. PMID:25493967

  2. Analytical thermal model validation for Cassini radioisotope thermoelectric generator

    SciTech Connect

    Lin, E.I.

    1997-12-31

    The Saturn-bound Cassini spacecraft is designed to rely, without precedent, on the waste heat from its three radioisotope thermoelectric generators (RTGs) to warm the propulsion module subsystem, and the RTG end dome temperature is a key determining factor of the amount of waste heat delivered. A previously validated SINDA thermal model of the RTG was the sole guide to understanding its complex thermal behavior, but displayed large discrepancies against some initial thermal development test data. A careful revalidation effort led to significant modifications and adjustments of the model, which resulted in a doubling of the radiative heat transfer from the heat source support assemblies to the end domes and brought the end dome and flange temperature predictions to within 2 C of the pertinent test data. The increased inboard end dome temperature has a considerable impact on thermal control of the spacecraft central body. The validation process offers an example of physically-driven analytical model calibration with test data from not only an electrical simulator but also a nuclear-fueled flight unit, and has established the end dome temperatures of a flight RTG where no in-flight or ground-test data existed before.

  3. Low frequency eddy current benchmark study for model validation

    SciTech Connect

    Mooers, R. D.; Boehnlein, T. R.; Cherry, M. R.; Knopp, J. S.; Aldrin, J. C.; Sabbagh, H. A.

    2011-06-23

    This paper presents results of an eddy current model validation study. Precise measurements were made using an impedance analyzer to investigate changes in impedance due to Electrical Discharge Machining (EDM) notches in aluminum plates. Each plate contained one EDM notch at an angle of 0, 10, 20, or 30 degrees from the normal of the plate surface. Measurements were made with the eddy current probe both scanning parallel and perpendicular to the notch length. The experimental response from the vertical and oblique notches will be reported and compared to results from different numerical simulation codes.

  4. Statistical validation of structured population models for Daphnia magna

    PubMed Central

    Adoteye, Kaska; Banks, H.T.; Cross, Karissa; Eytcheson, Stephanie; Flores, Kevin B.; LeBlanc, Gerald A.; Nguyen, Timothy; Ross, Chelsea; Smith, Emmaline; Stemkovski, Michael; Stokely, Sarah

    2016-01-01

    In this study we use statistical validation techniques to verify density-dependent mechanisms hypothesized for populations of Daphnia magna. We develop structured population models that exemplify specific mechanisms, and use multi-scale experimental data in order to test their importance. We show that fecundity and survival rates are affected by both time-varying density-independent factors, such as age, and density-dependent factors, such as competition. We perform uncertainty analysis and show that our parameters are estimated with a high degree of confidence. Further, we perform a sensitivity analysis to understand how changes in fecundity and survival rates affect population size and age-structure. PMID:26092608
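
    A minimal sketch of a density-dependent, age-structured projection of the general kind validated in this study: age-classed fecundity and survival, with survival reduced by crowding. All parameter values below are illustrative assumptions, not the study's estimates.

      import numpy as np

      n_ages = 8
      fecundity = np.array([0.0, 0.0, 6.0, 8.0, 8.0, 6.0, 4.0, 2.0])  # offspring/day
      survival0 = 0.9          # density-independent daily survival (assumed)
      K = 200.0                # crowding scale, individuals (assumed)

      def project(pop, days=60):
          """Project the age-classed population with crowding-reduced survival."""
          for _ in range(days):
              s = survival0 / (1.0 + pop.sum() / K)   # density-dependent survival
              births = (fecundity * pop).sum()
              pop = np.roll(pop * s, 1)               # advance every age class
              pop[0] = births                         # newborns replace the rolled-in oldest class
          return pop

      final = project(np.full(n_ages, 5.0))
      print("final size:", round(float(final.sum()), 1))
      print("final age structure:", (final / final.sum()).round(2))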

  5. Low Frequency Eddy Current Benchmark Study for Model Validation

    NASA Astrophysics Data System (ADS)

    Mooers, R. D.; Cherry, M. R.; Knopp, J. S.; Aldrin, J. C.; Sabbagh, H. A.; Boehnlein, T. R.

    2011-06-01

    This paper presents results of an eddy current model validation study. Precise measurements were made using an impedance analyzer to investigate changes in impedance due to Electrical Discharge Machining (EDM) notches in aluminum plates. Each plate contained one EDM notch at an angle of 0, 10, 20, or 30 degrees from the normal of the plate surface. Measurements were made with the eddy current probe both scanning parallel and perpendicular to the notch length. The experimental response from the vertical and oblique notches will be reported and compared to results from different numerical simulation codes.

  6. A validation study of a stochastic model of human interaction

    NASA Astrophysics Data System (ADS)

    Burchfield, Mitchel Talmadge

    The purpose of this dissertation is to validate a stochastic model of human interactions which is part of a developmentalism paradigm. Incorporating elements of ancient and contemporary philosophy and science, developmentalism defines human development as a progression of increasing competence and utilizes compatible theories of developmental psychology, cognitive psychology, educational psychology, social psychology, curriculum development, neurology, psychophysics, and physics. To validate a stochastic model of human interactions, the study addressed four research questions: (a) Does attitude vary over time? (b) What are the distributional assumptions underlying attitudes? (c) Does the stochastic model, $N \int_{-\infty}^{\infty} \varphi(\chi,\tau)\,\Psi(\tau)\,d\tau$, have utility for the study of attitudinal distributions and dynamics? (d) Are the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein theories applicable to human groups? Approximately 25,000 attitude observations were made using the Semantic Differential Scale. Positions of individuals varied over time and the logistic model predicted observed distributions with correlations between 0.98 and 1.0, with estimated standard errors significantly less than the magnitudes of the parameters. The results bring into question the applicability of Fisherian research designs (Fisher, 1922, 1928, 1938) for behavioral research, based on the apparent failure of two fundamental assumptions: the noninteractive nature of the objects being studied and the normal distribution of attributes. The findings indicate that individual belief structures are representable in terms of a psychological space which has the same or similar properties as physical space. The psychological space not only has dimension, but individuals interact by force equations similar to those described in theoretical physics models. Nonlinear regression techniques were used to estimate Fermi-Dirac parameters from the data. The model explained a high degree

  7. Aqueous Solution Vessel Thermal Model Development II

    SciTech Connect

    Buechler, Cynthia Eileen

    2015-10-28

    The work presented in this report is a continuation of the work described in the May 2015 report, “Aqueous Solution Vessel Thermal Model Development”. This computational fluid dynamics (CFD) model aims to predict the temperature and bubble volume fraction in an aqueous solution of uranium. These values affect the reactivity of the fissile solution, so it is important to be able to calculate them and determine their effects on the reaction. Part A of this report describes some of the parameter comparisons performed on the CFD model using Fluent. Part B describes the coupling of the Fluent model with a Monte-Carlo N-Particle (MCNP) neutron transport model. The fuel tank geometry is the same as it was in the May 2015 report, annular with a thickness-to-height ratio of 0.16. An accelerator-driven neutron source provides the excitation for the reaction, and internal and external water cooling channels remove the heat. The model used in this work incorporates the Eulerian multiphase model with lift, wall lubrication, turbulent dispersion and turbulence interaction. The buoyancy-driven flow is modeled using the Boussinesq approximation, and the flow turbulence is determined using the k-ω Shear-Stress-Transport (SST) model. The dispersed turbulence multiphase model is employed to capture the multiphase turbulence effects.

  8. Sound Transmission Validation and Sensitivity Studies in Numerical Models.

    PubMed

    Oberrecht, Steve P; Krysl, Petr; Cranford, Ted W

    2016-01-01

    In 1974, Norris and Harvey published an experimental study of sound transmission into the head of the bottlenose dolphin. We used this rare source of data to validate our Vibroacoustic Toolkit, an array of numerical modeling simulation tools. Norris and Harvey provided measurements of received sound pressure in various locations within the dolphin's head from a sound source that was moved around the outside of the head. Our toolkit was used to predict the curves of pressure with the best-guess input data (material properties, transducer and hydrophone locations, and geometry of the animal's head). In addition, we performed a series of sensitivity analyses (SAs). SA is concerned with understanding how input changes to the model influence the outputs. SA can enhance understanding of a complex model by finding and analyzing unexpected model behavior, discriminating which inputs have a dominant effect on particular outputs, exploring how inputs combine to affect outputs, and gaining insight as to what additional information improves the model's ability to predict. Even when a computational model does not adequately reproduce the behavior of a physical system, its sensitivities may be useful for developing inferences about key features of the physical system. Our findings may become a valuable source of information for modeling the interactions between sound and anatomy.
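
    A minimal sketch of one-at-a-time sensitivity analysis, the simplest form of the SA idea described here, applied to a toy stand-in output rather than the Vibroacoustic Toolkit: each input is perturbed by plus or minus 10% and the relative change in the output is recorded.

      import numpy as np

      def received_pressure(density, sound_speed, source_distance):
          """Toy output standing in for a vibroacoustic prediction (hypothetical)."""
          return density * sound_speed / (4.0 * np.pi * source_distance**2)

      nominal = {"density": 1025.0, "sound_speed": 1500.0, "source_distance": 0.3}
      base = received_pressure(**nominal)

      for name, value in nominal.items():
          for factor in (0.9, 1.1):
              perturbed = dict(nominal, **{name: value * factor})
              out = received_pressure(**perturbed)
              print(f"{name} x{factor:.1f}: output changes {100*(out-base)/base:+.1f}%")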

  9. Volumetric Intraoperative Brain Deformation Compensation: Model Development and Phantom Validation

    PubMed Central

    DeLorenzo, Christine; Papademetris, Xenophon; Staib, Lawrence H.; Vives, Kenneth P.; Spencer, Dennis D.; Duncan, James S.

    2012-01-01

    During neurosurgery, nonrigid brain deformation may affect the reliability of tissue localization based on preoperative images. To provide accurate surgical guidance in these cases, preoperative images must be updated to reflect the intraoperative brain. This can be accomplished by warping these preoperative images using a biomechanical model. Due to the possible complexity of this deformation, intraoperative information is often required to guide the model solution. In this paper, a linear elastic model of the brain is developed to infer volumetric brain deformation associated with measured intraoperative cortical surface displacement. The developed model relies on known material properties of brain tissue, and does not require further knowledge about intraoperative conditions. To provide an initial estimation of volumetric model accuracy, as well as determine the model’s sensitivity to the specified material parameters and surface displacements, a realistic brain phantom was developed. Phantom results indicate that the linear elastic model significantly reduced localization error due to brain shift, from >16 mm to under 5 mm, on average. In addition, though in vivo quantitative validation is necessary, preliminary application of this approach to images acquired during neocortical epilepsy cases confirms the feasibility of applying the developed model to in vivo data. PMID:22562728

  10. Evaluation and cross-validation of Environmental Models

    NASA Astrophysics Data System (ADS)

    Lemaire, Joseph

    Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a Commission of professional experts appointed by an established International Union or Association (e.g. IAGA for Geomagnetism and Aeronomy, . . . ) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given, indicating that different values for the Earth radius have been employed in different data processing laboratories, institutes or agencies, to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to the other, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model which had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more of them to be uncovered by careful, independent examination and benchmarking. A meter prototype, the standard unit of length, was adopted on 20 May 1875 during the Diplomatic Conference of the Meter and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent wild, uncontrolled dissemination of pseudo Environmental Models and Standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the market place have been built consistently with the same units system, and that they are based on identical definitions for the coordinate systems, etc... Therefore

  11. The Atmospheric Radionuclide Transport Model (ARTM) - Validation of a long-term atmospheric dispersion model

    NASA Astrophysics Data System (ADS)

    Hettrich, Sebastian; Wildermuth, Hans; Strobl, Christopher; Wenig, Mark

    2016-04-01

    In the last couple of years, the Atmospheric Radionuclide Transport Model (ARTM) has been developed by the German Federal Office for Radiation Protection (BfS) and the Society for Plant and Reactor Security (GRS). ARTM is an atmospheric dispersion model for continuous long-term releases of radionuclides into the atmosphere, based on the Lagrangian particle model. This model, developed in the first place as a more realistic replacement for the outdated Gaussian plume models, is currently being optimised for further scientific purposes to study atmospheric dispersion in short-range scenarios. It includes a diagnostic wind field model, allows for the application of building structures and multiple sources (including linear, 2- and 3-dimensional source geometries), and considers orography and surface roughness. As output it calculates the activity concentration, dry and wet deposition, and can also model the radioactive decay of Rn-222. As such, ARTM needs to undergo an intense validation process. While for short-term and short-range models, which were mainly developed for examining nuclear accidents or explosions, a few measurement data sets are available for validation, data sets for validating long-term models are very sparse and the existing ones mostly prove not to be applicable for validation. Here we present a strategy for the validation of long-term Lagrangian particle models based on the work with ARTM. The first part of our validation study is a comprehensive analysis of the model's sensitivity to different parameters (e.g. simulation grid resolution, starting random number, number of simulation particles). This study provides a good estimation of the uncertainties of the simulation results and consequently can be used to generate model outputs comparable to the available measurement data at various distances from the emission source. This comparison between measurement data from selected scenarios and simulation results

  12. Experimental validation of a numerical model for subway induced vibrations

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Degrande, G.; Lombaert, G.

    2009-04-01

    This paper presents the experimental validation of a coupled periodic finite element-boundary element model for the prediction of subway induced vibrations. The model fully accounts for the dynamic interaction between the train, the track, the tunnel and the soil. The periodicity or invariance of the tunnel and the soil in the longitudinal direction is exploited using the Floquet transformation, which allows for an efficient formulation in the frequency-wavenumber domain. A general analytical formulation is used to compute the response of three-dimensional invariant or periodic media that are excited by moving loads. The numerical model is validated by means of several experiments that have been performed at a site in Regent's Park on the Bakerloo line of London Underground. Vibration measurements have been performed on the axle boxes of the train, on the rail, the tunnel invert and the tunnel wall, and in the free field, both at the surface and at a depth of 15 m. Prior to these vibration measurements, the dynamic soil characteristics and the track characteristics have been determined. The Bakerloo line tunnel of London Underground has been modelled using the coupled periodic finite element-boundary element approach and free field vibrations due to the passage of a train at different speeds have been predicted and compared to the measurements. The correspondence between the predicted and measured response in the tunnel is reasonably good, although some differences are observed in the free field. The discrepancies are explained on the basis of various uncertainties involved in the problem. The variation in the response with train speed is similar for the measurements as well as the predictions. This study demonstrates the applicability of the coupled periodic finite element-boundary element model to make realistic predictions of the vibrations from underground railways.

  13. Short-Term Mortality Prediction for Acute Lung Injury Patients: External Validation of the ARDSNet Prediction Model

    PubMed Central

    Damluji, Abdulla; Colantuoni, Elizabeth; Mendez-Tellez, Pedro A.; Sevransky, Jonathan E.; Fan, Eddy; Shanholtz, Carl; Wojnar, Margaret; Pronovost, Peter J.; Needham, Dale M.

    2011-01-01

    Objective An independent cohort of acute lung injury (ALI) patients was used to evaluate the external validity of a simple prediction model for short-term mortality previously developed using data from ARDS Network (ARDSNet) trials. Design, Setting, and Patients Data for external validation were obtained from a prospective cohort study of ALI patients from 13 ICUs at four teaching hospitals in Baltimore, Maryland. Measurements and Main Results Of the 508 non-trauma, ALI patients eligible for this analysis, 234 (46%) died in-hospital. Discrimination of the ARDSNet prediction model for in-hospital mortality, evaluated by the area under the receiver operating characteristic curve (AUC), was 0.67 for our external validation dataset versus 0.70 and 0.68 using APACHE II and the ARDSNet validation dataset, respectively. In evaluating calibration of the model, predicted versus observed in-hospital mortality for the external validation dataset was similar for both low risk (ARDSNet model score = 0) and high risk (score = 3 or 4+) patient strata. However, for intermediate risk (score = 1 or 2) patients, observed in-hospital mortality was substantially higher than predicted mortality (25.3% vs. 16.5% and 40.6% vs. 31.0% for score = 1 and 2, respectively). Sensitivity analyses limiting our external validation data set to only those patients meeting the ARDSNet trial eligibility criteria and to those who received mechanical ventilation in compliance with the ARDSNet ventilation protocol did not substantially change the model's discrimination or improve its calibration. Conclusions Evaluation of the ARDSNet prediction model using an external ALI cohort demonstrated similar discrimination of the model as was observed with the ARDSNet validation dataset. However, there were substantial differences in observed versus predicted mortality among intermediate risk ALI patients. The ARDSNet model provided reasonable, but imprecise, estimates of predicted mortality when applied to our
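
    The two checks reported here, discrimination (AUC) and calibration within risk-score strata, can be computed as in the following sketch on synthetic outcomes (not the study's data); scikit-learn's roc_auc_score supplies the AUC.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      score = rng.integers(0, 5, 508)                          # risk score 0..4
      p_pred = np.array([0.10, 0.17, 0.31, 0.48, 0.62])[score] # model-predicted mortality
      died = rng.random(508) < np.clip(p_pred + 0.05, 0, 1)    # synthetic observed outcomes

      print(f"AUC: {roc_auc_score(died, p_pred):.2f}")         # discrimination
      for s in range(5):                                       # calibration by stratum
          mask = score == s
          print(f"score {s}: predicted {p_pred[mask].mean():.2f}, "
                f"observed {died[mask].mean():.2f} (n={mask.sum()})")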

  14. Non-Linear Slosh Damping Model Development and Validation

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; West, Jeff

    2015-01-01

    Propellant tank slosh dynamics are typically represented by a spring-mass-damper mechanical model. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially-filled smooth wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is a potential source of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only in the linear regime, where the slosh amplitude is small. With the increase of slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and to quantify the dependence of slosh damping on slosh amplitude. Accurately predicting the extremely low damping value of a smooth wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool. One must resolve thin boundary layers near the wall and limit numerical damping to a minimum. This computational study demonstrates that with proper grid resolution, CFD can indeed accurately predict the low damping physics from smooth walls in the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreement. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of the slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship. This discovery can
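
    The amplitude dependence described here suggests a piecewise damping law: a constant ratio below a critical amplitude, increasing linearly above it. A minimal sketch with assumed parameter values (the critical amplitude and slope are illustrative, not the study's fitted numbers):

      import numpy as np

      zeta0 = 0.0005      # linear-regime damping ratio (~0.05%)
      a_crit = 0.02       # critical slosh amplitude / tank radius (assumed)
      slope = 0.05        # growth of damping ratio with amplitude (assumed)

      def damping_ratio(amplitude):
          """Piecewise model: constant, then linear in slosh amplitude."""
          return np.where(amplitude <= a_crit,
                          zeta0,
                          zeta0 + slope * (amplitude - a_crit))

      for a in (0.005, 0.02, 0.05, 0.10):
          print(f"amplitude {a:.3f} -> damping ratio {float(damping_ratio(a)):.4f}")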

  15. Nonlinear ultrasound modelling and validation of fatigue damage

    NASA Astrophysics Data System (ADS)

    Fierro, G. P. Malfense; Ciampa, F.; Ginzburg, D.; Onder, E.; Meo, M.

    2015-05-01

    Nonlinear ultrasound techniques have shown greater sensitivity to microcracks and can be used to detect structural damage at an early stage. However, there is still a lack of numerical models available in commercial finite element analysis (FEA) tools that are able to simulate the interaction of elastic waves with a material's nonlinear behaviour. In this study, a nonlinear constitutive material model was developed to predict the structural response under continuous harmonic excitation of a fatigued isotropic sample that showed anharmonic effects. Particularly, by means of Landau's theory and Kelvin tensorial representation, this model provided an understanding of elastic nonlinear phenomena such as second harmonic generation in three-dimensional solid media. The numerical scheme was implemented and evaluated using the commercially available FEA software LS-DYNA, and it showed a good numerical characterisation of the second harmonic amplitude generated by the damaged region, known as the nonlinear response area (NRA). Since this process requires only the experimental second-order nonlinear parameter and a rough damage size estimation as input, it does not need any baseline testing with the undamaged structure or any dynamic modelling of the fatigue crack growth. To validate this numerical model, the second-order nonlinear parameter was experimentally evaluated at various points over the fatigue life of an aluminium (AA6082-T6) coupon and the crack propagation was measured using an optical microscope. A good correlation was achieved between the experimental set-up and the nonlinear constitutive model.
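
    The second-order nonlinear parameter tracked in such studies is commonly estimated from the fundamental and second-harmonic spectral amplitudes as beta ~ A2/A1^2; a minimal sketch on a synthetic steady-state waveform (the frequencies and harmonic amplitude are placeholders):

      import numpy as np

      fs = 5e6                       # sampling rate, Hz (assumed)
      f0 = 100e3                     # excitation frequency, Hz (assumed)
      t = np.arange(0, 2e-3, 1/fs)
      # synthetic response: unit fundamental plus a small second harmonic
      signal = 1.0*np.sin(2*np.pi*f0*t) + 0.004*np.sin(2*np.pi*2*f0*t)

      spec = np.abs(np.fft.rfft(signal)) / (len(t)/2)   # amplitude spectrum
      freqs = np.fft.rfftfreq(len(t), 1/fs)
      a1 = spec[np.argmin(np.abs(freqs - f0))]          # fundamental amplitude
      a2 = spec[np.argmin(np.abs(freqs - 2*f0))]        # second-harmonic amplitude
      print(f"A1 = {a1:.3f}, A2 = {a2:.4f}, beta_rel = {a2/a1**2:.4f}")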

  16. Validation of document image defect models for optical character recognition

    SciTech Connect

    Li, Y.; Lopresti, D.; Tomkins, A.

    1994-12-31

    In this paper we consider the problem of evaluating models for physical defects affecting the optical character recognition (OCR) process. While a number of such models have been proposed, the contention that they produce the desired result is typically argued in an ad hoc and informal way. We introduce a rigorous and more pragmatic definition of when a model is accurate: we say a defect model is validated if the OCR errors induced by the model are effectively indistinguishable from the errors encountered when using real scanned documents. We present two measures to quantify this similarity: the Vector Space method and the Coin Bias method. The former adapts an approach used in information retrieval, the latter simulates an observer attempting to do better than a "random" guesser. We compare and contrast the two techniques based on experimental data; both seem to work well, suggesting this is an appropriate formalism for the development and evaluation of document image defect models.
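
    The Vector Space method named here amounts to representing the errors induced by a defect model and the errors from real scans as frequency vectors and comparing them by cosine similarity, as in information retrieval. A minimal sketch with invented confusion counts:

      import math

      def cosine(u, v):
          """Cosine similarity between two sparse frequency vectors (dicts)."""
          keys = set(u) | set(v)
          dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
          nu = math.sqrt(sum(x * x for x in u.values()))
          nv = math.sqrt(sum(x * x for x in v.values()))
          return dot / (nu * nv)

      # error "confusions" (truth -> OCR output) and their frequencies (illustrative)
      real_errors = {("e", "c"): 31, ("rn", "m"): 17, ("l", "1"): 12, ("o", "0"): 9}
      model_errors = {("e", "c"): 27, ("rn", "m"): 20, ("l", "1"): 10, ("h", "b"): 5}

      print(f"similarity: {cosine(real_errors, model_errors):.3f}")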

  17. Modeling and Validation of a Propellant Mixer for Controller Design

    NASA Technical Reports Server (NTRS)

    Richter, Hanz; Barbieri, Enrique; Figueroa, Fernando

    2003-01-01

    A mixing chamber used in rocket engine testing at the NASA Stennis Space Center is modelled by a system of two nonlinear ordinary differential equations. The mixer is used to condition the thermodynamic properties of cryogenic liquid propellant by controlled injection of the same substance in the gaseous phase. The three inputs of the mixer are the positions of the valves regulating the liquid and gas flows at the inlets, and the position of the exit valve regulating the flow of conditioned propellant. Mixer operation during a test requires the regulation of its internal pressure, exit mass flow, and exit temperature. A mathematical model is developed to facilitate subsequent controller designs. The model must be simple enough to lend itself to subsequent feedback controller design, yet its accuracy must be tested against real data. For this reason, the model includes function calls to thermodynamic property data. Some structural properties of the resulting model that pertain to controller design, such as uniqueness of the equilibrium point, feedback linearizability and local stability are shown to hold under conditions having direct physical interpretation. The existence of fixed valve positions that attain a desired operating condition is also shown. Validation of the model against real data is likewise provided.
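
    A minimal sketch of a two-state mixer model in the spirit of this record, with pressure and temperature states driven by three valve inputs. The right-hand side is a toy surrogate with made-up coefficients, not the paper's thermodynamic-property-based model.

      import numpy as np
      from scipy.integrate import solve_ivp

      def mixer_rhs(t, x, u_liq, u_gas, u_exit):
          p, T = x                                    # pressure [Pa], temperature [K]
          m_in_gas = 2.0 * u_gas                      # kg/s, toy valve laws
          m_in_liq = 5.0 * u_liq
          m_out = 4.0 * u_exit * np.sqrt(max(p, 0.0) / 1e6)
          dp = 4e4 * (m_in_gas + m_in_liq - m_out)    # lumped compressibility
          dT = 0.8 * m_in_gas * (250.0 - T) + 0.5 * m_in_liq * (90.0 - T)
          return [dp, dT]

      # fixed valve positions (liquid, gas, exit); integrate 20 s of operation
      sol = solve_ivp(mixer_rhs, (0.0, 20.0), [1.0e6, 110.0],
                      args=(0.4, 0.3, 0.5), max_step=0.1)
      print(f"final pressure {sol.y[0, -1]/1e6:.2f} MPa, "
            f"final temperature {sol.y[1, -1]:.1f} K")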

  18. Markov models of molecular kinetics: generation and validation.

    PubMed

    Prinz, Jan-Hendrik; Wu, Hao; Sarich, Marco; Keller, Bettina; Senne, Martin; Held, Martin; Chodera, John D; Schütte, Christof; Noé, Frank

    2011-05-01

    Markov state models of molecular kinetics (MSMs), in which the long-time statistical dynamics of a molecule is approximated by a Markov chain on a discrete partition of configuration space, have seen widespread use in recent years. This approach has many appealing characteristics compared to straightforward molecular dynamics simulation and analysis, including the potential to mitigate the sampling problem by extracting long-time kinetic information from short trajectories and the ability to straightforwardly calculate expectation values and statistical uncertainties of various stationary and dynamical molecular observables. In this paper, we summarize the current state of the art in generation and validation of MSMs and give some important new results. We describe an upper bound for the approximation error made by modeling molecular dynamics with an MSM and we show that this error can be made arbitrarily small with surprisingly little effort. In contrast to previous practice, it becomes clear that the best MSM is not obtained by the most metastable discretization, but the MSM can be much improved if non-metastable states are introduced near the transition states. Moreover, we show that it is not necessary to resolve all slow processes by the state space partitioning, but individual dynamical processes of interest can be resolved separately. We also present an efficient estimator for reversible transition matrices and a robust test to validate that an MSM reproduces the kinetics of the molecular dynamics data.
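
    A minimal sketch of MSM generation and one standard validation check: estimate a row-stochastic transition matrix from a discrete trajectory at lag tau, then test Chapman-Kolmogorov consistency, T(2 tau) ~ T(tau)^2. The three-state chain below is synthetic; this illustrates the count-matrix estimator, not the paper's reversible estimator.

      import numpy as np

      rng = np.random.default_rng(0)
      T_true = np.array([[0.95, 0.05, 0.00],
                         [0.05, 0.90, 0.05],
                         [0.00, 0.10, 0.90]])
      traj = [0]
      for _ in range(100_000):                      # sample a discrete trajectory
          traj.append(rng.choice(3, p=T_true[traj[-1]]))
      traj = np.array(traj)

      def estimate(traj, lag, n_states=3):
          """Row-normalized transition-count matrix at the given lag time."""
          counts = np.zeros((n_states, n_states))
          np.add.at(counts, (traj[:-lag], traj[lag:]), 1.0)
          return counts / counts.sum(axis=1, keepdims=True)

      t1, t2 = estimate(traj, 1), estimate(traj, 2)
      print("CK deviation:", np.abs(t1 @ t1 - t2).max())   # small if Markovian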

  19. Validation of hydrogen gas stratification and mixing models

    DOE PAGES

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling based one-dimensional method to achieve large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreements are observed. The entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with the Froude number of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.
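
    The scaling-based one-dimensional approach can be pictured with the classic top-hat jet equations, integrated below with an entrainment coefficient in the range quoted above; the forms and numbers are illustrative, not the BMIX++ formulation.

```python
# Rough sketch of the one-dimensional jet idea: integrate top-hat equations
# for a round momentum jet with entrainment coefficient ALPHA (assumed toy;
# BMIX++'s formulation and coefficients come from the paper, not this code).
import numpy as np
from scipy.integrate import solve_ivp

ALPHA = 0.08   # entrainment coefficient, in the range reported above

def jet_rhs(z, y):
    Q, M = y                        # volume flux Q = b^2*w, momentum flux M = b^2*w^2
    b = Q / np.sqrt(M)              # top-hat radius
    w = M / Q                       # top-hat velocity
    dQ = 2.0 * ALPHA * b * w        # entrainment of ambient fluid
    dM = 0.0                        # pure (non-buoyant) jet: momentum conserved
    return [dQ, dM]

sol = solve_ivp(jet_rhs, (0.01, 2.0), [1e-4, 1e-3])
print(sol.y[0, -1])   # volume flux after 2 m: dilution grows with distance
```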

  1. Modelling and validation of multiple reflections for enhanced laser welding

    NASA Astrophysics Data System (ADS)

    Milewski, J.; Sklar, E.

    1996-05-01

    The effects of multiple internal reflections within a laser weld joint as functions of joint geometry and processing conditions have been characterized. A computer-based ray tracing model is used to predict the reflective propagation of laser beam energy focused into the narrow gap of a metal joint for the purpose of predicting the location of melting and coalescence to form a weld. Quantitative comparisons are made between simulation cases. Experimental results are provided for qualitative model validation. This method is proposed as a way to enhance process efficiency and design laser welds which display deep penetration and high depth-to-width aspect ratios without high powered systems or keyhole mode melting.
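
    The geometric essence of multiple internal reflection in a narrow joint can be sketched with a toy wedge model: each specular bounce rotates the ray by twice the groove half-angle, and a fixed absorptivity deposits energy at every wall strike. All values below are assumptions, not the paper's.

```python
# Toy 2-D ray bounce in a V-groove joint, in the spirit of the ray-tracing
# model described (geometry, absorptivity, and ray angle are made up here).
import numpy as np

HALF_ANGLE = np.radians(5.0)     # half-angle of the V groove
ABSORPTIVITY = 0.35              # fraction of ray energy absorbed per wall hit

def bounces_to_escape(theta_in, half_angle):
    """Each specular reflection in a wedge rotates the ray by 2*half_angle,
    so count reflections until the ray direction turns back out of the groove."""
    n, theta = 0, theta_in
    while theta < np.pi / 2:
        theta += 2 * half_angle
        n += 1
    return n

energy = 1.0
deposited = []
for k in range(bounces_to_escape(np.radians(10.0), HALF_ANGLE)):
    absorbed = ABSORPTIVITY * energy
    deposited.append(absorbed)      # energy left at the k-th wall strike
    energy -= absorbed
print(deposited, f"escaping fraction = {energy:.3f}")
```

    Even this toy shows why narrow gaps yield deep deposition: a shallow groove forces many bounces, so most beam energy is absorbed along the joint walls before the ray can escape.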

  2. Interactive simulation of embolization coils: modeling and experimental validation.

    PubMed

    Dequidt, Jérémie; Marchal, Maud; Duriez, Christian; Kerrien, Erwan; Cotin, Stéphane

    2008-01-01

    Coil embolization offers a new approach to treat aneurysms. This medical procedure is notably less invasive than open surgery, as it relies on the deployment of very thin platinum-based wires within the aneurysm through the arteries. When performed intracranially, this procedure must be particularly accurate and therefore carefully planned and performed by experienced radiologists. A simulator of coil deployment represents an interesting and helpful tool for the physician by providing information on the coil behavior. In this paper, an original model is proposed to obtain interactive and accurate simulations of coil deployment. The model takes into account geometric nonlinearities and uses a shape memory formulation to describe its complex geometry. An experimental validation is performed in a contact-free environment to identify the mechanical properties of the coil and to quantitatively compare the simulation with real data. Computational performances are also measured to ensure an interactive simulation. PMID:18979807

  3. Defect distribution model validation and effective process control

    NASA Astrophysics Data System (ADS)

    Zhong, Lei

    2003-07-01

    Assumption of the underlying probability distribution is an essential part of effective process control. In this article, we demonstrate how to improve the effectiveness of equipment monitoring and process induced defect control through properly selecting, validating and using the hypothetical distribution models. The testing method is based on probability plotting, which is made possible through order statistics. Since each ordered sample data point has a cumulative probability associated with it, which is calculated as a function of sample size, the assumption validity is readily judged by the linearity of the ordered sample data versus the deviate predicted by the assumed statistical model from the cumulative probability. A comparison is made between normal and lognormal distributions to illustrate how dramatically the distribution model could affect the control limit setting. Examples presented include defect data collected on the SP1 dark-field inspection tool on a variety of deposited and polished metallic and dielectric films. We find that the defect count distribution is in most cases approximately lognormal. We show that normal distribution is an inadequate assumption, as clearly indicated by the non-linearity of the probability plots. Misuse of normal distribution leads to a too optimistic process control limit, typically 50% tighter than suggested by the lognormal distribution. The inappropriate control limit setting consequently results in an excursion rate at a level too high to be manageable. Lognormal distribution is a valid assumption because it is positively skewed, which adequately takes into account the fact that defect count distribution is typically characteristic of a long tail. In essence, use of lognormal distribution is a suggestion that the long tail be treated as part of the process entitlement (capability) instead of process excursion. The adjustment of the expected process entitlement is reflected and quantified by the skewness of
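
    A hedged sketch of the probability-plotting test described above: generate long-tailed counts, compare the linearity (correlation coefficient) of normal versus lognormal probability plots, and set a control limit from the better-fitting lognormal model.

```python
# Sketch of the distribution-model check described: compare how linear the
# probability plot is under normal vs. lognormal assumptions (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
defect_counts = rng.lognormal(mean=2.0, sigma=0.7, size=200)   # long-tailed, like defect data

# probplot returns (osm, osr), (slope, intercept, r): r measures linearity
_, (_, _, r_normal) = stats.probplot(defect_counts, dist="norm")
_, (_, _, r_lognorm) = stats.probplot(np.log(defect_counts), dist="norm")
print(f"normal fit r = {r_normal:.4f}, lognormal fit r = {r_lognorm:.4f}")

# A 3-sigma upper control limit under the better (lognormal) model:
mu, sigma = np.log(defect_counts).mean(), np.log(defect_counts).std()
print(f"upper control limit ~ {np.exp(mu + 3 * sigma):.1f} defects")
```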

  4. On The Modeling of Educational Systems: II

    ERIC Educational Resources Information Center

    Grauer, Robert T.

    1975-01-01

    A unified approach to model building is developed from the separate techniques of regression, simulation, and factorial design. The methodology is applied in the context of a suburban school district. (Author/LS)

  5. Nyala and Bushbuck II: A Harvesting Model.

    ERIC Educational Resources Information Center

    Fay, Temple H.; Greeff, Johanna C.

    1999-01-01

    Adds a cropping or harvesting term to the animal overpopulation model developed in Part I of this article. Investigates various harvesting strategies that might suggest a solution to the overpopulation problem without actually culling any animals. (ASK)
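
    Such models typically take the logistic-growth-plus-harvesting form sketched below; the parameters and the two strategies shown (fixed quota versus fixed effort) are generic illustrations, not taken from the article.

```python
# Generic logistic growth with a harvesting term, the standard form such
# models take (parameters are illustrative, not from the article).
from scipy.integrate import solve_ivp

r, K, H = 0.3, 1000.0, 50.0          # growth rate, carrying capacity, annual cull

def rhs(t, N, harvest):
    return r * N * (1 - N / K) - harvest(t, N)

constant = lambda t, N: H            # fixed-quota harvesting
proportional = lambda t, N: 0.1 * N  # fixed-effort harvesting

for name, h in [("quota", constant), ("effort", proportional)]:
    sol = solve_ivp(rhs, (0, 50), [800.0], args=(h,))
    print(name, sol.y[0, -1])        # long-run population under each strategy
```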

  6. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  7. Validating the Thinking Styles Inventory-Revised II among Chinese university students with hearing impairment through test accommodations.

    PubMed

    Cheng, Sanyin; Zhang, Li-Fang

    2014-01-01

    The present study pioneered in adopting test accommodations to validate the Thinking Styles Inventory-Revised II (TSI-R2; Sternberg, Wagner, & Zhang, 2007) among Chinese university students with hearing impairment. A series of three studies were conducted that drew their samples from the same two universities, in which accommodating test directions (N = 213), combining test directions with language accommodations from students' perspectives (N = 366), and integrating test directions with language accommodations from teachers' perspectives (N = 129) were used. The accommodated TSI-R2 generally indicated acceptable internal scale reliabilities and factorial validity for Chinese university students with hearing loss. Limitations in relation to the study participants are discussed, as well as test accommodations and the significance and implications of the study.

  8. Literature-derived bioaccumulation models for earthworms: Development and validation

    SciTech Connect

    Sample, B.E.; Suter, G.W. II; Beauchamp, J.J.; Efroymson, R.A.

    1999-09-01

    Estimation of contaminant concentrations in earthworms is a critical component in many ecological risk assessments. Without site-specific data, literature-derived uptake factors or models are frequently used. Although considerable research has been conducted on contaminant transfer from soil to earthworms, most studies focus on only a single location. External validation of transfer models has not been performed. The authors developed a database of soil and tissue concentrations for nine inorganic and two organic chemicals. Only studies that presented total concentrations in depurated earthworms were included. Uptake factors and simple and multiple regression models of natural-log-transformed concentrations of each analyte in soil and earthworms were developed using data from 26 studies. These models were then applied to data from six additional studies. Estimated and observed earthworm concentrations were compared using nonparametric Wilcoxon signed-rank tests. Relative accuracy and quality of different estimation methods were evaluated by calculating the proportional deviation of the estimate from the measured value. With the exception of Cr, significant, single-variable (e.g., soil concentration) regression models were fit for each analyte. Inclusion of soil Ca improved model fits for Cd and Pb. Soil pH only marginally improved model fits. The best general estimates of chemical concentrations in earthworms were generated by simple ln-ln regression models for As, Cd, Cu, Hg, Mn, Pb, Zn, and polychlorinated biphenyls. No method accurately estimated Cr or Ni in earthworms. Although multiple regression models including pH generated better estimates for a few analytes, in general, the predictive utility gained by incorporating environmental variables was marginal.
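
    The simple ln-ln regression form referred to above can be sketched as follows, with hypothetical soil and tissue concentrations standing in for the database values.

```python
# Sketch of the ln-ln regression form used for the uptake models:
# ln(C_worm) = b0 + b1 * ln(C_soil), fit and back-transformed (toy data).
import numpy as np
from scipy import stats

c_soil = np.array([5., 12., 30., 75., 150., 400.])    # mg/kg, hypothetical
c_worm = np.array([8., 15., 33., 60., 110., 260.])    # mg/kg, hypothetical

b1, b0, r, p, se = stats.linregress(np.log(c_soil), np.log(c_worm))
print(f"ln-ln model: b0={b0:.2f}, b1={b1:.2f}, r^2={r**2:.3f}")

def predict_worm(c):
    return np.exp(b0) * c ** b1     # back-transformed power-law form

print(predict_worm(50.0))           # estimated tissue concentration at 50 mg/kg soil
```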

  9. Systematic approach to verification and validation: High explosive burn models

    SciTech Connect

    Menikoff, Ralph; Scovel, Christina A.

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implemented their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code
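
    The automation idea, reading key experimental parameters from a standardized data-file header and emitting a simulation input deck, might look like the following; the header format and deck keywords are invented for illustration.

```python
# Hypothetical sketch of the automation idea: read experiment parameters from
# a data-file header and emit a simulation input deck (formats invented here).
def read_header(path):
    """Parse '# key = value' header lines of the kind a standard format might use."""
    meta = {}
    with open(path) as f:
        for line in f:
            if not line.startswith("#"):
                break
            key, _, value = line[1:].partition("=")
            meta[key.strip()] = value.strip()
    return meta

def write_input_deck(meta, out_path):
    with open(out_path, "w") as f:
        f.write(f"explosive = {meta['explosive']}\n")
        f.write(f"density   = {meta['density_g_cc']}\n")
        f.write(f"pressure  = {meta['impact_pressure_GPa']}\n")

# In practice read_header would supply `meta` from a database file; a literal
# dict stands in here so the sketch runs on its own.
meta = {"explosive": "PBX-9502", "density_g_cc": "1.89", "impact_pressure_GPa": "12"}
write_input_deck(meta, "run.in")   # one deck per experiment enables batch V&V runs
```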

  10. Validation of DWPF Melter Off-Gas Combustion Model

    SciTech Connect

    Choi, A.S.

    2000-08-23

    The empirical melter off-gas combustion model currently used in the DWPF safety basis calculations is valid at melter vapor space temperatures above 570 degrees C, as measured in the thermowell. This lower temperature bound coincides with that of the off-gas data used as the basis of the model. In this study, the applicability of the empirical model in a wider temperature range was assessed using the off-gas data collected during two small-scale research melter runs. The first data set came from the Small Cylindrical Melter-2 run in 1985 with the sludge feed coupled with the precipitate hydrolysis product. The second data set came from the 774-A melter run in 1996 with the sludge-only feed prepared with the modified acid addition strategy during the feed pretreatment step. The results of the assessment showed that the data from these two melter runs agreed well with the existing model, and further provided the basis for extending the lower temperature bound of the model to the measured melter vapor space temperature of 445 degrees C.

  11. Validation of two-equation turbulence models for propulsion flowfields

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Venkateswaran, S.; Merkle, Charles L.

    1994-01-01

    The objective of the study is to assess the capability of two-equation turbulence models for simulating propulsion-related flowfields. The standard kappa-epsilon model with Chien's low Reynolds number formulation for near-wall effects is used as the baseline turbulence model. Several experimental test cases, representative of rocket combustor internal flowfields, are used to catalog the performance of the baseline model. Specific flowfields considered here include recirculating flow behind a backstep, mixing between coaxial jets and planar shear layers. Since turbulence solutions are notoriously dependent on grid and numerical methodology, the effects of grid refinement and artificial dissipation on numerical accuracy are studied. In the latter instance, computational results obtained with several central-differenced and upwind-based formulations are compared. Based on these results, improved turbulence models such as enhanced kappa-epsilon models as well as other two-equation formulations (e.g., kappa-omega) are being studied. In addition, validation of swirling and reacting flowfields is also currently underway.

  12. Canyon building ventilation system dynamic model -- Parameters and validation

    SciTech Connect

    Moncrief, B.R. ); Chen, F.F.K. )

    1993-01-01

    Plant system simulation crosses many disciplines. At the core is the mimic of key components in the form of mathematical "models." These component models are functionally integrated to represent the plant. With today's low-cost, high-capacity computers, the whole plant can be truly and effectively reproduced in a computer model. Dynamic simulation has its roots in "single loop" design, which is still a common objective in the employment of simulation. The other common objectives are the ability to preview plant operation, to anticipate problem areas, and to test the impact of design options. As plant system complexity increases and our ability to simulate the entire plant grows, the objective to optimize plant system design becomes practical. This shift in objectives from problem avoidance to total optimization by far offers the most rewarding potential. Even a small reduction in bulk materials and space can sufficiently justify the application of this technology. Furthermore, to realize an optimal plant starts from a tight and disciplined design. We believe the assurance required to execute such a design strategy can partly be derived from a plant model. This paper reports on the application of a dynamic model to evaluate the capacity of an existing production plant ventilation system. This study met the practical objectives of capacity evaluation under present and future conditions, and under normal and accidental situations. More importantly, the description of this application, in its methods and its utility, aims to validate the technology of dynamic simulation in the environment of plant system design and safe operation.

  13. Validation of a Global Hydrodynamic Flood Inundation Model

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Smith, A.; Sampson, C. C.; Alfieri, L.; Neal, J. C.

    2014-12-01

    In this work we present first validation results for a hyper-resolution global flood inundation model. We use a true hydrodynamic model (LISFLOOD-FP) to simulate flood inundation at 1km resolution globally and then use downscaling algorithms to determine flood extent and depth at 90m spatial resolution. Terrain data are taken from a custom version of the SRTM data set that has been processed specifically for hydrodynamic modelling. Return periods of flood flows along the entire global river network are determined using: (1) empirical relationships between catchment characteristics and index flood magnitude in different hydroclimatic zones derived from global runoff data; and (2) an index flood growth curve, also empirically derived. Bankful return period flow is then used to set channel width and depth, and flood defence impacts are modelled using empirical relationships between GDP, urbanization and defence standard of protection. The results of these simulations are global flood hazard maps for a number of different return period events from 1 in 5 to 1 in 1000 years. We compare these predictions to flood hazard maps developed by national government agencies in the UK and Germany using similar methods but employing detailed local data, and to observed flood extent at a number of sites including St. Louis, USA and Bangkok in Thailand. Results show that global flood hazard models can have considerable skill given careful treatment to overcome errors in the publicly available data that are used as their input.

  14. A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.

    ERIC Educational Resources Information Center

    Edmonston, Leon P.; Randall, Robert S.

    A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…

  15. NAIRAS aircraft radiation model development, dose climatology, and initial validation

    NASA Astrophysics Data System (ADS)

    Mertens, Christopher J.; Meier, Matthias M.; Brown, Steven; Norman, Ryan B.; Xu, Xiaojing

    2013-10-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis suggests

  16. The Internal Validation of Level II and Level III Respiratory Therapy Examinations. Final Report.

    ERIC Educational Resources Information Center

    Jouett, Michael L.

    This project began with the delineation of the roles and functions of respiratory therapy personnel by the American Association for Respiratory Therapy. In Phase II, The Psychological Corporation used this delineation to develop six proficiency examinations, three at each of two levels. One exam at each level was designated for the purpose of the…

  17. Results of site validation experiments. Volume II. Supporting documents 5 through 14

    SciTech Connect

    Not Available

    1983-01-01

    Volume II contains the following supporting documents: Summary of Geologic Mapping of Underground Investigations; Logging of Vertical Coreholes - "Double Box" Area and Exploratory Drift; WIPP High Precision Gravity Survey; Basic Data Reports for Drillholes; Brine Content of Facility Interval Strata; Mineralogical Content of Facility Interval Strata; Location and Characterization of Interbedded Materials; Characterization of Aquifers at Shaft Locations; and Permeability of Facility Interval Strata.

  18. Science Motivation Questionnaire II: Validation with Science Majors and Nonscience Majors

    ERIC Educational Resources Information Center

    Glynn, Shawn M.; Brickman, Peggy; Armstrong, Norris; Taasoobshirazi, Gita

    2011-01-01

    From the perspective of social cognitive theory, the motivation of students to learn science in college courses was examined. The students--367 science majors and 313 nonscience majors--responded to the Science Motivation Questionnaire II, which assessed five motivation components: intrinsic motivation, self-determination, self-efficacy, career…

  19. PIV validation of blood-heart valve leaflet interaction modelling.

    PubMed

    Kaminsky, R; Dumont, K; Weber, H; Schroll, M; Verdonck, P

    2007-07-01

    The aim of this study was to validate the 2D computational fluid dynamics (CFD) results of a moving heart valve based on a fluid-structure interaction (FSI) algorithm with experimental measurements. Firstly, a pulsatile laminar flow through a monoleaflet valve model with a stiff leaflet was visualized by means of Particle Image Velocimetry (PIV). The inflow data sets were applied to a CFD simulation including blood-leaflet interaction. The measurement section with a fixed leaflet was enclosed into a standard mock loop in series with a Harvard Apparatus Pulsatile Blood Pump, a compliance chamber and a reservoir. Standard 2D PIV measurements were made at a frequency of 60 bpm. Average velocity magnitude results of 36 phase-locked measurements were evaluated at every 10 degrees of the pump cycle. For the CFD flow simulation, a commercially available package from Fluent Inc. was used in combination with an in-house developed FSI code based on the Arbitrary Lagrangian-Eulerian (ALE) method. Then the CFD code was applied to the leaflet to quantify the shear stress on it. Generally, the CFD results are in agreement with the PIV evaluated data in major flow regions, thereby validating the FSI simulation of a monoleaflet valve with a flexible leaflet. The applicability of the new CFD code for quantifying the shear stress on a flexible leaflet is thus demonstrated.

  20. Comparing Validity and Reliability in Special Education Title II and IDEA Data

    ERIC Educational Resources Information Center

    Steinbrecher, Trisha D.; McKeown, Debra; Walther-Thomas, Chriss

    2013-01-01

    Previous researchers have found that special education teacher shortages are pervasive and exacerbated by federal policies regarding "highly qualified" teacher requirements. The authors examined special education teacher personnel data from 2 federal data sources to determine if these sources offer a reliable and valid means of…

  1. Effects of Risperidone on Aberrant Behavior in Persons with Developmental Disabilities: II. Social Validity Measures.

    ERIC Educational Resources Information Center

    McAdam, David B.; Zarcone, Jennifer R.; Hellings, Jessica; Napolitano, Deborah A.; Schroeder, Stephen R.

    2002-01-01

    Consumer satisfaction and social validity were measured during a double-blind, placebo-controlled evaluation of risperidone in treating aberrant behaviors of persons with developmental disabilities. A survey showed all 17 caregivers felt participation was positive. Community members (n=52) also indicated that when on medication, the 5 participants…

  2. Validity of Social, Moral and Emotional Facets of Self-Description Questionnaire II

    ERIC Educational Resources Information Center

    Leung, Kim Chau; Marsh, Herbert W.; Yeung, Alexander Seeshing; Abduljabbar, Adel S.

    2015-01-01

    Studies adopting a construct validity approach can be categorized into within- and between-network studies. Few studies have applied between-network approach and tested the correlations of the social (same-sex relations, opposite-sex relations, parent relations), moral (honesty-trustworthiness), and emotional (emotional stability) facets of the…

  3. The African American Acculturation Scale II: Cross-Validation and Short Form.

    ERIC Educational Resources Information Center

    Landrine, Hope; Klonoff, Elizabeth A.

    1995-01-01

    Studied African American culture, using a new, shortened, 33-item African American Acculturation Scale (AAAS-33) to assess the scale's validity and reliability. Comparisons between the original form and the AAAS-33 reveal high correlations; however, the longer form may be sensitive to some beliefs, practices, and attitudes not assessed by the short…

  4. Criterion Validity, Severity Cut Scores, and Test-Retest Reliability of the Beck Depression Inventory-II in a University Counseling Center Sample

    ERIC Educational Resources Information Center

    Sprinkle, Stephen D.; Lurie, Daphne; Insko, Stephanie L.; Atkinson, George; Jones, George L.; Logan, Arthur R.; Bissada, Nancy N.

    2002-01-01

    The criterion validity of the Beck Depression Inventory-II (BDI-II; A. T. Beck, R. A. Steer, & G. K. Brown, 1996) was investigated by pairing blind BDI-II administrations with the major depressive episode portion of the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I; M. B. First, R. L. Spitzer, M. Gibbon, & J. B. W. Williams,…

  5. Incentive theory: II. Models for choice.

    PubMed

    Killeen, P R

    1982-09-01

    Incentive theory is extended to account for concurrent chained schedules of reinforcement. The basic model consists of additive contributions from the primary and secondary effects of reinforcers, which serve to direct the behavior activated by reinforcement. The activation is proportional to the rate of reinforcement and interacts multiplicatively with the directive effects. The two free parameters are q, the slope of the delay of reinforcement gradient, whose value is constant across many experiments, and b, a bias parameter. The model is shown to provide an excellent description of all results from studies that have varied the terminal-link schedules, and of many of the results from studies that have varied initial-link schedules. The model is extended to diverse modifications of the terminal links, such as varied amount of reinforcement, varied signaling of the terminal-link schedules, and segmentation of the terminal-link schedules. It is demonstrated that incentive theory provides an accurate and integrated account of many of the phenomena of choice.

  6. Bioaerosol optical sensor model development and initial validation

    NASA Astrophysics Data System (ADS)

    Campbell, Steven D.; Jeys, Thomas H.; Eapen, Xuan Le

    2007-04-01

    This paper describes the development and initial validation of a bioaerosol optical sensor model. This model was used to help determine design parameters and estimate performance of a new low-cost optical sensor for detecting bioterrorism agents. In order to estimate sensor performance in detecting biowarfare simulants and rejecting environmental interferents, use was made of a previously reported catalog of EEM (excitation/emission matrix) fluorescence cross-section measurements and previously reported multiwavelength-excitation biosensor modeling work. In the present study, the biosensor modeled employs a single high-power 365 nm UV LED source plus an IR laser diode for particle size determination. The sensor has four output channels: IR size channel, UV elastic channel and two fluorescence channels. The sensor simulation was used to select the fluorescence channel wavelengths of 400-450 and 450-600 nm. Using these selected fluorescence channels, the performance of the sensor in detecting simulants and rejecting interferents was estimated. Preliminary measurements with the sensor are presented which compare favorably with the simulation results.

  7. ADOPT: A Historically Validated Light Duty Vehicle Consumer Choice Model

    SciTech Connect

    Brooker, A.; Gonder, J.; Lopp, S.; Ward, J.

    2015-05-04

    The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy’s Vehicle Technologies Office. It estimates technology improvement impacts on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit method and the mixed logit method to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes including vehicle price, fuel cost, acceleration, range and usable volume. The average importance of several attributes changes nonlinearly across its range and changes with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model, which captures key aspects for summing petroleum use and greenhouse gas emissions. This includes capturing the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, new vehicle option introduction rate limits, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data. It matches in key dimensions including sales by fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use. It manages the inputs, simulation, and results.
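
    At its core, a multinomial-logit sales estimate of the kind ADOPT builds on reduces to weighted attribute utilities passed through a logit share equation, as in this sketch with invented vehicles and weights.

```python
# Minimal multinomial-logit share calculation of the kind ADOPT builds on
# (attribute weights and vehicle data are invented for illustration).
import numpy as np

# columns: price ($1000s), fuel cost (cents/mile), 0-60 time (s), range (100s of miles)
vehicles = np.array([
    [25.0, 10.0, 9.0, 4.0],
    [35.0,  4.0, 8.0, 2.5],    # e.g. an EV: cheap to run, shorter range
    [30.0,  8.0, 7.0, 4.5],
])
weights = np.array([-0.10, -0.08, -0.30, 0.40])   # marginal utilities (assumed)

utility = vehicles @ weights
shares = np.exp(utility) / np.exp(utility).sum()   # logit choice probabilities
print(shares.round(3))    # predicted market shares summing to 1
```

    ADOPT itself goes further, letting attribute importance vary nonlinearly and with income, but the share equation above is the basic mechanism.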

  8. Passive millimeter-wave imaging model application and validation

    NASA Astrophysics Data System (ADS)

    Blume, Bradley T.; Chenault, David B.

    1997-06-01

    The military use of millimeter wave radiometers has been studied since the 1960s. It is only recently that advances in the technology have made passive millimeter wave (PMMW) systems practical. It is well established that metal targets will have a large contrast ratio versus the background in the millimeter wave (MMW) regime and that atmospheric propagation through clouds, fog and light rain is possible. The limitations have been the noise figures of the detectors, the size of the systems, and the cost of the systems. Through the advent of millimeter wave monolithic integrated circuit technology, MMW devices are becoming smaller, more sensitive, and less expensive. In addition many efforts are currently under way to develop PMMW array imaging devices. This renewed interest has likewise brought forth the need for passive millimeter wave system modeling capabilities. To fill this need, Nichols Research Corporation has developed for Eglin AFB a physics-based image synthesis code, capable of modeling the dominant effects in the MMW regime. This code has been developed to support the development of the next generation of PMMW seeker systems. This paper will describe the phenomenology of PMMW signatures, the Irma software, validation of the Irma models and the application of the models to both Air Force and Navy problems.

  9. First principles Candu fuel model and validation experimentation

    SciTech Connect

    Corcoran, E.C.; Kaye, M.H.; Lewis, B.J.; Thompson, W.T.; Akbari, F.; Higgs, J.D.; Verrall, R.A.; He, Z.; Mouris, J.F.

    2007-07-01

    are added mixing terms associated with the appearance of the component species in particular phases. To validate the model, coulometric titration experiments relating the chemical potential of oxygen to the moles of oxygen introduced to SIMFUEL are underway. A description of the apparatus to oxidize and reduce samples in a controlled way in a H{sub 2}/H{sub 2}O mixture is presented. Existing measurements for irradiated fuel, both defected and non-defected, are also being incorporated into the validation process. (authors)

  10. Development, validation and application of numerical space environment models

    NASA Astrophysics Data System (ADS)

    Honkonen, Ilja

    2013-10-01

    Currently the majority of space-based assets are located inside the Earth's magnetosphere where they must endure the effects of the near-Earth space environment, i.e. space weather, which is driven by the supersonic flow of plasma from the Sun. Space weather refers to the day-to-day changes in the temperature, magnetic field and other parameters of the near-Earth space, similarly to ordinary weather which refers to changes in the atmosphere above ground level. Space weather can also cause adverse effects on the ground, for example, by inducing large direct currents in power transmission systems. The performance of computers has been growing exponentially for many decades and as a result the importance of numerical modeling in science has also increased rapidly. Numerical modeling is especially important in space plasma physics because there are no in-situ observations of space plasmas outside of the heliosphere and it is not feasible to study all aspects of space plasmas in a terrestrial laboratory. With the increasing number of computational cores in supercomputers, the parallel performance of numerical models on distributed memory hardware is also becoming crucial. This thesis consists of an introduction, four peer reviewed articles and describes the process of developing numerical space environment/weather models and the use of such models to study the near-Earth space. A complete model development chain is presented starting from initial planning and design to distributed memory parallelization and optimization, and finally testing, verification and validation of numerical models. A grid library that provides good parallel scalability on distributed memory hardware and several novel features, the distributed cartesian cell-refinable grid (DCCRG), is designed and developed. DCCRG is presently used in two numerical space weather models being developed at the Finnish Meteorological Institute. The first global magnetospheric test particle simulation based on the

  11. PEP-II vacuum system pressure profile modeling using EXCEL

    SciTech Connect

    Nordby, M.; Perkins, C.

    1994-06-01

    A generic, adaptable Microsoft EXCEL program to simulate molecular flow in beam line vacuum systems is introduced. Modeling using finite-element approximation of the governing differential equation is discussed, as well as error estimation and program capabilities. The ease of use and flexibility of the spreadsheet-based program is demonstrated. PEP-II vacuum system models are reviewed and compared with analytical models.
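
    The underlying calculation, a finite-element/finite-difference solution of conductance-limited molecular flow with distributed outgassing and a lumped pump, can be sketched outside a spreadsheet as well; all numbers below are illustrative.

```python
# Sketch of a finite-difference molecular-flow model like the one described:
# conductance-limited beam pipe with distributed outgassing and a lumped pump
# at one end (all numbers illustrative).
import numpy as np

L, N = 10.0, 101                   # pipe length [m], grid points
dx = L / (N - 1)
c = 20.0                           # specific conductance [m^4/s] (assumed)
q = 1e-9                           # outgassing per unit length [mbar m^3/s/m]
S = 0.1                            # pump speed at x = 0 [m^3/s]

# Solve c * P'' = -q with a pump boundary at x=0 and a blind end at x=L.
A = np.zeros((N, N))
b = np.full(N, -q * dx**2 / c)
for i in range(1, N - 1):
    A[i, i - 1 : i + 2] = [1.0, -2.0, 1.0]
A[0, 0] = -(1.0 + S * dx / c)      # pump boundary: c*(P1-P0)/dx = S*P0
A[0, 1] = 1.0
b[0] = 0.0
A[-1, -1], A[-1, -2] = 1.0, -1.0   # blind end: zero pressure gradient
b[-1] = 0.0

P = np.linalg.solve(A, b)
print(P.min(), P.max())            # pressure rises away from the pump
```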

  12. Case study for model validation : assessing a model for thermal decomposition of polyurethane foam.

    SciTech Connect

    Dowding, Kevin J.; Leslie, Ian H.; Hobbs, Michael L.; Rutherford, Brian Milne; Hills, Richard Guy; Pilch, Martin M.

    2004-10-01

    A case study is reported to document the details of a validation process to assess the accuracy of a mathematical model to represent experiments involving thermal decomposition of polyurethane foam. The focus of the report is to work through a validation process. The process addresses the following activities. The intended application of the mathematical model is discussed to better understand the pertinent parameter space. The parameter space of the validation experiments is mapped to the application parameter space. The mathematical models, the computer code that solves them, and the verification of that code are presented. Experimental data from two activities are used to validate the mathematical models. The first experiment assesses the chemistry model alone and the second experiment assesses the model of coupled chemistry, conduction, and enclosure radiation. The model results of both experimental activities are summarized, and the uncertainty of the model in representing each experimental activity is estimated. The comparison between the experimental data and model results is quantified with various metrics. After addressing these activities, an assessment of the process for the case study is given. Weaknesses in the process are discussed and lessons learned are summarized.

  13. Systematic validation of disease models for pharmacoeconomic evaluations. Swiss HIV Cohort Study.

    PubMed

    Sendi, P P; Craig, B A; Pfluger, D; Gafni, A; Bucher, H C

    1999-08-01

    Pharmacoeconomic evaluations are often based on computer models which simulate the course of disease with and without medical interventions. The purpose of this study is to propose and illustrate a rigorous approach for validating such disease models. For illustrative purposes, we applied this approach to a computer-based model we developed to mimic the history of HIV-infected subjects at the greatest risk for Mycobacterium avium complex (MAC) infection in Switzerland. The drugs included as a prophylactic intervention against MAC infection were azithromycin and clarithromycin. We used a homogeneous Markov chain to describe the progression of an HIV-infected patient through six MAC-free states, one MAC state, and death. Probability estimates were extracted from the Swiss HIV Cohort Study database (1993-95) and randomized controlled trials. The model was validated by testing for (1) technical validity, (2) predictive validity, (3) face validity, and (4) modelling process validity. Sensitivity analysis and independent model implementation in DATA (PPS) and self-written Fortran 90 code (BAC) assured technical validity. Agreement between modelled and observed MAC incidence confirmed predictive validity. Modelled MAC prophylaxis at different starting conditions affirmed face validity. Published articles by other authors supported modelling process validity. The proposed validation procedure is a useful approach to improve the validity of the model. PMID:10461580
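
    The core of such a model is repeated multiplication of a cohort vector by a homogeneous transition matrix. The sketch below uses an invented five-state chain (the study's model has six MAC-free states) purely to show the mechanics.

```python
# Sketch of a homogeneous Markov chain cohort model of the kind described
# (states and monthly transition probabilities are invented here).
import numpy as np

# States: 0-2 = declining MAC-free strata, 3 = MAC infection, 4 = death
P = np.array([
    [0.92, 0.05, 0.00, 0.02, 0.01],
    [0.00, 0.90, 0.05, 0.03, 0.02],
    [0.00, 0.00, 0.88, 0.06, 0.06],
    [0.00, 0.00, 0.00, 0.85, 0.15],
    [0.00, 0.00, 0.00, 0.00, 1.00],   # death is absorbing
])

cohort = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # everyone starts in state 0
for month in range(24):
    cohort = cohort @ P                         # one cycle of disease progression
print(f"2-year MAC prevalence ~ {cohort[3]:.3f}, mortality ~ {cohort[4]:.3f}")
```

    Predictive validation then amounts to comparing trajectories like `cohort[3]` against observed incidence, as the authors did with the cohort database.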

  14. Modeling and Validating Chronic Pharmacological Manipulation of Circadian Rhythms

    PubMed Central

    Kim, J K; Forger, D B; Marconi, M; Wood, D; Doran, A; Wager, T; Chang, C; Walton, K M

    2013-01-01

    Circadian rhythms can be entrained by a light-dark (LD) cycle and can also be reset pharmacologically, for example, by the CK1δ/ε inhibitor PF-670462. Here, we determine how these two independent signals affect circadian timekeeping from the molecular to the behavioral level. By developing a systems pharmacology model, we predict and experimentally validate that chronic CK1δ/ε inhibition during the earlier hours of a LD cycle can produce a constant stable delay of rhythm. However, chronic dosing later during the day, or in the presence of longer light intervals, is not predicted to yield an entrained rhythm. We also propose a simple method based on phase response curves (PRCs) that predicts the effects of a LD cycle and chronic dosing of a circadian drug. This work indicates that dosing timing and environmental signals must be carefully considered for accurate pharmacological manipulation of circadian phase. PMID:23863866

  15. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.

  16. Utilizing Chamber Data for Developing and Validating Climate Change Models

    NASA Technical Reports Server (NTRS)

    Monje, Oscar

    2012-01-01

    Controlled environment chambers (e.g. growth chambers, SPAR chambers, or open-top chambers) are useful for measuring plant ecosystem responses to climatic variables and CO2 that affect plant water relations. However, data from chambers was found to overestimate responses of C fluxes to CO2 enrichment. Chamber data may be confounded by numerous artifacts (e.g. sidelighting, edge effects, increased temperature and VPD, etc) and this limits what can be measured accurately. Chambers can be used to measure canopy level energy balance under controlled conditions and plant transpiration responses to CO2 concentration can be elucidated. However, these measurements cannot be used directly in model development or validation. The response of stomatal conductance to CO2 will be the same as in the field, but the measured response must be recalculated in such a manner to account for differences in aerodynamic conductance, temperature and VPD between the chamber and the field.

  17. Some Hamiltonian models of friction II

    SciTech Connect

    Egli, Daniel; Gang Zhou

    2012-10-15

    In the present paper we consider the motion of a very heavy tracer particle in a medium of a very dense, non-interacting Bose gas. We prove that, in a certain mean-field limit, the tracer particle will be decelerated and come to rest somewhere in the medium. Friction is caused by emission of Cerenkov radiation of gapless modes into the gas. Mathematically, a system of semilinear integro-differential equations, introduced in Froehlich et al. ['Some hamiltonian models of friction,' J. Math. Phys. 52(8), 083508 (2011)], describing a tracer particle in a dispersive medium is investigated, and decay properties of the solution are proven. This work is an extension of Froehlich et al. ['Friction in a model of hamiltonian dynamics,' Commun. Math. Phys. 315(2), 401-444 (2012)]; it is an extension because no weak coupling limit for the interaction between tracer particle and medium is assumed. The technical methods used are dispersive estimates and a contraction principle.

  18. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
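
    A bootstrap convergence check of the kind described can be sketched as follows: resample the model runs, recompute a first-order sensitivity estimate, and inspect the width of the resulting confidence intervals. The toy model and the crude binned estimator are illustrative assumptions, not the paper's criteria.

```python
# Sketch of a bootstrap convergence check: resample model runs and watch how
# stable a variance-based sensitivity estimate is (toy 2-parameter model).
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
y = 2.0 * x1 + 0.1 * x2 + rng.normal(scale=0.05, size=n)   # x1 dominant, x2 near-insensitive

def first_order_index(x, y):
    """Crude binned estimator of Si = Var(E[y|x]) / Var(y)."""
    bins = np.quantile(x, np.linspace(0, 1, 21))
    idx = np.clip(np.digitize(x, bins) - 1, 0, 19)
    cond_means = np.array([y[idx == k].mean() for k in range(20)])
    return cond_means.var() / y.var()

boot = []
for _ in range(200):
    s = rng.integers(0, n, size=n)                  # bootstrap resample
    boot.append([first_order_index(x1[s], y[s]), first_order_index(x2[s], y[s])])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("95% CIs:", list(zip(lo.round(3), hi.round(3))))   # wide CIs -> not converged
```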

  19. Experimental validation of sheath models at intermediate radio frequencies

    NASA Astrophysics Data System (ADS)

    Sobolewski, Mark

    2013-09-01

    Sheaths in radio-frequency (rf) discharges play a dominant role in determining important properties such as the efficiency of power delivery and utilization, plasma spatial uniformity, and ion energy distributions (IEDs). To obtain high quality predictions for these properties requires sheath models that have been rigorously tested and validated. We have performed such tests in capacitively coupled and rf-biased inductively coupled discharges, for inert as well as reactive gases, over two or more orders of magnitude in frequency, voltage, and plasma density. We measured a complete set of model input and output parameters including rf current and voltage waveforms, rf plasma potential measured by a capacitive probe, electron temperature and ion saturation current measured by Langmuir probe and other techniques, and IEDs measured by mass spectrometers and gridded energy analyzers. Experiments concentrated on the complicated, intermediate-frequency regime of ion dynamics, where the ion transit time is comparable to the rf period and the ion current oscillates strongly during the rf cycle. The first models tested used several simplifying assumptions including fluid treatment of ions, neglect of electron inertia, and the oscillating step approximation for the electron profile. These models were nevertheless able to yield rather accurate predictions for current waveforms, sheath impedance, and the peak energies in IEDs. More recently, the oscillating step has been replaced by an exact solution of Poisson's equation. This results in a modest improvement in the agreement with measured electrical characteristics and IED peak amplitudes. The new model also eliminates the need for arbitrary or nonphysical boundary conditions that arises in step models, replacing them with boundary conditions that can be obtained directly from measurements or theories of the presheath.

  20. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, since a 65% permeability reduction around the wellbore was estimated. The design for this minifracture was from 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. Also, an induced fracture half-length of 100 feet was determined to have occurred, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  1. The validity of four definitions of endogenous depression. II. Clinical, demographic, familial, and psychosocial correlates.

    PubMed

    Zimmerman, M; Coryell, W; Pfohl, B; Stangl, D

    1986-03-01

    Based on a survey of the classic literature and studies examining the correlates of a clinical diagnosis of endogenous or nonendogenous depression, we found 14 variables that should discriminate endogenous and nonendogenous depressives. We applied four definitions of endogenous depression (Feinberg and Carroll, DSM-III, Research Diagnostic Criteria, and Newcastle) to a consecutive series of 152 unipolar major depressive inpatients. We examined the concordance between the definitions and the relationship between each definition and clinical, demographic, family history, and psychosocial factors. The DSM-III and Newcastle definitions were less inclusive than the other two definitions. We found some support for the validity of each of the four definitions. The validity of the Newcastle scale was the most frequently supported, with the endogenous depressives having a lower rate of personality disorder, marital separations and divorces, familial alcoholism, life events, and nonserious suicide attempts. PMID:3954543

  2. A technique for global monitoring of net solar irradiance at the ocean surface. II - Validation

    NASA Technical Reports Server (NTRS)

    Chertock, Beth; Frouin, Robert; Gautier, Catherine

    1992-01-01

    The generation and validation of the first satellite-based long-term record of surface solar irradiance over the global oceans are addressed. The record is generated using Nimbus-7 earth radiation budget (ERB) wide-field-of-view planetary-albedo data as input to a numerical algorithm designed and implemented based on radiative transfer theory. The mean monthly values of net surface solar irradiance are computed on a 9-deg latitude-longitude spatial grid for November 1978-October 1985. The new data set is validated in comparisons with short-term, regional, high-resolution, satellite-based records. The ERB-based values of net surface solar irradiance are compared with corresponding values based on radiance measurements taken by the Visible-Infrared Spin Scan Radiometer aboard GOES series satellites. Errors in the new data set are estimated to lie between 10 and 20 W/sq m on monthly time scales.

  3. Modal testing for model validation of structures with discrete nonlinearities

    PubMed Central

    Ewins, D. J.; Weekes, B.; delli Carri, A.

    2015-01-01

    Model validation using data from modal tests is now widely practiced in many industries for advanced structural dynamic design analysis, especially where structural integrity is a primary requirement. These industries tend to demand highly efficient designs for their critical structures which, as a result, are increasingly operating in regimes where traditional linearity assumptions are no longer adequate. In particular, many modern structures are found to contain localized areas, often around joints or boundaries, where the actual mechanical behaviour is far from linear. Such structures need to have appropriate representation of these nonlinear features incorporated into the otherwise largely linear models that are used for design and operation. This paper proposes an approach to this task which is an extension of existing linear techniques, especially in the testing phase, involving only just as much nonlinear analysis as is necessary to construct a model which is good enough, or ‘valid’: i.e. capable of predicting the nonlinear response behaviour of the structure under all in-service operating and test conditions with a prescribed accuracy. A short-list of methods described in the recent literature categorized using our framework is given, which identifies those areas in which further development is most urgently required. PMID:26303924

  4. Vibroacoustic Model Validation for a Curved Honeycomb Composite Panel

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Robinson, Jay H.; Grosveld, Ferdinand W.

    2001-01-01

    Finite element and boundary element models are developed to investigate the vibroacoustic response of a curved honeycomb composite sidewall panel. Results from vibroacoustic tests conducted in the NASA Langley Structural Acoustic Loads and Transmission facility are used to validate the numerical predictions. The sidewall panel is constructed from a flexible honeycomb core sandwiched between carbon fiber reinforced composite laminate face sheets. This type of construction is being used in the development of an all-composite aircraft fuselage. In contrast to conventional rib-stiffened aircraft fuselage structures, the composite panel has nominally uniform thickness resulting in a uniform distribution of mass and stiffness. Due to differences in the mass and stiffness distribution, the noise transmission mechanisms for the composite panel are expected to be substantially different from those of a conventional rib-stiffened structure. The development of accurate vibroacoustic models will aid in the understanding of the dominant noise transmission mechanisms and enable optimization studies to be performed that will determine the most beneficial noise control treatments. Finite element and boundary element models of the sidewall panel are described. Vibroacoustic response predictions are presented for forced vibration input and the results are compared with experimental data.

  5. Validation of an Acoustic Impedance Prediction Model for Skewed Resonators

    NASA Technical Reports Server (NTRS)

    Howerton, Brian M.; Parrott, Tony L.

    2009-01-01

    An impedance prediction model was validated experimentally to determine the composite impedance of a series of high-aspect ratio slot resonators incorporating channel skew and sharp bends. Such structures are useful for packaging acoustic liners into constrained spaces for turbofan noise control applications. A formulation of the Zwikker-Kosten Transmission Line (ZKTL) model, incorporating the Richards correction for rectangular channels, is used to calculate the composite normalized impedance of a series of six multi-slot resonator arrays with constant channel length. Experimentally, acoustic data was acquired in the NASA Langley Normal Incidence Tube over the frequency range of 500 to 3500 Hz at 120 and 140 dB OASPL. Normalized impedance was reduced using the Two-Microphone Method for the various combinations of channel skew and sharp 90° and 180° bends. Results show that the presence of skew and/or sharp bends does not significantly alter the impedance of a slot resonator as compared to a straight resonator of the same total channel length. ZKTL predicts the impedance of such resonators very well over the frequency range of interest. The model can be used to design arrays of slot resonators that can be packaged into complex geometries heretofore unsuitable for effective acoustic treatment.
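
    As a rough illustration of how the composite impedance of a multi-slot array can be assembled, the sketch below (Python) combines per-channel impedances in parallel, weighted by open-area fraction. This is only a minimal sketch of the combination step under that assumption: the per-channel impedances themselves, which ZKTL derives from channel propagation constants with the Richards correction, are taken as given, and all numbers shown are illustrative.

      import numpy as np

      def composite_impedance(z_channels, area_fractions):
          # Admittances add in parallel, each weighted by its share of the
          # faceplate open area; returns the composite normalized impedance.
          z = np.asarray(z_channels, dtype=complex)
          f = np.asarray(area_fractions, dtype=float)
          return 1.0 / np.sum(f / z)

      # Six-slot array with equal open areas (impedance values illustrative).
      z_slots = [0.5 - 1.2j, 0.5 - 0.9j, 0.5 - 0.5j,
                 0.5 - 0.1j, 0.5 + 0.3j, 0.5 + 0.7j]
      print(composite_impedance(z_slots, [1.0 / 6] * 6))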

  6. Circulation Control Model Experimental Database for CFD Validation

    NASA Technical Reports Server (NTRS)

    Paschal, Keith B.; Neuhart, Danny H.; Beeler, George B.; Allan, Brian G.

    2012-01-01

    A 2D circulation control wing was tested in the Basic Aerodynamic Research Tunnel at the NASA Langley Research Center. A traditional circulation control wing employs tangential blowing along the span over a trailing-edge Coanda surface for the purpose of lift augmentation. This model has been tested extensively at the Georgia Tech Research Institute for the purpose of performance documentation at various blowing rates. The current study seeks to expand on the previous work by documenting additional flow-field data needed for validation of computational fluid dynamics. Two jet momentum coefficients were tested during this entry: 0.047 and 0.114. Boundary-layer transition was investigated and turbulent boundary layers were established on both the upper and lower surfaces of the model. Chordwise and spanwise pressure measurements were made, and tunnel sidewall pressure footprints were documented. Laser Doppler Velocimetry measurements were made on both the upper and lower surface of the model at two chordwise locations (x/c = 0.8 and 0.9) to document the state of the boundary layers near the spanwise blowing slot.
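
    For reference, the jet momentum coefficient quoted above is conventionally defined as the jet momentum flux normalized by the free-stream dynamic pressure and a reference area; the exact reference quantities used in this particular test are an assumption here:

      \[
        C_\mu = \frac{\dot{m}\, V_{\mathrm{jet}}}{q_\infty\, S},
        \qquad q_\infty = \tfrac{1}{2}\,\rho_\infty U_\infty^{2},
      \]

    where \(\dot{m}\) is the slot mass flow, \(V_{\mathrm{jet}}\) the jet exit velocity, and \(S\) the reference area (chord times span for a 2D model).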

  7. Gravitropic responses of the Avena coleoptile in space and on clinostats. II. Is reciprocity valid?

    NASA Technical Reports Server (NTRS)

    Johnsson, A.; Brown, A. H.; Chapman, D. K.; Heathcote, D.; Karlsson, C.

    1995-01-01

    Experiments were undertaken to determine if the reciprocity rule is valid for gravitropic responses of oat coleoptiles in the acceleration region below 1 g. The rule predicts that the gravitropic response should be proportional to the product of the applied acceleration and the stimulation time. Seedlings were cultivated on 1 g centrifuges and transferred to test centrifuges to apply a transverse g-stimulation. Since responses occurred in microgravity, the uncertainties about the validity of clinostat simulation of weightlessness were avoided. Plants at two stages of coleoptile development were tested. Plant responses were obtained using time-lapse video recordings that were analyzed after the flight. Stimulus intensities and durations were varied and ranged from 0.1 to 1.0 g and from 2 to 130 min, respectively. For threshold g-doses the reciprocity rule was obeyed. The threshold dose was of the order of 55 g s and 120 g s, respectively, for the two groups of plants investigated. Reciprocity was also studied for bending responses ranging from just above the detectable level to about 10 degrees. The validity of the rule could not be confirmed for higher g-doses, chiefly because the data were more variable. It was investigated whether the uniformity of the overall response data increased when the gravitropic dose was defined as (g^m × t) with m-values different from unity. This was not the case, and the reciprocity concept is therefore valid also in the hypogravity region. The concept of gravitropic dose, the product of the transverse acceleration and the stimulation time, is also well-defined in the acceleration region studied. With the same hardware, tests were done on earth where responses occurred on clinostats. The results did not contradict the reciprocity rule but scatter in the data was large.
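
    A worked example of the dose arithmetic may help. With the gravitropic dose defined as the product of acceleration and stimulation time, the quoted thresholds translate directly into threshold stimulation times at any hypogravity level:

      \[
        D = g \cdot t, \qquad
        D_{\mathrm{thr}} = 55\ \mathrm{g\,s} \;\Rightarrow\;
        t_{\mathrm{thr}} = \frac{55\ \mathrm{g\,s}}{0.1\ \mathrm{g}}
        = 550\ \mathrm{s} \approx 9\ \mathrm{min}
        \quad \text{at } g = 0.1 .
      \]

    The generalized dose tested in the study, \(D_m = g^{m} t\), reduces to this form for \(m = 1\), the value the data supported.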

  8. EEG-neurofeedback for optimising performance. II: creativity, the performing arts and ecological validity.

    PubMed

    Gruzelier, John H

    2014-07-01

    As a continuation of a review of evidence of the validity of cognitive/affective gains following neurofeedback in healthy participants, including correlations in support of the gains being mediated by feedback learning (Gruzelier, 2014a), the focus here is on the impact on creativity, especially in the performing arts including music, dance and acting. The majority of research involves alpha/theta (A/T), sensory-motor rhythm (SMR) and heart rate variability (HRV) protocols. There is evidence of reliable benefits from A/T training with advanced musicians especially for creative performance, and reliable benefits from both A/T and SMR training for novice music performance in adults and in a school study with children with impact on creativity, communication/presentation and technique. Making the SMR ratio training context ecologically relevant for actors enhanced creativity in stage performance, with added benefits from the more immersive training context. A/T and HRV training have benefitted dancers. The neurofeedback evidence adds to the rapidly accumulating validation of neurofeedback, while performing arts studies offer an opportunity for ecological validity in creativity research for both creative process and product.

  9. Stratospheric Heterogeneous Chemistry and Microphysics: Model Development, Validation and Applications

    NASA Technical Reports Server (NTRS)

    Turco, Richard P.

    1996-01-01

    The objectives of this project are to: define the chemical and physical processes leading to stratospheric ozone change that involve polar stratospheric clouds (PSCs) and the reactions occurring on the surfaces of PSC particles; study the formation processes, and the physical and chemical properties of PSCs, that are relevant to atmospheric chemistry and to the interpretation of field measurements taken during polar stratosphere missions; develop quantitative models describing PSC microphysics and heterogeneous chemical processes; assimilate laboratory and field data into these models; and calculate the extent of chemical processing on PSCs and the impact of specific microphysical processes on polar composition and ozone depletion. During the course of the project, a new coupled microphysics/physical-chemistry/photochemistry model for stratospheric sulfate aerosols and nitric acid and ice PSCs was developed and applied to analyze data collected during NASA's Arctic Airborne Stratospheric Expedition-II (AASE-II) and other missions. In this model, detailed treatments of multicomponent sulfate aerosol physical chemistry, sulfate aerosol microphysics, polar stratospheric cloud microphysics, PSC ice surface chemistry, as well as homogeneous gas-phase chemistry were included for the first time. In recent studies focusing on AASE measurements, the PSC model was used to analyze specific measurements from an aircraft deployment of an aerosol impactor, FSSP, and NO(y) detector. The calculated results are in excellent agreement with observations for particle volumes as well as NO(y) concentrations, thus confirming the importance of supercooled sulfate/nitrate droplets in PSC formation. The same model has been applied to perform a statistical study of PSC properties in the Northern Hemisphere using several hundred high-latitude air parcel trajectories obtained from Goddard. The rates of ozone depletion along trajectories with different meteorological histories are presently

  10. A new model-based RSA method validated using CAD models and models from reversed engineering.

    PubMed

    Kaptein, B L; Valstar, E R; Stoel, B C; Rozing, P M; Reiber, J H C

    2003-06-01

    Roentgen stereophotogrammetric analysis (RSA) was developed to measure micromotion of an orthopaedic implant with respect to its surrounding bone. A disadvantage of conventional RSA is that it requires the implant to be marked with tantalum beads. This disadvantage can potentially be resolved with model-based RSA, whereby a 3D model of the implant is used for matching with the actual images and the assessment of position and rotation of the implant. In this study, a model-based RSA algorithm is presented and validated in phantom experiments. To investigate the influence of the accuracy of the implant models that were used for model-based RSA, we studied both computer aided design (CAD) models as well as models obtained by means of reversed engineering (RE) of the actual implant. The results demonstrate that the RE models provide more accurate results than the CAD models. If these RE models are derived from the very same implant, it is possible to achieve a maximum standard deviation of the error in the migration calculation of 0.06 mm for translations in the x- and y-directions and 0.14 mm for the out-of-plane z-direction. For rotations about the y-axis, the standard deviation was about 0.1 degrees, and for rotations about the x- and z-axes 0.05 degrees. Studies with clinical RSA radiographs must prove that these results can also be reached in a clinical setting, making model-based RSA a possible alternative to marker-based RSA.

  11. Canyon building ventilation system dynamic model -- Parameters and validation

    SciTech Connect

    Moncrief, B.R.; Chen, F.F.K.

    1993-02-01

    Plant system simulation crosses many disciplines. At the core is the mimic of key components in the form of mathematical "models". These component models are functionally integrated to represent the plant. With today's low cost high capacity computers, the whole plant can be truly and effectively reproduced in a computer model. Dynamic simulation has its roots in "single loop" design, which is still a common objective in the employment of simulation. The other common objectives are the ability to preview plant operation, to anticipate problem areas, and to test the impact of design options. As plant system complexity increases and our ability to simulate the entire plant grows, the objective to optimize plant system design becomes practical. This shift in objectives from problem avoidance to total optimization by far offers the most rewarding potential. Even a small reduction in bulk materials and space can sufficiently justify the application of this technology. Furthermore, to realize an optimal plant starts from a tight and disciplined design. We believe the assurance required to execute such a design strategy can partly be derived from a plant model. This paper reports on the application of a dynamic model to evaluate the capacity of an existing production plant ventilation system. This study met the practical objectives of capacity evaluation under present and future conditions, and under normal and accidental situations. More importantly, the description of this application, in its methods and its utility, aims to validate the technology of dynamic simulation in the environment of plant system design and safe operation.

  12. Competitive sorption of Pb(II), Cu(II) and Ni(II) on carbonaceous nanofibers: A spectroscopic and modeling approach.

    PubMed

    Ding, Congcong; Cheng, Wencai; Wang, Xiangxue; Wu, Zhen-Yu; Sun, Yubing; Chen, Changlun; Wang, Xiangke; Yu, Shu-Hong

    2016-08-01

    The competitive sorption of Pb(II), Cu(II) and Ni(II) on the uniform carbonaceous nanofibers (CNFs) was investigated in binary/ternary-metal systems. The pH-dependent sorption of Pb(II), Cu(II) and Ni(II) on CNFs was independent of ionic strength, indicating that inner-sphere surface complexation dominated the sorption of Pb(II), Cu(II) and Ni(II) on CNFs. The maximum sorption capacities of Pb(II), Cu(II) and Ni(II) on CNFs in single-metal systems at pH 5.5±0.2 and 25±1°C were 3.84 mmol/g (795.65 mg/g), 3.21 mmol/g (204.00 mg/g) and 2.67 mmol/g (156.70 mg/g), respectively. In equimolar binary/ternary-metal systems, Pb(II) exhibited greater inhibition of the sorption of Cu(II) and Ni(II), demonstrating the stronger affinity of CNFs for Pb(II). The competitive sorption of heavy metals in ternary-metal systems was predicted quite well by surface complexation modeling derived from single-metal data. According to FTIR, XPS and EXAFS analyses, Pb(II), Cu(II) and Ni(II) were specifically adsorbed on CNFs via covalent bonding. These observations should provide an essential start in the simultaneous removal of multiple heavy metals from aquatic environments by CNFs, and open the door for applications of CNFs. PMID:27108273

  13. The Role of Structural Models in the Solar Sail Flight Validation Process

    NASA Technical Reports Server (NTRS)

    Johnston, John D.

    2004-01-01

    NASA is currently soliciting proposals via the New Millennium Program ST-9 opportunity for a potential Solar Sail Flight Validation (SSFV) experiment to develop and operate in space a deployable solar sail that can be steered and provides measurable acceleration. The approach planned for this experiment is to test and validate models and processes for solar sail design, fabrication, deployment, and flight. These models and processes would then be used to design, fabricate, and operate scalable solar sails for future space science missions. There are six validation objectives planned for the ST-9 SSFV experiment: 1) Validate solar sail design tools and fabrication methods; 2) Validate controlled deployment; 3) Validate in-space structural characteristics (focus of poster); 4) Validate solar sail attitude control; 5) Validate solar sail thrust performance; 6) Characterize the sail's electromagnetic interaction with the space environment. This poster presents a top-level assessment of the role of structural models in the validation process for in-space structural characteristics.

  14. An approach to model validation and model-based prediction -- polyurethane foam case study.

    SciTech Connect

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and

  15. User's guide for the stock-recruitment model validation program. Environmental Sciences Division Publication No. 1985

    SciTech Connect

    Christensen, S.W.; Kirk, B.L.; Goodyear, C.P.

    1982-06-01

    SRVAL is a FORTRAN IV computer code designed to aid in assessing the validity of curve-fits of the linearized Ricker stock-recruitment model (modified to incorporate multiple-age spawners and to include an environmental variable) to variously processed annual catch-per-unit-effort statistics for a fish population. It is sometimes asserted that curve-fits of this kind can be used to determine the sensitivity of fish populations to such man-induced stresses as entrainment and impingement at power plants. The SRVAL code was developed to test such assertions. It was utilized in testimony written in connection with the Hudson River Power Case (US Environmental Protection Agency, Region II). This testimony was recently published as a NUREG report. Here, a user's guide for SRVAL is presented.
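
    To make the fitting problem concrete, the sketch below (Python) fits the basic linearized Ricker model, ln(R/S) = ln(α) − βS, by ordinary least squares. SRVAL's actual modifications (multiple-age spawners, the environmental variable, and the FORTRAN IV implementation) are not reproduced here, and the data are synthetic.

      import numpy as np

      def fit_linearized_ricker(spawners, recruits):
          # Classic Ricker form R = alpha * S * exp(-beta * S), linearized
          # as ln(R/S) = ln(alpha) - beta * S and fit by least squares.
          S = np.asarray(spawners, dtype=float)
          y = np.log(np.asarray(recruits, dtype=float) / S)
          slope, intercept = np.polyfit(S, y, 1)
          return np.exp(intercept), -slope   # alpha, beta

      # Synthetic stock/recruit data with lognormal noise (illustrative only).
      rng = np.random.default_rng(0)
      S = np.array([120.0, 250.0, 400.0, 610.0, 800.0])
      R = 2.0 * S * np.exp(-0.002 * S) * np.exp(rng.normal(0.0, 0.1, S.size))
      alpha, beta = fit_linearized_ricker(S, R)
      print(f"alpha ~ {alpha:.2f}, beta ~ {beta:.4f}")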

  16. THE HYDRODYNAMICAL MODELS OF THE COMETARY COMPACT H ii REGION

    SciTech Connect

    Zhu, Feng-Yao; Zhu, Qing-Feng; Li, Juan; Wang, Jun-Zhi; Zhang, Jiang-Shui

    2015-10-10

    We have developed a full numerical method to study the gas dynamics of cometary ultracompact H ii regions, and associated photodissociation regions (PDRs). The bow-shock and champagne-flow models with a 40.9/21.9 M⊙ star are simulated. In the bow-shock models, the massive star is assumed to move through dense (n = 8000 cm⁻³) molecular material with a stellar velocity of 15 km s⁻¹. In the champagne-flow models, an exponential distribution of density with a scale height of 0.2 pc is assumed. The profiles of the [Ne ii] 12.81 μm and H₂ S(2) lines from the ionized regions and PDRs are compared for the two sets of models. In champagne-flow models, emission lines from the ionized gas clearly show the effect of acceleration along the direction toward the tail due to the density gradient. The kinematics of the molecular gas inside the dense shell are mainly due to the expansion of the H ii region. However, in bow-shock models the ionized gas mainly moves in the same direction as the stellar motion. The kinematics of the molecular gas inside the dense shell simply reflects the motion of the dense shell with respect to the star. These differences can be used to distinguish the two sets of models.

  17. Validation model for Raman based skin carotenoid detection.

    PubMed

    Ermakov, Igor V; Gellermann, Werner

    2010-12-01

    measurement, and which can be easily removed for subsequent biochemical measurements. Excellent correlation (coefficient R=0.95) is obtained for this tissue site which could serve as a model site for scaled up future validation studies of large populations. The obtained results provide proof that resonance Raman spectroscopy is a valid non-invasive objective methodology for the quantitative assessment of carotenoid antioxidants in human skin in vivo. PMID:20678465

  18. NAIRAS aircraft radiation model development, dose climatology, and initial validation

    PubMed Central

    Mertens, Christopher J; Meier, Matthias M; Brown, Steven; Norman, Ryan B; Xu, Xiaojing

    2013-01-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis
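
    The acceptance test described above amounts to a simple relative-difference check. A minimal sketch (Python), assuming the comparison is |predicted - measured| / measured against the 30% ICRU limit:

      def within_icru_limit(predicted, measured, limit=0.30):
          # Relative difference of a predicted dose-equivalent rate against
          # a measured one, compared with the 30% acceptance limit cited for
          # annual ambient dose equivalent >= 1 mSv.
          rel_diff = abs(predicted - measured) / measured
          return rel_diff, rel_diff <= limit

      # Example: a prediction 25% above the measurement passes the test.
      print(within_icru_limit(5.0, 4.0))   # (0.25, True)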

  19. A Test of Model Validation from Observed Temperature Trends

    NASA Astrophysics Data System (ADS)

    Singer, S. F.

    2006-12-01

    How much of current warming is due to natural causes and how much is manmade? This requires a comparison of the patterns of observed warming with the best available models that incorporate both anthropogenic (greenhouse gases and aerosols) and natural (solar and volcanic) climate forcings. Fortunately, we have the just published U.S. Climate Change Science Program (CCSP) report (www.climatescience.gov/Library/sap/sap1-1/finalreport/default.htm), based on best current information. As seen in Fig. 1.3F of the report, modeled surface temperature trends change little with latitude, except for a stronger warming in the Arctic. The observations, however, show a strong surface warming in the northern hemisphere but not in the southern hemisphere (see Fig. 3.5C and 3.6D). The Antarctic is found to be cooling and Arctic temperatures, while currently rising, were higher in the 1930s than today. Although the Executive Summary of the CCSP report claims "clear evidence" for anthropogenic warming, based on comparing tropospheric and surface temperature trends, the report itself does not confirm this. Greenhouse models indicate that the tropics should provide the most sensitive location for their validation; trends there should increase by 200-300 percent with altitude, peaking at around 10 kilometers. The observations, however, show the opposite: flat or even decreasing tropospheric trend values (see Fig. 3.7 and also Fig. 5.7E). This disparity is demonstrated most strikingly in Fig. 5.4G, which shows the difference between surface and troposphere trends for a collection of models (displayed as a histogram) and for balloon and satellite data. [The disparities are less apparent in the Summary, which displays model results in terms of "range" rather than as histograms.] There may be several possible reasons for the disparity: Instrumental and other effects that exaggerate or otherwise distort observed temperature trends. Or, more likely: Shortcomings in models that result

  1. Validation of conducting wall models using magnetic measurements

    NASA Astrophysics Data System (ADS)

    Hanson, J. M.; Bialek, J.; Turco, F.; King, J.; Navratil, G. A.; Strait, E. J.; Turnbull, A.

    2016-10-01

    The impact of conducting wall eddy currents on perturbed magnetic field measurements is a key issue for understanding the measurement and control of long-wavelength MHD stability in tokamak devices. As plasma response models have grown in sophistication, the need to understand and resolve small changes in these measurements has become more important, motivating increased fidelity in simulations of externally applied fields and the wall eddy current response. In this manuscript, we describe thorough validation studies of the wall models in the mars-f and valen stability codes, using coil-sensor vacuum coupling measurements from the DIII-D tokamak (Luxon et al 2005 Fusion Sci. Technol. 48 807). The valen formulation treats conducting structures with arbitrary three-dimensional geometries, while mars-f uses an axisymmetric wall model and a spectral decomposition of the problem geometry with a fixed toroidal harmonic n. The vacuum coupling measurements have a strong sensitivity to wall eddy currents induced by time-changing coil currents, owing to the close proximities of both the sensors and coils to the wall. Measurements from individual coil and sensor channels are directly compared with valen predictions. It is found that straightforward improvements to the valen model, such as refining the wall mesh and simulating the vertical extent of the DIII-D poloidal field sensors, lead to good agreement with the experimental measurements. In addition, couplings to multi-coil, n = 1 toroidal mode perturbations are calculated from the measurements and compared with predictions from both codes. The toroidal mode comparisons favor the fully three-dimensional simulation approach, likely because this approach naturally treats n > 1 sidebands generated by the coils and wall eddy currents, as well as the n = 1 fundamental.

  2. Validation of the galactic cosmic ray and geomagnetic transmission models

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.; Truong, A. G.; O'Neill, P. M.; Choutko, V.

    2001-01-01

    A very high-momentum resolution particle spectrometer called the Alpha Magnetic Spectrometer (AMS) was flown in the payload bay of the Space Shuttle in a 51.65 degrees x 380-km orbit during the last solar minimum. This spectrometer has provided the first high statistics data set for galactic cosmic radiation protons, and helium, as well as limited spectral data on carbon and oxygen nuclei in the International Space Station orbit. First measurements of the albedo protons at this inclination were also made. Because of the high-momentum resolution and high statistics, the data can be separated as a function of magnetic latitude. A related investigation, the balloon borne experiment with a superconducting solenoid spectrometer (BESS), has been flown from Lynn Lake, Canada and has also provided excellent high-resolution data on protons and helium. These two data sets have been used here to study the validity of two galactic cosmic ray models and the geomagnetic transmission function developed from the 1990 geomagnetic reference field model. The predictions of both the CREME96 and NASA/JSC models are in good agreement with the AMS data. The shape of the AMS measured albedo proton spectrum, up to 2 GeV, is in excellent agreement with the previous balloon and satellite observations. A new LIS spectrum was developed that is consistent with both previous and new BESS ³He observations. Because the astronaut radiation exposures onboard ISS will be highest around the time of the solar minimum, these AMS measurements and these models provide important benchmarks for future radiation studies. AMS-02, slated for launch in September 2003, will provide even better momentum resolution and higher statistics data.

  3. Validation of the galactic cosmic ray and geomagnetic transmission models.

    PubMed

    Badhwar, G D; Truong, A G; O'Neill, P M; Choutko, V

    2001-06-01

    A very high-momentum resolution particle spectrometer called the Alpha Magnetic Spectrometer (AMS) was flown in the payload bay of the Space Shuttle in a 51.65 degrees x 380-km orbit during the last solar minimum. This spectrometer has provided the first high statistics data set for galactic cosmic radiation protons, and helium, as well as limited spectral data on carbon and oxygen nuclei in the International Space Station orbit. First measurements of the albedo protons at this inclination were also made. Because of the high-momentum resolution and high statistics, the data can be separated as a function of magnetic latitude. A related investigation, the balloon borne experiment with a superconducting solenoid spectrometer (BESS), has been flown from Lynn Lake, Canada and has also provided excellent high-resolution data on protons and helium. These two data sets have been used here to study the validity of two galactic cosmic ray models and the geomagnetic transmission function developed from the 1990 geomagnetic reference field model. The predictions of both the CREME96 and NASA/JSC models are in good agreement with the AMS data. The shape of the AMS measured albedo proton spectrum, up to 2 GeV, is in excellent agreement with the previous balloon and satellite observations. A new LIS spectrum was developed that is consistent with both previous and new BESS ³He observations. Because the astronaut radiation exposures onboard ISS will be highest around the time of the solar minimum, these AMS measurements and these models provide important benchmarks for future radiation studies. AMS-02, slated for launch in September 2003, will provide even better momentum resolution and higher statistics data.

  4. Can a Ground Water Flow Model be Validated?

    NASA Astrophysics Data System (ADS)

    Yeh, T. J.; Xiang, J.; Khaleel, R.

    2007-05-01

    Multi-scale spatial and temporal variability of inflow and outflow of groundwater basins is a well-known fact. Multi-scale aquifer heterogeneity is a reality. Traditional in-situ borehole characterization and monitoring methods can cover only a fraction of a groundwater basin. Consequently, our knowledge of a groundwater basin is limited and uncertain. Our lack of knowledge and information about groundwater basins has led to grossly misleading predictions of groundwater flow and contaminant migration. The validity of our subsurface models as such has been seriously questioned, as has our ability to predict flow and solute migration in aquifers. Groundwater resources management virtually becomes a matter of political debate without much scientific basis. Recent advances in hydrologic and geophysical tomographic survey technologies have brought forth cost-effective means to characterize aquifer spatial heterogeneity. This paper discusses an application of hydraulic tomographic surveys to the characterization of heterogeneous sandboxes. It demonstrates that detailed characterization can lead to satisfactory predictions, using a groundwater flow model, of drawdown evolution induced by pumping tests. We thereby advocate high-resolution characterization and monitoring of the subsurface such that reliable assessment and proper management of our groundwater resources is possible.

  5. Narrowband VLF observations as validation of Plasmaspheric model

    NASA Astrophysics Data System (ADS)

    Collier, Andrew; Clilverd, Mark; Rodger, C. J.; Delport, Brett; Lichtenberger, János

    2012-07-01

    PLASMON is a European Union FP7 project which will use observations of whistlers and field line resonances to construct a data assimilative model of the plasmasphere. This model will be validated by comparison with electron precipitation data derived from narrowband VLF observations of subionospheric propagation from the AARDDVARK network. A VLF receiver on Marion Island, located at 46.9° S 37.1° E (L = 2.60), is able to observe the powerful NWC transmitter in Australia over a 1.4 < L < 3.0 path which passes exclusively over the ocean. The signal is thus very strong and exhibits an excellent signal-to-noise ratio. Data from the UltraMSK narrowband VLF receiver on Marion Island are used to examine evidence of particle precipitation along this path, thereby inferring the rate at which electrons are scattered into the bounce loss cone. This path covers a small range of L-values so that there is little ambiguity in the source of any perturbations. Perturbations detected on the path during geomagnetic storms should predominantly be responses to energetic electron precipitation processes occurring inside the plasmasphere. Comparisons will be made to preliminary plasmaspheric results from the PLASMON project.

  6. Validating the topographic climatology logic of the MTCLIM model

    SciTech Connect

    Glassy, J.M.; Running, S.W.

    1995-06-01

    The topographic climatology logic of the MTCLIM model was validated using a comparison of modeled air temperatures vs. remotely sensed, thermal infrared (TIR) surface temperatures from three Daedalus Thematic Mapper Simulator scenes. The TIR data were taken in 1990 near Sisters, Oregon, as part of the NASA OTTER project. The original air temperature calculation method was modified for the spatial context of this study. After stratifying by canopy closure and relative solar loading, r² values of 0.74, 0.89, and 0.97 were obtained for the March, June, and August scenes, respectively, using the modified air temperature algorithm. Consistently lower coefficients of determination were obtained using the original air temperature algorithm on the same data: r² values of 0.70, 0.52, and 0.66 for the March, June, and August samples, respectively. The difficulties of comparing screen-height air temperatures with remotely sensed surface temperatures are discussed, and several ideas for follow-on studies are suggested.

  7. Validated Analytical Model of a Pressure Compensation Drip Irrigation Emitter

    NASA Astrophysics Data System (ADS)

    Shamshery, Pulkit; Wang, Ruo-Qian; Taylor, Katherine; Tran, Davis; Winter, Amos

    2015-11-01

    This work is focused on analytically characterizing the behavior of pressure-compensating drip emitters in order to design low-cost, low-power irrigation solutions appropriate for off-grid communities in developing countries. There are 2.5 billion small acreage farmers worldwide who rely solely on their land for sustenance. Compared to flood irrigation, drip irrigation reduces water consumption by up to 70% while increasing yields by 90% - important in countries like India, which are quickly running out of water. To design a low-power drip system, there is a need to decrease the pumping pressure requirement at the emitters, as pumping power is the product of pressure and flow rate. To efficiently design such an emitter, the fluid-structure interactions that occur within it need to be understood. In this study, a 2D analytical model that captures the behavior of a common drip emitter was developed and validated through experiments. The effects of independently changing the channel depth, channel width, channel length and land height on the performance were studied. The model and the key parametric insights presented have the potential to be optimized in order to guide the design of low-pressure, clog-resistant, pressure-compensating emitters.
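
    The design logic rests on the identity the abstract states, that hydraulic pumping power is the product of pressure and flow rate. A worked example under assumed numbers:

      \[
        P = \Delta p \, Q, \qquad
        \Delta p = 1\ \mathrm{bar} = 10^{5}\ \mathrm{Pa},\;
        Q = 1\ \mathrm{L/min} \approx 1.67 \times 10^{-5}\ \mathrm{m^{3}/s}
        \;\Rightarrow\; P \approx 1.7\ \mathrm{W}.
      \]

    Halving the activation pressure of the emitters at fixed flow rate therefore halves the hydraulic power the pump must supply.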

  8. Validation of symptom validity tests using a "child-model" of adult cognitive impairments.

    PubMed

    Rienstra, A; Spaan, P E J; Schmand, B

    2010-08-01

    Validation studies of symptom validity tests (SVTs) in children are uncommon. However, since children's cognitive abilities are not yet fully developed, their performance may provide additional support for the validity of these measures in adult populations. Four SVTs, the Test of Memory Malingering (TOMM), the Word Memory Test (WMT), the Amsterdam Short-Term Memory (ASTM) test, and the Word Completion Memory Test (WCMT), along with several neuropsychological instruments were administered to 48 Dutch school children aged 7-12. All children scored above the established adult cut-offs on the TOMM and the WMT. They could pass the ASTM test if their reading skills were at a level equivalent to that of 9-year-olds. All children passed our criterion of a negative WCMT score. However, the WCMT does seem sensitive to the level of verbal fluency. Implications for the applicability of these SVTs in adult populations are discussed. PMID:20484327

  9. Transient PVT measurements and model predictions for vessel heat transfer. Part II.

    SciTech Connect

    Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.; Evans, Gregory Herbert; Rice, Steven F.

    2010-07-01

    Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.

  10. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

    SciTech Connect

    Dojcsak, L.; Marriner, J.

    2010-08-01

    In this study we look at the SALT-II model of Type Ia supernova analysis, which determines the distance moduli based on the known absolute standard-candle magnitude of the Type Ia supernovae. We take a look at the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error that is determined from the data. Using the SNANA software package provided for the analysis of Type Ia supernovae, we use a standard Monte Carlo simulation to generate data with known parameters to use as a tool for analyzing the trends in the model based on certain assumptions about the intrinsic error. In order to find the best standard-candle model, we try to minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We can estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1. We can use the simulation to estimate the amount of color smearing as indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that we are able to use the RMS distribution in the variables as one method of estimating the correct intrinsic errors needed by the data to obtain the correct results for α and β. We then apply the resultant intrinsic error matrix to the real data and show our results.
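
    A minimal sketch (Python) of the standardization step described here, assuming the conventional SALT-II (Tripp) relation μ = m_B − M + αx₁ − βc and a simplified χ²/dof in which the intrinsic scatter is added in quadrature to the measurement error; variable names are illustrative, not SNANA's.

      import numpy as np

      def distance_modulus(m_B, x1, c, alpha, beta, M_abs):
          # Conventional SALT-II standardization: shape (x1) and color (c)
          # corrections with coefficients alpha and beta.
          return m_B - M_abs + alpha * x1 - beta * c

      def chi2_per_dof(mu_obs, mu_model, sigma_meas, sigma_int, n_fit=2):
          # Hubble-diagram residual chi^2/dof; sigma_int is tuned until this
          # ratio is ~1, mirroring the intrinsic-error procedure above.
          resid = np.asarray(mu_obs) - np.asarray(mu_model)
          var = np.asarray(sigma_meas) ** 2 + sigma_int ** 2
          return np.sum(resid ** 2 / var) / (resid.size - n_fit)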

  11. Validation of Community Models: 2. Development of a Baseline, Using the Wang-Sheeley-Arge Model

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    This paper is the second in a series providing independent validation of community models of the outer corona and inner heliosphere. Here I present a comprehensive validation of the Wang-Sheeley-Arge (WSA) model. These results will serve as a baseline against which to compare the next generation of comparable forecasting models. The WSA model is used by a number of agencies to predict solar wind conditions at Earth up to 4 days into the future. Given its importance to both the research and forecasting communities, it is essential that its performance be measured systematically and independently. I offer just such an independent and systematic validation. I report skill scores for the model's predictions of wind speed and interplanetary magnetic field (IMF) polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests synoptic line of sight magnetograms. For this study I generated model results for monthly magnetograms from multiple observatories, spanning the Carrington rotation range from 1650 to 2074. I compare the influence of the different magnetogram sources and performance at quiet and active times. I also consider the ability of the WSA model to forecast both sharp transitions in wind speed from slow to fast wind and reversals in the polarity of the radial component of the IMF. These results will serve as a baseline against which to compare future versions of the model as well as the current and future generation of magnetohydrodynamic models under development for forecasting use.
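
    The abstract does not spell out which skill score is used; a common choice is the mean-squared-error skill score against a reference forecast such as persistence or climatology. A minimal sketch (Python) under that assumption:

      import numpy as np

      def mse_skill_score(pred, obs, ref):
          # 1 - MSE(model)/MSE(reference): 1 is a perfect forecast, 0 is no
          # better than the reference, and negative values are worse.
          pred, obs, ref = (np.asarray(a, dtype=float) for a in (pred, obs, ref))
          return 1.0 - np.mean((pred - obs) ** 2) / np.mean((ref - obs) ** 2)

      # Illustrative daily wind speeds (km/s) against a flat climatology.
      wind_obs = [350, 420, 600, 550, 400]
      wind_wsa = [380, 400, 560, 500, 430]
      wind_ref = [450, 450, 450, 450, 450]
      print(mse_skill_score(wind_wsa, wind_obs, wind_ref))   # ~0.86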

  12. Alaska North Slope Tundra Travel Model and Validation Study

    SciTech Connect

    Harry R. Bader; Jacynthe Guimond

    2006-03-01

    of variability in snow depth cover throughout the period of field experimentation. The amount of change in disturbance indicators was greater in the tundra communities of the Foothills than in those of the Coastal Plain. However the overall level of change in both community types was less than expected. In Coastal Plain communities, ground hardness and snow slab thickness were found to play an important role in change in active layer depth and soil moisture as a result of treatment. In the Foothills communities, snow cover had the most influence on active layer depth and soil moisture as a result of treatment. Once certain minimum thresholds for ground hardness, snow slab thickness, and snow depth were attained, it appeared that little or no additive effect was realized regarding increased resistance to disturbance in the tundra communities studied. DNR used the results of this modeling project to set a standard for maximum permissible disturbance of cross-country tundra travel, with the threshold set below the widely accepted standard of Low Disturbance levels (as determined by the U.S. Fish and Wildlife Service). DNR followed the modeling project with a validation study, which seemed to support the field trial conclusions and indicated that the standard set for maximum permissible disturbance exhibits a conservative bias in favor of environmental protection. Finally, DNR established a quick and efficient tool for visual estimations of disturbance to determine when investment in field measurements is warranted. This Visual Assessment System (VAS) seemed to support the plot disturbance measurements taken during the modeling and validation phases of this project.

  13. Tsunami-HySEA model validation for tsunami current predictions

    NASA Astrophysics Data System (ADS)

    Macías, Jorge; Castro, Manuel J.; González-Vida, José Manuel; Ortega, Sergio

    2016-04-01

    Model ability to compute and predict tsunami flow velocities is of importance in risk assessment and hazard mitigation. Substantial damage can be produced by high-velocity flows, particularly in harbors and bays, even when the wave height is small. Besides, an accurate simulation of tsunami flow velocities and accelerations is fundamental for advancing the study of tsunami sediment transport. These considerations led the National Tsunami Hazard Mitigation Program (NTHMP) to propose a benchmark exercise focussed on modeling and simulating tsunami currents. Until recently, few direct measurements of tsunami velocities were available to compare with and to validate model results. After Tohoku 2011, many current-meter measurements were made, mainly in harbors and channels. In this work we present a part of the contribution made by the EDANYA group from the University of Malaga to the NTHMP workshop organized at Portland (USA), 9-10 February 2015. We have selected three out of the five proposed benchmark problems. Two of them consist of real observed data from the Tohoku 2011 event, one at Hilo Harbor (Hawaii) and the other at Tauranga Bay (New Zealand). The third one consists of laboratory experimental data for the inundation of Seaside City in Oregon. Acknowledgements: This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government Research project DAIFLUID (MTM2012-38383-C02-01) and Universidad de Málaga, Campus de Excelencia Andalucía TECH. The GPU and multi-GPU computations were performed at the Unit of Numerical Methods (UNM) of the Research Support Central Services (SCAI) of the University of Malaga.

  14. Nonparametric model validations for hidden Markov models with applications in financial econometrics

    PubMed Central

    Zhao, Zhibiao

    2011-01-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601
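
    The decision rule itself is simple once the envelope exists: the parametric transition-density estimate is accepted only if it lies entirely inside the simultaneous confidence envelope. A minimal sketch (Python) of that containment check, with the construction of the nonparametric envelope itself left out:

      import numpy as np

      def density_within_envelope(parametric, lower, upper):
          # All three arrays are evaluated on a common grid of the observable
          # variable; a violation anywhere rejects the parametric model.
          d, lo, hi = (np.asarray(a, dtype=float)
                       for a in (parametric, lower, upper))
          return bool(np.all((d >= lo) & (d <= hi)))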

  15. Modelling Ar II spectral emission from the ASTRAL helicon plasma

    NASA Astrophysics Data System (ADS)

    Munoz Burgos, Jorge; Boivin, Robert; Loch, Stuart; Kamar, Ola; Ballance, Connor; Pindzola, Mitch

    2008-11-01

    We describe our spectral modeling of Ar II emission from the ASTRAL helicon plasma at Auburn University. Collisional-radiative theory is used to model the emitted spectrum, with account being taken of the density and temperature variation along the line of sight. This study has two main aims: firstly, to test the atomic data used in the model and, secondly, to identify spectral line ratios in the 200 nm - 1000 nm range that could be used as temperature diagnostics. Using the temperature at which Ar II emission starts to be seen, we have been able to test recent ionization and recombination data. Using selected spectral lines, we were then able to test the importance of the continuum-coupling effects included in the most recent Ar+ electron-impact excitation data. Selected spectral line ratios have been identified that show a strong temperature variation and have potential as a temperature diagnostic.
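
    The diagnostic use of a line ratio reduces to inverting a monotonic ratio-versus-temperature curve. The sketch below (Python) shows that inversion step only; the grid values are hypothetical placeholders, not the collisional-radiative results of this work.

      import numpy as np

      # Hypothetical line-intensity ratio versus electron temperature (eV);
      # a real curve would come from the collisional-radiative model.
      te_grid = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
      ratio_grid = np.array([0.12, 0.35, 0.80, 1.60, 2.50])

      def infer_te(measured_ratio):
          # Invert the monotonic ratio curve by linear interpolation.
          return float(np.interp(measured_ratio, ratio_grid, te_grid))

      print(infer_te(1.0))   # temperature estimate from the placeholder curve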

  16. On the verification and validation of detonation models

    NASA Astrophysics Data System (ADS)

    Quirk, James

    2013-06-01

    This talk will consider the verification and validation of detonation models, such as Wescott-Stewart-Davis (Journal of Applied Physics, 2005), from the perspective of the American Institute of Aeronautics and Astronautics policy on numerical accuracy (AIAA J. Vol. 36, No. 1, 1998). A key aspect of the policy is that accepted documentation procedures must be used for journal articles with the aim of allowing the reported work to be reproduced by the interested reader. With the rise of electronic documents since the policy was formulated, it is now possible to satisfy this mandate in its strictest sense: that is, it is now possible to run a computational verification study directly in a PDF, thereby allowing a technical author to report numerical subtleties that traditionally have been ignored. The motivation for this document-centric approach is discussed elsewhere (Quirk 2003, Adaptive Mesh Refinement Theory and Practice, Springer), leaving the talk to concentrate on specific detonation examples that should be of broad interest to the shock-compression community.

  17. Validation of body composition models for high school wrestlers.

    PubMed

    Williford, H N; Smith, J F; Mansfield, E R; Conerly, M D; Bishop, P A

    1986-04-01

    This study investigates the utility of two equations for predicting minimum wrestling weight and three equations for predicting body density for the population of high school wrestlers. A sample of 54 wrestlers was assessed for body density by underwater weighing, residual volume by helium dilution, and selected anthropometric measures. The differences between observed and predicted responses were analyzed for the five models. Four statistical tests were used to validate the equations, including tests for the mean of differences, proportion of positive differences, equality of standard errors from regression, and equivalence of regression coefficients between original and second-sample data. The Michael and Katch equation and two Forsyth and Sinning equations (FS1 and FS2) for body density did not predict as well as expected. The Michael and Katch equation tends to overpredict body density while FS1 underpredicts. The FS2 equation, consisting of a constant adjustment to FS1, predicts well near the mean but not at the ends of the sample range. The two Tcheng and Tipton equations produce estimates which slightly but consistently overpredict minimum wrestling weight, the long-form equation by 2.5 pounds and the short-form by 3.8 pounds. As a result the proportion of positive differences is less than would be expected. But based on the tests for the standard errors and regression coefficients, the evidence does not uniformly reject these two equations.
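
    As an illustration of the quantities involved, the sketch below (Python) converts a measured body density to percent fat with Siri's equation and derives a minimum weight at an assumed 5% fat floor. Siri's equation and the 5% floor are common conventions assumed here for illustration; they are not the specific prediction equations evaluated in the paper.

      def siri_percent_fat(density):
          # Siri's equation: %fat = 495/D - 450, with D in g/cm^3.
          return 495.0 / density - 450.0

      def minimum_weight(weight_lb, density, min_fat_fraction=0.05):
          # Scale fat-free weight up to an assumed minimum fat fraction.
          fat_fraction = siri_percent_fat(density) / 100.0
          fat_free_lb = weight_lb * (1.0 - fat_fraction)
          return fat_free_lb / (1.0 - min_fat_fraction)

      # Illustrative wrestler: 150 lb at a measured density of 1.075 g/cm^3.
      print(minimum_weight(150.0, 1.075))   # ~141 lb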

  18. Validation of Community Models: Identifying Events in Space Weather Model Timelines

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    I develop and document a set of procedures which test the quality of predictions of solar wind speed and polarity of the interplanetary magnetic field (IMF) made by coupled models of the ambient solar corona and heliosphere. The Wang-Sheeley-Arge (WSA) model is used to illustrate the application of these validation procedures. I present an algorithm which detects transitions of the solar wind from slow to high speed. I also present an algorithm which processes the measured polarity of the outward directed component of the IMF. This removes high-frequency variations to expose the longer-scale changes that reflect IMF sector changes. I apply these algorithms to WSA model predictions made using a small set of photospheric synoptic magnetograms obtained by the Global Oscillation Network Group as input to the model. The results of this preliminary validation of the WSA model (version 1.6) are summarized.
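
    A minimal sketch (Python) of the kind of slow-to-fast transition detector described, flagging times when the wind speed rises above a fast threshold after having been in the slow regime; the thresholds are illustrative, not the paper's.

      def detect_slow_to_fast(times, speeds, slow_max=400.0, fast_min=500.0):
          # Record a transition each time the wind crosses from the slow
          # regime (< slow_max km/s) into the fast regime (> fast_min km/s).
          events = []
          armed = speeds[0] < slow_max
          for t, v in zip(times, speeds):
              if armed and v > fast_min:
                  events.append(t)
                  armed = False
              elif v < slow_max:
                  armed = True
          return events

      # Hourly speeds (km/s) with one slow-to-fast transition at t = 2.
      print(detect_slow_to_fast(range(6), [350, 380, 520, 610, 450, 390]))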

  19. An independent verification and validation of the Future Theater Level Model conceptual model

    SciTech Connect

    Hartley, D.S. III; Kruse, K.L.; Martellaro, A.J.; Packard, S.L.; Thomas, B. Jr.; Turley, V.K.

    1994-08-01

    This report describes the methodology and results of independent verification and validation performed on a combat model in its design stage. The combat model is the Future Theater Level Model (FTLM), under development by The Joint Staff/J-8. J-8 has undertaken its development to provide an analysis tool that addresses the uncertainties of combat more directly than previous models and yields more rapid study results. The methodology adopted for this verification and validation consisted of document analyses. Included were detailed examination of the FTLM design documents (at all stages of development), the FTLM Mission Needs Statement, and selected documentation for other theater level combat models. These documents were compared to assess the FTLM as to its design stage, its purpose as an analytical combat model, and its capabilities as specified in the Mission Needs Statement. The conceptual design passed those tests. The recommendations included specific modifications as well as a recommendation for continued development. The methodology is significant because independent verification and validation have not been previously reported as being performed on a combat model in its design stage. The results are significant because The Joint Staff/J-8 will be using the recommendations from this study in determining whether to proceed with development of the model.

  20. Modeling anisoplanatism in the Keck II laser guide star AO system

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Michael P.; Witzel, Gunther; Britton, Matthew C.; Ghez, Andrea M.; Meyer, Leo; Sitarski, Breann N.; Cheng, Carina; Becklin, Eric E.; Campbell, Randall D.; Do, Tuan; Lu, Jessica R.; Matthews, Keith; Morris, Mark R.; Neyman, Christopher R.; Tyler, Glenn A.; Wizinowich, Peter L.; Yelda, Sylvana

    2012-07-01

    Anisoplanatism is a primary source of photometric and astrometric error in single-conjugate adaptive optics. We present initial results of a project to model the off-axis optical transfer function in the adaptive optics system at the Keck II telescope. The model currently accounts for the effects of atmospheric anisoplanatism in natural guide star observations. The model for the atmospheric contribution to the anisoplanatic transfer function uses contemporaneous MASS/DIMM measurements. Here we present the results of a validation campaign using observations of naturally guided visual binary stars under varying conditions, parameterized by the r0 and θ0 parameters of the Cn² atmospheric turbulence profile. We are working to construct a model of the instrumental field-dependent aberrations in the NIRC2 camera using an artificial source in the Nasmyth focal plane. We also discuss our plans to extend the work to laser guide star operation.
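
    For scale, the standard Kolmogorov-turbulence result relates the residual wavefront error from angular anisoplanatism directly to the isoplanatic angle θ0 measured by MASS/DIMM; the full off-axis transfer-function model in this work contains considerably more than this single term:

      \[
        \sigma^{2}_{\mathrm{aniso}} = \left( \frac{\theta}{\theta_0} \right)^{5/3} \ \mathrm{rad}^{2},
      \]

    so a pair of sources separated by \(\theta = \theta_0\) already suffers 1 rad² of differential wavefront variance, and image quality degrades rapidly beyond that separation.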

  1. Nonlinear convective pulsation models of type II Cepheids

    NASA Astrophysics Data System (ADS)

    Smolec, Radoslaw

    2015-08-01

    We present a grid of nonlinear convective pulsation models of type-II Cepheids: BL Her stars, W Vir stars and RV Tau stars. The models cover a wide range of masses, luminosities, effective temperatures and chemical compositions. The most interesting result is the detection of deterministic chaos in the models. Different routes to chaos are detected (period doubling, intermittent route), as well as a variety of phenomena intrinsic to chaotic dynamics (periodic islands within chaotic bands, crisis bifurcation, type-I and type-III intermittency). Some of the phenomena (period doubling in BL Her and in RV Tau stars, irregular pulsation of RV Tau stars) are well known in the pulsation of type-II Cepheids. Prospects of discovering the others are briefly discussed. The transition from BL Her-type pulsation through W Vir type to RV Tau type is analysed. In the most luminous models a dynamical instability is detected, which indicates that pulsation-driven mass loss is an important process occurring in type-II Cepheids.

  2. Validation of the integration of CFD and SAS4A/SASSYS-1: Analysis of EBR-II shutdown heat removal test 17

    SciTech Connect

    Thomas, J. W.; Fanning, T. H.; Vilim, R.; Briggs, L. L.

    2012-07-01

    Recent analyses have demonstrated the need to model multidimensional phenomena, particularly thermal stratification in outlet plena, during safety analyses of loss-of-flow transients of certain liquid-metal cooled reactor designs. Therefore, Argonne's reactor systems safety code SAS4A/SASSYS-1 is being enhanced by integrating 3D computational fluid dynamics models of the plena. A validation exercise of the new tool is being performed by analyzing the protected loss-of-flow event demonstrated by the EBR-II Shutdown Heat Removal Test 17. In this analysis, the behavior of the coolant in the cold pool is modeled using the CFD code STAR-CCM+, while the remainder of the cooling system and the reactor core are modeled with SAS4A/SASSYS-1. This paper summarizes the code integration strategy and provides the predicted 3D temperature and velocity distributions inside the cold pool during SHRT-17. The results of the coupled analysis should be considered preliminary at this stage, as the exercise pointed to the need to improve the CFD model of the cold pool tank. (authors)
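
    The coupling strategy, in which the systems code and the CFD domain exchange interface boundary conditions every time step, can be sketched with a deliberately simple toy problem (the physics, interface variables and relaxation constants below are illustrative stand-ins; the paper does not specify the SAS4A/SASSYS-1 and STAR-CCM+ exchange at this level of detail):

```python
import numpy as np

def system_code_step(T_core, T_pool_out, dt, q=0.5):
    """Toy 0-D 'system code': core outlet temperature driven by decay heat
    and cooled against the pool temperature returned by the 'CFD' domain."""
    return T_core + dt * (q - 0.1 * (T_core - T_pool_out))

def cfd_step(T_pool, T_inlet, dt, alpha=0.05):
    """Toy 1-D 'CFD' plenum: explicit diffusion with the inlet boundary
    condition supplied by the system code (a stand-in for stratification)."""
    T = T_pool.copy()
    T[0] = T_inlet                                   # interface BC from system code
    T[1:-1] += alpha * dt * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                                    # zero-gradient outflow
    return T

T_core, T_pool, dt = 1.0, np.ones(50), 0.1
for _ in range(1000):
    T_core = system_code_step(T_core, T_pool[-1], dt)   # CFD -> system code
    T_pool = cfd_step(T_pool, T_core, dt)               # system code -> CFD
print(f"core outlet: {T_core:.3f}, pool outlet: {T_pool[-1]:.3f}")
```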

  3. Convergent and Discriminant Validation of Three Classroom Observation Systems: A Proposed Model.

    ERIC Educational Resources Information Center

    Borich, Gary D.; Malitz, David

    Evaluated is the validity of the behavioral categories held in common among three classroom observation systems. The validity model employed was that reported by Campbell and Fiske (1959) which requires that both convergent and discriminant validity be demonstrated. These procedures were applied to data obtained from the videotapes of 62 teacher…

  4. Theoretical models for Type I and Type II supernova

    SciTech Connect

    Woosley, S.E.; Weaver, T.A.

    1985-01-01

    Recent theoretical progress in understanding the origin and nature of Type I and Type II supernovae is discussed. New Type II presupernova models characterized by a variety of iron core masses at the time of collapse are presented and the sensitivity to the reaction rate /sup 12/C(..cap alpha..,..gamma..)/sup 16/O explained. Stars heavier than about 20 M/sub solar/ must explode by a ''delayed'' mechanism not directly related to the hydrodynamical core bounce and a subset is likely to leave black hole remnants. The isotopic nucleosynthesis expected from these massive stellar explosions is in striking agreement with the sun. Type I supernovae result when an accreting white dwarf undergoes a thermonuclear explosion. The critical role of the velocity of the deflagration front in determining the light curve, spectrum, and, especially, isotopic nucleosynthesis in these models is explored. 76 refs., 8 figs.

  5. Real-time infrared signature model validation for hardware-in-the-loop simulations

    NASA Astrophysics Data System (ADS)

    Sanders, Jeffrey S.; Peters, Trina S.

    1997-07-01

    Techniques and tools for validation of real-time infrared target signature models are presented. The model validation techniques presented in this paper were developed for hardware-in-the-loop (HWIL) simulations at the U.S. Army Missile Command's Research, Development, and Engineering Center. Real-time target model validation is a required deliverable to the customer of a HWIL simulation facility and is a critical part of ensuring the fidelity of a HWIL simulation. There are two levels of real-time target model validation. The first level is comparison of the target model to some baseline or measured data, which answers the question "are the simulation inputs correct?". The second level of validation is a simulation validation, which answers the question "for a given target model input, is the simulation hardware and software generating the correct output?". This paper deals primarily with the first level of target model validation. IR target signature models have often been validated by subjective visual inspection or by objective, but limited, statistical comparisons. Subjective methods can be very satisfying to the simulation developer but offer little comfort to the simulation customer since subjective methods cannot be documented. Generic statistical methods offer a level of documentation, yet are often not robust enough to fully test the fidelity of an IR signature. Advances in infrared seeker and sensor technology have led to the necessity of system-specific target model validation. For any HWIL simulation it must be demonstrated that the sensor responds to the real-time signature model in a manner which is functionally equivalent to the sensor's response to a baseline model. Depending on the application, the baseline can be measured IR imagery or the output of a validated IR signature prediction code. Tools are described that generate validation data for HWIL simulations at MICOM and example real-time model validations are presented.
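
    A documentable first-level comparison reduces each model/baseline frame pair to objective numbers. A minimal sketch (the metrics below, pixel RMSE and normalized cross-correlation, are generic choices, not MICOM's actual acceptance criteria):

```python
import numpy as np

def image_metrics(model_img, baseline_img):
    """Pixel-wise RMSE and zero-mean normalized cross-correlation between a
    real-time model frame and a baseline frame of identical shape."""
    m = np.asarray(model_img, float)
    b = np.asarray(baseline_img, float)
    rmse = np.sqrt(np.mean((m - b) ** 2))
    mc, bc = m - m.mean(), b - b.mean()
    ncc = np.sum(mc * bc) / np.sqrt(np.sum(mc**2) * np.sum(bc**2))
    return rmse, ncc

rng = np.random.default_rng(0)
baseline = rng.normal(300.0, 5.0, (128, 128))    # synthetic radiance frame
model = baseline + rng.normal(0.0, 1.0, (128, 128))
rmse, ncc = image_metrics(model, baseline)
print(f"RMSE = {rmse:.2f}, NCC = {ncc:.3f}")
```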

  6. Modeling and experimental validation of unsteady impinging flames

    SciTech Connect

    Fernandes, E.C.; Leandro, R.E.

    2006-09-15

    .03, and β_Env = 1.0. A physical analysis of the proportionality constant β showed that for the disc flames, τ corresponds to the ratio between H and the velocity of the coherent structures. In the case of envelope and cool central core flames, τ corresponds to the ratio between H and the mean jet velocity. The predicted frequency fits the experimental data, supporting the validity of the mathematical modeling, empirical formulation, and assumptions made. (author)

  7. Uncalibrated modelling of conservative tracer and pesticide leaching to groundwater: comparison of potential Tier II exposure assessment models.

    PubMed

    Fox, Garey A; Sabbagh, George J; Chen, Wenlin; Russell, Mark H

    2006-06-01

    The Root Zone Water Quality Model (RZWQM) and Pesticide Root Zone Model (PRZM) are currently being considered by the Office of Pesticide Programs (OPP) in the United States Environmental Protection Agency (US EPA) for Tier II screening of pesticide leaching to groundwater (November 2005). The objective of the present research was to compare RZWQM and PRZM based on observed conservative tracer and pesticide pore water and soil concentrations collected in two unique groundwater leaching studies in North Carolina and Georgia. These two sites had been used previously by the Federal Insecticide, Fungicide and Rodenticide Act (FIFRA) Environmental Model Validation Task Force (EMVTF) in the validation of PRZM. As in the FIFRA EMVTF PRZM validation, 'cold' modelling using input parameters based on EPA guidelines/databases and 'site-specific' modelling using field-measured soil and hydraulic parameters were performed with a recently released version of RZWQM called RZWQM-NAWQA (National Water Quality Assessment). Model calibration was not performed for either the 'cold' or 'site-specific' modelling. The models were compared based on predicted pore water and soil concentrations of bromide and pesticides throughout the soil profile. Both models tended to predict faster movement through the soil profile than observed. Based on a quantitative normalised objective function (NOF), RZWQM-NAWQA generally outperformed or was equivalent to PRZM in simulating pore water and soil concentrations. Both models were more successful in predicting soil concentrations (i.e. NOF < 1.0 for site-specific data, which satisfies site-specific applicability) than they were at predicting pore water concentrations.
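
    The normalised objective function used for scoring is commonly defined as the root-mean-square error of simulated versus observed values divided by the observed mean; a minimal sketch under that assumption (the paper's exact formulation may differ):

```python
import numpy as np

def nof(observed, simulated):
    """Normalised objective function: RMSE divided by the observed mean.
    NOF = 0 is a perfect fit; NOF < 1.0 is the acceptability criterion
    quoted for site-specific applicability."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return rmse / observed.mean()

obs = [0.8, 1.2, 2.0, 1.5, 0.9]   # e.g. measured soil concentrations (mg/kg)
sim = [1.0, 1.1, 1.7, 1.8, 0.7]   # model output at the same depths
print(f"NOF = {nof(obs, sim):.2f}")
```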

  8. Experiments for calibration and validation of plasticity and failure material modeling: 304L stainless steel.

    SciTech Connect

    Lee, Kenneth L.; Korellis, John S.; McFadden, Sam X.

    2006-01-01

    Experimental data for material plasticity and failure model calibration and validation were obtained from 304L stainless steel. Model calibration data were taken from smooth tension, notched tension, and compression tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path dependent combinations of internal pressure, extension, and torsion.

  9. Atomic Data and Spectral Model for Fe II

    NASA Astrophysics Data System (ADS)

    Bautista, Manuel A.; Fivet, Vanessa; Ballance, Connor; Quinet, Pascal; Ferland, Gary; Mendoza, Claudio; Kallman, Timothy R.

    2015-08-01

    We present extensive calculations of radiative transition rates and electron impact collision strengths for Fe II. The data sets involve 52 levels from the 3d⁷, 3d⁶4s, and 3d⁵4s² configurations. Computations of A-values are carried out with a combination of state-of-the-art multiconfiguration approaches, namely the relativistic Hartree–Fock, Thomas–Fermi–Dirac potential, and Dirac–Fock methods, while the R-matrix plus intermediate coupling frame transformation, Breit–Pauli R-matrix, and Dirac R-matrix packages are used to obtain collision strengths. We examine the advantages and shortcomings of each of these methods, and estimate rate uncertainties from the resulting data dispersion. We proceed to construct excitation balance spectral models, and compare the predictions from each data set with observed spectra from various astronomical objects. We are thus able to establish benchmarks in the spectral modeling of [Fe II] emission in the IR and optical regions as well as in the UV Fe II absorption spectra. Finally, we provide diagnostic line ratios and line emissivities for emission spectroscopy as well as column densities for absorption spectroscopy. All atomic data and models are available online and through the AtomPy atomic data curation environment.

  10. Model selection and validation of extreme distribution by goodness-of-fit test based on conditional position

    NASA Astrophysics Data System (ADS)

    Abidin, Nahdiya Zainal; Adam, Mohd Bakri

    2014-09-01

    In Extreme Value Theory, an important aspect of model extrapolation is modelling the extreme behavior, because the choice of the extreme value distribution affects the prediction that is about to be made. Thus, model validation by a Goodness-of-fit (GoF) test is necessary. In this study, GoF tests were used to fit the Generalized Extreme Value (GEV) Type-II model to simulated observed values. The parameters μ, σ and ξ were estimated by Maximum Likelihood. Critical values based on conditional positions were developed by Monte-Carlo simulation, and the powers of the tests were identified by a power study. Data distributed according to the GEV Type-II distribution were used to test whether the critical values developed are able to confirm the fit between the GEV Type-II model and the data. To confirm the fit, the test statistic of the GoF test should be smaller than the critical value.
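
    The overall recipe (maximum-likelihood fit, Monte-Carlo critical values, compare the statistic) can be sketched with scipy; note this sketch uses a Kolmogorov-Smirnov statistic rather than the conditional-position statistics developed in the paper (in scipy's parameterization, a negative shape c corresponds to the Fréchet, i.e. Type-II, domain):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.genextreme.rvs(c=-0.3, loc=10.0, scale=2.0, size=200,
                            random_state=rng)          # Type-II sample

# Maximum-likelihood estimates of (shape, location, scale)
params = stats.genextreme.fit(data)
d_obs = stats.kstest(data, "genextreme", args=params).statistic

# Monte-Carlo critical value at the 5% level under the fitted model
d_sim = []
for _ in range(200):
    sample = stats.genextreme.rvs(*params, size=len(data), random_state=rng)
    p = stats.genextreme.fit(sample)
    d_sim.append(stats.kstest(sample, "genextreme", args=p).statistic)
d_crit = np.quantile(d_sim, 0.95)

print(f"D = {d_obs:.4f}, critical value = {d_crit:.4f}, fit confirmed: {d_obs < d_crit}")
```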

  11. Videofluoroscopic Validation of a Translational Murine Model of Presbyphagia.

    PubMed

    Lever, Teresa E; Brooks, Ryan T; Thombs, Lori A; Littrell, Loren L; Harris, Rebecca A; Allen, Mitchell J; Kadosh, Matan D; Robbins, Kate L

    2015-06-01

    Presbyphagia affects approximately 40% of otherwise healthy people over 60 years of age. Hence, it is a condition of primary aging rather than a consequence of primary disease. This distinction warrants systematic investigations to understand the causal mechanisms of aging versus disease specifically on the structure and function of the swallowing mechanism. Toward this goal, we have been studying healthy aging C57BL/6 mice (also called B6), the most popular laboratory rodent for biomedical research. The goal of this study was to validate this strain as a model of presbyphagia for translational research purposes. We tested two age groups of B6 mice: young (4-7 months; n = 16) and old (18-21 months; n = 11). Mice underwent a freely behaving videofluoroscopic swallow study (VFSS) protocol developed in our lab. VFSS videos (recorded at 30 frames per second) were analyzed frame-by-frame to quantify 15 swallow metrics. Six of the 15 swallow metrics were significantly different between young and old mice. Compared to young mice, old mice had significantly longer pharyngeal and esophageal transit times (p = 0.038 and p = 0.022, respectively), swallowed larger boluses (p = 0.032), and had a significantly higher percentage of ineffective primary esophageal swallows (p = 0.0405). In addition, lick rate was significantly slower for old mice, measured using tongue cycle rate (p = 0.0034) and jaw cycle rate (p = 0.0020). This study provides novel evidence that otherwise healthy aging B6 mice indeed develop age-related changes in swallow function resembling presbyphagia in humans. Specifically, aging B6 mice have a generally slow swallow that spans all stages of swallowing: oral, pharyngeal, and esophageal. The next step is to build upon this foundational work by exploring the responsible mechanisms of presbyphagia in B6 mice.

  12. Inelastic properties of magnetorheological composites: II. Model, identification of parameters

    NASA Astrophysics Data System (ADS)

    Kaleta, Jerzy; Lewandowski, Daniel; Zietek, Grazyna

    2007-10-01

    As a result of a two-part research project the inelastic properties of a selected group of magnetorheological composites in cyclic shear conditions have been identified. In the first part the fabrication of the composites, their structure, the control-measurement setup, the test methods and the experimental results were described. In the second part (presented here), the experimental data are used to construct a constitutive model and identify it. A four-parameter model of an elastic/viscoplastic body was adopted for description. The model coefficients were made dependent on magnetic field strength H. The model was analysed and procedures for its identification were designed. Two-phase identification of the model parameters was carried out. The model has been shown to be valid in a frequency range above 5 Hz.

  13. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    NASA Technical Reports Server (NTRS)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15 to NOAA-18 and NOAA-19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers north-western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) dichotomous statistics are used to evaluate the capability of the method to identify rain and no-rain clouds; 2) accuracy statistics are applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfalls measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer in winter conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams, where the 183-WSL
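
    The categorical indices quoted above derive from a 2x2 rain/no-rain contingency table of satellite retrieval versus radar; a minimal sketch of the standard definitions (the counts below are illustrative, not the NIMROD figures):

```python
def categorical_scores(hits, false_alarms, misses, correct_negatives):
    """Standard dichotomous verification scores against a radar 'truth'."""
    pod = hits / (hits + misses)                       # probability of detection
    far = false_alarms / (hits + false_alarms)         # false alarm ratio
    pofd = false_alarms / (false_alarms + correct_negatives)
    hk = pod - pofd                                    # Hanssen-Kuipers skill score
    return pod, far, hk

pod, far, hk = categorical_scores(hits=8000, false_alarms=4200,
                                  misses=1900, correct_negatives=85000)
print(f"POD = {pod:.2f}, FAR = {far:.2f}, HK = {hk:.2f}")
```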

  14. Second-Moment RANS Model Verification and Validation Using the Turbulence Modeling Resource Website (Invited)

    NASA Technical Reports Server (NTRS)

    Eisfeld, Bernhard; Rumsey, Chris; Togiti, Vamshi

    2015-01-01

    The implementation of the SSG/LRR-omega differential Reynolds stress model into the NASA flow solvers CFL3D and FUN3D and the DLR flow solver TAU is verified by studying the grid convergence of the solution of three different test cases from the Turbulence Modeling Resource Website. The model's predictive capabilities are assessed based on four basic and four extended validation cases also provided on this website, involving attached and separated boundary layer flows, effects of streamline curvature and secondary flow. Simulation results are compared against experimental data and predictions by the eddy-viscosity models of Spalart-Allmaras (SA) and Menter's Shear Stress Transport (SST).
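
    The verification half of such a study typically quantifies grid convergence by estimating the observed order of accuracy from three systematically refined grids; a minimal sketch for a constant refinement ratio (the solution values below are made up, not taken from the Turbulence Modeling Resource cases):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p and the Richardson-extrapolated value
    from solutions on three grids with constant refinement ratio r."""
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return p, f_exact

# Illustrative drag coefficients on coarse / medium / fine grids
p, f_exact = observed_order(0.02750, 0.02690, 0.02675, r=2.0)
print(f"observed order p = {p:.2f}, extrapolated value = {f_exact:.5f}")
```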

  15. Distinguishing 'new' from 'old' organic carbon in reclaimed coal mine sites using thermogravimetry: II. Field validation

    SciTech Connect

    Maharaj, S.; Barton, C.D.; Karathanasis, T.A.D.; Rowe, H.D.; Rimmer, S.M.

    2007-04-15

    Thermogravimetry was used under laboratory conditions to differentiate 'new' and 'old' organic carbon (C) by using grass litter, coal, and limestone to represent the different C fractions. Thermogravimetric and derivative thermogravimetry curves showed pyrolysis peaks at distinctively different temperatures, with the peak for litter occurring at 270 to 395 °C, for coal at 415 to 520 °C, and for limestone at 700 to 785 °C. To validate this method in a field setting, we studied four reforested coal mine sites in Kentucky representing a chronosequence since reclamation: 0 and 2 years, located at Bent Mountain, and 3 and 8 years, located at the Starfire mine. A nonmined mature (approximately 80 years old) stand at Robinson Forest, Kentucky, was selected as a reference location. Results indicated a general peak increase in the 270 to 395 °C region with increased time, signifying an increase in the 'new' organic matter (OM) fraction. For the Bent Mountain site, the OM fraction increased from 0.03 to 0.095% between years 0 and 2, whereas the Starfire site showed an increase from 0.095 to 1.47% between years 3 and 8. This equates to a C sequestration rate of 2.92 Mg ha⁻¹ yr⁻¹ for 'new' OM in the upper 10-cm layer during the 8 years of reclamation on eastern Kentucky reclaimed coal mine sites. Results suggest that stable isotopes and elemental data can be used as proxy tools for qualifying soil organic C (SOC) changes over time on reclaimed coal mine sites but cannot be used to determine the exact SOC accumulation rate. However, the results suggest that the thermogravimetric and derivative thermogravimetry methods can be used to quantify SOC accumulation and have the potential to be a more reliable, cost-effective, and rapid means to determine the new organic C fraction in mixed geological material, especially in areas dominated by coal and carbonate materials.

  16. Importance of Sea Ice for Validating Global Climate Models

    NASA Technical Reports Server (NTRS)

    Geiger, Cathleen A.

    1997-01-01

    Reproduction of current-day large-scale physical features and processes is a critical test of global climate model performance. Without this benchmark, prognoses of future climate conditions are at best speculation. A fundamental question relevant to this issue is: which processes and observations are both robust and sensitive enough to be used for model validation, and are they also indicators of the problem at hand? In the case of global climate, one of the problems at hand is to distinguish between anthropogenic and naturally occurring climate responses. The polar regions provide an excellent testing ground to examine this problem because few humans make their livelihood there, such that anthropogenic influences in the polar regions usually spawn from global redistribution of a source originating elsewhere. Concomitantly, polar regions are one of the few places where responses to climate are non-anthropogenic. Thus, if an anthropogenic effect has reached the polar regions (e.g. the case of upper atmospheric ozone sensitivity to CFCs), it has most likely had an impact globally but is more difficult to sort out from local effects in areas where anthropogenic activity is high. Within this context, sea ice has served as both a monitoring platform and sensitivity parameter of polar climate response since the time of Fridtjof Nansen. Sea ice resides in the polar regions at the air-sea interface such that changes in either the global atmospheric or oceanic circulation set up complex non-linear responses in sea ice which are uniquely determined. Sea ice currently covers a maximum of about 7% of the earth's surface but was completely absent during the Jurassic Period and far more extensive during the various ice ages. It is also geophysically very thin (typically <10 m in Arctic, <3 m in Antarctic) compared to the troposphere (roughly 10 km) and deep ocean (roughly 3 to 4 km). Because of these unique conditions, polar researchers regard sea ice as one of the

  17. Coupled two-dimensional main-chain torsional potential for protein dynamics II: performance and validation.

    PubMed

    Gao, Ya; Li, Yongxiu; Mou, Lirong; Hu, Wenxin; Zheng, Jun; Zhang, John Z H; Mei, Ye

    2015-03-19

    The accuracy of force fields is of utmost importance in molecular modeling of proteins. Despite successful applications of force fields for about the past 30 years, some inherent flaws lying in force fields, such as biased secondary propensities and fixed atomic charges, have been observed in different aspects of biomolecular research; hence, a correction to current force fields is desirable. Because of the simplified functional form and the limited number of parameters for main chain torsion (MCT) in traditional force fields, it is not easy to propose an exquisite force field that is well-balanced among various conformations. Recently, AMBER-compatible force fields with coupled MCT term have been proposed, which show some improvement over AMBER03 and AMBER99SB force fields. In this work, further calibration of the torsional parameters has been conducted by changing the solvation model in quantum mechanical calculation and minimizing the deviation from the nuclear magnetic resonance experiments for some benchmark model systems and a folded protein. The results show that the revised force fields give excellent agreement with experiments in J coupling, chemical shifts, and secondary structure populations. In addition, the polarization effect is found to be crucial for the systems with ordered secondary structures. PMID:25719206

  18. Gas dynamics modeling of the HYLIFE-II reactor

    SciTech Connect

    Jantzen, C.

    1995-08-01

    Gas dynamics in the IFE reactor HYLIFE-II is modeled using the code TSUNAMI, a 2-D shock solver that uses the Godunov method with operator splitting. Results from a cylindrically symmetric simulation indicate that an initial low-density burst of high-energy particles enters the final focus transport lens within 40 microseconds after the blast, much faster than the proposed 1 millisecond shutter closing time. After approximately 100 microseconds the chamber debris flux levels off to one eighth of its peak value and maintains this intensity until the shutter closes. Although initial protective jet ablation is considered, neither secondary radiation nor condensation is modeled. Therefore the results are conservative.

  19. An integrated model of the TOPAZ-II electromagnetic pump

    SciTech Connect

    El-Genk, M.S.; Paramonov, D.V. (Inst. of Space Nuclear Power Studies)

    1994-11-01

    A detailed model of the electromagnetic pump of the TOPAZ-II space nuclear reactor power system is developed and compared with experimental data. The magnetic field strength in the pump depends not only on the current supplied by the pump thermionic fuel elements in the reactor core but also on the temperature of the coolant, the magnetic coil, and the pump structure. All electric and thermal properties of the coolant, wall material of the pump ducts, and electric leads are taken to be temperature dependent. The model predictions are in good agreement with experimental data.

  20. MATSIM -The Development and Validation of a Numerical Voxel Model based on the MATROSHKA Phantom

    NASA Astrophysics Data System (ADS)

    Beck, Peter; Rollet, Sofia; Berger, Thomas; Bergmann, Robert; Hajek, Michael; Latocha, Marcin; Vana, Norbert; Zechner, Andrea; Reitz, Guenther

    The AIT Austrian Institute of Technology coordinates the project MATSIM (MATROSHKA Simulation) in collaboration with the Vienna University of Technology and the German Aerospace Center. The aim of the project is to develop a voxel-based model of the MATROSHKA anthropomorphic torso used at the International Space Station (ISS) as a foundation for performing Monte Carlo high-energy particle transport simulations for different irradiation conditions. Funded by the Austrian Space Applications Programme (ASAP), MATSIM is a co-investigation with the European Space Agency (ESA) ELIPS project MATROSHKA, an international collaboration of more than 18 research institutes and space agencies from all over the world, under the science and project lead of the German Aerospace Center. The MATROSHKA facility is designed to determine the radiation exposure of an astronaut onboard the ISS, especially during an extravehicular activity. The numerical model developed in the frame of MATSIM is validated by reference measurements. In this report we give an overview of the model development and compare photon and neutron irradiations of the detector-equipped phantom torso with Monte Carlo simulations using FLUKA. Exposure to Co-60 photons was realized in the standard irradiation laboratory at Seibersdorf, while investigations with neutrons were performed at the thermal column of the Vienna TRIGA Mark-II reactor. The phantom was loaded with passive thermoluminescence dosimeters. In addition, first results of the calculated dose distribution within the torso are presented for a simulated exposure in low-Earth orbit.

  1. Transfer matrix modeling and experimental validation of cellular porous material with resonant inclusions.

    PubMed

    Doutres, Olivier; Atalla, Noureddine; Osman, Haisam

    2015-06-01

    Porous materials are widely used for improving sound absorption and sound transmission loss of vibrating structures. However, their efficiency is limited to medium and high frequencies of sound. A solution for improving their low frequency behavior while keeping an acceptable thickness is to embed resonant structures such as Helmholtz resonators (HRs). This work investigates the absorption and transmission acoustic performances of a cellular porous material with a two-dimensional periodic arrangement of HR inclusions. A low frequency model of a resonant periodic unit cell based on the parallel transfer matrix method is presented. The model is validated by comparison with impedance tube measurements and simulations based on both the finite element method and a homogenization based model. At the HR resonance frequency (i) the transmission loss is greatly improved and (ii) the sound absorption of the foam can be either decreased or improved depending on the HR tuning frequency and on the thickness and properties of the host foam. Finally, the diffuse field sound absorption and diffuse field sound transmission loss performance of a 2.6 m² resonant cellular material are measured. It is shown that the improvements observed at the Helmholtz resonant frequency on a single cell are confirmed at a larger scale. PMID:26093437

  2. Quantitative endoscopic imaging elastic scattering spectroscopy: model system/tissue phantom validation

    NASA Astrophysics Data System (ADS)

    Lindsley, E. H.; Farkas, D. L.

    2008-02-01

    We have designed and built an imaging elastic scattering spectroscopy endoscopic instrument for the purpose of detecting cancer in vivo. As part of our testing and validation of the system, known targets representing potential disease states of interest were constructed using polystyrene beads of known average diameter and TiO2 crystals embedded in a two-layer agarose gel. The final construction geometry was verified using a dissection microscope. The phantoms were then imaged using the endoscopic probe at a known incident angle, and the results were compared to model predictions. The mathematical model that was used combines classic ray-tracing optics with Mie scattering to predict the images that would be observed by the probe at a given physical distance from a Mie-regime scattering medium. This model was used to generate the expected observed response for a broad range of parameter values, and these results were then used as a library to fit the observed data from the phantoms. Compared against the theoretical library, the best-matching signal correlated well with the known phantom material dimensions. These results lead us to believe that imaging elastic scattering can be useful in detection/diagnosis, but further refinement of the device will be necessary to detect the weak signals in a real clinical setting.

  3. Models of TCP in high-BDP environments and their experimental validation

    SciTech Connect

    Vardoyan, G.; Rao, Nageswara S; Towlsey, D.

    2016-01-01

    In recent years, the computer networking community has seen a steady growth in bandwidth-delay products (BDPs). Several TCP variants were created to combat the shortcomings of legacy TCP when it comes to operation in high-BDP environments. These variants, among which are CUBIC, STCP, and H-TCP, have been extensively studied in some empirical contexts, and some analytical models exist for CUBIC and STCP. However, since these studies have been conducted, BDPs have risen even more, and new bulk data transfer tools have emerged that utilize multiple parallel TCP streams. In view of these new developments, it is imperative to revisit the question: Which congestion control algorithms are best adapted to current networking environments? In order to help resolve this question, we contribute the following: (i) using first principles, we develop a general throughput-prediction framework that takes into account buffer sizes and maximum window constraints; (ii) we validate the models using measurements and achieve low prediction errors; (iii) we note differences in TCP dynamics between two experimental configurations and find one of them to be significantly more deterministic than the other; we also find that CUBIC and H-TCP outperform STCP, especially when multiple streams are used; and (iv) we present preliminary results for modelling multiple TCP streams for CUBIC and STCP.
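
    For reference, CUBIC's window growth after a loss event follows a cubic of elapsed time, W(t) = C(t-K)^3 + W_max; a minimal sketch using the RFC 8312 default constants (the W_max value is an illustrative high-BDP figure):

```python
C = 0.4      # CUBIC scaling constant (RFC 8312 default)
BETA = 0.7   # multiplicative-decrease factor: window shrinks to BETA*W_max

def cubic_window(t, w_max):
    """CUBIC congestion window (packets) t seconds after a loss event:
    W(t) = C*(t - K)^3 + w_max, with K = ((w_max*(1 - BETA))/C)**(1/3)."""
    k = ((w_max * (1.0 - BETA)) / C) ** (1.0 / 3.0)
    return C * (t - k) ** 3 + w_max

# Window recovery toward W_max = 10000 packets (a high-BDP regime)
for t in (0.0, 5.0, 10.0, 20.0):
    print(f"t = {t:5.1f} s: W = {cubic_window(t, 10000.0):9.1f} packets")
```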

  5. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, by use of past damage observations in the country. The ground motion prediction relationship of Benouar (1994) proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for client market portfolio align with the

  6. Assessment of the Validity of the Double Superhelix Model for Reconstituted High Density Lipoproteins

    PubMed Central

    Jones, Martin K.; Zhang, Lei; Catte, Andrea; Li, Ling; Oda, Michael N.; Ren, Gang; Segrest, Jere P.

    2010-01-01

    For several decades, the standard model for high density lipoprotein (HDL) particles reconstituted from apolipoprotein A-I (apoA-I) and phospholipid (apoA-I/HDL) has been a discoidal particle ∼100 Å in diameter and the thickness of a phospholipid bilayer. Recently, Wu et al. (Wu, Z., Gogonea, V., Lee, X., Wagner, M. A., Li, X. M., Huang, Y., Undurti, A., May, R. P., Haertlein, M., Moulin, M., Gutsche, I., Zaccai, G., Didonato, J. A., and Hazen, S. L. (2009) J. Biol. Chem. 284, 36605–36619) used small angle neutron scattering to develop a new model they termed double superhelix (DSH) apoA-I that is dramatically different from the standard model. Their model possesses an open helical shape that wraps around a prolate ellipsoidal type I hexagonal lyotropic liquid crystalline phase. Here, we used three independent approaches, molecular dynamics, EM tomography, and fluorescence resonance energy transfer spectroscopy (FRET) to assess the validity of the DSH model. (i) By using molecular dynamics, two different approaches, all-atom simulated annealing and coarse-grained simulation, show that initial ellipsoidal DSH particles rapidly collapse to discoidal bilayer structures. These results suggest that, compatible with current knowledge of lipid phase diagrams, apoA-I cannot stabilize hexagonal I phase particles of phospholipid. (ii) By using EM, two different approaches, negative stain and cryo-EM tomography, show that reconstituted apoA-I/HDL particles are discoidal in shape. (iii) By using FRET, reconstituted apoA-I/HDL particles show a 28–34-Å intermolecular separation between terminal domain residues 40 and 240, a distance that is incompatible with the dimensions of the DSH model. Therefore, we suggest that, although novel, the DSH model is energetically unfavorable and not likely to be correct. Rather, we conclude that all evidence supports the likelihood that reconstituted apoA-I/HDL particles, in general, are discoidal in shape. PMID:20974855

  7. Modeling and Simulation of Longitudinal Dynamics for LER-HER PEP II Rings

    SciTech Connect

    Rivetta, Claudio; Mastorides, T.; Fox, J.D.; Teytelman, D.; Van Winkle, D.; /SLAC

    2007-03-06

    A time-domain modeling and simulation tool for beam-cavity interactions in the LER and HER rings at PEP II is presented. The motivation for this tool is to explore the stability margins and performance limits of the PEP II RF systems at higher currents and upgraded RF configurations. It also serves as a test bed for new control algorithms and can define the ultimate limits of the architecture. The time-domain program captures the dynamical behavior of the beam-cavity interaction based on a reduced model. The ring current is represented by macro-bunches. Multiple RF stations in the ring are represented via one or two macro-cavities. Each macro-cavity captures the overall behavior of a 2- or 4-cavity RF station. Station models include nonlinear elements in the klystron and signal processing. This allows modeling of the principal longitudinal impedance control loops interacting with the longitudinal beam model. Validation of the simulation tool is in progress by comparing the measured growth rates for both the LER and HER rings with simulation results. The simulated behavior of both machines at high currents is presented, comparing different control strategies and the effect of nonlinear klystrons on the growth rates.

  8. Predicting Pilot Error in Nextgen: Pilot Performance Modeling and Validation Efforts

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky

    2012-01-01

    We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and the relevance of the modeling and validation efforts to Next Gen technology and procedures.

  9. A MODEL STUDY OF TRANSVERSE MODE COUPLING INSTABILITY AT NATIONAL SYNCHROTRON LIGHT SOURCE-II (NSLS-II).

    SciTech Connect

    BLEDNYKH, A.; WANG, J.M.

    2005-05-15

    The vertical impedances of the preliminary designs of the National Synchrotron Light Source II (NSLS-II) Mini Gap Undulators (MGU) are calculated by means of the GdfidL code. The Transverse Mode Coupling Instability (TMCI) thresholds corresponding to these impedances are estimated using an analytically solvable model.

  10. Competitive Adsorption of Cd(II), Cr(VI), and Pb(II) onto Nanomaghemite: A Spectroscopic and Modeling Approach.

    PubMed

    Komárek, Michael; Koretsky, Carla M; Stephen, Krishna J; Alessi, Daniel S; Chrastný, Vladislav

    2015-11-01

    A combined modeling and spectroscopic approach is used to describe Cd(II), Cr(VI), and Pb(II) adsorption onto nanomaghemite and nanomaghemite-coated quartz. A pseudo-second-order kinetic model fitted the adsorption data well. The sorption capacity of nanomaghemite was evaluated using a Langmuir isotherm model, and a diffuse double layer surface complexation model (DLM) was developed to describe metal adsorption. Adsorption mechanisms were assessed using X-ray photoelectron spectroscopy and X-ray absorption spectroscopy. Pb(II) adsorption occurs mainly via formation of inner-sphere complexes, whereas Cr(VI) likely adsorbs mainly as outer-sphere complexes and Cd(II) as a mixture of inner- and outer-sphere complexes. The simple DLM describes well the pH-dependence of single adsorption edges. However, it fails to adequately capture metal adsorption behavior over broad ranges of ionic strength or metal loading on the sorbents. For systems with equimolar concentrations of Pb(II), Cd(II), and Cr(VI), Pb(II) adsorption was reasonably well predicted by the DLM, but predictions were poorer for Cr(VI) and Cd(II). This study demonstrates that a simple DLM can describe well the adsorption of the studied metals in mixed sorbate-sorbent systems, but only under narrow ranges of ionic strength or metal loading. The results also highlight the sorption potential of nanomaghemite for metals in complex systems. PMID:26457556
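
    The Langmuir capacity evaluation is a standard nonlinear fit; a minimal sketch with synthetic equilibrium data (not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: q = q_max * K_L * C / (1 + K_L * C)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

c_eq = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 100.0])   # equilibrium conc. (mg/L)
q_obs = np.array([8.2, 27.5, 39.0, 52.1, 58.3, 61.0])  # sorbed amount (mg/g)

(q_max, k_l), _ = curve_fit(langmuir, c_eq, q_obs, p0=(60.0, 0.1))
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")
```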

  11. 2013 CEF RUN - PHASE 1 DATA ANALYSIS AND MODEL VALIDATION

    SciTech Connect

    Choi, A.

    2014-05-08

    Phase 1 of the 2013 Cold cap Evaluation Furnace (CEF) test was completed on June 3, 2013 after a 5-day round-the-clock feeding and pouring operation. The main goal of the test was to characterize the CEF off-gas produced from a nitric-formic acid flowsheet feed and confirm whether the CEF platform is capable of producing scalable off-gas data necessary for the revision of the DWPF melter off-gas flammability model; the revised model will be used to define new safety controls on the key operating parameters for the nitric-glycolic acid flowsheet feeds including total organic carbon (TOC). Whether the CEF off-gas data were scalable for the purpose of predicting the potential flammability of the DWPF melter exhaust was determined by comparing the predicted H₂ and CO concentrations using the current DWPF melter off-gas flammability model to those measured during Phase 1; data were deemed scalable if the calculated fractional conversions of TOC-to-H₂ and TOC-to-CO at varying melter vapor space temperatures were found to trend and further bound the respective measured data with some margin of safety. Being scalable thus means that for a given feed chemistry the instantaneous flow rates of H₂ and CO in the DWPF melter exhaust can be estimated with some degree of conservatism by multiplying those of the respective gases from a pilot-scale melter by the feed rate ratio. This report documents the results of the Phase 1 data analysis and the necessary calculations performed to determine the scalability of the CEF off-gas data. A total of six steady state runs were made during Phase 1 under non-bubbled conditions by varying the CEF vapor space temperature from near 700 to below 300°C, as measured in a thermowell (T_tw). At each steady state temperature, the off-gas composition was monitored continuously for two hours using MS, GC, and FTIR in order to track mainly H₂, CO, CO₂, NOₓ, and organic gases such as CH₄. The standard

  12. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    NASA Technical Reports Server (NTRS)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  13. Localized multi-scale energy and vorticity analysis. II. Finite-amplitude instability theory and validation

    NASA Astrophysics Data System (ADS)

    San Liang, X.; Robinson, Allan R.

    2007-12-01

    A novel localized finite-amplitude hydrodynamic stability analysis is established in a unified treatment for the study of real oceanic and atmospheric processes, which are in general highly nonlinear, and intermittent in space and time. We first re-state the classical definition using the multi-scale energy and vorticity analysis (MS-EVA) developed in Liang and Robinson [Liang, X.S., Robinson, A.R., 2005. Localized multiscale energy and vorticity analysis. I. Fundamentals. Dyn. Atmos. Oceans 38, 195-230], and then manipulate certain global operators to achieve the temporal and spatial localization. The key to the spatial localization is transfer-transport separation, which is made precise with the concept of perfect transfer, while relaxation of marginalization leads to the localization of time. In doing so the information of transfer lost in the averages is retrieved and an easy-to-use instability metric is obtained. The resulting metric is field-like (Eulerian), conceptually generalizing the classical formalism, a bulk notion over the whole system. In this framework, an instability has a structure, which is of particular use for open flow processes. We check the structure of baroclinic instability with the benchmark Eady model solution, and the Iceland-Faeroe Frontal (IFF) intrusion, a highly localized and nonlinear process occurring frequently in the region between Iceland and Faeroe Islands. A clear isolated baroclinic instability is identified around the intrusion, which is further found to be characterized by the transition from a spatially growing mode to a temporally growing mode. We also check the consistency of the MS-EVA dynamics with the barotropic Kuo model. An observation is that a local perturbation burst does not necessarily imply an instability: the perturbation energy could be transported from other processes occurring elsewhere. We find that our analysis yields a Kuo theorem-consistent mean-eddy interaction, which is not seen in a conventional

  14. Hysteresis modeling and experimental validation of a magnetorheological damper

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Xu; Chen, Peng; Qian, Li-Jun; Zhu, An-Ding

    2015-04-01

    In this paper, two models for MR dampers are developed on the basis of the phenomenological model: a normalized phenomenological model, derived by incorporating a "normalization" concept, and a restructured model, proposed and realized with the same concept. For demonstration, a multi-island genetic algorithm (GA) is employed to identify the parameters of the restructured model, the normalized phenomenological model, and the phenomenological model. The research results indicate that, compared with the phenomenological model and the normalized phenomenological model, (1) the restructured model not only effectively decreases the number of model parameters and reduces the complexity of the model, but also describes the nonlinear hysteretic behavior of MR dampers more accurately, and (2) the normalized phenomenological model improves the model efficiency compared with the phenomenological model, although not as much as the restructured model.
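
    The phenomenological model referred to here is a Bouc-Wen-type hysteresis element; a minimal sketch of the basic Bouc-Wen force response under harmonic excitation (parameter values are illustrative, and neither the normalized nor the restructured variant is reproduced):

```python
import numpy as np

# Illustrative Bouc-Wen parameters, not identified from any real MR damper
ALPHA, C0, K0 = 800.0, 50.0, 25.0      # hysteresis, viscous and stiffness gains
A, BETA, GAMMA, N = 1.0, 0.5, 0.5, 2   # shape parameters of the z-equation

def mr_damper_force(t, x, xdot):
    """Integrate the Bouc-Wen evolutionary variable z with forward Euler and
    return the damper force F = c0*xdot + k0*x + alpha*z."""
    z = np.zeros_like(x)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        dz = (A * xdot[i - 1]
              - BETA * abs(xdot[i - 1]) * z[i - 1] * abs(z[i - 1]) ** (N - 1)
              - GAMMA * xdot[i - 1] * abs(z[i - 1]) ** N)
        z[i] = z[i - 1] + dt * dz
    return C0 * xdot + K0 * x + ALPHA * z

t = np.linspace(0.0, 2.0, 4000)
x = 0.01 * np.sin(2.0 * np.pi * 2.0 * t)                      # 2 Hz, 10 mm stroke
xdot = 0.01 * 2.0 * np.pi * 2.0 * np.cos(2.0 * np.pi * 2.0 * t)
print(f"peak damper force ~ {abs(mr_damper_force(t, x, xdot)).max():.1f} N")
```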

  15. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.
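
    One generic way to down-sample a height map while suppressing uncorrelated per-pixel noise is block averaging; the sketch below illustrates the idea only and is not one of the two NASA techniques, which the abstract does not specify:

```python
import numpy as np

def block_average(surface, factor):
    """Down-sample a 2-D surface map by averaging factor-by-factor blocks;
    uncorrelated noise is reduced by roughly 1/factor."""
    ny, nx = surface.shape
    ny, nx = ny - ny % factor, nx - nx % factor      # trim to a multiple
    s = surface[:ny, :nx]
    return s.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(2)
surface = rng.normal(0.0, 1.0, (1024, 1024))         # noise-dominated map (nm)
small = block_average(surface, 8)
print(f"std before: {surface.std():.3f} nm, after: {small.std():.3f} nm")
```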

  16. Comprehensive computer model for magnetron sputtering. II. Charged particle transport

    SciTech Connect

    Jimenez, Francisco J. Dew, Steven K.; Field, David J.

    2014-11-01

    Discharges for magnetron sputter thin film deposition systems involve complex plasmas that are sensitively dependent on magnetic field configuration and strength, working gas species and pressure, chamber geometry, and discharge power. The authors present a numerical formulation for the general solution of these plasmas as a component of a comprehensive simulation capability for planar magnetron sputtering. This is an extensible, fully three-dimensional model supporting realistic magnetic fields and is self-consistently solvable on a desktop computer. The plasma model features a hybrid approach involving a Monte Carlo treatment of energetic electrons and ions, along with a coupled fluid model for thermalized particles. Validation against a well-known one-dimensional system is presented. Various strategies for improving numerical stability are investigated as is the sensitivity of the solution to various model and process parameters. In particular, the effect of magnetic field, argon gas pressure, and discharge power are studied.

  17. Community-wide model validation studies for systematic assessment of ionosphere-thermosphere models

    NASA Astrophysics Data System (ADS)

    Shim, Ja Soon; Kuznetsova, Maria; Rastätter, Lutz

    2016-07-01

    As an unbiased agent, the Community Coordinated Modeling Center (CCMC) has been leading community-wide model validation efforts, namely the GEM, CEDAR and GEM-CEDAR Modeling Challenges, since 2009. The CEDAR ETI (Electrodynamics Thermosphere Ionosphere) Challenge focused on the ability of ionosphere-thermosphere (IT) models to reproduce basic IT system parameters, such as electron and neutral densities, NmF2, hmF2, and Total Electron Content (TEC). Model-data time series comparisons were performed for a set of selected events with different levels of geomagnetic activity (quiet, moderate, storms). The follow-on CEDAR-GEM Challenge aims to quantify geomagnetic storm impacts on the IT system. On-going studies include quantifying the storm energy input, such as the increase in auroral precipitation and Joule heating, and quantifying the storm-time variations of neutral density and TEC. In this paper, we will present lessons learned from the Modeling Challenges led by the CCMC.

  18. Contributions to the validation of the CJS model for granular materials

    NASA Astrophysics Data System (ADS)

    Elamrani, Khadija

    1992-07-01

    Behavior model validation in the field of geotechnics is addressed, with the objective of showing the advantages and limits of the CJS (Cambou Jafari Sidoroff) behavior model for granular materials. Several levels are covered: a theoretical analysis of the CJS model to reveal its consistency and basic capacities; the development (followed by validation against other programs) of a finite element computation code (FINITEL) to integrate this model and prepare it for complex applications; and validation of the code/model structure thus constituted by comparing its results to those of experiments in the case of nonhomogeneous problems (shallow foundations).

  19. Cross-validation analysis of bias models in Bayesian multi-model projections of climate

    NASA Astrophysics Data System (ADS)

    Huttunen, J. M. J.; Räisänen, J.; Nissinen, A.; Lipponen, A.; Kolehmainen, V.

    2016-05-01

    Climate change projections are commonly based on multi-model ensembles of climate simulations. In this paper we consider the choice of bias models in Bayesian multi-model predictions. Buser et al. (Clim Res 44(2-3):227-241, 2010a) introduced a hybrid bias model which combines the commonly used constant bias and constant relation bias assumptions. The hybrid model includes a weighting parameter which balances these bias models. In this study, we use a cross-validation approach to study which bias model or bias parameter leads to, in a specific sense, optimal climate change projections. The analysis is carried out for summer and winter season means of 2-m temperatures spatially averaged over the IPCC SREX regions, using 19 model runs from the CMIP5 data set. The cross-validation approach is applied to calculate optimal bias parameters (in the specific sense) for projecting the temperature change from the control period (1961-2005) to the scenario period (2046-2090). The results are compared to the results of the Buser et al. (Clim Res 44(2-3):227-241, 2010a) method, which includes the bias parameter as one of the unknown parameters to be estimated from the data.
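
    The two limiting assumptions are easy to state: a constant bias carries the control-period error additively into the scenario period, while a constant relation carries it multiplicatively. A minimal leave-one-out sketch, treating each model in turn as pseudo-observations (a simplification of the paper's Bayesian treatment, with synthetic numbers):

```python
import numpy as np

rng = np.random.default_rng(3)
ctrl = rng.normal(15.0, 1.5, 19)                 # control-period temperatures (°C)
scen = ctrl + rng.normal(3.0, 0.5, 19)           # scenario-period temperatures

def loo_rmse(bias_model):
    """Leave-one-out error of predicting each model's scenario value from
    the others under a given bias assumption."""
    errs = []
    for i in range(len(ctrl)):
        others = np.delete(np.arange(len(ctrl)), i)
        if bias_model == "constant_bias":        # additive carry-over
            pred = ctrl[i] + np.mean(scen[others] - ctrl[others])
        else:                                    # constant relation: multiplicative
            pred = ctrl[i] * np.mean(scen[others] / ctrl[others])
        errs.append((pred - scen[i]) ** 2)
    return np.sqrt(np.mean(errs))

for bm in ("constant_bias", "constant_relation"):
    print(f"{bm}: LOO RMSE = {loo_rmse(bm):.3f} °C")
```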

  20. On-Board Prediction of Power Consumption in Automobile Active Suspension SYSTEMS—II: Validation and Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Ben Mrad, R.; Fassois, S. D.; Levitt, J. A.; Bachrach, B. I.

    1996-03-01

    The focus of this part of the paper is on validation and performance evaluation. The indirect (standard) and novel direct predictors of Part I, which use time-recursive realisations and no leading indicators, are critically compared by using the non-linear active suspension system model. The results, constituting the first known comparison between indirect and direct schemes, show similar performance with a slight superiority of the former. Experimental validation is based on a specially developed active suspension vehicle. The power consumption non-stationarity is, in this case, shown to be of the homogeneous type, completely "masking" the signal's second-order characteristics, which are revealed only after the non-stationarity's effective removal. The analysis leads to two distinct types of indirect predictors: an explicit type, based on non-stationary integrated autoregressive moving average models, and an implicit type, based on stationary autoregressive moving average models. The explicit predictor is shown to be uniformly better than the implicit one, although the difference is small for short prediction horizons. The experimental results indicate that accurate power consumption prediction is possible, with errors ranging from 2.22% for a prediction horizon of 0.156 s, to still less than 10% for horizons up to 0.470 s long, and about 25% for 1.563 s horizons.
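
    The explicit predictor class (integrated autoregressive moving average) corresponds to what is now routinely fitted as an ARIMA model; a minimal sketch with statsmodels (the synthetic series and model orders are illustrative, not the vehicle data):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
# Synthetic nonstationary "power consumption" series: random walk plus noise
y = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 500)) + rng.normal(0.0, 0.5, 500)

train, test = y[:480], y[480:]
model = ARIMA(train, order=(2, 1, 2)).fit()      # integrated ARMA predictor
forecast = model.forecast(steps=len(test))

mape = 100.0 * np.mean(np.abs(forecast - test) / np.abs(test))
print(f"mean absolute percentage error over {len(test)} steps: {mape:.1f}%")
```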

  1. Impact of model development, calibration and validation decisions on hydrological simulations in West Lake Erie Basin

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Watershed simulation models are used extensively to investigate hydrologic processes, landuse and climate change impacts, pollutant load assessments and best management practices (BMPs). Developing, calibrating and validating these models require a number of critical decisions that will influence t...

  2. A Permutation Method to Assess Heterogeneity in External Validation for Risk Prediction Models

    PubMed Central

    Wang, Ling-Yi; Lee, Wen-Chung

    2015-01-01

    The value of a developed prediction model depends on its performance outside the development sample. The key is therefore to externally validate the model on a different but related independent data. In this study, we propose a permutation method to assess heterogeneity in external validation for risk prediction models. The permutation p value measures the extent of homology between development and validation datasets. If p < 0.05, the model may not be directly transported to the external validation population without further revision or updating. Monte-Carlo simulations are conducted to evaluate the statistical properties of the proposed method, and two microarray breast cancer datasets are analyzed for demonstration. The permutation method is easy to implement and is recommended for routine use in external validation for risk prediction models. PMID:25606854
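
    Generically, the recipe is: choose a performance statistic, compute its development-versus-validation difference, and compare that against the distribution obtained by permuting the dataset labels. A minimal sketch (it uses a Brier-score difference, which is an assumption, not necessarily the statistic of the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

def brier(prob, y):
    """Brier score of predicted risks against binary outcomes."""
    return np.mean((prob - y) ** 2)

# Synthetic predicted risks / outcomes for development and validation sets
p_dev, y_dev = rng.uniform(0, 1, 200), rng.integers(0, 2, 200)
p_val, y_val = rng.uniform(0, 1, 150), rng.integers(0, 2, 150)

obs = abs(brier(p_dev, y_dev) - brier(p_val, y_val))
p_all, y_all = np.concatenate([p_dev, p_val]), np.concatenate([y_dev, y_val])

count = 0
for _ in range(2000):
    idx = rng.permutation(len(p_all))
    d, v = idx[:len(p_dev)], idx[len(p_dev):]
    count += abs(brier(p_all[d], y_all[d]) - brier(p_all[v], y_all[v])) >= obs
print(f"permutation p-value = {count / 2000:.3f}")   # p < 0.05 suggests heterogeneity
```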

  3. Testing the validity of the International Atomic Energy Agency (IAEA) safety culture model.

    PubMed

    López de Castro, Borja; Gracia, Francisco J; Peiró, José M; Pietrantoni, Luca; Hernández, Ana

    2013-11-01

    This paper takes the first steps to empirically validate the widely used model of safety culture of the International Atomic Energy Agency (IAEA), composed of five dimensions, further specified by 37 attributes. To do so, three independent and complementary studies are presented. First, 290 students serve to collect evidence about the face validity of the model. Second, 48 experts in organizational behavior judge its content validity. And third, 468 workers in a Spanish nuclear power plant help to reveal how closely the theoretical five-dimensional model can be replicated. Our findings suggest that several attributes of the model may not be related to their corresponding dimensions. According to our results, a one-dimensional structure fits the data better than the five dimensions proposed by the IAEA. Moreover, the IAEA model, as it stands, seems to have rather moderate content validity and low face validity. Practical implications for researchers and practitioners are included.

  4. Examining the Reliability and Validity of Clinician Ratings on the Five-Factor Model Score Sheet

    PubMed Central

    Few, Lauren R.; Miller, Joshua D.; Morse, Jennifer Q.; Yaggi, Kirsten E.; Reynolds, Sarah K.; Pilkonis, Paul A.

    2013-01-01

    Despite substantial research use, measures of the five-factor model (FFM) are infrequently used in clinical settings due, in part, to issues related to administration time and a reluctance to use self-report instruments. The current study examines the reliability and validity of the Five-Factor Model Score Sheet (FFMSS), a 30-item clinician rating form designed to assess the five domains and 30 facets of one conceptualization of the FFM. Studied in a sample of 130 outpatients, clinical raters demonstrated reasonably good interrater reliability across personality profiles, and the domains manifested good internal consistency, with the exception of Neuroticism. The FFMSS ratings also evinced expected relations with self-reported personality traits (e.g., FFMSS Extraversion and Schedule for Nonadaptive and Adaptive Personality Positive Temperament) and consensus-rated personality disorder symptoms (e.g., FFMSS Agreeableness and Narcissistic Personality Disorder). Finally, on average, the FFMSS domains were able to account for approximately 50% of the variance in domains of functioning (e.g., occupational, parental) and could account for variance even after controlling for Axis I and Axis II pathology. Given these findings, the FFMSS appears to hold promise for clinical use. PMID:20519735

  5. Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica

    SciTech Connect

    Oboh, I.; Aluyor, E.; Audu, T.

    2015-03-30

    The biosorption of zinc (II) ions onto a biomaterial, Luffa cylindrica, has been studied. The biomaterial was characterized by elemental analysis, surface area, pore size distribution, and scanning electron microscopy, and was further characterized before and after sorption by Fourier transform infrared (FTIR) spectrometry. The nonlinear kinetic models fitted were pseudo-first order, pseudo-second order, and intra-particle diffusion. Non-linear regression methods were compared for selecting the kinetic model. Four error functions, namely the coefficient of determination (R²), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of squared errors (ERRSQ), were used to estimate the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution, particularly in the tropical world, which occurs as a waste material, could be put to effective use as a biosorbent to address a crucial environmental problem.
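
    A minimal sketch of one such fit, using non-linear regression for the pseudo-second-order model and two of the cited error functions, is given below; the uptake data are illustrative, not the paper's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def pseudo_second_order(t, qe, k2):
          # q(t) = k2 * qe^2 * t / (1 + k2 * qe * t)
          return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

      # Illustrative uptake data: time (min) vs amount sorbed q (mg/g).
      t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
      q = np.array([2.1, 3.4, 4.8, 5.9, 6.4, 6.7, 6.8])

      (qe, k2), _ = curve_fit(pseudo_second_order, t, q, p0=[7.0, 0.01])
      q_hat = pseudo_second_order(t, qe, k2)

      errsq = np.sum((q - q_hat) ** 2)                        # ERRSQ
      are = 100.0 / len(q) * np.sum(np.abs((q - q_hat) / q))  # ARE (%)
      print(f"qe={qe:.2f} mg/g, k2={k2:.4f} g/(mg min), "
            f"ERRSQ={errsq:.4f}, ARE={are:.2f}%")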

  6. MESODIF-II. Variable Trajectory Plume Segment Model

    SciTech Connect

    Powell, D.C.; Hegley, H.L.; Fox, T.D.

    1986-06-01

    MESODIF-II, which embodies a variable-trajectory plume segment atmospheric transport model, is designed to predict normalized air concentrations and deposition of radioactive, but otherwise non-reactive, effluents released from one or two levels over the same position in an xy-plane. In such a model, calculated particle trajectories vary as the synoptic-scale wind varies. At all sampling times, the particles are connected to form a segmented plume centerline. The lateral and vertical dimensions of the plume are determined by a parameterization of turbulence-scale diffusion. The impetus for the development of this model arose from the need of the United States Nuclear Regulatory Commission to assess radiological effects resulting from routine nuclear power reactor operations, as outlined in Regulatory Guide 1.111.

  7. A third-generation wave model for coastal regions: 1. Model description and validation

    NASA Astrophysics Data System (ADS)

    Booij, N.; Ris, R. C.; Holthuijsen, L. H.

    1999-04-01

    A third-generation numerical wave model to compute random, short-crested waves in coastal regions with shallow water and ambient currents (Simulating Waves Nearshore (SWAN)) has been developed, implemented, and validated. The model is based on an Eulerian formulation of the discrete spectral balance of action density that accounts for refractive propagation over arbitrary bathymetry and current fields. It is driven by boundary conditions and local winds. As in other third-generation wave models, the processes of wind generation, whitecapping, quadruplet wave-wave interactions, and bottom dissipation are represented explicitly. In SWAN, triad wave-wave interactions and depth-induced wave breaking are added. In contrast to other third-generation wave models, the numerical propagation scheme is implicit, which implies that the computations are more economical in shallow water. The model results agree well with analytical solutions, laboratory observations, and (generalized) field observations.
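
    For reference, the spectral action balance solved by SWAN is conventionally written as follows (a standard textbook form, stated here for context rather than quoted from the paper):

      \frac{\partial N}{\partial t}
      + \frac{\partial}{\partial x}(c_x N)
      + \frac{\partial}{\partial y}(c_y N)
      + \frac{\partial}{\partial \sigma}(c_\sigma N)
      + \frac{\partial}{\partial \theta}(c_\theta N)
      = \frac{S_{\mathrm{tot}}}{\sigma}

    Here N(σ, θ) is the action density, the c's are propagation velocities in geographic (x, y) and spectral (σ, θ) space, and S_tot collects the source and sink terms listed above (wind input, whitecapping, quadruplet and triad interactions, bottom dissipation, and depth-induced breaking).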

  8. Crazy like a fox. Validity and ethics of animal models of human psychiatric disease.

    PubMed

    Rollin, Michael D H; Rollin, Bernard E

    2014-04-01

    Animal models of human disease play a central role in modern biomedical science. Developing animal models for human mental illness presents unique practical and philosophical challenges. In this article we argue that (1) existing animal models of psychiatric disease are not valid, (2) attempts to model syndromes are undermined by current nosology, (3) models of symptoms are rife with circular logic and anthropomorphism, (4) any model must make unjustified assumptions about subjective experience, and (5) any model deemed valid would be inherently unethical, for if an animal adequately models human subjective experience, then there is no morally relevant difference between that animal and a human.

  9. Cross-validation pitfalls when selecting and assessing regression and classification models

    PubMed Central

    2014-01-01

    Background We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. Methods We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. Results We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. Conclusions We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error. PMID:24678909
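
    A minimal sketch of the repeated nested cross-validation idea, using scikit-learn; the estimator, grid, and repeat counts are illustrative, and the paper's algorithms add further detail (e.g., the grid-search protocol and cloud-scale parallelism).

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
      from sklearn.svm import SVR

      X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)
      param_grid = {"C": [0.1, 1, 10], "epsilon": [0.01, 0.1, 1.0]}

      # Repeating the whole nested procedure over different random splits exposes
      # the split-induced variance that single-run cross-validation hides.
      outer_scores = []
      for repeat in range(5):
          inner = KFold(n_splits=5, shuffle=True, random_state=repeat)        # tuning
          outer = KFold(n_splits=5, shuffle=True, random_state=100 + repeat)  # assessment
          tuned = GridSearchCV(SVR(), param_grid, cv=inner)
          outer_scores.extend(cross_val_score(tuned, X, y, cv=outer))

      print(f"R^2 = {np.mean(outer_scores):.3f} +/- {np.std(outer_scores):.3f}")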

  10. Some guidance on preparing validation plans for the DART Full System Models.

    SciTech Connect

    Gray, Genetha Anne; Hough, Patricia Diane; Hills, Richard Guy

    2009-03-01

    Planning is an important part of computational model verification and validation (V&V) and the requisite planning document is vital for effectively executing the plan. The document provides a means of communicating intent to the typically large group of people, from program management to analysts to test engineers, who must work together to complete the validation activities. This report provides guidelines for writing a validation plan. It describes the components of such a plan and includes important references and resources. While the initial target audience is the DART Full System Model teams in the nuclear weapons program, the guidelines are generally applicable to other modeling efforts. Our goal in writing this document is to provide a framework for consistency in validation plans across weapon systems, different types of models, and different scenarios. Specific details contained in any given validation plan will vary according to application requirements and available resources.

  11. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    PubMed

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

    The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities, and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology.

  12. Incorporating channel network information in hydrologic response modelling: model development and validation using ecologically relevant indicators

    NASA Astrophysics Data System (ADS)

    Biswal, B.; Singh, R.

    2015-12-01

    Many studies in the past have revealed that the hydrologic response of a basin carries imprints of its channel network. However, accurate representation of channel networks in hydrologic models has been a challenge. In addition, the dominant flow processes during high-flow periods are not the same as those during recession periods, and there is a need for models that can represent these varying behaviors. In this study, we develop two model structures that aim to address the challenges above. The first model assumes that flow processes can be classified into two main categories: i) pure surface flow (PSF) and ii) mixed surface-subsurface flow (MSSF). The second model is a special case of the first model which neglects PSF. Using channel networks extracted from digital elevation models, we develop instantaneous unit hydrographs (IUHs) separately for PSF (PSFIUHs) and MSSF (MSSFIUHs). The PSFIUH is described by the channel 'network width function', whereas the MSSFIUH is obtained by modifying a recently developed channel-network-morphology-based recession flow model. To obtain the simulated streamflow time series for a basin, we convolve the PSFIUH and the MSSFIUH with the respective effective rainfall time series, as sketched below. The effective rainfall time series is obtained by using the probability distributed model (PDM). For comparison purposes, we also use a dual linear-bucket model for routing flow. Comparing model performance across 78 watersheds in the United States using the Nash-Sutcliffe efficiency (NSE), we find that the two model structures that incorporate channel network information outperform the linear-bucket model in 56 watersheds. Further testing of model performance using indicators that capture the frequency and duration of low and high flows shows that the two developed models outperform the linear-bucket model in four out of five indicators.
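
    A minimal sketch of the convolution step follows; the IUH shapes and rainfall series are placeholders, not the paper's network-derived functions.

      import numpy as np

      dt = 1.0                      # time step (h)
      t = np.arange(0, 48, dt)

      # Placeholder IUHs (unit volume). In the paper, the PSF IUH comes from the
      # network width function and the MSSF IUH from a morphology-based
      # recession model.
      psf_iuh = np.exp(-t / 3.0);  psf_iuh /= psf_iuh.sum() * dt
      mssf_iuh = np.exp(-t / 12.0); mssf_iuh /= mssf_iuh.sum() * dt

      # Placeholder effective-rainfall series (mm/h) for the two pathways,
      # e.g. as partitioned by the probability distributed model (PDM).
      n = 120
      rain_psf = np.zeros(n);  rain_psf[10:14] = 5.0
      rain_mssf = np.zeros(n); rain_mssf[10:20] = 2.0

      # Simulated streamflow is the sum of the two pathway convolutions.
      q = (np.convolve(rain_psf, psf_iuh)[:n]
           + np.convolve(rain_mssf, mssf_iuh)[:n]) * dt
      print(f"peak flow: {q.max():.2f} mm/h")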

  13. Validating animal models for preclinical research: a scientific and ethical discussion.

    PubMed

    Varga, Orsolya E; Hansen, Axel K; Sandøe, Peter; Olsson, I Anna S

    2010-06-01

    The use of animals to model humans in biomedical research relies on the notion that basic processes are sufficiently similar across species to allow extrapolation. Animal model validity is discussed in terms of the similarity between the model and the human condition it is intended to model, but no formal validation of models is applied. There is a stark contrast here with the use of non-animal alternatives in toxicology and safety studies, for which an extensive validation is required. We discuss both the potential and the limitations of validating preclinical animal models for proof-of-concept studies, by using an approach similar to that applied to alternative non-animal methods in toxicology and safety testing. A major challenge in devising a validation system for animal models is the lack of a clear gold standard with which to compare results. While a complete adoption of the validation approach for alternative methods is probably inappropriate for research animal models, key features, such as making data available for external validation and defining a strategy to run experiments in a way that permits meaningful retrospective analysis, remain highly relevant.

  14. Modeling organic transformations by microorganisms of soils in six contrasting ecosystems: Validation of the MOMOS model

    NASA Astrophysics Data System (ADS)

    Pansu, M.; Sarmiento, L.; Rujano, M. A.; Ablan, M.; Acevedo, D.; Bottner, P.

    2010-03-01

    The Modeling Organic Transformations by Microorganisms of Soils (MOMOS) model simulates the growth, respiration, and mortality of soil microorganisms as the main drivers of the mineralization and humification of organic substrates. Originally built and calibrated using data from two high-altitude sites, the model is now validated with data from a 14C experiment carried out in six contrasting tropical ecosystems covering a large gradient of temperature, rainfall, vegetation, and soil types from 65 to 3968 m asl. MOMOS enabled prediction of a greater number of variables using fewer parameter values than in predictions previously published for this experiment. The measured 14C mineralization and transfer into microbial biomass (MB) and humified compartments were accurately modeled using (1) temperature and moisture response functions to adjust the model responses daily to weather conditions and (2) optimization of only one parameter, the respiration rate kresp of soil microorganisms at optimal temperature and moisture. This validates the parameterization and hypotheses of the previous calibration experiment. Climate and microbial respiratory activity, related to soil properties, appear to be the main factors regulating the C cycle. The kresp rate was found to be negatively related to the fine textural fraction of the soil and positively related to soil pH, allowing two transfer functions to be proposed that can help generalize MOMOS application at regional or global scales.
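
    A minimal sketch of the single-parameter calibration idea described above; the response-function forms, the one-pool respiration model, and the synthetic data are illustrative assumptions, not MOMOS itself.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def f_temp(T, Q10=2.0, T_ref=20.0):
          return Q10 ** ((T - T_ref) / 10.0)   # illustrative temperature response

      def f_moist(theta):
          return np.clip(theta, 0.0, 1.0)      # illustrative moisture response

      def respiration(k_resp, T, theta, mb0=100.0):
          """Daily respiration of a single microbial-biomass pool."""
          mb, out = mb0, []
          for Ti, wi in zip(T, theta):
              r = k_resp * f_temp(Ti) * f_moist(wi) * mb
              mb -= r
              out.append(r)
          return np.array(out)

      # Synthetic climate drivers and "observed" respiration for one year.
      rng = np.random.default_rng(1)
      days = np.arange(365)
      T = 15 + 10 * np.sin(2 * np.pi * days / 365)
      theta = np.clip(rng.normal(0.6, 0.1, days.size), 0, 1)
      obs = respiration(0.002, T, theta) + rng.normal(0, 0.005, days.size)

      # Optimize the single parameter k_resp against the observations.
      loss = lambda k: np.sum((respiration(k, T, theta) - obs) ** 2)
      k_fit = minimize_scalar(loss, bounds=(1e-4, 1e-2), method="bounded").x
      print(f"fitted k_resp = {k_fit:.5f} per day")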

  15. Modeling and Validation of a Three-Stage Solidification Model for Sprays

    NASA Astrophysics Data System (ADS)

    Tanner, Franz X.; Feigl, Kathleen; Windhab, Erich J.

    2010-09-01

    A three-stage freezing model and its validation are presented. In the first stage, the cooling of the droplet down to the freezing temperature is described as a convective heat transfer process in turbulent flow. In the second stage, when the droplet has reached the freezing temperature, the solidification process is initiated via nucleation and crystal growth. The latent heat release is related to the amount of heat convected away from the droplet, and the rate of solidification is expressed with a freezing progress variable. After completion of the solidification process, in stage three, the cooling of the solidified droplet (particle) is again described by a convective heat transfer process until the particle approaches the temperature of the gaseous environment. The model has been validated against experimental data for a single cocoa butter droplet suspended in air. The subsequent spray validations have been performed with data obtained from a cocoa butter melt in an experimental spray tower using the open-source computational fluid dynamics code KIVA-3.
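
    A minimal sketch of the staged, lumped-capacitance energy balance described above; the property values and the linear freezing-progress closure are illustrative assumptions, not the authors' model.

      # Illustrative droplet/gas properties (rough cocoa-butter magnitudes).
      T_gas, T_freeze = -10.0, 23.0        # gas and freezing temperatures (deg C)
      m, c_p, L = 5e-7, 2000.0, 1.5e5      # mass (kg), heat capacity, latent heat
      hA = 2e-5                            # convective conductance (W/K)
      dt = 1e-3                            # time step (s)

      T, phi = 60.0, 0.0                   # droplet temperature, freezing progress
      for _ in range(200_000):
          q_conv = hA * (T - T_gas)        # heat convected from droplet to gas
          if T > T_freeze:                 # stage 1: liquid cooling
              T -= q_conv / (m * c_p) * dt
          elif phi < 1.0:                  # stage 2: solidification at T_freeze;
              phi += q_conv / (m * L) * dt #   latent heat balances convection
          else:                            # stage 3: solid (particle) cooling
              T -= q_conv / (m * c_p) * dt
      print(f"T = {T:.1f} C, freezing progress = {phi:.2f}")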

  16. Principle and validation of modified hysteretic models for magnetorheological dampers

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Xu; Chen, Peng; Qian, Li-Jun

    2015-08-01

    Magnetorheological (MR) dampers, semi-active actuators for vibration and shock control systems, have attracted increasing attention during the past two decades. However, it is difficult to establish a precise mathematical model for MR dampers and their control systems due to their intrinsic, strongly nonlinear hysteretic behavior. A phenomenological model based on the Bouc-Wen model can effectively describe the nonlinear hysteretic behavior of MR dampers, but the structure of the phenomenological model is complex and the Bouc-Wen model is functionally redundant. In this paper, based on the phenomenological model, (1) a normalized phenomenological model is derived by incorporating a ‘normalization’ concept, and (2) a restructured model, also incorporating the ‘normalization’ concept, is proposed and realized. To demonstrate this, a multi-island genetic algorithm (GA) is employed to identify the parameters of the restructured model, the normalized phenomenological model, and the phenomenological model. The performance of the three models in describing and predicting the damping force characteristics of MR dampers is compared and analyzed using the identified parameters. The research results indicate that, compared with the phenomenological model and the normalized phenomenological model, (1) the restructured model not only effectively decreases the number of model parameters and reduces the complexity of the model, but also describes the nonlinear hysteretic behavior of MR dampers more accurately, and (2) the meanings of several model parameters of the restructured model are clearer and the initial ranges of the model parameters are more explicit, which is of significance for parameter identification.
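
    For reference, the evolutionary variable at the core of the Bouc-Wen model takes the standard form below (a textbook statement of the hysteresis operator, not the paper's restructured variant):

      \dot{z} = A\,\dot{x} - \beta\,\dot{x}\,|z|^{n} - \gamma\,|\dot{x}|\,z\,|z|^{n-1}

    where x is the damper displacement, z the hysteretic variable, and A, β, γ, n shape parameters; the predicted damping force is then assembled from z together with viscous and stiffness terms.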

  17. Eagle II: A prototype for multi-resolution combat modeling

    SciTech Connect

    Powell, D.R.; Hutchinson, J.L.

    1993-02-01

    Eagle II is a prototype analytic model derived from the integration of the low-resolution Eagle model with the high-resolution SIMNET model. This integration promises a new capability to allow for a more effective examination of proposed or existing combat systems that could not be easily evaluated using either Eagle or SIMNET alone. In essence, Eagle II becomes a multi-resolution combat model in which simulated combat units can exhibit both high- and low-fidelity behavior at different times during model execution. This capability allows a unit to behave in a high-fidelity manner only when required, thereby reducing the overall computational and manpower requirements for a given study. In this framework, the SIMNET portion enables a highly credible assessment of the performance of individual combat systems under consideration, encompassing both engineering performance and crew capabilities. However, when the assessment being conducted goes beyond system performance and extends to questions of force structure balance and sustainment, then SIMNET results can be used to "calibrate" the Eagle attrition process appropriate to the study at hand. Advancing technologies, changes in the worldwide threat, requirements for flexible response, declining defense budgets, and down-sizing of military forces motivate the development of manpower-efficient, low-cost, responsive tools for combat development studies. Eagle and SIMNET both serve as credible and useful tools. The integration of these two models promises enhanced capabilities to examine the broader, deeper, more complex battlefield of the future with higher fidelity, greater responsiveness, and lower overall cost.

  18. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-7.

    PubMed

    Eddy, David M; Hollingworth, William; Caro, J Jaime; Tsevat, Joel; McDonald, Kathryn M; Wong, John B

    2012-01-01

    Trust and confidence are critical to the success of health care models. There are two main methods for achieving this: transparency (people can see how the model is built) and validation (how well the model reproduces reality). This report describes recommendations for achieving transparency and validation developed by a taskforce appointed by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making. Recommendations were developed iteratively by the authors. A nontechnical description--including model type, intended applications, funding sources, structure, intended uses, inputs, outputs, other components that determine function, and their relationships, data sources, validation methods, results, and limitations--should be made available to anyone. Technical documentation, written in sufficient detail to enable a reader with necessary expertise to evaluate the model and potentially reproduce it, should be made available openly or under agreements that protect intellectual property, at the discretion of the modelers. Validation involves face validity (wherein experts evaluate model structure, data sources, assumptions, and results), verification or internal validity (check accuracy of coding), cross validity (comparison of results with other models analyzing the same problem), external validity (comparing model results with real-world results), and predictive validity (comparing model results with prospectively observed events). The last two are the strongest form of validation. Each section of this article contains a number of recommendations that were iterated among the authors, as well as among the wider modeling taskforce, jointly set up by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making. PMID:22999134

  19. Refinement, Validation and Benchmarking of a Model for E-Government Service Quality

    NASA Astrophysics Data System (ADS)

    Magoutas, Babis; Mentzas, Gregoris

    This paper presents the refinement and validation of a model for the Quality of e-Government Services (QeGS). We build upon our previous work, in which a conceptual model was identified, and focus on the confirmatory phase of the model development process in order to arrive at a valid and reliable QeGS model. The validated model, which was benchmarked with very positive results against similar models found in the literature, can be used for measuring QeGS in a reliable and valid manner. This forms the basis for a continuous quality improvement process, unleashing the full potential of e-government services for both citizens and public administrations.

  1. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, Mohammed Omair

    2012-01-01

    Our simulation was able to mimic the results of 30 tests on the actual hardware. This shows that simulations have the potential to enable early design validation, well before actual hardware exists. Although the simulations focused on data processing procedures at the subsystem and device level, they can also be applied to system-level analysis to simulate mission scenarios and consumable tracking (e.g., power and propellant). Simulation engine plug-in developments are continually improving the product, but handling time for time-sensitive operations (like those of the remote engineering unit and bus controller) can be cumbersome.

  2. A poroelastic model valid in large strains with applications to perfusion in cardiac modeling

    NASA Astrophysics Data System (ADS)

    Chapelle, D.; Gerbeau, J.-F.; Sainte-Marie, J.; Vignon-Clementel, I. E.

    2009-12-01

    This paper is motivated by the modeling of blood flows through the beating myocardium, namely cardiac perfusion. As in other works, perfusion is modeled here as a flow through a poroelastic medium. The main contribution of this study is the derivation of a general poroelastic model valid for a nearly incompressible medium which experiences finite deformations. A numerical procedure is proposed to iteratively solve the porous flow and the nonlinear poroviscoelastic problems. Three-dimensional numerical experiments are presented to illustrate the model. The first test cases consist of typical poroelastic configurations: swelling and complete drainage. Finally, a simulation of cardiac perfusion is presented in an idealized left ventricle embedded with active fibers. Results show the complex temporal and spatial interactions of the muscle and blood, reproducing several key phenomena observed in cardiac perfusion.

  3. Enhanced Stability of the Fe(II)/Mn(II) State in a Synthetic Model of Heterobimetallic Cofactor Assembly.

    PubMed

    Kerber, William D; Goheen, Joshua T; Perez, Kaitlyn A; Siegler, Maxime A

    2016-01-19

    Heterobimetallic Mn/Fe cofactors are found in the R2 subunit of class Ic ribonucleotide reductases (R2c) and R2-like ligand binding oxidases (R2lox). Selective cofactor assembly is due at least in part to the thermodynamics of M(II) binding to the apoprotein. We report here equilibrium studies of Fe(II)/Mn(II) discrimination in the biomimetic model system H5(F-HXTA) (5-fluoro-2-hydroxy-1,3-xylene-α,α'-diamine-N,N,N',N'-tetraacetic acid). The homobimetallic F-HXTA complexes [Fe(H2O)6][1]2·14H2O and [Mn(H2O)6][2]2·14H2O (1 = [Fe(II)2(F-HXTA)(H2O)4](-); 2 = [Mn(II)2(F-HXTA)(H2O)4](-)) were characterized by single crystal X-ray diffraction. NMR data show that 1 retains its structure in solution (2 is NMR silent). Metal exchange is facile, and the heterobimetallic complex [Fe(II)Mn(II)(F-HXTA)(H2O)4](-) (3) is formed from mixtures of 1 and 2. (19)F NMR was used to quantify 1 and 3 in the presence of excess M(II)(aq) at various metal ratios, and equilibrium constants for Fe(II)/Mn(II) discrimination were calculated from these data. Fe(II) is preferred over Mn(II) with K1 = 182 ± 13 for complete replacement (2 ⇌ 1). This relatively modest preference is attributed to a hard-soft acid-base mismatch between the divalent cations and the polycarboxylate ligand. The stepwise constants for replacement are K2 = 20.1 ± 1.3 (2 ⇌ 3) and K3 = 9.1 ± 1.1 (3 ⇌ 1). K2 > K3 demonstrates enhanced stability of the heterobimetallic state beyond what is expected for simple Mn(II) → Fe(II) replacement. The relevance to Fe(II)/Mn(II) discrimination in R2c and R2lox proteins is discussed.
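
    Note that the reported constants are internally consistent: complete replacement is the composition of the two steps, so K1 = K2 × K3 ≈ 20.1 × 9.1 ≈ 183, in agreement with the directly measured K1 = 182 ± 13.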

  4. Stimulus design for model selection and validation in cell signaling.

    PubMed

    Apgar, Joshua F; Toettcher, Jared E; Endy, Drew; White, Forest M; Tidor, Bruce

    2008-02-01

    Mechanism-based chemical kinetic models are increasingly being used to describe biological signaling. Such models serve to encapsulate current understanding of pathways and to enable insight into complex biological processes. One challenge in model development is that, with limited experimental data, multiple models can be consistent with known mechanisms and existing data. Here, we address the problem of model ambiguity by providing a method for designing dynamic stimuli that, in stimulus-response experiments, distinguish among parameterized models with different topologies, i.e., reaction mechanisms, in which only some of the species can be measured. We develop the approach by presenting two formulations of a model-based controller that is used to design the dynamic stimulus. In both formulations, an input signal is designed for each candidate model and parameterization so as to drive the model outputs through a target trajectory. The quality of a model is then assessed by the ability of the corresponding controller, informed by that model, to drive the experimental system. We evaluated our method on models of antibody-ligand binding, mitogen-activated protein kinase (MAPK) phosphorylation and dephosphorylation, and larger models of the epidermal growth factor receptor (EGFR) pathway. For each of these systems, the controller informed by the correct model is the most successful at designing a stimulus to produce the desired behavior. Using these stimuli, we were able to distinguish between models with subtle mechanistic differences, or where inputs and outputs were multiple reactions removed from the model differences. An advantage of this method of model discrimination is that it does not require novel reagents or altered measurement techniques; the only change to the experiment is the time course of stimulation. Taken together, these results provide a strong basis for using designed input stimuli as a tool for the development of cell signaling models.

  5. A Comprehensive Archive of Aerosol and Trace Gas Spatial Distributions for Model and Satellite Validation

    NASA Astrophysics Data System (ADS)

    Wilson, J. C.; Meland, B. S.; Axisa, D.

    2015-12-01

    The University of Denver Aerosol Group has assembled measured aerosol size distributions, gaseous concentrations, and atmospheric state variables covering a 30-year period into one comprehensive archive. Measurements were made during the period 1987-2013 and include data from a total of 21 NASA field campaigns. Measurements were taken from the ground to over 21 km in altitude, and from 72° S to 90° N latitude, on over 300 individual flights on NASA research aircraft. Aerosol measurements were made with the University of Denver's Nucleation-Mode Aerosol Size Spectrometer (NMASS), Focused Cavity Aerosol Spectrometer, and/or a low-pressure Condensation Particle Counter (CPC), depending on the specific campaign. The science payloads varied with the campaign objectives, but the aerosol data were invariably acquired in conjunction with measurements by other investigators, placing them in the context of atmospheric composition. The archive includes the location and time of the measurements along with tropopause heights and selected atmospheric composition and state data, such as ambient temperatures and pressures and abundances of ozone, N2O, oxides of nitrogen, water vapor, and CO2. The data archive is stored in NetCDF format and includes all relevant metadata for measured quantities. This archive will be hosted by NASA and will be available to the public for model validation. The data include indexing by scientific campaign, date, and spatial coordinates. This will facilitate comparisons across the available range of times, locations, and related measurements. This data set has been used for validation of satellite remote sensing data. Coincident measurements of aerosol size distributions were used to calculate extinction profiles, which were compared to those retrieved by the SAGE II satellite. Agreement between extinctions derived from the in situ size measurements and those provided by SAGE II was good for the 452, 525, and 1020 nm wavelength channels, but poor for
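
    A minimal sketch of how such a NetCDF archive might be queried; the file and variable names here are hypothetical, and the real ones should be taken from the archive's metadata.

      import numpy as np
      from netCDF4 import Dataset

      # Hypothetical file and variable names, for illustration only.
      with Dataset("du_aerosol_archive.nc") as nc:
          lat = nc.variables["latitude"][:]
          alt = nc.variables["altitude_km"][:]
          conc = nc.variables["aerosol_number_concentration"][:]

          # Select stratospheric samples (above a nominal 15 km) at high latitude.
          mask = (alt > 15.0) & (np.abs(lat) > 60.0)
          print(f"{int(mask.sum())} samples, median concentration "
                f"{np.ma.median(conc[mask]):.1f} cm^-3")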

  6. Validation of mechanical models for reinforced concrete structures: Presentation of the French project ``Benchmark des Poutres de la Rance''

    NASA Astrophysics Data System (ADS)

    L'Hostis, V.; Brunet, C.; Poupard, O.; Petre-Lazar, I.

    2006-11-01

    Several ageing models are available for predicting the mechanical consequences of rebar corrosion. They are used for service life prediction of reinforced concrete structures. Concerning corrosion diagnosis of reinforced concrete, some Non Destructive Testing (NDT) tools have been developed and have been in use for some years. However, these developments require validation on existing concrete structures. The French project “Benchmark des Poutres de la Rance” contributes to this effort. It has two main objectives: (i) validation of mechanical models to estimate the influence of rebar corrosion on the load-bearing capacity of a structure, and (ii) qualification of the use of NDT results to collect information on steel corrosion within reinforced concrete structures. Ten French and European institutions, from both academic research laboratories and industrial companies, contributed during 2004 and 2005. This paper presents the project, which was divided into several work packages: (i) the reinforced concrete beams were characterized with non-destructive testing tools, (ii) the mechanical behaviour of the beams was tested experimentally, (iii) complementary laboratory analyses were performed, and (iv) numerical simulation results were compared with the experimental results obtained in the mechanical tests.

  7. Dimensional and hierarchical models of depression using the Beck Depression Inventory-II in an Arab college student sample

    PubMed Central

    2010-01-01

    Background: An understanding of depressive symptomatology from the perspective of confirmatory factor analysis (CFA) could facilitate valid and interpretable comparisons across cultures. The objectives of the study were: (i) using the responses of a sample of Arab college students to the Beck Depression Inventory (BDI-II) in CFA, to compare the "goodness of fit" indices of the original dimensional three- and two-factor first-order models, and their modifications, with the corresponding hierarchical models (i.e., higher-order and bifactor models); and (ii) to assess the psychometric characteristics of the BDI-II, including convergent/discriminant validity with the Hopkins Symptom Checklist (HSCL-25). Method: Participants (N = 624) were Kuwaiti national college students, who completed the questionnaires in class. CFA was done with AMOS, version 16. Eleven models were compared using eight fit indices. Results: In CFA, all the models met most fit criteria. While the higher-order model did not provide improved fit over the dimensional first-order factor models, the bifactor model (BFM) had the best fit indices (CMIN/DF = 1.73; GFI = 0.96; RMSEA = 0.034). All regression weights of the dimensional models were significantly different from zero (P < 0.001). Standardized regression weights were mostly 0.27-0.60, and all covariance paths were significantly different from zero. The regression weights of the BFM showed that the variance related to the specific factors was mostly accounted for by the general depression factor, indicating that the general depression score is an adequate representation of severity. The BDI-II had adequate internal consistency and convergent/discriminant validity. The mean BDI score (15.5, SD = 8.5) was significantly higher than those of students from other countries (P < 0.001). Conclusion: The broadly adequate fit of the various models indicates that they have some merit and implies that the relationship between the domains of depression probably

  8. The subthalamic nucleus part II: modelling and simulation of activity.

    PubMed

    Heida, Tjitske; Marani, Enrico; Usunoff, Kamen G

    2008-01-01

    Part I of The Subthalamic Nucleus (volume 198) accentuates the gap between experimental animal and human information concerning subthalamic development, cytology, topography, and connections. The light and electron microscopical cytology focuses on the open nucleus concept and the neuronal types present in the subthalamic nucleus (STN). The cytochemistry encompasses enzymes, NO, glial fibrillary acidic protein (GFAP), calcium-binding proteins, and receptors (dopamine, cannabinoid, opioid, glutamate, gamma-aminobutyric acid (GABA), serotonin, cholinergic, and calcium channels). The ontogeny of the subthalamic cell cord is also reviewed. The topography concerns the rat, cat, baboon, and human STN. The descriptions of the connections are also given from a historical point of view. Recent tracer studies on the rat nigro-subthalamic connection revealed contralateral projections. This monograph (Part II of the two volumes) starts with a systemic model of the basal ganglia to evaluate the position of the STN in the direct, indirect, and hyperdirect pathways. A summary of in vitro studies is given, describing STN spontaneous activity as well as responses to depolarizing and hyperpolarizing inputs and high-frequency stimulation. STN bursting activity and the underlying ionic mechanisms are investigated. Deep brain stimulation, used for symptomatic treatment of Parkinson's disease, is discussed in terms of the elements that are influenced and its hypothesized mechanisms. This part of the monograph explores the pedunculopontine-subthalamic connections and summarizes attempts to mimic neurotransmitter actions of the pedunculopontine nucleus in cell cultures, and the effects of high-frequency stimulation on cultured dissociated rat subthalamic neurons. STN cell models (single- and multi-compartment models and system-level models) are discussed in relation to subthalamic function and dysfunction. Parts I and II are compared. PMID:18727495

  9. A virtual source model for Kilo-voltage cone beam CT: Source characteristics and model validation

    SciTech Connect

    Spezi, E.; Volken, W.; Frei, D.; Fix, M. K.

    2011-09-15

    Purpose: The purpose of this investigation was to study the source characteristics of a clinical kilo-voltage cone beam CT unit and to develop and validate a virtual source model that could be used for treatment planning purposes. Methods: We used a previously commissioned full Monte Carlo model and new bespoke software to study the source characteristics of a clinical kilo-voltage cone beam CT (CBCT) unit. We identified the main particle sources and their spatial, energy, and angular distributions for all the image acquisition presets currently used in our clinical practice. This includes a combination of two energies (100 and 120 kVp), two filters (neutral and bowtie), and eight different x-ray beam apertures. We subsequently built a virtual source model, which we validated against full Monte Carlo calculations. Results: We found that the radiation output of the clinical kilo-voltage cone beam CT unit investigated in this study could be reproduced with a virtual model comprising two sources (target and filtration cone) or three sources (target, filtration cone, and bowtie filter) when additional filtration was used. With this model, we accounted for more than 97% of the photons exiting the unit. Each source in our model was characterised by an origin distribution in both the X and Y directions, a fluence map, a single energy spectrum for unfiltered beams, and a two-dimensional energy spectrum for bowtie-filtered beams. The percentage dose difference between full Monte Carlo and virtual source model based dose distributions was well within the statistical uncertainty associated with the calculations (±2%, one standard deviation) in all cases studied. Conclusions: The virtual source model that we developed is accurate in calculating the dose delivered from a commercial kilo-voltage cone beam CT unit operating with routine clinical image acquisition settings. Our data have also shown that the target, filtration cone, and bowtie filter sources all needed to be included in the model.

  10. Experiments for Calibration and Validation of Plasticity and Failure Material Modeling: 6061-T651 Aluminum

    SciTech Connect

    McFadden, Sam X.; Korellis, John S.; Lee, Kenneth L.; Rogillio, Brendan R.; Hatch, Paul W.

    2008-03-01

    Experimental data for material plasticity and failure model calibration and validation were obtained from 6061-T651 aluminum, in the form of a 4-in. diameter extruded rod. Model calibration data were taken from smooth tension, notched tension, and shear tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path-dependent combinations of internal pressure, extension, and torsion.

  11. The Sandia MEMS Passive Shock Sensor : FY08 testing for functionality, model validation, and technology readiness.

    SciTech Connect

    Walraven, Jeremy Allen; Blecke, Jill; Baker, Michael Sean; Clemens, Rebecca C.; Mitchell, John Anthony; Brake, Matthew Robert; Epp, David S.; Wittwer, Jonathan W.

    2008-10-01

    This report summarizes the functional, model validation, and technology readiness testing of the Sandia MEMS Passive Shock Sensor in FY08. Functional testing of a large number of revision 4 parts showed robust and consistent performance. Model validation testing helped tune the models to match data well and identified several areas for future investigation related to high frequency sensitivity and thermal effects. Finally, technology readiness testing demonstrated the integrated elements of the sensor under realistic environments.

  12. Modeling mania: Further validation for Black Swiss mice as model animals.

    PubMed

    Hannah-Poquette, Chelsey; Anderson, Grant W; Flaisher-Grinberg, Shlomit; Wang, Jia; Meinerding, Tonya M; Einat, Haim

    2011-09-30

    The paucity of appropriate animal models for bipolar disorder hinders research on the disorder and its treatments. Previous work suggests that Black Swiss (BS) mice may be suitable model animals for behavioral domains of mania, including reward-seeking, risk-taking, vigor, aggression, and sensitivity to psychostimulants. These behaviors are high in BS mice compared with other strains and are responsive to the mood stabilizers lithium and valproate but not to the antidepressant imipramine. The current study evaluated the etiological validity of this model by assessing brain expression of two proteins implicated in affective disorders, β-catenin and BDNF, in BS mice versus C57bl/6, A/J, and CBA/J mice. Additionally, pharmacological validity was further tested by assessing the effects of risperidone in a battery of behavioral tests. β-catenin and BDNF expression were evaluated in the frontal cortex and hippocampus of untreated BS, CBA/J, A/J, and C57bl/6 mice by western blot. Subchronic doses of risperidone (0.1 and 0.3 mg/kg) were tested in a battery of behavioral tests for domains of mania. Expression of β-catenin was found to be lower in the hippocampus of BS mice compared with the other strains. Reduced β-catenin expression was not observed in the frontal cortex. BDNF expression levels were similar between strains in both the hippocampus and frontal cortex. In the behavioral tests, risperidone ameliorated amphetamine-induced hyperactivity without affecting performance in the other tests of the battery. These results offer additional pharmacological, and possibly etiological, validity supporting the use of Black Swiss mice as a model for domains of mania. PMID:21570428

  13. Comprehensive and Macrospin-Based Magnetic Tunnel Junction Spin Torque Oscillator Model- Part II: Verilog-A Model Implementation

    NASA Astrophysics Data System (ADS)

    Chen, Tingsu; Eklund, Anders; Iacocca, Ezio; Rodriguez, Saul; Malm, B. Gunnar; Akerman, Johan; Rusu, Ana

    2015-03-01

    The rapid development of the magnetic tunnel junction (MTJ) spin torque oscillator (STO) technology demands an analytical model to enable building MTJ STO-based circuits and systems so as to evaluate and utilize MTJ STOs in various applications. In Part I of this paper, an analytical model based on the macrospin approximation, has been introduced and verified by comparing it with the measurements of three different MTJ STOs. In Part II, the full Verilog-A implementation of the proposed model is presented. To achieve a reliable model, an approach to reproduce the phase noise generated by the MTJ STO has been proposed and successfully employed. The implemented model yields a time domain signal, which retains the characteristics of operating frequency, linewidth, oscillation amplitude and DC operating point, with respect to the magnetic field and applied DC current. The Verilog-A implementation is verified against the analytical model, providing equivalent device characteristics for the full range of biasing conditions. Furthermore, a system that includes an MTJ STO and CMOS RF circuits is simulated to validate the proposed model for system- and circuit-level designs. The simulation results demonstrate that the proposed model opens the possibility to explore STO technology in a wide range of applications.

  14. Validation and assessment of integer programming sensor placement models.

    SciTech Connect

    Uber, James G.; Hart, William Eugene; Watson, Jean-Paul; Phillips, Cynthia Ann; Berry, Jonathan W.

    2005-02-01

    We consider the accuracy of predictions made by integer programming (IP) models of sensor placement for water security applications. We have recently shown that IP models can be used to find optimal sensor placements for a variety of performance criteria (e.g., minimizing health impacts or time to detection). However, these models make a variety of simplifying assumptions that might bias the final solution. We show that our IP modeling assumptions are similar to those of models developed for other sensor placement methodologies, and thus IP models should give similar predictions. However, this discussion highlights that there are significant differences in how temporal effects are modeled for sensor placement. We describe how these modeling assumptions can impact sensor placements; a simplified formulation of such a placement model is sketched below.
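
    A minimal p-median-style sensor placement IP in PuLP; the impact matrix, the dummy non-detection location, and the sensor budget are illustrative assumptions, not the paper's formulation, which encodes richer health-impact and detection-time objectives.

      import pulp

      # impact[s][e]: damage if event e is first detected at location s; the
      # dummy location carries the penalty for events no sensor detects.
      locations = ["junction_A", "junction_B", "junction_C", "dummy"]
      events = range(3)
      impact = {
          "junction_A": [10, 50, 40],
          "junction_B": [30, 20, 45],
          "junction_C": [25, 35, 15],
          "dummy":      [100, 100, 100],
      }
      budget = 2  # number of real sensors to place

      prob = pulp.LpProblem("sensor_placement", pulp.LpMinimize)
      place = pulp.LpVariable.dicts("place", locations, cat="Binary")
      assign = pulp.LpVariable.dicts(
          "assign", [(s, e) for s in locations for e in events], cat="Binary")

      # Minimize total (equivalently, expected) impact over the events.
      prob += pulp.lpSum(impact[s][e] * assign[(s, e)]
                         for s in locations for e in events)
      for e in events:
          prob += pulp.lpSum(assign[(s, e)] for s in locations) == 1
      for s in locations:
          for e in events:
              prob += assign[(s, e)] <= place[s]  # only placed sensors detect
      prob += pulp.lpSum(place[s] for s in locations if s != "dummy") <= budget
      prob += place["dummy"] == 1  # non-detection is always available

      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      print([s for s in locations if s != "dummy" and place[s].value() == 1])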

  15. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-7.

    PubMed

    Eddy, David M; Hollingworth, William; Caro, J Jaime; Tsevat, Joel; McDonald, Kathryn M; Wong, John B

    2012-01-01

    Trust and confidence are critical to the success of health care models. There are two main methods for achieving this: transparency (people can see how the model is built) and validation (how well it reproduces reality). This report describes recommendations for achieving transparency and validation, developed by a task force appointed by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the Society for Medical Decision Making (SMDM). Recommendations were developed iteratively by the authors. A nontechnical description should be made available to anyone-including model type and intended applications; funding sources; structure; inputs, outputs, other components that determine function, and their relationships; data sources; validation methods and results; and limitations. Technical documentation, written in sufficient detail to enable a reader with necessary expertise to evaluate the model and potentially reproduce it, should be made available openly or under agreements that protect intellectual property, at the discretion of the modelers. Validation involves face validity (wherein experts evaluate model structure, data sources, assumptions, and results), verification or internal validity (check accuracy of coding), cross validity (comparison of results with other models analyzing same problem), external validity (comparing model results to real-world results), and predictive validity (comparing model results with prospectively observed events). The last two are the strongest form of validation. Each section of this paper contains a number of recommendations that were iterated among the authors, as well as the wider modeling task force jointly set up by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making. PMID:22990088

  16. Campus Energy Model for Control and Performance Validation

    SciTech Connect

    2014-09-19

    The core of the modeling platform is an extensible block library for the MATLAB/Simulink software suite. The platform enables true co-simulation (interaction at each simulation time step) with NREL's state-of-the-art modeling tools and other energy modeling software.

  17. Estimation and Q-Matrix Validation for Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Feng, Yuling

    2013-01-01

    Diagnostic classification models (DCMs) are structured latent class models widely discussed in the field of psychometrics. They model subjects' underlying attribute patterns and classify subjects into unobservable groups based on their mastery of attributes required to answer the items correctly. The effective implementation of DCMs depends…

  18. LHC phenomenology of SO(10) models with Yukawa unification. II.

    NASA Astrophysics Data System (ADS)

    Anandakrishnan, Archana; Bryant, B. Charles; Raby, Stuart

    2014-07-01

    In this paper we study Yukawa-unified SO(10) supersymmetric (SUSY) grand unified theories (GUTs) with two types of SO(10) boundary conditions: (i) universal gaugino masses and (ii) nonuniversal gaugino masses with effective "mirage" mediation. With these boundary conditions, we perform a global χ2 analysis to obtain the parameters consistent with 11 low-energy observables, including the top, bottom, and tau masses. Both boundary conditions have universal scalar masses and "just so" splitting for the up- and down-type Higgs masses. In these models, the third-family scalars are lighter than those of the first two families, and the gauginos are lighter than all the scalars. We therefore focus on the gluino phenomenology in these models. In particular, we estimate the lowest allowed gluino mass in our models coming from the most recent LHC data and compare this to limits obtained using simplified models. We find that the lower bound on the gluino mass in Yukawa-unified SO(10) SUSY GUTs is generically ~1.2 TeV at the 1σ level unless there is considerable degeneracy between the gluino and the lightest supersymmetric particle, in which case the bounds are much weaker. Hence many of our benchmark points are not ruled out by the present LHC data and are still viable models which can be tested at LHC 14.

  19. Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2009-01-01

    In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…

  20. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…