Science.gov

Sample records for ii model validation

  1. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    SciTech Connect

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which focuses on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set are then quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there is also considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more closely than they do the chosen reference data. In aggregate, the simulations of land-surface latent and sensible
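
    The aggregated RMS-error metric described above is straightforward to reproduce. Below is a minimal sketch, assuming hypothetical gridded arrays sim and ref (a simulated and a reference field on a common land grid) and an optional weight array; none of these names come from the report.

        import numpy as np

        def rmse(sim, ref, weights=None):
            """Root-mean-square error between a simulated and a reference field.

            sim, ref : arrays of the same shape (e.g. time x lat x lon over land)
            weights  : optional array of the same shape (e.g. grid-cell areas)
            """
            diff2 = (np.asarray(sim) - np.asarray(ref)) ** 2
            if weights is None:
                return float(np.sqrt(np.nanmean(diff2)))
            w = np.asarray(weights, dtype=float)
            return float(np.sqrt(np.nansum(w * diff2) / np.nansum(w)))

        # The observational-uncertainty estimate works the same way: apply the metric
        # to two alternative validation data sets instead of model vs. reference.
        # obs_uncertainty = rmse(reanalysis_field, merged_product_field, weights=areas)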

  2. A scattering model for perfectly conducting random surfaces. I - Model development. II - Range of validity

    NASA Technical Reports Server (NTRS)

    Fung, A. K.; Pan, G. W.

    1987-01-01

    The surface current on a perfectly conducting randomly rough surface is estimated by solving iteratively a standard integral equation, and the estimate is then used to compute the far-zone scattered fields and the backscattering coefficients for vertical, horizontal and cross polarizations. The model developed here yields a simple backscattering coefficient expression in terms of the surface parameters. The expression reduces analytically to the Kirchhoff and the first-order small-perturbation model in the high- and low-frequency regions, respectively. The range of validity of the model is determined.

  3. Validated Competing Event Model for the Stage I-II Endometrial Cancer Population

    SciTech Connect

    Carmona, Ruben; Gulaya, Sachin; Murphy, James D.; Rose, Brent S.; Wu, John; Noticewala, Sonal; McHale, Michael T.; Yashar, Catheryn M.; Vaida, Florin; Mell, Loren K.

    2014-07-15

    Purpose/Objective(s): Early-stage endometrial cancer patients are at higher risk of noncancer mortality than of cancer mortality. Competing event models incorporating comorbidity could help identify women most likely to benefit from treatment intensification. Methods and Materials: A total of 67,397 women with stage I-II endometrioid adenocarcinoma after total hysterectomy diagnosed from 1988 to 2009 were identified in Surveillance, Epidemiology, and End Results (SEER) and linked SEER-Medicare databases. Using demographic and clinical information, including comorbidity, we sought to develop and validate a risk score to predict the incidence of competing mortality. Results: In the validation cohort, increasing competing mortality risk score was associated with increased risk of noncancer mortality (subdistribution hazard ratio [SDHR], 1.92; 95% confidence interval [CI], 1.60-2.30) and decreased risk of endometrial cancer mortality (SDHR, 0.61; 95% CI, 0.55-0.78). Controlling for other variables, Charlson Comorbidity Index (CCI) = 1 (SDHR, 1.62; 95% CI, 1.45-1.82) and CCI >1 (SDHR, 3.31; 95% CI, 2.74-4.01) were associated with increased risk of noncancer mortality. The 10-year cumulative incidences of competing mortality within low-, medium-, and high-risk strata were 27.3% (95% CI, 25.2%-29.4%), 34.6% (95% CI, 32.5%-36.7%), and 50.3% (95% CI, 48.2%-52.6%), respectively. With increasing competing mortality risk score, we observed a significant decline in omega (ω), indicating a diminishing likelihood of benefit from treatment intensification. Conclusion: Comorbidity and other factors influence the risk of competing mortality among patients with early-stage endometrial cancer. Competing event models could improve our ability to identify patients likely to benefit from treatment intensification.
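
    As an illustration of the competing-risks quantity reported above (cumulative incidence of one cause of death in the presence of another), here is a minimal nonparametric sketch; the event coding and variable names are hypothetical and are not taken from the SEER analysis.

        import numpy as np

        def cumulative_incidence(time, event, cause):
            """Nonparametric cumulative incidence for one competing cause.

            time  : follow-up time per subject
            event : 0 = censored, 1 = cancer death, 2 = non-cancer death (hypothetical coding)
            cause : event code whose incidence is accumulated
            Ties are processed one record at a time, which is adequate for a sketch.
            """
            time = np.asarray(time, float)
            event = np.asarray(event, int)
            order = np.argsort(time)
            n = len(time)
            surv, cif = 1.0, 0.0
            times, values = [], []
            for k, i in enumerate(order):
                at_risk = n - k
                if event[i] == cause:
                    cif += surv / at_risk        # S(t-) * dN_cause(t) / Y(t)
                if event[i] != 0:
                    surv *= 1.0 - 1.0 / at_risk  # all-cause Kaplan-Meier update
                times.append(time[i])
                values.append(cif)
            return np.array(times), np.array(values)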

  4. Development of a livestock odor dispersion model: part II. Evaluation and validation.

    PubMed

    Yu, Zimu; Guo, Huiqing; Laguë, Claude

    2011-03-01

    A livestock odor dispersion model (LODM) was developed to predict odor concentration and odor frequency using routine hourly meteorological data input. The odor concentrations predicted by the LODM were compared with the results obtained from other commercial models (Industrial Source Complex Short-Term model, version 3, CALPUFF) to evaluate its appropriateness. Two sets of field odor plume measurement data were used to validate the model. The model-predicted mean odor concentrations and odor frequencies were compared with those measured. Results show that this model has good performance for predicting odor concentrations and odor frequencies. PMID:21416754
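
    The abstract does not list the performance measures used; typical dispersion-model evaluation statistics include fractional bias and the fraction of predictions within a factor of two of observations. A minimal sketch with hypothetical paired prediction/observation arrays:

        import numpy as np

        def fractional_bias(pred, obs):
            """FB = 2*(mean(obs) - mean(pred)) / (mean(obs) + mean(pred)); 0 is unbiased."""
            pred, obs = np.asarray(pred, float), np.asarray(obs, float)
            return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

        def fac2(pred, obs):
            """Fraction of pairs with 0.5 <= pred/obs <= 2 (perfect score is 1)."""
            ratio = np.asarray(pred, float) / np.asarray(obs, float)
            return float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))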

  5. A wheat grazing model for simulating grain and beef production: Part II - model validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Model evaluation is a prerequisite to its adoption and successful application. The objective of this paper is to evaluate the ability of a newly developed wheat grazing model to predict fall-winter forage and grain yields of winter wheat (Triticum aestivum L.) as well as daily weight gains per steer...

  6. A physical model of the bidirectional reflectance of vegetation canopies. I - Theory. II - Inversion and validation

    NASA Technical Reports Server (NTRS)

    Verstraete, Michel M.; Pinty, Bernard; Dickinson, Robert E.

    1990-01-01

    A new physically based analytical model of the bidirectional reflectance of vegetation canopies is derived. The model expresses the bidirectional reflectance field of a semiinfinite canopy as a combination of functions describing (1) the optical properties of the leaves through their single-scattering albedo and their phase function, (2) the average distribution of leaf orientations, and (3) the architecture of the canopy. The model is validated against laboratory and ground-based measurements in the visible and IR spectral regions, taken over two vegetation covers. The intrinsic optical properties of leaves and the information on the geometrical canopy arrangements in space were obtained using an inversion procedure based on a nonlinear optimization technique. Model predictions of bidirectional reflectances obtained using the inversion procedure compare well with actual observations.
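
    The inversion step (retrieving leaf optical properties and canopy-structure parameters from multi-angle reflectance) can be mimicked with a generic nonlinear least-squares fit. The toy reflectance function below is purely illustrative and is not the model of Verstraete et al.; only the fitting pattern is the point.

        import numpy as np
        from scipy.optimize import least_squares

        def brf(params, view_zenith):
            """Toy bidirectional-reflectance function (illustrative only)."""
            albedo, structure = params
            mu = np.cos(np.radians(view_zenith))
            return albedo * (1.0 + structure * (1.0 - mu)) / (1.0 + structure)

        def invert(view_zenith, measured):
            """Retrieve (albedo, structure) by minimizing residuals to measurements."""
            result = least_squares(
                lambda p: brf(p, view_zenith) - measured,
                x0=[0.3, 0.5],                     # initial guess
                bounds=([0.0, 0.0], [1.0, 5.0]),
            )
            return result.x

        # angles = np.array([0, 15, 30, 45, 60]); refl = measured reflectances at those angles
        # albedo_hat, structure_hat = invert(angles, refl)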

  7. Southern california offshore air quality model validation study. Volume II: synthesis of findings. Final report

    SciTech Connect

    Zannetti, P.; Wilbur, D.M.; Baxter, R.A.

    1981-11-01

    This volume describes the significant results of a BLM-funded study conducted jointly by AeroVironment Inc. and the Naval Postgraduate School to validate and/or modify screening models commonly used to predict onshore air quality impacts from outer continental shelf (OCS) emission sources. The study involved both field experiments and computer modeling analysis to give a better understanding of dispersion over water and at the land/sea interface. Two field experiments were performed releasing SF6 tracer gas from a research vessel offshore of the Ventura-Oxnard, California, coastal area in September 1980 and January 1981. Modifications are discussed for standard Gaussian models to predict peak plume concentration values, the horizontal and vertical shape of the plume, and peak ground-level impacts from OCS emission sources.
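
    For context, the screening models referred to are Gaussian plume formulations. A minimal sketch of the standard ground-level concentration with total ground reflection follows; the dispersion coefficients are inputs here and are not the modified over-water parameters developed in the study.

        import numpy as np

        def gaussian_plume_ground(Q, u, y, H, sigma_y, sigma_z):
            """Ground-level concentration (g/m^3) from a continuous point source.

            Q            : emission rate (g/s)
            u            : wind speed (m/s)
            y            : crosswind distance from the plume centerline (m)
            H            : effective release height (m)
            sigma_y, sigma_z : dispersion coefficients at the downwind distance of interest (m)
            """
            return (Q / (np.pi * u * sigma_y * sigma_z)
                    * np.exp(-0.5 * (y / sigma_y) ** 2)
                    * np.exp(-0.5 * (H / sigma_z) ** 2))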

  8. Fluids with competing interactions. II. Validating a free energy model for equilibrium cluster size

    NASA Astrophysics Data System (ADS)

    Bollinger, Jonathan A.; Truskett, Thomas M.

    2016-08-01

    Using computer simulations, we validate a simple free energy model that can be analytically solved to predict the equilibrium size of self-limiting clusters of particles in the fluid state governed by a combination of short-range attractive and long-range repulsive pair potentials. The model is a semi-empirical adaptation and extension of the canonical free energy-based result due to Groenewold and Kegel [J. Phys. Chem. B 105, 11702-11709 (2001)], where we use new computer simulation data to systematically improve the cluster-size scalings with respect to the strengths of the competing interactions driving aggregation. We find that one can adapt a classical nucleation like theory for small energetically frustrated aggregates provided one appropriately accounts for a size-dependent, microscopic energy penalty of interface formation, which requires new scaling arguments. This framework is verified in part by considering the extensive scaling of intracluster bonding, where we uncover a superlinear scaling regime distinct from (and located between) the known regimes for small and large aggregates. We validate our model based on comparisons against approximately 100 different simulated systems comprising compact spherical aggregates with characteristic (terminal) sizes between six and sixty monomers, which correspond to wide ranges in experimentally controllable parameters.

  9. Assessing the wildlife habitat value of New England salt marshes: II. Model testing and validation.

    PubMed

    McKinney, Richard A; Charpentier, Michael A; Wigand, Cathleen

    2009-07-01

    We tested a previously described model to assess the wildlife habitat value of New England salt marshes by comparing modeled habitat values and scores with bird abundance and species richness at sixteen salt marshes in Narragansett Bay, Rhode Island USA. As a group, wildlife habitat value assessment scores for the marshes ranged from 307-509, or 31-67% of the maximum attainable score. We recorded 6 species of wading birds (Ardeidae; herons, egrets, and bitterns) at the sites during biweekly surveys. Species richness (r² = 0.24, F = 4.53, p = 0.05) and abundance (r² = 0.26, F = 5.00, p = 0.04) of wading birds significantly increased with increasing assessment score. We optimized our assessment model for wading birds by using Akaike information criteria (AIC) to compare a series of models comprising specific components and categories of our model that best reflect their habitat use. The model incorporating pre-classification, wading bird habitat categories, and natural land surrounding the sites was substantially supported by AIC analysis as the best model. The abundance of wading birds significantly increased with increasing assessment scores generated with the optimized model (r² = 0.48, F = 12.5, p = 0.003), demonstrating that optimizing models can be helpful in improving the accuracy of the assessment for a given species or species assemblage. In addition to validating the assessment model, our results show that, in spite of their urban setting, our study marshes provide substantial wildlife habitat value. This suggests that even small wetlands in highly urbanized coastal settings can provide important wildlife habitat value if key habitat attributes (e.g., natural buffers, habitat heterogeneity) are present. PMID:18597178
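
    A hedged sketch of the AIC-based ranking described above, using ordinary least-squares fits of bird abundance against candidate assessment scores; the variable names are hypothetical and the information-criterion bookkeeping is the generic Gaussian-likelihood form, not the authors' exact implementation.

        import numpy as np

        def aic_ols(y, X):
            """AIC for an ordinary least-squares fit of y on the columns of X (with intercept)."""
            y = np.asarray(y, float)
            X = np.column_stack([np.ones(len(y)), np.asarray(X, float)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            n, k = len(y), X.shape[1]
            sigma2 = resid @ resid / n
            loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
            return 2 * (k + 1) - 2 * loglik   # +1 counts the error variance

        # Rank candidate model variants by AIC; the lowest value wins, and differences
        # greater than about 2 are usually taken as meaningful support.
        # aics = {name: aic_ols(wading_bird_abundance, scores) for name, scores in candidates.items()}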

  10. SAGE II aerosol data validation based on retrieved aerosol model size distribution from SAGE II aerosol measurements

    NASA Technical Reports Server (NTRS)

    Wang, Pi-Huan; Mccormick, M. P.; Mcmaster, L. R.; Chu, W. P.; Swissler, T. J.; Osborn, M. T.; Russell, P. B.; Oberbeck, V. R.; Livingston, J.; Rosen, J. M.

    1989-01-01

    Consideration is given to aerosol correlative measurements experiments for the Stratospheric Aerosol and Gas Experiment (SAGE) II, conducted between November 1984 and July 1986. The correlative measurements were taken with an impactor/laser probe, a dustsonde, and an airborne 36-cm lidar system. The primary aerosol quantities measured by the ground-based instruments are compared with those calculated from the aerosol size distributions from SAGE II aerosol extinction measurements. Good agreement is found between the two sets of measurements.

  11. Development and validation of an evaporation duct model. Part II: Evaluation and improvement of stability functions

    NASA Astrophysics Data System (ADS)

    Ding, Juli; Fei, Jianfang; Huang, Xiaogang; Cheng, Xiaoping; Hu, Xiaohua; Ji, Liang

    2015-06-01

    This study aims to validate and improve the universal evaporation duct (UED) model through a further analysis of the stability function (ψ). A large number of hydrometeorological observations obtained from a tower platform near Xisha Island of the South China Sea are employed, together with the latest variations of the ψ function. Applicability of different ψ functions for specific sea areas and stratification conditions is investigated based on three objective criteria. The results show that, under unstable conditions, the ψ function of Fairall et al. (1996) (i.e., Fairall96; other function names are abbreviated similarly) in general offers the best performance. However, strictly speaking, this holds true only for the stability (represented by the bulk Richardson number RiB) range -2.6 ⩽ RiB < -0.1; when conditions become weakly unstable (-0.1 ⩽ RiB < -0.01), Fairall96 offers the second best performance after Hu and Zhang (1992) (HYQ92). Conversely, for near-neutral but slightly unstable conditions (-0.01 ⩽ RiB < 0.0), the effects of Edson04, Fairall03, Grachev00, and Fairall96 are similar, with Edson04 being the best function but offering only a weak advantage. Under stable conditions, HYQ92 is optimal and offers a pronounced advantage, followed by the newly introduced SHEBA07 function (Grachev et al., 2007). Accordingly, the most favorable functions, i.e., Fairall96 and HYQ92, are incorporated into the UED model to obtain an improved version of the model. With the new functions, the mean root-mean-square (rms) errors of the modified refractivity (M), 0-5-m M slope, 5-40-m M slope, and the rms errors of evaporation duct height (EDH) are reduced by 21.65%, 9.12%, 38.79%, and 59.06%, respectively, compared to the classical Naval Postgraduate School model.
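
    A minimal sketch of the pairing between stability-function schemes and bulk Richardson number ranges summarized in the abstract. The function implementations themselves are omitted (they come from the cited literature), and the treatment of RiB values outside the quoted ranges is an assumption of this sketch.

        def preferred_psi_scheme(ri_b):
            """Return the stability-function scheme favored in the study for a given RiB."""
            if -2.6 <= ri_b < -0.1:
                return "Fairall96"   # clearly unstable
            if -0.1 <= ri_b < -0.01:
                return "HYQ92"       # weakly unstable (Fairall96 second best)
            if -0.01 <= ri_b < 0.0:
                return "Edson04"     # near-neutral, slightly unstable (weak advantage)
            if ri_b >= 0.0:
                return "HYQ92"       # stable (SHEBA07 second best); >= 0 boundary assumed here
            raise ValueError("RiB below the range examined in the study")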

  12. SAGE II aerosol data validation based on retrieved aerosol model size distribution from SAGE II aerosol measurements.

    PubMed

    Wang, P H; McCormick, M P; McMaster, L R; Chu, W P; Swissler, T J; Osborn, M T; Russell, P B; Oberbeck, V R; Livingston, J; Rosen, J M; Hofmann, D J; Grams, G W; Fuller, W H; Yue, G K

    1989-06-20

    This paper describes an investigation of the comprehensive aerosol correlative measurement experiments conducted between November 1984 and July 1986 for the satellite measurement program of the Stratospheric Aerosol and Gas Experiment (SAGE II). The correlative sensors involved in the experiments consist of the NASA Ames Research Center impactor/laser probe, the University of Wyoming dustsonde, and the NASA Langley Research Center airborne 14-inch (36 cm) lidar system. The approach of the analysis is to compare the primary aerosol quantities measured by the ground-based instruments with the calculated ones based on the aerosol size distributions retrieved from the SAGE II aerosol extinction measurements. The analysis shows that the aerosol size distributions derived from the SAGE II observations agree qualitatively with the in situ measurements made by the impactor/laser probe. The SAGE II-derived vertical distributions of the ratio N(0.15)/N(0.25) (where N(r) is the cumulative aerosol concentration for particle radii greater than r, in micrometers) and the aerosol backscatter profiles at 0.532- and 0.6943-micrometer lidar wavelengths are shown to agree with the dustsonde and the 14-inch (36-cm) lidar observations, with the differences being within the respective uncertainties of the SAGE II and the other instruments. PMID:11539801

  13. Person Heterogeneity of the BDI-II-C and Its Effects on Dimensionality and Construct Validity: Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Wu, Pei-Chen; Huang, Tsai-Wei

    2010-01-01

    The purpose of this study was to apply the mixed Rasch model to investigate person heterogeneity of the Beck Depression Inventory-II-Chinese version (BDI-II-C) and its effects on dimensionality and construct validity. Person heterogeneity was reflected by two latent classes that differ qualitatively. Additionally, person heterogeneity adversely affected the…

  14. Predictive modeling of infrared radiative heating in tomato dry-peeling process: Part II. Model validation and sensitivity analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...

  15. Validating the Serpent Model of FiR 1 Triga Mk-II Reactor by Means of Reactor Dosimetry

    NASA Astrophysics Data System (ADS)

    Viitanen, Tuomas; Leppänen, Jaakko

    2016-02-01

    A model of the FiR 1 Triga Mk-II reactor has been previously generated for the Serpent Monte Carlo reactor physics and burnup calculation code. In the current article, this model is validated by comparing the predicted reaction rates of nickel and manganese at 9 different positions in the reactor to measurements. In addition, track-length estimators are implemented in Serpent 2.1.18 to increase its performance in dosimetry calculations. The usage of the track-length estimators is found to decrease the reaction rate calculation times by a factor of 7-8 compared to the standard estimator type in Serpent, the collision estimators. The differences in the reaction rates between the calculation and the measurement are below 20%.

  16. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 2 (Appendices I, section 5 and II, section 1)

    SciTech Connect

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquor submodels, and fundamental knowledge to the US kraft pulp industry. Volume 2 contains the last section of Appendix I, Radiative heat transfer in kraft recovery boilers, and the first section of Appendix II, The effect of temperature and residence time on the distribution of carbon, sulfur, and nitrogen between gaseous and condensed phase products from low temperature pyrolysis of kraft black liquor.

  17. Assessing Wildlife Habitat Value of New England Salt Marshes: II. Model Testing and Validation

    EPA Science Inventory

    We test a previously described model to assess the wildlife habitat value of New England salt marshes by comparing modeled habitat values and scores with bird abundance and species richness at sixteen salt marshes in Narragansett Bay, Rhode Island USA. Assessment scores ranged f...

  18. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models

    EPA Science Inventory

    The second phase of the MicroArray Quality Control (MAQC-II) project evaluated common practices for developing and validating microarray-based models aimed at predicting toxicological and clinical endpoints. Thirty-six teams developed classifiers for 13 endpoints - some easy, som...

  19. TAMDAR Sensor Validation in 2003 AIRS II

    NASA Technical Reports Server (NTRS)

    Daniels, Taumi S.; Murray, John J.; Anderson, Mark V.; Mulally, Daniel J.; Jensen, Kristopher R.; Grainger, Cedric A.; Delene, David J.

    2005-01-01

    This study entails an assessment of TAMDAR in situ temperature, relative humidity and winds sensor data from seven flights of the UND Citation II. These data are undergoing rigorous assessment to determine their viability to significantly augment domestic Meteorological Data Communications Reporting System (MDCRS) and the international Aircraft Meteorological Data Reporting (AMDAR) system observational databases to improve the performance of regional and global numerical weather prediction models. NASA Langley Research Center participated in the Second Alliance Icing Research Study from November 17 to December 17, 2003. TAMDAR data taken during this period are compared with validation data from the UND Citation. The data indicate acceptable performance of the TAMDAR sensor when compared to measurements from the UND Citation research instruments.

  20. Model Validation Status Review

    SciTech Connect

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  1. SOSS ICN Model Validation

    NASA Technical Reports Server (NTRS)

    Zhu, Zhifan

    2016-01-01

    Under the NASA-KAIA-KARI ATM research collaboration agreement, SOSS ICN Model has been developed for Incheon International Airport. This presentation describes the model validation work in the project. The presentation will show the results and analysis of the validation.

  2. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 3 (Appendices II, sections 2--3 and III)

    SciTech Connect

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquor submodels, and fundamental knowledge to the US kraft pulp industry. Volume 3 contains the following appendix sections: Formation and destruction of nitrogen oxides in recovery boilers; Sintering and densification of recovery boiler deposits laboratory data and a rate model; and Experimental data on rates of particulate formation during char bed burning.

  3. Numerical investigation of dynamic microorgan devices as drug screening platforms. Part II: Microscale modeling approach and validation.

    PubMed

    Tourlomousis, Filippos; Chang, Robert C

    2016-03-01

    The authors have previously reported a rigorous macroscale modeling approach for an in vitro 3D dynamic microorgan device (DMD). This paper represents the second of a two-part model-based investigation where the effect of microscale (single liver cell-level) shear-mediated mechanotransduction on drug biotransformation is deconstructed. Herein, each cell is explicitly incorporated into the geometric model as a single compartmentalized metabolic structure. Each cell's metabolic activity is coupled with the microscale hydrodynamic Wall Shear Stress (WSS) simulated around the cell boundary through a semi-empirical polynomial function as an additional reaction term in the mass transfer equations. Guided by the macroscale model-based hydrodynamics, only 9 cells in 3 representative DMD domains are explicitly modeled. Dynamic and reaction similarity rules based on non-dimensionalization are invoked to correlate the numerical and empirical models, accounting for the substrate time scales. The proposed modeling approach addresses the key challenge of computational cost towards modeling complex large-scale DMD-type systems with prohibitively high cell densities. Transient simulations are implemented to extract the drug metabolite profile with the microscale modeling approach validated with an experimental drug flow study. The results from the authors' study demonstrate the preferred implementation of the microscale modeling approach over that of its macroscale counterpart. PMID:26333066

  4. Macrotransport-solidification kinetics modeling of equiaxed dendritic growth: Part II. Computation problems and validation on INCONEL 718 superalloy castings

    NASA Astrophysics Data System (ADS)

    Nastac, L.; Stefanescu, D. M.

    1996-12-01

    In Part I of the article, a new analytical model that describes solidification of equiaxed dendrites was presented. In this part of the article, the model is used to simulate the solidification of INCONEL 718 superalloy castings. The model was incorporated into a commercial finite-element code, PROCAST. A special procedure called microlatent heat method (MLHM) was used for coupling between macroscopic heat flow and microscopic growth kinetics. A criterion for time-stepping selection in microscopic modeling has been derived in conjunction with MLHM. Reductions in computational (CPU) time up to 90 pct over the classic latent heat method were found by adopting this coupling. Validation of the model was performed against experimental data for an INCONEL 718 superalloy casting. In the present calculations, the model for globulitic dendrite was used. The evolution of fraction of solid calculated with the present model was compared with Scheil’s model and experiments. An important feature in solidification of INCONEL 718 is the detrimental Laves phase. Laves phase content is directly related to the intensity of microsegregation of niobium, which is very sensitive to the evolution of the fraction of solid. It was found that there is a critical cooling rate at which the amount of Laves phase is maximum. The critical cooling rate is not a function of material parameters (diffusivity, partition coefficient, etc.). It depends only on the grain size and solidification time. The predictions generated with the present model are shown to agree very well with experiments.
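
    For reference, the Scheil comparison mentioned above assumes no diffusion in the solid and complete mixing in the liquid. A minimal sketch of the Scheil solid-fraction relation for a linearized binary phase diagram follows; the symbols and the example numbers are generic and are not the INCONEL 718 parameters used in the article.

        import numpy as np

        def scheil_fraction_solid(T, T_m, T_liq, k):
            """Scheil solid fraction at temperature T (valid for T <= T_liq, k < 1).

            Derived from C_L = C_0 * (1 - f_s)**(k - 1) with a linear liquidus:
            f_s = 1 - ((T_m - T) / (T_m - T_liq)) ** (1 / (k - 1))
            T_m   : melting point of the pure solvent
            T_liq : alloy liquidus temperature
            k     : equilibrium partition coefficient
            """
            theta = (T_m - T) / (T_m - T_liq)
            return 1.0 - np.power(theta, 1.0 / (k - 1.0))

        # Example with generic numbers:
        # fs = scheil_fraction_solid(T=1320.0, T_m=1450.0, T_liq=1340.0, k=0.5)  # about 0.29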

  5. Development of a new version of the Liverpool Malaria Model. II. Calibration and validation for West Africa

    PubMed Central

    2011-01-01

    Background: In the first part of this study, an extensive literature survey led to the construction of a new version of the Liverpool Malaria Model (LMM). A new set of parameter settings was provided and a new development of the mathematical formulation of important processes related to the vector population was performed within the LMM. In this part of the study, so far undetermined model parameters are calibrated through the use of data from field studies. The latter are also used to validate the new LMM version, which is furthermore compared against the original LMM version. Methods: For the calibration and validation of the LMM, numerous entomological and parasitological field observations were gathered for West Africa. Continuous and quality-controlled temperature and precipitation time series were constructed using intermittent raw data from 34 weather stations across West Africa. The meteorological time series served as the LMM data input. The skill of LMM simulations was tested for 830 different sets of parameter settings of the undetermined LMM parameters. The model version with the highest skill score in terms of entomological malaria variables was taken as the final setting of the new LMM version. Results: Validation of the new LMM version in West Africa revealed that the simulations compare well with entomological field observations. The new version reproduces realistic transmission rates and simulated malaria seasons are comparable to field observations. Overall the new model version performs much better than the original model. The new model version enables the detection of the epidemic malaria potential at fringes of endemic areas and, more importantly, it is now applicable to the vast area of malaria endemicity in the humid African tropics. Conclusions: A review of entomological and parasitological data from West Africa enabled the construction of a new LMM version. This model version represents a significant step forward in the modelling of a weather

  6. Validation of the European System for Cardiac Operative Risk Evaluation-II model in an urban Indian population and comparison with three other risk scoring systems

    PubMed Central

    Pillai, Biju Sivam; Baloria, Kanwar Aditya; Selot, Nandini

    2015-01-01

    Aims and Objectives: The aims were to compare the European System for Cardiac Operative Risk Evaluation (EuroSCORE)-II system against three established risk scoring systems for predictive accuracy in an urban Indian population and suggest improvements or amendments in the existing scoring system for adaptation in Indian population. Materials and Methods: EuroSCORE-II, Parsonnet score, System-97 score, and Cleveland score were obtained preoperatively for 1098 consecutive patients. EuroSCORE-II system was analyzed in comparison to each of the above three scoring systems in an urban Indian population. Calibrations of scoring systems were assessed using Hosmer–Lemeshow test. Areas under receiver operating characteristics (ROC) curves were compared according to the statistical approach suggested by Hanley and McNeil. Results: All EuroSCORE-II subgroups had highly significant P values indicating good mortality prediction, except for the high-risk group (P = 0.175). The analysis of ROC curves of different scoring systems showed that the highest predictive value for mortality was calculated for the System-97 score followed by the Cleveland score. System-97 revealed extremely high predictive accuracies across all subgroups (curve area >80%). This difference in predictive accuracy was found to be statistically significant (P < 0.001). Conclusions: The present study suggests that the EuroSCORE-II model in its present form is not validated for use in the Indian population. An interesting observation was significantly accurate predictive abilities of the System-97 score. PMID:26139738
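
    A minimal sketch of the discrimination comparison described above, computing the area under the ROC curve for each preoperative score against observed mortality; the column names are hypothetical, and the formal curve-comparison test of Hanley and McNeil is not reproduced here.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def compare_scores(mortality, scores):
            """mortality: 0/1 outcome array; scores: dict mapping system name -> risk scores."""
            return {name: roc_auc_score(mortality, np.asarray(s, float))
                    for name, s in scores.items()}

        # aucs = compare_scores(died, {"EuroSCORE-II": es2, "Parsonnet": par,
        #                              "System-97": s97, "Cleveland": clev})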

  7. Groundwater Model Validation

    SciTech Connect

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation

  8. Comparison of Analytical Models of Propagation of CMEs and its Validation Using Type II Radio Bursts Observations

    NASA Astrophysics Data System (ADS)

    Perez Alanis, C. A.; Aguilar-Rodriguez, E.; Corona Romero, P.

    2015-12-01

    Coronal Mass Ejections (CMEs) are large-scale eruptive events arising from the solar corona that are expelled into the interplanetary (IP) medium. CMEs can drive interplanetary shocks, which in turn are associated with type II radio-burst emissions. Some CMEs carry a magnetic configuration that can generate geomagnetic storms, the main concern in space weather. It is therefore important to predict the arrival times of CMEs that have the potential to generate a geomagnetic storm. We used a number of hydrodynamic (viscous and inertial) drag force models to approximate the trajectory of a CME. We focus on obtaining proportionality constants that achieve good approximations to CME arrivals. We analyzed a set of fast CMEs by finding the appropriate drag coefficients that simultaneously approximate the in-situ arrivals of the events, their associated type II radio bursts, and satellite observations of these phenomena. Our results suggest that quadratic (inertial) drag is the dynamic agent that prevails in fast CME propagation. Our studies may contribute to future space weather forecasting at Earth.
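
    A minimal sketch of the quadratic (inertial) drag kinematics referred to above: the CME relaxes toward the ambient solar-wind speed with an acceleration proportional to the squared speed difference. The integration scheme and all parameter values are placeholders, not the coefficients fitted in the study.

        import numpy as np

        def propagate_cme(r0_km, v0_kms, w_kms, gamma_per_km, t_end_s, dt_s=600.0):
            """Integrate dv/dt = -gamma * (v - w) * |v - w| with simple Euler steps.

            r0_km        : initial heliocentric distance (km)
            v0_kms       : initial CME speed (km/s)
            w_kms        : ambient solar-wind speed (km/s)
            gamma_per_km : drag parameter (1/km), placeholder magnitude ~1e-8 to 1e-7
            Returns arrays of time (s), distance (km), and speed (km/s).
            """
            n = int(t_end_s / dt_s)
            t = np.arange(n + 1) * dt_s
            r = np.empty(n + 1); v = np.empty(n + 1)
            r[0], v[0] = r0_km, v0_kms
            for i in range(n):
                a = -gamma_per_km * (v[i] - w_kms) * abs(v[i] - w_kms)   # km/s^2
                v[i + 1] = v[i] + a * dt_s
                r[i + 1] = r[i] + v[i] * dt_s
            return t, r, v

        # t, r, v = propagate_cme(r0_km=2.1e7, v0_kms=1500.0, w_kms=400.0,
        #                         gamma_per_km=2e-8, t_end_s=3 * 24 * 3600)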

  9. Validation of SAGE II NO2 measurements

    NASA Technical Reports Server (NTRS)

    Cunnold, D. M.; Zawodny, J. M.; Chu, W. P.; Mccormick, M. P.; Pommereau, J. P.; Goutail, F.

    1991-01-01

    The validity of NO2 measurements from the stratospheric aerosol and gas experiment (SAGE) II is examined by comparing the data with climatological distributions of NO2 and by examining the consistency of the observations themselves. The precision at high altitudes is found to be 5 percent, which is also the case at specific low altitudes for certain latitudes where the mixing ratio is 4 ppbv, and the precision is 0.2 ppbv at low altitudes. The autocorrelation distance of the smoothed profile measurement noise is 3-5 km and 10 km for 1-km and 5-km smoothing, respectively. The SAGE II measurements agree with spectroscopic measurements to within 10 percent, and the SAGE measurements are about 20 percent smaller than average limb monitor measurements at the mixing ratio peak. SAGE I and SAGE II measurements are slightly different, but the difference is not attributed to changes in atmospheric NO2.

  10. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitude of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.

  11. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part II: Benchmark comparisons of PUMA core parameters with MCNP5 and improvements due to a simple cell heterogeneity correction

    SciTech Connect

    Grant, C.; Mollerach, R.; Leszczynski, F.; Serra, O.; Marconi, J.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which has been progressing slowly during the last ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design located in Argentina. It has a pressure vessel design with 451 vertical coolant channels and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons of core parameters of a slightly idealized model of the Atucha-I core obtained with the PUMA reactor code with MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, more symmetric than Atucha-II, and has some experimental data available. To validate the new models benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which improves significantly the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)

  12. Data Assimilation of Photosynthetic Light-use Efficiency using Multi-angular Satellite Data: II Model Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Hilker, Thomas; Hall, Forest G.; Tucker, J.; Coops, Nicholas C.; Black, T. Andrew; Nichol, Caroline J.; Sellers, Piers J.; Barr, Alan; Hollinger, David Y.; Munger, J. W.

    2012-01-01

    Spatially explicit and temporally continuous estimates of photosynthesis will be of great importance for increasing our understanding of and ultimately closing the terrestrial carbon cycle. Current capabilities to model photosynthesis, however, are limited by the difficulty of representing with sufficient accuracy the complexity of the underlying biochemical processes and the numerous environmental constraints imposed upon plant primary production. A potentially powerful alternative to model photosynthesis through these indirect observations is the use of multi-angular satellite data to infer light-use efficiency (e) directly from spectral reflectance properties in connection with canopy shadow fractions. Hall et al. (this issue) introduced a new approach for predicting gross ecosystem production that would allow the use of such observations in a data assimilation mode to obtain spatially explicit variations in e from infrequent polar-orbiting satellite observations, while meteorological data are used to account for the more dynamic responses of e to variations in environmental conditions caused by changes in weather and illumination. In this second part of the study we implement and validate the approach of Hall et al. (this issue) across an ecologically diverse array of eight flux-tower sites in North America using data acquired from the Compact High Resolution Imaging Spectroradiometer (CHRIS) and eddy-flux observations. Our results show significantly enhanced estimates of e and therefore cumulative gross ecosystem production (GEP) over the course of one year at all examined sites. We also demonstrate that e is greatly heterogeneous even across small study areas. Data assimilation and direct inference of GEP from space using a new, proposed sensor could therefore be a significant step towards closing the terrestrial carbon cycle.

  13. Applied model validation

    NASA Astrophysics Data System (ADS)

    Davies, A. D.

    1985-07-01

    The NBS Center for Fire Research (CFR) conducts scientific research bearing on the fire safety of buildings, vehicles, tunnels and other inhabited structures. Data from controlled fire experiments are collected, analyzed and reduced to the analytical formulas that appear to underlie the observed phenomena. These results and more general physical principles are then combined into models to predict the development of environments that may be hostile to humans. This is a progress report of an applied model validation case study. The subject model is Transport of Fire, Smoke and Gases (FAST). Products from a fire in a burn room exit through a connected corridor to outdoors. Cooler counterflow air in a lower layer feeds the fire. The model predicts corridor layer temperatures and thicknesses vs. time, given enclosure, fire and ambient specifications. Data have been collected from 38 tests using several fire sizes, but have not been reduced. Corresponding model results, and model and test documentation are yet to come. Considerable modeling and calculation is needed to convert instrument readings to test results comparable with model outputs so that residual differences may be determined.

  14. Validation Studies for the Diet History Questionnaire II

    Cancer.gov

    Data show that the DHQ I instrument provides reasonable nutrient estimates, and three studies were conducted to assess its validity/calibration. There have been no such validation studies with the DHQ II.

  15. Ab initio structural modeling of and experimental validation for Chlamydia trachomatis protein CT296 reveal structural similarity to Fe(II) 2-oxoglutarate-dependent enzymes

    SciTech Connect

    Kemege, Kyle E.; Hickey, John M.; Lovell, Scott; Battaile, Kevin P.; Zhang, Yang; Hefty, P. Scott

    2012-02-13

    Chlamydia trachomatis is a medically important pathogen that encodes a relatively high percentage of proteins with unknown function. The three-dimensional structure of a protein can be very informative regarding the protein's functional characteristics; however, determining protein structures experimentally can be very challenging. Computational methods that model protein structures with sufficient accuracy to facilitate functional studies have had notable successes. To evaluate the accuracy and potential impact of computational protein structure modeling of hypothetical proteins encoded by Chlamydia, a successful computational method termed I-TASSER was utilized to model the three-dimensional structure of a hypothetical protein encoded by open reading frame (ORF) CT296. CT296 has been reported to exhibit functional properties of a divalent cation transcription repressor (DcrA), with similarity to the Escherichia coli iron-responsive transcriptional repressor, Fur. Unexpectedly, the I-TASSER model of CT296 exhibited no structural similarity to any DNA-interacting proteins or motifs. To validate the I-TASSER-generated model, the structure of CT296 was solved experimentally using X-ray crystallography. Impressively, the ab initio I-TASSER-generated model closely matched (2.72-Å Cα root mean square deviation [RMSD]) the high-resolution (1.8-Å) crystal structure of CT296. Modeled and experimentally determined structures of CT296 share structural characteristics of non-heme Fe(II) 2-oxoglutarate-dependent enzymes, although key enzymatic residues are not conserved, suggesting a unique biochemical process is likely associated with CT296 function. Additionally, functional analyses did not support prior reports that CT296 has properties shared with divalent cation repressors such as Fur.

  16. Ecological reality and model validation

    SciTech Connect

    Cale, Jr, W. G.; Shugart, H. H.

    1980-01-01

    Definitions of model realism and model validation are developed. Ecological and mathematical arguments are then presented to show that model equations which explicitly treat ecosystem processes can be systematically improved such that greater realism is attained and the condition of validity is approached. Several examples are presented.

  17. A musculoskeletal model of the equine forelimb for determining surface stresses and strains in the humerus-part II. Experimental testing and model validation.

    PubMed

    Pollock, Sarah; Stover, Susan M; Hull, M L; Galuppo, Larry D

    2008-08-01

    The first objective of this study was to experimentally determine surface bone strain magnitudes and directions at the donor site for bone grafts, the site predisposed to stress fracture, the medial and cranial aspects of the transverse cross section corresponding to the stress fracture site, and the middle of the diaphysis of the humerus of a simplified in vitro laboratory preparation. The second objective was to determine whether computing strains solely in the direction of the longitudinal axis of the humerus in the mathematical model was inherently limited by comparing the strains measured along the longitudinal axis of the bone to the principal strain magnitudes and directions. The final objective was to determine whether the mathematical model formulated in Part I [Pollock et al., 2008, ASME J. Biomech. Eng., 130, p. 041006] is valid for determining the bone surface strains at the various locations on the humerus where experimentally measured longitudinal strains are comparable to principal strains. Triple rosette strain gauges were applied at four locations circumferentially on each of two cross sections of interest using a simplified in vitro laboratory preparation. The muscles included the biceps brachii muscle in addition to loaded shoulder muscles that were predicted active by the mathematical model. Strains from the middle grid of each rosette, aligned along the longitudinal axis of the humerus, were compared with calculated principal strain magnitudes and directions. The results indicated that calculating strains solely in the direction of the longitudinal axis is appropriate at six of eight locations. At the cranial and medial aspects of the middle of the diaphysis, the average minimum principal strain was not comparable to the average experimental longitudinal strain. Further analysis at the remaining six locations indicated that the mathematical model formulated in Part I predicts strains within +/-2 standard deviations of experimental strains at
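
    For reference, the principal strains compared with the longitudinal (middle-grid) strains above can be computed from the three grids of a rectangular rosette. The sketch below uses the standard 0°/45°/90° rosette reduction; the gauge layout is assumed, not taken from the paper.

        import numpy as np

        def rosette_principal_strains(e_a, e_b, e_c):
            """Principal strains and angle from a 0/45/90-degree rectangular rosette.

            e_a, e_b, e_c : strains from the 0, 45 and 90 degree grids
            Returns (e_max, e_min, theta), theta being the angle (radians) from grid A
            to the maximum principal strain direction.
            """
            mean = 0.5 * (e_a + e_c)
            radius = np.sqrt(((e_a - e_c) / 2.0) ** 2 + ((2.0 * e_b - e_a - e_c) / 2.0) ** 2)
            theta = 0.5 * np.arctan2(2.0 * e_b - e_a - e_c, e_a - e_c)
            return mean + radius, mean - radius, theta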

  18. Validation of SAGE II ozone measurements

    NASA Technical Reports Server (NTRS)

    Cunnold, D. M.; Chu, W. P.; Mccormick, M. P.; Veiga, R. E.; Barnes, R. A.

    1989-01-01

    Five ozone profiles from the Stratospheric Aerosol and Gas Experiment (SAGE) II are compared with coincident ozonesonde measurements obtained at Natal, Brazil, and Wallops Island, Virginia. It is shown that the mean difference between all of the measurements is about 1 percent and that the agreement is within 7 percent at altitudes between 20 and 53 km. Good agreement is also found for ozone mixing ratios on pressure surfaces. It is concluded that the SAGE II profiles provide useful ozone information up to about 60 km altitude.

  19. SAGE II aerosol validation - Selected altitude measurements, including particle micromeasurements

    NASA Technical Reports Server (NTRS)

    Oberbeck, Verne R.; Russell, Philip B.; Pueschel, Rudolf F.; Snetsinger, Kenneth G.; Ferry, Guy V.; Livingston, John M.; Rosen, James N.; Osborn, Mary T.; Kritz, Mark A.

    1989-01-01

    The validity of particulate extinction coefficients derived from limb path solar radiance measurements obtained during the Stratospheric Aerosol and Gas Experiment (SAGE) II is tested. The SAGE II measurements are compared with correlative aerosol measurements taken during January 1985, August 1985, and July 1986 with impactors, laser spectrometers, and filter samplers on a U-2 aircraft, an upward pointing lidar on a P-3 aircraft, and balloon-borne optical particle counters. The data for July 29, 1986 are discussed in detail. The aerosol measurements taken on this day at an altitude of 20.5 km produce particulate extinction values which validate the SAGE II values for similar wavelengths.

  20. Validation of mesoscale models

    NASA Technical Reports Server (NTRS)

    Kuo, Bill; Warner, Tom; Benjamin, Stan; Koch, Steve; Staniforth, Andrew

    1993-01-01

    The topics discussed include the following: verification of cloud prediction from the PSU/NCAR mesoscale model; results from MAPS/NGM verification comparisons and MAPS observation sensitivity tests to ACARS and profiler data; systematic errors and mesoscale verification for a mesoscale model; and the COMPARE Project and the CME.

  1. ON PREDICTION AND MODEL VALIDATION

    SciTech Connect

    M. MCKAY; R. BECKMAN; K. CAMPBELL

    2001-02-01

    Quantification of prediction uncertainty is an important consideration when using mathematical models of physical systems. This paper proposes a way to incorporate "validation data" in a methodology for quantifying uncertainty of the mathematical predictions. The report outlines a theoretical framework.

  2. Cross-Validation of the JSORRAT-II in Iowa.

    PubMed

    Ralston, Christopher A; Epperson, Douglas L; Edwards, Sarah R

    2016-09-01

    The predictive validity of the Juvenile Sexual Offense Recidivism Risk Assessment Tool-II (JSORRAT-II) was evaluated using an exhaustive sample of 11- to 17-year-old male juveniles who offended sexually (JSOs) between 2000 and 2006 in Iowa (n = 529). The validity of the tool in predicting juvenile sexual recidivism was significant (area under the receiver operating characteristic curve [AUC] = .70, 99% confidence interval [CI] = [.60, .81], d = 0.70). Non-significant predictive validity coefficients were observed for the prediction of non-sexual forms of recidivism. Additional analyses were undertaken to test hypotheses about the tool's performance with various subsamples. The age of the JSO at the time of the index sexual offense and time at risk outside secure facility placements interacted significantly with JSORRAT-II scores to predict juvenile sexual recidivism. The implications of these findings for practice and research on the validation of risk assessment tools are discussed. PMID:25179400

  3. TUTORIAL: Validating biorobotic models

    NASA Astrophysics Data System (ADS)

    Webb, Barbara

    2006-09-01

    Some issues in neuroscience can be addressed by building robot models of biological sensorimotor systems. What we can conclude from building models or simulations, however, is determined by a number of factors in addition to the central hypothesis we intend to test. These include the way in which the hypothesis is represented and implemented in simulation, how the simulation output is interpreted, how it is compared to the behaviour of the biological system, and the conditions under which it is tested. These issues will be illustrated by discussing a series of robot models of cricket phonotaxis behaviour.

  4. Partial Validation of Multibody Program to Optimize Simulated Trajectories II (POST II) Parachute Simulation With Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Ben; Queen, Eric M.

    2002-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed. This capability uses the Program to Optimize Simulated Trajectories II (POST II). Previously, POST II had the ability to simulate multiple bodies without interacting forces. The current implementation is used for the simulation of parachute trajectories, in which the parachute and suspended bodies can be treated as rigid bodies. An arbitrary set of connecting lines can be included in the model and are treated as massless spring-dampers. This paper discusses details of the connection line modeling and results of several test cases used to validate the capability.
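
    A minimal sketch of a massless spring-damper connecting-line force of the kind described above: tension acts along the line between the two attachment points and only when the line is taut. The stiffness and damping values, and the clamping of negative tension, are assumptions of this sketch, not POST II inputs.

        import numpy as np

        def line_force(p1, v1, p2, v2, rest_length, k, c):
            """Force on body 1 from a massless spring-damper line attached at p1 and p2.

            p1, v1 : attachment position and velocity on body 1 (3-vectors)
            p2, v2 : attachment position and velocity on body 2 (3-vectors)
            k, c   : spring stiffness and damping coefficient
            A line cannot push, so the force is zero when slack.
            """
            d = np.asarray(p2, float) - np.asarray(p1, float)
            length = np.linalg.norm(d)
            if length <= rest_length or length == 0.0:
                return np.zeros(3)
            unit = d / length
            stretch_rate = np.dot(np.asarray(v2, float) - np.asarray(v1, float), unit)
            tension = k * (length - rest_length) + c * stretch_rate
            return max(tension, 0.0) * unit   # pull body 1 toward body 2 when taut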

  5. A dynamic model of some malaria-transmitting anopheline mosquitoes of the Afrotropical region. II. Validation of species distribution and seasonal variations

    PubMed Central

    2013-01-01

    Background: The first part of this study aimed to develop a model for Anopheles gambiae s.l. with separate parametrization schemes for Anopheles gambiae s.s. and Anopheles arabiensis. The characterizations were constructed based on literature from the past decades. This part of the study focuses on the model’s ability to separate the mean state of the two species of the An. gambiae complex in Africa. The model is also evaluated with respect to capturing the temporal variability of An. arabiensis in Ethiopia. Before conclusions and guidance based on models can be made, models need to be validated. Methods: The model used in this paper is described in part one (Malaria Journal 2013, 12:28). For the validation of the model, a database of 5,935 points on the presence of An. gambiae s.s. and An. arabiensis was constructed. An additional 992 points were collected on the presence of An. gambiae s.l. These data were used to assess if the model could recreate the spatial distribution of the two species. The dataset is made available in the public domain. This is followed by a case study from Madagascar where the model’s ability to recreate the relative fraction of each species is investigated. In the last section the model’s ability to reproduce the temporal variability of An. arabiensis in Ethiopia is tested. The model was compared with data from four papers, and one field survey covering two years. Results: Overall, the model has a realistic representation of seasonal and year to year variability in mosquito densities in Ethiopia. The model is also able to describe the distribution of An. gambiae s.s. and An. arabiensis in sub-Saharan Africa. This implies this model can be used for seasonal and long term predictions of changes in the burden of malaria. Before models can be used to improve human health, or guide which interventions are to be applied where, there is a need to understand the system of interest. Validation is an important part of this process. It is

  6. Benchmark Energetic Data in a Model System for Grubbs II Metathesis Catalysis and Their Use for the Development, Assessment, and Validation of Electronic Structure Methods

    SciTech Connect

    Zhao, Yan; Truhlar, Donald G.

    2009-01-31

    We present benchmark relative energetics in the catalytic cycle of a model system for Grubbs second-generation olefin metathesis catalysts. The benchmark data were determined by a composite approach based on CCSD(T) calculations, and they were used as a training set to develop a new spin-component-scaled MP2 method optimized for catalysis, which is called SCSC-MP2. The SCSC-MP2 method has improved performance for modeling Grubbs II olefin metathesis catalysts as compared to canonical MP2 or SCS-MP2. We also employed the benchmark data to test 17 WFT methods and 39 density functionals. Among the tested density functionals, M06 is the best performing functional. M06/TZQS gives an MUE of only 1.06 kcal/mol, and it is a much more affordable method than the SCSC-MP2 method or any other correlated WFT methods. The best performing meta-GGA is M06-L, and M06-L/DZQ gives an MUE of 1.77 kcal/mol. PBEh is the best performing hybrid GGA, with an MUE of 3.01 kcal/mol; however, it does not perform well for the larger, real Grubbs II catalyst. B3LYP and many other functionals containing the LYP correlation functional perform poorly, and B3LYP underestimates the stability of stationary points for the cis-pathway of the model system by a large margin. From the assessments, we recommend the M06, M06-L, and MPW1B95 functionals for modeling Grubbs II olefin metathesis catalysts. The local M06-L method is especially efficient for calculations on large systems.
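
    The figure of merit quoted throughout the abstract is the mean unsigned error (MUE) against the benchmark energetics. A minimal sketch of how such a ranking could be tabulated follows; the method names are real, but the energies are placeholder values, not the paper's data.

```python
import numpy as np

# Hypothetical relative energies (kcal/mol) for a few stationary points
benchmark = np.array([0.0, -3.2, 5.1, 1.8])          # composite CCSD(T)-based reference
methods = {
    "M06":   np.array([0.0, -2.5, 5.9, 2.4]),
    "B3LYP": np.array([0.0, -7.8, 9.3, 5.0]),
}

def mean_unsigned_error(pred, ref):
    """Mean absolute deviation from the benchmark values."""
    return float(np.mean(np.abs(pred - ref)))

# Rank the methods by MUE, best first
for name, energies in sorted(methods.items(),
                             key=lambda kv: mean_unsigned_error(kv[1], benchmark)):
    print(f"{name:6s}  MUE = {mean_unsigned_error(energies, benchmark):.2f} kcal/mol")
```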

  7. Uncertainty Modeling Via Frequency Domain Model Validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Andrisani, Dominick, II

    1999-01-01

    The majority of literature on robust control assumes that a design model is available and that the uncertainty model bounds the actual variations about the nominal model. However, methods for generating accurate design models have not received as much attention in the literature. The influence of the level of accuracy of the uncertainty model on closed loop performance has received even less attention. The research reported herein is an initial step in applying and extending the concept of model validation to the problem of obtaining practical uncertainty models for robust control analysis and design applications. An extension of model validation called 'sequential validation' is presented and applied to a simple spring-mass-damper system to establish the feasibility of the approach and demonstrate the benefits of the new developments.

  8. Statistical validation of system models

    SciTech Connect

    Barney, P.; Ferregut, C.; Perez, L.E.; Hunter, N.F.; Paez, T.L.

    1997-01-01

    It is common practice in system analysis to develop mathematical models for system behavior. Frequently, the actual system being modeled is also available for testing and observation, and sometimes the test data are used to help identify the parameters of the mathematical model. However, no general-purpose technique exists for formally, statistically judging the quality of a model. This paper suggests a formal statistical procedure for the validation of mathematical models of systems when data taken during operation of the system are available. The statistical validation procedure is based on the bootstrap, and it seeks to build a framework where a statistical test of hypothesis can be run to determine whether or not a mathematical model is an acceptable model of a system with regard to user-specified measures of system behavior. The approach to model validation developed in this study uses experimental data to estimate the marginal and joint confidence intervals of statistics of interest of the system. These same measures of behavior are estimated for the mathematical model. The statistics of interest from the mathematical model are located relative to the confidence intervals for the statistics obtained from the experimental data. These relative locations are used to judge the accuracy of the mathematical model. An extension of the technique is also suggested, wherein randomness may be included in the mathematical model through the introduction of random variable and random process terms. These terms cause random system behavior that can be compared to the randomness in the bootstrap evaluation of experimental system behavior. In this framework, the stochastic mathematical model can be evaluated. A numerical example is presented to demonstrate the application of the technique.
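
    The core of the procedure, bootstrap confidence intervals on statistics of the measured response against which the same statistics of the model are located, can be sketched in a few lines. The data and the choice of an RMS statistic below are illustrative; the paper's framework is more general.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measured response and model-predicted response (e.g., accelerations)
measured = rng.normal(loc=0.0, scale=1.0, size=500)
model    = rng.normal(loc=0.0, scale=1.1, size=500)

def statistic(x):
    return np.sqrt(np.mean(x ** 2))        # RMS level as the behaviour measure

# Bootstrap the statistic of the experimental data
boot = np.array([statistic(rng.choice(measured, size=measured.size, replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% confidence interval of the data statistic

# Locate the model's statistic relative to the experimental confidence interval
model_stat = statistic(model)
verdict = "inside" if lo <= model_stat <= hi else "outside"
print(f"model RMS = {model_stat:.3f}, 95% CI of data RMS = [{lo:.3f}, {hi:.3f}] -> {verdict}")
```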

  9. Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2014-01-01

    Computational fluid dynamics (CFD) software that solves the Reynolds-averaged Navier-Stokes (RANS) equations has been in routine use for more than a quarter of a century. It is currently employed not only for basic research in fluid dynamics, but also for the analysis and design processes in many industries worldwide, including aerospace, automotive, power generation, chemical manufacturing, polymer processing, and petroleum exploration. A key feature of RANS CFD is the turbulence model. Because the RANS equations are unclosed, a model is necessary to describe the effects of the turbulence on the mean flow, through the Reynolds stress terms. The turbulence model is one of the largest sources of uncertainty in RANS CFD, and most models are known to be flawed in one way or another. Alternative methods such as direct numerical simulations (DNS) and large eddy simulations (LES) rely less on modeling and hence include more physics than RANS. In DNS all turbulent scales are resolved, and in LES the large scales are resolved and the effects of the smallest turbulence scales are modeled. However, both DNS and LES are too expensive for most routine industrial usage on today's computers. Hybrid RANS-LES, which blends RANS near walls with LES away from walls, helps to moderate the cost while still retaining some of the scale-resolving capability of LES, but for some applications it can still be too expensive. Even considering its associated uncertainties, RANS turbulence modeling has proved to be very useful for a wide variety of applications. For example, in the aerospace field, many RANS models are considered to be reliable for computing attached flows. However, existing turbulence models are known to be inaccurate for many flows involving separation. Research has been ongoing for decades in an attempt to improve turbulence models for separated and other nonequilibrium flows. When developing or improving turbulence models, both verification and validation are important

  10. MODEL VALIDATION REPORT FOR THE HOUSATONIC RIVER

    EPA Science Inventory

    The Model Validation Report will present a comparison of model validation runs to existing data for the model validation period. The validation period spans a twenty year time span to test the predictive capability of the model over a longer time period, similar to that which wil...

  11. Validity and Reliability of Health Promoting Lifestyle Profile II in the Iranian Elderly

    PubMed Central

    Tanjani, Parisa Taheri; Azadbakht, Mojtaba; Garmaroudi, Gholamreza; Sahaf, Robab; Fekrizadeh, Zohreh

    2016-01-01

    Background: With increasing age, the prevalence of chronic diseases increases. Since health-promoting behaviors (HPB) are considered a basic way of preventing diseases, especially chronic diseases, it is important to assess HPB. This study examines the validity and reliability of the Health Promoting Lifestyle Profile II (HPLP-II). Methods: This is a cross-sectional study conducted on 502 elderly individuals aged 60 and over in Tehran, Iran. To determine validity, content and construct validity were assessed. The content validity index (CVI) was used to assess content validity; to assess construct validity, confirmatory factor analysis (CFA) and item-total correlations were employed. For reliability, test-retest analysis was used, and the internal consistency of the HPLP-II was confirmed by Cronbach's alpha. For data analysis, SPSS-18 and Amos-7 software were used. Results: The mean age of the subjects was 66.3 ± 5.3 years. The CVI for the revised HPLP-II and all its subscales was higher than 0.82. The CFA confirmed a six-factor model aligned with the original HPLP-II. Pearson correlation coefficients between the revised HPLP-II and its items were in the range of 0.27–0.65. Cronbach's alpha of the revised HPLP-II was 0.78, and alphas for its subscales were in the range of 0.67–0.84. The intraclass correlation coefficient was 0.79 (95% confidence interval: 0.59–0.86, P < 0.001). Conclusions: The Iranian HPLP-II scale is an appropriate tool for assessing HPBs of the Iranian elderly. PMID:27280010
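
    Two of the reliability statistics reported, Cronbach's alpha and corrected item-total correlations, are straightforward to reproduce for any Likert-type scale. The sketch below uses synthetic item scores, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic responses: 200 respondents x 6 items on a 1-4 Likert scale
latent = rng.normal(size=(200, 1))
items = np.clip(np.round(2.5 + latent + 0.8 * rng.normal(size=(200, 6))), 1, 4)

def cronbach_alpha(scores):
    """Internal consistency: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

def corrected_item_total(scores):
    # Correlation of each item with the sum of the remaining items
    return [np.corrcoef(scores[:, j], scores.sum(axis=1) - scores[:, j])[0, 1]
            for j in range(scores.shape[1])]

print(f"alpha = {cronbach_alpha(items):.2f}")
print("item-total r:", [f"{r:.2f}" for r in corrected_item_total(items)])
```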

  12. ADAPT model: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents an overview of the Agricultural Drainage and Pesticide Transport (ADAPT) model and a case study to illustrate the calibration and validation steps for predicting subsurface tile drainage and nitrate-N losses from an agricultural system. The ADAPT model is a daily time step field ...

  13. (Validity of environmental transfer models)

    SciTech Connect

    Blaylock, B.G.; Hoffman, F.O.; Gardner, R.H.

    1990-11-07

    BIOMOVS (BIOspheric MOdel Validation Study) is an international cooperative study initiated in 1985 by the Swedish National Institute of Radiation Protection to test models designed to calculate the environmental transfer and bioaccumulation of radionuclides and other trace substances. The objective of the symposium and workshop was to synthesize results obtained during Phase 1 of BIOMOVS (the first five years of the study) and to suggest new directions that might be pursued during Phase 2 of BIOMOVS. The travelers were an instrumental part of the development of BIOMOVS. This symposium allowed the travelers to present a review of past efforts at model validation and a synthesis of current activities and to refine ideas concerning future development of models and data for assessing the fate, effect, and human risks of environmental contaminants. R. H. Gardner also visited the Free University, Amsterdam, and the National Institute of Public Health and Environmental Protection (RIVM) in Bilthoven to confer with scientists about current research in theoretical ecology and the use of models for estimating the transport and effect of environmental contaminants and to learn about the European efforts to map critical loads of acid deposition.

  14. Proposed Modifications to the Conceptual Model of Coaching Efficacy and Additional Validity Evidence for the Coaching Efficacy Scale II-High School Teams

    ERIC Educational Resources Information Center

    Myers, Nicholas; Feltz, Deborah; Chase, Melissa

    2011-01-01

    The purpose of this study was to determine whether theoretically relevant sources of coaching efficacy could predict the measures derived from the Coaching Efficacy Scale II-High School Teams (CES II-HST). Data were collected from head coaches of high school teams in the United States (N = 799). The analytic framework was a multiple-group…

  15. Atlas II and IIA analyses and environments validation

    NASA Astrophysics Data System (ADS)

    Martin, Richard E.

    1995-06-01

    General Dynamics has now flown all four versions of the Atlas commercial launch vehicle, which cover a payload weight capability to geosynchronous transfer orbit (GTO) in the range of 5000-8000 lb. The key analyses to set design and environmental test parameters for the vehicle modifications and the ground and flight test data that validated them were prepared in paper IAF-91-170 for the first version, Atlas I. This paper presents similar data for the next two versions, Atlas II and IIA. The Atlas II has propellant tanks lengthened by 12 ft and is boosted by MA-5A rocket engines uprated to 474,000 lb liftoff thrust. GTO payload capability is 6225 lb with the 11-ft fairing. The Atlas IIA is an Atlas II with uprated RL10A-4 engines on the lengthened Centaur II upper stage. The two 20,800 lb thrust, 449 s specific impulse engines with an optional extendible nozzle increase payload capability to GTO to 6635 lb. The paper describes design parameters and validated test results for many other improvements that have generally provided greater capability at less cost, weight and complexity and better reliability. Those described include: moving the MA-5A start system to the ground, replacing the vernier engines with a simple 50 lb thrust on-off hydrazine roll control system, addition of a POGO suppressor, replacement of Centaur jettisonable insulation panels with fixed foam, a new inertial navigation unit (INU) that combines in one package a ring-laser gyro based strapdown guidance system with two MIL-STD-1750A processors, redundant MIL-STD-1553 data bus interfaces, robust Ada-based software and a new Al-Li payload adapter. Payload environment is shown to be essentially unchanged from previous Atlas vehicles. Validation of load, stability, control and pressurization requirements for the larger vehicle is discussed. All flights to date (five Atlas II, one Atlas IIA) have been successful in launching satellites for EUTELSAT, the U.S. Air Force and INTELSAT. Significant design

  16. Validation of Magnetospheric Magnetohydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Curtis, Brian

    Magnetospheric magnetohydrodynamic (MHD) models are commonly used for both prediction and modeling of Earth's magnetosphere. To date, very little validation has been performed to determine their limits, uncertainties, and differences. In this work, we performed a comprehensive analysis, applying several validation techniques commonly used in the atmospheric sciences to MHD-based models of Earth's magnetosphere for the first time. The validation techniques of parameter variability/sensitivity analysis and comparison to other models were used on the OpenGGCM, BATS-R-US, and SWMF magnetospheric MHD models to answer several questions about how these models compare. The questions include: (1) the difference between the models' predictions prior to and following a reversal of Bz in the upstream interplanetary magnetic field (IMF) from positive to negative, (2) the influence of the preconditioning duration, and (3) the differences between models under extreme solar wind conditions. A differencing visualization tool was developed and used to address these three questions. We find: (1) For a reversal in IMF Bz from positive to negative, the OpenGGCM magnetopause is closest to Earth as it has the weakest magnetic pressure near-Earth. The differences in magnetopause positions between BATS-R-US and SWMF are explained by the influence of the ring current, which is included in SWMF. Densities are highest for SWMF and lowest for OpenGGCM. The OpenGGCM tail currents differ significantly from BATS-R-US and SWMF; (2) A longer preconditioning time allowed the magnetosphere to relax more, giving different positions for the magnetopause with all three models before the IMF Bz reversal. There were differences greater than 100% for all three models before the IMF Bz reversal. The differences in the current sheet region for the OpenGGCM were small after the IMF Bz reversal. The BATS-R-US and SWMF differences decreased after the IMF Bz reversal to near zero; (3) For extreme conditions in the solar

  17. Assimilation of Geosat Altimetric Data in a Nonlinear Shallow-Water Model of the Indian Ocean by Adjoint Approach. Part II: Some Validation and Interpretation of the Assimilated Results

    NASA Technical Reports Server (NTRS)

    Greiner, Eric; Perigaud, Claire

    1996-01-01

    This paper examines the results of assimilating Geosat sea level variations relative to the November 1986-November 1988 mean reference, in a nonlinear reduced-gravity model of the Indian Ocean. Data have been assimilated during one year starting in November 1986 with the objective of optimizing the initial conditions and the yearly averaged reference surface. The thermocline slope simulated by the model with or without assimilation is validated by comparison with the signal, which can be derived from expendable bathythermograph measurements performed in the Indian Ocean at that time. The topography simulated with assimilation on November 1986 is in very good agreement with the hydrographic data. The slopes corresponding to the South Equatorial Current and to the South Equatorial Countercurrent are better reproduced with assimilation than without during the first nine months. The whole circulation of the cyclonic gyre south of the equator is then strongly intensified by assimilation. Another assimilation experiment is run over the following year starting in November 1987. The difference between the two yearly mean surfaces simulated with assimilation is in excellent agreement with Geosat. In the southeastern Indian Ocean, the correction to the yearly mean dynamic topography due to assimilation over the second year is negatively correlated to the one the year before. This correction is also in agreement with hydrographic data. It is likely that the signal corrected by assimilation is not only due to wind error, because simulations driven by various wind forcings present the same features over the two years. Model simulations run with a prescribed throughflow transport anomaly indicate that assimilation mainly corrects, in the interior of the model domain, for inadequate boundary conditions with the Pacific.

  18. SRVAL. Stock-Recruitment Model VALidation Code

    SciTech Connect

    Christensen, S.W.

    1989-12-07

    SRVAL is a computer simulation model of the Hudson River striped bass population. It was designed to aid in assessing the validity of curve-fits of the linearized Ricker stock-recruitment model, modified to incorporate multiple-age spawners and to include an environmental variable, to variously processed annual catch-per-unit-effort (CPUE) statistics for a fish population. It is sometimes asserted that curve-fits of this kind can be used to determine the sensitivity of fish populations to such man-induced stresses as entrainment and impingement at power plants. SRVAL was developed to test such assertions and was utilized in testimony written in connection with the Hudson River Power Case (U. S. Environmental Protection Agency, Region II).
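
    The linearized Ricker fit at the center of the abstract reduces to an ordinary least-squares regression of log recruits-per-spawner on spawner abundance and, in the modified form, an environmental covariate. A minimal sketch on synthetic data (illustrative values only, not Hudson River CPUE statistics):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic spawner index S, environmental variable E, and recruit index R
S = rng.uniform(50, 500, size=30)
E = rng.normal(size=30)
true_a, true_b, true_c = 4.0, 0.004, 0.3
R = true_a * S * np.exp(-true_b * S + true_c * E + 0.2 * rng.normal(size=30))

# Linearized Ricker model with an environmental term: ln(R/S) = ln(a) - b*S + c*E
y = np.log(R / S)
X = np.column_stack([np.ones_like(S), S, E])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
ln_a, neg_b, c = coef            # second coefficient estimates -b
print(f"a = {np.exp(ln_a):.2f}, b = {-neg_b:.4f}, c = {c:.2f}")
```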

  19. A statistical-dynamical scheme for reconstructing ocean forcing in the Atlantic. Part II: methodology, validation and application to high-resolution ocean models

    NASA Astrophysics Data System (ADS)

    Minvielle, Marie; Cassou, Christophe; Bourdallé-Badie, Romain; Terray, Laurent; Najac, Julien

    2011-02-01

    A novel statistical-dynamical scheme has been developed to reconstruct the sea surface atmospheric variables necessary to force an ocean model. Multiple linear regressions are first built, over a so-called learning period and over the entire Atlantic basin, from the observed relationship between the surface wind conditions, or predictands, and the anomalous large-scale atmospheric circulations, or predictors. The latter are estimated in the extratropics by 500 hPa geopotential height weather regimes and in the tropics by low-level wind classes. The transfer function, further combined with an analog step, is then used to reconstruct all the surface variable fields over 1958-2002. We show that the proposed hybrid scheme is very skillful in reproducing the mean state, the seasonal cycle and the temporal evolution of all the surface ocean variables at interannual timescale. Deficiencies are found in the level of variance, especially in the tropics. It is underestimated for 2-m temperature and humidity as well as for surface radiative fluxes in the interannual frequency band, while it is slightly overestimated at higher frequency. Decomposition into empirical orthogonal functions (EOF) shows that the spatial and temporal coherence of the forcing fields is however very well captured by the reconstruction method. For dynamical downscaling purposes, reconstructed fields are then interpolated and used to carry out a high-resolution oceanic simulation using the NATL4 (1/4°) model integrated over 1979-2001. This simulation is compared to a reference experiment where the original observed forcing fields are prescribed instead. Mean states between the two experiments are virtually indistinguishable both in terms of surface fluxes and ocean dynamics estimated by the barotropic and the meridional overturning streamfunctions. The 3-dimensional variance of the simulated ocean is well preserved at interannual timescale both for temperature and salinity, except in the tropics where it is

  20. Developing better and more valid animal models of brain disorders.

    PubMed

    Stewart, Adam Michael; Kalueff, Allan V

    2015-01-01

    Valid, sensitive animal models are crucial for understanding the pathobiology of complex human disorders, such as anxiety, autism, depression and schizophrenia, all of which have a 'spectrum' nature. Discussing new important strategic directions of research in this field, here we focus on (i) cross-species validation of animal models, (ii) ensuring their population (external) validity, and (iii) the need to target the interplay between multiple disordered domains. We note that optimal animal models of brain disorders should target evolutionarily conserved 'core' traits/domains and specifically mimic the clinically relevant inter-relationships between these domains. PMID:24384129

  1. Software Validation via Model Animation

    NASA Technical Reports Server (NTRS)

    Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.

    2015-01-01

    This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
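
    The comparison step itself, evaluating the specification and the implementation on the same inputs and checking agreement up to a tolerance, looks roughly like the following. The functions are hypothetical stand-ins; the actual system animates PVS models through PVSio rather than a Python re-implementation.

```python
import math
import random

def spec_ground_speed(vx, vy):
    # Stand-in for the value the formal model (e.g., evaluated via PVSio) would return
    return math.sqrt(vx * vx + vy * vy)

def impl_ground_speed(vx, vy):
    # Stand-in for the floating-point implementation under test
    return math.hypot(vx, vy)

def validate(n_cases=10000, tol=1e-9):
    """Run both versions on random inputs and collect cases outside the tolerance."""
    random.seed(0)
    failures = []
    for _ in range(n_cases):
        vx, vy = random.uniform(-300, 300), random.uniform(-300, 300)
        expected, actual = spec_ground_speed(vx, vy), impl_ground_speed(vx, vy)
        if abs(expected - actual) > tol:
            failures.append((vx, vy, expected, actual))
    return failures

print(f"{len(validate())} cases outside tolerance")
```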

  2. Preparation of Power Distribution System for High Penetration of Renewable Energy Part I. Dynamic Voltage Restorer for Voltage Regulation Part II. Distribution Circuit Modeling and Validation

    NASA Astrophysics Data System (ADS)

    Khoshkbar Sadigh, Arash

    and the power rating for loads, is presented to prioritize which loads, lines and cables the meters should be installed at to have the most effect on model validation.

  3. Factor structure and construct validity of the Behavioral Dyscontrol Scale-II.

    PubMed

    Shura, Robert D; Rowland, Jared A; Yoash-Gantz, Ruth E

    2015-01-01

    The Behavioral Dyscontrol Scale-II (BDS-II) was developed as an improved scoring method to the original BDS, which was designed to evaluate the capacity for independent regulation of behavior and attention. The purpose of this study was to evaluate the factor structure and construct validity of the BDS-II, which had not been adequately re-examined since the development of the new scoring system. In a sample of 164 Veterans with a mean age of 35 years, exploratory factor analysis was used to evaluate BDS-II latent factor structure. Correlations and regressions were used to explore validity against 22 psychometrically sound neurocognitive measures across seven neurocognitive domains of sensation, motor output, processing speed, attention, visual-spatial reasoning, memory, and executive functions. Factor analysis found a two-factor solution for this sample which explained 41% of the variance in the model. Validity analyses found significant correlations among the BDS-II scores and all other cognitive domains except sensation and language (which was not evaluated). Hierarchical regressions revealed that PASAT performance was strongly associated with all three BDS-II scores; dominant hand Finger Tapping Test was also associated with the Total score and Factor 1, and CPT-II Commissions was also associated with Factor 2. These results suggest the BDS-II is both a general test of cerebral functioning, and a more specific test of working memory, motor output, and impulsivity. The BDS-II may therefore show utility with younger populations for measuring frontal lobe abilities and might be very sensitive to neurological injury. PMID:25650736

  4. Inert doublet model and LEP II limits

    SciTech Connect

    Lundstroem, Erik; Gustafsson, Michael; Edsjoe, Joakim

    2009-02-01

    The inert doublet model is a minimal extension of the standard model introducing an additional SU(2) doublet with new scalar particles that could be produced at accelerators. While there exists no LEP II analysis dedicated for these inert scalars, the absence of a signal within searches for supersymmetric neutralinos can be used to constrain the inert doublet model. This translation however requires some care because of the different properties of the inert scalars and the neutralinos. We investigate what restrictions an existing DELPHI Collaboration study of neutralino pair production can put on the inert scalars and discuss the result in connection with dark matter. We find that although an important part of the inert doublet model parameter space can be excluded by the LEP II data, the lightest inert particle still constitutes a valid dark matter candidate.

  5. SAGE II aerosol data validation and initial data use - An introduction and overview

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Mccormick, M. P.

    1989-01-01

    The process of validating data from the Stratospheric Aerosol and Gas Experiment (SAGE) II and the initial use of the validated data are reviewed. The instruments developed for the SAGE II, the influence of the eruption of El Chichon on the global stratospheric aerosol, and various data validation experiments are discussed. Consideration is given to methods for deriving aerosol physical and optical properties from SAGE II extinction data and for inferring particle size distribution moments from SAGE II spectral extinction values.

  6. Geochemistry Model Validation Report: External Accumulation Model

    SciTech Connect

    K. Zarrabi

    2001-09-27

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation

  7. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.

  8. An Aqueous Thermodynamic Model for the Complexation of Sodium and Strontium with Organic Chelators valid to High Ionic Strength. II. N-(2-hydroxyethyl)ethylenedinitrilotriacetic acid (HEDTA)

    SciTech Connect

    Felmy, Andrew R.; Mason, Marvin J.; Qafoku, Odeta

    2003-04-01

    This is the second paper in a two part series on the development of aqueous thermodynamic models for the complexation of Na+ and Sr2+ with organic chelators. In this paper the development of an aqueous thermodynamic model describing the effects of ionic strength, carbonate concentration, and temperature on the complexation of Sr2+ by HEDTA under basic conditions is presented. The thermodynamic model describing the Na+ interactions with the HEDTA3- chelate relies solely on the use of Pitzer ion-interaction parameters. The exclusive use of Pitzer ion-interaction parameters differs significantly from our previous model for EDTA, which required the introduction of a NaEDTA3- ion pair. Estimation of the Pitzer ion-interaction parameters for HEDTA3- and SrHEDTA- with Na+ allows the extrapolation of a standard state equilibrium constant for the SrHEDTA- species which is one order of magnitude greater than the 0.1M reference state value available in the literature. The overall model is developed from data available in the literature on apparent equilibrium constants for HEDTA protonation, the solubility of salts in concentrated HEDTA solutions, and from new data on the solubility of SrCO3(c) obtained as part of this study. The predictions of the final thermodynamic model for the Na-Sr-OH-CO3-NO3-HEDTA-H2O system are tested by application to chemical systems containing competing metal ions (i.e., Ca2+).

  9. Factorial validity and measurement invariance across intelligence levels and gender of the overexcitabilities questionnaire-II (OEQ-II).

    PubMed

    Van den Broeck, Wim; Hofmans, Joeri; Cooremans, Sven; Staels, Eva

    2014-03-01

    The concept of overexcitability, derived from Dabrowski's theory of personality development, offers a promising approach for the study of the developmental dynamics of giftedness. The present study aimed at (a) examining the factorial structure of the Overexcitabilities Questionnaire-II scores (OEQ-II) and (b) testing measurement invariance of these scores across intelligence and gender. A sample of 641 Dutch-speaking adolescents from 11 to 15 years old, 363 girls and 278 boys, participated in this study. Results showed that a model without cross-loadings did not fit the data well (using confirmatory factor analysis), whereas a factor model in which all cross-loadings were included yielded fit statistics that were in support of the factorial structure of the OEQ-II scores (using exploratory structural equation modeling). Furthermore, our findings supported the assumption of (partial) strict measurement invariance of the OEQ-II scores across intelligence levels and across gender. Such levels of measurement invariance allow valid comparisons between factor means and factor relationships across groups. In particular, the gifted group scored significantly higher on intellectual and sensual overexcitability (OE) than the nongifted group, girls scored higher on emotional and sensual OE than boys, and boys scored higher on intellectual and psychomotor OE than girls. PMID:24079958

  10. Thermomechanical model of hydration swelling in smectitic clays: II three-scale inter-phase mass transfer: homogenization and computational validation

    NASA Astrophysics Data System (ADS)

    Murad, Márcio A.

    1999-06-01

    In Part I a two-scale thermomechanical theory of expansive compacted clays composed of adsorbed water and clay platelets was derived using a mixture-theoretic approach and the Coleman and Noll method of exploitation of the entropy inequality. This approach led to a two-scale model which describes the interaction between thermal and hydration effects between the adsorbed water and clay minerals. The purpose of this paper is twofold. Firstly, partial results toward a three-scale model are derived by homogenizing the two-scale model for the clay particles (clusters of clay platelets and adsorbed water) with the bulk water (water next to the swelling particles). The three-scale model is of dual porosity type wherein the clay particles act as sources/sinks of water to the macroscale bulk phase flow. One of the notable consequences of the homogenization procedure is the natural derivation of a generalized inter-phase mass transfer equation between adsorbed and bulk water. Further, variational principles and finite element approximations based on the Galerkin method are proposed to discretize the two-scale model. Numerical simulations of a bentonitic clay used for engineered barrier of nuclear waste repository are performed and numerical results are presented showing the influence of physico-chemical effects on the performance of the clay buffer.

  11. Physical properties of solar chromospheric plages. III - Models based on Ca II and Mg II observations

    NASA Technical Reports Server (NTRS)

    Kelch, W. L.; Linsky, J. L.

    1978-01-01

    Solar plages are modeled using observations of both the Ca II K and the Mg II h and k lines. A partial-redistribution approach is employed for calculating the line profiles on the basis of a grid of five model chromospheres. The computed integrated emission intensities for the five atmospheric models are compared with observations of six regions on the sun as well as with models of active-chromosphere stars. It is concluded that the basic plage model grid proposed by Shine and Linsky (1974) is still valid when the Mg II lines are included in the analysis and the Ca II and Mg II lines are analyzed using partial-redistribution diagnostics.

  12. Preparation of Power Distribution System for High Penetration of Renewable Energy Part I. Dynamic Voltage Restorer for Voltage Regulation Part II. Distribution Circuit Modeling and Validation

    NASA Astrophysics Data System (ADS)

    Khoshkbar Sadigh, Arash

    by simulation and experimental tests under various conditions considering all possible cases such as different amounts of voltage sag depth (VSD), different amounts of point-on-wave (POW) at which voltage sag occurs, harmonic distortion, line frequency variation, and phase jump (PJ). Furthermore, the ripple amount of the fundamental voltage amplitude calculated by the proposed method and its error are analyzed considering the line frequency variation together with harmonic distortion. The best and worst detection times of the proposed method were measured as 1 ms and 8.8 ms, respectively. Finally, the proposed method has been compared with other voltage sag detection methods available in the literature. Part 2: Power System Modeling for Renewable Energy Integration: As power distribution systems are evolving into more complex networks, electrical engineers have to rely on software tools to perform circuit analysis. There are dozens of powerful software tools available in the market to perform the power system studies. Although their main functions are similar, there are differences in features and formatting structures to suit specific applications. This creates challenges for transferring power system circuit model data (PSCMD) between different software and rebuilding the same circuit in the second software environment. The objective of this part of the thesis is to develop a Unified Platform (UP) to facilitate transferring PSCMD among different software packages and relieve the challenges of the circuit model conversion process. UP uses a commonly available spreadsheet file with a defined format, for any source software to write data to and for any destination software to read data from, via a script-based application called the PSCMD transfer application. The main considerations in developing the UP are to minimize manual intervention and import a one-line diagram into the destination software or export it from the source software, with all details to allow load flow, short circuit and

  13. NASA GSFC CCMC Recent Model Validation Activities

    NASA Technical Reports Server (NTRS)

    Rastaetter, L.; Pulkkinen, A.; Taktakishvill, A.; Macneice, P.; Shim, J. S.; Zheng, Yihua; Kuznetsova, M. M.; Hesse, M.

    2012-01-01

    The Community Coordinated Modeling Center (CCMC) holds the largest assembly of state-of-the-art physics-based space weather models developed by the international space physics community. In addition to providing the community easy access to these modern space research models to support science research, another primary goal is to test and validate models for transition from research to operations. In this presentation, we provide an overview of the space science models available at CCMC. Then we will focus on the community-wide model validation efforts led by CCMC in all domains of the Sun-Earth system and the internal validation efforts at CCMC to support space weather services/operations provided by its sibling organization - NASA GSFC Space Weather Center (http://swc.gsfc.nasa.gov). We will also discuss our efforts in operational model validation in collaboration with NOAA/SWPC.

  14. Algorithm for model validation: Theory and applications

    PubMed Central

    Sornette, D.; Davis, A. B.; Ide, K.; Vixie, K. R.; Pisarenko, V.; Kamm, J. R.

    2007-01-01

    Validation is often defined as the process of determining the degree to which a model is an accurate representation of the real world from the perspective of its intended uses. Validation is crucial as industries and governments depend increasingly on predictions by computer models to justify their decisions. We propose to formulate the validation of a given model as an iterative construction process that mimics the often implicit process occurring in the minds of scientists. We offer a formal representation of the progressive build-up of trust in the model. Thus, we replace static claims on the impossibility of validating a given model by a dynamic process of constructive approximation. This approach is better adapted to the fuzzy, coarse-grained nature of validation. Our procedure factors in the degree of redundancy versus novelty of the experiments used for validation as well as the degree to which the model predicts the observations. We illustrate the methodology first with the maturation of quantum mechanics as the arguably best established physics theory and then with several concrete examples drawn from some of our primary scientific interests: a cellular automaton model for earthquakes, a multifractal random walk model for financial time series, an anomalous diffusion model for solar radiation transport in the cloudy atmosphere, and a computational fluid dynamics code for the Richtmyer–Meshkov instability. PMID:17420476

  15. Algorithm for model validation: theory and applications.

    PubMed

    Sornette, D; Davis, A B; Ide, K; Vixie, K R; Pisarenko, V; Kamm, J R

    2007-04-17

    Validation is often defined as the process of determining the degree to which a model is an accurate representation of the real world from the perspective of its intended uses. Validation is crucial as industries and governments depend increasingly on predictions by computer models to justify their decisions. We propose to formulate the validation of a given model as an iterative construction process that mimics the often implicit process occurring in the minds of scientists. We offer a formal representation of the progressive build-up of trust in the model. Thus, we replace static claims on the impossibility of validating a given model by a dynamic process of constructive approximation. This approach is better adapted to the fuzzy, coarse-grained nature of validation. Our procedure factors in the degree of redundancy versus novelty of the experiments used for validation as well as the degree to which the model predicts the observations. We illustrate the methodology first with the maturation of quantum mechanics as the arguably best established physics theory and then with several concrete examples drawn from some of our primary scientific interests: a cellular automaton model for earthquakes, a multifractal random walk model for financial time series, an anomalous diffusion model for solar radiation transport in the cloudy atmosphere, and a computational fluid dynamics code for the Richtmyer-Meshkov instability. PMID:17420476

  16. A Validation Model for Administrative Competencies.

    ERIC Educational Resources Information Center

    Greer, John T.; Lockridge, Burma L.

    1974-01-01

    This paper presents a model for the identification and validation of competencies, designed as an all-inclusive framework, which may be adapted for specific situations. The criteria for including data in the validation process are as follows: (a) if the data are to be employed in making decisions about individuals or groups, all available evidence…

  17. Systematic Independent Validation of Inner Heliospheric Models

    NASA Technical Reports Server (NTRS)

    MacNeice, P. J.; Takakishvili, Alexandre

    2008-01-01

    This presentation is the first in a series which will provide independent validation of community models of the outer corona and inner heliosphere. In this work we establish a set of measures to be used in validating this group of models. We use these procedures to generate a comprehensive set of results from the Wang-Sheeley-Arge (WSA) model which will be used as a baseline, or reference, against which to compare all other models. We also run a test of the validation procedures by applying them to a small set of results produced by the ENLIL Magnetohydrodynamic (MHD) model. In future presentations we will validate other models currently hosted by the Community Coordinated Modeling Center (CCMC), including a comprehensive validation of the ENLIL model. The Wang-Sheeley-Arge (WSA) model is widely used to model the solar wind, and is used by a number of agencies to predict solar wind conditions at Earth as much as four days into the future. Because it is so important to both the research and space weather forecasting communities, it is essential that its performance be measured systematically, and independently. In this paper we offer just such an independent and systematic validation. We report skill scores for the model's predictions of wind speed and IMF polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests line-of-sight magnetograms. For this study we generated model results for monthly magnetograms from the National Solar Observatory (SOLIS), Mount Wilson Observatory and the GONG network, spanning the Carrington rotation range from 1650 to 2068. We compare the influence of the different magnetogram sources, performance at quiet and active times, and estimate the effect of different empirical wind speed tunings. We also consider the ability of the WSA model to identify sharp transitions in wind speed from slow to fast wind. These results will serve as a baseline against which to compare future
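
    Skill scores of the kind reported here are commonly defined as one minus the ratio of the model's mean-square error to that of a reference prediction such as persistence or climatology; the exact definition used in the study is not reproduced here. A generic sketch with synthetic wind-speed series standing in for the model, the observations, and the reference:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic hourly solar wind speed observations (km/s) and a noisy model prediction
observed  = 400 + 150 * np.abs(np.sin(np.linspace(0, 6, 200))) + 20 * rng.normal(size=200)
model     = observed + 60 * rng.normal(size=200)       # hypothetical model prediction
reference = np.full_like(observed, observed.mean())    # climatological baseline

def skill_score(pred, ref, obs):
    """1 - MSE(pred)/MSE(ref): 1 is perfect, 0 matches the reference, <0 is worse."""
    return 1.0 - np.mean((pred - obs) ** 2) / np.mean((ref - obs) ** 2)

print(f"skill vs climatology: {skill_score(model, reference, observed):.2f}")
```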

  18. Validity of the Sleep Subscale of the Diagnostic Assessment for the Severely Handicapped-II (DASH-II)

    ERIC Educational Resources Information Center

    Matson, Johnny L.; Malone, Carrie J.

    2006-01-01

    Currently there are no available sleep disorder measures for individuals with severe and profound intellectual disability. We, therefore, attempted to establish the external validity of the "Diagnostic Assessment for the Severely Handicapped-II" (DASH-II) sleep subscale by comparing daily observational sleep data with the responses of direct care…

  19. Validation of a watershed model without calibration

    NASA Astrophysics Data System (ADS)

    Vogel, Richard M.; Sankarasubramanian, A.

    2003-10-01

    Traditional approaches for the validation of watershed models focus on the "goodness of fit" between model predictions and observations. It is possible for a watershed model to exhibit a "good" fit, yet not accurately represent hydrologic processes; hence "goodness of fit" can be misleading. Instead, we introduce an approach which evaluates the ability of a model to represent the observed covariance structure of the input (climate) and output (streamflow) without ever calibrating the model. An advantage of this approach is that it is not confounded by model error introduced during the calibration process. We illustrate that once a watershed model is calibrated, the unavoidable model error can cloud our ability to validate (or invalidate) the model. We emphasize that model hypothesis testing (validation) should be performed prior to, and independent of, parameter estimation (calibration), contrary to traditional practice in which watershed models are usually validated after calibrating the model. Our approach is tested using two different watershed models at a number of different watersheds in the United States.
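
    The idea of judging a model by how well it reproduces the observed covariance (here, correlation) between climate forcing and streamflow, without ever calibrating it, can be illustrated with a toy example. The series and the simple runoff model below are synthetic assumptions, not the statistics used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic annual precipitation (mm) and observed annual streamflow (mm) for 50 years
precip = rng.gamma(shape=8.0, scale=100.0, size=50)
q_obs  = 0.45 * precip + 40 * rng.normal(size=50)

def uncalibrated_model(p, runoff_coeff=0.5):
    # A simple a-priori runoff model; no parameter is fitted to q_obs
    return runoff_coeff * p

q_sim = uncalibrated_model(precip)

def climate_flow_correlation(p, q):
    """Correlation between the climate input and the streamflow output."""
    return np.corrcoef(p, q)[0, 1]

# Agreement of these covariance/correlation statistics, not goodness of fit to q_obs,
# is the validation criterion.
print(f"corr(P, Q_obs) = {climate_flow_correlation(precip, q_obs):.2f}")
print(f"corr(P, Q_sim) = {climate_flow_correlation(precip, q_sim):.2f}")
```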

  20. Local thermal seeing modeling validation through observatory measurements

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos; Otarola, Angel; Skidmore, Warren; Travouillon, Tony; Angeli, George

    2012-09-01

    Dome and mirror seeing are critical effects influencing the optical performance of ground-based telescopes. Computational Fluid Dynamics (CFD) can be used to obtain the refractive index field along a given optical path and calculate the corresponding image quality utilizing optical modeling tools. This procedure is validated using measurements from the Keck II and CFHT telescopes. CFD models of the Keck II and CFHT observatories on the Mauna Kea summit have been developed. The detailed models resolve all components that can influence the flow pattern through turbulence generation or heat release. Unsteady simulations generate time records of velocity and temperature fields from which the refractive index field at a given wavelength and turbulence parameters are obtained. At Keck II the Cn2 and l0 (inner scale of turbulence) were monitored along a 63 m path sensitive primarily to turbulence around the top ring of the telescope tube. For validation, these parameters were derived from temperature and velocity fluctuations obtained from CFD simulations. At CFHT dome seeing has been inferred from their database that includes telescope-delivered Image Quality (IQ). For this case CFD simulations were run for specific orientations of the telescope with respect to incoming wind, wind speeds and outside air temperature. For validation, temperature fluctuations along the optical beam from the CFD are converted into refractive index variations and corresponding Optical Path Differences (OPD), and then into Point Spread Functions (PSF) that are ultimately compared to the record of IQ.
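
    Turning simulated temperature fluctuations into an optical turbulence strength typically follows a standard route: estimate the temperature structure constant CT2 from a structure function and convert it to Cn2 with the Gladstone relation. The rough sketch below uses that standard conversion with assumed summit conditions; it is not the authors' exact post-processing.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic temperature samples (K) at two points separated by r metres
r = 0.5
T_mean, P_hpa = 275.0, 615.0                        # assumed summit temperature and pressure
drift = 0.05 * rng.normal(size=20000)               # common slow drift (cancels in the structure function)
T1 = T_mean + drift
T2 = T_mean + drift + 0.02 * rng.normal(size=20000) # additional small-scale fluctuation

# Temperature structure constant from the structure function D_T(r) = C_T^2 * r^(2/3)
D_T = np.mean((T1 - T2) ** 2)
C_T2 = D_T / r ** (2.0 / 3.0)

# Gladstone relation at optical wavelengths: C_n^2 = (79e-6 * P / T^2)^2 * C_T^2
C_n2 = (79e-6 * P_hpa / T_mean ** 2) ** 2 * C_T2
print(f"C_T^2 = {C_T2:.2e} K^2 m^-2/3,  C_n^2 = {C_n2:.2e} m^-2/3")
```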

  1. Empirical assessment of model validity

    SciTech Connect

    Wolfe, R.R.

    1991-05-01

    The metabolism of amino acids is far more complicated than a 1- to 2-pool model. Yet, these simple models have been extensively used with many different isotopically labeled tracers to study protein metabolism. A tracer of leucine and measurement of leucine kinetics has been a favorite choice for following protein metabolism. However, administering a leucine tracer and following it in blood will not adequately reflect the complex multi-pool nature of the leucine system. Using the tracer enrichment of the ketoacid metabolite of leucine, alpha-ketoisocaproate (KIC), to reflect intracellular events of leucine was an important improvement. Whether this approach is adequate to follow accurately leucine metabolism in vivo or not has not been tested. From data obtained using simultaneous administration of leucine and KIC tracers, we developed a 10-pool model of the in vivo leucine-KIC and bicarbonate kinetic system. Data from this model were compared with conventional measurements of leucine kinetics. The results from the 10-pool model agreed best with the simplified approach using a leucine tracer and measurement of KIC enrichment.

  2. Validation of a Lagrangian particle model

    NASA Astrophysics Data System (ADS)

    Brzozowska, Lucyna

    2013-05-01

    In this paper a custom-developed model of dispersion of pollutants is presented. The proposed approach is based on both a Lagrangian particle model and an urban-scale diagnostic model of the air velocity field. Both models constitute a part of an operational air quality assessment system. The proposed model is validated by comparing its computed results with the results of measurements obtained in a wind tunnel reflecting conditions of the Mock Urban Setting Test (MUST) experiment. Commonly used measures of errors and model concordance are employed and the results obtained are additionally compared with those obtained by other authors for CFD and non-CFD class models. The obtained results indicate that the validity of the model presented in this paper is acceptable.
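
    The "commonly used measures of errors and model concordance" in dispersion-model validation are typically fractional bias (FB), normalized mean square error (NMSE), and the fraction of predictions within a factor of two of the observations (FAC2). A compact sketch of those metrics on synthetic paired concentrations (not the MUST data):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic paired observed and modelled concentrations at sampler locations
c_obs = rng.lognormal(mean=0.0, sigma=0.8, size=200)
c_mod = c_obs * rng.lognormal(mean=0.1, sigma=0.5, size=200)

def fractional_bias(obs, mod):
    return 2.0 * (obs.mean() - mod.mean()) / (obs.mean() + mod.mean())

def nmse(obs, mod):
    return np.mean((obs - mod) ** 2) / (obs.mean() * mod.mean())

def fac2(obs, mod):
    ratio = mod / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

print(f"FB = {fractional_bias(c_obs, c_mod):+.2f}, "
      f"NMSE = {nmse(c_obs, c_mod):.2f}, FAC2 = {fac2(c_obs, c_mod):.2f}")
```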

  3. Validation of the Hot Strip Mill Model

    SciTech Connect

    Richard Shulkosky; David Rosberg; Jerrud Chapman

    2005-03-30

    The Hot Strip Mill Model (HSMM) is an off-line, PC based software originally developed by the University of British Columbia (UBC) and the National Institute of Standards and Technology (NIST) under the AISI/DOE Advanced Process Control Program. The HSMM was developed to predict the temperatures, deformations, microstructure evolution and mechanical properties of steel strip or plate rolled in a hot mill. INTEG process group inc. undertook the current task of enhancing and validating the technology. With the support of 5 North American steel producers, INTEG process group tested and validated the model using actual operating data from the steel plants and enhanced the model to improve prediction results.

  4. On validation and invalidation of biological models

    PubMed Central

    Anderson, James; Papachristodoulou, Antonis

    2009-01-01

    Background Very frequently the same biological system is described by several, sometimes competing mathematical models. This usually creates confusion around their validity, ie, which one is correct. However, this is unnecessary since validity of a model cannot be established; model validation is actually a misnomer. In principle the only statement that one can make about a system model is that it is incorrect, ie, invalid, a fact which can be established given appropriate experimental data. Nonlinear models of high dimension and with many parameters are impossible to invalidate through simulation and as such the invalidation process is often overlooked or ignored. Results We develop different approaches for showing how competing ordinary differential equation (ODE) based models of the same biological phenomenon containing nonlinearities and parametric uncertainty can be invalidated using experimental data. We first emphasize the strong interplay between system identification and model invalidation and we describe a method for obtaining a lower bound on the error between candidate model predictions and data. We then turn to model invalidation and formulate a methodology for discrete-time and continuous-time model invalidation. The methodology is algorithmic and uses Semidefinite Programming as the computational tool. It is emphasized that trying to invalidate complex nonlinear models through exhaustive simulation is not only computationally intractable but also inconclusive. Conclusion Biological models derived from experimental data can never be validated. In fact, in order to understand biological function one should try to invalidate models that are incompatible with available data. This work describes a framework for invalidating both continuous and discrete-time ODE models based on convex optimization techniques. The methodology does not require any simulation of the candidate models; the algorithms presented in this paper have a worst case polynomial time

  5. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    SciTech Connect

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  6. Structural system identification: Structural dynamics model validation

    SciTech Connect

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  7. Oil spill impact modeling: development and validation.

    PubMed

    French-McCay, Deborah P

    2004-10-01

    A coupled oil fate and effects model has been developed for the estimation of impacts to habitats, wildlife, and aquatic organisms resulting from acute exposure to spilled oil. The physical fates model estimates the distribution of oil (as mass and concentrations) on the water surface, on shorelines, in the water column, and in the sediments, accounting for spreading, evaporation, transport, dispersion, emulsification, entrainment, dissolution, volatilization, partitioning, sedimentation, and degradation. The biological effects model estimates exposure of biota of various behavior types to floating oil and subsurface contamination, resulting percent mortality, and sublethal effects on production (somatic growth). Impacts are summarized as areas or volumes affected, percent of populations lost, and production foregone because of a spill's effects. This paper summarizes existing information and data used to develop the model, model algorithms and assumptions, validation studies, and research needs. Simulation of the Exxon Valdez oil spill is presented as a case study and validation of the model. PMID:15511105

  8. Feature extraction for structural dynamics model validation

    SciTech Connect

    Hemez, Francois; Farrar, Charles; Park, Gyuhae; Nishio, Mayuko; Worden, Keith; Takeda, Nobuo

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing those response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered include sensitivity, dimensionality, type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a method for comparing multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must not only consider the sensitivity of the features being used, but also the correlation of the parameters being compared.
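
An illustrative sketch of the Mahalanobis-distance outlier check mentioned above, with made-up feature vectors; the feature names and data are assumptions, not taken from the study.

```python
# Feature vectors from candidate simulations are scored against the distribution of
# experimentally derived feature vectors; small distances suggest parameter sets
# that are consistent with the test data (toy data, hypothetical features).
import numpy as np

rng = np.random.default_rng(0)
# Rows are feature vectors, e.g. [RMS level, peak frequency, AR-model coefficient].
experimental_features = rng.normal([1.0, 12.0, 0.3], [0.05, 0.4, 0.02], size=(30, 3))
simulated_features = np.array([[1.02, 12.1, 0.31],   # candidate parameter set A
                               [1.40, 15.0, 0.10]])  # candidate parameter set B

mean = experimental_features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(experimental_features, rowvar=False))

for label, feat in zip("AB", simulated_features):
    d = np.sqrt((feat - mean) @ cov_inv @ (feat - mean))
    print(f"parameter set {label}: Mahalanobis distance = {d:.2f}")
```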

  9. Validation of an Experimentally Derived Uncertainty Model

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Cox, D. E.; Balas, G. J.; Juang, J.-N.

    1996-01-01

    The results show that uncertainty models can be obtained directly from system identification data by using a minimum norm model validation approach. The error between the test data and an analytical nominal model is modeled as a combination of unstructured additive and structured input multiplicative uncertainty. Robust controllers which use the experimentally derived uncertainty model show significant stability and performance improvements over controllers designed with assumed ad hoc uncertainty levels. Use of the identified uncertainty model also allowed a strong correlation between design predictions and experimental results.

  10. Evaluation (not validation) of quantitative models.

    PubMed Central

    Oreskes, N

    1998-01-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  11. Evaluation (not validation) of quantitative models.

    PubMed

    Oreskes, N

    1998-12-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  12. Validation of Space Weather Models at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, M. M.; Pulkkinen, A.; Rastaetter, L.; Hesse, M.; Chulaki, A.; Maddox, M.

    2011-01-01

    The Community Coordinated Modeling Center (CCMC) is a multiagency partnership, which aims at the creation of next generation space weather models. The CCMC's goal is to support the research and developmental work necessary to substantially increase space weather modeling capabilities and to facilitate the deployment of advanced models in forecasting operations. The CCMC conducts unbiased model testing and validation and evaluates model readiness for the operational environment. The presentation will demonstrate the recent progress in CCMC metrics and validation activities.

  13. Validation of chemometric models - a tutorial.

    PubMed

    Westad, Frank; Marini, Federico

    2015-09-17

    In this tutorial, we focus on validation from both a numerical and a conceptual point of view. The procedure often reported in the literature of (repeatedly) dividing a dataset randomly into a calibration and a test set must be applied with care. It can only be justified when there is no systematic stratification of the objects that will affect the validated estimates or figures of merit such as RMSE or R(2). The various levels of validation may, typically, be repeatability, reproducibility, and instrument and raw material variation. Examples of how one data set can be validated across this background information illustrate that it will affect the figures of merit as well as the dimensionality of the models. Even more important is the robustness of the models for predicting future samples. Another aspect that is brought to attention is validation in terms of the overall conclusions when observing a specific system. One example is to apply several methods for finding the significant variables and see if there is a consensus subset that also matches what is reported in the literature or based on the underlying chemistry. PMID:26398418
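
A small sketch of the repeated random calibration/test split discussed in the tutorial, reporting RMSE as the figure of merit; the data are synthetic, and a real validation would also need to respect any stratification (instrument, raw-material batch, and so on).

```python
# Repeated random calibration/test splitting on synthetic data, with RMSE as the
# figure of merit (illustration only; respects no stratification structure).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.5, 0.0, -0.3, 0.2]) + rng.normal(scale=0.2, size=100)

rmses = []
for _ in range(50):
    idx = rng.permutation(len(y))
    train, test = idx[:70], idx[70:]
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)   # calibration step
    resid = y[test] - X[test] @ beta                             # test-set residuals
    rmses.append(np.sqrt(np.mean(resid ** 2)))

print(f"RMSE over 50 random splits: {np.mean(rmses):.3f} +/- {np.std(rmses):.3f}")
```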

  14. Real-time remote scientific model validation

    NASA Technical Reports Server (NTRS)

    Frainier, Richard; Groleau, Nicolas

    1994-01-01

    This paper describes flight results from the use of a CLIPS-based validation facility to compare analyzed data from a space life sciences (SLS) experiment to an investigator's preflight model. The comparison, performed in real-time, either confirms or refutes the model and its predictions. This result then becomes the basis for continuing or modifying the investigator's experiment protocol. Typically, neither the astronaut crew in Spacelab nor the ground-based investigator team is able to react to their experiment data in real time. This facility, part of a larger science advisor system called Principal Investigator in a Box, was flown on the space shuttle in October 1993. The software system aided the conduct of a human vestibular physiology experiment and was able to outperform humans in the tasks of data integrity assurance, data analysis, and scientific model validation. Of twelve preflight hypotheses associated with the investigator's model, seven were confirmed and five were rejected or compromised.

  15. A Hierarchical Systems Approach to Model Validation

    NASA Astrophysics Data System (ADS)

    Easterbrook, S. M.

    2011-12-01

    Existing approaches to the question of how climate models should be evaluated tend to rely on either philosophical arguments about the status of models as scientific tools, or on empirical arguments about how well runs from a given model match observational data. These have led to quantitative approaches expressed in terms of model bias or forecast skill, and ensemble approaches where models are assessed according to the extent to which the ensemble brackets the observational data. Unfortunately, such approaches focus the evaluation on models per se (or more specifically, on the simulation runs they produce) as though the models can be isolated from their context. Such an approach may overlook a number of important aspects of the use of climate models: - the process by which models are selected and configured for a given scientific question. - the process by which model outputs are selected, aggregated and interpreted by a community of expertise in climatology. - the software fidelity of the models (i.e. whether the running code is actually doing what the modellers think it's doing). - the (often convoluted) history that begat a given model, along with the modelling choices long embedded in the code. - variability in the scientific maturity of different model components within a coupled system. These omissions mean that quantitative approaches cannot assess whether a model produces the right results for the wrong reasons, or conversely, the wrong results for the right reasons (where, say, the observational data is problematic, or the model is configured to be unlike the earth system for a specific reason). Hence, we argue that it is a mistake to think that validation is a post-hoc process to be applied to an individual "finished" model, to ensure it meets some criteria for fidelity to the real world. We are therefore developing a framework for model validation that extends current approaches down into the detailed codebase and the processes by which the code is built

  16. Calibration and validation of rockfall models

    NASA Astrophysics Data System (ADS)

    Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.

    2013-04-01

    Calibrating and validating landslide models is extremely difficult due to the particular characteristics of landslides: limited recurrence in time, relatively low frequency of the events, short durability of post-event traces, and poor availability of continuous monitoring data, especially for small landslides and rockfalls. For this reason, most of the rockfall models presented in the literature completely lack calibration and validation of the results. In this contribution, we explore different strategies for rockfall model calibration and validation starting from both an historical event and a full-scale field test. The event occurred in 2012 in Courmayeur (Western Alps, Italy), and caused serious damage to quarrying facilities. This event was studied soon after its occurrence through a field campaign aimed at mapping the blocks arrested along the slope, the shape and location of the detachment area, and the traces of scars associated with impacts of blocks on the slope. The full-scale field test was performed by Geovert Ltd in the Christchurch area (New Zealand) after the 2011 earthquake. During the test, a number of large blocks were mobilized from the upper part of the slope and filmed with high velocity cameras from different viewpoints. The movies of each released block were analysed to identify the block shape, the propagation path, the location of impacts, the height of the trajectory and the velocity of the block along the path. Both calibration and validation of rockfall models should be based on the optimization of the agreement between the actual trajectories or location of arrested blocks and the simulated ones. A measure that describes this agreement is therefore needed. For calibration purposes, this measure should be simple enough to allow trial-and-error repetitions of the model for parameter optimization. In this contribution we explore different calibration/validation measures: (1) the percentage of simulated blocks arresting within a buffer of the
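
A toy illustration of one of the calibration/validation measures listed above: the percentage of simulated block arrest positions falling within a buffer distance of an observed arrest position. Coordinates and the buffer width are invented for the example.

```python
# Agreement measure between mapped (observed) and simulated block arrest positions.
import numpy as np

observed_xy = np.array([[10.0, 5.0], [12.0, 7.5], [15.0, 6.0]])    # mapped blocks (m)
simulated_xy = np.array([[10.5, 5.2], [20.0, 9.0], [14.8, 6.3],
                         [11.9, 7.0], [30.0, 2.0]])                 # model output (m)
buffer_m = 1.0

# Distance from every simulated block to every observed block.
dists = np.linalg.norm(simulated_xy[:, None, :] - observed_xy[None, :, :], axis=2)
within = dists.min(axis=1) <= buffer_m
print(f"{100.0 * within.mean():.0f}% of simulated blocks arrest within "
      f"{buffer_m} m of an observed block")
```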

  17. Solar Sail Model Validation from Echo Trajectories

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Brickerhoff, Adam T.

    2007-01-01

    The NASA In-Space Propulsion program has been engaged in a project to increase the technology readiness of solar sails. Recently, these efforts came to fruition in the form of several software tools to model solar sail guidance, navigation and control. Furthermore, solar sails are one of five technologies competing for the New Millennium Program Space Technology 9 flight demonstration mission. The historic Echo 1 and Echo 2 balloons were comprised of aluminized Mylar, which is the near-term material of choice for solar sails. Both spacecraft, but particularly Echo 2, were in low Earth orbits with characteristics similar to the proposed Space Technology 9 orbit. Therefore, the Echo balloons are excellent test cases for solar sail model validation. We present the results of studies of Echo trajectories that validate solar sail models of optics, solar radiation pressure, shape and low-thrust orbital dynamics.

  18. Pain Documentation: Validation of a Reference Model.

    PubMed

    Gesner, Emily; Collins, Sarah A; Rocha, Roberto

    2015-01-01

    Over the last decade, interoperability of the Electronic Health Record (EHR) is becoming more of a reality. However, inconsistencies in documentation such as pain are considered a barrier to achieving this goal. In order to be able to remedy this issue, it is necessary to validate reference models that have been created based upon requirements defined by Health Level 7 (HL7), Logical Observation Identifiers Names and Codes (LOINC) and the Intermountain Clinical Element Model using external published sources and guidelines. Using pain as an example of complex and inconsistent documentation, it was found that the reference model based upon these standards is valid because the data elements identified are broad and can meet the needs of each sub-domain within the primary domain of pain. PMID:26262163

  19. Predicting Backdrafting and Spillage for Natural-Draft Gas Combustion Appliances: Validating VENT-II

    SciTech Connect

    Rapp, Vi H.; Pastor-Perez, Albert; Singer, Brett C.; Wray, Craig P.

    2013-04-01

    VENT-II is a computer program designed to provide detailed analysis of natural draft and induced draft combustion appliance vent systems (i.e., furnace or water heater). This program is capable of predicting house depressurization thresholds that lead to backdrafting and spillage of combustion appliances; however, validation reports of the program being applied for this purpose are not readily available. The purpose of this report is to assess VENT-II’s ability to predict combustion gas spillage events due to house depressurization by comparing VENT-II simulated results with experimental data for four appliance configurations. The results show that VENT-II correctly predicts depressurizations resulting in spillage for natural draft appliances operating in cold and mild outdoor conditions, but not for hot conditions. In the latter case, the predicted depressurizations depend on whether the vent section is defined as part of the vent connector or the common vent when setting up the model. Overall, the VENT-II solver requires further investigation before it can be used reliably to predict spillage caused by depressurization over a full year of weather conditions, especially where hot conditions occur.

  20. Using Model Checking to Validate AI Planner Domain Models

    NASA Technical Reports Server (NTRS)

    Penix, John; Pecheur, Charles; Havelund, Klaus

    1999-01-01

    This report describes an investigation into using model checking to assist validation of domain models for the HSTS planner. The planner models are specified using a qualitative temporal interval logic with quantitative duration constraints. We conducted several experiments to translate the domain modeling language into the SMV, Spin and Murphi model checkers. This allowed a direct comparison of how the different systems would support specific types of validation tasks. The preliminary results indicate that model checking is useful for finding faults in models that may not be easily identified by generating test plans.

  1. Paleoclimate validation of a numerical climate model

    SciTech Connect

    Schelling, F.J.; Church, H.W.; Zak, B.D.; Thompson, S.L.

    1994-04-01

    An analysis planned to validate regional climate model results for a past climate state at Yucca Mountain, Nevada, against paleoclimate evidence for the period is described. This analysis, which will use the GENESIS model of global climate nested with the RegCM2 regional climate model, is part of a larger study for DOE's Yucca Mountain Site Characterization Project that is evaluating the impacts of long term future climate change on performance of the potential high level nuclear waste repository at Yucca Mountain. The planned analysis and anticipated results are presented.

  2. Beyond the Standard Model II

    NASA Astrophysics Data System (ADS)

    Milton, Kimball A.; Kantowski, Ronald; Samuel, Mark A.

    1991-07-01

    Future Prospects * Quantum Mechanics at the Black Hole Horizon * Target-Space Duality and the Curse of the Wormhole * Mass Enhancement and Critical Behavior in Technicolor Theories * Proton-Proton and Proton-Antiproton Elastic Scattering at High Energies - Theory, Phenomenology, and Experiment * Gauge Masses in String Field Theory * An Introduction to Bosonic Technicolor * Anyonic Superconductivity * Hunting the Higgs Boson at LEP with OPAL * Beyond the Standard Model - The Sextet Quarks Way * Dynamical Breakdown of Z2 and Parity in QED3 with Fermion Self-Coupling * Scaling Properties of QED3 with Fermion Self-Couplings * Wheeler-DeWitt Quantum Gravity in (2+1) Dimensions * Kac-Moody Algebras from Covariantization of the Lax Operators * An Upper Bound on the Higgs Mass * Suppression of the Vacuum Energy Expectation Value * Lorentz Covariance of Quantum Fluctuations in Quantum Field Theory * The Gauge Invariance of the Critical Curve in Strong-coupling Gauge Theory * Heavy W Decays into Sfermions and a Photon * New Insights on Majoron Models * Program of Beyond the Standard Model II * List of Participants

  3. Using airborne laser scanning profiles to validate marine geoid models

    NASA Astrophysics Data System (ADS)

    Julge, Kalev; Gruno, Anti; Ellmann, Artu; Liibusk, Aive; Oja, Tõnis

    2014-05-01

    Airborne laser scanning (ALS) is a remote sensing method which utilizes LiDAR (Light Detection And Ranging) technology. The datasets collected are important sources for a large range of scientific and engineering applications. Mostly, ALS is used to measure terrain surfaces for the compilation of Digital Elevation Models, but it can also be used in other applications. This contribution focuses on the usage of an ALS system for measuring sea surface heights and validating gravimetric geoid models over marine areas. This is based on the ALS ability to register echoes of the LiDAR pulse from the water surface. A case study was carried out to analyse the possibilities for validating marine geoid models by using ALS profiles. A test area at the southern shores of the Gulf of Finland was selected for regional geoid validation. ALS measurements were carried out by the Estonian Land Board in spring 2013 at different altitudes and using different scan rates. The single-wavelength Leica ALS50-II laser scanner on board a small aircraft was used to determine the sea level (with respect to the GRS80 reference ellipsoid), which follows roughly the equipotential surface of the Earth's gravity field. For the validation a high-resolution (1'x2') regional gravimetric GRAV-GEOID2011 model was used. This geoid model covers the entire area of Estonia and the surrounding waters of the Baltic Sea. The fit between the geoid model and GNSS/levelling data within the Estonian dry land revealed an RMS of residuals of ±1…±2 cm. Note that such fitting validation cannot proceed over marine areas. Therefore, an ALS observation-based methodology was developed to evaluate the GRAV-GEOID2011 quality over marine areas. The accuracy of the acquired ALS dataset was analysed, and an optimal width of the nadir corridor containing good-quality ALS data was determined. The impact of the ALS scan angle range and flight altitude on the obtainable vertical accuracy was investigated as well. The quality of the point cloud is analysed by cross

  4. SPR Hydrostatic Column Model Verification and Validation.

    SciTech Connect

    Bettin, Giorgia; Lord, David; Rudeen, David Keith

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for an extended period of time. This report describes the HCM, its functional requirements, the model structure, and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and that it is implemented as intended. The cavern BH101 long term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
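
A back-of-the-envelope hydrostatic column calculation in the spirit of the HCM (not its implementation): the wellhead pressure follows from the cavern pressure minus the weight of the stacked fluid columns. Densities, column heights, and the cavern pressure below are assumed round numbers.

```python
# Hydrostatic column estimate: P_wellhead = P_cavern - sum(rho_i * g * h_i).
g = 9.81                                   # gravitational acceleration, m/s^2
cavern_pressure_pa = 12.0e6                # assumed pressure at the casing shoe (Pa)
columns = [
    ("nitrogen", 150.0, 300.0),            # (fluid, density kg/m^3, column height m)
    ("crude oil", 850.0, 400.0),
    ("brine", 1200.0, 200.0),
]

hydrostatic_pa = sum(rho * g * h for _, rho, h in columns)
wellhead_pa = cavern_pressure_pa - hydrostatic_pa
print(f"hydrostatic head = {hydrostatic_pa / 1e6:.2f} MPa, "
      f"wellhead pressure = {wellhead_pa / 1e6:.2f} MPa")
```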

  5. Thermodynamically valid noise models for nonlinear devices

    NASA Astrophysics Data System (ADS)

    Coram, Geoffrey J.

    2000-11-01

    Noise has been a concern from the very beginning of signal processing and electrical engineering in general, although it was perhaps of less interest until vacuum-tube amplifiers made it audible just after 1900. Rigorous noise models for linear resistors were developed in 1927 by Nyquist and Johnson [1, 2]. However, the intervening years have not brought similarly well-established models for noise in nonlinear devices. This thesis proposes using thermodynamic principles to determine whether a given nonlinear device noise model is physically valid. These tests are applied to several models. One conclusion is that the standard Gaussian noise models for nonlinear devices predict thermodynamically impossible circuit behavior: these models should be abandoned. But the nonlinear shot-noise model predicts thermodynamically acceptable behavior under a constraint derived here. This thesis shows how the thermodynamic requirements can be reduced to concise mathematical tests, involving no approximations, for the Gaussian and shot-noise models. When the above-mentioned constraint is satisfied, the nonlinear shot-noise model specifies the current noise amplitude at each operating point from knowledge of the device v-i curve alone. This relation between the dissipative behavior and the noise fluctuations is called, naturally enough, a fluctuation-dissipation relation. This thesis further investigates such FDRs, including one for linear resistors in nonlinear circuits that was previously unexplored. The aim of this thesis is to provide thermodynamically solid foundations for noise models. It is hoped that hypothesized noise models developed to match experiment will be validated against the concise mathematical tests of this thesis. Finding a correct noise model will help circuit designers and physicists understand the actual processes causing the noise, and perhaps help them minimize the noise or its effect in the circuit. (Copies available exclusively from MIT Libraries, Rm

  6. Hierarchical Model Validation of Symbolic Performance Models of Scientific Kernels

    SciTech Connect

    Alam, Sadaf R; Vetter, Jeffrey S

    2006-08-01

    Multi-resolution validation of hierarchical performance models of scientific applications is critical primarily for two reasons. First, the step-by-step validation determines the correctness of all essential components or phases in a science simulation. Second, a model that is validated at multiple resolution levels is the very first step to generate predictive performance models, not only for existing systems but also for emerging systems and future problem sizes. We present the design and validation of hierarchical performance models of two scientific benchmarks using a new technique called modeling assertions (MA). Our MA prototype framework generates symbolic performance models that can be evaluated efficiently by generating the equivalent model representations in Octave and MATLAB. The multi-resolution modeling and validation is conducted on two contemporary, massively parallel systems, the XT3 and the Blue Gene/L. The workload distribution and growth rate predictions generated by the MA models are confirmed by the experimental data collected on the MPP platforms. In addition, the physical memory requirements that are generated by the MA models are verified by the runtime values on the Blue Gene/L system, which has 512 MBytes and 256 MBytes physical memory capacity in its two unique execution modes.

  7. Predictive Validation of an Influenza Spread Model

    PubMed Central

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive
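
A sketch of the deviation/error metrics used in this kind of predictive validation, computed for two synthetic epidemic curves: error in peak timing, relative error in peak intensity, and RMSE over the season. The curves are fabricated for illustration.

```python
# Forecast-versus-observed error metrics for a single epidemic season (toy curves).
import numpy as np

weeks = np.arange(1, 31)
observed = 500 * np.exp(-0.5 * ((weeks - 16) / 3.5) ** 2)    # observed case counts
forecast = 430 * np.exp(-0.5 * ((weeks - 14) / 4.0) ** 2)    # model forecast

peak_week_error = int(weeks[np.argmax(forecast)] - weeks[np.argmax(observed)])
intensity_error = (forecast.max() - observed.max()) / observed.max()
rmse = np.sqrt(np.mean((forecast - observed) ** 2))

print(f"peak-week error: {peak_week_error} weeks")
print(f"relative peak-intensity error: {intensity_error:+.1%}")
print(f"RMSE over the season: {rmse:.1f} cases/week")
```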

  8. ExodusII Finite Element Data Model

    Energy Science and Technology Software Center (ESTSC)

    2005-05-14

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as for code-to-code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface. (EXODUS II is based on netCDF.)
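
Because an EXODUS II file is a netCDF file, it can be inspected with any generic netCDF reader even without the official C/C++/Fortran API. The sketch below assumes the third-party Python netCDF4 package and a hypothetical file name; it only lists dimensions, variables, and global attributes rather than relying on EXODUS-specific naming conventions.

```python
# Generic inspection of an EXODUS II database through its underlying netCDF layer.
from netCDF4 import Dataset

with Dataset("mesh.exo", "r") as exo:            # "mesh.exo" is a placeholder file name
    print("dimensions:", {name: len(dim) for name, dim in exo.dimensions.items()})
    print("variables: ", sorted(exo.variables))
    # Global attributes typically record the EXODUS II API/database version.
    print("attributes:", {name: exo.getncattr(name) for name in exo.ncattrs()})
```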

  9. A decision support system (GesCoN) for managing fertigation in vegetable crops. Part II-model calibration and validation under different environmental growing conditions on field grown tomato.

    PubMed

    Conversa, Giulia; Bonasia, Anna; Di Gioia, Francesco; Elia, Antonio

    2015-01-01

    The GesCoN model was evaluated for its capability to simulate growth, nitrogen uptake, and productivity of open field tomato grown under different environmental and cultural conditions. Five datasets collected from experimental trials carried out in Foggia (IT) were used for calibration, and 13 datasets collected from trials conducted in Foggia, Perugia (IT), and Florida (USA) were used for validation. The goodness of fit was assessed by comparing the observed and simulated shoot dry weight (SDW) and crop N uptake during the crop seasons, total dry weight (TDW), N uptake and fresh yield (TFY). In the SDW model calibration, the relative RMSE (RRMSE) values fell within the good 10-15% range, and the percent bias (PBIAS) ranged between -11.5% and 7.4%. The Nash-Sutcliffe efficiency (NSE) was very close to the optimal value of 1. In the N uptake calibration, RRMSE and PBIAS were very low (7% and -1.78, respectively) and NSE was close to 1. The validation of SDW (RRMSE = 16.7%; NSE = 0.96) and N uptake (RRMSE = 16.8%; NSE = 0.96) showed the good accuracy of GesCoN. A model under- or overestimation of SDW and N uptake occurred when a higher or lower N rate and/or a more or less efficient system was used compared to the calibration trial. The in-season adjustment, using the "SDWcheck" procedure, greatly improved model simulations both in the calibration and in the validation phases. The TFY prediction was quite good except in Florida, where a large overestimation (+16%) was linked to a different harvest index (0.53) compared to the cultivars used for model calibration and validation in the Italian areas. The soil water content at the 10-30 cm depth appears to be well simulated by the software, and GesCoN proved to be able to adaptively control potential yield and DW accumulation under limited N soil availability scenarios and consequently to modify fertilizer application. The DSS well simulates SDW accumulation and N uptake of different tomato genotypes grown under Mediterranean and subtropical
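
For reference, the goodness-of-fit statistics quoted above (RRMSE, PBIAS, NSE) computed for a toy observed/simulated pair; the numbers are invented, and the sign convention for PBIAS varies between authors.

```python
# Standard goodness-of-fit statistics for an observed/simulated series (toy values).
import numpy as np

obs = np.array([1.2, 2.5, 3.9, 5.1, 6.8, 8.0])     # e.g. observed shoot dry weight
sim = np.array([1.0, 2.7, 3.6, 5.4, 6.5, 8.4])     # simulated values

rmse = np.sqrt(np.mean((sim - obs) ** 2))
rrmse = 100.0 * rmse / obs.mean()                              # relative RMSE (%)
pbias = 100.0 * np.sum(obs - sim) / np.sum(obs)                # percent bias (%)
nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(f"RRMSE = {rrmse:.1f}%, PBIAS = {pbias:+.1f}%, NSE = {nse:.3f}")
```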

  10. Validation of Space Weather Models at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, M. M.; Hesse, M.; Pulkkinen, A.; Maddox, M.; Rastaetter, L.; Berrios, D.; Zheng, Y.; MacNeice, P. J.; Shim, J.; Taktakishvili, A.; Chulaki, A.

    2011-01-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership to support the research and developmental work necessary to substantially increase space weather modeling capabilities and to facilitate the deployment of advanced models in forecasting operations. Space weather models and coupled model chains hosted at the CCMC range from the solar corona to the Earth's upper atmosphere. CCMC has developed a number of real-time modeling systems, as well as a large number of modeling and data products tailored to address the space weather needs of NASA's robotic missions. The CCMC conducts unbiased model testing and validation and evaluates model readiness for the operational environment. CCMC has been leading recent comprehensive modeling challenges under the GEM, CEDAR and SHINE programs. The presentation will focus on experience in carrying out comprehensive and systematic validation of large sets of space weather models.

  11. Turbulence Modeling Validation, Testing, and Development

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.; Huang, P. G.; Coakley, T. J.

    1997-01-01

    The primary objective of this work is to provide accurate numerical solutions for selected flow fields and to compare and evaluate the performance of selected turbulence models with experimental results. Four popular turbulence models have been tested and validated against experimental data of ten turbulent flows. The models are: (1) the two-equation k-omega model of Wilcox, (2) the two-equation k-epsilon model of Launder and Sharma, (3) the two-equation k-omega/k-epsilon SST model of Menter, and (4) the one-equation model of Spalart and Allmaras. The flows investigated are five free shear flows consisting of a mixing layer, a round jet, a plane jet, a plane wake, and a compressible mixing layer; and five boundary layer flows consisting of an incompressible flat plate, a Mach 5 adiabatic flat plate, a separated boundary layer, an axisymmetric shock-wave/boundary layer interaction, and an RAE 2822 transonic airfoil. The experimental data for these flows are well established and have been extensively used in model development. The results are shown in the following four sections: Part A describes the equations of motion and boundary conditions; Part B describes the model equations, constants, parameters, boundary conditions, and numerical implementation; and Parts C and D describe the experimental data and the performance of the models in the free-shear flows and the boundary layer flows, respectively.

  12. Validation of the Korean version Moorehead-Ardelt quality of life questionnaire II

    PubMed Central

    Lee, Yeon Ji; Song, Hyun Jin; Oh, Sung-Hee; Kwon, Jin Won; Moon, Kon-Hak; Park, Joong-Min; Lee, Sang Kuon

    2014-01-01

    Purpose To investigate weight loss effects with higher sensitivity, disease-specific quality of life (QoL) instruments are important. The Moorehead-Ardelt quality of life questionnaire II (MA-II) is widely used, because it is simple and has been validated in several languages. The aim of the present study was to translate the MA-II into Korean and to validate it against the EuroQol-5 dimension (EQ-5D), the obesity-related problems scale (OP-scale), and the impact of weight on quality of life-lite (IWQoL-Lite). Methods The study design was a multicenter, cross-sectional survey that included postoperative patients. The validation procedure comprised a translation-back-translation procedure, a pilot study, and a field study. The instruments measuring QoL included the MA-II, EQ-5D, OP-scale, and IWQoL-Lite. Reliability was checked through internal consistency using Cronbach alpha coefficients. Construct validity was assessed using the Spearman rank correlation between the 6 domains of the MA-II and the EQ-5D, OP-scale, and 5 domains of the IWQoL-Lite. Results The Cronbach alpha of the MA-II was 0.763, so internal consistency was confirmed. The total score of the MA-II was significantly correlated with all other instruments: EQ-5D, OP-scale, and IWQoL-Lite. The IWQoL-Lite (ρ = 0.623, P < 0.001) showed the strongest correlation with the MA-II, followed by the OP-scale (ρ = 0.588, P < 0.001) and the EQ-5D (ρ = 0.378, P < 0.01). Conclusion The Korean version of the MA-II is a valid instrument for measuring obesity-specific QoL. The present study confirmed that the MA-II has good reliability and validity and is also simple to administer. Thus, the MA-II can provide sensitive and accurate estimates of QoL in obesity patients. PMID:25368853
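
A small sketch of the two statistics reported above, Cronbach's alpha for internal consistency and Spearman's rank correlation between instrument scores, computed on randomly generated toy responses rather than the study data.

```python
# Cronbach's alpha and Spearman correlation for simulated questionnaire responses.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
# 50 hypothetical respondents answering the 6 MA-II domains (continuous toy scores).
latent = rng.normal(size=(50, 1))
ma_ii_items = latent + rng.normal(scale=0.7, size=(50, 6))

k = ma_ii_items.shape[1]
item_var = ma_ii_items.var(axis=0, ddof=1).sum()
total_var = ma_ii_items.sum(axis=1).var(ddof=1)
cronbach_alpha = k / (k - 1) * (1.0 - item_var / total_var)

other_instrument = latent.ravel() + rng.normal(scale=1.0, size=50)   # e.g. another QoL score
rho, p_value = spearmanr(ma_ii_items.sum(axis=1), other_instrument)

print(f"Cronbach alpha = {cronbach_alpha:.3f}")
print(f"Spearman rho = {rho:.3f} (P = {p_value:.3g})")
```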

  13. Validation of Computational Models in Biomechanics

    PubMed Central

    Henninger, Heath B.; Reese, Shawn P.; Anderson, Andrew E.; Weiss, Jeffrey A.

    2010-01-01

    The topics of verification and validation (V&V) have increasingly been discussed in the field of computational biomechanics, and many recent articles have applied these concepts in an attempt to build credibility for models of complex biological systems. V&V are evolving techniques that, if used improperly, can lead to false conclusions about a system under study. In basic science these erroneous conclusions may lead to failure of a subsequent hypothesis, but they can have more profound effects if the model is designed to predict patient outcomes. While several authors have reviewed V&V as they pertain to traditional solid and fluid mechanics, it is the intent of this manuscript to present them in the context of computational biomechanics. Specifically, the task of model validation will be discussed with a focus on current techniques. It is hoped that this review will encourage investigators to engage and adopt the V&V process in an effort to increase peer acceptance of computational biomechanics models. PMID:20839648

  14. Session on validation of coupled models

    NASA Technical Reports Server (NTRS)

    Kuo, Bill

    1993-01-01

    The session on validation of coupled models is reviewed. The current use of a mesoscale model with a grid size of 20 km during STORM-FEST in 1992 has proven to be extremely valuable. The availability of forecast products at a much higher temporal and spatial resolution was very helpful for mesoscale forecasting, mission planning, and the guidance of research aircraft. Recent numerical simulations of ocean cyclones and mesoscale convective systems using nonhydrostatic cloud/mesoscale models with a grid size as small as 2 km have demonstrated the potential of these models for predicting mesoscale convective systems, squall lines, hurricane rainbands, mesoscale gravity waves, and mesoscale frontal structures embedded within an extratropical cyclone. Although mesoscale/cloud scale models have demonstrated strong potential for use in operational forecasting, very limited quantitative evaluation (and verification) of these models was performed. As a result, the accuracy, the systematic biases, and the useful forecast limits were not properly defined for these models. Also, no serious attempts were made to use these models for operational prediction of mesoscale convective systems.

  15. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.

  16. Plasma Reactor Modeling and Validation Experiments

    NASA Technical Reports Server (NTRS)

    Meyyappan, M.; Bose, D.; Hash, D.; Hwang, H.; Cruden, B.; Sharma, S. P.; Rao, M. V. V. S.; Arnold, Jim (Technical Monitor)

    2001-01-01

    Plasma processing is a key processing step in integrated circuit manufacturing. Low-pressure, high-density plasma reactors are widely used for etching and deposition. The inductively coupled plasma (ICP) source has become popular recently in many processing applications. In order to accelerate equipment and process design, an understanding of the physics and chemistry, particularly plasma power coupling, plasma and processing uniformity, and mechanisms, is important. This understanding is facilitated by comprehensive modeling and simulation as well as plasma diagnostics to provide the necessary data for model validation, which are addressed in this presentation. We have developed a complete code for simulating an ICP reactor, and the model consists of transport of electrons, ions, and neutrals, Poisson's equation, and Maxwell's equation along with gas flow and energy equations. Results will be presented for chlorine and fluorocarbon plasmas and compared with data from Langmuir probe, mass spectrometry and FTIR.

  17. Validation of Kp Estimation and Prediction Models

    NASA Astrophysics Data System (ADS)

    McCollough, J. P., II; Young, S. L.; Frey, W.

    2014-12-01

    Specification and forecast of geomagnetic indices is an important capability for space weather operations. The University Partnering for Operational Support (UPOS) effort at the Applied Physics Laboratory of Johns Hopkins University (JHU/APL) produced many space weather models, including the Kp Predictor and Kp Estimator. We perform a validation of index forecast products against definitive indices computed by the Deutsches GeoForschungsZentrum Potsdam (GFZ). We compute continuous predictant skill scores, as well as 2x2 contingency tables and associated scalar quantities for different index thresholds. We also compute a skill score against a nowcast persistence model. We discuss various sources of error for the models and how they may potentially be improved.
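
A toy example of the threshold-based validation described above: a 2x2 contingency table for predicted versus definitive Kp exceeding a threshold, with a simple comparison against a persistence forecast. The index values are fabricated.

```python
# 2x2 contingency table and basic scalar scores for a threshold-exceedance forecast.
import numpy as np

definitive = np.array([2, 3, 5, 6, 4, 2, 1, 5, 7, 3, 2, 4])    # definitive Kp (toy)
predicted  = np.array([2, 4, 5, 5, 3, 2, 2, 6, 6, 4, 2, 3])    # model prediction (toy)
persistence = np.roll(definitive, 1)      # previous value as forecast (first entry wraps)
threshold = 5

def contingency(obs, fc, thr):
    hits = np.sum((fc >= thr) & (obs >= thr))
    false_alarms = np.sum((fc >= thr) & (obs < thr))
    misses = np.sum((fc < thr) & (obs >= thr))
    correct_negatives = np.sum((fc < thr) & (obs < thr))
    return hits, false_alarms, misses, correct_negatives

for name, fc in [("model", predicted), ("persistence", persistence)]:
    h, f, m, c = contingency(definitive, fc, threshold)
    pod = h / (h + m) if (h + m) else float("nan")              # probability of detection
    far = f / (h + f) if (h + f) else float("nan")              # false-alarm ratio
    print(f"{name:12s} hits={h} false={f} miss={m} corr-neg={c}  POD={pod:.2f} FAR={far:.2f}")
```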

  18. Concepts of Model Verification and Validation

    SciTech Connect

    B.H.Thacker; S.W.Doebling; F.M.Hemez; M.C. Anderson; J.E. Pepin; E.A. Rodriguez

    2004-10-30

    Model verification and validation (V&V) is an enabling methodology for the development of computational models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost, and risk associated with full-scale testing of products, materials, and weapon systems. Quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions. The development of guidelines and procedures for conducting a model V&V program are currently being defined by a broad spectrum of researchers. This report reviews the concepts involved in such a program. Model V&V is a current topic of great interest to both government and industry. In response to a ban on the production of new strategic weapons and nuclear testing, the Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship Program (SSP). An objective of the SSP is to maintain a high level of confidence in the safety, reliability, and performance of the existing nuclear weapons stockpile in the absence of nuclear testing. This objective has challenged the national laboratories to develop high-confidence tools and methods that can be used to provide credible models needed for stockpile certification via numerical simulation. There has been a significant increase in activity recently to define V&V methods and procedures. The U.S. Department of Defense (DoD) Modeling and Simulation Office (DMSO) is working to develop fundamental concepts and terminology for V&V applied to high-level systems such as ballistic missile defense and battle management simulations. The American Society of Mechanical Engineers (ASME) has recently formed a Standards Committee for the development of V&V procedures for computational solid mechanics models. The Defense Nuclear Facilities Safety Board (DNFSB) has been a proponent of model V&V for all

  19. Validated predictive modelling of the environmental resistome.

    PubMed

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  20. Validated predictive modelling of the environmental resistome

    PubMed Central

    Amos, Gregory CA; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-01-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  1. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).
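
A toy sketch of the analytical redundant relation (ARR) idea: each relation must hold within tolerance when the sensors it involves are healthy, so the pattern of violated relations points to candidate faulty sensors. The sensor names, relations, and simple exoneration rule below are hypothetical, not the paper's algorithm.

```python
# Check a set of analytical redundant relations against current sensor readings and
# logically infer candidate faulty sensors (toy system, single-fault-style reasoning).
readings = {"flow_in": 10.0, "flow_out": 6.1, "tank_level_rate": 4.0,
            "temp_a": 20.2, "temp_b": 23.9}

arrs = {
    # relation name: (involved sensors, residual function that should be ~0 when healthy)
    "mass_balance": (("flow_in", "flow_out", "tank_level_rate"),
                     lambda r: r["flow_in"] - r["flow_out"] - r["tank_level_rate"]),
    "temp_match":   (("temp_a", "temp_b"),
                     lambda r: r["temp_a"] - r["temp_b"]),
}
tolerance = 0.5

violated = [name for name, (_, resid) in arrs.items() if abs(resid(readings)) > tolerance]
suspects = set()
for name in violated:
    suspects |= set(arrs[name][0])
# Sensors that also appear in a satisfied relation are exonerated (simplistic rule).
for name, (sensors, _) in arrs.items():
    if name not in violated:
        suspects -= set(sensors)

print("violated relations:", violated)
print("candidate faulty sensors:", suspects if suspects else "none")
```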

  2. Integrated Medical Model Verification, Validation, and Credibility

    NASA Technical Reports Server (NTRS)

    Walton, Marlei; Kerstman, Eric; Foy, Millennia; Shah, Ronak; Saile, Lynn; Boley, Lynn; Butler, Doug; Myers, Jerry

    2014-01-01

    The Integrated Medical Model (IMM) was designed to forecast relative changes for a specified set of crew health and mission success risk metrics by using a probabilistic (stochastic process) model based on historical data, cohort data, and subject matter expert opinion. A probabilistic approach is taken since exact (deterministic) results would not appropriately reflect the uncertainty in the IMM inputs. Once the IMM was conceptualized, a plan was needed to rigorously assess input information, framework and code, and output results of the IMM, and ensure that end user requests and requirements were considered during all stages of model development and implementation. METHODS: In 2008, the IMM team developed a comprehensive verification and validation (V&V) plan, which specified internal and external review criteria encompassing 1) verification of data and IMM structure to ensure proper implementation of the IMM, 2) several validation techniques to confirm that the simulation capability of the IMM appropriately represents occurrences and consequences of medical conditions during space missions, and 3) credibility processes to develop user confidence in the information derived from the IMM. When the NASA-STD-7009 (7009) was published, the IMM team updated their verification, validation, and credibility (VVC) project plan to meet 7009 requirements and include 7009 tools in reporting VVC status of the IMM. RESULTS: IMM VVC updates are compiled recurrently and include 7009 Compliance and Credibility matrices, IMM V&V Plan status, and a synopsis of any changes or updates to the IMM during the reporting period. Reporting tools have evolved over the lifetime of the IMM project to better communicate VVC status. This has included refining original 7009 methodology with augmentation from the NASA-STD-7009 Guidance Document. End user requests and requirements are being satisfied as evidenced by ISS Program acceptance of IMM risk forecasts, transition to an operational model and

  3. Boron-10 Lined Proportional Counter Model Validation

    SciTech Connect

    Lintereur, Azaree T.; Siciliano, Edward R.; Kouzes, Richard T.

    2012-06-30

    The Department of Energy Office of Nuclear Safeguards (NA-241) is supporting the project “Coincidence Counting With Boron-Based Alternative Neutron Detection Technology” at Pacific Northwest National Laboratory (PNNL) for the development of an alternative neutron coincidence counter. The goal of this project is to design, build and demonstrate a boron-lined proportional tube-based alternative system in the configuration of a coincidence counter. This report discusses the validation studies performed to establish the degree of accuracy of the computer modeling methods currently used to simulate the response of boron-lined tubes. This is the precursor to developing models for the uranium neutron coincidence collar under Task 2 of this project.

  4. Doubtful outcome of the validation of the Rome II questionnaire: validation of a symptom based diagnostic tool

    PubMed Central

    2009-01-01

    Background Questionnaires are used in research and clinical practice. For gastrointestinal complaints, the Rome II questionnaire is internationally known but not validated. The aim of this study was to validate a printed and a computerized version of Rome II, translated into Swedish. Results from various analyses are reported. Methods Volunteers from a population based colonoscopy study were included (n = 1011), together with patients seeking general practice (n = 45) and patients visiting a gastrointestinal specialists' clinic (n = 67). The questionnaire consists of 38 questions concerning gastrointestinal symptoms and complaints. Diagnoses are made according to a special code. Our validation included analyses of the translation, feasibility, predictability, reproducibility and reliability. Kappa values and overall agreement were measured. The factor structures were confirmed using a principal component analysis, and Cronbach's alpha was used to test the internal consistency. Results and Discussion Translation and back translation showed good agreement. The questionnaire was easy to understand and use. The reproducibility test showed kappa values of 0.60 for GERS, 0.52 for FD, and 0.47 for IBS. Kappa values and overall agreement for the predictability when the diagnoses by the questionnaire were compared to the diagnoses by the clinician were 0.26 and 90% for GERS, 0.18 and 85% for FD, and 0.49 and 86% for IBS. Corresponding figures for the agreement between the printed and the digital version were 0.50 and 92% for GERS, 0.64 and 95% for FD, and 0.76 and 95% for IBS. Cronbach's alpha coefficient for GERS was 0.75 with a span per item of 0.71 to 0.76. For FD the figures were 0.68 and 0.54 to 0.70, and for IBS 0.61 and 0.56 to 0.66. The Rome II questionnaire has never been thoroughly validated before, even though diagnoses made by the Rome criteria have been compared to diagnoses made in clinical practice. Conclusion The accuracy of the Swedish version of the Rome II is of
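
A worked toy example of the two agreement statistics quoted above, overall agreement and Cohen's kappa, for questionnaire-based versus clinician diagnoses; the 20 subject records are invented.

```python
# Overall agreement and Cohen's kappa for two binary raters (toy data).
import numpy as np

# 1 = diagnosis present, 0 = absent, for 20 hypothetical subjects.
questionnaire = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0])
clinician     = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0])

overall_agreement = np.mean(questionnaire == clinician)
p_yes = questionnaire.mean() * clinician.mean()
p_no = (1 - questionnaire.mean()) * (1 - clinician.mean())
expected_agreement = p_yes + p_no                      # agreement expected by chance
kappa = (overall_agreement - expected_agreement) / (1 - expected_agreement)

print(f"overall agreement = {overall_agreement:.0%}, Cohen's kappa = {kappa:.2f}")
```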

  5. Concurrent Validity of the Online Version of the Keirsey Temperament Sorter II.

    ERIC Educational Resources Information Center

    Kelly, Kevin R.; Jugovic, Heidi

    2001-01-01

    Data from the Keirsey Temperament Sorter II online instrument and Myers Briggs Type Indicator (MBTI) for 203 college freshmen were analyzed. Positive correlations appeared between the concurrent MBTI and Keirsey measures of psychological type, giving preliminary support to the validity of the online version of Keirsey. (Contains 28 references.)…

  6. Recommendations for the determination of valid mode II fracture toughnesses K{sub IIc}

    SciTech Connect

    Hiese, W.; Kalthoff, J.F.

    1999-07-01

    From a discussion of the sizes of the plastic zones at the tip of a crack under shear (Mode II) and tensile (Mode I) conditions of loading, hypotheses on specimen size requirements are derived for determining valid values of the shear fracture toughness K{sub IIc}. The following conclusions are drawn: The minimum specimen thickness for a K{sub IIc} test can be smaller, but the minimum in-plane specimen dimensions should be larger than for a K{sub Ic} test. For verification of these hypotheses, Mode II and additionally Mode I fracture toughnesses were determined for the aluminum alloy 7075 and the tool steel 90 MnCrV 8. Measurements were performed with specimens of different sizes with respect to the size of the crack tip plastic zones. The obtained data are in good agreement with the derived criteria for measuring Mode II fracture toughnesses K{sub IIc} and confirm their validity.

  7. Unit testing, model validation, and biological simulation

    PubMed Central

    Watts, Mark D.; Ghayoomie, S. Vahid; Larson, Stephen D.; Gerkin, Richard C.

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
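
    To make the idea of a model validation test concrete, the sketch below shows one possible shape for such a test in pytest style. The function name, the observed value, and the tolerance are hypothetical placeholders, not part of OpenWorm.

        # A model validation test treats an experimentally reported value as the
        # expected result and asserts that the simulation reproduces it within
        # the reported uncertainty. All names and numbers here are illustrative.

        def simulate_resting_potential():
            # Stand-in for a call into the real model code; returns millivolts.
            return -68.0

        OBSERVED_MV = -70.0     # hypothetical experimental mean
        TOLERANCE_MV = 5.0      # hypothetical experimental uncertainty

        def test_resting_potential_matches_experiment():
            predicted = simulate_resting_potential()
            assert abs(predicted - OBSERVED_MV) <= TOLERANCE_MV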

  8. Validation and application of the SCALP model

    NASA Astrophysics Data System (ADS)

    Smith, D. A. J.; Martin, C. E.; Saunders, C. J.; Smith, D. A.; Stokes, P. H.

    The Satellite Collision Assessment for the UK Licensing Process (SCALP) model was first introduced in a paper presented at IAC 2003. As a follow-on, this paper details the steps taken to validate the model and describes some of its applications. SCALP was developed for the British National Space Centre (BNSC) to support liability assessments as part of the UK's satellite license application process. Specifically, the model determines the collision risk that a satellite will pose to other orbiting objects during both its operational and post-mission phases. To date SCALP has been used to assess several LEO and GEO satellites for BNSC, and subsequently to provide the necessary technical basis for licenses to be issued. SCALP utilises the current population of operational satellites residing in LEO and GEO (extracted from ESA's DISCOS database) as a starting point. Realistic orbital dynamics, including the approximate simulation of generic GEO station-keeping strategies are used to propagate the objects over time. The method takes into account all of the appropriate orbit perturbations for LEO and GEO altitudes and allows rapid run times for multiple objects over time periods of many years. The orbit of a target satellite is also propagated in a similar fashion. During these orbital evolutions, a collision prediction and close approach algorithm assesses the collision risk posed to the satellite population. To validate SCALP, specific cases were set up to enable the comparison of collision risk results with other established models, such as the ESA MASTER model. Additionally, the propagation of operational GEO satellites within SCALP was compared with the expected behaviour of controlled GEO objects. The sensitivity of the model to changing the initial conditions of the target satellite such as semi-major axis and inclination has also been demonstrated. A further study shows the effect of including extra objects from the GTO population (which can pass through the LEO

  9. Validation of Arabic and English versions of the ARSMA-II Acculturation Rating Scale.

    PubMed

    Jadalla, Ahlam; Lee, Jerry

    2015-02-01

    To translate and adapt the Acculturation Rating Scale of Mexican-Americans II (ARSMA-II) for Arab Americans. A multistage translation process followed by a pilot and a large study. The translated and adapted versions, Acculturation Rating Scale for Arabic Americans-II Arabic and English (ARSAA-IIA, ARSAA-IIE), were validated in a sample of 297 Arab Americans. Factor analyses with principal axis factoring extractions and direct oblimin rotations were used to identify the underlying structure of ARSAA-II. Factor analysis confirmed the underlying structure of ARSAA-II and produced two interpretable factors labeled as 'Attraction to American Culture' (AAmC) and 'Attraction to Arabic Culture' (AArC). The Cronbach's alphas of AAmC and AArC were .89 and .85 respectively. Findings support ARSAA-II A & E to assess acculturation among Arab Americans. The emergent factors of ARSAA-II support the theoretical structure of the original ARSMA-II tool and show high internal consistency. PMID:23934518

  10. Full-Scale Cookoff Model Validation Experiments

    SciTech Connect

    McClelland, M A; Rattanapote, M K; Heimdahl, E R; Erikson, W E; Curran, P O; Atwood, A I

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of unreacted explosive was recovered in the end heated experiment and less than 30 percent recovered in the side heated test.

  11. Some useful statistical methods for model validation.

    PubMed Central

    Marcus, A H; Elias, R W

    1998-01-01

    Although formal hypothesis tests provide a convenient framework for displaying the statistical results of empirical comparisons, standard tests should not be used without consideration of underlying measurement error structure. As part of the validation process, predictions of individual blood lead concentrations from models with site-specific input parameters are often compared with blood lead concentrations measured in field studies that also report lead concentrations in environmental media (soil, dust, water, paint) as surrogates for exposure. Measurements of these environmental media are subject to several sources of variability, including temporal and spatial sampling, sample preparation and chemical analysis, and data entry or recording. Adjustments for measurement error must be made before statistical tests can be used to empirically compare environmental data with model predictions. This report illustrates the effect of measurement error correction using a real dataset of child blood lead concentrations for an undisclosed midwestern community. We illustrate both the apparent failure of some standard regression tests and the success of adjustment of such tests for measurement error using the SIMEX (simulation-extrapolation) procedure. This procedure adds simulated measurement error to model predictions and then subtracts the total measurement error, analogous to the method of standard additions used by analytical chemists. PMID:9860913
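
    The SIMEX idea described above can be sketched in a few lines: extra simulated measurement error is added at several multiples of the assumed error variance, the regression is refit each time, and the coefficient is extrapolated back to the zero-error case. The data below are synthetic and the error standard deviation is assumed known, which is a simplification of the real procedure.

        import numpy as np

        rng = np.random.default_rng(0)
        n, beta_true, err_sd = 500, 0.8, 0.5
        x_true = rng.normal(size=n)
        y = beta_true * x_true + rng.normal(scale=0.3, size=n)
        x_obs = x_true + rng.normal(scale=err_sd, size=n)        # error-prone predictor

        lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        slopes = []
        for lam in lambdas:
            # Average the naive slope over many replicates of added error.
            reps = [np.polyfit(x_obs + rng.normal(scale=np.sqrt(lam) * err_sd, size=n),
                               y, 1)[0] for _ in range(200)]
            slopes.append(np.mean(reps))

        # Extrapolate the slope-vs-lambda curve back to lambda = -1 (no measurement error).
        quad = np.polyfit(lambdas, slopes, 2)
        beta_simex = np.polyval(quad, -1.0)
        print(f"naive slope = {slopes[0]:.3f}, SIMEX-corrected slope = {beta_simex:.3f}")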

  12. Diurnal ocean surface layer model validation

    NASA Technical Reports Server (NTRS)

    Hawkins, Jeffrey D.; May, Douglas A.; Abell, Fred, Jr.

    1990-01-01

    The diurnal ocean surface layer (DOSL) model at the Fleet Numerical Oceanography Center forecasts the 24-hour change in global sea surface temperature (SST). Validating the DOSL model is a difficult task due to the huge areas involved and the lack of in situ measurements. Therefore, this report details the use of satellite infrared multichannel SST imagery to provide day and night SSTs that can be directly compared to DOSL products. This water-vapor-corrected imagery has the advantages of high thermal sensitivity (0.12 C), large synoptic coverage (nearly 3000 km across), and high spatial resolution that enables diurnal heating events to be readily located and mapped. Several case studies in the subtropical North Atlantic readily show that DOSL results during extreme heating periods agree very well with satellite-imagery-derived values in terms of the pattern of diurnal warming. The low wind and cloud-free conditions necessary for these events to occur lend themselves well to observation via infrared imagery. Thus, the normally cloud-limited aspects of satellite imagery do not come into play for these particular environmental conditions. The fact that the DOSL model does well in extreme events is beneficial from the standpoint that these cases can be associated with the destruction of the surface acoustic duct. This so-called afternoon effect happens as the afternoon warming of the mixed layer disrupts the sound channel and the propagation of acoustic energy.

  13. Simultaneous heat and water model: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A discussion of calibration and validation procedures used for the Simultaneous Heat and Water model is presented. Three calibration approaches are presented and compared for simulating soil water content. Approaches included a stepwise local search methodology, trial-and-error calibration, and an...

  14. Outward Bound Outcome Model Validation and Multilevel Modeling

    ERIC Educational Resources Information Center

    Luo, Yuan-Chun

    2011-01-01

    This study was intended to measure construct validity for the Outward Bound Outcomes Instrument (OBOI) and to predict outcome achievement from individual characteristics and course attributes using multilevel modeling. A sample of 2,340 participants was collected by Outward Bound USA between May and September 2009 using the OBOI. Two phases of…

  15. Validation of HEDR models. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid.

  16. End-to-end modelling of He II flow systems

    NASA Technical Reports Server (NTRS)

    Mord, A. J.; Snyder, H. A.; Newell, D. A.

    1992-01-01

    A practical computer code has been developed which uses the accepted two-fluid model to simulate He II flow in complicated systems. The full set of equations is used, retaining the coupling between the pressure, temperature and velocity fields. This permits modeling He II flow over the full range of conditions, from strongly or weakly driven flow through large pipes, narrow channels and porous media. The system may include most of the components used in modern superfluid flow systems: non-ideal thermomechanical pumps, tapered sections, constrictions, lines with heated side walls and heat exchangers. The model is validated by comparison with published experimental data. It is applied to a complex system to show some of the non-intuitive feedback effects that can occur. This code is ready to be used as a design tool for practical applications of He II. It can also be used for the design of He II experiments and as a tool for comparison of experimental data with the standard two-fluid model.

  17. Validating agent based models through virtual worlds.

    SciTech Connect

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOG), where large numbers of people interact within a complex environment for long periods of time provide an alternative source of data. These environments provide a rich social environment where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior

  18. The SEGUE Stellar Parameter Pipeline. II. Validation with Galactic Globular and Open Clusters

    SciTech Connect

    Lee, Y.S.; Beers, T.C.; Sivarani, T.; Johnson, J.A.; An, D.; Wilhelm, R.; Prieto, C.Allende; Koesterke, L.; Re Fiorentin, P.; Bailer-Jones, C.A.L.; Norris, J.E.

    2007-10-01

    The authors validate the performance and accuracy of the current SEGUE (Sloan Extension for Galactic Understanding and Exploration) Stellar Parameter Pipeline (SSPP), which determines stellar atmospheric parameters (effective temperature, surface gravity, and metallicity) by comparing derived overall metallicities and radial velocities from selected likely members of three globular clusters (M 13, M 15, and M 2) and two open clusters (NGC 2420 and M 67) to the literature values. Spectroscopic and photometric data obtained during the course of the original Sloan Digital Sky Survey (SDSS-1) and its first extension (SDSS-II/SEGUE) are used to determine stellar radial velocities and atmospheric parameter estimates for stars in these clusters. Based on the scatter in the metallicities derived for the members of each cluster, they quantify the typical uncertainty of the SSPP values, {sigma}([Fe/H]) = 0.13 dex for stars in the range of 4500 K {le} T{sub eff} {le} 7500 K and 2.0 {le} log g {le} 5.0, at least over the metallicity interval spanned by the clusters studied (-2.3 {le} [Fe/H] < 0). The surface gravities and effective temperatures derived by the SSPP are also compared with those estimated from the comparison of the color-magnitude diagrams with stellar evolution models; they find satisfactory agreement. At present, the SSPP underestimates [Fe/H] for near-solar-metallicity stars, represented by members of M 67 in this study, by {approx} 0.3 dex.

  19. Geochemistry Model Validation Report: Material Degradation and Release Model

    SciTech Connect

    H. Stockman

    2001-09-28

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to ''Technical Work Plan for: Waste Package Design Description for LA'' (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, ''Analyses and Models'' (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17).

  20. Tutorial review on validation of liquid chromatography-mass spectrometry methods: part II.

    PubMed

    Kruve, Anneli; Rebane, Riin; Kipper, Karin; Oldekop, Maarja-Liisa; Evard, Hanno; Herodes, Koit; Ravio, Pekka; Leito, Ivo

    2015-04-22

    This is Part II of a tutorial review intended to give an overview of the state of the art of method validation in liquid chromatography mass spectrometry (LC-MS) and discuss specific issues that arise with MS (and MS-MS) detection in LC (as opposed to the "conventional" detectors). Part II starts by briefly introducing the main quantitation methods and then addresses the performance related to quantification: linearity of signal, sensitivity, precision, trueness, accuracy, stability and measurement uncertainty. The last section is devoted to practical considerations in validation. For every performance characteristic, its essence and terminology are addressed, the current status of treating it is reviewed, and recommendations are given on how to handle it, specifically in the case of LC-MS methods. PMID:25819784

  1. Micromachined accelerometer design, modeling and validation

    SciTech Connect

    Davies, B.R.; Bateman, V.I.; Brown, F.A.; Montague, S.; Murray, J.R.; Rey, D.; Smith, J.H.

    1998-04-01

    Micromachining technologies enable the development of low-cost devices capable of sensing motion in a reliable and accurate manner. The development of various surface micromachined accelerometers and gyroscopes to sense motion is an ongoing activity at Sandia National Laboratories. In addition, Sandia has developed a fabrication process for integrating both the micromechanical structures and microelectronics circuitry of Micro-Electro-Mechanical Systems (MEMS) on the same chip. This integrated surface micromachining process provides substantial performance and reliability advantages in the development of MEMS accelerometers and gyros. A Sandia MEMS team developed a single-axis, micromachined silicon accelerometer capable of surviving and measuring very high accelerations, up to 50,000 times the acceleration due to gravity or 50 k-G (actually measured to 46,000 G). The Sandia integrated surface micromachining process was selected for fabrication of the sensor due to the extreme measurement sensitivity potential associated with integrated microelectronics. Measurement electronics capable of measuring attofarad (10{sup {minus}18} Farad) changes in capacitance were required due to the very small accelerometer proof mass (< 200 {times} 10{sup {minus}9} gram) used in this surface micromachining process. The small proof mass corresponded to small sensor deflections which in turn required very sensitive electronics to enable accurate acceleration measurement over a range of 1 to 50 k-G. A prototype sensor, based on a suspended plate mass configuration, was developed and the details of the design, modeling, and validation of the device will be presented in this paper. The device was analyzed using both conventional lumped parameter modeling techniques and finite element analysis tools. The device was tested and performed well over its design range.

  2. External validation of a Cox prognostic model: principles and methods

    PubMed Central

    2013-01-01

    Background A prognostic model should not enter clinical practice unless it has been demonstrated that it performs a useful role. External validation denotes evaluation of model performance in a sample independent of that used to develop the model. Unlike for logistic regression models, external validation of Cox models is sparsely treated in the literature. Successful validation of a model means achieving satisfactory discrimination and calibration (prediction accuracy) in the validation sample. Validating Cox models is not straightforward because event probabilities are estimated relative to an unspecified baseline function. Methods We describe statistical approaches to external validation of a published Cox model according to the level of published information, specifically (1) the prognostic index only, (2) the prognostic index together with Kaplan-Meier curves for risk groups, and (3) the first two plus the baseline survival curve (the estimated survival function at the mean prognostic index across the sample). The most challenging task, requiring level 3 information, is assessing calibration, for which we suggest a method of approximating the baseline survival function. Results We apply the methods to two comparable datasets in primary breast cancer, treating one as derivation and the other as validation sample. Results are presented for discrimination and calibration. We demonstrate plots of survival probabilities that can assist model evaluation. Conclusions Our validation methods are applicable to a wide range of prognostic studies and provide researchers with a toolkit for external validation of a published Cox model. PMID:23496923
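
    Discrimination in the validation sample is commonly summarized by Harrell's concordance index between the published prognostic index and the observed outcomes. The sketch below computes it directly from the definition on synthetic data; it is a simplified illustration under those assumptions, not the authors' code.

        import numpy as np

        def concordance_index(time, event, risk_score):
            """Fraction of comparable pairs in which the subject with the higher
            risk score has the shorter observed survival time."""
            concordant, comparable = 0.0, 0
            n = len(time)
            for i in range(n):
                for j in range(n):
                    # Pair is comparable if subject i failed before subject j's time.
                    if event[i] == 1 and time[i] < time[j]:
                        comparable += 1
                        if risk_score[i] > risk_score[j]:
                            concordant += 1
                        elif risk_score[i] == risk_score[j]:
                            concordant += 0.5
            return concordant / comparable

        rng = np.random.default_rng(1)
        n = 300
        risk = rng.normal(size=n)                     # prognostic index from the published model
        t = rng.exponential(scale=np.exp(-risk))      # higher risk -> shorter survival
        c = rng.exponential(scale=2.0, size=n)        # censoring times
        time = np.minimum(t, c)
        event = (t <= c).astype(int)

        print(f"validation C-index = {concordance_index(time, event, risk):.3f}")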

  3. Techniques and Issues in Agent-Based Modeling Validation

    SciTech Connect

    Pullum, Laura L; Cui, Xiaohui

    2012-01-01

    Validation of simulation models is extremely important. It ensures that the right model has been built and lends confidence to the use of that model to inform critical decisions. Agent-based models (ABM) have been widely deployed in different fields for studying the collective behavior of large numbers of interacting agents. However, researchers have only recently started to consider the issues of validation. Compared to other simulation models, ABM has many differences in model development, usage and validation. An ABM is inherently easier to build than a classical simulation, but more difficult to describe formally since it is closer to human cognition. Using multi-agent models to study complex systems has attracted criticisms because of the challenges involved in their validation [1]. In this report, we describe the challenge of ABM validation and present a novel approach we recently developed for an ABM system.

  4. Model for Use of Sociometry to Validate Attitude Measures.

    ERIC Educational Resources Information Center

    McGuiness, Thomas P.; Stank, Peggy L.

    A study concerning the development and validation of an instrument intended to measure Goal II of quality education is presented. This goal is that quality education should help every child acquire understanding and appreciation of persons belonging to social, cultural and ethnic groups different from his own. The rationale for measurement…

  5. A methodology for validating numerical ground water models.

    PubMed

    Hassan, Ahmed E

    2004-01-01

    Ground water validation is one of the most challenging issues facing modelers and hydrogeologists. Increased complexity in ground water models has created a gap between model predictions and the ability to validate or build confidence in predictions. Specific procedures and tests that can be easily adapted and applied to determine the validity of site-specific ground water models do not exist. This is true for both deterministic and stochastic models, with stochastic models posing the more difficult validation problem. The objective of this paper is to propose a general validation approach that addresses important issues recognized in previous validation studies, conferences, and symposia. The proposed method links the processes for building, calibrating, evaluating, and validating models in an iterative loop. The approach focuses on using collected validation data to reduce uncertainty in the model and narrow the range of possible outcomes. This method is designed for stochastic numerical models utilizing Monte Carlo simulation approaches, but it can be easily adapted for deterministic models. The proposed methodology relies on the premise that absolute validity is not theoretically possible, nor is it a regulatory requirement. Rather, the proposed methodology highlights the importance of testing various aspects of the model and using diverse statistical tools for rigorous checking and confidence building in the model and its predictions. It is this confidence that will encourage regulators and the public to accept decisions based on the model predictions. This validation approach will be applied to a model, described in this paper, dealing with an underground nuclear test site in rural Nevada. PMID:15161152
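
    One simple test in the spirit of the approach described above is a coverage check: what fraction of the collected validation observations fall inside the Monte Carlo ensemble's prediction interval? The sketch below uses synthetic values for both the ensemble and the field data, so it only illustrates the bookkeeping, not a real site model.

        import numpy as np

        rng = np.random.default_rng(2)
        n_locations, n_realizations = 12, 1000

        # Ensemble of simulated values (rows: realizations, columns: observation locations).
        ensemble = rng.normal(loc=10.0, scale=1.5, size=(n_realizations, n_locations))
        observed = rng.normal(loc=10.2, scale=1.5, size=n_locations)   # validation data

        lower = np.percentile(ensemble, 5, axis=0)
        upper = np.percentile(ensemble, 95, axis=0)
        inside = (observed >= lower) & (observed <= upper)

        print(f"coverage of the 90% prediction interval: {inside.mean():.0%}")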

  6. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    SciTech Connect

    Freeman, J.; Whitmore, J.; Kaffine, L.; Blair, N.; Dobos, A. P.

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.
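
    The headline metric in this comparison, annualized prediction error, is the relative difference between the modeled and measured annual energy totals. A minimal sketch with a synthetic hourly series (the numbers are invented, not SAM output):

        import numpy as np

        rng = np.random.default_rng(3)
        measured_kwh = np.clip(rng.normal(40.0, 25.0, size=8760), 0, None)   # hourly energy
        modeled_kwh = measured_kwh * rng.normal(1.02, 0.05, size=8760)       # model with a small bias

        annualized_error = (modeled_kwh.sum() - measured_kwh.sum()) / measured_kwh.sum()
        print(f"annualized prediction error = {annualized_error:+.1%}")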

  7. SAGE III Aerosol Extinction Validation in the Arctic Winter: Comparisons with SAGE II and POAM III

    NASA Technical Reports Server (NTRS)

    Thomason, L. W.; Poole, L. R.; Randall, C. E.

    2007-01-01

    The use of SAGE III multiwavelength aerosol extinction coefficient measurements to infer PSC type is contingent on the robustness of both the extinction magnitude and its spectral variation. Past validation with SAGE II and other similar measurements has shown that the SAGE III extinction coefficient measurements are reliable though the comparisons have been greatly weighted toward measurements made at mid-latitudes. Some aerosol comparisons made in the Arctic winter as a part of SOLVE II suggested that SAGE III values, particularly at longer wavelengths, are too small with the implication that both the magnitude and the wavelength dependence are not reliable. Comparisons with POAM III have also suggested a similar discrepancy. Herein, we use SAGE II data as a common standard for comparison of SAGE III and POAM III measurements in the Arctic winters of 2002/2003 through 2004/2005. During the winter, SAGE II measurements are made infrequently at the same latitudes as these instruments. We have mitigated this problem through the use of potential vorticity as a spatial coordinate and thus greatly increased the number of coincident events. We find that SAGE II and III extinction coefficient measurements show a high degree of compatibility at both 1020 nm and 450 nm except for a 10-20% bias at both wavelengths. In addition, the 452 to 1020-nm extinction ratio shows a consistent bias of approx. 30% throughout the lower stratosphere. We also find that SAGE II and POAM III are on average consistent though the comparisons show a much higher variability and larger bias than SAGE II/III comparisons. In addition, we find that the two data sets are not well correlated below 18 km. Overall, we find both the extinction values and the spectral dependence from SAGE III are robust and we find no evidence of a significant defect within the Arctic vortex.

  8. Homology modeling, binding site identification and docking study of human angiotensin II type I (Ang II-AT1) receptor.

    PubMed

    Vyas, Vivek K; Ghate, Manjunath; Patel, Kinjal; Qureshi, Gulamnizami; Shah, Surmil

    2015-08-01

    Ang II-AT1 receptors play an important role in mediating virtually all of the physiological actions of Ang II. Several drugs (SARTANs) are available, which can block the AT1 receptor effectively and lower the blood pressure in patients with hypertension. Currently, there is no experimental Ang II-AT1 structure available; therefore, in this study we modeled the Ang II-AT1 receptor structure using homology modeling followed by identification and characterization of binding sites and thereby assessing druggability of the receptor. Homology models were constructed using MODELLER and the I-TASSER server, refined and validated using PROCHECK, in which 96.9% of 318 residues were present in the favoured regions of the Ramachandran plots. Various Ang II-AT1 receptor antagonist drugs are available in the market as antihypertensive drugs, so we performed a docking study with the binding site prediction algorithms to predict different binding pockets on the modeled proteins. The identification of 3D structures and binding sites for various known drugs will guide us for the structure-based drug design of novel compounds as Ang II-AT1 receptor antagonists for the treatment of hypertension. PMID:26349961

  9. Development and Validation of the Juvenile Sexual Offense Recidivism Risk Assessment Tool-II.

    PubMed

    Epperson, Douglas L; Ralston, Christopher A

    2015-12-01

    This article describes the development and initial validation of the Juvenile Sexual Offense Recidivism Risk Assessment Tool-II (JSORRAT-II). Potential predictor variables were extracted from case file information for an exhaustive sample of 636 juveniles in Utah who sexually offended between 1990 and 1992. Simultaneous and hierarchical logistic regression analyses were used to identify the group of variables that was most predictive of subsequent juvenile sexual recidivism. A simple categorical scoring system was applied to these variables without meaningful loss of accuracy in the development sample for any sexual (area under the curve [AUC] = .89) and sexually violent (AUC = .89) juvenile recidivism. The JSORRAT-II was cross-validated on an exhaustive sample of 566 juveniles who had sexually offended in Utah in 1996 and 1997. Reliability of scoring the tool across five coders was quite high (intraclass correlation coefficient [ICC] = .96). Relative to the development sample, however, there was considerable shrinkage in the indices of predictive accuracy for any sexual (AUC = .65) and sexually violent (AUC = .65) juvenile recidivism. The reduced level of accuracy was not explained by severity of the index sexual offense, time at risk, or missing data. Capitalization on chance and other explanations for the possible reduction in predictive accuracy are explored, and potential uses and limitations of the tool are discussed. PMID:24492618

  10. A comprehensive model for piezoceramic actuators: modelling, validation and application

    NASA Astrophysics Data System (ADS)

    Quant, Mario; Elizalde, Hugo; Flores, Abiud; Ramírez, Ricardo; Orta, Pedro; Song, Gangbing

    2009-12-01

    This paper presents a comprehensive model for piezoceramic actuators (PAs), which accounts for hysteresis, non-linear electric field and dynamic effects. The hysteresis model is based on the widely used general Maxwell slip model, while an enhanced electro-mechanical non-linear model replaces the linear constitutive equations commonly used. Further on, a linear second order model compensates the frequency response of the actuator. Each individual model is fully characterized from experimental data yielded by a specific PA, then incorporated into a comprehensive 'direct' model able to determine the output strain based on the applied input voltage, fully compensating the aforementioned effects, where the term 'direct' represents an electrical-to-mechanical operating path. The 'direct' model was implemented in a Matlab/Simulink environment and successfully validated via experimental results, exhibiting higher accuracy and simplicity than many published models. This simplicity would allow a straightforward inclusion of other behaviour such as creep, ageing, material non-linearity, etc, if such parameters are important for a particular application. Based on the same formulation, two other models are also presented: the first is an 'alternate' model intended to operate within a force-controlled scheme (instead of a displacement/position control), thus able to capture the complex mechanical interactions occurring between a PA and its host structure. The second development is an 'inverse' model, able to operate within an open-loop control scheme, that is, yielding a 'linearized' PA behaviour. The performance of the developed models is demonstrated via a numerical sample case simulated in Matlab/Simulink, consisting of a PA coupled to a simple mechanical system, aimed at shifting the natural frequency of the latter.
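
    The Maxwell slip element mentioned above can be sketched as a bank of parallel springs that each saturate at a different force level; sweeping the input up and down traces a hysteresis loop. The stiffnesses and saturation levels below are illustrative placeholders, not parameters identified from a real actuator, and the sketch is only a generic Maxwell slip operator, not the authors' full model.

        import numpy as np

        def maxwell_slip_response(u, k, f_sat):
            """Quasi-static output of a Maxwell slip operator for an input history u."""
            state = np.zeros_like(k)       # internal element forces
            out, prev = [], 0.0
            for x in u:
                state = np.clip(state + k * (x - prev), -f_sat, f_sat)
                out.append(state.sum())
                prev = x
            return np.array(out)

        k = np.array([5.0, 3.0, 2.0, 1.0])       # element stiffnesses (illustrative)
        f_sat = np.array([0.2, 0.5, 1.0, 2.0])   # element saturation forces (illustrative)

        # One triangle-wave cycle of the input (e.g., normalized voltage).
        u = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, -1, 100), np.linspace(-1, 0, 50)])
        y = maxwell_slip_response(u, k, f_sat)
        print(f"output range over one cycle: {y.min():.2f} to {y.max():.2f}")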

  11. Searching For Valid Psychiatric Phenotypes: Discrete Latent Variable Models

    PubMed Central

    Leoutsakos, Jeannie-Marie S.; Zandi, Peter P.; Bandeen-Roche, Karen; Lyketsos, Constantine G.

    2010-01-01

    Introduction A primary challenge in psychiatric genetics is the lack of a completely validated system of classification for mental disorders. Appropriate statistical methods are needed to empirically derive more homogenous disorder subtypes. Methods Using the framework of Robins & Guze’s (1970) five phases, latent variable models to derive and validate diagnostic groups are described. A process of iterative validation is proposed through which refined phenotypes would facilitate research on genetics, pathogenesis, and treatment, which would in turn aid further refinement of disorder definitions. Conclusions Latent variable methods are useful tools for defining and validating psychiatric phenotypes. Further methodological research should address sample size issues and application to iterative validation. PMID:20187060

  12. Three Dimensional Vapor Intrusion Modeling: Model Validation and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Akbariyeh, S.; Patterson, B.; Rakoczy, A.; Li, Y.

    2013-12-01

    Volatile organic chemicals (VOCs), such as chlorinated solvents and petroleum hydrocarbons, are prevalent groundwater contaminants due to their improper disposal and accidental spillage. In addition to contaminating groundwater, VOCs may partition into the overlying vadose zone and enter buildings through gaps and cracks in foundation slabs or basement walls, a process termed vapor intrusion. Vapor intrusion of VOCs has been recognized as a detrimental source for human exposures to potential carcinogenic or toxic compounds. The simulation of vapor intrusion from a subsurface source has been the focus of many studies to better understand the process and guide field investigation. While multiple analytical and numerical models were developed to simulate the vapor intrusion process, detailed validation of these models against well controlled experiments is still lacking, due to the complexity and uncertainties associated with site characterization and soil gas flux and indoor air concentration measurement. In this work, we present an effort to validate a three-dimensional vapor intrusion model based on a well-controlled experimental quantification of the vapor intrusion pathways into a slab-on-ground building under varying environmental conditions. Finally, a probabilistic approach based on Monte Carlo simulations is implemented to determine the probability distribution of indoor air concentration based on the most uncertain input parameters.

  13. Cost model validation: a technical and cultural approach

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Rosenberg, L.; Roust, K.; Warfield, K.

    2001-01-01

    This paper summarizes how JPL's parametric mission cost model (PMCM) has been validated using both formal statistical methods and a variety of peer and management reviews in order to establish organizational acceptance of the cost model estimates.

  14. Translation, adaptation and validation of a Portuguese version of the Moorehead-Ardelt Quality of Life Questionnaire II.

    PubMed

    Maciel, João; Infante, Paulo; Ribeiro, Susana; Ferreira, André; Silva, Artur C; Caravana, Jorge; Carvalho, Manuel G

    2014-11-01

    The prevalence of obesity has increased worldwide. An assessment of the impact of obesity on health-related quality of life (HRQoL) requires specific instruments. The Moorehead-Ardelt Quality of Life Questionnaire II (MA-II) is a widely used instrument to assess HRQoL in morbidly obese patients. The objective of this study was to translate and validate a Portuguese version of the MA-II.The study included forward and backward translations of the original MA-II. The reliability of the Portuguese MA-II was estimated using the internal consistency and test-retest methods. For validation purposes, the Spearman's rank correlation coefficient was used to evaluate the correlation between the Portuguese MA-II and the Portuguese versions of two other questionnaires, the 36-item Short Form Health Survey (SF-36) and the Impact of Weight on Quality of Life-Lite (IWQOL-Lite).One hundred and fifty morbidly obese patients were randomly assigned to test the reliability and validity of the Portuguese MA-II. Good internal consistency was demonstrated by a Cronbach's alpha coefficient of 0.80, and a very good agreement in terms of test-retest reliability was recorded, with an overall intraclass correlation coefficient (ICC) of 0.88. The total sums of MA-II scores and each item of MA-II were significantly correlated with all domains of SF-36 and IWQOL-Lite. A statistically significant negative correlation was found between the MA-II total score and BMI. Moreover, age, gender and surgical status were independent predictors of MA-II total score.A reliable and valid Portuguese version of the MA-II was produced, thus enabling the routine use of MA-II in the morbidly obese Portuguese population. PMID:24817428

  15. ECOLOGICAL MODEL TESTING: VERIFICATION, VALIDATION OR NEITHER?

    EPA Science Inventory

    Consider the need to make a management decision about a declining animal population. Two models are available to help. Before a decision is made based on model results, the astute manager or policy maker may ask, "Do the models work?" Or, "Have the models been verified or validat...

  16. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  17. Using virtual reality to validate system models

    SciTech Connect

    Winter, V.L.; Caudell, T.P.

    1999-12-09

    To date, most validation techniques are highly biased towards calculations involving symbolic representations of problems. These calculations are either formal (in the case of consistency and completeness checks), or informal in the case of code inspections. The authors believe that an essential type of evidence of the correctness of the formalization process must be provided by (i.e., must originate from) human-based calculation. They further believe that human calculation can be significantly amplified by shifting from symbolic representations to graphical representations. This paper describes their preliminary efforts in realizing such a representational shift.

  18. Validation of Numerical Shallow Water Models for Tidal Lagoons

    SciTech Connect

    Eliason, D.; Bourgeois, A.

    1999-11-01

    An analytical solution is presented for the case of a stratified, tidally forced lagoon. This solution, especially its energetics, is useful for the validation of numerical shallow water models under stratified, tidally forced conditions. The utility of the analytical solution for validation is demonstrated for a simple finite difference numerical model. A comparison is presented of the energetics of the numerical and analytical solutions in terms of the convergence of model results to the analytical solution with increasing spatial and temporal resolution.
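
    A typical way to use such an analytical solution is a convergence check: the numerical model's error is measured at several resolutions and the observed order of accuracy is estimated from the slope of log(error) against log(step size). The toy problem below (forward Euler applied to dy/dt = -y) stands in for the shallow water model purely to show the bookkeeping.

        import numpy as np

        def euler_solution_at_t1(dt):
            # Forward Euler integration of dy/dt = -y from t = 0 to t = 1, y(0) = 1.
            steps = int(round(1.0 / dt))
            y = 1.0
            for _ in range(steps):
                y += dt * (-y)
            return y

        exact = np.exp(-1.0)
        dts = np.array([0.1, 0.05, 0.025, 0.0125])
        errors = np.array([abs(euler_solution_at_t1(dt) - exact) for dt in dts])

        # Slope of log(error) vs. log(dt) estimates the observed order of accuracy.
        order = np.polyfit(np.log(dts), np.log(errors), 1)[0]
        print(f"observed order of accuracy ~ {order:.2f} (forward Euler is first order)")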

  19. Validation of vehicle dynamics simulation models - a review

    NASA Astrophysics Data System (ADS)

    Kutluay, Emir; Winner, Hermann

    2014-02-01

    In this work, a literature survey on the validation of vehicle dynamics simulation models is presented. Estimating the dynamic responses of existing or proposed vehicles has a wide array of applications in the development of vehicle technologies, e.g. active suspensions, controller design, driver assistance systems, etc. Although simulation environments, measurement tools and mathematical theories on vehicle dynamics are well established, the methodical link between the experimental test data and validity analysis of the simulation model is still lacking. This report presents different views on the definition of validation, and its usage in vehicle dynamics simulation models.

  20. Adolescent Personality: A Five-Factor Model Construct Validation

    ERIC Educational Resources Information Center

    Baker, Spencer T.; Victor, James B.; Chambers, Anthony L.; Halverson, Jr., Charles F.

    2004-01-01

    The purpose of this study was to investigate convergent and discriminant validity of the five-factor model of adolescent personality in a school setting using three different raters (methods): self-ratings, peer ratings, and teacher ratings. The authors investigated validity through a multitrait-multimethod matrix and a confirmatory factor…

  1. Teacher Change Beliefs: Validating a Scale with Structural Equation Modelling

    ERIC Educational Resources Information Center

    Kin, Tai Mei; Abdull Kareem, Omar; Nordin, Mohamad Sahari; Wai Bing, Khuan

    2015-01-01

    The objectives of the study were to validate a substantiated Teacher Change Beliefs Model (TCBM) and an instrument to identify critical components of teacher change beliefs (TCB) in Malaysian secondary schools. Five different pilot test approaches were applied to ensure the validity and reliability of the instrument. A total of 936 teachers from…

  2. Validation of an Evaluation Model for Learning Management Systems

    ERIC Educational Resources Information Center

    Kim, S. W.; Lee, M. G.

    2008-01-01

    This study aims to validate a model for evaluating learning management systems (LMS) used in e-learning fields. A survey of 163 e-learning experts, regarding 81 validation items developed through literature review, was used to ascertain the importance of the criteria. A concise list of explanatory constructs, including two principal factors, was…

  3. Validation of a heteroscedastic hazards regression model.

    PubMed

    Wu, Hong-Dar Isaac; Hsieh, Fushing; Chen, Chen-Hsin

    2002-03-01

    A Cox-type regression model accommodating heteroscedasticity, with a power factor of the baseline cumulative hazard, is investigated for analyzing data with crossing hazards behavior. Since the approach of partial likelihood cannot eliminate the baseline hazard, an overidentified estimating equation (OEE) approach is introduced in the estimation procedure. Its by-product, a model checking statistic, is presented to test for the overall adequacy of the heteroscedastic model. Further, under the heteroscedastic model setting, we propose two statistics to test the proportional hazards assumption. Implementation of this model is illustrated in a data analysis of a cancer clinical trial. PMID:11878222

  4. Validation of the Millon Clinical Multiaxial Inventory for Axis II disorders: does it meet the Daubert standard?

    PubMed

    Rogers, R; Salekin, R T; Sewell, K W

    1999-08-01

    Relevant to forensic practice, the Supreme Court in Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993) established the boundaries for the admissibility of scientific evidence that take into account its trustworthiness as assessed via evidentiary reliability. In conducting forensic evaluations, psychologists and other mental health professionals must be able to offer valid diagnoses, including Axis II disorders. The most widely available measure of personality disorders is the Millon Clinical Multiaxial Inventory (MCMI) and its subsequent revisions (MCMI-II and MCMI-III). We address the critical question, "Do the MCMI-II and MCMI-III meet the requirements of Daubert?" Fundamental problems in the scientific validity and error rates for MCMI-III appear to preclude its admissibility under Daubert for the assessment of Axis II disorders. We address the construct validity for the MCMI and MCMI-II via a meta-analysis of 33 studies. The resulting multitrait-multimethod approach allowed us to address their convergent and discriminant validity through method effects (Marsh, 1990). With reference to Daubert, the results suggest a circumscribed use for the MCMI-II with good evidence of construct validity for Avoidant, Schizotypal, and Borderline personality disorders. PMID:10439726

  5. Line emission from H II blister models

    NASA Technical Reports Server (NTRS)

    Rubin, R. H.

    1984-01-01

    Numerical techniques to calculate the thermal and geometric properties of line emission from H II 'blister' regions are presented. It is assumed that the density distributions of the H II regions are a function of two dimensions, with rotational symmetry specifying the shape in three-dimensions. The thermal and ionization equilibrium equations of the problem are solved by spherical modeling, and a spherical sector approximation is used to simplify the three-dimensional treatment of diffuse ionizing radiation. The global properties of H II 'blister' regions near the edges of a molecular cloud are simulated by means of the geometry/density distribution, and the results are compared with observational data. It is shown that there is a monotonic increase of peak surface brightness from the i = 0 deg (pole-on) observational position to the i = 90 deg (edge-on) position. The enhancement of the line peak intensity from the edge-on to the pole-on positions is found to depend on the density, stratification, ionization, and electron temperature weighting. It is found that as i increases, the position of peak line brightness of the lower excitation species is displaced to the high-density side of the high excitation species.

  6. Psychometric validation of the BDI-II among HIV-positive CHARTER study participants.

    PubMed

    Hobkirk, Andréa L; Starosta, Amy J; De Leo, Joseph A; Marra, Christina M; Heaton, Robert K; Earleywine, Mitch

    2015-06-01

    Rates of depression are high among individuals living with HIV. Accurate assessment of depressive symptoms among this population is important for ensuring proper diagnosis and treatment. The Beck Depression Inventory-II (BDI-II) is a widely used measure for assessing depression; however, its psychometric properties have not yet been investigated for use with HIV-positive populations in the United States. The current study was the first to assess the psychometric properties of the BDI-II among a large cohort of HIV-positive participants sampled at multiple sites across the United States as part of the CNS HIV Antiretroviral Therapy Effects Research (CHARTER) study. The BDI-II test scores showed good internal consistency (α = .93) and adequate test-retest reliability (internal consistency coefficient = 0.83) over a 6-mo period. Using a "gold standard" of major depressive disorder determined by the Composite International Diagnostic Interview, sensitivity and specificity were maximized at a total cut-off score of 17 and a receiver operating characteristic analysis confirmed that the BDI-II is an adequate diagnostic measure for the sample (area under the curve = 0.83). The sensitivity and specificity of each score are provided graphically. Confirmatory factor analyses confirmed the best fit for a three-factor model over one-factor and two-factor models and models with a higher-order factor included. The results suggest that the BDI-II is an adequate measure for assessing depressive symptoms among U.S. HIV-positive patients. Cut-off scores should be adjusted to enhance sensitivity or specificity as needed and the measure can be differentiated into cognitive, affective, and somatic depressive symptoms. PMID:25419643
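
    The cut-off analysis reported above amounts to scanning candidate total scores and computing sensitivity and specificity against the gold-standard diagnosis at each one. The sketch below does this on synthetic scores; the prevalence and score distributions are invented for illustration and do not reproduce the study's results.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 500
        depressed = rng.random(n) < 0.2                      # hypothetical gold-standard diagnosis
        scores = np.where(depressed,
                          rng.normal(24, 8, n),              # hypothetical score distribution, cases
                          rng.normal(10, 6, n)).clip(0, 63).round()

        for cutoff in (13, 17, 21):
            positive = scores >= cutoff
            sens = (positive & depressed).sum() / depressed.sum()
            spec = (~positive & ~depressed).sum() / (~depressed).sum()
            print(f"cutoff {cutoff}: sensitivity = {sens:.2f}, specificity = {spec:.2f}")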

  7. Exploring the Validity of Valproic Acid Animal Model of Autism

    PubMed Central

    Mabunga, Darine Froy N.; Gonzales, Edson Luck T.; Kim, Ji-woon; Kim, Ki Chan

    2015-01-01

    The valproic acid (VPA) animal model of autism spectrum disorder (ASD) is one of the most widely used animal models in the field. Like any other disease model, it cannot model the totality of the features seen in autism. Then, is it valid to model autism? This model demonstrates many of the structural and behavioral features that can be observed in individuals with autism. These similarities enable the model to define relevant pathways of developmental dysregulation resulting from environmental manipulation. The uncovering of these complex pathways resulted in the growing pool of potential therapeutic candidates addressing the core symptoms of ASD. Here, we summarize the validity points of VPA that may or may not qualify it as a valid animal model of ASD. PMID:26713077

  8. Economic analysis of model validation for a challenge problem

    DOE PAGESBeta

    Paez, Paul J.; Paez, Thomas L.; Hasselman, Timothy K.

    2016-02-19

    It is now commonplace for engineers to build mathematical models of the systems they are designing, building, or testing. And, it is nearly universally accepted that phenomenological models of physical systems must be validated prior to use for prediction in consequential scenarios. Yet, there are certain situations in which testing only or no testing and no modeling may be economically viable alternatives to modeling and its associated testing. This paper develops an economic framework within which benefit–cost can be evaluated for modeling and model validation relative to other options. The development is presented in terms of a challenge problem. As a result, we provide a numerical example that quantifies when modeling, calibration, and validation yield higher benefit–cost than a testing only or no modeling and no testing option.
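
    In the same spirit, the arithmetic behind such a benefit-cost comparison reduces to a few lines once costs and failure-avoidance probabilities are assumed. Every number below is hypothetical and unrelated to the paper's challenge problem; the point is only the form of the calculation.

        # Toy expected benefit-cost comparison of three options; all values are hypothetical.
        options = {
            # option: (upfront cost, probability a costly failure is avoided)
            "no modeling, no testing": (0.0, 0.00),
            "testing only":            (2.0, 0.60),
            "model + validate":        (3.5, 0.85),
        }
        failure_cost = 20.0   # consequence of an unmitigated failure (same units as costs)

        for name, (cost, p_avoid) in options.items():
            benefit = p_avoid * failure_cost
            ratio = benefit / cost if cost > 0 else float("nan")
            print(f"{name:24s} benefit = {benefit:5.1f}, cost = {cost:4.1f}, benefit/cost = {ratio:4.1f}")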

  9. Exploring the Validity of Valproic Acid Animal Model of Autism.

    PubMed

    Mabunga, Darine Froy N; Gonzales, Edson Luck T; Kim, Ji-Woon; Kim, Ki Chan; Shin, Chan Young

    2015-12-01

    The valproic acid (VPA) animal model of autism spectrum disorder (ASD) is one of the most widely used animal models in the field. Like any other disease model, it cannot model the totality of the features seen in autism. Then, is it valid to model autism? This model demonstrates many of the structural and behavioral features that can be observed in individuals with autism. These similarities enable the model to define relevant pathways of developmental dysregulation resulting from environmental manipulation. The uncovering of these complex pathways resulted in the growing pool of potential therapeutic candidates addressing the core symptoms of ASD. Here, we summarize the validity points of VPA that may or may not qualify it as a valid animal model of ASD. PMID:26713077

  10. Photon number conserving models of H II bubbles during reionization

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Choudhury, T. Roy; Padmanabhan, Hamsa

    2016-08-01

    Traditional excursion-set-based models of H II bubble growth during the epoch of reionization are known to violate photon number conservation, in the sense that the mass fraction in ionized bubbles in these models does not equal the ratio of the number of ionizing photons produced by sources and the number of hydrogen atoms in the intergalactic medium. E.g. for a Planck13 cosmology with electron scattering optical depth τ ≃ 0.066, the discrepancy is ˜15 per cent for x_{H II}=0.1 and ˜5 per cent for x_{H II}=0.5. We demonstrate that this problem arises from a fundamental conceptual shortcoming of the excursion-set approach (already recognized in the literature on this formalism) which only tracks average mass fractions instead of the exact, stochastic source counts. With this insight, we build an approximately photon number conserving Monte Carlo model of bubble growth based on partitioning regions of dark matter into haloes. Our model, which is formally valid for white noise initial conditions (ICs), shows dramatic improvements in photon number conservation, as well as substantial differences in the bubble size distribution, as compared to traditional models. We explore the trends obtained on applying our algorithm to more realistic ICs, finding that these improvements are robust to changes in the ICs. Since currently popular seminumerical schemes of bubble growth also violate photon number conservation, we argue that it will be worthwhile to pursue new, explicitly photon number conserving approaches. Along the way, we clarify some misconceptions regarding this problem that have appeared in the literature.
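
    The conservation criterion at issue is simple bookkeeping: with roughly one ionizing photon required per hydrogen atom (and recombinations neglected), the ionized mass fraction returned by a bubble model should equal the cumulative number of ionizing photons per hydrogen atom. The values below are illustrative only, chosen to reproduce a discrepancy of the size quoted above.

        # Photon-counting check for a bubble model (illustrative numbers, not model output).
        n_gamma_per_H = 0.10          # cumulative ionizing photons emitted per hydrogen atom
        q_hii_model = 0.085           # ionized mass fraction returned by a bubble model

        violation = (n_gamma_per_H - q_hii_model) / n_gamma_per_H
        print(f"photon non-conservation: {violation:.0%} of emitted photons unaccounted for")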

  11. Gear Windage Modeling Progress - Experimental Validation Status

    NASA Technical Reports Server (NTRS)

    Kunz, Rob; Handschuh, Robert F.

    2008-01-01

    In the Subsonics Rotary Wing (SRW) Project being funded for propulsion work at NASA Glenn Research Center, performance of the propulsion system is of high importance. In current rotorcraft drive systems many gearing components operate at high rotational speed (pitch line velocity > 24000 ft/min). In our testing of high-speed helical gear trains at NASA Glenn we have found that the work done on the air-oil mist within the gearbox can become a significant part of the power loss of the system. This loss mechanism is referred to as windage. The effort described in this presentation aims to understand the variables that affect windage and to develop a good experimental database to validate the analytical work being conducted at Penn State University by Dr. Rob Kunz under a NASA SRW NRA. The presentation provides an update on the status of these efforts.

  12. Measurements of Humidity in the Atmosphere and Validation Experiments (Mohave, Mohave II): Results Overview

    NASA Technical Reports Server (NTRS)

    Leblanc, Thierry; McDermid, Iain S.; McGee, Thomas G.; Twigg, Laurence W.; Sumnicht, Grant K.; Whiteman, David N.; Rush, Kurt D.; Cadirola, Martin P.; Venable, Demetrius D.; Connell, R.; Demoz, Belay B.; Vomel, Holger; Miloshevich, L.

    2008-01-01

    The Measurements of Humidity in the Atmosphere and Validation Experiments (MOHAVE, MOHAVE-II) inter-comparison campaigns took place at the Jet Propulsion Laboratory (JPL) Table Mountain Facility (TMF, 34.5(sup o)N) in October 2006 and 2007 respectively. Both campaigns aimed at evaluating the capability of three Raman lidars for the measurement of water vapor in the upper troposphere and lower stratosphere (UT/LS). During each campaign, more than 200 hours of lidar measurements were compared to balloon borne measurements obtained from 10 Cryogenic Frost-point Hygrometer (CFH) flights and over 50 Vaisala RS92 radiosonde flights. During MOHAVE, fluorescence in all three lidar receivers was identified, causing a significant wet bias above 10-12 km in the lidar profiles as compared to the CFH. All three lidars were reconfigured after MOHAVE, and no such bias was observed during the MOHAVE-II campaign. The lidar profiles agreed very well with the CFH up to 13-17 km altitude, where the lidar measurements become noise limited. The results from MOHAVE-II have shown that the water vapor Raman lidar will be an appropriate technique for the long-term monitoring of water vapor in the UT/LS given a slight increase in its power-aperture, as well as careful calibration.

  13. Reliability and validity of the modified Conconi test on concept II rowing ergometers.

    PubMed

    Celik, Ozgür; Koşar, Sükran Nazan; Korkusuz, Feza; Bozkurt, Murat

    2005-11-01

    The purpose of this study was to assess the reliability and validity of the modified Conconi test on Concept II rowing ergometers. Twenty-eight oarsmen conducted 3 performance tests on separate days. Reliability was assessed using the break point in heart rate (HR) linearity called the Conconi test (CT) and Conconi retest (CRT) for the noninvasive measurement of the anaerobic threshold (AT). Blood lactate measurement was considered the gold standard for the assessment of the AT, and the validity of the CT was assessed by blood samples taken during an incremental load test (ILT) on ergometers. According to the results, the mean power output (PO) scores for the CT, CRT, and ILT were 234.2 +/- 40.3 W, 232.5 +/- 39.7 W, and 229.7 +/- 39.6 W, respectively. The mean HR values at the AT for the CT, CRT, and ILT were 165.4 +/- 11.2 b.min(-1), 160.4 +/- 10.8 b.min(-1), and 158.3 +/- 8.8 b.min(-1), respectively. Intraclass correlation coefficient (ICC) analysis indicated a significant correlation among the 3 tests. Also, Bland and Altman plots showed that there was an association between the noninvasive tests and the ILT PO scores and HRs (95% confidence interval [CI]). In conclusion, this study showed that the modified CT is a reliable and valid method for determining the AT of elite men rowers. PMID:16287355
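
    The agreement analysis described above (Bland and Altman plots comparing the noninvasive Conconi estimates with the incremental load test) can be reproduced in a few lines. The sketch below computes the bias and 95% limits of agreement; the power-output arrays are hypothetical placeholders, not the study data.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b                          # per-subject difference between methods
    bias = diff.mean()                    # mean difference (systematic offset)
    half_width = 1.96 * diff.std(ddof=1)  # half-width of the 95% limits of agreement
    return bias, bias - half_width, bias + half_width

# Hypothetical power outputs (W) at the anaerobic threshold from the CT and the ILT
ct_power = [236, 228, 241, 230, 225, 239]
ilt_power = [231, 226, 238, 228, 224, 233]
print(bland_altman(ct_power, ilt_power))
```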

  14. Ionospheric model validation at VLF and LF

    NASA Astrophysics Data System (ADS)

    Ferguson, Jerry A.

    1995-05-01

    A reliable knowledge of radio signal amplitude and phase characteristics is required to design and maintain communications and navigational circuits at VLF and LF. The ability to accurately calculate signal levels as a function of frequency, position, and time is of considerable importance in achieving reliable assessment of communication coverage. Detailed computer models based on multiple mode waveguide theory have been developed. These models have been found to produce good comparisons between measurements and calculations of signal variations as a function of propagation distance. However, results can be very sensitive to the ionospheric inputs to these computer models. This paper has two purposes. The first is to present the results of a systematic comparison of a set of measurements of signal strength from various transmitters over a number of propagation paths using a simple model of the ionosphere. The variation of the parameters of this simple model with basic propagation parameters is examined. The second purpose is to examine the built-in version of this simple model of the ionosphere as implemented in the Long Wave Propagation Capability. This model is found to adequately represent a set of in-flight signal strength measurements. It is also clear that there is still room for improvements in this ionospheric model.

  15. SWAT: Model use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...

  16. Validating the Mexican American Intergenerational Caregiving Model

    ERIC Educational Resources Information Center

    Escandon, Socorro

    2011-01-01

    The purpose of this study was to substantiate and further develop a previously formulated conceptual model of Role Acceptance in Mexican American family caregivers by exploring the theoretical strengths of the model. The sample consisted of women older than 21 years of age who self-identified as Hispanic, were related through consanguinal or…

  17. Uncertainty Quantification and Validation for RANS Turbulence Models

    NASA Astrophysics Data System (ADS)

    Oliver, Todd; Moser, Robert

    2011-11-01

    Uncertainty quantification and validation procedures for RANS turbulence models are developed and applied. The procedures used here rely on a Bayesian view of probability. In particular, the uncertainty quantification methodology requires stochastic model development, model calibration, and model comparison, all of which are pursued using tools from Bayesian statistics. Model validation is also pursued in a probabilistic framework. The ideas and processes are demonstrated on a channel flow example. Specifically, a set of RANS models--including Baldwin-Lomax, Spalart-Allmaras, k-ɛ, k-ω, and v2-f--and uncertainty representations are analyzed using DNS data for fully-developed channel flow. Predictions of various quantities of interest and the validity (or invalidity) of the various models for making those predictions will be examined. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].
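
    The calibration step described above - inferring model parameters and their uncertainty from DNS data in a Bayesian setting - can be illustrated with a toy random-walk Metropolis sampler. The one-parameter "model", the Gaussian likelihood, and all numbers below are stand-ins for illustration only, not the turbulence-model calibration in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "DNS" observations of a quantity whose true parameter value is 0.41
theta_true, sigma_obs = 0.41, 0.02
y_obs = theta_true + sigma_obs * rng.standard_normal(50)

def log_posterior(theta):
    # Flat prior on (0, 1) combined with a Gaussian measurement-error likelihood
    if not 0.0 < theta < 1.0:
        return -np.inf
    return -0.5 * np.sum((y_obs - theta) ** 2) / sigma_obs**2

samples, theta = [], 0.5
for _ in range(20000):                      # random-walk Metropolis iterations
    proposal = theta + 0.01 * rng.standard_normal()
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

posterior = np.array(samples[5000:])        # discard burn-in
print(f"calibrated parameter: {posterior.mean():.3f} +/- {posterior.std():.3f}")
```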

  18. Highlights of Transient Plume Impingement Model Validation and Applications

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael

    2011-01-01

    This paper describes highlights of an ongoing validation effort conducted to assess the viability of applying a set of analytic point source transient free molecule equations to model behavior ranging from molecular effusion to rocket plumes. The validation effort includes encouraging comparisons to both steady and transient studies involving experimental data and direct simulation Monte Carlo results. Finally, this model is applied to describe features of two exotic transient scenarios involving NASA Goddard Space Flight Center satellite programs.

  19. HEDR model validation plan. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  20. VERIFICATION AND VALIDATION OF THE SPARC MODEL

    EPA Science Inventory

    Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values--that is, the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships that allow estimation of some ...

  1. Validation of geometric models for fisheye lenses

    NASA Astrophysics Data System (ADS)

    Schneider, D.; Schwalbe, E.; Maas, H.-G.

    The paper focuses on the photogrammetric investigation of geometric models for different types of optical fisheye constructions (equidistant, equisolid-angle, stereographic and orthographic projection). These models were implemented and thoroughly tested in a spatial resection and a self-calibrating bundle adjustment. For this purpose, fisheye images were taken with a Nikkor 8 mm fisheye lens on a Kodak DSC 14n Pro digital camera in a hemispherical calibration room. Both the spatial resection and the bundle adjustment resulted in a standard deviation of unit weight of 1/10 pixel with a suitable set of simultaneous calibration parameters introduced into the camera model. The camera-lens combination was treated with all of the four basic models mentioned above. Using the same set of additional lens distortion parameters, the differences between the models can largely be compensated, delivering almost the same precision parameters. The relative object space precision obtained from the bundle adjustment was ca. 1:10 000 of the object dimensions. This value can be considered a very satisfying result, as fisheye images generally have a lower geometric resolution as a consequence of their large field of view and also have an inferior imaging quality in comparison to most central perspective lenses.
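
    The four basic fisheye constructions named above differ only in how the incidence angle θ maps to the radial image coordinate r. A minimal sketch of the four mappings is given below; the focal length and angle are illustrative values, not the calibration results of the paper.

```python
import numpy as np

def fisheye_radius(theta, f, model):
    """Radial image distance r for incidence angle theta (rad) and focal length f."""
    if model == "equidistant":       # r = f * theta
        return f * theta
    if model == "equisolid-angle":   # r = 2 f sin(theta / 2)
        return 2 * f * np.sin(theta / 2)
    if model == "stereographic":     # r = 2 f tan(theta / 2)
        return 2 * f * np.tan(theta / 2)
    if model == "orthographic":      # r = f sin(theta)
        return f * np.sin(theta)
    raise ValueError(f"unknown projection model: {model}")

theta = np.deg2rad(80.0)             # a ray 80 degrees off the optical axis
for m in ("equidistant", "equisolid-angle", "stereographic", "orthographic"):
    print(f"{m:16s} r = {fisheye_radius(theta, f=8.0, model=m):6.3f} mm")
```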

  2. Validation of Model Forecasts of the Ambient Solar Wind

    NASA Technical Reports Server (NTRS)

    Macneice, P. J.; Hesse, M.; Kuznetsova, M. M.; Rastaetter, L.; Taktakishvili, A.

    2009-01-01

    Independent and automated validation is a vital step in the progression of models from the research community into operational forecasting use. In this paper we describe a program in development at the CCMC to provide just such a comprehensive validation for models of the ambient solar wind in the inner heliosphere. We have built upon previous efforts published in the community, sharpened their definitions, and completed a baseline study. We also provide first results from this program of the comparative performance of the MHD models available at the CCMC against that of the Wang-Sheeley-Arge (WSA) model. An important goal of this effort is to provide a consistent validation to all available models. Clearly exposing the relative strengths and weaknesses of the different models will enable forecasters to craft more reliable ensemble forecasting strategies. Models of the ambient solar wind are developing rapidly as a result of improvements in data supply, numerical techniques, and computing resources. It is anticipated that in the next five to ten years, the MHD based models will supplant semi-empirical potential based models such as the WSA model, as the best available forecast models. We anticipate that this validation effort will track this evolution and so assist policy makers in gauging the value of past and future investment in modeling support.

  3. Validation of Model Forecasts of the Ambient Solar Wind (Invited)

    NASA Astrophysics Data System (ADS)

    MacNeice, P. J.; Hesse, M.; Kuznetsova, M. M.; Rastaetter, L.; Taktakishvili, A.

    2009-12-01

    Independent and automated validation is a vital step in the progression of models from the research community into operational forecasting use. In this paper we describe a program in development at the CCMC to provide just such a comprehensive validation for models of the ambient solar wind in the inner heliosphere. We have built upon previous efforts published in the community, sharpened their definitions, and completed a baseline study. We also provide first results from this program of the comparative performance of the MHD models available at the CCMC against that of the Wang-Sheeley-Arge (WSA) model. An important goal of this effort is to provide a consistent validation to all available models. Clearly exposing the relative strengths and weaknesses of the different models will enable forecasters to craft more reliable ensemble forecasting strategies. Models of the ambient solar wind are developing rapidly as a result of improvements in data supply, numerical techniques, and computing resources. It is anticipated that in the next five to ten years, the MHD based models will supplant semi-empirical potential based models such as the WSA model, as the best available forecast models. We anticipate that this validation effort will track this evolution and so assist policy makers in gauging the value of past and future investment in modeling support.

  4. Shuttle Spacesuit: Fabric/LCVG Model Validation

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tweed, J.; Zeitlin, C.; Kim, M.-H. Y.; Anderson, B. M.; Cucinotta, F. A.; Ware, J.; Persans, A. E.

    2001-01-01

    A detailed spacesuit computational model is being developed at the Langley Research Center for radiation exposure evaluation studies. The details of the construction of the spacesuit are critical to estimation of exposures and assessing the risk to the astronaut on EVA. Past evaluations of spacesuit shielding properties assumed the basic fabric lay-up (Thermal Micrometeoroid Garment, fabric restraints, and pressure envelope) and Liquid Cooling and Ventilation Garment (LCVG) could be homogenized as a single layer, overestimating the protective properties over 60 percent of the fabric area. The present spacesuit model represents the inhomogeneous distributions of LCVG materials (mainly the water filled cooling tubes). An experimental test is performed using a 34-MeV proton beam and high-resolution detectors to compare with model-predicted transmission factors. Some suggestions are made on possible improved construction methods to improve the spacesuit's protection properties.

  5. Shuttle Spacesuit: Fabric/LCVG Model Validation

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tweed, J.; Zeitflin, C.; Kim, M.-H.; Anderson, B. M.; Cucinotta, F. A.; Ware, J.; Persans, A. E.

    2001-01-01

    A detailed spacesuit computational model is being developed at the Langley Research Center for radiation exposure evaluation studies. The details of the construction of the spacesuit are critical to estimation of exposures and assessing the risk to the astronaut on EVA. Past evaluations of spacesuit shielding properties assumed the basic fabric lay-up (Thermal Micrometeoroid Garment, fabric restraints, and pressure envelope) and Liquid Cooling and Ventilation Garment (LCVG) could be homogenized as a single layer, overestimating the protective properties over 60 percent of the fabric area. The present spacesuit model represents the inhomogeneous distributions of LCVG materials (mainly the water filled cooling tubes). An experimental test is performed using a 34-MeV proton beam and high-resolution detectors to compare with model-predicted transmission factors. Some suggestions are made on possible improved construction methods to improve the spacesuit's protection properties.

  6. Angular Distribution Models for Top-of-Atmosphere Radiative Flux Estimation from the Clouds and the Earth's Radiant Energy System Instrument on the Tropical Rainfall Measuring Mission Satellite. Part II; Validation

    NASA Technical Reports Server (NTRS)

    Loeb, N. G.; Loukachine, K.; Wielicki, B. A.; Young, D. F.

    2003-01-01

    Top-of-atmosphere (TOA) radiative fluxes from the Clouds and the Earth's Radiant Energy System (CERES) are estimated from empirical angular distribution models (ADMs) that convert instantaneous radiance measurements to TOA fluxes. This paper evaluates the accuracy of CERES TOA fluxes obtained from a new set of ADMs developed for the CERES instrument onboard the Tropical Rainfall Measuring Mission (TRMM). The uncertainty in regional monthly mean reflected shortwave (SW) and emitted longwave (LW) TOA fluxes is less than 0.5 W/sq m, based on comparisons with TOA fluxes evaluated by direct integration of the measured radiances. When stratified by viewing geometry, TOA fluxes from different angles are consistent to within 2% in the SW and 0.7% (or 2 W/sq m) in the LW. In contrast, TOA fluxes based on ADMs from the Earth Radiation Budget Experiment (ERBE) applied to the same CERES radiance measurements show a 10% relative increase with viewing zenith angle in the SW and a 3.5% (9 W/sq m) decrease with viewing zenith angle in the LW. Based on multiangle CERES radiance measurements, 1° regional instantaneous TOA flux errors from the new CERES ADMs are estimated to be 10 W/sq m in the SW and 3.5 W/sq m in the LW. The errors show little or no dependence on cloud phase, cloud optical depth, and cloud infrared emissivity. An analysis of cloud radiative forcing (CRF) sensitivity to differences between ERBE and CERES TRMM ADMs, scene identification, and directional models of albedo as a function of solar zenith angle shows that ADM and clear-sky scene identification differences can lead to an 8 W/sq m root-mean-square (rms) difference in 1° daily mean SW CRF and a 4 W/sq m rms difference in LW CRF. In contrast, monthly mean SW and LW CRF differences reach 3 W/sq m. CRF is found to be relatively insensitive to differences between the ERBE and CERES TRMM directional models.
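
    At the core of the ADM approach is the conversion of an instantaneous radiance into a TOA flux through a scene-dependent anisotropic factor. The sketch below shows that conversion in its simplest form; the numerical values are illustrative and are not CERES ADM coefficients.

```python
import numpy as np

def radiance_to_flux(radiance, anisotropic_factor):
    """Convert a measured radiance I (W m-2 sr-1) into a TOA flux F (W m-2).

    The ADM supplies the anisotropic factor R for the identified scene type and
    viewing geometry; for an isotropic (Lambertian) scene R = 1 and F = pi * I.
    """
    return np.pi * radiance / anisotropic_factor

# Same measured radiance, with and without an ADM anisotropy correction
print(radiance_to_flux(80.0, 1.00))   # Lambertian assumption
print(radiance_to_flux(80.0, 1.08))   # hypothetical ADM-corrected flux
```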

  7. WEPP: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Water Erosion Prediction Project (WEPP) model is a process-based, continuous simulation, distributed parameter, hydrologic and soil erosion prediction system. It has been developed over the past 25 years to allow for easy application to a large number of land management scenarios. Most general o...

  8. Distance Education in Taiwan: A Model Validated.

    ERIC Educational Resources Information Center

    Shih, Mei-Yau; Zvacek, Susan M.

    The Triad Perspective Model of Distance Education (TPMDE) guides researchers in developing research questions, gathering data, and producing a comprehensive description of a distance education program. It was developed around three theoretical perspectives: (1) curriculum development theory (Tyler's four questions, 1949); (2) systems theory…

  9. WEPP: Model use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Water Erosion Prediction Project (WEPP) model is a process-based, continuous simulation, distributed parameter, hydrologic and soil erosion prediction system. It has been developed over the past 25 years to allow for easy application to a large number of land management scenarios. Most general o...

  10. Validation of the NATO-standard ship signature model (SHIPIR)

    NASA Astrophysics Data System (ADS)

    Vaitekunas, David A.; Fraedrich, Douglas S.

    1999-07-01

    An integrated naval infrared target, threat and countermeasure simulator (SHIPIR/NTCS) has been developed. The SHIPIR component of the model has been adopted by both NATO and the US Navy as a common tool for predicting the infrared (IR) signature of naval ships in their background. The US Navy has taken a lead role in further developing and validating SHIPIR for use in the Twenty-First Century Destroyer (DD-21) program. As a result, the US Naval Research Laboratory (NRL) has performed an in-depth validation of SHIPIR. This paper presents an overview of SHIPIR, the model validation methodology developed by NRL, and the results of the NRL validation study. The validation consists of three parts: a review of existing validation information, the design, execution, and analysis of a new panel test experiment, and the comparison of experiment with predictions from the latest version of SHIPIR (v2.5). The results show high levels of accuracy in the radiometric components of the model under clear-sky conditions, but indicate the need for more detailed measurement of solar irradiance and cloud model data for input to the heat transfer and in-band sky radiance sub-models, respectively.

  11. Theory and Implementation of Nuclear Safety System Codes - Part II: System Code Closure Relations, Validation, and Limitations

    SciTech Connect

    Glenn A Roth; Fatih Aydogan

    2014-09-01

    This is Part II of two articles describing the details of thermal-hydraulic system codes. In this second part of the article series, the system code closure relationships (used to model thermal and mechanical non-equilibrium and the coupling of the phases) for the governing equations are discussed and evaluated. These include several thermal and hydraulic models, such as heat transfer coefficients for various flow regimes, two-phase pressure correlations, two-phase friction correlations, drag coefficients and interfacial models between the fields. These models are often developed from experimental data. The experiment conditions should be understood to evaluate the efficacy of the closure models. Code verification and validation, including Separate Effects Tests (SETs) and Integral Effects Tests (IETs), is also assessed. It can be shown from the assessments that the test cases cover a significant section of the system code capabilities, but some of the more advanced reactor designs will push the limits of validation for the codes. Lastly, the limitations of the codes are discussed by considering next generation power plants, such as Small Modular Reactors (SMRs), analyzing not only existing nuclear power plants, but also next generation nuclear power plants. The nuclear industry is developing new, innovative reactor designs, such as Small Modular Reactors (SMRs), High-Temperature Gas-cooled Reactors (HTGRs) and others. Sub-types of these reactor designs utilize pebbles, prismatic graphite moderators, helical steam generators, innovative fuel types, and many other design features that may not be fully analyzed by current system codes. This second part completes the series on the comparison and evaluation of the selected reactor system codes by discussing the closure relations, validation and limitations. These two articles indicate areas where the models can be improved to adequately address issues with new reactor design and development.

  12. Validation of the Poisson Stochastic Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Zhuravleva, Tatiana; Marshak, Alexander

    2004-01-01

    A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it was shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

  13. Experiments for foam model development and validation.

    SciTech Connect

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F.; Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  14. Field Validation of the Career Education Curriculum Project Modules. Phase II. K-6 Validation. Final Report. Part I.

    ERIC Educational Resources Information Center

    Moore, Earl; Wellman, Frank

    Field validation of the Missouri Career Education Curriculum Project Modules, K-6, was conducted in two phases. In phase 1, three sets of evaluation instruments were produced: K-1, 2-3, and 4-6. In phase 2, the field validation of the K-6 modules was conducted (reported here). (An additional goal of phase 2 was to develop evaluation instruments…

  15. Analysis of Experimental Data for High Burnup PWR Spent Fuel Isotopic Validation - Vandellos II Reactor

    SciTech Connect

    Ilas, Germina; Gauld, Ian C

    2011-01-01

    This report is one of several recent NUREG/CR reports documenting benchmark-quality radiochemical assay data and the use of the data to validate computer code predictions of isotopic composition for spent nuclear fuel, and to establish the uncertainty and bias associated with code predictions. The experimental data analyzed in the current report were acquired from a high-burnup fuel program coordinated by Spanish organizations. The measurements included extensive actinide and fission product data of importance to spent fuel safety applications, including burnup credit, decay heat, and radiation source terms. Six unique spent fuel samples from three uranium oxide fuel rods were analyzed. The fuel rods had a 4.5 wt % 235U initial enrichment and were irradiated in the Vandellos II pressurized water reactor operated in Spain. The burnups of the fuel samples range from 42 to 78 GWd/MTU. The measurements were used to validate the two-dimensional depletion sequence TRITON in the SCALE computer code system.

  16. Validating Requirements for Fault Tolerant Systems Using Model Checking

    NASA Technical Reports Server (NTRS)

    Schneider, Francis; Easterbrook, Steve M.; Callahan, John R.; Holzmann, Gerard J.

    1997-01-01

    Model checking is shown to be an effective tool in validating the behavior of a fault tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched, allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves by definition a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system, one of which is an error in the detailed requirements, and the other two are missing/ambiguous requirements. Because the method allows validation of partial specifications, it also is an effective methodology towards maintaining fidelity between a co-evolving specification and an implementation.

  17. Validation of the Serpent 2 code on TRIGA Mark II benchmark experiments.

    PubMed

    Ćalić, Dušan; Žerovnik, Gašper; Trkov, Andrej; Snoj, Luka

    2016-01-01

    The main aim of this paper is the development and validation of a 3D computational model of the TRIGA research reactor using the Serpent 2 code. The calculated parameters were compared to the experimental results and to calculations performed with the MCNP code. The results show that the calculated normalized reaction rates and flux distribution within the core are in good agreement with MCNP and experiment, while in the reflector the flux distribution differs by up to 3% from the measurements. PMID:26516989

  18. Systematic review and validation of prognostic models in liver transplantation.

    PubMed

    Jacob, Matthew; Lewsey, James D; Sharpin, Carlos; Gimson, Alexander; Rela, Mohammed; van der Meulen, Jan H P

    2005-07-01

    A model that can accurately predict post-liver transplant mortality would be useful for clinical decision making, would help to provide patients with prognostic information, and would facilitate fair comparisons of surgical performance between transplant units. A systematic review of the literature was carried out to assess the quality of the studies that developed and validated prognostic models for mortality after liver transplantation and to validate existing models in a large data set of patients transplanted in the United Kingdom (UK) and Ireland between March 1994 and September 2003. Five prognostic model papers were identified. The quality of the development and validation of all prognostic models was suboptimal according to an explicit assessment tool of the internal, external, and statistical validity, model evaluation, and practicality. The discriminatory ability of the identified models in the UK and Ireland data set was poor (area under the receiver operating characteristic curve always smaller than 0.7 for adult populations). Due to the poor quality of the reporting, the methodology used for the development of the model could not always be determined. In conclusion, these findings demonstrate that currently available prognostic models of mortality after liver transplantation can have only a limited role in clinical practice, audit, and research. PMID:15973726
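
    The discriminatory ability quoted above is the area under the receiver operating characteristic curve; for a prognostic score and a binary mortality outcome it can be computed from ranks via the Mann-Whitney statistic. A minimal sketch follows, with placeholder scores and outcomes rather than the UK and Ireland data (ties are ignored for brevity).

```python
import numpy as np

def auc(scores, events):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    scores = np.asarray(scores, float)
    events = np.asarray(events, bool)
    ranks = scores.argsort().argsort() + 1          # 1-based ranks (ties not handled)
    n_pos, n_neg = events.sum(), (~events).sum()
    u = ranks[events].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Hypothetical prognostic scores and post-transplant mortality indicators
print(auc([0.2, 0.5, 0.4, 0.8, 0.3, 0.9], [0, 1, 0, 1, 0, 1]))
```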

  19. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Sievers, Michael; Standley, Shaun

    2012-01-01

    Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system level V&V using modeling and simulation, and to use scarce hardware testing time to validate models; the norm for thermal and structural V&V for some time. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated enabling a more complete set of test cases than possible on flight hardware. SysML simulations provide access and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated such an approach is possible.

  20. Validation of Orthorectified Interferometric Radar Imagery and Digital Elevation Models

    NASA Technical Reports Server (NTRS)

    Smith Charles M.

    2004-01-01

    This work was performed under NASA's Verification and Validation (V&V) Program as an independent check of data supplied by EarthWatch, Incorporated, through the Earth Science Enterprise Scientific Data Purchase (SDP) Program. This document serves as the basis of reporting results associated with validation of orthorectified interferometric radar imagery and digital elevation models (DEM). This validation covers all datasets provided under the first campaign (Central America & Virginia Beach) plus three earlier missions (Indonesia, Red River, and Denver) for a total of 13 missions.

  1. Functional state modelling approach validation for yeast and bacteria cultivations

    PubMed Central

    Roeva, Olympia; Pencheva, Tania

    2014-01-01

    In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for the different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This constitutes a strong structural verification of the functional state modelling theory not only for a set of yeast cultivations, but also for a bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778
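
    The parameter identification step described above fits the local-model parameters to measured cultivation data with a genetic algorithm. The sketch below illustrates the same workflow on a toy Monod growth model using SciPy's differential evolution, a related population-based global search, as a stand-in; the "measured" growth rates are fabricated for illustration only.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy "measured" specific growth rates at several substrate concentrations (g/L)
S = np.array([0.1, 0.3, 0.6, 1.0, 2.0, 5.0])
mu_meas = np.array([0.12, 0.24, 0.32, 0.37, 0.42, 0.46])

def monod(params, s):
    mu_max, ks = params
    return mu_max * s / (ks + s)

def objective(params):
    # Sum of squared deviations between the model and the measurements
    return np.sum((monod(params, S) - mu_meas) ** 2)

result = differential_evolution(objective, bounds=[(0.01, 1.0), (0.01, 5.0)], seed=1)
print("identified mu_max and Ks:", result.x)
```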

  2. Dynamic Model Validation with Governor Deadband on the Eastern Interconnection

    SciTech Connect

    Kou, Gefei; Hadley, Stanton W; Liu, Yilu

    2014-04-01

    This report documents the efforts to perform dynamic model validation on the Eastern Interconnection (EI) by modeling governor deadband. An on-peak EI dynamic model is modified to represent governor deadband characteristics. Simulation results are compared with synchrophasor measurements collected by the Frequency Monitoring Network (FNET/GridEye). The comparison shows that by modeling governor deadband the simulated frequency response can closely align with the actual system response.
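
    Representing a governor deadband amounts to suppressing the primary frequency response for small frequency deviations and applying a droop response beyond the deadband edge. A minimal sketch of such a characteristic is shown below; the deadband width, droop, and rating are illustrative values, not parameters of the EI model.

```python
def governor_response(freq_dev_hz, deadband_hz=0.036, droop=0.05, p_rated_mw=500.0):
    """Governor power change (MW) for a frequency deviation, with a deadband.

    Inside the deadband the governor does not respond; outside it, the response
    is proportional to the deviation beyond the deadband edge (simple droop model).
    """
    if abs(freq_dev_hz) <= deadband_hz:
        return 0.0
    effective_dev = abs(freq_dev_hz) - deadband_hz
    delta_p = (effective_dev / 60.0) / droop * p_rated_mw
    return -delta_p if freq_dev_hz > 0 else delta_p

for dev in (0.01, 0.05, -0.10):
    print(f"{dev:+.2f} Hz -> {governor_response(dev):+7.2f} MW")
```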

  3. Validation of a terrestrial food chain model

    SciTech Connect

    Travis, C.C.; Blaylock, B.P. )

    1992-04-01

    An increasingly important topic in risk assessment is the estimation of human exposure to environmental pollutants through pathways other than inhalation. The Environmental Protection Agency (EPA) has recently developed a computerized methodology (EPA, 1990) to estimate indirect exposure to toxic pollutants from Municipal Waste Combustor emissions. This methodology estimates health risks from exposure to toxic pollutants from the terrestrial food chain (TFC), soil ingestion, drinking water ingestion, fish ingestion, and dermal absorption via soil and water. Of these, one of the most difficult to estimate is exposure through the food chain. This paper estimates the accuracy of the EPA methodology for estimating food chain contamination. To our knowledge, no data exist on measured concentrations of pollutants in food grown around Municipal Waste Incinerators, and few field-scale studies have been performed on the uptake of pollutants in the food chain. Therefore, to evaluate the EPA methodology, we compare actual measurements of background contaminant levels in food with estimates made using EPA's computerized methodology. Background levels of contaminants in air, water, and soil were used as input to the EPA food chain model to predict background levels of contaminants in food. These predicted values were then compared with the measured background contaminant levels. Comparisons were performed for dioxin, pentachlorophenol, polychlorinated biphenyls, benzene, benzo(a)pyrene, mercury, and lead.

  4. Validation of a metabolic cotton seedling emergence model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A seedling emergence model based on thermal dependence of enzyme activity in germinating cotton was developed. The model was validated under both laboratory and field conditions with several cotton lines under diverse temperature regimes. Four commercial lines were planted on four dates in Lubbock T...

  5. Validation of 1-D transport and sawtooth models for ITER

    SciTech Connect

    Connor, J.W.; Turner, M.F.; Attenberger, S.E.; Houlberg, W.A.

    1996-12-31

    In this paper the authors describe progress on validating a number of local transport models by comparing their predictions with relevant experimental data from a range of tokamaks in the ITER profile database. This database, the testing procedure and results are discussed. In addition a model for sawtooth oscillations is used to investigate their effect in an ITER plasma with alpha-particles.

  6. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
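
    The estimation strategy described - choosing parameters so that the smallest margin of requirement compliance is as large as possible - is a max-min problem. The toy version below uses two synthetic margin functions and a direct-search optimizer; the margins are placeholders and bear no relation to the F-16 requirements in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def margins(p):
    """Hypothetical compliance margins: positive means the requirement is met."""
    m1 = 1.0 - abs(p[0] - 0.7)            # e.g. a time-domain prediction-error bound
    m2 = 0.8 - 4.0 * (p[1] - 0.2) ** 2    # e.g. a frequency-domain error bound
    return np.array([m1, m2])

# Maximize the smallest margin by minimizing its negative
result = minimize(lambda p: -margins(p).min(), x0=[0.0, 0.0], method="Nelder-Mead")
print("parameter estimate:", result.x, "worst-case margin:", margins(result.x).min())
```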

  7. Validating a Technology Enhanced Student-Centered Learning Model

    ERIC Educational Resources Information Center

    Kang, Myunghee; Hahn, Jungsun; Chung, Warren

    2015-01-01

    The Technology Enhanced Student Centered Learning (TESCL) Model in this study presents the core factors that ensure the quality of learning in a technology-supported environment. Although the model was conceptually constructed using a student-centered learning framework and drawing upon previous studies, it should be validated through real-world…

  8. Validating Physics-based Space Weather Models for Operational Use

    NASA Astrophysics Data System (ADS)

    Gombosi, Tamas; Singer, Howard; Millward, George; Toth, Gabor; Welling, Daniel

    2016-07-01

    The Geospace components of the Space Weather Modeling Framework developed at the University of Michigan are presently being transitioned to operational use by the NOAA Space Weather Prediction Center. This talk will discuss the various ways the model is validated and skill scores are calculated.

  9. Validating regional-scale surface energy balance models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the major challenges in developing reliable regional surface flux models is the relative paucity of scale-appropriate validation data. Direct comparisons between coarse-resolution model flux estimates and flux tower data can often be dominated by sub-pixel heterogeneity effects, making it di...

  10. Predicting the ungauged basin: Model validation and realism assessment

    NASA Astrophysics Data System (ADS)

    van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert

    2015-10-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this paper we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. This paper does not present a generic approach that can be transferred to other ungauged catchments, but it aims to show how clever model design and alternative data acquisition can result in a valuable hydrological model for an ungauged catchment.

  11. A model for the separation of cloud and aerosol in SAGE II occultation data

    NASA Technical Reports Server (NTRS)

    Kent, G. S.; Winker, D. M.; Osborn, M. T.; Skeens, K. M.

    1993-01-01

    The Stratospheric Aerosol and Gas Experiment (SAGE) II satellite experiment measures the extinction due to aerosols and thin cloud, at wavelengths of 0.525 and 1.02 micrometers, down to an altitude of 6 km. The wavelength dependence of the extinction due to aerosols differs from that of the extinction due to cloud and is used as the basis of a model for separating these two components. The model is presented and its validation using airborne lidar data, obtained coincident with SAGE II observations, is described. This comparison shows that smaller SAGE II cloud extinction values correspond to the presence of subvisible cirrus cloud in the lidar record. Examples of aerosol and cloud data products obtained using this model to interpret SAGE II upper tropospheric and lower stratospheric data are also shown.
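
    The separation rests on the different wavelength dependence of the two components: aerosol extinction falls off strongly from 0.525 to 1.02 micrometers, while cloud extinction is nearly spectrally neutral, so the two-wavelength extinction ratio acts as a discriminator. The sketch below illustrates the idea; the threshold value is an illustrative placeholder, not the published SAGE II criterion.

```python
import numpy as np

def classify_extinction(ext_525, ext_1020, ratio_threshold=2.0):
    """Label each level as 'cloud' or 'aerosol' from the two-wavelength ratio.

    Small ratios (spectrally flat extinction) indicate large particles (cloud);
    large ratios indicate fine-mode aerosol. The threshold here is a placeholder.
    """
    ratio = np.asarray(ext_525, float) / np.asarray(ext_1020, float)
    labels = np.where(ratio < ratio_threshold, "cloud", "aerosol")
    return labels, ratio

labels, ratio = classify_extinction([4.0e-4, 2.5e-3], [1.0e-4, 2.2e-3])
print(list(zip(labels, ratio.round(2))))
```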

  12. Control Oriented Modeling and Validation of Aeroservoelastic Systems

    NASA Technical Reports Server (NTRS)

    Crowder, Marianne; deCallafon, Raymond (Principal Investigator)

    2002-01-01

    Lightweight aircraft design emphasizes the reduction of structural weight to maximize aircraft efficiency and agility at the cost of increasing the likelihood of structural dynamic instabilities. To ensure flight safety, extensive flight testing and active structural servo control strategies are required to explore and expand the boundary of the flight envelope. Aeroservoelastic (ASE) models can provide online flight monitoring of dynamic instabilities to reduce flight time testing and increase flight safety. The success of ASE models is determined by the ability to take into account varying flight conditions and the possibility to perform flight monitoring under the presence of active structural servo control strategies. In this continued study, these aspects are addressed by developing specific methodologies and algorithms for control relevant robust identification and model validation of aeroservoelastic structures. The closed-loop model robust identification and model validation are based on a fractional model approach where the model uncertainties are characterized in a closed-loop relevant way.

  13. A prediction model for ocular damage - Experimental validation.

    PubMed

    Heussner, Nico; Vagos, Márcia; Spitzer, Martin S; Stork, Wilhelm

    2015-08-01

    With the increasing number of laser applications in medicine and technology, accidental as well as intentional exposure of the human eye to laser sources has become a major concern. Therefore, a prediction model for ocular damage (PMOD) is presented within this work and validated for long-term exposure. This model is a combination of a raytracing model with a thermodynamic model of the human eye and an application which determines the thermal damage by the implementation of the Arrhenius integral. The model is based on our earlier work and is here validated against temperature measurements taken with porcine eye samples. For this validation, three different powers were used: 50 mW, 100 mW and 200 mW with a spot size of 1.9 mm. Also, the measurements were taken with two different sensing systems, an infrared camera and a fibre optic probe placed within the tissue. The temperatures were measured up to 60 s and then compared against simulations. The measured temperatures were found to be in good agreement with the values predicted by the PMOD. To our best knowledge, this is the first model which is validated for both short-term and long-term irradiations in terms of temperature and thus demonstrates that temperatures can be accurately predicted within the thermal damage regime. PMID:26267496
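
    The damage criterion mentioned above is the Arrhenius integral, Omega = integral of A exp(-Ea / (R T(t))) dt, with irreversible damage usually assumed once Omega reaches 1. A minimal numerical sketch is given below; the rate coefficients are commonly cited literature placeholders, not the values used in the PMOD.

```python
import numpy as np

def arrhenius_damage(time_s, temp_k, a=3.1e98, ea=6.28e5, r_gas=8.314):
    """Accumulated thermal damage Omega for a temperature history T(t).

    a  : frequency factor (1/s)      -- placeholder literature value
    ea : activation energy (J/mol)   -- placeholder literature value
    Damage is commonly considered irreversible once Omega >= 1.
    """
    rate = a * np.exp(-ea / (r_gas * np.asarray(temp_k, float)))
    return np.trapz(rate, np.asarray(time_s, float))

# A 60 s exposure in which tissue temperature ramps from 37 C to 58 C
t = np.linspace(0.0, 60.0, 601)
temperature = 310.15 + (331.15 - 310.15) * (t / 60.0)
print(f"Omega = {arrhenius_damage(t, temperature):.2f}")
```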

  14. Pharmacophore modeling studies of type I and type II kinase inhibitors of Tie2.

    PubMed

    Xie, Qing-Qing; Xie, Huan-Zhang; Ren, Ji-Xia; Li, Lin-Li; Yang, Sheng-Yong

    2009-02-01

    In this study, chemical feature based pharmacophore models of type I and type II kinase inhibitors of Tie2 have been developed with the aid of HipHop and HypoRefine modules within Catalyst program package. The best HipHop pharmacophore model Hypo1_I for type I kinase inhibitors contains one hydrogen-bond acceptor, one hydrogen-bond donor, one general hydrophobic, one hydrophobic aromatic, and one ring aromatic feature. And the best HypoRefine model Hypo1_II for type II kinase inhibitors, which was characterized by the best correlation coefficient (0.976032) and the lowest RMSD (0.74204), consists of two hydrogen-bond donors, one hydrophobic aromatic, and two general hydrophobic features, as well as two excluded volumes. These pharmacophore models have been validated by using either or both test set and cross validation methods, which shows that both the Hypo1_I and Hypo1_II have a good predictive ability. The space arrangements of the pharmacophore features in Hypo1_II are consistent with the locations of the three portions making up a typical type II kinase inhibitor, namely, the portion occupying the ATP binding region (ATP-binding-region portion, AP), that occupying the hydrophobic region (hydrophobic-region portion, HP), and that linking AP and HP (bridge portion, BP). Our study also reveals that the ATP-binding-region portion of the type II kinase inhibitors plays an important role to the bioactivity of the type II kinase inhibitors. Structural modifications on this portion should be helpful to further improve the inhibitory potency of type II kinase inhibitors. PMID:19138543

  15. The Validation of Climate Models: The Development of Essential Practice

    NASA Astrophysics Data System (ADS)

    Rood, R. B.

    2011-12-01

    It is possible from both a scientific and philosophical perspective to state that climate models cannot be validated. However, with the realization that the scientific investigation of climate change is as much a subject of politics as of science, maintaining this formal notion of "validation" has significant consequences. For example, it relegates the bulk of work of many climate scientists to an exercise of model evaluation that can be construed as ill-posed. Even within the science community this motivates criticism of climate modeling as an exercise of weak scientific practice. Stepping outside of the science community, statements that validation is impossible are used in political arguments to discredit the scientific investigation of climate, to maintain doubt about projections of climate change, and hence, to prohibit the development of public policy to regulate the emissions of greenhouse gases. With the acceptance of the impossibility of validation, scientists often state that the credibility of models can be established through an evaluation process. A robust evaluation process leads to the quantitative description of the modeling system against a standard set of measures. If this process is standardized as institutional practice, then this provides a measure of model performance from one modeling release to the next. It is argued, here, that such a robust and standardized evaluation of climate models can be structured and quantified as "validation." Arguments about the nuanced meaning of validation and evaluation are a subject about which the climate modeling community needs to develop a standard. It does injustice to a body of science-based knowledge to maintain that validation is "impossible." Rather than following such a premise, which immediately devalues the knowledge base, it is more useful to develop a systematic, standardized approach to robust, appropriate validation. This stands to represent the complexity of the Earth's climate and its

  16. Modeling Topaz-II system performance

    SciTech Connect

    Lee, H.H.; Klein, A.C. )

    1993-01-01

    The US acquisition of the Topaz-II in-core thermionic space reactor test system from Russia provides a good opportunity to perform a comparison of the Russian reported data and the results from computer codes such as MCNP (Ref. 3) and TFEHX (Ref. 4). The comparison study includes both neutronic and thermionic performance analyses. The Topaz II thermionic reactor is modeled with MCNP using actual Russian dimensions and parameters. The computation of the neutronic performance considers several important aspects such as the fuel enrichment and location of the thermionic fuel elements (TFEs) in the reactor core. The neutronic analysis included the calculation of both radial and axial power distribution, which are then used in the TFEHX code for electrical performance. The reactor modeled consists of 37 single-cell TFEs distributed in a 13-cm-radius zirconium hydride block surrounded by 8 cm of beryllium metal reflector. The TFEs use 90% enriched 235U and molybdenum coated with a thin layer of 184W for the emitter surface. Electrons emitted are captured by a collector surface with a gap filled with cesium vapor between the collector and emitter surfaces. The collector surface is electrically insulated with alumina. Liquid NaK provides the cooling system for the TFEs. The axial thermal power distribution is obtained by dividing the TFE into 40 axial nodes. Comparison of the true axial power distribution with that produced by electrical heaters was also performed.

  17. Analysis of the absorptive behavior of photopolymer materials. Part II. Experimental validation

    NASA Astrophysics Data System (ADS)

    Li, Haoyu; Qi, Yue; Tolstik, Elen; Guo, Jinxin; Sheridan, John T.

    2015-01-01

    In the first part of this paper, a model describing photopolymer materials, which incorporates both the physical electromagnetic and photochemical effects taking place, was developed. This model is now validated by applying it to fit experimental data for two different types of photopolymer materials. The first photopolymer material, acrylamide/polyvinyl alcohol, is studied when four photosensitizers are used, i.e. Erythrosine B, Eosin Y, Phloxine B and Rose Bengal. The second type of photopolymer material involves phenanthrenequinone in a polymethylmethacrylate matrix. Using our model, the values of the physical parameters are extracted by numerically fitting experimentally obtained normalized transmittance growth curves. Experimental data sets for different exposure intensities, dye concentrations, and exposure geometries are studied. The advantages of our approach are demonstrated and it is shown that the parameters proposed by us to quantify the absorptive behavior in our model are both physical and can be estimated.

  18. Comparison with CLPX II airborne data using DMRT model

    USGS Publications Warehouse

    Xu, X.; Liang, D.; Andreadis, K.M.; Tsang, L.; Josberger, E.G.

    2009-01-01

    In this paper, we considered a physics-based model which uses numerical solutions of Maxwell's equations in three-dimensional simulations within dense media radiative transfer (DMRT) theory. The model is validated against two specific datasets from the second Cold Land Processes Experiment (CLPX II) in Alaska and Colorado. The data were all obtained from Ku-band (13.95 GHz) observations using the airborne imaging polarimetric scatterometer (POLSCAT). Snow is a densely packed medium. To take into account collective scattering and incoherent scattering, the analytical Quasi-Crystalline Approximation (QCA) and the Numerical Maxwell Equation Method of 3-D simulation (NMM3D) are used to calculate the extinction coefficient and phase matrix. The DMRT equations were solved by an iterative solution up to 2nd order for the case of small optical thickness, and by a full multiple scattering solution, decomposing the diffuse intensities into Fourier series, when the optical thickness exceeds unity. It was shown that the model predictions agree with the field experiment not only in co-polarization but also in cross-polarization. For the Alaska region, the input snow structure data were obtained from in situ ground observations, while for the Colorado region, we combined the VIC model to obtain the snow profile. ©2009 IEEE.

  19. Sub-nanometer Level Model Validation of the SIM Interferometer

    NASA Technical Reports Server (NTRS)

    Korechoff, Robert P.; Hoppe, Daniel; Wang, Xu

    2004-01-01

    The Space Interferometer Mission (SIM) flight instrument will not undergo a full performance, end-to-end system test on the ground due to a number of constraints. Thus, analysis and physics-based models will play a significant role in providing confidence that SIM will meet its science goals on orbit. The various models themselves are validated against the experimental results obtained from the MicroArcsecond Metrology (MAM) testbed and the Diffraction testbed (DTB). The metric for validation is provided by the SIM astrometric error budget.

  20. Low-order dynamic modeling of the Experimental Breeder Reactor II

    SciTech Connect

    Berkan, R.C. . Dept. of Nuclear Engineering); Upadhyaya, B.R.; Kisner, R.A. )

    1990-07-01

    This report describes the development of a low-order, linear model of the Experimental Breeder Reactor II (EBR-II), including the primary system, intermediate heat exchanger, and steam generator subsystems. The linear model is developed to represent full-power steady state dynamics for low-level perturbations. Transient simulations are performed using model building and simulation capabilities of the computer software Matrixx. The inherently safe characteristics of the EBR-II are verified through the simulation studies. The results presented in this report also indicate an agreement between the linear model and the actual dynamics of the plant for several transients. Such models play a major role in the learning and in the improvement of nuclear reactor dynamics for control and signal validation studies. This research and development is sponsored by the Advanced Controls Program in the Instrumentation and Controls Division of the Oak Ridge National Laboratory. 17 refs., 67 figs., 15 tabs.

  1. Wavelet spectrum analysis approach to model validation of dynamic systems

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaomo; Mahadevan, Sankaran

    2011-02-01

    Feature-based validation techniques for dynamic system models could be unreliable for nonlinear, stochastic, and transient dynamic behavior, where the time series is usually non-stationary. This paper presents a wavelet spectral analysis approach to validate a computational model for a dynamic system. Continuous wavelet transform is performed on the time series data for both model prediction and experimental observation using a Morlet wavelet function. The wavelet cross-spectrum is calculated for the two sets of data to construct a time-frequency phase difference map. The Box-plot, an exploratory data analysis technique, is applied to interpret the phase difference for validation purposes. In addition, wavelet time-frequency coherence is calculated using the locally and globally smoothed wavelet power spectra of the two data sets. Significance tests are performed to quantitatively verify whether the wavelet time-varying coherence is significant at a specific time and frequency point, considering uncertainties in both predicted and observed time series data. The proposed wavelet spectrum analysis approach is illustrated with a dynamics validation challenge problem developed at the Sandia National Laboratories. A comparison study is conducted to demonstrate the advantages of the proposed methodologies over classical frequency-independent cross-correlation analysis and time-independent cross-coherence analysis for the validation of dynamic systems.
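
    A minimal sketch of the cross-spectrum and phase-difference step described above, assuming the PyWavelets package and a complex Morlet wavelet; the signals, scales, and sampling rate are illustrative stand-ins rather than data from the Sandia challenge problem, and the smoothing needed for the coherence estimate is omitted.

```python
# Sketch: continuous wavelet transform of predicted and observed time series,
# wavelet cross-spectrum, and the time-frequency phase-difference map.
import numpy as np
import pywt

fs = 100.0                                   # sampling frequency, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
predicted = np.sin(2 * np.pi * 5 * t)                         # model output
observed = np.sin(2 * np.pi * 5 * t + 0.3) + 0.1 * np.random.randn(t.size)

scales = np.arange(1, 128)
Wp, freqs = pywt.cwt(predicted, scales, "cmor1.5-1.0", sampling_period=1.0 / fs)
Wo, _ = pywt.cwt(observed, scales, "cmor1.5-1.0", sampling_period=1.0 / fs)

cross_spectrum = Wp * np.conj(Wo)            # wavelet cross-spectrum
phase_difference = np.angle(cross_spectrum)  # radians per (scale, time) cell
print(phase_difference.shape, freqs[:3])
```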

  2. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic and complex behaviours. Studying such phenomena in the laboratory is costly and in most cases impossible, so miniaturizing them within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method built from multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools) for geospatial modelling indicates the growing interest of users in the special capabilities of ABMS. Because ABMS resembles human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge, however, is the difficulty of validating and verifying ABMS: frequently emerging patterns, strong dynamics in the system, and the complex nature of ABMS make conventional validation methods hard to apply, so appropriate validation techniques for ABM are needed. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  3. Data for model validation summary report. A summary of data for validation and benchmarking of recovery boiler models

    SciTech Connect

    Grace, T.; Lien, S.; Schmidl, W.; Salcudean, M.; Abdullah, Z.

    1997-07-01

    One of the tasks in the project was to obtain data from operating recovery boilers for the purpose of model validation. Another task was to obtain water model data and computer output from the University of British Columbia (UBC) for purposes of benchmarking the UBC model against other codes. In the course of discussions on recovery boiler modeling during this project, it became evident that there would be value in having some common cases for carrying out benchmarking exercises with different recovery boiler models. To facilitate such an exercise, the data obtained on this project for validation and benchmarking purposes have been brought together in a single, separate report. The intent is to make these data available to anyone who may want to use them for model validation. The report contains data from three different cases. Case 1 is an ABBCE recovery boiler which was used for model validation; the data are for a single set of operating conditions. Case 2 is a Babcock & Wilcox recovery boiler that was modified by Tampella; in this data set, several different operating conditions were employed. The third case is water flow data supplied by UBC, along with computational output using the UBC code, for benchmarking purposes.

  4. Validity of NBME Parts I and II for the Selection of Residents: The Case of Orthopaedic Surgery.

    ERIC Educational Resources Information Center

    Case, Susan M.

    The predictive validity of scores on the National Board of Medical Examiners (NBME) Part I and Part II examinations for the selection of residents in orthopaedic surgery was investigated. Use of NBME scores has been criticized because of the time lag between taking Part I and entering residency and because Part I content is not directly linked to…

  5. Validating the Thinking Styles Inventory-Revised II among Chinese University Students with Hearing Impairment through Test Accommodations

    ERIC Educational Resources Information Center

    Cheng, Sanyin; Zhang, Li-Fang

    2014-01-01

    The present study pioneered in adopting test accommodations to validate the Thinking Styles Inventory-Revised II (TSI-R2; Sternberg, Wagner, & Zhang, 2007) among Chinese university students with hearing impairment. A series of three studies were conducted that drew their samples from the same two universities, in which accommodating test…

  6. Concurrent Validity of the WISC-IV and DAS-II in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Kuriakose, Sarah

    2014-01-01

    Cognitive assessments are used for a variety of research and clinical purposes in children with autism spectrum disorder (ASD). This study establishes concurrent validity of the Wechsler Intelligence Scales for Children-fourth edition (WISC-IV) and Differential Ability Scales-second edition (DAS-II) in a sample of children with ASD with a broad…

  7. Modeling and Validation of Microwave Ablations with Internal Vaporization

    PubMed Central

    Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L.

    2014-01-01

    Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this work, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10 and 20 mm away from the heating zone of the microwave antenna in a homogenized ex vivo bovine liver setup. The cross-sectional area of water vapor transport was validated through intra-procedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with a Jaccard index of 0.27, 0.49, 0.61, 0.67 and 0.69 at 1, 2, 3, 4, and 5 minutes, respectively. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481
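
    The Jaccard-index comparison between CT iso-density regions and simulated vapor-concentration regions reduces to an intersection-over-union of two segmented regions. The sketch below assumes both regions are available as boolean masks on a common grid; the circular test zones are hypothetical.

```python
# Minimal sketch of the Jaccard index between a CT-derived region and a
# simulated vapor-concentration region, both given as boolean masks.
import numpy as np

def jaccard_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / float(union) if union else 0.0

# Illustrative circular ablation zones on a coarse pixel grid.
y, x = np.mgrid[-30:30, -30:30]
ct_zone = x**2 + y**2 <= 15**2            # iso-density contour from CT
model_zone = (x - 2)**2 + y**2 <= 14**2   # vapor-concentration contour

print(f"Jaccard index: {jaccard_index(ct_zone, model_zone):.2f}")
```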

  8. Using the split Hopkinson pressure bar to validate material models

    PubMed Central

    Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian

    2014-01-01

    This paper gives a discussion of the use of the split-Hopkinson bar with particular reference to the requirements of materials modelling at QinetiQ. This is to deploy validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The split Hopkinson pressure bar (SHPB) is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock wave loading. This means understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals comparison with an output stress v strain curve is not sufficient as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from deployed instrumentation including the raw strain gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer–Chree oscillations and their effect on the specimen and recognize that this is itself also a valuable validation test of the material model. PMID:25071238

  9. Using the split Hopkinson pressure bar to validate material models.

    PubMed

    Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian

    2014-08-28

    This paper gives a discussion of the use of the split-Hopkinson bar with particular reference to the requirements of materials modelling at QinetiQ. This is to deploy validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The split Hopkinson pressure bar (SHPB) is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock wave loading. This means understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals comparison with an output stress v strain curve is not sufficient as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from deployed instrumentation including the raw strain gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer-Chree oscillations and their effect on the specimen and recognize that this is itself also a valuable validation test of the material model. PMID:25071238

  10. Modeling and validation of microwave ablations with internal vaporization.

    PubMed

    Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L

    2015-02-01

    Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this paper, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10, and 20 mm away from the heating zone of the microwave antenna in a homogenized ex vivo bovine liver setup. The cross-sectional area of water vapor transport was validated through intraprocedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with a Jaccard index of 0.27, 0.49, 0.61, 0.67, and 0.69 at 1, 2, 3, 4, and 5 min, respectively. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481

  11. Validation of Global Gravitational Field Models in Norway

    NASA Astrophysics Data System (ADS)

    Pettersen, B. R.; Sprlak, M.; Gerlach, C.

    2015-03-01

    We compare global gravitational field models obtained from GOCE to terrestrial datasets over Norway. Models based on the time-wise and the direct approaches are validated against height anomalies, free-air gravity anomalies, and deflections of the vertical. The spectral enhancement method is employed to overcome the spectral inconsistency between the gravitational models and the terrestrial datasets. All models are very similar up to degree/order 160. Higher degrees/orders improved systematically as more observations from GOCE were made available throughout five releases of data. Release 5 models compare well with EGM2008 up to degree/order 220. Validation by height anomalies suggests possible GOCE improvements to the gravity field over Norway between degree/order 100-200.

  12. Experimental Validation of a Thermoelastic Model for SMA Hybrid Composites

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    2001-01-01

    This study presents results from experimental validation of a recently developed model for predicting the thermomechanical behavior of shape memory alloy hybrid composite (SMAHC) structures, composite structures with an embedded SMA constituent. The model captures the material nonlinearity of the material system with temperature and is capable of modeling constrained, restrained, or free recovery behavior from experimental measurement of fundamental engineering properties. A brief description of the model and analysis procedures is given, followed by an overview of a parallel effort to fabricate and characterize the material system of SMAHC specimens. Static and dynamic experimental configurations for the SMAHC specimens are described and experimental results for thermal post-buckling and random response are presented. Excellent agreement is achieved between the measured and predicted results, fully validating the theoretical model for constrained recovery behavior of SMAHC structures.

  13. Open-source MFIX-DEM software for gas-solids flows: Part II Validation studies

    SciTech Connect

    Li, Tingwen; Garg, Rahul; Galvin, Janine; Pannala, Sreekanth

    2012-01-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  14. Open-Source MFIX-DEM Software for Gas-Solids Flows: Part II - Validation Studies

    SciTech Connect

    Li, Tingwen

    2012-04-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas–solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas–solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  15. Progress in Geant4 Electromagnetic Physics Modelling and Validation

    NASA Astrophysics Data System (ADS)

    Apostolakis, J.; Asai, M.; Bagulya, A.; Brown, J. M. C.; Burkhardt, H.; Chikuma, N.; Cortes-Giraldo, M. A.; Elles, S.; Grichine, V.; Guatelli, S.; Incerti, S.; Ivanchenko, V. N.; Jacquemier, J.; Kadri, O.; Maire, M.; Pandola, L.; Sawkey, D.; Toshito, T.; Urban, L.; Yamashita, T.

    2015-12-01

    In this work we report on recent improvements in the electromagnetic (EM) physics models of Geant4 and new validations of EM physics. Improvements have been made in models of the photoelectric effect, Compton scattering, gamma conversion to electron and muon pairs, fluctuations of energy loss, multiple scattering, synchrotron radiation, and high energy positron annihilation. The results of these developments are included in the new Geant4 version 10.1 and in patches to previous versions 9.6 and 10.0 that are planned to be used for production for run-2 at LHC. The Geant4 validation suite for EM physics has been extended and new validation results are shown in this work. In particular, the effect of gamma-nuclear interactions on EM shower shape at LHC energies is discussed.

  16. Development and Validation of a Mass Casualty Conceptual Model

    PubMed Central

    Culley, Joan M.; Effken, Judith A.

    2012-01-01

    Purpose To develop and validate a conceptual model that provides a framework for the development and evaluation of information systems for mass casualty events. Design The model was designed based on extant literature and existing theoretical models. A purposeful sample of 18 experts validated the model. Open-ended questions, as well as a 7-point Likert scale, were used to measure expert consensus on the importance of each construct and its relationship in the model and the usefulness of the model to future research. Methods Computer-mediated applications were used to facilitate a modified Delphi technique through which a panel of experts provided validation for the conceptual model. Rounds of questions continued until consensus was reached, as measured by an interquartile range (no more than 1 scale point for each item); stability (change in the distribution of responses less than 15% between rounds); and percent agreement (70% or greater) for indicator questions. Findings Two rounds of the Delphi process were needed to satisfy the criteria for consensus or stability related to the constructs, relationships, and indicators in the model. The panel reached consensus or sufficient stability to retain all 10 constructs, 9 relationships, and 39 of 44 indicators. Experts viewed the model as useful (mean of 5.3 on a 7-point scale). Conclusions Validation of the model provides the first step in understanding the context in which mass casualty events take place and identifying variables that impact outcomes of care. Clinical Relevance This study provides a foundation for understanding the complexity of mass casualty care, the roles that nurses play in mass casualty events, and factors that must be considered in designing and evaluating information-communication systems to support effective triage under these conditions. PMID:20487188
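
    The consensus criteria quoted above (interquartile range of at most 1 scale point, less than 15% change in the response distribution between rounds, and at least 70% agreement) can be checked programmatically. The sketch below is one possible reading of those criteria, with hypothetical 7-point ratings and an assumed agreement threshold of ratings of 5 or higher; it is not the instrument used in the study.

```python
# Sketch of Delphi consensus/stability checks for a single indicator across
# two rounds of 7-point Likert ratings. The stability measure (total-variation
# distance between round distributions) is one interpretation of "change in
# the distribution of responses"; ratings are hypothetical.
import numpy as np

def consensus_reached(round1, round2, agree_at=5):
    r1, r2 = np.asarray(round1), np.asarray(round2)

    iqr = np.percentile(r2, 75) - np.percentile(r2, 25)        # <= 1 scale point
    bins = np.arange(0.5, 8.5)                                  # 7 Likert categories
    p1, _ = np.histogram(r1, bins=bins)
    p2, _ = np.histogram(r2, bins=bins)
    shift = 0.5 * np.abs(p1 / p1.sum() - p2 / p2.sum()).sum()   # < 0.15 between rounds
    agreement = np.mean(r2 >= agree_at)                         # >= 0.70

    return iqr <= 1.0 and shift < 0.15 and agreement >= 0.70

round1 = [6, 5, 7, 6, 5, 6, 7, 4, 6, 5, 6, 7, 6, 5, 6, 7, 6, 6]
round2 = [6, 6, 7, 6, 5, 6, 7, 5, 6, 6, 6, 7, 6, 5, 6, 7, 6, 6]
print("consensus:", consensus_reached(round1, round2))
```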

  17. Verification and Validation of Model-Based Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles; Koga, Dennis (Technical Monitor)

    2001-01-01

    This paper presents a three year project (FY99 to FY01) on the verification and validation of model based autonomous systems. The topics include: 1) Project Profile; 2) Model-Based Autonomy; 3) The Livingstone MIR; 4) MPL2SMV; 5) Livingstone to SMV Translation; 6) Symbolic Model Checking; 7) From Livingstone Models to SMV Models; 8) Application In-Situ Propellant Production; 9) Closed-Loop Verification Principle; 10) Livingstone PathFinder (LPF); 11) Publications and Presentations; and 12) Future Directions. This paper is presented in viewgraph form.

  18. Validation of a tuber blight (Phytophthora infestans) prediction model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  19. Bibliometric Modeling Processes and the Empirical Validity of Lotka's Law.

    ERIC Educational Resources Information Center

    Nicholls, Paul Travis

    1989-01-01

    Examines the elements involved in fitting a bibliometric model to empirical data, proposes a consistent methodology for applying Lotka's law, and presents the results of an empirical test of the methodology. The results are discussed in terms of the validity of Lotka's law and the suitability of the proposed methodology. (49 references) (CLB)

  20. A Model for Investigating Predictive Validity at Highly Selective Institutions.

    ERIC Educational Resources Information Center

    Gross, Alan L.; And Others

    A statistical model for investigating predictive validity at highly selective institutions is described. When the selection ratio is small, one must typically deal with a data set containing relatively large amounts of missing data on both criterion and predictor variables. Standard statistical approaches are based on the strong assumption that…

  1. Hydrologic and water quality models: Use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper introduces a special collection of 22 research articles that present and discuss calibration and validation concepts in detail for hydrologic and water quality models by their developers and presents a broad framework for developing the American Society of Agricultural and Biological Engi...

  2. ID Model Construction and Validation: A Multiple Intelligences Case

    ERIC Educational Resources Information Center

    Tracey, Monica W.; Richey, Rita C.

    2007-01-01

    This is a report of a developmental research study that aimed to construct and validate an instructional design (ID) model that incorporates the theory and practice of multiple intelligences (MI). The study consisted of three phases. In phase one, the theoretical foundations of multiple Intelligences and ID were examined to guide the development…

  3. Validating Work Discrimination and Coping Strategy Models for Sexual Minorities

    ERIC Educational Resources Information Center

    Chung, Y. Barry; Williams, Wendi; Dispenza, Franco

    2009-01-01

    The purpose of this study was to validate and expand on Y. B. Chung's (2001) models of work discrimination and coping strategies among lesbian, gay, and bisexual persons. In semistructured individual interviews, 17 lesbians and gay men reported 35 discrimination incidents and their related coping strategies. Responses were coded based on Chung's…

  4. Hydrologic and water quality models: Key calibration and validation topics

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As a continuation of efforts to provide a common background and platform for accordant development of calibration and validation (C/V) engineering practices, ASABE members worked to determine critical topics related to model C/V, perform a synthesis of the Moriasi et al. (2012) special collection of...

  5. MODELS FOR SUBMARINE OUTFALL - VALIDATION AND PREDICTION UNCERTAINTIES

    EPA Science Inventory

    This address reports on some efforts to verify and validate dilution models, including those found in Visual Plumes. This is done in the context of problem experience: a range of problems, including different pollutants such as bacteria; scales, including near-field and far-field...

  6. Dynamic modelling and experimental validation of three wheeled tilting vehicles

    NASA Astrophysics Data System (ADS)

    Amati, Nicola; Festini, Andrea; Pelizza, Luigi; Tonoli, Andrea

    2011-06-01

    The present paper describes the study of the stability in straight running of a three-wheeled tilting vehicle for urban and sub-urban mobility. The analysis was carried out by developing a multibody model in the Matlab/Simulink SimMechanics environment. An Adams-Motorcycle model and an equivalent analytical model were developed for cross-validation and for highlighting the similarities with the lateral dynamics of motorcycles. Field tests were carried out to validate the model and identify some critical parameters, such as the damping on the steering system. The stability analysis demonstrates that the lateral dynamic motions are characterised by vibration modes that are similar to those of a motorcycle. Additionally, it shows that the wobble mode is significantly affected by the castor trail, whereas it is only slightly affected by the dynamics of the front suspension. For the present case study, the frame compliance also has no influence on the weave and wobble.

  7. Climate Model Datasets on Earth System Grid II (ESG II)

    DOE Data Explorer

    Earth System Grid (ESG) is a project that combines the power and capacity of supercomputers, sophisticated analysis servers, and datasets on the scale of petabytes. The goal is to provide a seamless distributed environment that allows scientists in many locations to work with large-scale data, perform climate change modeling and simulation, and share results in innovative ways. Though ESG is more about the computing environment than the data, several catalogs of data are available at the web site and can be browsed or searched. Most of the datasets are restricted to registered users, but several are open to unrestricted access.

  8. Development, Selection, and Validation of Tumor Growth Models

    NASA Astrophysics Data System (ADS)

    Shahmoradi, Amir; Lima, Ernesto; Oden, J. Tinsley

    In recent years, a multitude of different mathematical approaches have been taken to develop multiscale models of solid tumor growth. Prime successful examples include the lattice-based, agent-based (off-lattice), and phase-field approaches, or a hybrid of these models applied to multiple scales of tumor, from subcellular to tissue level. Of overriding importance is the predictive power of these models, particularly in the presence of uncertainties. This presentation describes our attempt at developing lattice-based, agent-based and phase-field models of tumor growth and assessing their predictive power through new adaptive algorithms for model selection and model validation embodied in the Occam Plausibility Algorithm (OPAL), which brings together model calibration, determination of the sensitivities of outputs to parameter variances, and calculation of model plausibilities for model selection.

  9. Aggregating validity indicators embedded in Conners' CPT-II outperforms individual cutoffs at separating valid from invalid performance in adults with traumatic brain injury.

    PubMed

    Erdodi, Laszlo A; Roth, Robert M; Kirsch, Ned L; Lajiness-O'neill, Renee; Medoff, Brent

    2014-08-01

    Continuous performance tests (CPT) provide a useful paradigm to assess vigilance and sustained attention. However, few established methods exist to assess the validity of a given response set. The present study examined embedded validity indicators (EVIs) previously found effective at dissociating valid from invalid performance in relation to well-established performance validity tests in 104 adults with TBI referred for neuropsychological testing. Findings suggest that aggregating EVIs increases their signal detection performance. While individual EVIs performed well at their optimal cutoffs, two specific combinations of these five indicators generally produced the best classification accuracy. A CVI-5A ≥3 had a specificity of .92-.95 and a sensitivity of .45-.54. At ≥4 the CVI-5B had a specificity of .94-.97 and sensitivity of .40-.50. The CVI-5s provide a single numerical summary of the cumulative evidence of invalid performance within the CPT-II. Results support the use of a flexible, multivariate approach to performance validity assessment. PMID:24957927
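
    Evaluating an aggregated index such as the CVI-5 against a criterion grouping comes down to applying a cutoff and tabulating sensitivity and specificity. The sketch below illustrates that calculation with hypothetical counts of elevated indicators and an assumed cutoff; it does not reproduce the study's indicators or data.

```python
# Sketch: sensitivity and specificity of an aggregated validity index at a
# chosen cutoff, given a known valid/invalid grouping. All data hypothetical.
import numpy as np

def classification_accuracy(scores, invalid, cutoff):
    """Sensitivity/specificity of `scores >= cutoff` for flagging invalid cases."""
    scores = np.asarray(scores)
    invalid = np.asarray(invalid, dtype=bool)
    flagged = scores >= cutoff
    sensitivity = np.mean(flagged[invalid])       # invalid cases correctly flagged
    specificity = np.mean(~flagged[~invalid])     # valid cases correctly passed
    return sensitivity, specificity

# Number of elevated embedded validity indicators (0-5) per examinee.
cvi5 = np.array([0, 1, 0, 2, 4, 5, 1, 3, 0, 2, 5, 4, 1, 0, 3, 2])
invalid = np.array([0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0])

sens, spec = classification_accuracy(cvi5, invalid, cutoff=3)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```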

  10. Finite Element Model Development and Validation for Aircraft Fuselage Structures

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.

    2000-01-01

    The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results. The increased frequency range results in a corresponding increase in the number of modes, modal density and spatial resolution requirements. In this study, conventional modal tests using accelerometers are complemented with Scanning Laser Doppler Velocimetry and Electro-Optic Holography measurements to further resolve the spatial response characteristics. Whenever possible, component and subassembly modal tests are used to validate the finite element models at lower levels of assembly. Normal mode predictions for different finite element representations of components and assemblies are compared with experimental results to assess the most accurate techniques for modeling aircraft fuselage type structures.

  11. Validating the BHR RANS model for variable density turbulence

    SciTech Connect

    Israel, Daniel M; Gore, Robert A; Stalsberg - Zarling, Krista L

    2009-01-01

    The BHR RANS model is a turbulence model for multi-fluid flows in which density variation plays a strong role in the turbulence processes. In this paper, the usefulness of BHR is demonstrated over a wide range of flows that include the effects of shear, buoyancy, and shocks. The results are in good agreement with experimental and DNS data across the entire set of validation cases, with no need to retune model coefficients between cases. The model has potential application to a number of aerospace-related flow problems.

  12. Criteria for Validating Mouse Models of Psychiatric Diseases

    PubMed Central

    Chadman, Kathryn K.; Yang, Mu; Crawley, Jacqueline N.

    2010-01-01

    Animal models of human diseases are in widespread use for biomedical research. Mouse models with a mutation in a single gene or multiple genes are excellent research tools for understanding the role of a specific gene in the etiology of a human genetic disease. Ideally, the mouse phenotypes will recapitulate the human phenotypes exactly. However, exact matches are rare, particularly in mouse models of neuropsychiatric disorders. This article summarizes the current strategies for optimizing the validity of a mouse model of a human brain dysfunction. We address the common question raised by molecular geneticists and clinical researchers in psychiatry, “what is a ‘good enough’ mouse model”? PMID:18484083

  13. Validation and upgrading of physically based mathematical models

    NASA Technical Reports Server (NTRS)

    Duval, Ronald

    1992-01-01

    The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.

  14. Experimentally validated finite element model of electrocaloric multilayer ceramic structures

    SciTech Connect

    Smith, N. A. S.; Correia, T. M.; Rokosz, M. K. E-mail: maciej.rokosz@npl.co.uk

    2014-07-28

    A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.
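
    The underlying physics is two-dimensional transient heat conduction with the electrocaloric effect entering as a source term. The sketch below illustrates that idea with a simple explicit finite-difference scheme rather than the paper's finite element model; the geometry, material properties, source pulse, and fixed-temperature boundary are hypothetical, and radiative and convective effects are omitted.

```python
# Illustrative explicit finite-difference solution of 2-D transient heat
# conduction with a volumetric source term standing in for the electrocaloric
# effect. All values are hypothetical placeholders.
import numpy as np

nx = ny = 41
dx = 1e-4                          # 0.1 mm grid spacing
k, rho, cp = 1.5, 5800.0, 430.0    # W/m/K, kg/m^3, J/kg/K (placeholder values)
alpha = k / (rho * cp)
dt = 0.2 * dx**2 / alpha           # stable explicit time step (dt < dx^2 / (4*alpha))

T = np.full((ny, nx), 25.0)        # deg C
source = np.zeros_like(T)
source[15:26, 15:26] = 2e8         # W/m^3 pulse in the active region (hypothetical)

for step in range(200):
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    q = source if step < 50 else 0.0            # field applied for the first 50 steps
    T = T + dt * (alpha * lap + q / (rho * cp))
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 25.0   # fixed-temperature boundary

print(f"peak temperature rise: {T.max() - 25.0:.3f} K")
```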

  15. Experimentally validated finite element model of electrocaloric multilayer ceramic structures

    NASA Astrophysics Data System (ADS)

    Smith, N. A. S.; Rokosz, M. K.; Correia, T. M.

    2014-07-01

    A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.

  16. Propeller aircraft interior noise model utilization study and validation

    NASA Technical Reports Server (NTRS)

    Pope, L. D.

    1984-01-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  17. Propeller aircraft interior noise model utilization study and validation

    NASA Astrophysics Data System (ADS)

    Pope, L. D.

    1984-09-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  18. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and Space Station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  19. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  20. Validation of a finite element model of the human metacarpal.

    PubMed

    Barker, D S; Netherway, D J; Krishnan, J; Hearn, T C

    2005-03-01

    Implant loosening and mechanical failure of components are frequently reported following metacarpophalangeal (MCP) joint replacement. Studies of the mechanical environment of the MCP implant-bone construct are rare. The objective of this study was to evaluate the predictive ability of a finite element model of the intact second human metacarpal to provide a validated baseline for further mechanical studies. A right index human metacarpal was subjected to torsion and combined axial/bending loading using strain gauge (SG) and 3D finite element (FE) analysis. Four different representations of bone material properties were considered. Regression analyses were performed comparing maximum and minimum principal surface strains taken from the SG and FE models. Regression slopes close to unity and high correlation coefficients were found when the diaphyseal cortical shell was modelled as anisotropic and cancellous bone properties were derived from quantitative computed tomography. The inclusion of anisotropy for cortical bone was strongly influential in producing high model validity whereas variation in methods of assigning stiffness to cancellous bone had only a minor influence. The validated FE model provides a tool for future investigations of current and novel MCP joint prostheses. PMID:15642506
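
    The validation metric here is a regression of FE-predicted against strain-gauge-measured principal surface strains, with a slope near unity and a high correlation coefficient indicating agreement. A minimal sketch of that comparison follows, using hypothetical strain values.

```python
# Sketch: regression of FE-predicted vs strain-gauge-measured principal
# surface strains. Strain values are hypothetical.
import numpy as np
from scipy.stats import linregress

sg_strain = np.array([-850, -430, -120, 150, 310, 620, 940, 1180])   # microstrain
fe_strain = np.array([-805, -455, -100, 140, 330, 655, 900, 1210])   # microstrain

fit = linregress(sg_strain, fe_strain)
print(f"slope={fit.slope:.3f}, intercept={fit.intercept:.1f} microstrain, "
      f"r^2={fit.rvalue**2:.3f}")
```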

  1. Model validation and selection based on inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, Thomas; Carvajal González, Sergio; Hanss, Michael

    2012-10-01

    In this work, a method for the validation of models in general, and the selection of the most appropriate model in particular, is presented. As an industrially relevant example, a Finite Element (FE) model of a brake pad is investigated and identified with particular respect to uncertainties. The identification is based on inverse fuzzy arithmetic and consists of two stages. In the first stage, the eigenfrequencies of the brake pad are considered, and for three different material models, a set of fuzzy-valued parameters is identified on the basis of measurement values. Based on these identified parameters and a resimulation of the system with these parameters, a model validation is performed which takes into account both the model uncertainties and the output uncertainties. In the second stage, the most appropriate material model is used in the FE model for the computation of frequency response functions between excitation point and three measurement points. Again, the parameters of the model are identified on the basis of three corresponding measurement signals and a resimulation is conducted.

  2. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns.

    PubMed

    Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain

    2015-01-01

    Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model's predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917
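
    The core of the Novelty Search step is a novelty score: a candidate pattern's mean distance to its k nearest neighbours among patterns already seen. The sketch below illustrates that score with hypothetical two-dimensional pattern descriptors; it is not the authors' Pattern Space Exploration implementation.

```python
# Minimal sketch of a novelty score: mean Euclidean distance from a candidate
# pattern descriptor to its k nearest neighbours in an archive of patterns
# already discovered. Descriptors are hypothetical.
import numpy as np

def novelty(candidate, archive, k=3):
    """Mean distance from `candidate` to its k nearest archive points."""
    archive = np.asarray(archive, dtype=float)
    dists = np.linalg.norm(archive - np.asarray(candidate, dtype=float), axis=1)
    return np.sort(dists)[:k].mean()

archive = [(0.1, 0.2), (0.15, 0.25), (0.8, 0.9), (0.05, 0.3), (0.7, 0.85)]
candidates = [(0.12, 0.22), (0.5, 0.1), (0.9, 0.05)]

scores = {c: novelty(c, archive) for c in candidates}
most_novel = max(scores, key=scores.get)
print("novelty scores:", scores, "-> explore", most_novel)
```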

  3. Templeton prediction model underestimates IVF success in an external validation.

    PubMed

    van Loendersloot, L L; van Wely, M; Repping, S; van der Veen, F; Bossuyt, P M M

    2011-06-01

    Prediction models for IVF can be used to identify couples that will benefit from IVF treatment. Currently there is only one prediction model with good predictive performance for pregnancy chances after IVF. That model was developed almost 15 years ago, and since IVF has progressed substantially during the last two decades, it is questionable whether the model is still valid in current clinical practice. The objective of this study was to validate the prediction model of Templeton for calculating pregnancy chances after IVF. The performance of the prediction model was assessed in terms of discrimination, i.e. the area under the receiver operating characteristic (ROC) curve, and calibration. Likely causes for miscalibration were evaluated by refitting the Templeton model to the study data. The area under the ROC curve for the Templeton model was 0.61. Calibration showed a significant and systematic underestimation of success in IVF. Although the Templeton model can distinguish somewhat between women with a high and a low success rate in IVF, it systematically underestimates pregnancy chances and therefore has no real value for current IVF practice. PMID:21493154
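
    The two performance notions used here, discrimination (area under the ROC curve) and calibration (predicted versus observed pregnancy rates), can be illustrated with scikit-learn as below. The predicted probabilities and outcomes are synthetic and merely mimic a systematic underestimation; they are not the study data.

```python
# Sketch: discrimination (AUC) and calibration (observed vs predicted rates)
# for a binary-outcome prediction model, on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
predicted = rng.uniform(0.05, 0.45, size=500)            # model-predicted chances
# Simulate systematic underestimation: true chance exceeds the prediction.
outcome = rng.uniform(size=500) < np.clip(predicted + 0.10, 0.0, 1.0)

auc = roc_auc_score(outcome, predicted)
obs_rate, mean_pred = calibration_curve(outcome, predicted, n_bins=5)

print(f"AUC = {auc:.2f}")
for p, o in zip(mean_pred, obs_rate):
    print(f"mean predicted {p:.2f} -> observed {o:.2f}")
```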

  4. Potential of Ceilometer Networks for Validation of models

    NASA Astrophysics Data System (ADS)

    Wagner, Frank; Mattis, Ina; Flentje, Harald

    2016-04-01

    Various models exist that can treat aerosol particles explicitly. Owing to the limited availability of high-quality profiles of particle properties, most models are validated only against ground-based particle measurements and/or columnar particle amounts, e.g. the aerosol optical depth derived from satellites. Modern ceilometers are capable of providing aerosol vertical profiles, and because they are not too expensive, several national weather services operate networks of ceilometers. The Deutscher Wetterdienst currently operates a ceilometer network of about 75 devices providing aerosol profiles; within the next few years the number of instruments will double. Each station always has several neighbouring stations within a distance of 100 km. Recently, automated routines for quality checks and calibration of the devices were developed and implemented. Such automated tools, together with the good spatial coverage, make the DWD ceilometer network an excellent tool for model validation with respect to aerosol particle properties. The Copernicus Atmosphere Monitoring Service provides operational forecasts of five aerosol species (sea salt, dust, sulphate, as well as organic and black carbon, which are summarized as biomass burning aerosol) and of the boundary layer height. These parameters can be compared with the outcome of ceilometer measurements, and consequently the model can be validated. In particular, long-range transported aerosol particles above the boundary layer can be investigated. At the conference the network will be presented, the validation strategy for the CAMS models using ceilometer measurements will be explained, and results will be shown. An outlook on international measuring networks will be given.

  5. Predicting the ungauged basin: model validation and realism assessment

    NASA Astrophysics Data System (ADS)

    van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert

    2016-04-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) [1] led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this study [2] we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. We do not present a generic approach that can be transferred to other ungauged catchments, but we aim to show how clever model design and alternative data acquisition can result in a valuable hydrological model for ungauged catchments. [1] Sivapalan, M., Takeuchi, K., Franks, S., Gupta, V., Karambiri, H., Lakshmi, V., et al. (2003). IAHS decade on predictions in ungauged basins (PUB), 2003-2012: shaping an exciting future for the hydrological sciences. Hydrol. Sci. J. 48, 857-880. doi: 10.1623/hysj.48.6.857.51421 [2] van Emmerik, T., Mulder, G., Eilander, D., Piet, M. and Savenije, H. (2015). Predicting the ungauged basin: model validation and realism assessment

  6. A verification and validation process for model-driven engineering

    NASA Astrophysics Data System (ADS)

    Delmas, R.; Pires, A. F.; Polacsek, T.

    2013-12-01

    Model Driven Engineering practitioners already benefit from many well established verification tools, for Object Constraint Language (OCL), for instance. Recently, constraint satisfaction techniques have been brought to Model-Driven Engineering (MDE) and have shown promising results on model verification tasks. With all these tools, it becomes possible to provide users with formal support from early model design phases to model instantiation phases. In this paper, a selection of such tools and methods is presented, and an attempt is made to define a verification and validation process for model design and instance creation centered on UML (Unified Modeling Language) class diagrams and declarative constraints, and involving the selected tools. The suggested process is illustrated with a simple example.

  7. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns

    PubMed Central

    Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain

    2015-01-01

    Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917

  8. KINEROS2-AGWA: Model Use, Calibration, and Validation

    NASA Technical Reports Server (NTRS)

    Goodrich, D C.; Burns, I. S.; Unkrich, C. L.; Semmens, D. J.; Guertin, D. P.; Hernandez, M.; Yatheendradas, S.; Kennedy, J. R.; Levick, L. R..

    2013-01-01

    KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.
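
    At the heart of K2's overland-flow elements is the kinematic wave equation, dh/dt + d(alpha*h^m)/dx = rainfall excess, coupled to an infiltration model and solved by finite differences. The sketch below is a deliberately simple explicit upwind solution for a single plane with hypothetical slope, roughness, and rainfall; it is not the KINEROS2 numerical scheme.

```python
# Minimal sketch of kinematic-wave overland flow on one plane, solved with an
# explicit upwind finite-difference scheme. Parameters are hypothetical.
import numpy as np

length, nx = 100.0, 101                   # plane length (m), nodes
dx = length / (nx - 1)
slope, n_manning = 0.05, 0.05
alpha = np.sqrt(slope) / n_manning        # Manning kinematic-wave coefficient
m = 5.0 / 3.0
rain_excess = 30.0 / 1000.0 / 3600.0      # 30 mm/h expressed in m/s

h = np.zeros(nx)                          # flow depth (m)
dt = 1.0                                  # s; satisfies the CFL limit for this case
for _ in range(1800):                     # 30-minute storm
    q = alpha * h**m                      # unit-width discharge (m^2/s)
    h_new = np.empty_like(h)
    h_new[0] = h[0] - dt / dx * q[0] + dt * rain_excess            # no upstream inflow
    h_new[1:] = h[1:] - dt / dx * (q[1:] - q[:-1]) + dt * rain_excess
    h = np.maximum(h_new, 0.0)

print(f"outlet depth after 30 min: {h[-1] * 1000:.1f} mm, "
      f"unit discharge: {alpha * h[-1]**m * 1000:.2f} l/s per metre")
```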

  9. Development of a Validated Model of Ground Coupling

    SciTech Connect

    Metz, P. D.

    1980-01-01

    A research program at Brookhaven National Laboratory (BNL) studies ground coupling, the use of the earth as a heat source/sink or storage element for solar heat pump space conditioning systems. This paper outlines the analytical and experimental research to date toward the development of an experimentally validated model of ground coupling and, based on experimental results from December 1978 to September 1979, explores the sensitivity of present model predictions to variations in thermal conductivity and other factors. Ways in which the model can be further refined are discussed.

  10. Verifying and Validating Proposed Models for FSW Process Optimization

    NASA Technical Reports Server (NTRS)

    Schneider, Judith

    2008-01-01

    This slide presentation reviews Friction Stir Welding (FSW) and attempts to model the process in order to optimize and improve it. Studies are ongoing to validate and refine the model of metal flow in the FSW process. Slides show the conventional FSW process, a couple of weld tool designs, and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) microstructure features, (2) flow streamlines, (3) steady-state nature, and (4) grain refinement mechanisms.

  11. Validation of the SUNY Satellite Model in a Meteosat Environment

    SciTech Connect

    Perez, R.; Schlemmer, J.; Renne, D.; Cowlin, S.; George, R.; Bandyopadhyay, B.

    2009-01-01

    The paper presents a validation of the SUNY satellite-to-irradiance model against four ground-truth stations from the Indian solar radiation network located in and around the province of Rajasthan, India. The SUNY model had initially been developed and tested to process US weather satellite data from the GOES series and has been used as part of the production of the US National Solar Resource Data Base (NSRDB). Here the model is applied to process data from the European weather satellites Meteosat 5 and 7.

  12. Development and validation of a building design waste reduction model.

    PubMed

    Llatas, C; Osmani, M

    2016-10-01

    Reduction in construction waste is a pressing need in many countries. The design of building elements is considered a pivotal process to achieve waste reduction at source, which enables an informed prediction of their wastage reduction levels. However, the lack of quantitative methods linking design strategies to waste reduction hinders designing out waste practice in building projects. Therefore, this paper addresses this knowledge gap through the design and validation of a Building Design Waste Reduction Strategies (Waste ReSt) model that aims to investigate the relationships between design variables and their impact on onsite waste reduction. The Waste ReSt model was validated in a real-world case study involving 20 residential buildings in Spain. The validation process comprises three stages. Firstly, design waste causes were analyzed. Secondly, design strategies were applied, leading to several alternative low-waste building elements. Finally, their potential source reduction levels were quantified and discussed within the context of the literature. The Waste ReSt model could serve as an instrumental tool to simulate designing out strategies in building projects. The knowledge provided by the model could help project stakeholders to better understand the correlation between the design process and waste sources and subsequently implement design practices for low-waste buildings. PMID:27292581

  13. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it casts the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.
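
    To make the IPM idea concrete, the sketch below computes an affine Interval Predictor Model of minimal average spread that contains every observation, posed as a linear program. The affine bounds, the synthetic data, and the use of scipy's linprog are illustrative simplifications of the optimization-based strategy described above, not the paper's formulation.

```python
# Hedged IPM sketch: find lower/upper lines of minimal mean spread that
# enclose all observations. Data and basis choice are illustrative only.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 30)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.2, 30)       # synthetic "validation" data

# Decision variables z = [a0, a1, b0, b1]: lower line a0+a1*x, upper line b0+b1*x.
c = np.array([-1.0, -x.mean(), 1.0, x.mean()])     # objective = mean(upper - lower)
A_ub = np.block([[np.column_stack((np.ones_like(x), x)), np.zeros((x.size, 2))],
                 [np.zeros((x.size, 2)), -np.column_stack((np.ones_like(x), x))]])
b_ub = np.concatenate((y, -y))                     # lower(x_i) <= y_i and upper(x_i) >= y_i
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4, method="highs")

a0, a1, b0, b1 = res.x
print(f"lower: {a0:.2f} + {a1:.2f} x    upper: {b0:.2f} + {b1:.2f} x")
```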

  14. PASTIS: Bayesian extrasolar planet validation - II. Constraining exoplanet blend scenarios using spectroscopic diagnoses

    NASA Astrophysics Data System (ADS)

    Santerne, A.; Díaz, R. F.; Almenara, J.-M.; Bouchy, F.; Deleuil, M.; Figueira, P.; Hébrard, G.; Moutou, C.; Rodionov, S.; Santos, N. C.

    2015-08-01

    The statistical validation of transiting exoplanets proved to be an efficient technique to secure the nature of small exoplanet signals which cannot be established by purely spectroscopic means. However, the spectroscopic diagnoses are providing us with useful constraints on the presence of blended stellar contaminants. In this paper, we present how a contaminating star affects the measurements of the various spectroscopic diagnoses as a function of the parameters of the target and contaminating stars using the model implemented into the PASTIS planet-validation software. We find particular cases for which a blend might produce a large radial velocity signal but no bisector variation. It might also produce a bisector variation anticorrelated with the radial velocity one, as in the case of stellar spots. In those cases, the full width at half-maximum variation provides complementary constraints. These results can be used to constrain blend scenarios for transiting planet candidates or radial velocity planets. We review all the spectroscopic diagnoses reported in the literature so far, especially the ones to monitor the line asymmetry. We estimate their uncertainty and compare their sensitivity to blends. Based on that, we recommend the use of BiGauss which is the most sensitive diagnosis to monitor line-profile asymmetry. In this paper, we also investigate the sensitivity of the radial velocities to constrain blend scenarios and develop a formalism to estimate the level of dilution of a blended signal. Finally, we apply our blend model to re-analyse the spectroscopic diagnoses of HD 16702, an unresolved face-on binary which exhibits bisector variations.

  15. A benchmark for the validation of solidification modelling algorithms

    NASA Astrophysics Data System (ADS)

    Kaschnitz, E.; Heugenhauser, S.; Schumacher, P.

    2015-06-01

    This work presents two three-dimensional solidification models, which were solved by several commercial solvers (MAGMASOFT, FLOW-3D, ProCAST, WinCast, ANSYS, and OpenFOAM). Surprisingly, the results show noticeable differences. The results are analyzed similarly to a round-robin test procedure to obtain reference values for temperatures and their uncertainties at selected positions in the model. The first model is similar to an adiabatic calorimeter with an aluminum alloy solidifying in a copper block. For this model, an analytical solution for the overall temperature at steady state can be calculated. The second model implements additional heat transfer boundary conditions at outer faces. The geometry of the models, the initial and boundary conditions as well as the material properties are kept as simple as possible but, nevertheless, close to a realistic solidification situation. The resulting temperature values can be used to validate self-written solidification solvers and check the accuracy of commercial solidification programs.

  16. Rationality Validation of a Layered Decision Model for Network Defense

    SciTech Connect

    Wei, Huaqiang; Alves-Foss, James; Zhang, Du; Frincke, Deb

    2007-08-31

    We propose a cost-effective network defense strategy built on three key decision layers: security policies, defense strategies, and real-time defense tactics for countering immediate threats. A layered decision model (LDM) can be used to capture this decision process. The LDM helps decision-makers gain insight into the hierarchical relationships among inter-connected entities and decision types, and supports the selection of cost-effective defense mechanisms to safeguard computer networks. To be effective as a business tool, it is first necessary to validate the rationality of the model before applying it to real-world business cases. This paper describes our efforts in validating the LDM rationality through simulation.

  17. Approaches to Validation of Models for Low Gravity Fluid Behavior

    NASA Technical Reports Server (NTRS)

    Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad

    2005-01-01

    This paper details the authors' experiences with the validation of computer models to predict low gravity fluid behavior. It reviews the literature of low gravity fluid behavior as a starting point for developing a baseline set of test cases. It examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues seem to be that: most of the data are described by empirical correlation rather than fundamental relation; detailed measurements of the flow field have not been made; free surface shapes are observed, but through thick plastic cylinders, and are therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days, but the zero-gravity time available has been only seconds.

  18. Seine estuary modelling and AirSWOT measurements validation

    NASA Astrophysics Data System (ADS)

    Chevalier, Laetitia; Lyard, Florent; Laignel, Benoit

    2013-04-01

    In the context of global climate change, knowing water fluxes and storage, from the global scale to the local scale, is a crucial issue. The future satellite SWOT (Surface Water and Ocean Topography) mission, dedicated to surface water observation, is proposed to meet this challenge. SWOT's main payload will be a Ka-band Radar Interferometer (KaRIn). To validate this new kind of measurement, preparatory airborne campaigns (called AirSWOT) are currently being designed. AirSWOT will carry an interferometer similar to KaRIn: Kaspar-Ka-band SWOT Phenomenology Airborne Radar. Some campaigns are planned in France in 2014. During these campaigns, the plane will fly over the Seine River basin, especially to observe its estuary, the upstream river main channel (to quantify river-aquifer exchange) and some wetlands. The present work objective is to validate the ability of AirSWOT and SWOT, using Seine estuary hydrodynamic modelling. In this context, field measurements will be collected by different teams such as GIP (Public Interest Group) Seine Aval, the GPMR (Rouen Seaport), SHOM (Hydrographic and Oceanographic Service of the Navy), the IFREMER (French Research Institute for Sea Exploitation), Mercator-Ocean, LEGOS (Laboratory of Space Study in Geophysics and Oceanography), ADES (Data Access Groundwater) ... . These datasets will be used first to validate AirSWOT measurements locally, and then to improve hydrodynamic simulations (using tidal boundary conditions, river and groundwater inflows ...) for AirSWOT data 2D validation. This modelling will also be used to estimate the benefit of the future SWOT mission for mid-latitude river hydrology. To do this modelling, the TUGOm barotropic model (Toulouse Unstructured Grid Ocean model 2D) is used. Preliminary simulations have been performed by first modelling and then combining two different regions: first the Seine River and its estuarine area, and secondly the English Channel. These two simulations are currently being

  19. ESEEM Analysis of Multi-Histidine Cu(II)-Coordination in Model Complexes, Peptides, and Amyloid-β

    PubMed Central

    2015-01-01

    We validate the use of ESEEM to predict the number of 14N nuclei coupled to a Cu(II) ion by the use of model complexes and two small peptides with well-known Cu(II) coordination. We apply this method to gain new insight into less explored aspects of Cu(II) coordination in amyloid-β (Aβ). Aβ has two coordination modes of Cu(II) at physiological pH. A controversy has existed regarding the number of histidine residues coordinated to the Cu(II) ion in component II, which is dominant at high pH (∼8.7) values. Importantly, with an excess amount of Zn(II) ions, as is the case in brain tissues affected by Alzheimer’s disease, component II becomes the dominant coordination mode, as Zn(II) selectively substitutes component I bound to Cu(II). We confirm that component II only contains single histidine coordination, using ESEEM and a set of model complexes. The ESEEM experiments carried out on systematically 15N-labeled peptides reveal that, in component II, His 13 and His 14 are more favored as equatorial ligands compared to His 6. Revealing molecular level details of subcomponents in metal ion coordination is critical in understanding the role of metal ions in Alzheimer’s disease etiology. PMID:25014537

  20. In-Drift Microbial Communities Model Validation Calculations

    SciTech Connect

    D. M. Jolley

    2001-09-24

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and in the laboratory, and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  1. In-Drift Microbial Communities Model Validation Calculation

    SciTech Connect

    D. M. Jolley

    2001-10-31

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and in the laboratory, and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  2. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    SciTech Connect

    D.M. Jolley

    2001-12-18

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and in the laboratory, and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  3. Methods for Geometric Data Validation of 3d City Models

    NASA Astrophysics Data System (ADS)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence their quality, plus the idea of quality itself is application dependent. Thus, concepts for definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point would be to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closedness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
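
    Two of the polygon-level checks mentioned above (closedness of the bounding ring and planarity within a tolerance) can be sketched as follows. The SVD plane fit, the tolerance values, and the sample wall polygon are illustrative choices, not the CityDoctor implementation.

```python
# Hedged sketch of polygon-level geometry checks on an (n, 3) vertex array.
import numpy as np

def ring_is_closed(vertices, tol=1e-6):
    """The bounding linear ring must end where it starts."""
    return np.linalg.norm(vertices[0] - vertices[-1]) <= tol

def is_planar(vertices, tol=1e-3):
    """Fit a best plane through the vertices (SVD) and check the largest offset."""
    pts = np.asarray(vertices, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                                 # direction of least variance
    return np.max(np.abs(centered @ normal)) <= tol

wall = np.array([[0, 0, 0], [4, 0, 0], [4, 0, 3], [0, 0, 3], [0, 0, 0]], float)
print(ring_is_closed(wall), is_planar(wall))        # expected: True True
```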

  4. PARALLEL MEASUREMENT AND MODELING OF TRANSPORT IN THE DARHT II BEAMLINE ON ETA II

    SciTech Connect

    Chambers, F W; Raymond, B A; Falabella, S; Lee, B S; Richardson, R A; Weir, J T; Davis, H A; Schultze, M E

    2005-05-31

    Successfully tuning the DARHT II transport beamline requires the close coupling of a model of the beam transport and the measurement of the beam observables as the beam conditions and magnet settings are varied. For the ETA II experiment using the DARHT II beamline components, this was achieved using the SUICIDE (Simple User Interface Connecting to an Integrated Data Environment) data analysis environment and the FITS (Fully Integrated Transport Simulation) model. The SUICIDE environment has direct access to the experimental beam transport data at acquisition and the FITS predictions of the transport for immediate comparison. The FITS model is coupled into the control system where it can read magnet current settings for real-time modeling. We find this integrated coupling is essential for model verification and the successful development of a tuning aid for the efficient convergence on a useable tune. We show real-time comparisons of simulation and experiment and explore the successes and limitations of this close-coupled approach.

  5. Cross-Validation of Aerobic Capacity Prediction Models in Adolescents.

    PubMed

    Burns, Ryan Donald; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Saint-Maurice, Pedro F; Welk, Greg J; Mahar, Matthew T

    2015-08-01

    Cardiorespiratory endurance is a component of health-related fitness. FITNESSGRAM recommends the Progressive Aerobic Cardiovascular Endurance Run (PACER) or One mile Run/Walk (1MRW) to assess cardiorespiratory endurance by estimating VO2 Peak. No research has cross-validated prediction models from both PACER and 1MRW, including the New PACER Model and PACER-Mile Equivalent (PACER-MEQ) using current standards. The purpose of this study was to cross-validate prediction models from PACER and 1MRW against measured VO2 Peak in adolescents. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years (Mean = 14.7 ± 1.3 years; 32 girls, 52 boys) who completed the PACER and 1MRW in addition to a laboratory maximal treadmill test to measure VO2 Peak. Multiple correlations among various models with measured VO2 Peak were considered moderately strong (R = 0.74-0.78), and prediction error (RMSE) ranged from 5.95 ml·kg⁻¹·min⁻¹ to 8.27 ml·kg⁻¹·min⁻¹. Criterion-referenced agreement into FITNESSGRAM's Healthy Fitness Zones was considered fair-to-good among models (Kappa = 0.31-0.62; Agreement = 75.5-89.9%; F = 0.08-0.65). In conclusion, prediction models demonstrated moderately strong linear relationships with measured VO2 Peak, fair prediction error, and fair-to-good criterion-referenced agreement with measured VO2 Peak into FITNESSGRAM's Healthy Fitness Zones. PMID:26186536
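
    For readers unfamiliar with the reported statistics, the sketch below computes an RMSE and a two-category Cohen's kappa by hand on made-up numbers. The data and the Healthy Fitness Zone cut-point of 42 ml/kg/min are purely illustrative and are not taken from the study.

```python
# Hedged sketch of the validation statistics quoted above, on invented data.
import numpy as np

measured = np.array([44.1, 39.5, 51.2, 36.8, 47.0, 42.5])   # measured VO2 Peak
predicted = np.array([41.9, 41.2, 48.3, 38.0, 45.1, 44.0])  # model-predicted VO2 Peak

rmse = np.sqrt(np.mean((predicted - measured) ** 2))

cut = 42.0                                    # hypothetical Healthy Fitness Zone cut-point
m_zone, p_zone = measured >= cut, predicted >= cut
p_obs = np.mean(m_zone == p_zone)             # observed classification agreement
p_chance = m_zone.mean() * p_zone.mean() + (1 - m_zone.mean()) * (1 - p_zone.mean())
kappa = (p_obs - p_chance) / (1 - p_chance)   # chance-corrected agreement

print(f"RMSE = {rmse:.2f} ml/kg/min, kappa = {kappa:.2f}")
```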

  6. Validation of an Urban Parameterization in a Mesoscale Model

    SciTech Connect

    Leach, M.J.; Chin, H.

    2001-07-19

    The Atmospheric Science Division at Lawrence Livermore National Laboratory uses the Naval Research Laboratory's Coupled Ocean-Atmosphere Mesoscale Prediction System (COAMPS) for both operations and research. COAMPS is a non-hydrostatic model, designed as a multi-scale simulation system ranging from synoptic down to meso, storm and local terrain scales. As model resolution increases, the forcing due to small-scale complex terrain features, including urban structures and surfaces, intensifies. An urban parameterization has been added to the Naval Research Laboratory's mesoscale model, COAMPS. The parameterization attempts to incorporate the effects of buildings and urban surfaces without explicitly resolving them, and includes modeling the mean flow to turbulence energy exchange, radiative transfer, the surface energy budget, and the addition of anthropogenic heat. The Chemical and Biological National Security Program's (CBNP) URBAN field experiment was designed to collect data to validate numerical models over a range of length and time scales. The experiment was conducted in Salt Lake City in October 2000. The scales ranged from circulation around single buildings to flow in the entire Salt Lake basin. Data from the field experiment includes tracer data as well as observations of mean and turbulence atmospheric parameters. Wind and turbulence predictions from COAMPS are used to drive a Lagrangian particle model, the Livermore Operational Dispersion Integrator (LODI). Simulations with COAMPS and LODI are used to test the sensitivity to the urban parameterization. Data from the field experiment, including the tracer data and the atmospheric parameters, are also used to validate the urban parameterization.

  7. Shoulder model validation and joint contact forces during wheelchair activities

    PubMed Central

    Morrow, Melissa M.B.; Kaufman, Kenton R.; An, Kai-Nan

    2010-01-01

    Chronic shoulder impingement is a common problem for manual wheelchair users. The loading associated with performing manual wheelchair activities of daily living is substantial and often at a high frequency. Musculoskeletal modeling and optimization techniques can be used to estimate the joint contact forces occurring at the shoulder to assess the soft tissue loading during an activity and to possibly identify activities and strategies that place manual wheelchair users at risk for shoulder injuries. The purpose of this study was to validate an upper extremity musculoskeletal model and apply the model to wheelchair activities for analysis of the estimated joint contact forces. Upper extremity kinematics and handrim wheelchair kinetics were measured over three conditions: level propulsion, ramp propulsion, and a weight relief lift. The experimental data were used as input to a subject-specific musculoskeletal model utilizing optimization to predict joint contact forces of the shoulder during all conditions. The model was validated using a mean absolute error calculation. Model results confirmed that ramp propulsion and weight relief lifts place the shoulder under significantly higher joint contact loading than level propulsion. In addition, they exhibit large superior contact forces that could contribute to impingement. This study highlights the potential impingement risk associated with both the ramp and weight relief lift activities. Level propulsion was shown to have a low relative risk of causing injury, but with consideration of the frequency with which propulsion is performed, this observation is not conclusive. PMID:20840833

  8. Statistical validation of high-dimensional models of growing networks

    NASA Astrophysics Data System (ADS)

    Medo, Matúš

    2014-03-01

    The abundance of models of complex networks and the current insufficient validation standards make it difficult to judge which models are strongly supported by data and which are not. We focus here on likelihood maximization methods for models of growing networks with many parameters and compare their performance on artificial and real datasets. While high dimensionality of the parameter space harms the performance of direct likelihood maximization on artificial data, this can be improved by introducing a suitable penalization term. Likelihood maximization on real data shows that the presented approach is able to discriminate among available network models. To make large-scale datasets accessible to this kind of analysis, we propose a subset sampling technique and show that it yields substantial model evidence in a fraction of the time necessary for the analysis of the complete data.
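
    A minimal sketch of the likelihood-maximization idea is given below for a one-parameter growing-network model in which each new node attaches to node j with probability proportional to k_j**alpha. The quadratic penalty stands in for the penalization term mentioned above; the model, the penalty, and all constants are illustrative assumptions rather than the paper's specification.

```python
# Hedged sketch: penalized maximum-likelihood fit of a preferential-attachment
# exponent from the recorded attachment history of a synthetic growing network.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
alpha_true, n_nodes = 1.2, 400

degrees, history = [1, 1], []                 # start from a single edge
for _ in range(n_nodes - 2):
    k = np.array(degrees, float)
    p = k ** alpha_true / (k ** alpha_true).sum()
    target = rng.choice(len(degrees), p=p)
    history.append((k.copy(), target))        # degrees seen by the arriving node, and its choice
    degrees[target] += 1
    degrees.append(1)

def neg_penalized_loglik(alpha, lam=0.1):
    ll = sum(alpha * np.log(k[t]) - np.log((k ** alpha).sum()) for k, t in history)
    return -(ll - lam * (alpha - 1.0) ** 2)   # penalty discourages extreme exponents

fit = minimize_scalar(neg_penalized_loglik, bounds=(0.0, 3.0), method="bounded")
print(f"true alpha = {alpha_true}, estimated alpha = {fit.x:.2f}")
```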

  9. Validation of a Hertzian contact model with nonlinear damping

    NASA Astrophysics Data System (ADS)

    Sierakowski, Adam

    2015-11-01

    Due to limited spatial resolution, most disperse particle simulation methods rely on simplified models for incorporating short-range particle interactions. In this presentation, we introduce a contact model that combines the Hertz elastic restoring force with a nonlinear damping force, requiring only material properties and no tunable parameters. We have implemented the model in a resolved-particle flow solver that implements the Physalis method, which accurately captures hydrodynamic interactions by analytically enforcing the no-slip condition on the particle surface. We summarize the results of a few numerical studies that suggest the validity of the contact model over a range of particle interaction intensities (i.e., collision Stokes numbers) when compared with experimental data. This work was supported by the National Science Foundation under Grant Number CBET1335965 and the Johns Hopkins University Modeling Complex Systems IGERT program.
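
    The sketch below integrates a head-on sphere-wall collision under a Hertz elastic force plus a nonlinear (Hunt-Crossley-type) damping term. The damping form and the damping constant are illustrative assumptions; the paper derives its coefficient from material properties alone, which is not reproduced here.

```python
# Hedged contact-model sketch: F = k*d^1.5 + c*d^1.5*ddot during overlap d > 0.
# Material values are aluminium-like placeholders; c is an illustrative scale.
import numpy as np
from scipy.integrate import solve_ivp

E, nu, R, rho = 70e9, 0.30, 1e-3, 2500.0          # sphere on a rigid wall
E_star = E / (1.0 - nu ** 2)
k = 4.0 / 3.0 * E_star * np.sqrt(R)               # Hertz stiffness [N/m^1.5]
m = rho * 4.0 / 3.0 * np.pi * R ** 3              # sphere mass [kg]
c = 0.3 * k                                       # illustrative damping scale

def rhs(t, y):
    d, v = y                                      # overlap and overlap rate
    dpos = max(d, 0.0)
    f = max(k * dpos ** 1.5 + c * dpos ** 1.5 * v, 0.0)   # no tensile contact force
    return [v, -f / m]

sol = solve_ivp(rhs, (0.0, 5e-5), [0.0, 1.0], max_step=1e-8)  # impact speed 1 m/s
print(f"rebound speed ~ {-sol.y[1, -1]:.2f} m/s (restitution < 1 due to damping)")
```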

  10. Experimental Validation and Applications of a Fluid Infiltration Model

    PubMed Central

    Kao, Cindy S.; Hunt, James R.

    2010-01-01

    Horizontal infiltration experiments were performed to validate a plug flow model that minimizes the number of parameters that must be measured. Water and silicone oil at three different viscosities were infiltrated into glass beads, desert alluvium, and silica powder. Experiments were also performed with negative inlet heads on air-dried silica powder, and with water and oil infiltrating into initially water moist silica powder. Comparisons between the data and model were favorable in most cases, with predictions usually within 40% of the measured data. The model is extended to a line source and small areal source at the ground surface to analytically predict the shape of two-dimensional wetting fronts. Furthermore, a plug flow model for constant flux infiltration agrees well with field data and suggests that the proposed model for a constant-head boundary condition can be effectively used to predict wetting front movement at heterogeneous field sites if averaged parameter values are used. PMID:20428480
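
    A plug-flow front under a constant inlet head admits a closed-form solution of Green-Ampt type, sketched below. The parameter values are illustrative and the formulation is a generic plug-flow sketch under stated assumptions, not necessarily identical to the authors' model.

```python
# Hedged plug-flow sketch for horizontal infiltration under a constant head:
# Darcy flux q = K*(h0 + hf)/x_f fills the pore space, so x_f grows as sqrt(t).
import numpy as np

K = 1.0e-6        # saturated hydraulic conductivity [m/s] (illustrative)
h0 = 0.05         # constant positive inlet head [m]
hf = 0.20         # suction at the wetting front [m]
d_theta = 0.35    # fillable porosity (theta_s - theta_i)

t = np.linspace(1.0, 3600.0, 5)
x_front = np.sqrt(2.0 * K * (h0 + hf) * t / d_theta)   # closed-form front position
for ti, xi in zip(t, x_front):
    print(f"t = {ti:7.0f} s   wetting front at x = {100 * xi:.2f} cm")
```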

  11. Experimental validation of flexible robot arm modeling and control

    NASA Technical Reports Server (NTRS)

    Ulsoy, A. Galip

    1989-01-01

    Flexibility is important for high speed, high precision operation of lightweight manipulators. Accurate dynamic modeling of flexible robot arms is needed. Previous work has mostly been based on linear elasticity with prescribed rigid body motions (i.e., no effect of flexible motion on rigid body motion). Little or no experimental validation of dynamic models for flexible arms is available. Experimental results are also limited for flexible arm control. Researchers include the effects of prismatic as well as revolute joints. They investigate the effect of full coupling between the rigid and flexible motions, and of axial shortening, and consider the control of flexible arms using only additional sensors.

  12. Validating a spatially distributed hydrological model with soil morphology data

    NASA Astrophysics Data System (ADS)

    Doppler, T.; Honti, M.; Zihlmann, U.; Weisskopf, P.; Stamm, C.

    2013-10-01

    Spatially distributed hydrological models are popular tools in hydrology and they are claimed to be useful to support management decisions. Despite the high spatial resolution of the computed variables, calibration and validation is often carried out only on discharge time-series at specific locations due to the lack of spatially distributed reference data. Because of this restriction, the predictive power of these models, with regard to predicted spatial patterns, can usually not be judged. An example of spatial predictions in hydrology is the prediction of saturated areas in agricultural catchments. These areas can be important source areas for the transport of agrochemicals to the stream. We set up a spatially distributed model to predict saturated areas in a 1.2 km2 catchment in Switzerland with moderate topography. Around 40% of the catchment area are artificially drained. We measured weather data, discharge and groundwater levels in 11 piezometers for 1.5 yr. For broadening the spatially distributed data sets that can be used for model calibration and validation, we translated soil morphological data available from soil maps into an estimate of the duration of soil saturation in the soil horizons. We used redox-morphology signs for these estimates. This resulted in a data set with high spatial coverage on which the model predictions were validated. In general, these saturation estimates corresponded well to the measured groundwater levels. We worked with a model that would be applicable for management decisions because of its fast calculation speed and rather low data requirements. We simultaneously calibrated the model to the groundwater levels in the piezometers and discharge. The model was able to reproduce the general hydrological behavior of the catchment in terms of discharge and absolute groundwater levels. However, the accuracy of the groundwater level predictions was not high enough to be used for the prediction of saturated areas. The groundwater

  13. Modeling of copper(II) and zinc(II) extraction from chloride media with Kelex 100

    SciTech Connect

    Bogacki, M.B.; Zhivkova, S.; Kyuchoukov, G.; Szymanowski, J.

    2000-03-01

    The extraction of copper(II) and zinc(II) from acidic chloride solutions with protonated Kelex 100 (HL) was studied and the extraction isotherms were determined for systems containing individual metal ions and their mixtures. A chemical model was proposed and verified. It considers the coextraction of the following species: MCl4(H2L)2, MCl4(H2L)2·HCl, MCl3(H2L), ML2, and H2L·HCl. Zinc(II) is extracted as the metal ion pairs, while copper(II) can be extracted as the metal ion pair and the chelate. The model can be used to predict the effect of experimental conditions on extraction and coextraction of the metal ions considered.

  14. Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models

    SciTech Connect

    Mosher, J.; Guy, J.; Kessler, R.; Astier, P.; Marriner, J.; Betoule, M.; Sako, M.; El-Hage, P.; Biswas, R.; Pain, R.; Kuhlmann, S.; Regnault, N.; Frieman, J. A.; Schneider, D. P.

    2014-08-29

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ~120 low-redshift (z < 0.1) SNe Ia, ~255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ~290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input – w_recovered) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  15. Cosmological parameter uncertainties from SALT-II type Ia supernova light curve models

    SciTech Connect

    Mosher, J.; Sako, M.; Guy, J.; Astier, P.; Betoule, M.; El-Hage, P.; Pain, R.; Regnault, N.; Marriner, J.; Biswas, R.; Kuhlmann, S.; Schneider, D. P.

    2014-09-20

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ∼120 low-redshift (z < 0.1) SNe Ia, ∼255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ∼290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input – w_recovered) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  16. Packed bed heat storage: Continuum mechanics model and validation

    NASA Astrophysics Data System (ADS)

    Knödler, Philipp; Dreißigacker, Volker; Zunft, Stefan

    2016-05-01

    Thermal energy storage (TES) systems are key elements for various types of new power plant concepts. As a possible cost-effective storage inventory option, packed beds of miscellaneous materials come into consideration. However, high technical risks arise due to thermal expansion and shrinking of the packed bed's particles during cyclic thermal operation, possibly leading to material failure. Therefore, suitable tools for designing the heat storage system are mandatory. While particle discrete models offer detailed simulation results, the computing time for large-scale applications is excessive. In contrast, continuous models offer time-efficient simulation results but are in need of effective packed bed parameters. This work focuses on providing insight into some basic methods and tools on how to obtain such parameters and on how they are implemented into a continuum model. In this context, a particle discrete model as well as a test rig for carrying out uniaxial compression tests (UCT) is introduced. Experimental validation tests indicate good agreement with simulated UCT results. In this process, effective parameters required for a continuous packed bed model were identified and used for continuum simulation. This approach is validated by comparing the simulated results with experimental data from another test rig. The presented method significantly simplifies subsequent design studies.

  17. Corrosion model validation in high level nuclear package research

    SciTech Connect

    McNeil, M.B.; Moody, J.B.

    1993-12-31

    The strategies for waste package (WP) performance validation will be based on site-specific geologic and hydrogeochemical information plus models which can be used to predict potential WP lifetimes. The development and application of such models will include the evaluation of natural analogues (NA). These analogues are needed to resolve issues related to the validation of models. Natural analogues have not had extensive use or widespread acceptance in the area of waste package failure prediction. This lack of acceptance is due to the anticipated choice of alloys for waste package containers. Few of these alloys are similar to naturally occurring metals, and the proposed HLW repositories are in general in geologic settings not very similar to those in which naturally occurring metals are generally found. Natural and archaeological analogues can be used, however, in analysis of possible waste package failures as a means of testing proposed models for failure. In fact, the analogues are the only available mechanisms for testing models of long-term waste package behavior. A strategy is outlined for incorporating natural and archaeological analogue studies into waste package research, and examples are discussed. The natural/archaeological analogue approach that appears most promising is to use archaeological and mineral samples to develop an understanding of the identities and rates of the mineral alteration reactions at or near the surface of the package, improving present capability for estimating the lifetimes of metallic waste package containers.

  18. Robust design and model validation of nonlinear compliant micromechanisms.

    SciTech Connect

    Howell, Larry L.; Baker, Michael Sean; Wittwer, Jonathan W.

    2005-02-01

    Although the use of compliance or elastic flexibility in microelectromechanical systems (MEMS) helps eliminate friction, wear, and backlash, compliant MEMS are known to be sensitive to variations in material properties and feature geometry, resulting in large uncertainties in performance. This paper proposes an approach for design stage uncertainty analysis, model validation, and robust optimization of nonlinear MEMS to account for critical process uncertainties including residual stress, layer thicknesses, edge bias, and material stiffness. A fully compliant bistable micromechanism (FCBM) is used as an example, demonstrating that the approach can be used to handle complex devices involving nonlinear finite element models. The general shape of the force-displacement curve is validated by comparing the uncertainty predictions to measurements obtained from in situ force gauges. A robust design is presented, where simulations show that the estimated force variation at the point of interest may be reduced from ±47 µN to ±3 µN. The reduced sensitivity to process variations is experimentally validated by measuring the second stable position at multiple locations on a wafer.

  19. Higgs potential in the type II seesaw model

    SciTech Connect

    Arhrib, A.; Benbrik, R.; Chabab, M.; Rahili, L.; Ramadan, J.; Moultaka, G.; Peyranere, M. C.

    2011-11-01

    The standard model Higgs sector, extended by one weak gauge triplet of scalar fields with a very small vacuum expectation value, is a very promising setting to account for neutrino masses through the so-called type II seesaw mechanism. In this paper we consider the general renormalizable doublet/triplet Higgs potential of this model. We perform a detailed study of its main dynamical features that depend on five dimensionless couplings and two mass parameters after spontaneous symmetry breaking, and highlight the implications for the Higgs phenomenology. In particular, we determine (i) the complete set of tree-level unitarity constraints on the couplings of the potential and (ii) the exact tree-level boundedness from below constraints on these couplings, valid for all directions. When combined, these constraints delineate precisely the theoretically allowed parameter space domain within our perturbative approximation. Among the seven physical Higgs states of this model, the mass of the lighter (heavier) CP-even state h0 (H0) will always satisfy a theoretical upper (lower) bound that is reached for a critical value μc of μ (the mass parameter controlling triple couplings among the doublet/triplet Higgses). Saturating the unitarity bounds, we find an upper bound on mh0 in the two regimes μ ≳ μc and μ ≲ μc. In the first regime the Higgs sector is typically very heavy, and only h0, which becomes SM-like, could be accessible to the LHC. In contrast, in the second regime, somewhat overlooked in the literature, most of the Higgs sector is light. In particular, the heaviest state H0 becomes SM-like, the lighter states being the CP-odd Higgs, the (doubly) charged Higgses, and a decoupled h0, possibly

  20. Higgs potential in the type II seesaw model

    NASA Astrophysics Data System (ADS)

    Arhrib, A.; Benbrik, R.; Chabab, M.; Moultaka, G.; Peyranère, M. C.; Rahili, L.; Ramadan, J.

    2011-11-01

    The standard model Higgs sector, extended by one weak gauge triplet of scalar fields with a very small vacuum expectation value, is a very promising setting to account for neutrino masses through the so-called type II seesaw mechanism. In this paper we consider the general renormalizable doublet/triplet Higgs potential of this model. We perform a detailed study of its main dynamical features that depend on five dimensionless couplings and two mass parameters after spontaneous symmetry breaking, and highlight the implications for the Higgs phenomenology. In particular, we determine (i) the complete set of tree-level unitarity constraints on the couplings of the potential and (ii) the exact tree-level boundedness from below constraints on these couplings, valid for all directions. When combined, these constraints delineate precisely the theoretically allowed parameter space domain within our perturbative approximation. Among the seven physical Higgs states of this model, the mass of the lighter (heavier) CPeven state h0 (H0) will always satisfy a theoretical upper (lower) bound that is reached for a critical value μc of μ (the mass parameter controlling triple couplings among the doublet/triplet Higgses). Saturating the unitarity bounds, we find an upper bound mh0

  1. Organic acid modeling and model validation: Workshop summary. Final report

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9--10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled ``Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources.'' The workshop was attended by a team of internationally-recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically-selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  2. Organic acid modeling and model validation: Workshop summary

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9--10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled ``Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources.'' The workshop was attended by a team of internationally-recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically-selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  3. Modeling the Arm II core in MicroCap IV

    SciTech Connect

    Dalton, A.C.

    1996-11-01

    This paper reports on how an electrical model for the core of the Arm II machine was created and how to use this model. We wanted to get a model for the electrical characteristics of the ARM II core, in order to simulate this machine and to assist in the design of a future machine. We wanted this model to be able to simulate saturation, variable loss, and reset. Using the Hodgdon model and the circuit analysis program MicroCap IV, this was accomplished. This paper is written in such a way as to allow someone not familiar with the project to understand it.

  4. Validation of chemistry models employed in a particle simulation method

    NASA Technical Reports Server (NTRS)

    Haas, Brian L.; Mcdonald, Jeffrey D.

    1991-01-01

    The chemistry models employed in a statistical particle simulation method, as implemented in the Intel iPSC/860 multiprocessor computer, are validated and applied. Chemical relaxation of five-species air in these reservoirs involves 34 simultaneous dissociation, recombination, and atomic-exchange reactions. The reaction rates employed in the analytic solutions are obtained from Arrhenius experimental correlations as functions of temperature for adiabatic gas reservoirs in thermal equilibrium. Favorable agreement with the analytic solutions validates the simulation when applied to relaxation of O2 toward equilibrium in reservoirs dominated by dissociation and recombination, respectively, and when applied to relaxation of air in the temperature range 5000 to 30,000 K. A flow of O2 over a circular cylinder at high Mach number is simulated to demonstrate application of the method to multidimensional reactive flows.
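
    The Arrhenius form referred to above can be sketched as follows, evaluated for a representative O2 dissociation reaction over the quoted temperature range. The rate parameters are typical literature-style values shown only to illustrate the functional form, not the exact correlations used in the study.

```python
# Hedged sketch of an Arrhenius-form dissociation rate coefficient.
import numpy as np

def arrhenius(T, A, eta, theta_d):
    """k(T) = A * T**eta * exp(-theta_d / T), the form used for dissociation rates."""
    return A * T ** eta * np.exp(-theta_d / T)

# Illustrative O2 + M -> 2O + M parameters (cm^3/mol/s units, theta_d in K).
A, eta, theta_d = 2.0e21, -1.5, 59500.0
for T in (5000.0, 10000.0, 20000.0, 30000.0):
    print(f"T = {T:7.0f} K   k = {arrhenius(T, A, eta, theta_d):.3e} cm^3 mol^-1 s^-1")
```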

  5. Validating and Verifying Biomathematical Models of Human Fatigue

    NASA Technical Reports Server (NTRS)

    Martinez, Siera Brooke; Quintero, Luis Ortiz; Flynn-Evans, Erin

    2015-01-01

    Airline pilots experience acute and chronic sleep deprivation, sleep inertia, and circadian desynchrony due to the need to schedule flight operations around the clock. This sleep loss and circadian desynchrony give rise to cognitive impairments, reduced vigilance and inconsistent performance. Several biomathematical models, based principally on patterns observed in circadian rhythms and homeostatic drive, have been developed to predict a pilot's levels of fatigue or alertness. These models allow the Federal Aviation Administration (FAA) and commercial airlines to make decisions about pilot capabilities and flight schedules. Although these models have been validated in a laboratory setting, they have not been thoroughly tested in operational environments where uncontrolled factors, such as environmental sleep disrupters, caffeine use and napping, may impact actual pilot alertness and performance. We will compare the predictions of three prominent biomathematical fatigue models (McCauley Model, Harvard Model, and the privately-sold SAFTE-FAST Model) to actual measures of alertness and performance. We collected sleep logs, movement and light recordings, psychomotor vigilance task (PVT) data, and urinary melatonin (a marker of circadian phase) from 44 pilots in a short-haul commercial airline over one month. We will statistically compare the model predictions with lapses on the PVT and with circadian phase. We will calculate the sensitivity and specificity of each model prediction under different scheduling conditions. Our findings will aid operational decision-makers in determining the reliability of each model under real-world scheduling situations.
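
    As background on what such biomathematical models compute, the sketch below evaluates a minimal two-process alertness estimate (homeostatic pressure plus a circadian oscillation). It is emphatically not the McCauley, Harvard, or SAFTE-FAST model; the functional forms and constants are illustrative only.

```python
# Minimal two-process alertness sketch; all constants are illustrative assumptions.
import numpy as np

def alertness(hours_awake, clock_hour, tau_w=18.2, circ_amp=0.12, acrophase=17.0):
    homeostatic = np.exp(-hours_awake / tau_w)                  # decays while awake
    circadian = circ_amp * np.cos(2 * np.pi * (clock_hour - acrophase) / 24.0)
    return homeostatic + circadian                              # higher = more alert

for awake, clock in [(2, 9), (10, 17), (18, 1), (22, 5)]:
    print(f"{awake:2d} h awake at {clock:02d}:00 -> relative alertness {alertness(awake, clock):.2f}")
```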

  6. Validation of coupled atmosphere-fire behavior models

    SciTech Connect

    Bossert, J.E.; Reisner, J.M.; Linn, R.R.; Winterkamp, J.L.; Schaub, R.; Riggan, P.J.

    1998-12-31

    Recent advances in numerical modeling and computer power have made it feasible to simulate the dynamical interaction and feedback between the heat and turbulence induced by wildfires and the local atmospheric wind and temperature fields. At Los Alamos National Laboratory, the authors have developed a modeling system that includes this interaction by coupling a high-resolution atmospheric dynamics model, HIGRAD, with a fire behavior model, BEHAVE, to predict the spread of wildfires. The HIGRAD/BEHAVE model is run at very high resolution to properly resolve the fire/atmosphere interaction. At present, these coupled wildfire model simulations are computationally intensive. The additional complexity of these models requires sophisticated methods for assuring their reliability in real-world applications. With this in mind, a substantial part of the research effort is directed at model validation. Several instrumented prescribed fires have been conducted with multi-agency support and participation from chaparral, marsh, and scrub environments in coastal areas of Florida and inland California. In this paper, the authors first describe the data required to initialize the components of the wildfire modeling system. Then they present results from one of the Florida fires, and discuss a strategy for further testing and improvement of coupled weather/wildfire models.

  7. Validation of high displacement piezoelectric actuator finite element models

    NASA Astrophysics Data System (ADS)

    Taleghani, Barmac K.

    2000-08-01

    The paper presents the results obtained by using NASTRAN and ANSYS finite element codes to predict doming of the THUNDER piezoelectric actuators during the manufacturing process and subsequent straining due to an applied input voltage. To effectively use such devices in engineering applications, modeling and characterization are essential. Length, width, dome height, and thickness are important parameters for users of such devices. Therefore, finite element models were used to assess the effects of these parameters. NASTRAN and ANSYS used different methods for modeling piezoelectric effects. In NASTRAN, a thermal analogy was used to represent voltage at nodes as equivalent temperatures, while ANSYS processed the voltage directly using piezoelectric finite elements. The results of finite element models were validated by using the experimental results.

  8. On the validity of the nonholonomic model of the rattleback

    NASA Astrophysics Data System (ADS)

    Kuznetsov, S. P.

    2015-12-01

    In connection with the problem of a convex-shaped solid body on a rough horizontal plane (the rattleback or Celtic stone), the paper discusses the validity of the nonholonomic model which postulates that the contact point has zero velocity and, hence, friction performs no mechanical work. While abstract, this model is undoubtedly constructive, similar to many idealizations commonly used in science. Despite its energy-conserving nature, the model does not obey Liouville's theorem on phase volume conservation, thus allowing the occurrence in the phase space of objects characteristic of dissipative dynamics (attractors) and thereby leading to phenomena like the spontaneous reversal of rotations. Nonholonomic models, intermediate between conservative and dissipative systems, should take their deserved place in the general picture of the modern theory of dynamical systems.

  9. Leading compounds for the validation of animal models of psychopathology.

    PubMed

    Micale, Vincenzo; Kucerova, Jana; Sulcova, Alexandra

    2013-10-01

    Modelling of complex psychiatric disorders, e.g., depression and schizophrenia, in animals is a major challenge, since they are characterized by certain disturbances in functions that are absolutely unique to humans. Furthermore, we still have not identified the genetic and neurobiological mechanisms, nor do we know precisely the circuits in the brain that function abnormally in mood and psychotic disorders. Consequently, the pharmacological treatments used are mostly variations on a theme that was started more than 50 years ago. Thus, progress in novel drug development with improved therapeutic efficacy would benefit greatly from improved animal models. Here, we review the available animal models of depression and schizophrenia and focus on the way that they respond to various types of potential candidate molecules, such as novel antidepressant or antipsychotic drugs, as an index of predictive validity. We conclude that the generation of convincing and useful animal models of mental illnesses could be a bridge to success in drug discovery. PMID:23942897

  10. Validation of High Displacement Piezoelectric Actuator Finite Element Models

    NASA Technical Reports Server (NTRS)

    Taleghani, B. K.

    2000-01-01

    The paper presents the results obtained by using NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) finite element codes to predict doming of the THUNDER piezoelectric actuators during the manufacturing process and subsequent straining due to an applied input voltage. To effectively use such devices in engineering applications, modeling and characterization are essential. Length, width, dome height, and thickness are important parameters for users of such devices. Therefore, finite element models were used to assess the effects of these parameters. NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) used different methods for modeling piezoelectric effects. In NASTRAN(Registered Trademark), a thermal analogy was used to represent voltage at nodes as equivalent temperatures, while ANSYS(Registered Trademark) processed the voltage directly using piezoelectric finite elements. The results of finite element models were validated by using the experimental results.

  11. MODEL VALIDATION FOR A NONINVASIVE ARTERIAL STENOSIS DETECTION PROBLEM

    PubMed Central

    BANKS, H. THOMAS; HU, SHUHUA; KENZ, ZACKARY R.; KRUSE, CAROLA; SHAW, SIMON; WHITEMAN, JOHN; BREWIN, MARK P.; GREENWALD, STEPHEN E.; BIRCH, MALCOLM J.

    2014-01-01

    A current thrust in medical research is the development of a non-invasive method for detection, localization, and characterization of an arterial stenosis (a blockage or partial blockage in an artery). A method has been proposed to detect shear waves in the chest cavity which have been generated by disturbances in the blood flow resulting from a stenosis. In order to develop this methodology further, we use one-dimensional shear wave experimental data from novel acoustic phantoms to validate a corresponding viscoelastic mathematical model. We estimate model parameters which give a good fit (in a sense to be precisely defined) to the experimental data, and use asymptotic error theory to provide confidence intervals for parameter estimates. Finally, since a robust error model is necessary for accurate parameter estimates and confidence analysis, we include a comparison of absolute and relative models for measurement error. PMID:24506547
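
    A minimal sketch of the estimation-plus-uncertainty workflow referred to above (nonlinear least squares followed by asymptotic confidence intervals) is given below. The damped sinusoid stands in for the paper's viscoelastic shear-wave model, and the data are synthetic; only the general procedure is illustrated.

```python
# Hedged sketch: fit a model to synthetic shear-wave-like data by nonlinear least squares
# and form asymptotic confidence intervals from the estimated covariance. The damped
# sinusoid is a stand-in, not the paper's viscoelastic model.
import numpy as np
from scipy.optimize import curve_fit

def wave(t, a, lam, f):
    return a * np.exp(-lam * t) * np.cos(2 * np.pi * f * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = wave(t, 1.0, 3.0, 5.0) + 0.05 * rng.standard_normal(t.size)   # synthetic "measurements"

popt, pcov = curve_fit(wave, t, y, p0=[0.8, 2.0, 4.5])
se = np.sqrt(np.diag(pcov))                    # asymptotic standard errors
for name, p, s in zip(["a", "lam", "f"], popt, se):
    print(f"{name} = {p:.3f} +/- {1.96*s:.3f} (95% CI)")
```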

  12. Coupled Thermal-Chemical-Mechanical Modeling of Validation Cookoff Experiments

    SciTech Connect

    ERIKSON,WILLIAM W.; SCHMITT,ROBERT G.; ATWOOD,A.I.; CURRAN,P.D.

    2000-11-27

    The cookoff of energetic materials involves the combined effects of several physical and chemical processes. These processes include heat transfer, chemical decomposition, and mechanical response. The interaction and coupling between these processes influence both the time-to-event and the violence of reaction. The prediction of the behavior of explosives during cookoff, particularly with respect to reaction violence, is a challenging task. To this end, a joint DoD/DOE program has been initiated to develop models for cookoff, and to perform experiments to validate those models. In this paper, a series of cookoff analyses are presented and compared with data from a number of experiments for the aluminized, RDX-based, Navy explosive PBXN-109. The traditional thermal-chemical analysis is used to calculate time-to-event and characterize the heat transfer and boundary conditions. A reaction mechanism based on Tarver and McGuire's work on RDX was adjusted to match the spherical one-dimensional time-to-explosion data. The predicted time-to-event using this reaction mechanism compares favorably with the validation tests. Coupled thermal-chemical-mechanical analysis is used to calculate the mechanical response of the confinement and the energetic material state prior to ignition. The predicted state of the material includes the temperature, stress-field, porosity, and extent of reaction. There is little experimental data for comparison to these calculations. The hoop strain in the confining steel tube gives an estimation of the radial stress in the explosive. The inferred pressure from the measured hoop strain and calculated radial stress agree qualitatively. However, validation of the mechanical response model and the chemical reaction mechanism requires more data. A post-ignition burn dynamics model was applied to calculate the confinement dynamics. The burn dynamics calculations suffer from a lack of characterization of the confinement for the flaw
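
    The flavor of a time-to-event calculation with a one-step Arrhenius decomposition mechanism can be conveyed with a lumped (well-stirred) sketch such as the one below. The kinetic and thermal parameters, the heating time constant, and the runaway criterion are illustrative assumptions, not PBXN-109 properties or the report's reaction mechanism.

```python
# Illustrative lumped (well-stirred) cookoff sketch with one-step Arrhenius kinetics;
# all parameter values are assumptions, not PBXN-109 data or the report's mechanism.
import numpy as np
from scipy.integrate import solve_ivp

cp = 1200.0                        # J/(kg K), assumed specific heat
Q = 2.1e6                          # J/kg, assumed heat of reaction
A, Ea, R = 1.0e13, 1.6e5, 8.314    # 1/s, J/mol, J/(mol K), assumed one-step kinetics
tau_heat = 3600.0                  # s, lumped thermal time constant to the heated boundary (assumed)
T_oven = 460.0                     # K, imposed boundary temperature

def rhs(t, y):
    T, lam = y                                        # temperature, extent of reaction
    rate = A * np.exp(-Ea / (R * T)) * (1.0 - lam)    # decomposition rate
    dT = Q * rate / cp + (T_oven - T) / tau_heat      # self-heating plus boundary heating
    return [dT, rate]

def runaway(t, y):                                    # stop at thermal runaway (the "event")
    return y[0] - 600.0
runaway.terminal = True

sol = solve_ivp(rhs, [0.0, 1.0e5], [300.0, 0.0], events=runaway, max_step=30.0)
print("predicted time-to-event (s):", sol.t_events[0])
```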

  13. Traveling with Cognitive Tests: Testing the Validity of a KABC-II Adaptation in India

    ERIC Educational Resources Information Center

    Malda, Maike; van de Vijver, Fons J. R.; Srinivasan, Krishnamachari; Transler, Catherine; Sukumar, Prathima

    2010-01-01

    The authors evaluated the adequacy of an extensive adaptation of the American Kaufman Assessment Battery for Children, second edition (KABC-II), for 6- to 10-year-old Kannada-speaking children of low socioeconomic status in Bangalore, South India. The adapted KABC-II was administered to 598 children. Subtests showed high reliabilities, the…

  14. Development and Validation of a 3-Dimensional CFB Furnace Model

    NASA Astrophysics Data System (ADS)

    Vepsäläinen, Arl; Myöhänen, Karl; Hyppäneni, Timo; Leino, Timo; Tourunen, Antti

    At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development of the CFB furnace process regarding solid mixing, combustion, emission formation and heat transfer. Results of laboratory and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results in industrial-scale CFB boilers, including furnace profile measurements, are carried out simultaneously with the development of three-dimensional process modeling, which provides a chain of knowledge that is used as feedback for phenomenon research. Knowledge gathered by model validation studies and up-to-date parameter databases is utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to modeling of combustion and the formation of char and volatiles of various fuel types in CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and bio-fuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window into fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace and are used, together with lateral temperature profiles at the bed and upper parts of the furnace, to determine solid mixing and combustion model parameters. Modeling of char- and volatile-based formation of NO profiles is followed by analysis of the oxidizing and reducing regions formed due to the lower furnace design and the mixing characteristics of fuel and combustion air, which affect the formation of the NO furnace profile through reduction and volatile-nitrogen reactions. This paper presents

  15. Validation of thermal models for a prototypical MEMS thermal actuator.

    SciTech Connect

    Gallis, Michail A.; Torczynski, John Robert; Piekos, Edward Stanley; Serrano, Justin Raymond; Gorby, Allen D.; Phinney, Leslie Mary

    2008-09-01

    This report documents technical work performed to complete the ASC Level 2 Milestone 2841: validation of thermal models for a prototypical MEMS thermal actuator. This effort requires completion of the following task: the comparison between calculated and measured temperature profiles of a heated stationary microbeam in air. Such heated microbeams are prototypical structures in virtually all electrically driven microscale thermal actuators. This task is divided into four major subtasks. (1) Perform validation experiments on prototypical heated stationary microbeams in which material properties such as thermal conductivity and electrical resistivity are measured if not known and temperature profiles along the beams are measured as a function of electrical power and gas pressure. (2) Develop a noncontinuum gas-phase heat-transfer model for typical MEMS situations including effects such as temperature discontinuities at gas-solid interfaces across which heat is flowing, and incorporate this model into the ASC FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (3) Develop a noncontinuum solid-phase heat transfer model for typical MEMS situations including an effective thermal conductivity that depends on device geometry and grain size, and incorporate this model into the FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (4) Perform combined gas-solid heat-transfer simulations using Calore with these models for the experimentally investigated devices, and compare simulation and experimental temperature profiles to assess model accuracy. These subtasks have been completed successfully, thereby completing the milestone task. Model and experimental temperature profiles are found to be in reasonable agreement for all cases examined. Modest systematic differences appear to be related to uncertainties in the geometric dimensions of the test structures and in the thermal conductivity of the
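
    A highly simplified stand-in for subtask (4), the prediction of a heated microbeam temperature profile, is sketched below: a 1D steady heat-conduction balance with uniform Joule heating and a lumped gas-conduction loss to the substrate, anchored at ambient temperature at both ends. All dimensions and properties are assumed for illustration, and no noncontinuum corrections of the kind developed for Calore are included.

```python
# Minimal 1D steady-state sketch of a Joule-heated microbeam anchored at both ends, with a
# lumped gas-conduction loss term; all values are assumptions, not the milestone test data.
import numpy as np

L, w, t = 400e-6, 10e-6, 2.5e-6    # beam length, width, thickness (m), assumed
k = 80.0                           # W/(m K), assumed effective polysilicon conductivity
g = 0.1                            # W/(m K), gas-conduction loss per unit length (assumed)
P = 3e-3                           # W, electrical power dissipated uniformly (assumed)
T_amb, n = 300.0, 201

x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
A_cs = w * t                       # beam cross-section
q = P / L                          # Joule heating per unit length, W/m

# Assemble k*A_cs*T'' - g*(T - T_amb) + q = 0 with T(0) = T(L) = T_amb.
A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = T_amb
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = k * A_cs / dx**2
    A[i, i] = -2.0 * k * A_cs / dx**2 - g
    b[i] = -q - g * T_amb

T = np.linalg.solve(A, b)
print(f"peak temperature {T.max():.1f} K at x = {x[T.argmax()]*1e6:.0f} um")
```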

  16. Modeling, Robust Control, and Experimental Validation of a Supercavitating Vehicle

    NASA Astrophysics Data System (ADS)

    Escobar Sanabria, David

    This dissertation considers the mathematical modeling, control under uncertainty, and experimental validation of an underwater supercavitating vehicle. By traveling inside a gas cavity, a supercavitating vehicle reduces hydrodynamic drag, increases speed, and minimizes power consumption. The attainable speed and power efficiency make these vehicles attractive for undersea exploration, high-speed transportation, and defense. However, the benefits of traveling inside a cavity come with difficulties in controlling the vehicle dynamics. The main challenge is the nonlinear force that arises when the back-end of the vehicle pierces the cavity. This force, referred to as planing, leads to oscillatory motion and instability. Control technologies that are robust to planing and suited for practical implementation need to be developed. To enable these technologies, a low-order vehicle model that accounts for inaccuracy in the characterization of planing is required. Additionally, an experimental method to evaluate possible pitfalls in the models and controllers is necessary before undersea testing. The major contribution of this dissertation is a unified framework for mathematical modeling, robust control synthesis, and experimental validation of a supercavitating vehicle. First, we introduce affordable experimental methods for mathematical modeling and controller testing under planing and realistic flow conditions. Then, using experimental observations and physical principles, we create a low-order nonlinear model of the longitudinal vehicle motion. This model quantifies the planing uncertainty and is suitable for robust controller synthesis. Next, based on the vehicle model, we develop automated tools for synthesizing controllers that deliver a certificate of performance in the face of nonlinear and uncertain planing forces. We demonstrate theoretically and experimentally that the proposed controllers ensure higher performance when the uncertain planing dynamics are

  17. Multicomponent aerosol dynamics model UHMA: model development and validation

    NASA Astrophysics Data System (ADS)

    Korhonen, H.; Lehtinen, K. E. J.; Kulmala, M.

    2004-05-01

    A size-segregated aerosol dynamics model UHMA (University of Helsinki Multicomponent Aerosol model) was developed for studies of multicomponent tropospheric aerosol particles. The model includes major aerosol microphysical processes in the atmosphere with a focus on new particle formation and growth; thus it incorporates particle coagulation and multicomponent condensation, applying a revised treatment of condensation flux onto free molecular regime particles and the activation of nanosized clusters by organic vapours (Nano-Köhler theory), as well as recent parameterizations for binary H2SO4-H2O and ternary H2SO4-NH3-H2O homogeneous nucleation and dry deposition. The representation of particle size distribution can be chosen from three sectional methods: the hybrid method, the moving center method, and the retracking method in which moving sections are retracked to a fixed grid after a certain time interval. All these methods can treat particle emissions and atmospheric transport consistently, and are therefore suitable for use in large scale atmospheric models. In a test simulation against an accurate high resolution solution, all the methods showed reasonable treatment of new particle formation with 20 size sections although the hybrid and the retracking methods suffered from artificial widening of the distribution. The moving center approach, on the other hand, showed extra dents in the particle size distribution and failed to predict the onset of detectable particle formation. In a separate test simulation of an observed nucleation event, the model captured the key qualitative behaviour of the system well. Furthermore, its prediction of the organic volume fraction in newly formed particles, suggesting values as high as 0.5 for 3-4 nm particles and approximately 0.8 for 10 nm particles, agrees with recent indirect composition measurements.

  18. Initial Validation of the Sleep Disturbances in Pediatric Cancer Model.

    PubMed

    Daniel, Lauren C; Schwartz, Lisa A; Mindell, Jodi A; Tucker, Carole A; Barakat, Lamia P

    2016-07-01

    OBJECTIVE: The current study evaluates content validity of the Sleep Disturbance in Pediatric Cancer (SDPC) model using qualitative and quantitative stakeholder input. METHODS: Parents of children (aged: 3-12 years) with acute lymphoblastic leukemia (n = 20) and medical providers (n = 6) participated in semi-structured interviews about child sleep during cancer treatment. They also rated SDPC model component importance on a 0-4 scale and selected the most relevant sleep-related intervention targets. RESULTS: Qualitatively, parents and providers endorsed that changes in the child's psychosocial, environmental, and biological processes affect sleep. Stakeholders rated most model components (parent: 32 of 40; provider: 39 of 41) as important (>2) to child sleep. Parents were most interested in interventions targeting difficulty falling asleep and providers selected irregular sleep habits/scheduling, though groups did not differ significantly. CONCLUSIONS: Stakeholders supported SDPC content validity. The model will inform subsequent measure and intervention development focusing on biological and behavioral factors most salient to sleep disturbances in pediatric cancer. PMID:26994058

  19. Vibrations inside buildings due to subway railway traffic. Experimental validation of a comprehensive prediction model.

    PubMed

    Lopes, Patrícia; Ruiz, Jésus Fernández; Alves Costa, Pedro; Medina Rodríguez, L; Cardoso, António Silva

    2016-10-15

    The present paper focuses on the experimental validation of a numerical approach previously proposed by the authors for the prediction of vibrations inside buildings due to railway traffic in tunnels. The numerical model is based on the concept of dynamic substructuring and is composed of three autonomous models simulating the main parts of the problem: i) generation of vibrations (train-track interaction); ii) propagation of vibrations (track-tunnel-ground system); iii) reception of vibrations (building coupled to the ground). The experimental validation consists of the comparison between the results predicted by the proposed numerical model and measurements performed inside a building subjected to railway traffic in a shallow tunnel located in Madrid. Apart from a brief description of the numerical model and of the case study, the main options and simplifications adopted in the numerical modeling strategy are discussed. The balance adopted between accuracy and simplicity of the numerical approach proved to be a path to follow in order to transfer knowledge to engineering practice. Finally, the comparison between numerical and experimental results showed good agreement, which confirms the ability of the proposed modeling strategy to deal with real engineering problems. PMID:26589136

  20. Validation of two air quality models for Indian mining conditions.

    PubMed

    Chaulya, S K; Ahmad, M; Singh, R S; Bandopadhyay, L K; Bondyopadhay, C; Mondal, G C

    2003-02-01

    All major mining activities, particularly opencast mining, contribute directly or indirectly to the problem of suspended particulate matter (SPM). Therefore, assessment and prediction are required to prevent and minimize the deterioration of air quality due to SPM from various opencast mining operations. Determining the SPM emission rates of these activities and validating air quality models are the first and foremost concerns. In view of the above, this study was undertaken to determine SPM emission rates for various opencast mining activities and to validate two commonly used air quality models for Indian mining conditions. To achieve these objectives, eight coal and three iron ore mining sites were selected to generate site-specific emission data, considering the type of mining, method of working, geographical location, accessibility and, above all, resource availability. The study covers various mining activities and locations, including drilling, overburden loading and unloading, coal/mineral loading and unloading, the coal handling or screening plant, exposed overburden dumps, stock yards, workshops, exposed pit surfaces, transport roads and haul roads. Validation was carried out using the Fugitive Dust Model (FDM) and the Point, Area and Line sources model (PAL2) by assigning the measured emission rate for each mining activity, meteorological data and other details of the respective mine as inputs to the models. Both models were run separately on the same set of input data for each mine to obtain the predicted SPM concentration at three receptor locations per mine. The receptor locations were selected in such a way that actual field measurements of SPM concentration were carried out at the same places. Statistical analysis was carried out to assess the performance of the models based on a set of measured and predicted SPM concentration data. The coefficient of correlation for PAL2 and FDM was calculated to be 0.990-0.994 and 0
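
    The statistical performance assessment described above can be illustrated with a short sketch that computes the coefficient of correlation together with a few other metrics commonly used for air-quality model validation. The paired measured/predicted SPM concentrations below are invented illustrative numbers, not the study's data.

```python
# Sketch of a measured-vs-predicted performance check for an air quality model; the SPM
# concentrations are made-up illustrative numbers, not the study's data.
import numpy as np

measured  = np.array([410.0, 355.0, 298.0, 512.0, 467.0, 389.0])   # ug/m^3
predicted = np.array([395.0, 372.0, 310.0, 540.0, 450.0, 401.0])   # ug/m^3

r = np.corrcoef(measured, predicted)[0, 1]               # coefficient of correlation
rmse = np.sqrt(np.mean((predicted - measured) ** 2))     # root-mean-square error
fb = 2.0 * np.mean(predicted - measured) / np.mean(predicted + measured)      # fractional bias
fac2 = np.mean((predicted / measured > 0.5) & (predicted / measured < 2.0))   # factor-of-two fraction

print(f"r = {r:.3f}, RMSE = {rmse:.1f} ug/m^3, FB = {fb:.3f}, FAC2 = {fac2:.2f}")
```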

  1. Long-term ELBARA-II Assistance to SMOS Land Product and Algorithm Validation at the Valencia Anchor Station (MELBEX Experiment 2010-2013)

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto; Wigneron, Jean-Pierre; Schwank, Mike; Miernecki, Maciej; Kerr, Yann; Casal, Tania; Delwart, Steven; Fernandez-Moran, Roberto; Mecklenburg, Susanne; Coll Pajaron, M. Amparo; Salgado Hernanz, Paula

    The main activity of the Valencia Anchor Station (VAS) is currently to support the validation of SMOS (Soil Moisture and Ocean Salinity) Level 2 and 3 land products (soil moisture, SM, and vegetation optical depth, TAU). With this aim, the European Space Agency (ESA) has provided the Climatology from Satellites Group of the University of Valencia with an ELBARA-II microwave radiometer under a loan agreement since September 2009. During this time, brightness temperatures (TB) have been acquired continuously, except during normal maintenance or minor repair interruptions. ELBARA-II is an L-band dual-polarization radiometer with two channels (1400-1418 MHz, 1409-1427 MHz). It measures continuously over a vineyard field (El Renegado, Caudete de las Fuentes, Valencia) from a 15 m platform, following a constant protocol for calibration and angular scanning measurements, with the aim of assisting the validation of SMOS land products and the calibration of L-MEB (L-Band Emission of the Biosphere), the basis for the SMOS Level 2 Land Processor, over the VAS validation site. One of the advantages of the VAS site is the possibility of studying two different environmental conditions over the year: while the vine cycle extends mainly between April and October, during the rest of the year the area remains under bare soil conditions, adequate for the calibration of the soil model. The measurement protocol currently running has proven robust over the whole operation time and will be extended as much as possible to continue providing a long-term data set of ELBARA-II TB measurements and retrieved SM and TAU. This data set is also proving useful in support of SMOS scientific activities: the VAS area and, specifically, the ELBARA-II site offer good conditions for monitoring the long-term evolution of SMOS Level 2 and Level 3 land products and for interpreting any anomalies that may obscure hidden sensor biases. In addition, SM and TAU that are currently
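
    As context for the L-MEB calibration mentioned above, the sketch below evaluates a zeroth-order tau-omega emission model, the general form used to relate soil moisture and vegetation optical depth to L-band brightness temperature. The linear reflectivity-moisture relation and all parameter values are simplified assumptions, not the L-MEB implementation.

```python
# Hedged sketch of a zeroth-order tau-omega emission model relating soil moisture (sm) and
# vegetation optical depth (tau) to L-band brightness temperature; all values are assumed.
import numpy as np

def tb_tau_omega(sm, tau, pol_reflectivity, T_soil=295.0, T_canopy=295.0,
                 omega=0.05, theta_deg=40.0):
    """Brightness temperature (K) for one polarization from a simple tau-omega model."""
    gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))   # vegetation transmissivity
    r = pol_reflectivity(sm)                                # soil reflectivity at this moisture
    tb_soil = (1.0 - r) * T_soil * gamma                    # attenuated soil emission
    tb_veg = (1.0 - omega) * (1.0 - gamma) * T_canopy * (1.0 + r * gamma)  # canopy + reflected
    return tb_soil + tb_veg

# Crude linear stand-in for the smooth-soil reflectivity vs. volumetric soil moisture.
refl = lambda sm: 0.10 + 0.9 * sm

for sm in (0.05, 0.15, 0.30):
    print(sm, round(tb_tau_omega(sm, tau=0.10, pol_reflectivity=refl), 1))
```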

  2. Test cell modeling and optimization for FPD-II

    SciTech Connect

    Haney, S.W.; Fenstermacher, M.E.

    1985-04-10

    The Fusion Power Demonstration, Configuration II (FPD-II), will be a DT-burning tandem mirror facility with thermal barriers, designed as the next-step engineering test reactor (ETR) to follow the tandem mirror ignition test machines. Current plans call for FPD-II to be a multi-purpose device. For approximately the first half of its lifetime, it will operate as a high-Q ignition machine designed to reach or exceed engineering break-even and to demonstrate the technological feasibility of tandem mirror fusion. The second half of its operation will focus on the evaluation of candidate reactor blanket designs using a neutral-beam-driven test cell inserted at the midplane of the 90 m long cell. This machine, called FPD-II+T, uses an insert configuration similar to that used in the MFTF-α+T study. The modeling and optimization of FPD-II+T are the topic of the present paper.

  3. Contaminant transport model validation: The Oak Ridge Reservation

    SciTech Connect

    Lee, R.R.; Ketelle, R.H.

    1988-09-01

    In the complex geologic setting on the Oak Ridge Reservation, hydraulic conductivity is anisotropic and flow is strongly influenced by an extensive and largely discontinuous fracture network. Difficulties in describing and modeling the aquifer system prompted a study to obtain aquifer property data to be used in a groundwater flow model validation experiment. Characterization studies included the performance of an extensive suite of aquifer tests within a 600-square-meter area to obtain aquifer property values to describe the flow field in detail. Following aquifer testing, a groundwater tracer test was performed under ambient conditions to verify the aquifer analysis. Tracer migration data in the near-field were used in model calibration to predict tracer arrival time and concentration in the far-field. Despite the extensive aquifer testing, initial modeling inaccurately predicted tracer migration direction. Initial tracer migration rates were consistent with those predicted by the model; however, changing environmental conditions resulted in an unanticipated decay in tracer movement. Evaluation of the predictive accuracy of groundwater flow and contaminant transport models on the Oak Ridge Reservation depends on defining the resolution required, followed by field testing and model grid definition at compatible scales. The use of tracer tests, both as a characterization method and to verify model results, provides the highest level of resolution of groundwater flow characteristics. 3 refs., 4 figs.

  4. Image Based Validation of Dynamical Models for Cell Reorientation

    PubMed Central

    Lockley, Robert; Ladds, Graham; Bretschneider, Till

    2016-01-01

    A key feature of directed cell movement is the ability of cells to reorient quickly in response to changes in the direction of an extracellular stimulus. Mathematical models have suggested quite different regulatory mechanisms to explain reorientation, raising the question of how we can validate these models in a rigorous way. In this study, we fit three reaction-diffusion models to experimental data of Dictyostelium amoebae reorienting in response to alternating gradients of mechanical shear flow. The experimental readouts we use to fit are spatio-temporal distributions of a fluorescent reporter for cortical F-actin labeling the cell front. Experiments performed under different conditions are fitted simultaneously to challenge the models with different types of cellular dynamics. Although the model proposed by Otsuji is unable to provide a satisfactory fit, those suggested by Meinhardt and Levchenko fit equally well. Further, we show that reduction of the three-variable Meinhardt model to a two-variable model also provides an excellent fit, but has the advantage of all parameters being uniquely identifiable. Our work demonstrates that model selection and identifiability analysis, commonly applied to temporal dynamics problems in systems biology, can be a powerful tool when extended to spatio-temporal imaging data. PMID:25492625

  5. Image based validation of dynamical models for cell reorientation.

    PubMed

    Lockley, Robert; Ladds, Graham; Bretschneider, Till

    2015-06-01

    A key feature of directed cell movement is the ability of cells to reorient quickly in response to changes in the direction of an extracellular stimulus. Mathematical models have suggested quite different regulatory mechanisms to explain reorientation, raising the question of how we can validate these models in a rigorous way. In this study, we fit three reaction-diffusion models to experimental data of Dictyostelium amoebae reorienting in response to alternating gradients of mechanical shear flow. The experimental readouts we use to fit are spatio-temporal distributions of a fluorescent reporter for cortical F-actin labeling the cell front. Experiments performed under different conditions are fitted simultaneously to challenge the models with different types of cellular dynamics. Although the model proposed by Otsuji is unable to provide a satisfactory fit, those suggested by Meinhardt and Levchenko fit equally well. Further, we show that reduction of the three-variable Meinhardt model to a two-variable model also provides an excellent fit, but has the advantage of all parameters being uniquely identifiable. Our work demonstrates that model selection and identifiability analysis, commonly applied to temporal dynamics problems in systems biology, can be a powerful tool when extended to spatio-temporal imaging data. PMID:25492625

  6. Multiscale Modeling of Gastrointestinal Electrophysiology and Experimental Validation

    PubMed Central

    Du, Peng; O'Grady, Greg; Davidson, John B.; Cheng, Leo K.; Pullan, Andrew J.

    2011-01-01

    Normal gastrointestinal (GI) motility results from the coordinated interplay of multiple cooperating mechanisms, both intrinsic and extrinsic to the GI tract. A fundamental component of this activity is an omnipresent electrical activity termed slow waves, which is generated and propagated by the interstitial cells of Cajal (ICCs). The role of ICC loss and network degradation in GI motility disorders is a significant area of ongoing research. This review examines recent progress in the multiscale modeling framework for effectively integrating a vast range of experimental data in GI electrophysiology, and outlines the prospect of how modeling can provide new insights into GI function in health and disease. The review begins with an overview of the GI tract and its electrophysiology, and then focuses on recent work on modeling GI electrical activity, spanning from cell to body biophysical scales. Mathematical cell models of the ICCs and smooth muscle cell are presented. The continuum framework of monodomain and bidomain models for tissue and organ models are then considered, and the forward techniques used to model the resultant body surface potential and magnetic field are discussed. The review then outlines recent progress in experimental support and validation of modeling, and concludes with a discussion on potential future research directions in this field. PMID:21133835

  7. Criteria of validity for animal models of psychiatric disorders: focus on anxiety disorders and depression.

    PubMed

    Belzung, Catherine; Lemoine, Maël

    2011-01-01

    Animal models of psychiatric disorders are usually discussed with regard to three criteria first elaborated by Willner: face, predictive and construct validity. Here, we draw the history of these concepts and then try to redraw and refine these criteria, using the framework of the diathesis model of depression that has been proposed by several authors. We thus propose a set of five major criteria (with sub-categories for some of them): homological validity (including species validity and strain validity), pathogenic validity (including ontopathogenic validity and triggering validity), mechanistic validity, face validity (including ethological and biomarker validity) and predictive validity (including induction and remission validity). Homological validity requires that an adequate species and strain be chosen: considering species validity, primates will be considered to have a higher score than drosophila, and considering strains, a high stress reactivity in a strain scores higher than a low stress reactivity in another strain. Pathogenic validity corresponds to the fact that, in order to shape pathological characteristics, the organism has been manipulated both during the developmental period (for example, maternal separation: ontopathogenic validity) and during adulthood (for example, stress: triggering validity). Mechanistic validity corresponds to the fact that the cognitive (for example, cognitive bias) or biological mechanisms (such as dysfunction of the hormonal stress axis regulation) underlying the disorder are identical in both humans and animals. Face validity corresponds to the observable behavioral (ethological validity) or biological (biomarker validity) outcomes: for example anhedonic behavior (ethological validity) or elevated corticosterone (biomarker validity). Finally, predictive validity corresponds to the identity of the relationship between the triggering factor and the outcome (induction validity) and between the effects of the treatments

  8. Increasing the validity of experimental models for depression.

    PubMed

    Dzirasa, Kafui; Covington, Herbert E

    2012-08-01

    Major depressive disorder (MDD) is a central nervous system disorder characterized by the culmination of profound disturbances in mood and affective regulation. Animal models serve as a powerful tool for investigating the neurobiological mechanisms underlying this disorder; however, little standardization exists across the wide range of available modeling approaches most often employed. This review will illustrate some of the most challenging obstacles faced by investigators attempting to associate depressive-like behaviors in rodents with symptoms expressed in MDD. Furthermore, a novel series of depressive-like criteria based on correlating behavioral endophenotypes, novel in vivo neurophysiological measurements, and molecular/cellular analyses within multiple brain regions is proposed as a potential solution to overcoming this barrier. Ultimately, linking the neurophysiological and cellular/biochemical actions that contribute to the expression of a defined MDD-like syndrome will dramatically extend the translational value of the most valid animal models of MDD. PMID:22823549

  9. Bolted connection modeling and validation through laser-aided testing

    NASA Astrophysics Data System (ADS)

    Dai, Kaoshan; Gong, Changqing; Smith, Benjiamin

    2013-04-01

    Bolted connections are widely employed in facility structures, such as light masts, transmission poles, and wind turbine towers. The complex connection behavior plays a significant role in the overall dynamic characteristics of a structure. A finite element (FE) modeling study of a bolt-connected square tubular steel beam is presented in this paper. Modal testing was performed under controlled laboratory conditions to validate the FE model developed for the bolted beam. Two laser Doppler vibrometers were used simultaneously to measure structural vibration. A simplified joint model was proposed to further save computation time for structures with bolted connections. This study is an on-going effort to marshal knowledge associated with detecting damage on facility structures with bolted connections.
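
    A standard way to quantify agreement between measured and FE-predicted mode shapes in a study like this is the Modal Assurance Criterion (MAC); a minimal sketch follows. The abstract does not state which correlation metric was used, and the mode-shape vectors below are placeholders.

```python
# Sketch of a Modal Assurance Criterion (MAC) check between a test mode shape and an FE
# mode shape; the vectors are invented placeholders, not the study's measurements.
import numpy as np

def mac(phi_test, phi_fe):
    """MAC between one test mode shape and one FE mode shape (both 1-D arrays)."""
    num = np.abs(phi_test @ phi_fe) ** 2
    return num / ((phi_test @ phi_test) * (phi_fe @ phi_fe))

phi_test = np.array([0.00, 0.31, 0.59, 0.81, 0.95, 1.00])   # measured first bending mode (assumed)
phi_fe   = np.array([0.00, 0.29, 0.57, 0.80, 0.96, 1.00])   # FE prediction at the same points (assumed)

print(f"MAC = {mac(phi_test, phi_fe):.3f}")   # values near 1 indicate well-correlated shapes
```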

  10. Analytical thermal model validation for Cassini radioisotope thermoelectric generator

    SciTech Connect

    Lin, E.I.

    1997-12-31

    The Saturn-bound Cassini spacecraft is designed to rely, without precedent, on the waste heat from its three radioisotope thermoelectric generators (RTGs) to warm the propulsion module subsystem, and the RTG end dome temperature is a key determining factor of the amount of waste heat delivered. A previously validated SINDA thermal model of the RTG was the sole guide to understanding its complex thermal behavior, but displayed large discrepancies against some initial thermal development test data. A careful revalidation effort led to significant modifications and adjustments of the model, which resulted in a doubling of the radiative heat transfer from the heat source support assemblies to the end domes and brought the end dome and flange temperature predictions to within 2 °C of the pertinent test data. The increased inboard end dome temperature has a considerable impact on thermal control of the spacecraft central body. The validation process offers an example of physically-driven analytical model calibration with test data from not only an electrical simulator but also a nuclear-fueled flight unit, and has established the end dome temperatures of a flight RTG where no in-flight or ground-test data existed before.

  11. Dynamic validation of the Planck-LFI thermal model

    NASA Astrophysics Data System (ADS)

    Tomasi, M.; Cappellini, B.; Gregorio, A.; Colombo, F.; Lapolla, M.; Terenzi, L.; Morgante, G.; Bersanelli, M.; Butler, R. C.; Galeotta, S.; Mandolesi, N.; Maris, M.; Mennella, A.; Valenziano, L.; Zacchei, A.

    2010-01-01

    The Low Frequency Instrument (LFI) is an array of cryogenically cooled radiometers on board the Planck satellite, designed to measure the temperature and polarization anisotropies of the cosmic microwave background (CMB) at 30, 44 and 70 GHz. The thermal requirements of the LFI, and in particular the stringent limits to acceptable thermal fluctuations in the 20 K focal plane, are a critical element to achieve the instrument scientific performance. Thermal tests were carried out as part of the on-ground calibration campaign at various stages of instrument integration. In this paper we describe the results and analysis of the tests on the LFI flight model (FM) performed at Thales Laboratories in Milan (Italy) during 2006, with the purpose of experimentally sampling the thermal transfer functions and consequently validating the numerical thermal model describing the dynamic response of the LFI focal plane. This model has been used extensively to assess the ability of LFI to achieve its scientific goals: its validation is therefore extremely important in the context of the Planck mission. Our analysis shows that the measured thermal properties of the instrument yield a thermal damping level better than predicted, therefore further reducing the expected systematic effect induced in the LFI maps. We then propose an explanation of the increased damping in terms of non-ideal thermal contacts.

  12. Validation of a new Mesoscale Model for MARS .

    NASA Astrophysics Data System (ADS)

    De Sanctis, K.; Ferretti, R.; Forget, F.; Fiorenza, C.; Visconti, G.

    The study of the planet Mars is important because of its many similarities with the Earth. To improve understanding of the dynamical processes which drive the Martian atmosphere, a new Martian Mesoscale Model (MARS-MM5) is presented. The new model is based on the Pennsylvania State University (PSU)/National Center for Atmospheric Research (NCAR) Mesoscale Model Version 5 \citep{duh,gre}. MARS-MM5 has been adapted to Mars using soil characteristics and topography obtained by the Mars Orbiter Laser Altimeter (MOLA). Different cases, depending on data availability and corresponding to the equatorial region of Mars, have been selected for multiple MARS-MM5 simulations. To validate the different developments, the Mars Climate Database (MCD) and TES observations have been employed: MCD version 4.0 has been created on the basis of multi-annual integrations of Mars GCM output. The Thermal Emission Spectrometer (TES) observations acquired during the Mars Global Surveyor (MGS) mission are used in terms of temperature. The new, and most important, aspect of this work is the direct validation of the newly generated MARS-MM5 in terms of three-dimensional observations. The comparison between MARS-MM5 and GCM horizontal and vertical temperature profiles shows good agreement; moreover, good agreement is also found between TES observations and MARS-MM5.

  13. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology

    PubMed Central

    Krishnamoorthi, Shankarjee; Perotti, Luigi E.; Borgstrom, Nils P.; Ajijola, Olujimi A.; Frid, Anna; Ponnaluri, Aditya V.; Weiss, James N.; Qu, Zhilin; Klug, William S.; Ennis, Daniel B.; Garfinkel, Alan

    2014-01-01

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations. PMID:25493967

  14. Statistical validation of structured population models for Daphnia magna

    PubMed Central

    Adoteye, Kaska; Banks, H.T.; Cross, Karissa; Eytcheson, Stephanie; Flores, Kevin B.; LeBlanc, Gerald A.; Nguyen, Timothy; Ross, Chelsea; Smith, Emmaline; Stemkovski, Michael; Stokely, Sarah

    2016-01-01

    In this study we use statistical validation techniques to verify density-dependent mechanisms hypothesized for populations of Daphnia magna. We develop structured population models that exemplify specific mechanisms, and use multi-scale experimental data in order to test their importance. We show that fecundity and survival rates are affected by both time-varying density-independent factors, such as age, and density-dependent factors, such as competition. We perform uncertainty analysis and show that our parameters are estimated with a high degree of confidence. Further, we perform a sensitivity analysis to understand how changes in fecundity and survival rates affect population size and age-structure. PMID:26092608
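
    A minimal sketch of the kind of density-dependent, stage-structured projection being tested is shown below: fecundity declines with total population density while stage-specific survival is density-independent. The stage structure and all rates are assumptions for illustration, not the fitted Daphnia magna parameters.

```python
# Small sketch of a stage-structured projection with density-dependent fecundity; the
# stages, survival rates and fecundity parameters are assumed, not the study's estimates.
import numpy as np

survival = np.array([0.9, 0.85, 0.8])     # juvenile, subadult, adult survival per step (assumed)
f_max, K = 6.0, 500.0                      # max brood size and competition scale (assumed)

def project(n, steps=60):
    """n = [juveniles, subadults, adults]; fecundity declines with total density."""
    n = np.array(n, dtype=float)
    for _ in range(steps):
        density = n.sum()
        fecundity = f_max / (1.0 + density / K)          # density-dependent reproduction
        births = fecundity * n[2]
        n = np.array([births,
                      survival[0] * n[0],
                      survival[1] * n[1] + survival[2] * n[2]])
    return n

print(project([50, 20, 10]))   # density dependence keeps the projected abundances bounded
```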

  15. Low frequency eddy current benchmark study for model validation

    SciTech Connect

    Mooers, R. D.; Boehnlein, T. R.; Cherry, M. R.; Knopp, J. S.; Aldrin, J. C.; Sabbagh, H. A.

    2011-06-23

    This paper presents results of an eddy current model validation study. Precise measurements were made using an impedance analyzer to investigate changes in impedance due to Electrical Discharge Machining (EDM) notches in aluminum plates. Each plate contained one EDM notch at an angle of 0, 10, 20, or 30 degrees from the normal of the plate surface. Measurements were made with the eddy current probe both scanning parallel and perpendicular to the notch length. The experimental response from the vertical and oblique notches will be reported and compared to results from different numerical simulation codes.

  16. Validation of a modified clinical risk score to predict cancer-specific survival for stage II colon cancer

    PubMed Central

    Oliphant, Raymond; Horgan, Paul G; Morrison, David S; McMillan, Donald C

    2015-01-01

    Many patients with stage II colon cancer will die of their disease despite curative surgery. Therefore, identification of patients at high risk of poor outcome after surgery for stage II colon cancer is desirable. This study aims to validate a clinical risk score to predict cancer-specific survival in patients undergoing surgery for stage II colon cancer. Patients undergoing surgery for stage II colon cancer in 16 hospitals in the West of Scotland between 2001 and 2004 were identified from a prospectively maintained regional clinical audit database. Overall and cancer-specific survival rates up to 5 years were calculated. A total of 871 patients were included. At 5 years, cancer-specific survival was 81.9% and overall survival was 65.6%. On multivariate analysis, age ≥75 years (hazard ratio (HR) 2.11, 95% confidence intervals (CI) 1.57–2.85; P<0.001) and emergency presentation (HR 1.97, 95% CI 1.43–2.70; P<0.001) were independently associated with cancer-specific survival. Age and mode of presentation HRs were added to form a clinical risk score of 0–2. The cancer-specific survival at 5 years for patients with a cumulative score 0 was 88.7%, 1 was 78.2% and 2 was 65.9%. These results validate a modified simple clinical risk score for patients undergoing surgery for stage II colon cancer. The combination of these two universally documented clinical factors provides a solid foundation for the examination of the impact of additional clinicopathological and treatment factors on overall and cancer-specific survival. PMID:25487740

  17. A validation study of a stochastic model of human interaction

    NASA Astrophysics Data System (ADS)

    Burchfield, Mitchel Talmadge

    The purpose of this dissertation is to validate a stochastic model of human interactions which is part of a developmentalism paradigm. Incorporating elements of ancient and contemporary philosophy and science, developmentalism defines human development as a progression of increasing competence and utilizes compatible theories of developmental psychology, cognitive psychology, educational psychology, social psychology, curriculum development, neurology, psychophysics, and physics. To validate a stochastic model of human interactions, the study addressed four research questions: (a) Does attitude vary over time? (b) What are the distributional assumptions underlying attitudes? (c) Does the stochastic model, $N\int_{-\infty}^{\infty}\varphi(\chi,\tau)\,\Psi(\tau)\,d\tau$, have utility for the study of attitudinal distributions and dynamics? (d) Are the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein theories applicable to human groups? Approximately 25,000 attitude observations were made using the Semantic Differential Scale. Positions of individuals varied over time and the logistic model predicted observed distributions with correlations between 0.98 and 1.0, with estimated standard errors significantly less than the magnitudes of the parameters. The results bring into question the applicability of Fisherian research designs (Fisher, 1922, 1928, 1938) for behavioral research based on the apparent failure of two fundamental assumptions: the noninteractive nature of the objects being studied and normal distribution of attributes. The findings indicate that individual belief structures are representable in terms of a psychological space which has the same or similar properties as physical space. The psychological space not only has dimension, but individuals interact by force equations similar to those described in theoretical physics models. Nonlinear regression techniques were used to estimate Fermi-Dirac parameters from the data. The model explained a high degree

  18. Robust Validation of ENSO in IPCC-Class Coupled Models

    NASA Astrophysics Data System (ADS)

    Stevenson, Samantha; Fox-Kemper, Baylor; Jochum, Markus

    2010-05-01

    Wavelet probability analysis, a new method of model validation, is used to assess the performance of ENSO in a variety of coupled climate models. Wavelet probability analysis relies on wavelet spectra for a given time series, for which the amount of spectral overlap between subsets is measured using a quantity known as the wavelet probability index (WPI). This approach provides quantitative estimates of model agreement relative to either observations or other models, accompanied by well-defined confidence levels. ENSO, as represented by the NINO3.4 index, has been examined in 2,000 year long coupled integrations of both the new NCAR CCSM3.5 and GFDL's CM2.1; interestingly, it is not possible to distinguish either model from observations of NINO3.4 during 1949-2003, for runs shorter than 200 years. At longer model run lengths, some inaccuracies are seen in both CCSM3.5 and CM2.1 relative to observations. CCSM3.5 and CM2.1 are compared to one another using hypothesis testing procedures, and changes in model physics discussed in terms of their impact on ENSO. Finally, the method is applied to non-equilibrium simulations, using both high-CO2 'ramp-up' runs and selected IPCC AR4 integrations. This allows the effect of changing CO2 levels on ENSO activity to be examined, and the statistical significance of such effects to be determined.
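
    To convey the spirit of the approach, the sketch below computes a simple normalized spectral-overlap index between two segments of a synthetic NINO3.4-like series; this is a stand-in for, not an implementation of, the wavelet probability index used in the study.

```python
# Hedged sketch of comparing the spectra of two time-series segments: a simple normalized
# spectral-overlap index stands in for the wavelet probability index (WPI); it is not the
# WPI implementation used in the study, and the synthetic series are illustrative only.
import numpy as np

def spectral_overlap(x, y):
    """Overlap (0-1) between the normalized Fourier power spectra of two equal-length series."""
    px = np.abs(np.fft.rfft(x - x.mean())) ** 2
    py = np.abs(np.fft.rfft(y - y.mean())) ** 2
    px, py = px / px.sum(), py / py.sum()
    return np.minimum(px, py).sum()

rng = np.random.default_rng(1)
t = np.arange(1200) / 12.0                        # 100 years, monthly
enso_like = lambda phase: np.sin(2 * np.pi * t / 4.0 + phase) + 0.5 * rng.standard_normal(t.size)

segment_a = enso_like(0.0)        # e.g., one model (or observation) segment
segment_b = enso_like(1.0)        # e.g., another model segment
print(f"overlap index = {spectral_overlap(segment_a, segment_b):.2f}")
```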

  19. Metrological validation for 3D modeling of dental plaster casts.

    PubMed

    Brusco, Nicola; Andreetto, Marco; Lucchese, Luca; Carmignato, Simone; Cortelazzo, Guido M

    2007-11-01

    The contribution of this paper is twofold: (1) it presents an automatic 3D modeling technique and (2) it advances a procedure for its metrological evaluation in the context of a medical application, the 3D modeling of dental plaster casts. The motivation for this work is the creation of a "virtual gypsotheque" where cumbersome dental plaster casts can be replaced by numerical 3D models, thereby alleviating storage and access problems and allowing dentists and orthodontists the use of novel and unprecedented software tools for their medical evaluations. Modeling free-form surfaces of anatomical interest is an intriguing mixture of open issues concerning 3D modeling, geometrical metrology, and medicine. Of general interest is both the fact that a widespread use of 3D modeling in non-engineering applications requires automatic procedures of the kind presented in this work and the adopted validation paradigm for free-form surfaces, rather useful for practical purposes. In this latter respect, the metrological analysis we advance is the first seminal attempt in the field of 3D modeling and can be readily extended to contexts other than the medical one discussed in this paper. PMID:17126062

  20. Evaluation and cross-validation of Environmental Models

    NASA Astrophysics Data System (ADS)

    Lemaire, Joseph

    Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a commission of professional experts appointed by an established international union or association (e.g. IAGA for Geomagnetism and Aeronomy, ...) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given indicating that different values for the Earth radius have been employed in different data processing laboratories, institutes or agencies to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to another, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model which had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more to be uncovered by careful, independent examination and benchmarking. An instructive precedent is the meter prototype, the standard unit of length, which was adopted on 20 May 1875 during the Diplomatic Conference of the Meter and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent the wild, uncontrolled dissemination of pseudo Environmental Models and Standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the marketplace have been built consistently with the same system of units, or that they are based on identical definitions of the coordinate systems, etc... Therefore

  1. The Atmospheric Radionuclide Transport Model (ARTM) - Validation of a long-term atmospheric dispersion model

    NASA Astrophysics Data System (ADS)

    Hettrich, Sebastian; Wildermuth, Hans; Strobl, Christopher; Wenig, Mark

    2016-04-01

    In the last couple of years, the Atmospheric Radionuclide Transport Model (ARTM) has been developed by the German Federal Office for Radiation Protection (BfS) and the Society for Plant and Reactor Security (GRS). ARTM is an atmospheric dispersion model for continuous long-term releases of radionuclides into the atmosphere, based on a Lagrangian particle model. Developed in the first place as a more realistic replacement for the outdated Gaussian plume models, it is currently being optimised for further scientific purposes, in particular the study of atmospheric dispersion in short-range scenarios. It includes a diagnostic wind field model, allows for the representation of building structures and multiple sources (including linear, 2- and 3-dimensional source geometries), and accounts for orography and surface roughness. As output it calculates the activity concentration and dry and wet deposition, and it can also model the radioactive decay of Rn-222. As such, ARTM must undergo an intensive validation process. While a few measurement data sets are available for validating short-term and short-range models, which were mainly developed for examining nuclear accidents or explosions, data sets for validating long-term models are very sparse and the existing ones mostly prove not to be applicable for validation. Here we present a strategy for the validation of long-term Lagrangian particle models based on the work with ARTM. The first part of our validation study is a comprehensive analysis of the model's sensitivity to different parameters (e.g. simulation grid resolution, starting random number, number of simulation particles). This analysis provides a good estimate of the uncertainties of the simulation results and consequently can be used to generate model outputs comparable to the available measurement data at various distances from the emission source. This comparison between measurement data from selected scenarios and simulation results
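
    The Lagrangian particle idea underlying a model of this class can be sketched with a simple random-walk (Langevin-type) dispersion step, as below. The mean wind, turbulence statistics, time step, and ground treatment are assumptions for illustration; none of ARTM's internals are reproduced.

```python
# Minimal Lagrangian particle random-walk sketch of the kind underlying dispersion models
# such as ARTM; all parameter values are assumptions and no ARTM internals are reproduced.
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 10_000, 1.0, 600            # particles, time step (s), 10-minute release
u_mean = np.array([3.0, 0.0, 0.0])         # mean wind (m/s), assumed
sigma = np.array([0.6, 0.5, 0.3])          # turbulent velocity std dev (m/s), assumed
tau_l = 100.0                              # Lagrangian time scale (s), assumed

pos = np.zeros((n, 3))
vel = sigma * rng.standard_normal((n, 3))  # initial turbulent velocity fluctuations
for _ in range(steps):
    # Ornstein-Uhlenbeck update of the velocity fluctuation, then advect the particles.
    vel += -vel * dt / tau_l + sigma * np.sqrt(2.0 * dt / tau_l) * rng.standard_normal((n, 3))
    pos += (u_mean + vel) * dt
    pos[:, 2] = np.abs(pos[:, 2])          # crude reflection at the ground (z = 0)

print("mean plume position (m):", pos.mean(axis=0).round(1))
print("lateral spread sigma_y (m):", pos[:, 1].std().round(1))
```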

  2. Aqueous Solution Vessel Thermal Model Development II

    SciTech Connect

    Buechler, Cynthia Eileen

    2015-10-28

    The work presented in this report is a continuation of the work described in the May 2015 report, “Aqueous Solution Vessel Thermal Model Development”. This computational fluid dynamics (CFD) model aims to predict the temperature and bubble volume fraction in an aqueous solution of uranium. These values affect the reactivity of the fissile solution, so it is important to be able to calculate them and determine their effects on the reaction. Part A of this report describes some of the parameter comparisons performed on the CFD model using Fluent. Part B describes the coupling of the Fluent model with a Monte-Carlo N-Particle (MCNP) neutron transport model. The fuel tank geometry is the same as it was in the May 2015 report, annular with a thickness-to-height ratio of 0.16. An accelerator-driven neutron source provides the excitation for the reaction, and internal and external water cooling channels remove the heat. The model used in this work incorporates the Eulerian multiphase model with lift, wall lubrication, turbulent dispersion and turbulence interaction. The buoyancy-driven flow is modeled using the Boussinesq approximation, and the flow turbulence is determined using the k-ω Shear-Stress-Transport (SST) model. The dispersed turbulence multiphase model is employed to capture the multiphase turbulence effects.
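
    The Boussinesq treatment of buoyancy mentioned above can be stated in two lines: density is held constant everywhere except in the gravity term, where it varies linearly with temperature. A tiny sketch with generic water-like property values (not the report's inputs) follows.

```python
# Tiny sketch of the Boussinesq buoyancy source term: density is constant except in the
# gravity term, where it varies linearly with temperature. Values are generic assumptions.
rho_ref, beta = 1000.0, 2.1e-4             # kg/m^3, 1/K, water-like reference properties
T_ref, g_z = 300.0, -9.81                  # K, m/s^2 (gravity acts in the -z direction)

def buoyancy_force_per_volume(T):
    """Net z-direction body force (N/m^3): warm, lighter fluid is pushed upward."""
    delta_rho = -rho_ref * beta * (T - T_ref)   # linearized density deviation
    return delta_rho * g_z

for T in (300.0, 310.0, 330.0):
    print(T, round(buoyancy_force_per_volume(T), 2))
```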

  3. Validating a widely used measure of frailty: are all sub-components necessary? Evidence from the Whitehall II cohort study.

    PubMed

    Bouillon, Kim; Sabia, Severine; Jokela, Markus; Gale, Catharine R; Singh-Manoux, Archana; Shipley, Martin J; Kivimäki, Mika; Batty, G David

    2013-08-01

    There is growing interest in the measurement of frailty in older age. The most widely used measure (Fried) characterizes this syndrome using five components: exhaustion, physical activity, walking speed, grip strength, and weight loss. These components overlap, raising the possibility of using fewer, and therefore making the device more time- and cost-efficient. The analytic sample was 5,169 individuals (1,419 women) from the British Whitehall II cohort study, aged 55 to 79 years in 2007-2009. Hospitalization data were accessed through English national records (mean follow-up 15.2 months). Age- and sex-adjusted Cox models showed that all components were significantly associated with hospitalization, the hazard ratios (HR) ranging from 1.18 (95% confidence interval = 0.98, 1.41) for grip strength to 1.60 (1.35, 1.90) for usual walking speed. Some attenuation of these effects was apparent following mutual adjustment for frailty components, but the rank order of the strength of association remained unchanged. We observed a dose-response relationship between the number of frailty components and the risk for hospitalization [1 component: HR = 1.10 (0.96, 1.26); 2 components: HR = 1.52 (1.26, 1.83); 3-5 components: HR = 2.41 (1.84, 3.16); P for trend <0.0001]. A concordance index used to evaluate the predictive power for hospital admissions of individual components and the full scale was modest in magnitude (range 0.57 to 0.58). Our results support the validity of the multi-component frailty measure, but the predictive performance of the measure is poor. PMID:22772579
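
    The concordance index quoted above (about 0.57-0.58) can be illustrated with a small sketch of Harrell's C, which measures how often a higher frailty score goes with an earlier hospitalization among usable pairs. The miniature data set below is invented, and ties in follow-up time are ignored for simplicity.

```python
# Sketch of Harrell's concordance index for a frailty-style score against time to
# hospitalization; the miniature data set is invented for illustration only.
import numpy as np

def c_index(score, time, event):
    """Fraction of usable pairs in which the higher-risk score has the earlier event."""
    conc = ties = usable = 0
    n = len(score)
    for i in range(n):
        for j in range(n):
            # A pair is usable if subject i has an observed event before subject j's follow-up.
            if event[i] and time[i] < time[j]:
                usable += 1
                if score[i] > score[j]:
                    conc += 1
                elif score[i] == score[j]:
                    ties += 1
    return (conc + 0.5 * ties) / usable

score = np.array([3, 0, 2, 1, 0, 2, 1, 0])                 # e.g., number of frailty components
time  = np.array([4., 15., 6., 12., 15., 5., 9., 14.])     # months to hospitalization or censoring
event = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=bool)

print(f"C-index = {c_index(score, time, event):.2f}")
```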

  4. Experimental validation of a numerical model for subway induced vibrations

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Degrande, G.; Lombaert, G.

    2009-04-01

    This paper presents the experimental validation of a coupled periodic finite element-boundary element model for the prediction of subway induced vibrations. The model fully accounts for the dynamic interaction between the train, the track, the tunnel and the soil. The periodicity or invariance of the tunnel and the soil in the longitudinal direction is exploited using the Floquet transformation, which allows for an efficient formulation in the frequency-wavenumber domain. A general analytical formulation is used to compute the response of three-dimensional invariant or periodic media that are excited by moving loads. The numerical model is validated by means of several experiments that have been performed at a site in Regent's Park on the Bakerloo line of London Underground. Vibration measurements have been performed on the axle boxes of the train, on the rail, the tunnel invert and the tunnel wall, and in the free field, both at the surface and at a depth of 15 m. Prior to these vibration measurements, the dynamic soil characteristics and the track characteristics have been determined. The Bakerloo line tunnel of London Underground has been modelled using the coupled periodic finite element-boundary element approach and free field vibrations due to the passage of a train at different speeds have been predicted and compared to the measurements. The correspondence between the predicted and measured response in the tunnel is reasonably good, although some differences are observed in the free field. The discrepancies are explained on the basis of various uncertainties involved in the problem. The variation in the response with train speed is similar for the measurements as well as the predictions. This study demonstrates the applicability of the coupled periodic finite element-boundary element model to make realistic predictions of the vibrations from underground railways.

  5. Non-Linear Slosh Damping Model Development and Validation

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; West, Jeff

    2015-01-01

    Propellant tank slosh dynamics are typically represented by a spring-mass-damper mechanical model. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially-filled smooth wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is a potential source of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only under the linear regime where the slosh amplitude is small. With the increase of slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and to quantify the dependence of slosh damping on slosh amplitude. Accurately predicting the extremely low damping value of a smooth wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool. One must resolve thin boundary layers near the wall and limit numerical damping to a minimum. This computational study demonstrates that with proper grid resolution, CFD can indeed accurately predict the low damping physics from smooth walls under the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreement. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of the slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship. This discovery can
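
    The amplitude dependence described above can be captured by a simple piecewise relation: a constant damping ratio below a critical amplitude and a linearly increasing ratio above it. The sketch below is purely illustrative; the break amplitude, baseline damping ratio, and slope are hypothetical placeholders, not values from this study.

    # Illustrative piecewise damping model (hypothetical coefficients, not the paper's fit):
    # constant damping ratio below a critical slosh amplitude, linear growth above it.
    def slosh_damping_ratio(amplitude, zeta_linear=0.0005, a_crit=0.05, slope=0.01):
        """Return the damping ratio for a given slosh amplitude (same units as a_crit)."""
        if amplitude <= a_crit:
            return zeta_linear                                 # linear regime: amplitude-independent
        return zeta_linear + slope * (amplitude - a_crit)      # nonlinear regime: grows linearly

    for a in (0.01, 0.05, 0.10, 0.20):
        print(a, slosh_damping_ratio(a))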

  6. Validation of document image defect models for optical character recognition

    SciTech Connect

    Li, Y.; Lopresti, D.; Tomkins, A.

    1994-12-31

    In this paper we consider the problem of evaluating models for physical defects affecting the optical character recognition (OCR) process. While a number of such models have been proposed, the contention that they produce the desired result is typically argued in an ad hoc and informal way. We introduce a rigorous and more pragmatic definition of when a model is accurate: we say a defect model is validated if the OCR errors induced by the model are effectively indistinguishable from the errors encountered when using real scanned documents. We present two measures to quantify this similarity: the Vector Space method and the Coin Bias method. The former adapts an approach used in information retrieval, the latter simulates an observer attempting to do better than a "random" guesser. We compare and contrast the two techniques based on experimental data; both seem to work well, suggesting this is an appropriate formalism for the development and evaluation of document image defect models.
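
    One way to read the Vector Space method is as a similarity measure between error-frequency vectors obtained from synthetic and real documents, in the spirit of document vectors in information retrieval. The sketch below computes a cosine similarity between two hypothetical error-count vectors; it is a schematic reading of the idea, not the authors' implementation.

    # Schematic cosine similarity between OCR error-frequency vectors (made-up counts),
    # in the spirit of the Vector Space method described above.
    import math

    def cosine_similarity(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    errors_real = [120, 30, 15, 8, 4]    # counts per error class from scanned documents (hypothetical)
    errors_model = [110, 35, 12, 9, 6]   # counts per error class from the defect model (hypothetical)
    print(round(cosine_similarity(errors_real, errors_model), 3))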

  7. Validation of the Coronal Thick Target Source Model

    NASA Astrophysics Data System (ADS)

    Fleishman, Gregory D.; Xu, Yan; Nita, Gelu N.; Gary, Dale E.

    2016-01-01

    We present detailed 3D modeling of a dense, coronal thick-target X-ray flare using the GX Simulator tool, photospheric magnetic measurements, and microwave imaging and spectroscopy data. The developed model offers a remarkable agreement between the synthesized and observed spectra and images in both X-ray and microwave domains, which validates the entire model. The flaring loop parameters are chosen to reproduce the emission measure, temperature, and the nonthermal electron distribution at low energies derived from the X-ray spectral fit, while the remaining parameters, unconstrained by the X-ray data, are selected so as to match the microwave images and total power spectra. The modeling suggests that the accelerated electrons are trapped in the coronal part of the flaring loop, but away from where the magnetic field is minimal, and, thus, demonstrates that the data are clearly inconsistent with electron magnetic trapping in the weak diffusion regime mediated by Coulomb collisions. Thus, the modeling supports the interpretation of the coronal thick-target sources as sites of electron acceleration in flares and supplies us with a realistic 3D model with physical parameters of the acceleration region and flaring loop.

  8. Nonlinear ultrasound modelling and validation of fatigue damage

    NASA Astrophysics Data System (ADS)

    Fierro, G. P. Malfense; Ciampa, F.; Ginzburg, D.; Onder, E.; Meo, M.

    2015-05-01

    Nonlinear ultrasound techniques have shown greater sensitivity to microcracks and they can be used to detect structural damages at their early stages. However, there is still a lack of numerical models available in commercial finite element analysis (FEA) tools that are able to simulate the interaction of elastic waves with the material's nonlinear behaviour. In this study, a nonlinear constitutive material model was developed to predict the structural response under continuous harmonic excitation of a fatigued isotropic sample that showed anharmonic effects. Particularly, by means of Landau's theory and Kelvin tensorial representation, this model provided an understanding of the elastic nonlinear phenomena such as the second harmonic generation in three-dimensional solid media. The numerical scheme was implemented and evaluated using the commercially available FEA software LS-DYNA, and it showed a good numerical characterisation of the second harmonic amplitude generated by the damaged region known as the nonlinear response area (NRA). Since this process requires only the experimental second-order nonlinear parameter and a rough damage-size estimate as inputs, it does not need any baseline testing with the undamaged structure or any dynamic modelling of the fatigue crack growth. To validate this numerical model, the second-order nonlinear parameter was experimentally evaluated at various points over the fatigue life of an aluminium (AA6082-T6) coupon and the crack propagation was measured using an optical microscope. A good correlation was achieved between the experimental set-up and the nonlinear constitutive model.
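
    The second-order nonlinear parameter mentioned above is commonly estimated from the fundamental and second-harmonic amplitudes via the textbook relation beta = 8*A2 / (k^2 * x * A1^2), with k the wavenumber and x the propagation distance. The sketch below applies that generic relation to hypothetical amplitudes; it is not the paper's processing chain and the numbers are placeholders.

    # Textbook estimate of the second-order nonlinear parameter from measured
    # fundamental (A1) and second-harmonic (A2) displacement amplitudes (hypothetical values).
    import math

    def beta_parameter(A1, A2, frequency_hz, wave_speed, distance):
        """beta = 8 * A2 / (k^2 * x * A1^2), with k the wavenumber and x the propagation path."""
        k = 2.0 * math.pi * frequency_hz / wave_speed
        return 8.0 * A2 / (k ** 2 * distance * A1 ** 2)

    print(beta_parameter(A1=1.0e-9, A2=2.0e-12, frequency_hz=5.0e6,
                         wave_speed=6300.0, distance=0.05))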

  9. Validation of hydrogen gas stratification and mixing models

    DOE PAGESBeta

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreement is observed. Entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.

  10. Validation of hydrogen gas stratification and mixing models

    SciTech Connect

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreement is observed. Entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.

  11. Validation of hydrogen gas stratification and mixing models

    SciTech Connect

    Wu, Hsingtzu; Zhao, Haihua

    2015-11-01

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreement is observed. Entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. Computing time for each BMIX++ model with a normal desktop computer is less than 5 min.

  12. Interactive simulation of embolization coils: modeling and experimental validation.

    PubMed

    Dequidt, Jérémie; Marchal, Maud; Duriez, Christian; Kerien, Erwan; Cotin, Stéphane

    2008-01-01

    Coil embolization offers a new approach to treat aneurysms. This medical procedure is less invasive than open surgery, as it relies on the deployment of very thin platinum-based wires within the aneurysm through the arteries. When performed intracranially, this procedure must be particularly accurate and therefore carefully planned and performed by experienced radiologists. A simulator of coil deployment represents an interesting and helpful tool for the physician by providing information on the coil behavior. In this paper, an original model is proposed to obtain interactive and accurate simulations of coil deployment. The model takes into account geometric nonlinearities and uses a shape memory formulation to describe the coil's complex geometry. An experimental validation is performed in a contact-free environment to identify the mechanical properties of the coil and to quantitatively compare the simulation with real data. Computational performance is also measured to ensure an interactive simulation. PMID:18979807

  13. Experimental validation of a finite-element model updating procedure

    NASA Astrophysics Data System (ADS)

    Kanev, S.; Weber, F.; Verhaegen, M.

    2007-02-01

    This paper validates an approach to damage detection and localization based on finite-element model updating (FEMU). The approach has the advantage over other existing FEMU methods that it updates all three finite-element model matrices simultaneously while preserving their structure (connectivity), symmetry and positive-definiteness. The approach is tested in this paper on an experimental setup consisting of a steel cable, where local mass changes and a global change in the tension of the cable are introduced. The new algorithm is applied to identify the size and location of different changes in the structural parameters (mass, stiffness and damping). The obtained results clearly indicate that even small structural changes can be detected and localized with the new method. Additionally, a comparison with many other FEMU-based methods has been performed to show the superiority of the considered method.

  14. Defect distribution model validation and effective process control

    NASA Astrophysics Data System (ADS)

    Zhong, Lei

    2003-07-01

    Assumption of the underlying probability distribution is an essential part of effective process control. In this article, we demonstrate how to improve the effectiveness of equipment monitoring and process-induced defect control through properly selecting, validating and using hypothetical distribution models. The testing method is based on probability plotting, which is made possible through order statistics. Since each ordered sample data point has a cumulative probability associated with it, which is calculated as a function of sample size, the validity of the assumption is readily judged by the linearity of the ordered sample data versus the deviate predicted by the assumed statistical model from the cumulative probability. A comparison is made between normal and lognormal distributions to illustrate how dramatically the distribution model can affect the control limit setting. Examples presented include defect data collected on the SP1 dark-field inspection tool on a variety of deposited and polished metallic and dielectric films. We find that the defect count distribution is in most cases approximately lognormal. We show that the normal distribution is an inadequate assumption, as clearly indicated by the non-linearity of the probability plots. Misuse of the normal distribution leads to an overly optimistic process control limit, typically 50% tighter than suggested by the lognormal distribution. The inappropriate control limit setting consequently results in an excursion rate too high to be manageable. The lognormal distribution is a valid assumption because it is positively skewed, which adequately accounts for the fact that the defect count distribution typically has a long tail. In essence, use of the lognormal distribution is a suggestion that the long tail be treated as part of the process entitlement (capability) instead of process excursion. The adjustment of the expected process entitlement is reflected and quantified by the skewness of
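
    The probability-plotting check described above is straightforward to reproduce with order statistics. The sketch below, using synthetic defect counts, compares ordered log counts against normal deviates obtained from median-rank plotting positions (high linearity supports the lognormal assumption) and then sets a +3-sigma control limit on the log scale; it is an illustrative sketch, not the author's recipe.

    # Illustrative lognormal probability check and control limit for defect counts
    # (synthetic data; real use would substitute inspection-tool counts).
    import numpy as np
    from scipy import stats

    counts = np.random.default_rng(0).lognormal(mean=3.0, sigma=0.6, size=200)
    log_counts = np.sort(np.log(counts))
    n = log_counts.size

    # Median-rank plotting positions and the corresponding normal deviates.
    prob = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    deviates = stats.norm.ppf(prob)

    # Linearity of ordered data vs. deviates indicates the lognormal fit is adequate.
    r = np.corrcoef(deviates, log_counts)[0, 1]
    print(f"probability-plot correlation: {r:.3f}")

    # Control limit set on the log scale, then transformed back to counts.
    ucl = np.exp(log_counts.mean() + 3.0 * log_counts.std(ddof=1))
    print(f"upper control limit (counts): {ucl:.1f}")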

  15. Literature-derived bioaccumulation models for earthworms: Development and validation

    SciTech Connect

    Sample, B.E.; Suter, G.W. II; Beauchamp, J.J.; Efroymson, R.A.

    1999-09-01

    Estimation of contaminant concentrations in earthworms is a critical component in many ecological risk assessments. Without site-specific data, literature-derived uptake factors or models are frequently used. Although considerable research has been conducted on contaminant transfer from soil to earthworms, most studies focus on only a single location. External validation of transfer models has not been performed. The authors developed a database of soil and tissue concentrations for nine inorganic and two organic chemicals. Only studies that presented total concentrations in depurated earthworms were included. Uptake factors and simple and multiple regression models of natural-log-transformed concentrations of each analyte in soil and earthworms were developed using data from 26 studies. These models were then applied to data from six additional studies. Estimated and observed earthworm concentrations were compared using nonparametric Wilcoxon signed-rank tests. Relative accuracy and quality of the different estimation methods were evaluated by calculating the proportional deviation of the estimate from the measured value. With the exception of Cr, significant, single-variable (e.g., soil concentration) regression models were fit for each analyte. Inclusion of soil Ca improved model fits for Cd and Pb. Soil pH only marginally improved model fits. The best general estimates of chemical concentrations in earthworms were generated by simple ln-ln regression models for As, Cd, Cu, Hg, Mn, Pb, Zn, and polychlorinated biphenyls. No method accurately estimated Cr or Ni in earthworms. Although multiple regression models including pH generated better estimates for a few analytes, in general, the predictive utility gained by incorporating environmental variables was marginal.
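
    The simple ln-ln regression models referred to above have the form ln(C_worm) = b0 + b1*ln(C_soil). The sketch below fits and applies such a model to hypothetical paired soil and tissue concentrations; the numbers are invented and are not taken from the database described in the abstract.

    # Minimal ln-ln uptake regression, ln(C_worm) = b0 + b1 * ln(C_soil),
    # fitted to hypothetical paired concentrations (e.g., mg/kg dry weight).
    import numpy as np

    soil = np.array([5.0, 12.0, 40.0, 85.0, 150.0, 300.0])
    worm = np.array([8.0, 15.0, 35.0, 60.0, 90.0, 140.0])

    b1, b0 = np.polyfit(np.log(soil), np.log(worm), 1)
    print(f"ln(C_worm) = {b0:.2f} + {b1:.2f} * ln(C_soil)")

    # Predicted tissue concentration for a new soil sample.
    c_soil_new = 50.0
    print(np.exp(b0 + b1 * np.log(c_soil_new)))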

  16. Systematic approach to verification and validation: High explosive burn models

    SciTech Connect

    Menikoff, Ralph; Scovel, Christina A.

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data are scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in the electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time-consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web-based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code

  17. A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.

    ERIC Educational Resources Information Center

    Edmonston, Leon P.; Randall, Robert S.

    A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…

  18. On The Modeling of Educational Systems: II

    ERIC Educational Resources Information Center

    Grauer, Robert T.

    1975-01-01

    A unified approach to model building is developed from the separate techniques of regression, simulation, and factorial design. The methodology is applied in the context of a suburban school district. (Author/LS)

  19. Nyala and Bushbuck II: A Harvesting Model.

    ERIC Educational Resources Information Center

    Fay, Temple H.; Greeff, Johanna C.

    1999-01-01

    Adds a cropping or harvesting term to the animal overpopulation model developed in Part I of this article. Investigates various harvesting strategies that might suggest a solution to the overpopulation problem without actually culling any animals. (ASK)
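
    A harvesting term of this kind is usually appended to a logistic growth law, dN/dt = rN(1 - N/K) - H(N). The sketch below integrates a constant-effort variant (H = EN) with a simple Euler step under made-up parameter values, purely to illustrate how a cropping term lowers the equilibrium population; it is not the model from the article.

    # Logistic growth with a constant-effort harvesting term, dN/dt = r*N*(1 - N/K) - E*N,
    # integrated with an Euler step (hypothetical parameters, for illustration only).
    def simulate(N0=200.0, r=0.3, K=1000.0, E=0.1, dt=0.1, years=50):
        N = N0
        for _ in range(int(years / dt)):
            N += dt * (r * N * (1.0 - N / K) - E * N)
        return N

    print(simulate(E=0.0))   # no harvesting: population approaches K
    print(simulate(E=0.1))   # harvesting: equilibrium drops toward K*(1 - E/r)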

  20. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  1. A CARTILAGE GROWTH MIXTURE MODEL WITH COLLAGEN REMODELING: VALIDATION PROTOCOLS

    PubMed Central

    Klisch, Stephen M.; Asanbaeva, Anna; Oungoulian, Sevan R.; Masuda, Koichi; Thonar, Eugene J-MA; Davol, Andrew; Sah, Robert L.

    2009-01-01

    A cartilage growth mixture (CGM) model is proposed to address limitations of a model used in a previous study. New stress constitutive equations for the solid matrix are derived and collagen (COL) remodeling is incorporated into the CGM model by allowing the intrinsic COL material constants to evolve during growth. An analytical validation protocol based on experimental data from a recent in vitro growth study is developed. Available data included measurements of tissue volume, biochemical composition, and tensile modulus for bovine calf articular cartilage (AC) explants harvested at three depths and incubated for 13 days in 20% FBS and 20% FBS+β-aminopropionitrile. The proposed CGM model can match tissue biochemical content and volume exactly while predicting theoretical values of tensile moduli that do not significantly differ from experimental values. Also, theoretical values of a scalar COL remodeling factor are positively correlated with COL crosslink content, and mass growth functions are positively correlated with cell density. The results suggest that the CGM model may help to guide in vitro growth protocols for AC tissue via the a priori prediction of geometric and biomechanical properties. PMID:18532855

  2. Validation of a Global Hydrodynamic Flood Inundation Model

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Smith, A.; Sampson, C. C.; Alfieri, L.; Neal, J. C.

    2014-12-01

    In this work we present first validation results for a hyper-resolution global flood inundation model. We use a true hydrodynamic model (LISFLOOD-FP) to simulate flood inundation at 1km resolution globally and then use downscaling algorithms to determine flood extent and depth at 90m spatial resolution. Terrain data are taken from a custom version of the SRTM data set that has been processed specifically for hydrodynamic modelling. Return periods of flood flows along the entire global river network are determined using: (1) empirical relationships between catchment characteristics and index flood magnitude in different hydroclimatic zones derived from global runoff data; and (2) an index flood growth curve, also empirically derived. Bankful return period flow is then used to set channel width and depth, and flood defence impacts are modelled using empirical relationships between GDP, urbanization and defence standard of protection. The results of these simulations are global flood hazard maps for a number of different return period events from 1 in 5 to 1 in 1000 years. We compare these predictions to flood hazard maps developed by national government agencies in the UK and Germany using similar methods but employing detailed local data, and to observed flood extent at a number of sites including St. Louis, USA and Bangkok in Thailand. Results show that global flood hazard models can have considerable skill given careful treatment to overcome errors in the publicly available data that are used as their input.

  3. Validation of two-equation turbulence models for propulsion flowfields

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Venkateswaran, S.; Merkle, Charles L.

    1994-01-01

    The objective of the study is to assess the capability of two-equation turbulence models for simulating propulsion-related flowfields. The standard kappa-epsilon model with Chien's low Reynolds number formulation for near-wall effects is used as the baseline turbulence model. Several experimental test cases, representative of rocket combustor internal flowfields, are used to catalog the performance of the baseline model. Specific flowfields considered here include recirculating flow behind a backstep, mixing between coaxial jets and planar shear layers. Since turbulence solutions are notoriously dependent on grid and numerical methodology, the effects of grid refinement and artificial dissipation on numerical accuracy are studied. In the latter instance, computational results obtained with several central-differenced and upwind-based formulations are compared. Based on these results, improved turbulence models such as enhanced kappa-epsilon models as well as other two-equation formulations (e.g., kappa-omega) are being studied. In addition, validation for swirling and reacting flowfields is also currently underway.

  4. Canyon building ventilation system dynamic model -- Parameters and validation

    SciTech Connect

    Moncrief, B.R.; Chen, F.F.K.

    1993-01-01

    Plant system simulation crosses many disciplines. At the core is the mimic of key components in the form of "mathematical models." These component models are functionally integrated to represent the plant. With today's low cost high capacity computers, the whole plant can be truly and effectively reproduced in a computer model. Dynamic simulation has its roots in "single loop" design, which is still a common objective in the employment of simulation. The other common objectives are the ability to preview plant operation, to anticipate problem areas, and to test the impact of design options. As plant system complexity increases and our ability to simulate the entire plant grows, the objective to optimize plant system design becomes practical. This shift in objectives from problem avoidance to total optimization by far offers the most rewarding potential. Even a small reduction in bulk materials and space can sufficiently justify the application of this technology. Furthermore, to realize an optimal plant starts from a tight and disciplined design. We believe the assurance required to execute such a design strategy can partly be derived from a plant model. This paper reports on the application of a dynamic model to evaluate the capacity of an existing production plant ventilation system. This study met the practical objectives of capacity evaluation under present and future conditions, and under normal and accidental situations. More importantly, the description of this application, in its methods and its utility, aims to validate the technology of dynamic simulation in the environment of plant system design and safe operation.

  5. Validating the Thinking Styles Inventory-Revised II among Chinese university students with hearing impairment through test accommodations.

    PubMed

    Cheng, Sanyin; Zhang, Li-Fang

    2014-01-01

    The present study pioneered in adopting test accommodations to validate the Thinking Styles Inventory-Revised II (TSI-R2; Sternberg, Wagner, & Zhang, 2007) among Chinese university students with hearing impairment. A series of three studies were conducted that drew their samples from the same two universities, in which accommodating test directions (N = 213), combining test directions with language accommodations from students' perspectives (N = 366), and integrating test directions with language accommodations from teachers' perspectives (N = 129) were used. The accommodated TSI-R2 generally indicated acceptable internal scale reliabilities and factorial validity for Chinese university students with hearing loss. Limitations in relation to the study participants are discussed, as well as test accommodations and the significance and implications of the study. PMID:25051880

  6. Validation Analysis of the Shoal Groundwater Flow and Transport Model

    SciTech Connect

    A. Hassan; J. Chapman

    2008-11-01

    Environmental restoration at the Shoal underground nuclear test is following a process prescribed by a Federal Facility Agreement and Consent Order (FFACO) between the U.S. Department of Energy, the U.S. Department of Defense, and the State of Nevada. Characterization of the site included two stages of well drilling and testing in 1996 and 1999, and development and revision of numerical models of groundwater flow and radionuclide transport. Agreement on a contaminant boundary for the site and a corrective action plan was reached in 2006. Later that same year, three wells were installed for the purposes of model validation and site monitoring. The FFACO prescribes a five-year proof-of-concept period for demonstrating that the site groundwater model is capable of producing meaningful results with an acceptable level of uncertainty. The corrective action plan specifies a rigorous seven step validation process. The accepted groundwater model is evaluated using that process in light of the newly acquired data. The conceptual model of ground water flow for the Project Shoal Area considers groundwater flow through the fractured granite aquifer comprising the Sand Springs Range. Water enters the system by the infiltration of precipitation directly on the surface of the mountain range. Groundwater leaves the granite aquifer by flowing into alluvial deposits in the adjacent basins of Fourmile Flat and Fairview Valley. A groundwater divide is interpreted as coinciding with the western portion of the Sand Springs Range, west of the underground nuclear test, preventing flow from the test into Fourmile Flat. A very low conductivity shear zone east of the nuclear test roughly parallels the divide. The presence of these lateral boundaries, coupled with a regional discharge area to the northeast, is interpreted in the model as causing groundwater from the site to flow in a northeastward direction into Fairview Valley. Steady-state flow conditions are assumed given the absence of

  7. NAIRAS aircraft radiation model development, dose climatology, and initial validation

    NASA Astrophysics Data System (ADS)

    Mertens, Christopher J.; Meier, Matthias M.; Brown, Steven; Norman, Ryan B.; Xu, Xiaojing

    2013-10-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis suggests

  8. Combined Analysis and Validation of Earth Rotation Models and Observations

    NASA Astrophysics Data System (ADS)

    Kutterer, Hansjoerg; Göttl, Franziska; Heiker, Andrea; Kirschner, Stephanie; Schmidt, Michael; Seitz, Florian

    2010-05-01

    Global dynamic processes cause changes in the Earth's rotation, gravity field and geometry. Thus, they can be traced in geodetic observations of these quantities. However, the sensitivity of the various geodetic observation techniques to specific processes in the Earth system differs. More meaningful conclusions with respect to contributions from individual Earth subsystems can be drawn from the combined analysis of highly precise and consistent parameter time series from heterogeneous observation types which carry partially redundant and partially complementary information. For the sake of a coordinated research in this field, the Research Unit FOR 584 "Earth Rotation and Global Dynamic Processes" is funded at present by the German Research Foundation (DFG). It is concerned with the refined and consistent modeling and data analysis. One of the projects (P9) within this Research Unit addresses the combined analysis and validation of Earth rotation models and observations. In P9 three main topics are addressed: (1) the determination and mutual validation of reliable consistent time series for Earth rotation parameters and gravity field coefficients due to the consideration of their physical connection by the Earth's tensor of inertia, (2) the separation of individual Earth rotation excitation mechanisms by merging all available relevant data from recent satellite missions (GRACE, Jason-1, …) and geodetic space techniques (GNSS, SLR, VLBI, …) in a highly consistent way, (3) the estimation of fundamental physical Earth parameters (Love numbers, …) by an inverse model using the improved geodetic observation time series as constraints. Hence, this project provides significant and unique contributions to the field of Earth system science in general; it corresponds with the goals of the Global Geodetic Observing System (GGOS). In this paper project P9 is introduced, the goals are summarized and a status report including a presentation and discussion of intermediate

  9. Validity of the KABC-II Culture-Language Interpretive Matrix: A Comparison of Native English Speakers and Spanish-Speaking English Language Learners

    ERIC Educational Resources Information Center

    Van Deth, Leah M.

    2013-01-01

    The purpose of the present study was to investigate the validity of the Culture-Language Interpretive Matrix (C-LIM; Flanagan, Ortiz, & Alfonso, 2013) when applied to scores from the Kaufman Assessment Battery for Children, 2nd Edition (KABC-II; Kaufman & Kaufman, 2004). Data were analyzed from the KABC-II standardization sample as well as…

  10. Geophysical Monitoring for Validation of Transient Permafrost Models (Invited)

    NASA Astrophysics Data System (ADS)

    Hauck, C.; Hilbich, C.; Marmy, A.; Scherler, M.

    2013-12-01

    Permafrost is a widespread phenomenon at high latitudes and high altitudes and describes the permanently frozen state of the subsurface in lithospheric material. In the context of climate change, both, new monitoring and modelling techniques are required to observe and predict potential permafrost changes, e.g. the warming and degradation which may lead to the liberation of carbon (Arctic) and the destabilisation of permafrost slopes (mountains). Mountain permafrost occurrences in the European Alps are characterised by temperatures only a few degrees below zero and are therefore particularly sensitive to projected climate changes in the 21st century. Traditional permafrost observation techniques are mainly based on thermal monitoring in vertical and horizontal dimension, but they provide only weak indications of physical properties such as ice or liquid water content. Geophysical techniques can be used to characterise permafrost occurrences and to monitor their changes as the physical properties of frozen and unfrozen ground measured by geophysical techniques are markedly different. In recent years, electromagnetic, seismic but especially electrical methods have been used to continuously monitor permafrost occurrences and to detect long-term changes within the active layer and regarding the ice content within the permafrost layer. On the other hand, coupled transient thermal/hydraulic models are used to predict the evolution of permafrost occurrences under different climate change scenarios. These models rely on suitable validation data for a certain observation period, which is usually restricted to data sets of ground temperature and active layer depth. Very important initialisation and validation data for permafrost models are, however, ground ice content and unfrozen water content in the active layer. In this contribution we will present a geophysical monitoring application to estimate ice and water content and their evolution in time at a permafrost station in

  11. Validating clustering of molecular dynamics simulations using polymer models

    PubMed Central

    2011-01-01

    Background: Molecular dynamics (MD) simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine if the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. Results: We develop a novel series of models using basic polymer theory that have intuitive, clearly-defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well-suited for clustering polymer structures, to our models and MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. Conclusions: We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models which exhibit well-behaved dynamics including: meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. To our knowledge, our framework is the
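
    As a rough illustration of the clustering step described above, the sketch below applies spectral clustering to an affinity matrix built from pairwise distances between conformations. The coordinates are synthetic stand-ins for MD snapshots and the Gaussian kernel width is an arbitrary placeholder; this is not the authors' pipeline.

    # Schematic spectral clustering of conformations from a precomputed affinity matrix
    # (synthetic coordinates stand in for MD snapshots; sigma is an arbitrary kernel width).
    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(1)
    snapshots = np.vstack([rng.normal(0.0, 0.3, (50, 3)),    # one "meta-stable state"
                           rng.normal(2.0, 0.3, (50, 3))])   # another

    dist = np.linalg.norm(snapshots[:, None, :] - snapshots[None, :, :], axis=-1)
    sigma = 1.0
    affinity = np.exp(-dist ** 2 / (2.0 * sigma ** 2))        # Gaussian similarity

    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    print(np.bincount(labels))                                # snapshots assigned to each cluster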

  12. Implementing the Ecosystem Model: Phase II.

    ERIC Educational Resources Information Center

    Schuh, John H.

    1978-01-01

    The ecosystem model was used to assess student perceptions of certain aspects of residential life at a large university. Over 70 percent of questionnaires were returned. From the data, aspects of the environment were changed according to student recommendations. A great need for more information communication was found. (RPG)

  13. NGC1300 dynamics - II. The response models

    NASA Astrophysics Data System (ADS)

    Kalapotharakos, C.; Patsis, P. A.; Grosbøl, P.

    2010-10-01

    We study the stellar response in a spectrum of potentials describing the barred spiral galaxy NGC1300. These potentials have been presented in a previous paper and correspond to three different assumptions as regards the geometry of the galaxy. For each potential we consider a wide range of Ωp pattern speed values. Our goal is to discover the geometries and the Ωp supporting specific morphological features of NGC1300. For this purpose we use the method of response models. In order to compare the images of NGC1300 with the density maps of our models, we define a new index which is a generalization of the Hausdorff distance. This index helps us to find out quantitatively which cases reproduce specific features of NGC1300 in an objective way. Furthermore, we construct alternative models following a Schwarzschild-type technique. By this method we vary the weights of the various energy levels, and thus the orbital contribution of each energy, in order to minimize the differences between the response density and that deduced from the surface density of the galaxy, under certain assumptions. We find that the models corresponding to Ωp ~ 16 and 22 kms-1kpc-1 are able to reproduce efficiently certain morphological features of NGC1300, with each one having its advantages and drawbacks. Based on observations collected at the European Southern Observatory, Chile: programme ESO 69.A-0021.

  14. ADOPT: A Historically Validated Light Duty Vehicle Consumer Choice Model

    SciTech Connect

    Brooker, A.; Gonder, J.; Lopp, S.; Ward, J.

    2015-05-04

    The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy's Vehicle Technologies Office. It estimates technology improvement impacts on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit method and the mixed logit method to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes including vehicle price, fuel cost, acceleration, range and usable volume. The average importance of several attributes changes nonlinearly across their ranges and changes with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model, which captures the key aspects needed to sum petroleum use and greenhouse gas emissions, including the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, new vehicle option introduction rate limits, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data. It matches key dimensions including sales by fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use. It manages the inputs, simulation, and results.
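
    At its core, the multinomial-logit machinery mentioned above converts a weighted attribute score per vehicle into a choice probability. The sketch below computes logit market shares from hypothetical utilities; the attribute weights and vehicle attributes are invented and are not ADOPT's calibrated values.

    # Multinomial-logit market shares from a weighted sum of vehicle attributes
    # (hypothetical weights and attribute values, not ADOPT's calibration).
    import math

    weights = {"price_k": -0.08, "fuel_cost_cpm": -0.05, "accel_0_60_s": -0.10, "range_mi": 0.002}
    vehicles = {
        "compact_gas": {"price_k": 24, "fuel_cost_cpm": 10, "accel_0_60_s": 8.5, "range_mi": 420},
        "midsize_ev":  {"price_k": 38, "fuel_cost_cpm": 4,  "accel_0_60_s": 6.0, "range_mi": 250},
        "pickup_gas":  {"price_k": 45, "fuel_cost_cpm": 14, "accel_0_60_s": 7.0, "range_mi": 500},
    }

    utilities = {name: sum(weights[k] * attrs[k] for k in weights) for name, attrs in vehicles.items()}
    denom = sum(math.exp(u) for u in utilities.values())
    for name, u in utilities.items():
        print(f"{name}: {math.exp(u) / denom:.2%}")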

  15. Mechanical validation of whole bone composite femur models.

    PubMed

    Cristofolini, L; Viceconti, M; Cappello, A; Toni, A

    1996-04-01

    Composite synthetic models of the human femur have recently become commercially available as substitutes for cadaveric specimens. Their quick diffusion was justified by the advantages they offer as a substitute for real femurs. The present investigation concentrated on an extensive experimental validation of the mechanical behaviour of the whole bone composite model, compared to human fresh-frozen and dried-rehydrated specimens for different loading conditions. First, the viscoelastic behaviour of the models was investigated under simulated single leg stance loading, showing that the small time-dependent phenomena observed tend to extinguish within a few minutes of the load application. The behaviour under axial loading was then studied by comparing the vertical displacement of the head as well as the axial strains, by application of a parametric descriptive model of the strain distribution. Finally, a four point bending test and a torsional test were performed to characterize the whole bone stiffness of the femur. In all these tests, the composite femurs were shown to fall well within the range for cadaveric specimens, with no significant differences being detected between the synthetic femurs and the two groups of cadaveric femurs. Moreover, the interfemur variability for the composite femurs was 20-200 times lower than that for the cadaveric specimens, thus allowing smaller differences to be characterized as significant using the same sample size, if the composite femurs are employed. PMID:8964782

  16. Validation of Thermospheric Density Models for Drag Specification

    NASA Astrophysics Data System (ADS)

    Boll, N. J.; Ridley, A. J.; Doornbos, E.

    2014-12-01

    The rate of deployment for small satellite constellations into low earth orbit (LEO) is rapidly increasing. At these altitudes, the orbital characteristics of low mass spacecraft are heavily impacted by atmospheric drag. Given that many such satellites do not possess systems capable of applying thrust to correct for these perturbations, the ability to perform station-keeping maneuvers, as well as to adjust and maintain the relative position of each spacecraft within a constellation, is greatly dependent on the ability to accurately model variations in the thermosphere-ionosphere density. This paper uses density data measured along the orbital paths of the Challenging Minisatellite Payload (CHAMP), the Gravity Recovery and Climate Experiment (GRACE), and the Gravity field and steady-state Ocean Circulation Explorer (GOCE) to validate and compare several atmospheric models, including the Global Ionosphere Thermosphere Model (GITM), the US Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Radar (NRLMSISE-00), and the Jacchia-Bowman 2008 empirical thermospheric density model (JB2008), under various geomagnetic activity levels and seasonal conditions.

  17. Bioaerosol optical sensor model development and initial validation

    NASA Astrophysics Data System (ADS)

    Campbell, Steven D.; Jeys, Thomas H.; Eapen, Xuan Le

    2007-04-01

    This paper describes the development and initial validation of a bioaerosol optical sensor model. This model was used to help determine design parameters and estimate performance of a new low-cost optical sensor for detecting bioterrorism agents. In order to estimate sensor performance in detecting biowarfare simulants and rejecting environmental interferents, use was made of a previously reported catalog of EEM (excitation/emission matrix) fluorescence cross-section measurements and previously reported multiwavelength-excitation biosensor modeling work. In the present study, the biosensor modeled employs a single high-power 365 nm UV LED source plus an IR laser diode for particle size determination. The sensor has four output channels: IR size channel, UV elastic channel and two fluorescence channels. The sensor simulation was used to select the fluorescence channel wavelengths of 400-450 and 450-600 nm. Using these selected fluorescence channels, the performance of the sensor in detecting simulants and rejecting interferents was estimated. Preliminary measurements with the sensor are presented which compare favorably with the simulation results.

  18. Development and validation of a railgun hydrogen pellet injector model

    SciTech Connect

    King, T.L.; Zhang, J.; Kim, K.

    1995-12-31

    A railgun hydrogen pellet injector model is presented and its predictions are compared with the experimental data. High-speed hydrogenic ice injection is the dominant refueling method for magnetically confined plasmas used in controlled thermonuclear fusion research. As experimental devices approach the scale of power-producing fusion reactors, the fueling requirements become increasingly more difficult to meet since, due to the large size and the high electron densities and temperatures of the plasma, hypervelocity pellets of a substantial size will need to be injected into the plasma continuously and at high repetition rates. Advanced technologies, such as the railgun pellet injector, are being developed to address this demand. Despite the apparent potential of electromagnetic launchers to produce hypervelocity projectiles, physical effects that were neither anticipated nor well understood have made it difficult to realize this potential. Therefore, it is essential to understand not only the theory behind railgun operation, but the primary loss mechanisms, as well. Analytic tools have been used by many researchers to design and optimize railguns and analyze their performance. This has led to a greater understanding of railgun behavior and opened the door for further improvement. A railgun hydrogen pellet injector model has been developed. The model is based upon a pellet equation of motion that accounts for the dominant loss mechanisms, inertial and viscous drag. The model has been validated using railgun pellet injectors developed by the Fusion Technology Research Laboratory at the University of Illinois at Urbana-Champaign.
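
    The pellet equation of motion referred to above balances a driving force against viscous (proportional to velocity) and inertial (proportional to velocity squared) drag losses. The sketch below integrates a generic form of that balance with an Euler step; the constant driving force and all coefficients are placeholders, not the injector's parameters.

    # Schematic pellet equation of motion, m*dv/dt = F_drive - c_visc*v - c_inert*v**2,
    # integrated with an Euler step (placeholder coefficients, for illustration only).
    def pellet_velocity(mass=5e-6, F_drive=20.0, c_visc=1e-3, c_inert=1e-6, dt=1e-7, t_end=1e-3):
        v, t = 0.0, 0.0
        while t < t_end:
            a = (F_drive - c_visc * v - c_inert * v * v) / mass
            v += a * dt
            t += dt
        return v

    print(f"muzzle velocity ~ {pellet_velocity():.0f} m/s")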

  19. Criterion Validity, Severity Cut Scores, and Test-Retest Reliability of the Beck Depression Inventory-II in a University Counseling Center Sample

    ERIC Educational Resources Information Center

    Sprinkle, Stephen D.; Lurie, Daphne; Insko, Stephanie L.; Atkinson, George; Jones, George L.; Logan, Arthur R.; Bissada, Nancy N.

    2002-01-01

    The criterion validity of the Beck Depression Inventory-II (BDI-II; A. T. Beck, R. A. Steer, & G. K. Brown, 1996) was investigated by pairing blind BDI-II administrations with the major depressive episode portion of the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I; M. B. First, R. L. Spitzer, M. Gibbon, & J. B. W. Williams,…

  20. Science Motivation Questionnaire II: Validation with Science Majors and Nonscience Majors

    ERIC Educational Resources Information Center

    Glynn, Shawn M.; Brickman, Peggy; Armstrong, Norris; Taasoobshirazi, Gita

    2011-01-01

    From the perspective of social cognitive theory, the motivation of students to learn science in college courses was examined. The students--367 science majors and 313 nonscience majors--responded to the Science Motivation Questionnaire II, which assessed five motivation components: intrinsic motivation, self-determination, self-efficacy, career…

  1. The Internal Validation of Level II and Level III Respiratory Therapy Examinations. Final Report.

    ERIC Educational Resources Information Center

    Jouett, Michael L.

    This project began with the delineation of the roles and functions of respiratory therapy personnel by the American Association for Respiratory Therapy. In Phase II, The Psychological Corporation used this delineation to develop six proficiency examinations, three at each of two levels. One exam at each level was designated for the purpose of the…

  2. Results of site validation experiments. Volume II. Supporting documents 5 through 14

    SciTech Connect

    Not Available

    1983-01-01

    Volume II contains the following supporting documents: Summary of Geologic Mapping of Underground Investigations; Logging of Vertical Coreholes - "Double Box" Area and Exploratory Drift; WIPP High Precision Gravity Survey; Basic Data Reports for Drillholes; Brine Content of Facility Interval Strata; Mineralogical Content of Facility Interval Strata; Location and Characterization of Interbedded Materials; Characterization of Aquifers at Shaft Locations; and Permeability of Facility Interval Strata.

  3. First principles Candu fuel model and validation experimentation

    SciTech Connect

    Corcoran, E.C.; Kaye, M.H.; Lewis, B.J.; Thompson, W.T.; Akbari, F.; Higgs, J.D.; Verrall, R.A.; He, Z.; Mouris, J.F.

    2007-07-01

    are added mixing terms associated with the appearance of the component species in particular phases. To validate the model, coulometric titration experiments relating the chemical potential of oxygen to the moles of oxygen introduced to SIMFUEL are underway. A description of the apparatus to oxidize and reduce samples in a controlled way in a H{sub 2}/H{sub 2}O mixture is presented. Existing measurements for irradiated fuel, both defected and non-defected, are also being incorporated into the validation process. (authors)

  4. Development, validation and application of numerical space environment models

    NASA Astrophysics Data System (ADS)

    Honkonen, Ilja

    2013-10-01

    Currently the majority of space-based assets are located inside the Earth's magnetosphere where they must endure the effects of the near-Earth space environment, i.e. space weather, which is driven by the supersonic flow of plasma from the Sun. Space weather refers to the day-to-day changes in the temperature, magnetic field and other parameters of the near-Earth space, similarly to ordinary weather which refers to changes in the atmosphere above ground level. Space weather can also cause adverse effects on the ground, for example, by inducing large direct currents in power transmission systems. The performance of computers has been growing exponentially for many decades and as a result the importance of numerical modeling in science has also increased rapidly. Numerical modeling is especially important in space plasma physics because there are no in-situ observations of space plasmas outside of the heliosphere and it is not feasible to study all aspects of space plasmas in a terrestrial laboratory. With the increasing number of computational cores in supercomputers, the parallel performance of numerical models on distributed memory hardware is also becoming crucial. This thesis consists of an introduction and four peer-reviewed articles, and describes the process of developing numerical space environment/weather models and the use of such models to study the near-Earth space. A complete model development chain is presented, starting from initial planning and design to distributed memory parallelization and optimization, and finally testing, verification and validation of numerical models. A grid library that provides good parallel scalability on distributed memory hardware and several novel features, the distributed cartesian cell-refinable grid (DCCRG), is designed and developed. DCCRG is presently used in two numerical space weather models being developed at the Finnish Meteorological Institute. The first global magnetospheric test particle simulation based on the

  5. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, the model selection is usually based on some measure of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has widely been adopted in Chemometrics and Econometrics) in RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
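
    The MCCV idea discussed above amounts to repeatedly drawing a random calibration/validation split instead of leaving one site out at a time. The sketch below shows a minimal loop with ordinary least squares standing in for the paper's generalised least squares regression; the site data and the 20% leave-out fraction are hypothetical.

    # Minimal Monte Carlo cross-validation loop: repeated random calibration/validation
    # splits of regional sites (OLS stands in for the paper's GLS regression).
    import numpy as np

    rng = np.random.default_rng(42)
    n_sites, n_predictors = 60, 3
    X = rng.normal(size=(n_sites, n_predictors))
    y = X @ np.array([0.8, -0.4, 0.2]) + rng.normal(scale=0.5, size=n_sites)

    def mccv_rmse(X, y, n_splits=500, leave_out_frac=0.2):
        n = len(y)
        n_out = max(1, int(round(leave_out_frac * n)))
        errors = []
        for _ in range(n_splits):
            idx = rng.permutation(n)
            test, train = idx[:n_out], idx[n_out:]
            A = np.column_stack([np.ones(len(train)), X[train]])
            coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
            pred = np.column_stack([np.ones(len(test)), X[test]]) @ coef
            errors.append(np.sqrt(np.mean((y[test] - pred) ** 2)))
        return float(np.mean(errors))

    print(mccv_rmse(X, y))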

  6. Systematic validation of disease models for pharmacoeconomic evaluations. Swiss HIV Cohort Study.

    PubMed

    Sendi, P P; Craig, B A; Pfluger, D; Gafni, A; Bucher, H C

    1999-08-01

    Pharmacoeconomic evaluations are often based on computer models which simulate the course of disease with and without medical interventions. The purpose of this study is to propose and illustrate a rigorous approach for validating such disease models. For illustrative purposes, we applied this approach to a computer-based model we developed to mimic the history of HIV-infected subjects at the greatest risk for Mycobacterium avium complex (MAC) infection in Switzerland. The drugs included as a prophylactic intervention against MAC infection were azithromycin and clarithromycin. We used a homogeneous Markov chain to describe the progression of an HIV-infected patient through six MAC-free states, one MAC state, and death. Probability estimates were extracted from the Swiss HIV Cohort Study database (1993-95) and randomized controlled trials. The model was validated by testing for (1) technical validity, (2) predictive validity, (3) face validity and (4) modelling process validity. Sensitivity analysis and independent model implementation in DATA (PPS) and self-written Fortran 90 code (BAC) assured technical validity. Agreement between modelled and observed MAC incidence confirmed predictive validity. Modelled MAC prophylaxis at different starting conditions affirmed face validity. Published articles by other authors supported modelling process validity. The proposed validation procedure is a useful approach to improve the validity of the model. PMID:10461580
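
    A minimal Python sketch of the model structure described above: a homogeneous Markov cohort model with six MAC-free states, one MAC state and an absorbing death state. The transition probabilities below are illustrative placeholders, not the Swiss HIV Cohort Study estimates.

    import numpy as np

    n_states = 8                      # 0-5: MAC-free, 6: MAC, 7: death (absorbing)
    P = np.zeros((n_states, n_states))
    for i in range(6):
        P[i, i] = 0.90                # remain in current MAC-free state
        P[i, min(i + 1, 5)] += 0.05   # progress to the next MAC-free state
        P[i, 6] = 0.03                # develop MAC
        P[i, 7] = 0.02                # die
    P[6, 6], P[6, 7] = 0.85, 0.15     # MAC state: remain or die
    P[7, 7] = 1.0                     # death is absorbing

    cohort = np.zeros(n_states)
    cohort[0] = 1.0                   # whole cohort starts in the first MAC-free state
    for month in range(24):           # monthly cycles
        cohort = cohort @ P

    print("24-month MAC prevalence:", round(cohort[6], 3))
    print("24-month mortality:     ", round(cohort[7], 3))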

  7. A photoemission model for low work function coated metal surfaces and its experimental validation

    NASA Astrophysics Data System (ADS)

    Jensen, Kevin L.; Feldman, Donald W.; Moody, Nathan A.; O'Shea, Patrick G.

    2006-06-01

    Photocathodes are a critical component of many linear-accelerator-based light sources. The development of a custom-engineered photocathode based on low work function coatings requires an experimentally validated photoemission model that accounts for the complexity of the emission process. We have developed a time-dependent model accounting for the effects of laser heating and thermal propagation on photoemission. It accounts for surface conditions (coating, field enhancement, and reflectivity), laser parameters (duration, intensity, and wavelength), and material characteristics (reflectivity, laser penetration depth, and scattering rates) to predict current distribution and quantum efficiency (QE) as a function of wavelength. The model is validated by (i) experimental measurements of the QE of cesiated surfaces, (ii) the QE and performance of commercial dispenser cathodes (B, M, and scandate), and (iii) comparison to QE values reported in the literature for bare metals and B-type dispenser cathodes, all for various wavelengths. Of particular note is that the highest QE for a commercial (M-type) dispenser cathode found here was measured to be 0.22% at 266 nm, and is projected to be 3.5 times larger for a 5 ps pulse delivering 0.6 mJ/cm² under a 50 MV/m field.

  8. Case study for model validation: assessing a model for thermal decomposition of polyurethane foam.

    SciTech Connect

    Dowding, Kevin J.; Leslie, Ian H.; Hobbs, Michael L.; Rutherford, Brian Milne; Hills, Richard Guy; Pilch, Martin M.

    2004-10-01

    A case study is reported to document the details of a validation process to assess the accuracy of a mathematical model to represent experiments involving thermal decomposition of polyurethane foam. The focus of the report is to work through a validation process. The process addresses the following activities. The intended application of the mathematical model is discussed to better understand the pertinent parameter space. The parameter space of the validation experiments is mapped to the application parameter space. The mathematical models, the computer code used to solve them, and the code verification are presented. Experimental data from two activities are used to validate the mathematical models. The first experiment assesses the chemistry model alone and the second experiment assesses the model of coupled chemistry, conduction, and enclosure radiation. The model results of both experimental activities are summarized, and the uncertainty of the model in representing each experimental activity is estimated. The comparison between the experimental data and model results is quantified with various metrics. After addressing these activities, an assessment of the process for the case study is given. Weaknesses in the process are discussed and lessons learned are summarized.

  9. Schizosaccharomyces pombe and its Ni(II)-insensitive mutant GA1 in Ni(II) uptake from aqueous solutions: a biodynamic model.

    PubMed

    Sayar, Nihat Alpagu; Durmaz-Sam, Selcen; Kazan, Dilek; Sayar, Ahmet Alp

    2014-08-01

    In the present study, Ni(II) uptake from aqueous solution by living cells of the Schizosaccharomyces pombe haploid 972 with h(-) mating type and a Ni(II)-insensitive mutant GA1 derived from 972 was investigated at various initial glucose and Ni(II) concentrations. A biodynamic model was developed to predict the unsteady and steady-state phases of the uptake process. Gompertz growth and uptake process parameters were optimized to predict the maximum growth rate μm and the process metric Cr, the remaining Ni(II) content in the aqueous solution. The simulated overall metal uptake values were found to be in acceptable agreement with experimental results. The model validation was done through regression statistics and uncertainty and sensitivity analyses. To gain insight into the phenomenon of Ni(II) uptake by wild-type and mutant S. pombe, probable active and passive metal transport mechanisms in yeast cells were discussed in view of the simulation results. The present work revealed the potential of mutant GA1 to remove Ni(II) cations from aqueous media. The results obtained provided new insights for understanding the combined effect of biosorption and bioaccumulation processes for metal removal and offered a possibility for the use of growing mutant S. pombe cells in bioremediation. PMID:24752843
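
    A minimal Python sketch of the growth-parameter estimation step: fitting a modified Gompertz curve (Zwietering form) to growth data to recover the maximum specific growth rate μm. The functional form and the synthetic data are assumptions for illustration; the paper's coupled uptake model is not reproduced here.

    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, A, mu_m, lam):
        """ln(N/N0) with asymptote A, maximum growth rate mu_m, lag time lam."""
        return A * np.exp(-np.exp(mu_m * np.e / A * (lam - t) + 1.0))

    t = np.linspace(0, 40, 30)                      # hours
    y = gompertz(t, 2.5, 0.25, 5.0)
    y = y + np.random.default_rng(1).normal(scale=0.05, size=t.size)  # noisy data

    (A, mu_m, lam), _ = curve_fit(gompertz, t, y, p0=(2.0, 0.2, 3.0))
    print(f"A = {A:.2f}, mu_m = {mu_m:.3f} 1/h, lag = {lam:.1f} h")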

  10. PEP-II vacuum system pressure profile modeling using EXCEL

    SciTech Connect

    Nordby, M.; Perkins, C.

    1994-06-01

    A generic, adaptable Microsoft EXCEL program to simulate molecular flow in beam line vacuum systems is introduced. Modeling using a finite-element approximation of the governing differential equation is discussed, as well as error estimation and program capabilities. The ease of use and flexibility of the spreadsheet-based program are demonstrated. PEP-II vacuum system models are reviewed and compared with analytical models.
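
    A minimal Python analogue of the approach the abstract describes: discretizing the steady molecular-flow equation c·d²P/dx² = -q along a beam pipe with distributed outgassing q and pumps at both ends. All parameter values are illustrative, not PEP-II numbers, and the spreadsheet implementation is replaced by a small linear solve.

    import numpy as np

    L = 10.0              # pipe length [m]
    n = 101               # grid points
    c = 20.0              # specific conductance [l*m/s]
    q = 1e-8              # outgassing per unit length [torr*l/(s*m)]
    S = 100.0             # pump speed at each end [l/s]
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]

    # Tridiagonal system for interior nodes; pressure at the pumps fixed by the
    # total gas load split between the two end pumps.
    A = np.zeros((n, n))
    b = np.zeros(n)
    P_end = q * L / (2.0 * S)
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = b[-1] = P_end
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = c / h**2
        A[i, i] = -2.0 * c / h**2
        b[i] = -q

    P = np.linalg.solve(A, b)
    print(f"peak pressure {P.max():.2e} torr at x = {x[P.argmax()]:.1f} m")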

  11. Effects of Risperidone on Aberrant Behavior in Persons with Developmental Disabilities: II. Social Validity Measures.

    ERIC Educational Resources Information Center

    McAdam, David B.; Zarcone, Jennifer R.; Hellings, Jessica; Napolitano, Deborah A.; Schroeder, Stephen R.

    2002-01-01

    Consumer satisfaction and social validity were measured during a double-blind, placebo-controlled evaluation of risperidone in treating aberrant behaviors of persons with developmental disabilities. A survey showed all 17 caregivers felt participation was positive. Community members (n=52) also indicated that when on medication, the 5 participants…

  12. Comparing Validity and Reliability in Special Education Title II and IDEA Data

    ERIC Educational Resources Information Center

    Steinbrecher, Trisha D.; McKeown, Debra; Walther-Thomas, Chriss

    2013-01-01

    Previous researchers have found that special education teacher shortages are pervasive and exacerbated by federal policies regarding "highly qualified" teacher requirements. The authors examined special education teacher personnel data from 2 federal data sources to determine if these sources offer a reliable and valid means of…

  13. The African American Acculturation Scale II: Cross-Validation and Short Form.

    ERIC Educational Resources Information Center

    Landrine, Hope; Klonoff, Elizabeth A.

    1995-01-01

    Studied African American culture, using a new, shortened, 33-item African American Acculturation Scale (AAAS-33) to assess the scale's validity and reliability. Comparisons between the original form and the AAAS-33 reveal high correlations; however, the longer form may be sensitive to some beliefs, practices, and attitudes not assessed by the short…

  14. Validity of Social, Moral and Emotional Facets of Self-Description Questionnaire II

    ERIC Educational Resources Information Center

    Leung, Kim Chau; Marsh, Herbert W.; Yeung, Alexander Seeshing; Abduljabbar, Adel S.

    2015-01-01

    Studies adopting a construct validity approach can be categorized into within- and between-network studies. Few studies have applied a between-network approach and tested the correlations of the social (same-sex relations, opposite-sex relations, parent relations), moral (honesty-trustworthiness), and emotional (emotional stability) facets of the…

  15. Dynamical models of a sample of Population II stars

    NASA Astrophysics Data System (ADS)

    Levison, H. F.; Richstone, D. O.

    1986-09-01

    Dynamical models are constructed in order to investigate the implications of recent kinematic data of distant Population II stars on the emissivity distribution of those stars. Models are constructed using a modified Schwarzschild method in two extreme scale-free potentials, spherical and E6 elliptical. Both potentials produce flat rotation curves and velocity dispersion profiles. In all models, the distribution of stars in this sample is flat. Moreover, it is not possible to construct a model with a strictly spheroidal emissivity distribution. Most models have dimples at the poles. The dynamics of the models indicate that the system is supported by both the third integral and z angular momentum.

  16. Modeling and Validating Chronic Pharmacological Manipulation of Circadian Rhythms

    PubMed Central

    Kim, J K; Forger, D B; Marconi, M; Wood, D; Doran, A; Wager, T; Chang, C; Walton, K M

    2013-01-01

    Circadian rhythms can be entrained by a light-dark (LD) cycle and can also be reset pharmacologically, for example, by the CK1δ/ε inhibitor PF-670462. Here, we determine how these two independent signals affect circadian timekeeping from the molecular to the behavioral level. By developing a systems pharmacology model, we predict and experimentally validate that chronic CK1δ/ε inhibition during the earlier hours of a LD cycle can produce a constant stable delay of rhythm. However, chronic dosing later during the day, or in the presence of longer light intervals, is not predicted to yield an entrained rhythm. We also propose a simple method based on phase response curves (PRCs) that predicts the effects of a LD cycle and chronic dosing of a circadian drug. This work indicates that dosing timing and environmental signals must be carefully considered for accurate pharmacological manipulation of circadian phase. PMID:23863866
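
    A minimal Python sketch of the PRC-based bookkeeping the abstract proposes: iterate the circadian phase once per 24-hour day, adding a light-driven shift and a drug-driven shift evaluated at the clock time of dosing, and look for a steady entrained phase. The sinusoidal PRC shapes, intrinsic period and dose times are illustrative assumptions, not the fitted curves from the paper.

    import numpy as np

    tau = 24.5                                   # intrinsic period [h]

    def prc_light(phi):                          # phase shift from the LD cycle [h]
        return -1.5 * np.sin(2 * np.pi * phi / 24.0)

    def prc_drug(phi):                           # phase shift from CK1 inhibition [h]
        return -1.0 * np.sin(2 * np.pi * (phi - 6.0) / 24.0)

    def steady_phase(dose_time, days=200):
        phi = 0.0
        for _ in range(days):
            drift = tau - 24.0                   # daily drift of the free-running clock
            phi = (phi + drift + prc_light(phi)
                   + prc_drug((phi + dose_time) % 24.0)) % 24.0
        return phi

    for dose in (2, 8, 14, 20):                  # hours after lights-on
        print(f"dose at ZT{dose:>2}: steady phase = {steady_phase(dose):5.2f} h")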

  17. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.

  18. Utilizing Chamber Data for Developing and Validating Climate Change Models

    NASA Technical Reports Server (NTRS)

    Monje, Oscar

    2012-01-01

    Controlled environment chambers (e.g. growth chambers, SPAR chambers, or open-top chambers) are useful for measuring plant ecosystem responses to climatic variables and CO2 that affect plant water relations. However, data from chambers were found to overestimate responses of C fluxes to CO2 enrichment. Chamber data may be confounded by numerous artifacts (e.g. sidelighting, edge effects, increased temperature and VPD, etc.) and this limits what can be measured accurately. Chambers can be used to measure canopy-level energy balance under controlled conditions, and plant transpiration responses to CO2 concentration can be elucidated. However, these measurements cannot be used directly in model development or validation. The response of stomatal conductance to CO2 will be the same as in the field, but the measured response must be recalculated in such a manner as to account for differences in aerodynamic conductance, temperature and VPD between the chamber and the field.

  19. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, there is little guidance available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing levels of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
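
    A minimal Python sketch of the bootstrap convergence check described above, using a deliberately simple sensitivity measure (absolute rank correlation between each input and the output of a toy model) rather than the Morris, RSA or variance-based estimators from the paper. The toy model and sample sizes are assumptions; the point is only to show confidence intervals on the indices narrowing as the sample grows.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    def toy_model(x):                       # stand-in for a hydrological model run
        return np.sin(x[:, 0]) + 5.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

    def indices(x, y):                      # one sensitivity index per parameter
        return np.array([abs(spearmanr(x[:, j], y)[0]) for j in range(x.shape[1])])

    for n in (100, 500, 2000):
        x = rng.uniform(-np.pi, np.pi, size=(n, 3))
        y = toy_model(x)
        boot = np.array([indices(x[idx], y[idx])
                         for idx in rng.integers(0, n, size=(200, n))])
        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        print(f"n={n:5d}  95% CI width per parameter: {np.round(hi - lo, 3)}")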

  20. Modal testing for model validation of structures with discrete nonlinearities

    PubMed Central

    Ewins, D. J.; Weekes, B.; delli Carri, A.

    2015-01-01

    Model validation using data from modal tests is now widely practiced in many industries for advanced structural dynamic design analysis, especially where structural integrity is a primary requirement. These industries tend to demand highly efficient designs for their critical structures which, as a result, are increasingly operating in regimes where traditional linearity assumptions are no longer adequate. In particular, many modern structures are found to contain localized areas, often around joints or boundaries, where the actual mechanical behaviour is far from linear. Such structures need to have appropriate representation of these nonlinear features incorporated into the otherwise largely linear models that are used for design and operation. This paper proposes an approach to this task which is an extension of existing linear techniques, especially in the testing phase, involving only just as much nonlinear analysis as is necessary to construct a model which is good enough, or ‘valid’: i.e. capable of predicting the nonlinear response behaviour of the structure under all in-service operating and test conditions with a prescribed accuracy. A short-list of methods described in the recent literature categorized using our framework is given, which identifies those areas in which further development is most urgently required. PMID:26303924

  1. Circulation Control Model Experimental Database for CFD Validation

    NASA Technical Reports Server (NTRS)

    Paschal, Keith B.; Neuhart, Danny H.; Beeler, George B.; Allan, Brian G.

    2012-01-01

    A 2D circulation control wing was tested in the Basic Aerodynamic Research Tunnel at the NASA Langley Research Center. A traditional circulation control wing employs tangential blowing along the span over a trailing-edge Coanda surface for the purpose of lift augmentation. This model has been tested extensively at the Georgia Tech Research Institute for the purpose of performance documentation at various blowing rates. The current study seeks to expand on the previous work by documenting additional flow-field data needed for validation of computational fluid dynamics. Two jet momentum coefficients were tested during this entry: 0.047 and 0.114. Boundary-layer transition was investigated and turbulent boundary layers were established on both the upper and lower surfaces of the model. Chordwise and spanwise pressure measurements were made, and tunnel sidewall pressure footprints were documented. Laser Doppler Velocimetry measurements were made on both the upper and lower surface of the model at two chordwise locations (x/c = 0.8 and 0.9) to document the state of the boundary layers near the spanwise blowing slot.

  2. Tyre tread-block friction: modelling, simulation and experimental validation

    NASA Astrophysics Data System (ADS)

    Wallaschek, Jörg; Wies, Burkard

    2013-07-01

    Pneumatic tyres have been used in vehicles since the beginning of the last century. They generate braking and steering forces for bicycles, motorcycles, cars, buses, trucks, agricultural vehicles and aircraft. These forces are generated in the usually very small contact area between tyre and road, and their performance characteristics are of eminent importance for safety and comfort. Much research has been devoted to optimising tyre design with respect to footprint pressure and friction. In this context, the development of virtual tyre prototypes, that is, simulation models for the tyre, has grown into a science in its own right. While the modelling of the structural dynamics of the tyre has reached a very advanced level, which allows effects like the rate-independent inelasticity of filled elastomers or the transient 3D deformations of the ply-reinforced tread, shoulder and sidewalls to be taken into account, little is known about the friction between tread-block elements and road. This is particularly obvious when snow, ice, water or a third-body layer is present in the tyre-road contact. In the present paper, we give a survey of the present state of knowledge in the modelling, simulation and experimental validation of tyre tread-block friction processes. We concentrate on experimental techniques.

  3. Vibroacoustic Model Validation for a Curved Honeycomb Composite Panel

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Robinson, Jay H.; Grosveld, Ferdinand W.

    2001-01-01

    Finite element and boundary element models are developed to investigate the vibroacoustic response of a curved honeycomb composite sidewall panel. Results from vibroacoustic tests conducted in the NASA Langley Structural Acoustic Loads and Transmission facility are used to validate the numerical predictions. The sidewall panel is constructed from a flexible honeycomb core sandwiched between carbon fiber reinforced composite laminate face sheets. This type of construction is being used in the development of an all-composite aircraft fuselage. In contrast to conventional rib-stiffened aircraft fuselage structures, the composite panel has nominally uniform thickness, resulting in a uniform distribution of mass and stiffness. Due to differences in the mass and stiffness distribution, the noise transmission mechanisms for the composite panel are expected to be substantially different from those of a conventional rib-stiffened structure. The development of accurate vibroacoustic models will aid in the understanding of the dominant noise transmission mechanisms and enable optimization studies to be performed that will determine the most beneficial noise control treatments. Finite element and boundary element models of the sidewall panel are described. Vibroacoustic response predictions are presented for forced vibration input and the results are compared with experimental data.

  4. Validation of an Acoustic Impedance Prediction Model for Skewed Resonators

    NASA Technical Reports Server (NTRS)

    Howerton, Brian M.; Parrott, Tony L.

    2009-01-01

    An impedance prediction model was validated experimentally to determine the composite impedance of a series of high-aspect-ratio slot resonators incorporating channel skew and sharp bends. Such structures are useful for packaging acoustic liners into constrained spaces for turbofan noise control applications. A formulation of the Zwikker-Kosten Transmission Line (ZKTL) model, incorporating the Richards correction for rectangular channels, is used to calculate the composite normalized impedance of a series of six multi-slot resonator arrays with constant channel length. Experimentally, acoustic data were acquired in the NASA Langley Normal Incidence Tube over the frequency range of 500 to 3500 Hz at 120 and 140 dB OASPL. Normalized impedance was reduced using the Two-Microphone Method for the various combinations of channel skew and sharp 90° and 180° bends. Results show that the presence of skew and/or sharp bends does not significantly alter the impedance of a slot resonator as compared to a straight resonator of the same total channel length. ZKTL predicts the impedance of such resonators very well over the frequency range of interest. The model can be used to design arrays of slot resonators that can be packaged into complex geometries heretofore unsuitable for effective acoustic treatment.
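
    A minimal Python sketch of the Two-Microphone (transfer-function) data reduction mentioned above: two microphones at distances x1 > x2 from the sample measure the complex transfer function H12 = p2/p1, from which the reflection coefficient and normalized impedance follow. The geometry and the test impedance are illustrative assumptions, not the NASA Langley Normal Incidence Tube configuration; the code synthesizes H12 from a known impedance and recovers it to show the reduction is self-consistent.

    import numpy as np

    c, f = 343.0, 1000.0                    # sound speed [m/s], test frequency [Hz]
    k = 2 * np.pi * f / c                   # wavenumber
    x1, x2 = 0.085, 0.020                   # mic distances from the sample face [m]
    s = x1 - x2                             # microphone spacing

    z_true = 1.2 - 0.8j                     # normalized impedance to be recovered
    R_true = (z_true - 1) / (z_true + 1)    # plane-wave reflection coefficient

    # Synthesize the "measured" transfer function for this impedance.
    p = lambda x: np.exp(1j * k * x) + R_true * np.exp(-1j * k * x)
    H12 = p(x2) / p(x1)

    # Two-microphone reduction (transfer-function method).
    R = (H12 - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - H12) * np.exp(2j * k * x1)
    z = (1 + R) / (1 - R)
    print("recovered impedance:", np.round(z, 3))   # matches z_true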

  5. Ground target infrared signature model validation for real-time hardware-in-the-loop simulations

    NASA Astrophysics Data System (ADS)

    Sanders, Jeffrey S.; Rodgers, Jeremy B.; Siddique, Ahmed A.

    1998-07-01

    Techniques and tools for validation of real-time infrared target signature models are presented. The model validation techniques presented in this paper were developed for hardware-in-the-loop (HWIL) simulations at the U.S. Army Missile Command's Research, Development, and Engineering Center. Real-time target model validation is a required deliverable to the customer of a HWIL simulation facility and is a critical part of ensuring the fidelity of a HWIL simulation. There are two levels of real-time target model validation. The first level is comparison of the target model to some baseline or measured data which answers the question 'are the simulation inputs correct?' The second level of validation is a simulation validation which answers the question 'for a given target model input are the simulation hardware and software generating the correct output?' This paper deals primarily with the first level of target model validation.

  6. Stratospheric Heterogeneous Chemistry and Microphysics: Model Development, Validation and Applications

    NASA Technical Reports Server (NTRS)

    Turco, Richard P.

    1996-01-01

    The objectives of this project are to: define the chemical and physical processes leading to stratospheric ozone change that involve polar stratospheric clouds (PSCs) and the reactions occurring on the surfaces of PSC particles; study the formation processes, and the physical and chemical properties of PSCs that are relevant to atmospheric chemistry and to the interpretation of field measurements taken during polar stratosphere missions; develop quantitative models describing PSC microphysics and heterogeneous chemical processes; assimilate laboratory and field data into these models; and calculate the extent of chemical processing on PSCs and the impact of specific microphysical processes on polar composition and ozone depletion. During the course of the project, a new coupled microphysics/physical-chemistry/photochemistry model for stratospheric sulfate aerosols and nitric acid and ice PSCs was developed and applied to analyze data collected during NASA's Arctic Airborne Stratospheric Expedition-II (AASE-II) and other missions. In this model, detailed treatments of multicomponent sulfate aerosol physical chemistry, sulfate aerosol microphysics, polar stratospheric cloud microphysics, PSC ice surface chemistry, as well as homogeneous gas-phase chemistry were included for the first time. In recent studies focusing on AASE measurements, the PSC model was used to analyze specific measurements from an aircraft deployment of an aerosol impactor, FSSP, and NO(y) detector. The calculated results are in excellent agreement with observations for particle volumes as well as NO(y) concentrations, thus confirming the importance of supercooled sulfate/nitrate droplets in PSC formation. The same model has been applied to perform a statistical study of PSC properties in the Northern Hemisphere using several hundred high-latitude air parcel trajectories obtained from Goddard. The rates of ozone depletion along trajectories with different meteorological histories are presently

  7. The Role of Structural Models in the Solar Sail Flight Validation Process

    NASA Technical Reports Server (NTRS)

    Johnston, John D.

    2004-01-01

    NASA is currently soliciting proposals via the New Millennium Program ST-9 opportunity for a potential Solar Sail Flight Validation (SSFV) experiment to develop and operate in space a deployable solar sail that can be steered and provides measurable acceleration. The approach planned for this experiment is to test and validate models and processes for solar sail design, fabrication, deployment, and flight. These models and processes would then be used to design, fabricate, and operate scalable solar sails for future space science missions. There are six validation objectives planned for the ST-9 SSFV experiment: 1) Validate solar sail design tools and fabrication methods; 2) Validate controlled deployment; 3) Validate in-space structural characteristics (focus of poster); 4) Validate solar sail attitude control; 5) Validate solar sail thrust performance; 6) Characterize the sail's electromagnetic interaction with the space environment. This poster presents a top-level assessment of the role of structural models in the validation process for in-space structural characteristics.

  8. Canyon building ventilation system dynamic model -- Parameters and validation

    SciTech Connect

    Moncrief, B.R.; Chen, F.F.K.

    1993-02-01

    Plant system simulation crosses many disciplines. At the core is the mimic of key components in the form of mathematical "models". These component models are functionally integrated to represent the plant. With today's low-cost, high-capacity computers, the whole plant can be truly and effectively reproduced in a computer model. Dynamic simulation has its roots in "single loop" design, which is still a common objective in the employment of simulation. The other common objectives are the ability to preview plant operation, to anticipate problem areas, and to test the impact of design options. As plant system complexity increases and our ability to simulate the entire plant grows, the objective to optimize plant system design becomes practical. This shift in objectives from problem avoidance to total optimization by far offers the most rewarding potential. Even a small reduction in bulk materials and space can sufficiently justify the application of this technology. Furthermore, to realize an optimal plant starts from a tight and disciplined design. We believe the assurance required to execute such a design strategy can partly be derived from a plant model. This paper reports on the application of a dynamic model to evaluate the capacity of an existing production plant ventilation system. This study met the practical objectives of capacity evaluation under present and future conditions, and under normal and accidental situations. More importantly, the description of this application, in its methods and its utility, aims to validate the technology of dynamic simulation in the environment of plant system design and safe operation.

  9. An open source lower limb model: Hip joint validation.

    PubMed

    Modenese, L; Phillips, A T M; Bull, A M J

    2011-08-11

    Musculoskeletal lower limb models have been shown to be able to predict hip contact forces (HCFs) that are comparable to in vivo measurements obtained from instrumented prostheses. However, the muscle recruitment predicted by these models does not necessarily compare well to measured electromyographic (EMG) signals. In order to verify if it is possible to accurately estimate HCFs from muscle force patterns consistent with EMG measurements, a lower limb model based on a published anatomical dataset (Klein Horsman et al., 2007. Clinical Biomechanics. 22, 239-247) has been implemented in the open source software OpenSim. A cycle-to-cycle hip joint validation was conducted against HCFs recorded during gait and stair climbing trials of four arthroplasty patients (Bergmann et al., 2001. Journal of Biomechanics. 34, 859-871). Hip joint muscle tensions were estimated by minimizing a polynomial function of the muscle forces. The resulting muscle activation patterns obtained by assessing multiple powers of the objective function were compared against EMG profiles from the literature. Calculated HCFs denoted a tendency to monotonically increase their magnitude when raising the power of the objective function; the best estimation obtained from muscle forces consistent with experimental EMG profiles was found when a quadratic objective function was minimized (average overestimation at experimental peak frame: 10.1% for walking, 7.8% for stair climbing). The lower limb model can produce appropriate balanced sets of muscle forces and joint contact forces that can be used in a range of applications requiring accurate quantification of both. The developed model is available at the website https://simtk.org/home/low_limb_london. PMID:21742331
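
    A minimal Python sketch of the muscle-recruitment step the abstract describes: at one instant, a required hip moment is distributed among a few muscles by minimizing a polynomial function of the muscle forces, subject to the moment balance. Muscle names, moment arms and maximum forces are illustrative assumptions, not the Klein Horsman dataset values used in the OpenSim model.

    import numpy as np
    from scipy.optimize import minimize

    moment_arm = np.array([0.05, 0.035, 0.02])     # [m] iliopsoas, rectus fem., sartorius
    f_max      = np.array([1500., 1100., 300.])    # [N] maximum isometric forces
    required_moment = 60.0                         # [N*m] joint moment to reproduce
    power = 2                                      # exponent of the polynomial objective

    cost = lambda f: np.sum((f / f_max) ** power)  # minimize normalized muscle effort
    res = minimize(cost, x0=f_max / 2, method="SLSQP",
                   bounds=[(0.0, fm) for fm in f_max],
                   constraints=[{"type": "eq",
                                 "fun": lambda f: moment_arm @ f - required_moment}])

    print("muscle forces [N]:", np.round(res.x, 1))
    print("hip moment check [N*m]:", round(float(moment_arm @ res.x), 2))

    Raising the exponent (power = 3, 4, ...) spreads the load more evenly across muscles, which is the behaviour the abstract examines when comparing against EMG profiles.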

  10. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, since a 65% permeability reduction around the wellbore was estimated. The design for this minifracture was from 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. Also, an induced fracture half-length of 100 feet was determined to have occurred, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  11. An approach to model validation and model-based prediction -- polyurethane foam case study.

    SciTech Connect

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and

  12. User's guide for the stock-recruitment model validation program. Environmental Sciences Division Publication No. 1985

    SciTech Connect

    Christensen, S.W.; Kirk, B.L.; Goodyear, C.P.

    1982-06-01

    SRVAL is a FORTRAN IV computer code designed to aid in assessing the validity of curve-fits of the linearized Ricker stock-recruitment model, modified to incorporate multiple-age spawners and to include an environmental variable, to variously processed annual catch-per-unit effort statistics for a fish population. It is sometimes asserted that curve-fits of this kind can be used to determine the sensitivity of fish populations to such man-induced stresses as entrainment and impingement at power plants. The SRVAL code was developed to test such assertions. It was utilized in testimony written in connection with the Hudson River Power Case (US Environmental Protection Agency, Region II). This testimony was recently published as a NUREG report. Here, a user's guide for SRVAL is presented.
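
    A minimal Python sketch of the curve-fit that SRVAL is designed to scrutinize: the linearized Ricker stock-recruitment relation ln(R/S) = ln(alpha) - beta*S + gamma*E, estimated by ordinary least squares. The single-age formulation and the synthetic stock, recruit and environmental series are simplifying assumptions; the code ignores the multiple-age-spawner extension and the catch-per-unit-effort processing discussed in the abstract.

    import numpy as np

    rng = np.random.default_rng(3)
    S = rng.uniform(50, 500, size=40)                   # spawning stock index
    E = rng.normal(size=40)                             # environmental variable
    ln_alpha, beta, gamma = 1.5, 0.004, 0.3
    R = S * np.exp(ln_alpha - beta * S + gamma * E
                   + rng.normal(scale=0.2, size=40))    # recruits with lognormal noise

    # Linearized Ricker model fitted by ordinary least squares.
    X = np.column_stack([np.ones_like(S), S, E])
    coef, *_ = np.linalg.lstsq(X, np.log(R / S), rcond=None)
    print(f"ln(alpha) = {coef[0]:.2f}, beta = {-coef[1]:.4f}, gamma = {coef[2]:.2f}")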

  13. Competitive sorption of Pb(II), Cu(II) and Ni(II) on carbonaceous nanofibers: A spectroscopic and modeling approach.

    PubMed

    Ding, Congcong; Cheng, Wencai; Wang, Xiangxue; Wu, Zhen-Yu; Sun, Yubing; Chen, Changlun; Wang, Xiangke; Yu, Shu-Hong

    2016-08-01

    The competitive sorption of Pb(II), Cu(II) and Ni(II) on the uniform carbonaceous nanofibers (CNFs) was investigated in binary/ternary-metal systems. The pH-dependent sorption of Pb(II), Cu(II) and Ni(II) on CNFs was independent of ionic strength, indicating that inner-sphere surface complexation dominated the sorption of Pb(II), Cu(II) and Ni(II) on CNFs. The maximum sorption capacities of Pb(II), Cu(II) and Ni(II) on CNFs in single-metal systems at pH 5.5±0.2 and 25±1°C were 3.84 mmol/g (795.65 mg/g), 3.21 mmol/g (204.00 mg/g) and 2.67 mmol/g (156.70 mg/g), respectively. In equimolar binary/ternary-metal systems, Pb(II) exhibited greater inhibition of the sorption of Cu(II) and Ni(II), demonstrating the stronger affinity of CNFs for Pb(II). The competitive sorption of heavy metals in ternary-metal systems was predicted quite well by surface complexation modeling derived from single-metal data. According to FTIR, XPS and EXAFS analyses, Pb(II), Cu(II) and Ni(II) were specifically adsorbed on CNFs via covalent bonding. These observations should provide an essential starting point for the simultaneous removal of multiple heavy metals from aquatic environments by CNFs, and open the door to the application of CNFs. PMID:27108273
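
    A minimal Python sketch of the kind of single-metal isotherm fit that yields maximum sorption capacities such as those quoted above (q_max in mmol/g). The Langmuir form and the equilibrium data are assumptions for illustration; the paper itself uses surface complexation modeling for the competitive, multi-metal predictions.

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(c_eq, q_max, k_l):
        """Langmuir isotherm: sorbed amount vs. equilibrium concentration."""
        return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

    c_eq = np.linspace(0.05, 4.0, 12)                        # equilibrium conc. [mmol/L]
    q_obs = langmuir(c_eq, 3.8, 2.5)
    q_obs = q_obs + np.random.default_rng(2).normal(0, 0.05, 12)   # noisy batch data

    (q_max, k_l), _ = curve_fit(langmuir, c_eq, q_obs, p0=(3.0, 1.0))
    print(f"q_max = {q_max:.2f} mmol/g, K_L = {k_l:.2f} L/mmol")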

  14. A technique for global monitoring of net solar irradiance at the ocean surface. II - Validation

    NASA Technical Reports Server (NTRS)

    Chertock, Beth; Frouin, Robert; Gautier, Catherine

    1992-01-01

    The generation and validation of the first satellite-based long-term record of surface solar irradiance over the global oceans are addressed. The record is generated using Nimbus-7 Earth radiation budget (ERB) wide-field-of-view planetary-albedo data as input to a numerical algorithm designed and implemented based on radiative transfer theory. The mean monthly values of net surface solar irradiance are computed on a 9-deg latitude-longitude spatial grid for November 1978-October 1985. The new data set is validated in comparisons with short-term, regional, high-resolution, satellite-based records. The ERB-based values of net surface solar irradiance are compared with corresponding values based on radiance measurements taken by the Visible-Infrared Spin Scan Radiometer aboard GOES series satellites. Errors in the new data set are estimated to lie between 10 and 20 W/sq m on monthly time scales.

  15. Gravitropic responses of the Avena coleoptile in space and on clinostats. II. Is reciprocity valid?

    NASA Technical Reports Server (NTRS)

    Johnsson, A.; Brown, A. H.; Chapman, D. K.; Heathcote, D.; Karlsson, C.

    1995-01-01

    Experiments were undertaken to determine if the reciprocity rule is valid for gravitropic responses of oat coleoptiles in the acceleration region below 1 g. The rule predicts that the gravitropic response should be proportional to the product of the applied acceleration and the stimulation time. Seedlings were cultivated on 1 g centrifuges and transferred to test centrifuges to apply a transverse g-stimulation. Since responses occurred in microgravity, the uncertainties about the validity of clinostat simulation of weightlessness were avoided. Plants at two stages of coleoptile development were tested. Plant responses were obtained using time-lapse video recordings that were analyzed after the flight. Stimulus intensities and durations were varied and ranged from 0.1 to 1.0 g and from 2 to 130 min, respectively. For threshold g-doses the reciprocity rule was obeyed. The threshold dose was of the order of 55 g·s and 120 g·s, respectively, for the two groups of plants investigated. Reciprocity was also studied for bending responses ranging from just above the detectable level to about 10 degrees. The validity of the rule could not be confirmed for higher g-doses, chiefly because the data were more variable. It was investigated whether the uniformity of the overall response data increased when the gravitropic dose was defined as (g^m x t) with m-values different from unity. This was not the case, and the reciprocity concept is, therefore, valid also in the hypogravity region. The concept of gravitropic dose, the product of the transverse acceleration and the stimulation time, is also well-defined in the acceleration region studied. With the same hardware, tests were done on Earth where responses occurred on clinostats. The results did not contradict the reciprocity rule, but scatter in the data was large.

  16. EEG-neurofeedback for optimising performance. II: creativity, the performing arts and ecological validity.

    PubMed

    Gruzelier, John H

    2014-07-01

    As a continuation of a review of evidence of the validity of cognitive/affective gains following neurofeedback in healthy participants, including correlations in support of the gains being mediated by feedback learning (Gruzelier, 2014a), the focus here is on the impact on creativity, especially in the performing arts including music, dance and acting. The majority of research involves alpha/theta (A/T), sensory-motor rhythm (SMR) and heart rate variability (HRV) protocols. There is evidence of reliable benefits from A/T training with advanced musicians especially for creative performance, and reliable benefits from both A/T and SMR training for novice music performance in adults and in a school study with children with impact on creativity, communication/presentation and technique. Making the SMR ratio training context ecologically relevant for actors enhanced creativity in stage performance, with added benefits from the more immersive training context. A/T and HRV training have benefitted dancers. The neurofeedback evidence adds to the rapidly accumulating validation of neurofeedback, while performing arts studies offer an opportunity for ecological validity in creativity research for both creative process and product. PMID:24239853

  17. Bow shock models of ultracompact H II regions

    NASA Technical Reports Server (NTRS)

    Mac Low, Mordecai-Mark; Van Buren, Dave; Wood, Douglas O. S.; Churchwell, ED

    1991-01-01

    This paper presents models of ultracompact H II regions as the bow shocks formed by massive stars, with strong stellar winds, moving supersonically through molecular clouds. The morphologies, sizes and brightnesses of observed objects match the models well. Plausible models are provided for the ultracompact H II regions G12.21 - 0.1, G29.96 - 0.02, G34.26 + 0.15, and G43.89 - 0.78. To do this, the equilibrium shape of the wind-blown shell is calculated, assuming momentum conservation. Then the shell is illuminated with ionizing radiation from the central star, radiative transfer for free-free emission through the shell is performed, and the resulting object is visualized at various angles for comparison with radio continuum maps. The model unifies most of the observed morphologies of ultracompact H II regions, excluding only those objects with spherical shells. Ram pressure confinement greatly lengthens the life of ultracompact H II regions, explaining the large number that exist in the Galaxy despite their low apparent kinematic ages.
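
    A minimal Python sketch of the momentum-balance scaling that underlies such bow-shock models: the standoff distance at which wind ram pressure balances the ram pressure of the ambient medium, R0 = sqrt(Mdot_w * v_w / (4*pi*rho_a*v_*^2)). The stellar-wind and cloud numbers below are generic illustrative values, not the fitted parameters of the sources named in the abstract.

    import numpy as np

    msun, yr, pc = 1.989e33, 3.156e7, 3.086e18      # cgs conversions
    m_h = 1.67e-24                                  # hydrogen mass [g]

    mdot = 1e-6 * msun / yr                         # wind mass-loss rate [g/s]
    v_wind = 2.0e8                                  # wind speed, 2000 km/s [cm/s]
    n_ambient = 1e5                                 # cloud number density [cm^-3]
    v_star = 1.0e6                                  # stellar speed, 10 km/s [cm/s]

    rho = 1.4 * n_ambient * m_h                     # mass density incl. helium
    r0 = np.sqrt(mdot * v_wind / (4 * np.pi * rho * v_star**2))
    print(f"standoff distance: {r0 / pc:.3f} pc ({r0:.2e} cm)")

    With these numbers the standoff distance comes out at a few hundredths of a parsec, comparable to observed ultracompact H II region sizes.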

  18. Validation model for Raman based skin carotenoid detection.

    PubMed

    Ermakov, Igor V; Gellermann, Werner

    2010-12-01

    measurement, and which can be easily removed for subsequent biochemical measurements. Excellent correlation (coefficient R=0.95) is obtained for this tissue site which could serve as a model site for scaled up future validation studies of large populations. The obtained results provide proof that resonance Raman spectroscopy is a valid non-invasive objective methodology for the quantitative assessment of carotenoid antioxidants in human skin in vivo. PMID:20678465

  19. NAIRAS aircraft radiation model development, dose climatology, and initial validation

    PubMed Central

    Mertens, Christopher J; Meier, Matthias M; Brown, Steven; Norman, Ryan B; Xu, Xiaojing

    2013-01-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis

  20. Validation of medical modeling & simulation training devices and systems.

    PubMed

    Magee, J Harvey

    2003-01-01

    For almost a decade, research has been conducted in many areas of science to develop medical simulation training devices and even comprehensive training systems. To propel the field, the Telemedicine and Advanced Technology Research Center (TATRC), an agency of the United States Army Medical Research and Materiel Command (USAMRMC), has been managing a portfolio of research projects in the area of Medical Modeling and Simulation (MM&S) since 1999. Significant progress has been made to identify and harness enabling technologies. Generally, these developments can be categorized into four areas: (1) PC-based interactive multimedia, (2) Digitally Enhanced Mannequins, (3) Virtual Workbench, or "part-task", simulators, and (4) Total Immersion Virtual Reality (TIVR). Many medical simulation-training systems have shown great potential to improve medical training, but the potential shown has been based largely on anecdotal feedback from informal user studies. Formal assessment is needed to determine the degree to which simulators train medical skills and the degree to which skills learned on a simulator transfer to the practice of care. A robust methodology is required as a basis for these assessments. Several scientific workshops sponsored in 2001 focused on algorithm and metrics development in support of surgical simulation. Also in 2001, TATRC chartered a Simulation Working Group (SWG) to develop a robust methodology upon which to base an assessment of the effectiveness of simulation training devices and systems. After the terrorist attacks of September 11, 2001, attention was redirected for a period, and progress was delayed. In the summer of 2002, TATRC chartered a follow-on group called the Validation, Metrics and Simulation (VMAS) Committee. The poster will highlight and summarize the development of the methodology and identify validation studies to be conducted (supported by various funding sources and research programs). The interaction between TATRC and the National Capital

  1. Validated Analytical Model of a Pressure Compensation Drip Irrigation Emitter

    NASA Astrophysics Data System (ADS)

    Shamshery, Pulkit; Wang, Ruo-Qian; Taylor, Katherine; Tran, Davis; Winter, Amos

    2015-11-01

    This work is focused on analytically characterizing the behavior of pressure-compensating drip emitters in order to design low-cost, low-power irrigation solutions appropriate for off-grid communities in developing countries. There are 2.5 billion small-acreage farmers worldwide who rely solely on their land for sustenance. Compared to flood irrigation, drip irrigation leads to up to a 70% reduction in water consumption while increasing yields by 90%, which is important in countries like India that are quickly running out of water. To design a low-power drip system, there is a need to decrease the pumping pressure requirement at the emitters, as pumping power is the product of pressure and flow rate. To efficiently design such an emitter, the fluid-structure interactions that occur in an emitter need to be understood. In this study, a 2D analytical model that captures the behavior of a common drip emitter was developed and validated through experiments. The effects of independently changing the channel depth, channel width, channel length and land height on the performance were studied. The model and the key parametric insights presented have the potential to be optimized in order to guide the design of low-pressure, clog-resistant, pressure-compensating emitters.

  2. Validation of a New Rainbow Model Over the Hawaiian Islands

    NASA Astrophysics Data System (ADS)

    Ricard, J. L.; Adams, P. L.; Barckike, J.

    2012-12-01

    A new realistic model of the rainbow has been developed at the CNRM. It is based on the Airy theory. The main input parameters are the droplet size distribution, the angle of the sun above the horizon, the temperature of the droplets and the wavelength. The island of Hawaii seems to be a perfect place for the validation of the rainbow model, not only because of its famous rainbows, but also because of the convenient ring road along the coast. The older, lower islands offer more frequent viewing opportunities, because clear sky is often found in close proximity to heavy rainfall. Both Oahu and Kauai, as well as the western part of Maui, have coastal roads that offer good access to rainbows. The best time to view rainbows is when the sun angle is lowest, in other words near the winter solstice. Figure 1: Map of mean annual rainfall for the islands of Kauai and Oahu, developed from the new 2011 Rainfall Atlas of Hawaii; the base period of the statistics is 1978-2007. Figure 2: Moisture zone map by Gon et al. (1998); blue areas are wet, green areas are mesic, yellow areas are dry.

  3. Bidirectional reflectance function in coastal waters: modeling and validation

    NASA Astrophysics Data System (ADS)

    Gilerson, Alex; Hlaing, Soe; Harmel, Tristan; Tonizzo, Alberto; Arnone, Robert; Weidemann, Alan; Ahmed, Samir

    2011-11-01

    The current operational algorithm for the correction of bidirectional effects from the satellite ocean color data is optimized for typical oceanic waters. However, versions of bidirectional reflectance correction algorithms, specifically tuned for typical coastal waters and other case 2 conditions, are particularly needed to improve the overall quality of those data. In order to analyze the bidirectional reflectance distribution function (BRDF) of case 2 waters, a dataset of typical remote sensing reflectances was generated through radiative transfer simulations for a large range of viewing and illumination geometries. Based on this simulated dataset, a case 2 water focused remote sensing reflectance model is proposed to correct above-water and satellite water leaving radiance data for bidirectional effects. The proposed model is first validated with a one year time series of in situ above-water measurements acquired by collocated multi- and hyperspectral radiometers which have different viewing geometries installed at the Long Island Sound Coastal Observatory (LISCO). Match-ups and intercomparisons performed on these concurrent measurements show that the proposed algorithm outperforms the algorithm currently in use at all wavelengths.

  4. Experimental validation of 2D profile photoresist shrinkage model

    NASA Astrophysics Data System (ADS)

    Bunday, Benjamin; Cordes, Aaron; Self, Andy; Ferry, Lorena; Danilevsky, Alex

    2011-03-01

    electron beam directly impacting the profile and backscattered electrons from the electron beam impacting the surrounding substrate. This dose from backscattering was shown to be an important component in the resist shrinkage process, such that at lower beam energies, it dominates linewidth shrinkage. In this work, results from a previous paper will be further explored with numerically simulated results and compared to experimental results to validate the model. With these findings, we can demonstrate the state of readiness of these models for predicting the shrinkage characteristics of photoresist measurements and estimating the errors in calculating the original CD from the shrinkage trend.

  5. Validation of model based active control of combustion instability

    SciTech Connect

    Fleifil, M.; Ghoneim, Z.; Ghoniem, A.F.

    1998-07-01

    The demand for efficient, compact and clean combustion systems has spurred research into the fundamental mechanisms governing their performance and means of interactively changing their performance characteristics. Thermoacoustic instability is frequently observed in combustion systems with high power density, when burning close to the lean flammability limit, or when using exhaust gas recirculation to meet more stringent emissions regulations. Its occurrence, and the passive means used to mitigate it, lead to performance degradation such as reduced combustion efficiency, high local heat transfer rates, an increase in the mixture equivalence ratio, or system failure due to structural damage. This paper reports on a study of the origin of thermoacoustic instability, its dependence on system parameters and the means of actively controlling it. The authors have developed an analytical model of thermoacoustic instability in premixed combustors. The model combines a heat release dynamics model, constructed using the kinematics of a premixed flame stabilized behind a perforated plate, with the linearized conservation equations governing the system acoustics. This formulation allows model-based controller design. In order to test the performance of the analytical model, a numerical solution of the partial differential equations governing the system has been carried out using the principle of harmonic separation and focusing on the dominant unstable mode. This leads to a system of ODEs governing the thermofluid variables. Analytical predictions of the frequency and growth rate of the unstable mode are shown to be in good agreement with the numerical simulations as well as with those obtained using experimental identification techniques when applied to a laboratory combustor. The authors use these results to confirm the validity of the assumptions used in formulating the analytical model. A controller based on the minimization of a cost function using the LQR technique has
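
    A minimal Python sketch of the LQR design step mentioned at the end of the abstract: stabilizing a single unstable thermoacoustic mode, written as a lightly anti-damped oscillator, by minimizing a quadratic cost. The mode frequency, growth rate, actuation path and weighting matrices are illustrative assumptions, not the parameters of the combustor model in the paper.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    omega = 2 * np.pi * 200.0          # unstable mode frequency [rad/s]
    sigma = 30.0                       # growth rate [1/s] (positive => unstable)

    A = np.array([[0.0, 1.0],
                  [-omega**2, 2.0 * sigma]])     # oscillator with negative damping
    B = np.array([[0.0], [1.0]])                 # e.g. loudspeaker / fuel modulation
    Q = np.diag([omega**2, 1.0])                 # weight on acoustic energy
    R = np.array([[1e-3]])                       # weight on control effort

    P = solve_continuous_are(A, B, Q, R)         # solve the algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)              # optimal state-feedback gain
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))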

  6. Transient PVT measurements and model predictions for vessel heat transfer. Part II.

    SciTech Connect

    Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.; Evans, Gregory Herbert; Rice, Steven F.

    2010-07-01

    Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.

  7. Computing Models of CDF and D0 in Run II

    SciTech Connect

    Lammel, S.

    1997-05-01

    The next collider run of the Fermilab Tevatron, Run II, is scheduled for autumn of 1999. Both experiments, the Collider Detector at Fermilab (CDF) and the D0 experiment are being modified to cope with the higher luminosity and shorter bunch spacing of the Tevatron. New detector components, higher event complexity, and an increased data volume require changes from the data acquisition systems up to the analysis systems. In this paper we present a summary of the computing models of the two experiments for Run II.

  8. Validation of Community Models: 2. Development of a Baseline, Using the Wang-Sheeley-Arge Model

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    This paper is the second in a series providing independent validation of community models of the outer corona and inner heliosphere. Here I present a comprehensive validation of the Wang-Sheeley-Arge (WSA) model. These results will serve as a baseline against which to compare the next generation of comparable forecasting models. The WSA model is used by a number of agencies to predict solar wind conditions at Earth up to 4 days into the future. Given its importance to both the research and forecasting communities, it is essential that its performance be measured systematically and independently. I offer just such an independent and systematic validation. I report skill scores for the model's predictions of wind speed and interplanetary magnetic field (IMF) polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests synoptic line-of-sight magnetograms. For this study I generated model results for monthly magnetograms from multiple observatories, spanning the Carrington rotation range from 1650 to 2074. I compare the influence of the different magnetogram sources and performance at quiet and active times. I also consider the ability of the WSA model to forecast both sharp transitions in wind speed from slow to fast wind and reversals in the polarity of the radial component of the IMF. These results will serve as a baseline against which to compare future versions of the model as well as the current and future generation of magnetohydrodynamic models under development for forecasting use.
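
    A common way to express the kind of skill score reported here is relative mean-square error against a reference forecast; the Python sketch below uses the observed mean as the reference and invented numbers, and is not the scoring used in the paper.

        import numpy as np

        observed  = np.array([380., 420., 520., 610., 450., 400., 390., 560.])  # km/s (hypothetical)
        predicted = np.array([400., 430., 480., 580., 500., 410., 380., 520.])  # km/s (hypothetical)

        mse_model = np.mean((predicted - observed) ** 2)
        mse_ref   = np.mean((observed.mean() - observed) ** 2)   # "climatology" reference

        skill = 1.0 - mse_model / mse_ref                        # 1 = perfect, 0 = no better than reference
        print(f"MSE(model) = {mse_model:.1f}, MSE(ref) = {mse_ref:.1f}, skill = {skill:.2f}")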

  9. Alaska North Slope Tundra Travel Model and Validation Study

    SciTech Connect

    Harry R. Bader; Jacynthe Guimond

    2006-03-01

    of variability in snow depth cover throughout the period of field experimentation. The amount of change in disturbance indicators was greater in the tundra communities of the Foothills than in those of the Coastal Plain. However, the overall level of change in both community types was less than expected. In Coastal Plain communities, ground hardness and snow slab thickness were found to play an important role in change in active layer depth and soil moisture as a result of treatment. In the Foothills communities, snow cover had the most influence on active layer depth and soil moisture as a result of treatment. Once certain minimum thresholds for ground hardness, snow slab thickness, and snow depth were attained, it appeared that little or no additive effect was realized regarding increased resistance to disturbance in the tundra communities studied. DNR used the results of this modeling project to set a standard for maximum permissible disturbance of cross-country tundra travel, with the threshold set below the widely accepted standard of Low Disturbance levels (as determined by the U.S. Fish and Wildlife Service). DNR followed the modeling project with a validation study, which seemed to support the field trial conclusions and indicated that the standard set for maximum permissible disturbance exhibits a conservative bias in favor of environmental protection. Finally, DNR established a quick and efficient tool for visual estimations of disturbance to determine when investment in field measurements is warranted. This Visual Assessment System (VAS) seemed to support the plot disturbance measurements taken during the modeling and validation phases of this project.

  10. Parental modelling of eating behaviours: observational validation of the Parental Modelling of Eating Behaviours scale (PARM).

    PubMed

    Palfreyman, Zoe; Haycraft, Emma; Meyer, Caroline

    2015-03-01

    Parents are important role models for their children's eating behaviours. This study aimed to further validate the recently developed Parental Modelling of Eating Behaviours Scale (PARM) by examining the relationships between maternal self-reports on the PARM with the modelling practices exhibited by these mothers during three family mealtime observations. Relationships between observed maternal modelling and maternal reports of children's eating behaviours were also explored. Seventeen mothers with children aged between 2 and 6 years were video recorded at home on three separate occasions whilst eating a meal with their child. Mothers also completed the PARM, the Children's Eating Behaviour Questionnaire and provided demographic information about themselves and their child. Findings provided validation for all three PARM subscales, which were positively associated with their observed counterparts on the observational coding scheme (PARM-O). The results also indicate that habituation to observations did not change the feeding behaviours displayed by mothers. In addition, observed maternal modelling was significantly related to children's food responsiveness (i.e., their interest in and desire for foods), enjoyment of food, and food fussiness. This study makes three important contributions to the literature. It provides construct validation for the PARM measure and provides further observational support for maternal modelling being related to lower levels of food fussiness and higher levels of food enjoyment in their children. These findings also suggest that maternal feeding behaviours remain consistent across repeated observations of family mealtimes, providing validation for previous research which has used single observations. PMID:25111293
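
    The validation described above amounts to correlating each self-reported PARM subscale with its observed PARM-O counterpart across mothers; the Python sketch below shows that step with invented scores and a hypothetical subscale name.

        import numpy as np
        from scipy.stats import spearmanr

        parm_subscale  = np.array([3.2, 4.1, 2.5, 3.8, 4.5, 2.9, 3.3, 4.0])  # self-report scores (invented)
        parmo_observed = np.array([5,   8,   3,   6,   9,   4,   5,   7  ])  # observed modelling counts (invented)

        rho, p_value = spearmanr(parm_subscale, parmo_observed)
        print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")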

  11. Validation of population-based disease simulation models: a review of concepts and methods

    PubMed Central

    2010-01-01

    Background Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results Evidence of model credibility derives from examining: 1) the process of model development, 2) the performance of a model, and 3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility. PMID:21087466

  12. Validation of numerical ground water models used to guide decision making.

    PubMed

    Hassan, Ahmed E

    2004-01-01

    Many sites of ground water contamination rely heavily on complex numerical models of flow and transport to develop closure plans. This complexity has created a need for tools and approaches that can build confidence in model predictions and provide evidence that these predictions are sufficient for decision making. Confidence building is a long-term, iterative process and the author believes that this process should be termed model validation. Model validation is a process, not an end result. That is, the process of model validation cannot ensure acceptable prediction or quality of the model. Rather, it provides an important safeguard against faulty models or inadequately developed and tested models. If model results become the basis for decision making, then the validation process provides evidence that the model is valid for making decisions (not necessarily a true representation of reality). Validation, verification, and confirmation are concepts associated with ground water numerical models that not only do not represent established and generally accepted practices, but there is not even widespread agreement on the meaning of the terms as applied to models. This paper presents a review of model validation studies that pertain to ground water flow and transport modeling. Definitions, literature debates, previously proposed validation strategies, and conferences and symposia that focused on subsurface model validation are reviewed and discussed. The review is general and focuses on site-specific, predictive ground water models used for making decisions regarding remediation activities and site closure. The aim is to provide a reasonable starting point for hydrogeologists facing model validation for ground water systems, thus saving a significant amount of time, effort, and cost. This review is also aimed at reviving the issue of model validation in the hydrogeologic community and stimulating the thinking of researchers and practitioners to develop practical and

  13. Active control strategy on a catenary-pantograph validated model

    NASA Astrophysics Data System (ADS)

    Sanchez-Rebollo, C.; Jimenez-Octavio, J. R.; Carnicero, A.

    2013-04-01

    Dynamic simulation methods have become essential in the design process and control of the catenary-pantograph system, especially as high-speed trains and interoperability criteria become increasingly demanding. This paper presents an original hardware-in-the-loop (HIL) strategy aimed at integrating a multicriteria active control within the catenary-pantograph dynamic interaction. The relevance of HIL control systems applied to the pantograph is undoubtedly increasing due to the recent and more demanding requirements for high-speed railway systems. Since loss of contact between the catenary and the pantograph leads to arcing and electrical wear, and excessive contact forces cause mechanical wear of both the catenary wires and the strips of the pantograph, not only prescribed standards but also economic and performance criteria underline this relevance. Different configurations of the proportional-integral-derivative (PID) controller are proposed and applied to two different plant systems. Since this paper is mainly focused on the control strategy, both plant systems are simulation models, though the methodology is suitable for a laboratory bench. The control strategy involves a multicriteria optimisation of the contact force and of the energy consumed by the control force; a genetic algorithm has been applied for this purpose. Thus, the PID controller is tuned according to these conflicting objectives and tested within a nonlinear lumped model and a nonlinear finite element model, the latter validated against the European Standard EN 50318. Finally, certain tests have been carried out in order to analyse the robustness of the control strategy. In particular, the relevance of the plant simulation, the running speed and the instrumentation time delay are studied in this paper.
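
    The multicriteria idea, trading contact-force regulation against control effort, can be sketched in a few lines of Python; the first-order plant, the periodic stiffness variation standing in for the catenary, and all gains and weights below are assumptions, not the paper's HIL models.

        import numpy as np

        def run_pid(kp, ki, kd, dt=1e-3, t_end=5.0):
            n = int(t_end / dt)
            t = np.arange(n) * dt
            f_ref = 120.0                                          # target contact force [N]
            stiffness = 1.0 + 0.3 * np.sin(2 * np.pi * 1.5 * t)    # crude catenary stiffness variation
            f, u = np.zeros(n), np.zeros(n)
            integ, prev_err = 0.0, 0.0
            for k in range(1, n):
                err = f_ref - f[k - 1]
                integ += err * dt
                deriv = (err - prev_err) / dt
                prev_err = err
                u[k] = kp * err + ki * integ + kd * deriv          # PID control force
                f[k] = f[k - 1] + dt * (-f[k - 1] + stiffness[k] * u[k]) / 0.05  # first-order plant
            return f, u, f_ref

        def cost(f, u, f_ref, w_force=1.0, w_energy=1e-4):
            # multicriteria objective: contact-force error vs. control effort
            return w_force * np.mean((f - f_ref) ** 2) + w_energy * np.mean(u ** 2)

        f, u, f_ref = run_pid(kp=2.0, ki=10.0, kd=0.01)
        print(f"multicriteria cost J = {cost(f, u, f_ref):.2f}")

    In the paper the PID gains are selected by a genetic algorithm; in this sketch they would simply be the decision variables that the optimiser evaluates through the cost function above.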

  14. A Unified Sea Ice Thickness Data Set for Model Validation

    NASA Astrophysics Data System (ADS)

    Lindsay, R.; Wensnahan, M.

    2007-12-01

    Can we, as a community, do better at using existing ice thickness measurements to more effectively evaluate the changing nature of the Arctic ice pack and to better evaluate the performance of our models? We think we can if we work together. We are trying to create a unified ice thickness data set by combining observations from various ice thickness measurement systems. It is designed to facilitate the intercomparison of different measurements, the evaluation of the state of the ice pack, and the validation of sea ice models. Datasets that might be included are ice draft estimates from various submarine and moored upward looking sonar instruments, ice thickness estimates from airborne electromagnetic instruments, and satellite altimeter freeboard measurements. Three principles for the proposed data set are: 1) Full documentation of data sources and characteristics, 2) Spatial and temporal averaging to approximately common scales, and 3) Common data formats. We would not mix data types and we would not interpolate to locations or times not represented in the observations. The target spatial and temporal scale for the measurements would be 50 lineal km of ice and/or one month. Point measurements are not so useful in this context. Data from both hemispheres and any body of ocean water would be included. Documentation would include locations, times, measurement methods, processing, snow depth assumptions, averaging distance and time, error characteristics, data provider, and more. The cooperation and collaboration of the various data providers is essential to the success of this project and so far we have had a very gratifying response to our overtures. We would like to hear from any who have not heard from us and who have collected sea ice thickness data at the approximate target scales. With potentially thousands of individual samples, much could be learned about the measurement systems, about the changing state of the ice cover, and about ice model performance and
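
    One way to picture the "common data format" principle is a single record type to which every contributed measurement is reduced; the Python sketch below is illustrative only and does not reproduce the project's actual schema.

        from dataclasses import dataclass

        @dataclass
        class IceThicknessRecord:
            source: str                  # e.g. "submarine ULS", "moored ULS", "airborne EM"
            lat: float                   # degrees North
            lon: float                   # degrees East
            year: int
            month: int                   # monthly aggregate
            mean_thickness_m: float
            std_thickness_m: float
            n_samples: int               # samples entering the ~50-km / monthly aggregate
            snow_depth_assumed_m: float
            provider: str

        rec = IceThicknessRecord("airborne EM", 76.5, -140.0, 2005, 4,
                                 2.31, 0.85, 1800, 0.25, "example institute")
        print(rec)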

  15. Tsunami-HySEA model validation for tsunami current predictions

    NASA Astrophysics Data System (ADS)

    Macías, Jorge; Castro, Manuel J.; González-Vida, José Manuel; Ortega, Sergio

    2016-04-01

    Model ability to compute and predict tsunami flow velocities is of importance in risk assessment and hazard mitigation. Substantial damage can be produced by high-velocity flows, particularly in harbors and bays, even when the wave height is small. Besides, an accurate simulation of tsunami flow velocities and accelerations is fundamental for advancing the study of tsunami sediment transport. These considerations led the National Tsunami Hazard Mitigation Program (NTHMP) to propose a benchmark exercise focused on modeling and simulating tsunami currents. Until recently, few direct measurements of tsunami velocities were available to compare with and validate model results. After Tohoku 2011, many current meter measurements were made, mainly in harbors and channels. In this work we present part of the contribution made by the EDANYA group from the University of Malaga to the NTHMP workshop organized at Portland (USA), 9-10 February 2015. We have selected three out of the five proposed benchmark problems. Two of them consist of real observed data from the Tohoku 2011 event, one at Hilo Harbour (Hawaii) and the other at Tauranga Bay (New Zealand). The third one consists of laboratory experimental data for the inundation of Seaside City in Oregon. Acknowledgements: This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069) and the Spanish Government Research project DAIFLUID (MTM2012-38383-C02-01) and Universidad de Málaga, Campus de Excelencia Andalucía TECH. The GPU and multi-GPU computations were performed at the Unit of Numerical Methods (UNM) of the Research Support Central Services (SCAI) of the University of Malaga.

  16. Predictive data modeling of human type II diabetes related statistics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Kristina L.; Jaenisch, Holger M.; Handley, James W.; Albritton, Nathaniel G.

    2009-04-01

    During the course of routine Type II diabetes treatment of one of the authors, it was decided to derive predictive analytical Data Models of the daily sampled vital statistics, namely weight, blood pressure, and blood sugar, to determine if the covariance among the observed variables could yield a descriptive equation-based model, or better still, a predictive analytical model that could forecast the expected future trend of the variables and possibly reduce the number of finger sticks required to monitor blood sugar levels. The personal history and analysis with resulting models are presented.
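
    A simple stand-in for the kind of predictive model described, not the authors' Data Model, is a one-step-ahead linear predictor fitted to the daily vitals; the Python sketch below uses synthetic data throughout.

        import numpy as np

        rng = np.random.default_rng(0)
        days = 60
        X = np.zeros((days, 3))                       # columns: weight, systolic BP, blood glucose
        X[0] = [82.0, 135.0, 140.0]                   # kg, mmHg, mg/dL (synthetic starting point)
        for t in range(1, days):                      # weakly mean-reverting synthetic series
            X[t] = X[t - 1] + rng.normal(0, [0.2, 2.0, 8.0]) - 0.02 * (X[t - 1] - X[0])

        past, future = X[:-1], X[1:]
        design = np.hstack([past, np.ones((days - 1, 1))])       # add intercept column
        coef, *_ = np.linalg.lstsq(design, future, rcond=None)   # X[t+1] ~ X[t] @ A + b

        forecast = np.append(X[-1], 1.0) @ coef                  # forecast for the next unobserved day
        print("next-day forecast [weight, BP, glucose]:", np.round(forecast, 1))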

  17. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

    SciTech Connect

    Dojcsak, L.; Marriner, J.; /Fermilab

    2010-08-01

    In this study we look at the SALT-II model of Type Ia supernova analysis, which determines the distance moduli based on the known absolute standard candle magnitude of the Type Ia supernovae. We take a look at the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error that is determined from the data. Using the SNANA software package provided for the analysis of Type Ia supernovae, we use a standard Monte Carlo simulation to generate data with known parameters to use as a tool for analyzing the trends in the model based on certain assumptions about the intrinsic error. In order to find the best standard candle model, we try to minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We can estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1. We can use the simulation to estimate the amount of color smearing as indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that we are able to use the RMS distribution in the variables as one method of estimating the correct intrinsic errors needed by the data to obtain the correct results for α and β. We then apply the resultant intrinsic error matrix to the real data and show our results.
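
    The standardization step can be sketched as a least-squares fit of α, β and the magnitude offset that minimizes Hubble-diagram residuals; the Python example below uses synthetic supernovae and a low-redshift Hubble-law distance in place of a full cosmology, so it only illustrates the structure of the fit.

        import numpy as np
        from scipy.optimize import minimize

        C_KM_S, H0 = 299792.458, 70.0                       # km/s, km/s/Mpc
        rng = np.random.default_rng(1)
        n = 200
        z  = rng.uniform(0.02, 0.3, n)
        x1 = rng.normal(0.0, 1.0, n)                        # shape parameter
        c  = rng.normal(0.0, 0.1, n)                        # color parameter
        mu_true = 5.0 * np.log10(C_KM_S * z / H0) + 25.0    # Hubble-law distance modulus
        a_true, b_true, M_true, sig = 0.14, 3.1, -19.3, 0.12
        mB = mu_true + M_true - a_true * x1 + b_true * c + rng.normal(0, sig, n)

        def chi2(params):
            alpha, beta, M = params
            mu_obs = mB - M + alpha * x1 - beta * c         # standardized distance modulus
            return np.sum((mu_obs - mu_true) ** 2 / sig ** 2)

        fit = minimize(chi2, x0=[0.1, 3.0, -19.0])
        alpha, beta, M = fit.x
        print(f"alpha = {alpha:.3f}, beta = {beta:.3f}, M = {M:.3f}, chi2/dof = {fit.fun / (n - 3):.2f}")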

  18. Modeling short wave radiation and ground surface temperature: a validation experiment in the Western Alps

    NASA Astrophysics Data System (ADS)

    Pogliotti, P.; Cremonese, E.; Dallamico, M.; Gruber, S.; Migliavacca, M.; Morra di Cella, U.

    2009-12-01

    Permafrost distribution in high-mountain areas is influenced by topography (micro-climate) and high variability of ground cover conditions. Its monitoring is very difficult due to logistical problems like accessibility, costs, weather conditions and reliability of instrumentation. For these reasons physically-based modeling of surface rock/ground temperatures (GST) is fundamental for the study of mountain permafrost dynamics. With this awareness, a 1D version of the GEOtop model (www.geotop.org) is tested in several high-mountain sites and its accuracy in reproducing GST and incoming shortwave radiation (SWin) is evaluated using independent field measurements. In order to describe the influence of topography, both flat and near-vertical sites with different aspects are considered. Since the validation of SWin is difficult on steep rock faces (due to the lack of direct measurements) and validation of GST is difficult on flat sites (due to the presence of snow), the two parameters are validated as independent experiments: SWin only on flat morphologies, GST only on the steep ones. The main purpose is to investigate the effect of: (i) distance between driving meteo station location and simulation point location, (ii) cloudiness, (iii) simulation point aspect, (iv) winter/summer period. The temporal duration of the model runs varies from 3 years for the SWin experiment to 8 years for the validation of GST. The model parameterization is constant and tuned for a common massive bedrock of crystalline rock like granite. The ground temperature profile is not initialized because rock temperature is measured at only 10 cm depth. A set of 9 performance measures is used for comparing model predictions and observations (including: fractional mean bias (FB), coefficient of residual mass (CMR), mean absolute error (MAE), modelling efficiency (ME), coefficient of determination (R2)). Results are very encouraging. For both experiments the distance (km) between the location of the driving meteo
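
    The performance measures listed can all be computed in a few lines; the Python sketch below uses common definitions (which may differ in detail from the authors') and synthetic observed/predicted temperature series.

        import numpy as np

        obs  = np.array([ 1.2, 0.5, -0.8, -2.1, -1.0, 0.7, 2.3, 3.1])   # degC (synthetic)
        pred = np.array([ 1.0, 0.8, -1.1, -1.8, -0.6, 0.9, 2.0, 3.4])   # degC (synthetic)

        mae = np.mean(np.abs(pred - obs))                                        # mean absolute error
        me  = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)    # modelling efficiency
        r2  = np.corrcoef(obs, pred)[0, 1] ** 2                                  # coefficient of determination
        fb  = 2 * (pred.mean() - obs.mean()) / (pred.mean() + obs.mean())        # fractional bias
        crm = (np.sum(obs) - np.sum(pred)) / np.sum(obs)                         # coefficient of residual mass

        print(f"MAE={mae:.2f}  ME={me:.2f}  R2={r2:.2f}  FB={fb:.2f}  CRM={crm:.2f}")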

  19. Mitigation of turbidity currents in reservoirs with passive retention systems: validation of CFD modeling

    NASA Astrophysics Data System (ADS)

    Ferreira, E.; Alves, E.; Ferreira, R. M. L.

    2012-04-01

    Sediment deposition by continuous turbidity currents may affect eco-environmental river dynamics in natural reservoirs and hinder the maneuverability of bottom discharge gates in dam reservoirs. In recent years, innovative techniques have been proposed to enforce the deposition of turbidity further upstream in the reservoir (and away from the dam), namely, the use of solid and permeable obstacles such as water jet screens, geotextile screens, etc. The main objective of this study is to validate a computational fluid dynamics (CFD) code applied to the simulation of the interaction between a turbidity current and a passive retention system designed to induce sediment deposition. To accomplish the proposed objective, laboratory tests were conducted where a simple obstacle configuration was subjected to the passage of currents with different initial sediment concentrations. The experimental data were used to build benchmark cases to validate the 3D CFD software ANSYS-CFX. Sensitivity tests of mesh design, turbulence models and discretization requirements were performed. The validation consisted of comparing experimental and numerical results, involving instantaneous and time-averaged sediment concentrations and velocities. In general, a good agreement between the numerical and the experimental values is achieved when: i) realistic outlet conditions are specified, ii) channel roughness is properly calibrated, iii) two-equation k-ɛ models are employed, and iv) a fine mesh is employed near the bottom boundary. Acknowledgements: This study was funded by the Portuguese Foundation for Science and Technology through the project PTDC/ECM/099485/2008. The first author thanks Professor Moitinho de Almeida from ICIST for his assistance, as well as all members of the project and of the Fluvial Hydraulics group of CEHIDRO.

  20. Modelling Ar II spectral emission from the ASTRAL helicon plasma

    NASA Astrophysics Data System (ADS)

    Munoz Burgos, Jorge; Boivin, Robert; Loch, Stuart; Kamar, Ola; Ballance, Connor; Pindzola, Mitch

    2008-11-01

    We describe our spectral modeling of Ar II emission from the ASTRAL helicon plasma at Auburn University. Collisional-radiative theory is used to model the emitted spectrum, with account being taken for the density and temperature variation along the line of sight. This study has two main aims: firstly, to test the atomic data used in the model and, secondly, to identify spectral line ratios in the 200 nm - 1000 nm range that could be used as temperature diagnostics. Using the temperature at which Ar II emission starts to be seen, we have been able to test recent ionization and recombination data. Using selected spectral lines we were then able to test the importance of the continuum-coupling effects included in the most recent Ar+ electron-impact excitation data. Selected spectral line ratios have been identified that show a strong temperature variation and have potential as a temperature diagnostic.
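
    Once the collisional-radiative model provides a line-intensity ratio as a function of electron temperature, the diagnostic itself reduces to inverting that curve; the Python sketch below interpolates a measured ratio onto an invented (monotonic) model curve purely for illustration.

        import numpy as np

        te_grid     = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0])          # eV
        model_ratio = np.array([0.15, 0.28, 0.42, 0.55, 0.66, 0.82, 0.93])    # I(line1)/I(line2), invented

        measured_ratio = 0.50
        te_estimate = np.interp(measured_ratio, model_ratio, te_grid)          # requires a monotonic curve
        print(f"inferred electron temperature ~ {te_estimate:.1f} eV")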

  1. Validation of Community Models: Identifying Events in Space Weather Model Timelines

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    I develop and document a set of procedures which test the quality of predictions of solar wind speed and polarity of the interplanetary magnetic field (IMF) made by coupled models of the ambient solar corona and heliosphere. The Wang-Sheeley-Arge (WSA) model is used to illustrate the application of these validation procedures. I present an algorithm which detects transitions of the solar wind from slow to high speed. I also present an algorithm which processes the measured polarity of the outward directed component of the IMF. This removes high-frequency variations to expose the longer-scale changes that reflect IMF sector changes. I apply these algorithms to WSA model predictions made using a small set of photospheric synoptic magnetograms obtained by the Global Oscillation Network Group as input to the model. The results of this preliminary validation of the WSA model (version 1.6) are summarized.
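
    The two detection steps described, flagging slow-to-fast wind transitions and smoothing the IMF polarity, can be sketched as below; the thresholds, window lengths and synthetic data are illustrative choices, not the algorithms of the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        hours = np.arange(27 * 24)                                  # one Carrington rotation, hourly
        speed = 380 + 40 * rng.standard_normal(hours.size)          # km/s background
        speed[200:260] += np.linspace(0, 250, 60)                   # inject a high-speed stream
        speed[260:400] += 250
        polarity = np.sign(np.sin(2 * np.pi * hours / (13.5 * 24))
                           + 0.5 * rng.standard_normal(hours.size)) # noisy two-sector IMF polarity

        def find_transitions(v, window=24, jump=100.0):
            # indices where the speed rises by more than `jump` km/s over `window` hours
            rise = v[window:] - v[:-window]
            return np.where(rise > jump)[0]

        def smooth_polarity(pol, window=25):
            # running-median polarity removes high-frequency sign flips, exposing sector boundaries
            half = window // 2
            padded = np.pad(pol, half, mode="edge")
            return np.array([np.sign(np.median(padded[i:i + window])) for i in range(pol.size)])

        events = find_transitions(speed)
        print("first detected transition near hour:", int(events[0]) if events.size else None)
        print("sector boundary crossings:", int(np.sum(np.abs(np.diff(smooth_polarity(polarity))) > 0)))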

  2. On the verification and validation of detonation models

    NASA Astrophysics Data System (ADS)

    Quirk, James

    2013-06-01

    This talk will consider the verification and validation of detonation models, such as Wescott-Stewart-Davis (Journal of Applied Physics, 2005), from the perspective of the American Institute of Aeronautics and Astronautics policy on numerical accuracy (AIAA J. Vol. 36, No. 1, 1998). A key aspect of the policy is that accepted documentation procedures must be used for journal articles, with the aim of allowing the reported work to be reproduced by the interested reader. With the rise of electronic documents since the policy was formulated, it is now possible to satisfy this mandate in its strictest sense: that is, it is now possible to run a computational verification study directly in a PDF, thereby allowing a technical author to report numerical subtleties that traditionally have been ignored. The motivation for this document-centric approach is discussed elsewhere (Quirk 2003, Adaptive Mesh Refinement Theory and Practice, Springer), leaving the talk to concentrate on specific detonation examples that should be of broad interest to the shock-compression community.

  3. The development and validation of a ductile fracture analysis model

    SciTech Connect

    Kanninen, M.F.; Morrow, T.B.; Grant, T.S.

    1994-05-01

    The ultimate objective of this research is a user-oriented methodology that can be used by gas transmission company engineers for assessing the risk of ductile fracture propagation in the full range of design and operating conditions anticipated for their gas transmission pipeline system. A thoroughly validated procedure is therefore required that encompasses the full range of pipe diameters, operating pressures, pipe steels and rich gas compositions (RGCs) used by North American gas transmission companies. The final result could be incorporated into a user-friendly PC-based code that would allow engineering assessment of safety margins for pipeline design and operation to be determined. Towards this objective, this report describes two specific tasks that were undertaken to advance the model development. Parametric calculations of crack tip pressure vs. wave speed were completed that bound the full range of RGC decompression. The results were used in parametric ductile fracture computations, conducted to produce an interpolating formula for the computation of the upper-bound crack driving force for RGCs, supplementing the formula developed previously for methane. The results of the research completed to date can be used for a new pipeline design, or to calculate the critical pressure for an existing pipeline above which any rupture could lead to a long propagating fracture. For either application, the ductile fracture resistance of candidate or existing line pipe steel can be determined using the procedures given in App. B.

  4. Model Validation of Flow and Dispersion Around a Cube

    SciTech Connect

    Lee, R.L.; Chan, S.T.

    2000-01-20

    This paper compares results for flow over a cube between laboratory experiments and two numerical simulations. One of the simulations is a Reynolds-averaged Navier-Stokes (RANS) calculation, the other a large eddy simulation (LES). Both the structure of the flow and dispersion of a source behind the cube are compared. It was found that both simulations performed well when mean flows are compared. For dispersion, the LES performed better than the RANS simulation in that it was able to capture the effect of vortex shedding and produce a wider dispersion pattern. The plume in the RANS simulation is very similar to instantaneous realizations of the plume in the LES. Near the cube, the results were very similar. This model validation study suggests that the high cost of LES computations may be warranted when detailed time-varying solutions are of high interest. However, the high-fidelity RANS approach is a cost-effective alternative to LES in obtaining time-averaged mean field results.

  5. An independent verification and validation of the Future Theater Level Model conceptual model

    SciTech Connect

    Hartley, D.S. III; Kruse, K.L.; Martellaro, A.J.; Packard, S.L.; Thomas, B. Jr.; Turley, V.K.

    1994-08-01

    This report describes the methodology and results of independent verification and validation performed on a combat model in its design stage. The combat model is the Future Theater Level Model (FTLM), under development by The Joint Staff/J-8. J-8 has undertaken its development to provide an analysis tool that addresses the uncertainties of combat more directly than previous models and yields more rapid study results. The methodology adopted for this verification and validation consisted of document analyses. Included were detailed examinations of the FTLM design documents (at all stages of development), the FTLM Mission Needs Statement, and selected documentation for other theater level combat models. These documents were compared to assess the FTLM as to its design stage, its purpose as an analytical combat model, and its capabilities as specified in the Mission Needs Statement. The conceptual design passed those tests. The recommendations included specific modifications as well as a recommendation for continued development. The methodology is significant because independent verification and validation have not been previously reported as being performed on a combat model in its design stage. The results are significant because The Joint Staff/J-8 will be using the recommendations from this study in determining whether to proceed with development of the model.

  6. Real-time infrared signature model validation for hardware-in-the-loop simulations

    NASA Astrophysics Data System (ADS)

    Sanders, Jeffrey S.; Peters, Trina S.

    1997-07-01

    Techniques and tools for validation of real-time infrared target signature models are presented. The model validation techniques presented in this paper were developed for hardware-in-the-loop (HWIL) simulations at the U.S. Army Missile Command's Research, Development, and Engineering Center. Real-time target model validation is a required deliverable to the customer of a HWIL simulation facility and is a critical part of ensuring the fidelity of a HWIL simulation. There are two levels of real-time target model validation. The first level is comparison of the target model to some baseline or measured data which answers the question 'are the simulation inputs correct?'. The second level of validation is a simulation validation which answers the question 'for a given target model input is the simulation hardware and software generating the correct output?'. This paper deals primarily with the first level of target model validation. IR target signature models have often been validated by subjective visual inspection or by objective, but limited, statistical comparisons. Subjective methods can be very satisfying to the simulation developer but offer little comfort to the simulation customer since subjective methods cannot be documented. Generic statistical methods offer a level of documentation, yet are often not robust enough to fully test the fidelity of an IR signature. Advances in infrared seeker and sensor technology have led to the necessity of system specific target model validation. For any HWIL simulation it must be demonstrated that the sensor responds to the real-time signature model in a manner which is functionally equivalent to the sensor's response to a baseline model. Depending on the application, a baseline method can be measured IR imagery or the output of a validated IR signature prediction code. Tools are described that generate validation data for HWIL simulations at MICOM and example real-time model validations are presented.

  7. Validation of the integration of CFD and SAS4A/SASSYS-1: Analysis of EBR-II shutdown heat removal test 17

    SciTech Connect

    Thomas, J. W.; Fanning, T. H.; Vilim, R.; Briggs, L. L.

    2012-07-01

    Recent analyses have demonstrated the need to model multidimensional phenomena, particularly thermal stratification in outlet plena, during safety analyses of loss-of-flow transients of certain liquid-metal cooled reactor designs. Therefore, Argonne's reactor systems safety code SAS4A/SASSYS-1 is being enhanced by integrating 3D computational fluid dynamics models of the plena. A validation exercise of the new tool is being performed by analyzing the protected loss-of-flow event demonstrated by the EBR-II Shutdown Heat Removal Test 17. In this analysis, the behavior of the coolant in the cold pool is modeled using the CFD code STAR-CCM+, while the remainder of the cooling system and the reactor core are modeled with SAS4A/SASSYS-1. This paper summarizes the code integration strategy and provides the predicted 3D temperature and velocity distributions inside the cold pool during SHRT-17. The results of the coupled analysis should be considered preliminary at this stage, as the exercise pointed to the need to improve the CFD model of the cold pool tank. (authors)

  8. Experiments for calibration and validation of plasticity and failure material modeling: 304L stainless steel.

    SciTech Connect

    Lee, Kenneth L.; Korellis, John S.; McFadden, Sam X.

    2006-01-01

    Experimental data for material plasticity and failure model calibration and validation were obtained from 304L stainless steel. Model calibration data were taken from smooth tension, notched tension, and compression tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path dependent combinations of internal pressure, extension, and torsion.

  9. Comprehensive Modeling of the Apache With CAMRAD II

    NASA Technical Reports Server (NTRS)

    Jones, Henry E.; Kunz, Donald L.

    2001-01-01

    This paper presents a report of a multi-year study of the U.S. Army LONGBOW APACHE (AH-64D) aircraft. The goals of this study were to provide the Apache Project Managers Office (PMO) with a broad spectrum of calibrated comprehensive and CFD models of the AH-64D aircraft. The goal of this paper is to present an overview of the comprehensive model which has been developed. The CAMRAD II computer code was chosen to complete this task. The paper first discusses issues that must be addressed when modeling the Apache using CAMRAD. The work required the acquisition of a database for the aircraft and the development and application of a multidisciplinary computer model. Sample results from various parts of the model are presented. Conclusions with regard to the strengths and weaknesses of simulations based on this model are discussed.

  11. Second-Moment RANS Model Verification and Validation Using the Turbulence Modeling Resource Website (Invited)

    NASA Technical Reports Server (NTRS)

    Eisfeld, Bernhard; Rumsey, Chris; Togiti, Vamshi

    2015-01-01

    The implementation of the SSG/LRR-omega differential Reynolds stress model into the NASA flow solvers CFL3D and FUN3D and the DLR flow solver TAU is verified by studying the grid convergence of the solution of three different test cases from the Turbulence Modeling Resource Website. The model's predictive capabilities are assessed based on four basic and four extended validation cases also provided on this website, involving attached and separated boundary layer flows, effects of streamline curvature and secondary flow. Simulation results are compared against experimental data and predictions by the eddy-viscosity models of Spalart-Allmaras (SA) and Menter's Shear Stress Transport (SST).

  12. Importance of Sea Ice for Validating Global Climate Models

    NASA Technical Reports Server (NTRS)

    Geiger, Cathleen A.

    1997-01-01

    Reproduction of current-day large-scale physical features and processes is a critical test of global climate model performance. Without this benchmark, prognoses of future climate conditions are at best speculation. A fundamental question relevant to this issue is: which processes and observations are both robust and sensitive enough to be used for model validation, and are they also indicators of the problem at hand? In the case of global climate, one of the problems at hand is to distinguish between anthropogenic and naturally occurring climate responses. The polar regions provide an excellent testing ground to examine this problem because few humans make their livelihood there, such that anthropogenic influences in the polar regions usually stem from global redistribution of a source originating elsewhere. Concomitantly, polar regions are one of the few places where responses to climate are non-anthropogenic. Thus, if an anthropogenic effect has reached the polar regions (e.g. the case of upper atmospheric ozone sensitivity to CFCs), it has most likely had an impact globally but is more difficult to sort out from local effects in areas where anthropogenic activity is high. Within this context, sea ice has served as both a monitoring platform and sensitivity parameter of polar climate response since the time of Fridtjof Nansen. Sea ice resides in the polar regions at the air-sea interface such that changes in either the global atmospheric or oceanic circulation set up complex non-linear responses in sea ice which are uniquely determined. Sea ice currently covers a maximum of about 7% of the earth's surface but was completely absent during the Jurassic Period and far more extensive during the various ice ages. It is also geophysically very thin (typically <10 m in the Arctic, <3 m in the Antarctic) compared to the troposphere (roughly 10 km) and deep ocean (roughly 3 to 4 km). Because of these unique conditions, polar researchers regard sea ice as one of the

  13. MATSIM -The Development and Validation of a Numerical Voxel Model based on the MATROSHKA Phantom

    NASA Astrophysics Data System (ADS)

    Beck, Peter; Rollet, Sofia; Berger, Thomas; Bergmann, Robert; Hajek, Michael; Latocha, Marcin; Vana, Norbert; Zechner, Andrea; Reitz, Guenther

    The AIT Austrian Institute of Technology coordinates the project MATSIM (MATROSHKA Simulation) in collaboration with the Vienna University of Technology and the German Aerospace Center. The aim of the project is to develop a voxel-based model of the MATROSHKA anthropomorphic torso used at the International Space Station (ISS) as a foundation to perform Monte Carlo high-energy particle transport simulations for different irradiation conditions. Funded by the Austrian Space Applications Programme (ASAP), MATSIM is a co-investigation with the European Space Agency (ESA) ELIPS project MATROSHKA, an international collaboration of more than 18 research institutes and space agencies from all over the world, under the science and project lead of the German Aerospace Center. The MATROSHKA facility is designed to determine the radiation exposure of an astronaut onboard ISS and especially during an extravehicular activity. The numerical model developed in the frame of MATSIM is validated by reference measurements. In this report we give an overview of the model development and compare photon and neutron irradiations of the detector-equipped phantom torso with Monte Carlo simulations using FLUKA. Exposure to Co-60 photons was realized in the standard irradiation laboratory at Seibersdorf, while investigations with neutrons were performed at the thermal column of the Vienna TRIGA Mark-II reactor. The phantom was loaded with passive thermoluminescence dosimeters. In addition, first results of the calculated dose distribution within the torso are presented for a simulated exposure in low-Earth orbit.

  14. Transfer matrix modeling and experimental validation of cellular porous material with resonant inclusions.

    PubMed

    Doutres, Olivier; Atalla, Noureddine; Osman, Haisam

    2015-06-01

    Porous materials are widely used for improving sound absorption and sound transmission loss of vibrating structures. However, their efficiency is limited to medium and high frequencies of sound. A solution for improving their low frequency behavior while keeping an acceptable thickness is to embed resonant structures such as Helmholtz resonators (HRs). This work investigates the absorption and transmission acoustic performances of a cellular porous material with a two-dimensional periodic arrangement of HR inclusions. A low frequency model of a resonant periodic unit cell based on the parallel transfer matrix method is presented. The model is validated by comparison with impedance tube measurements and simulations based on both the finite element method and a homogenization based model. At the HR resonance frequency (i) the transmission loss is greatly improved and (ii) the sound absorption of the foam can be either decreased or improved depending on the HR tuning frequency and on the thickness and properties of the host foam. Finally, the diffuse field sound absorption and diffuse field sound transmission loss performance of a 2.6 m² resonant cellular material are measured. It is shown that the improvements observed at the Helmholtz resonant frequency on a single cell are confirmed at a larger scale. PMID:26093437

  15. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    NASA Technical Reports Server (NTRS)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15-18 and NOAA-19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfall measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter conditions, especially when the soil is partially frozen and the surface emissivity changes drastically. This fact is verified by observing the discrepancy distribution diagrams where the 183-WSL
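
    The dichotomous (rain / no-rain) scores quoted above come from a 2x2 contingency table of satellite detections against the radar reference; the Python sketch below shows the standard formulas with invented counts.

        hits, false_alarms, misses, correct_negatives = 820, 410, 205, 8600   # invented counts

        pod  = hits / (hits + misses)                           # probability of detection
        far  = false_alarms / (hits + false_alarms)             # false alarm ratio
        pofd = false_alarms / (false_alarms + correct_negatives)
        hk   = pod - pofd                                       # Hanssen-Kuipers discriminant

        print(f"POD = {pod:.2f}, FAR = {far:.2f}, HK = {hk:.2f}")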

  16. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of the country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for the client market portfolio align with the
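
    The rebuilding cost factors quoted in the text turn a distribution over damage grades into a mean damage ratio and a loss estimate; the Python sketch below uses the stated factors with an assumed damage-grade distribution and replacement value.

        cost_factor = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}   # fraction of rebuilding cost (from the text)
        p_grade     = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.15, 5: 0.05}   # P(damage grade | shaking), assumed

        mean_damage_ratio = sum(p_grade[g] * cost_factor[g] for g in cost_factor)
        replacement_value = 250_000.0                                  # currency units, illustrative
        expected_loss = mean_damage_ratio * replacement_value

        print(f"P(no damage) = {1.0 - sum(p_grade.values()):.2f}")
        print(f"mean damage ratio = {mean_damage_ratio:.3f}, expected loss = {expected_loss:,.0f}")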

  17. An integrated model of the TOPAZ-II electromagnetic pump

    SciTech Connect

    El-Genk, M.S.; Paramonov, D.V. (Inst. of Space Nuclear Power Studies)

    1994-11-01

    A detailed model of the electromagnetic pump of the TOPAZ-II space nuclear reactor power system is developed and compared with experimental data. The magnetic field strength in the pump depends not only on the current supplied by the pump thermionic fuel elements in the reactor core but also on the temperature of the coolant, the magnetic coil, and the pump structure. All electric and thermal properties of the coolant, wall material of the pump ducts, and electric leads are taken to be temperature dependent. The model predictions are in good agreement with experimental data.

  18. Distinguishing 'new' from 'old' organic carbon in reclaimed coal mine sites using thermogravimetry: II. Field validation

    SciTech Connect

    Maharaj, S.; Barton, C.D.; Karathanasis, T.A.D.; Rowe, H.D.; Rimmer, S.M.

    2007-04-15

    Thermogravimetry was used under laboratory conditions to differentiate 'new' and 'old' organic carbon (C) by using grass litter, coal, and limestone to represent the different C fractions. Thermogravimetric and derivative thermogravimetry curves showed pyrolysis peaks at distinctively different temperatures, with the peak for litter occurring at 270 to 395 °C, for coal at 415 to 520 °C, and for limestone at 700 to 785 °C. To validate this method in a field setting, we studied four reforested coal mine sites in Kentucky representing a chronosequence since reclamation: 0 and 2 years located at Bent Mountain and 3 and 8 years located at the Starfire mine. A nonmined mature (approximately 80 years old) stand at Robinson Forest, Kentucky, was selected as a reference location. Results indicated a general peak increase in the 270 to 395 °C region with increased time, signifying an increase in the 'new' organic matter (OM) fraction. For the Bent Mountain site, the OM fraction increased from 0.03 to 0.095% between years 0 and 2, whereas the Starfire site showed an increase from 0.095 to 1.47% between years 3 and 8. This equates to a C sequestration rate of 2.92 Mg ha⁻¹ yr⁻¹ for 'new' OM in the upper 10-cm layer during the 8 years of reclamation on eastern Kentucky reclaimed coal mine sites. Results suggest that stable isotopes and elemental data can be used as proxy tools for qualifying soil organic C (SOC) changes over time on the reclaimed coal mine sites but cannot be used to determine the exact SOC accumulation rate. However, results suggested that the thermogravimetric and derivative thermogravimetry methods can be used to quantify SOC accumulation and have the potential to be a more reliable, cost-effective, and rapid means to determine the new organic C fraction in mixed geological material, especially in areas dominated by coal and carbonate materials.

  19. Coupled two-dimensional main-chain torsional potential for protein dynamics II: performance and validation.

    PubMed

    Gao, Ya; Li, Yongxiu; Mou, Lirong; Hu, Wenxin; Zheng, Jun; Zhang, John Z H; Mei, Ye

    2015-03-19

    The accuracy of force fields is of utmost importance in molecular modeling of proteins. Despite successful applications of force fields for about the past 30 years, some inherent flaws lying in force fields, such as biased secondary propensities and fixed atomic charges, have been observed in different aspects of biomolecular research; hence, a correction to current force fields is desirable. Because of the simplified functional form and the limited number of parameters for main chain torsion (MCT) in traditional force fields, it is not easy to propose an exquisite force field that is well-balanced among various conformations. Recently, AMBER-compatible force fields with coupled MCT term have been proposed, which show some improvement over AMBER03 and AMBER99SB force fields. In this work, further calibration of the torsional parameters has been conducted by changing the solvation model in quantum mechanical calculation and minimizing the deviation from the nuclear magnetic resonance experiments for some benchmark model systems and a folded protein. The results show that the revised force fields give excellent agreement with experiments in J coupling, chemical shifts, and secondary structure populations. In addition, the polarization effect is found to be crucial for the systems with ordered secondary structures. PMID:25719206

  20. Predicting Pilot Error in Nextgen: Pilot Performance Modeling and Validation Efforts

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky

    2012-01-01

    We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and the relevance of the modeling and validation efforts to Next Gen technology and procedures.

  1. Modeling and Simulation of Longitudinal Dynamics for LER-HER PEP II Rings

    SciTech Connect

    Rivetta, Claudio; Mastorides, T.; Fox, J.D.; Teytelman, D.; Van Winkle, D.; /SLAC

    2007-03-06

    A time domain modeling and simulation tool for beam-cavity interactions in the LER and HER rings at PEP II is presented. The motivation for this tool is to explore the stability margins and performance limits of the PEP II RF systems at higher currents and upgraded RF configurations. It also serves as a test bed for new control algorithms and can define the ultimate limits of the architecture. The time domain program captures the dynamical behavior of the beam-cavity interaction based on a reduced model. The ring current is represented by macro-bunches. Multiple RF stations in the ring are represented via one or two macro-cavities. Each macro-cavity captures the overall behavior of a complete two- or four-cavity RF station. Station models include nonlinear elements in the klystron and signal processing. This allows modeling the principal longitudinal impedance control loops interacting with the longitudinal beam model. Validation of the simulation tool is in progress by comparing the measured growth rates for both the LER and HER rings with simulation results. The simulated behavior of both machines at high currents is presented, comparing different control strategies and the effect of nonlinear klystrons on the growth rates.
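
    One way the growth-rate comparison mentioned above can be made is to fit an exponential envelope to an unstable oscillation, in measurement or simulation alike; the Python sketch below does this for a synthetic signal via the analytic-signal (Hilbert) envelope.

        import numpy as np
        from scipy.signal import hilbert

        fs = 10_000.0                                    # samples per second
        t = np.arange(0, 0.2, 1.0 / fs)
        true_rate, f_mode = 35.0, 1250.0                 # growth rate [1/s] and mode frequency [Hz], assumed
        x = 1e-3 * np.exp(true_rate * t) * np.sin(2 * np.pi * f_mode * t)

        envelope = np.abs(hilbert(x))                    # instantaneous amplitude
        sel = slice(int(0.02 * fs), int(0.18 * fs))      # avoid end effects of the transform
        rate, log_a0 = np.polyfit(t[sel], np.log(envelope[sel]), 1)

        print(f"fitted growth rate = {rate:.1f} 1/s (true value {true_rate})")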

  2. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    NASA Technical Reports Server (NTRS)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  3. A Bayesian network approach to knowledge integration and representation of farm irrigation: 2. Model validation

    NASA Astrophysics Data System (ADS)

    Robertson, D. E.; Wang, Q. J.; Malano, H.; Etchells, T.

    2009-02-01

    For models to be useful, they need to adequately describe the systems they represent. The probabilistic nature of Bayesian network models has traditionally meant that model validation is difficult. In this paper we present a process to validate Inteca-Farm, a Bayesian network model of farm irrigation that we described in the first paper of this series. We assessed three aspects of the quality of model predictions, namely, bias, accuracy, and skill, for the two variables for which validation data are available directly or indirectly. We also examined model predictions for any systematic errors. The validation results show that the bias and accuracy of the two validated variables are within acceptable tolerances and that systematic errors are minimal. This suggests that Inteca-Farm is a plausible representation of farm irrigation system in the Shepparton Irrigation Region of northern Victoria, Australia.

  4. Virtual Model Validation of Complex Multiscale Systems: Applications to Nonlinear Elastostatics

    SciTech Connect

    Oden, John Tinsley; Prudencio, Ernest E.; Bauman, Paul T.

    2013-01-01

    We propose a virtual statistical validation process as an aid to the design of experiments for the validation of phenomenological models of the behavior of material bodies, with focus on those cases in which knowledge of the fabrication process used to manufacture the body can provide information on the micro-molecular-scale properties underlying macroscale behavior. One example is given by models of elastomeric solids fabricated using polymerization processes. We describe a framework for model validation that involves Bayesian updates of parameters in statistical calibration and validation phases. The process enables the quantification of uncertainty in quantities of interest (QoIs) and the determination of model consistency using tools of statistical information theory. We assert that microscale information drawn from molecular models of the fabrication of the body provides a valuable source of prior information on parameters as well as a means for estimating model bias and designing virtual validation experiments to provide information gain over calibration posteriors.

  5. Early electromagnetic waves from earthquake rupturing: II. validation and numerical experiments

    NASA Astrophysics Data System (ADS)

    Gao, Yongxin; Chen, Xiaofei; Hu, Hengshan; Zhang, Jie

    2013-03-01

    We validate the branch-cut integration (BCI) technique presented in the companion paper. The numerical results show that the early electromagnetic (EM) signal calculated by the BCI method is in good agreement with that in the full waveform calculated by the real-axis integration method. We further find that to calculate the early EM signal, only the integrals along the vertical branch cuts around the k0 and kem branch points are needed, whereas neither the integrals along the vertical branch cuts around the Pf(P), S and Ps branch points nor the residues of the poles are necessary. We conduct numerical experiments to analyse the early EM signal generated by an earthquake in a porous half-space, including its component analysis, sensitivities to the conductivity, viscosity and recording depth, and radiation pattern. The component analysis shows that the total early EM (emTotal) signal is not a single wave but a combination of three kinds of EM waves, namely, the direct emd wave radiated from the source, the reflected emr waves converted at the free surface from the emd wave and from the direct P and S waves, and the critically refracted EM0 waves, which are also converted at the free surface from the emd wave and the direct P and S waves. Three pulses, namely the emd-, P- and S-converted pulses, are identified in the emTotal signal according to their different arrival times. The emd-converted pulse arrives immediately after the occurrence of the earthquake and is a combination of the emd wave and the emr and EM0 waves converted from the emd wave at the free surface. The P-converted (respectively S-converted) pulse has an arrival time approximately equal to the time spent by the P wave (respectively S wave) travelling from the hypocentre to the epicentre, and it is a combination of the emr and EM0 waves converted from the P wave (respectively S wave) at the free surface. The P-converted pulse is usually weaker than the S- and emd-converted pulses in the electrogram and is

  6. A MODEL STUDY OF TRANSVERSE MODE COUPLING INSTABILITY AT NATIONAL SYNCHROTRON LIGHT SOURCE-II (NSLS-II).

    SciTech Connect

    BLEDNYKH, A.; WANG, J.M.

    2005-05-15

    The vertical impedances of the preliminary designs of the National Synchrotron Light Source II (NSLS-II) Mini Gap Undulators (MGU) are calculated by means of the GdfidL code. The Transverse Mode Coupling Instability (TMCI) thresholds corresponding to these impedances are estimated using an analytically solvable model.

  7. Bound on Z{sup '} mass from CDMS II in the dark left-right gauge model II

    SciTech Connect

    Khalil, Shaaban; Lee, Hye-Sung; Ma, Ernest

    2010-03-01

    With the recent possible signal of dark matter from the CDMS II experiment, the Z{sup '} mass of a new version of the dark left-right gauge model (DLRM II) is predicted to be at around a TeV. As such, it has an excellent discovery prognosis at the operating Large Hadron Collider.

  8. Validation of a vertical progression porcine burn model.

    PubMed

    Singer, Adam J; Hirth, Douglas; McClain, Steve A; Crawford, Laurie; Lin, Fubao; Clark, Richard A F

    2011-01-01

    A major potential goal of burn therapy is to limit progression of partial- to full-thickness burns. To better test therapies, the authors developed and validated a vertical progression porcine burn model in which partial-thickness burns treated with an occlusive dressing convert to full-thickness burns that heal with scarring and wound contraction. Forty contact burns were created on the backs and flanks of two young swine using a 150 g aluminum bar preheated to 70°C, 80°C, or 90°C for 20 or 30 seconds. The necrotic epidermis was removed and the burns were covered with a polyurethane occlusive dressing. Burns were photographed at 1, 24, and 48 hours as well as at 7, 14, 21, and 28 days postinjury. Full-thickness biopsies were obtained at 1, 4, 24, and 48 hours as well as at 7 and 28 days. The primary outcomes were presence of deep contracted scars and wound area 28 days after injury. Secondary outcomes were depth of injury, reepithelialization, and depth of scars. Data were compared across burn conditions using analysis of variance and χ² tests. Eight replicate burns were created with the aluminum bar for each of the following temperature/contact-time combinations: 70/20, 70/30, 80/20, 80/30, and 90/20. The percentage of burns healing with contracted scars was 70/20, 0%; 70/30, 25%; 80/20, 50%; 80/30, 75%; and 90/20, 100% (P = .05). Wound areas at 28 days by injury condition were 70/20, 8.1 cm²; 70/30, 7.8 cm²; 80/20, 6.6 cm²; 80/30, 4.9 cm²; and 90/20, 4.8 cm² (P = .007). Depth of injury judged by depth of endothelial damage for the 80/20 and 80/30 burns at 1 hour was 36% and 60% of the dermal thickness, respectively. The depth of injury to the endothelial cells 1 hour after injury was inversely correlated with the degree of scar area (Pearson's correlation r = -.71, P < .001). Exposure of porcine skin to an aluminum bar preheated to 80°C for 20 or 30 seconds results initially in a partial-thickness burn that when treated with an occlusive dressing

  9. Cross-Validation of the Taiwan Version of the Moorehead–Ardelt Quality of Life Questionnaire II with WHOQOL and SF-36

    PubMed Central

    Chang, Chi-Yang; Huang, Chih-Kun; Chang, Yu-Yin; Tai, Chi-Ming; Lin, Jaw-Town

    2009-01-01

    Background Obesity has become a major worldwide public health issue. There is a need for tools to measure patient-reported outcomes. The Moorehead–Ardelt Quality of Life Questionnaire II (MA II) contains six items. The objective of this study was to translate the MA II into Chinese and validate it in patients with morbid obesity. Methods The MA II was translated into Chinese and back-translated into English by two language specialists to create the Taiwan version, which was validated by correlations with two other generic questionnaires of health-related quality of life (HRQOL), the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36) and the World Health Organization Quality of Life (WHOQOL)-BREF Taiwan version. Convergent validity was assessed by a series of Spearman rank correlations. Reliability of the MA II Taiwan version was determined by internal consistency, obtained by Cronbach's alpha coefficient, and test–retest reliability, obtained by the intraclass correlation coefficient. Results One hundred subjects with morbid obesity were enrolled to test the MA II Taiwan version's convergent validity and internal consistency. Test–retest studies (2 weeks apart) were applied to 30 morbidly obese patients. Satisfactory internal consistency was demonstrated by a Cronbach's alpha coefficient of 0.79. Good test–retest reliability was shown by intraclass correlations ranging from 0.73 to 0.91. The total sum of MA II scores was significantly correlated with all four domains of the WHOQOL-BREF and two major components of the SF-36 (all correlations, p < 0.01; range, 0.44–0.64). All six MA II items showed significant correlations with each other (r = 0.34–0.69, p < 0.01), and the total sum of MA II scores was negatively correlated with body mass index (r = −0.31, p < 0.01), indicating a one-dimensional questionnaire of HRQOL. Conclusions The MA II Taiwan version is an obesity-specific questionnaire for QOL evaluation with satisfactory
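
    The internal-consistency figure quoted above comes from the standard Cronbach's alpha formula; the sketch below shows one way such a coefficient might be computed from an item-by-respondent score matrix. The simulated six-item responses are invented for illustration and are not the Taiwan MA II data.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: respondents x items matrix.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of the total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# Simulated data: 100 respondents, 6 items on a 1-5 scale (illustrative only).
rng = np.random.default_rng(1)
trait = rng.normal(3.0, 0.8, (100, 1))                 # latent quality-of-life level
items = np.clip(np.round(trait + rng.normal(0.0, 0.7, (100, 6))), 1, 5)
print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
```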

  10. Reconceptualizing the Learning Transfer Conceptual Framework: Empirical Validation of a New Systemic Model

    ERIC Educational Resources Information Center

    Kontoghiorghes, Constantine

    2004-01-01

    The main purpose of this study was to examine the validity of a new systemic model of learning transfer and thus determine if a more holistic approach to training transfer could better explain the phenomenon. In all, this study confirmed the validity of the new systemic model and suggested that a high performance work system could indeed serve as…

  11. A Validity Agenda for Growth Models: One Size Doesn't Fit All!

    ERIC Educational Resources Information Center

    Patelis, Thanos

    2012-01-01

    This is a keynote presentation given at AERA on developing a validity agenda for growth models in a large scale (e.g., state) setting. The emphasis of this presentation was to indicate that growth models and the validity agenda designed to provide evidence in supporting the claims to be made need to be personalized to meet the local or…

  12. The Validity of the Job Characteristics Model: A Review and Meta-Analysis.

    ERIC Educational Resources Information Center

    Fried, Yitzhak; Ferris, Gerald R.

    1987-01-01

    Assessed the validity of Hackman and Oldham's Job Characteristics Model by conducting a comprehensive review of nearly 200 relevant studies on the model as well as by applying meta-analytic procedures to much of the data. Available correlational results were reasonably valid and support the multidimensionality of job characteristics and their…

  13. Criteria for Validating the Feasibility of the Components of a Model Teacher Education Program.

    ERIC Educational Resources Information Center

    Johnson, Charles E.

    This paper contains the criteria for assessing the technical and sociopsychological validity of the Georgia model for the preparation of elementary school teachers. The criteria, regarded as essential features for the effective operation of the model program, were developed during the planning activities preceding the funding of the project…

  14. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface height map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. The two new techniques implemented in this software tool can be used in all optical model validation processes involving large space optical surfaces.

  15. Large-eddy simulation of flow past urban-like surfaces: A model validation study

    NASA Astrophysics Data System (ADS)

    Cheng, Wai Chi; Porté-Agel, Fernando

    2013-04-01

    Accurate prediction of atmospheric boundary layer (ABL) flow and its interaction with urban surfaces is critical for understanding the transport of momentum and scalars within and above cities. This, in turn, is essential for predicting the local climate and pollutant dispersion patterns in urban areas. Large-eddy simulation (LES) explicitly resolves the large-scale turbulent eddy motions and, therefore, can potentially provide improved understanding and prediction of flows inside and above urban canopies. This study focuses on developing and validating an LES framework to simulate flow past urban-like surfaces. In particular, large-eddy simulations were performed of flow past an infinitely long two-dimensional (2D) building and an array of 3D cubic buildings. An immersed boundary (IB) method was employed to simulate both the 2D and 3D buildings. Four subgrid-scale (SGS) models, including (i) the traditional Smagorinsky model, (ii) the Lagrangian dynamic model, (iii) the Lagrangian scale-dependent dynamic model, and (iv) the modulated gradient model, were evaluated using the 2D building case. The simulated velocity streamlines and the vertical profiles of the mean velocities and variances were compared with experimental results. The modulated gradient model shows the best overall agreement with the experimental results among the four SGS models. In particular, the flow recirculation, the reattachment position and the vertical profiles are accurately reproduced with a grid resolution of Nx × Ny × Nz = 160 × 40 × 160 (nx × nz = 13 × 16 covering the block). After validating the LES framework with the 2D building case, it was further applied to simulate a boundary-layer flow past a 3D building array. A regular aligned building array with seven rows of cubic buildings was simulated. The building spacings in the streamwise and spanwise directions were both equal to the building height. A developed turbulent boundary-layer flow was used as the incoming flow. The results were
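
    For reference, the traditional Smagorinsky closure listed as model (i) relates the subgrid stress to the resolved strain rate through an eddy viscosity; its standard form is recalled below (the dynamic and modulated-gradient variants replace or supplement the fixed coefficient), with Δ the grid scale and C_s the Smagorinsky coefficient.

```latex
% Standard Smagorinsky subgrid-scale closure (model (i) above):
\tau_{ij} - \tfrac{1}{3}\delta_{ij}\tau_{kk} = -2\,\nu_t\,\bar{S}_{ij},
\qquad
\nu_t = (C_s \Delta)^2\,|\bar{S}|,
\qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
\qquad
\bar{S}_{ij} = \tfrac{1}{2}\!\left(\frac{\partial \bar{u}_i}{\partial x_j}
             + \frac{\partial \bar{u}_j}{\partial x_i}\right).
```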

  16. Hysteresis modeling and experimental validation of a magnetorheological damper

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Xu; Chen, Peng; Qian, Li-Jun; Zhu, An-Ding

    2015-04-01

    In this paper, for modeling MR dampers, a normalized phenomenological model is derived from the phenomenological model by incorporating a "normalization" concept, and a restructured model, also incorporating the "normalization" concept, is proposed and realized. To demonstrate this, a multi-islands genetic algorithm (GA) is employed to identify the parameters of the restructured model, the normalized phenomenological model, and the phenomenological model. The research results indicate that, as compared with the phenomenological model and the normalized phenomenological model, (1) the restructured model not only effectively decreases the number of model parameters and reduces the complexity of the model, but also describes the nonlinear hysteretic behavior of MR dampers more accurately, and (2) the normalized phenomenological model improves the model efficiency as compared with the phenomenological model, although not as much as the restructured model.

  17. Validation of SPARC, a suppression pool aerosol capture model

    SciTech Connect

    Owczarski, P.C.; Winegardner, W.K.

    1985-09-01

    A study of the potential for atmospheric release in hypothetical severe core melt accidents in BWRs with suppression pools was recently completed using a prototype of the SPARC code. The process of validating SPARC using an experimental data base is the concern of this paper.

  18. Social Validity of a Positive Behavior Interventions and Support Model

    ERIC Educational Resources Information Center

    Miramontes, Nancy Y.; Marchant, Michelle; Heath, Melissa Allen; Fischer, Lane

    2011-01-01

    As more schools turn to positive behavior interventions and support (PBIS) to address students' academic and behavioral problems, there is an increased need to adequately evaluate these programs for social relevance. The present study used social validation measures to evaluate a statewide PBIS initiative. Active consumers of the program were…

  19. Contributions to the validation of the CJS model for granular materials

    NASA Astrophysics Data System (ADS)

    Elamrani, Khadija

    1992-07-01

    Behavior model validation in the field of geotechnics is addressed, with the objective of showing the advantages and limits of the CJS (Cambou Jafari Sidoroff) behavior model for granular materials. Several levels are addressed: theoretical analysis of the CJS model to establish its consistency and basic capabilities; development (followed by validation against other programs) of a finite element computation code (FINITEL) to integrate this model and prepare it for complex applications; and validation of the resulting code/model structure by comparing its results with experiments on nonhomogeneous problems (shallow foundations).

  20. Community-wide model validation studies for systematic assessment of ionosphere-thermosphere models

    NASA Astrophysics Data System (ADS)

    Shim, Ja Soon; Kuznetsova, Maria; Rastätter, Lutz

    2016-07-01

    As an unbiased agent, the Community Coordinated Modeling Center (CCMC) has been leading community-wide model validation efforts, the GEM, CEDAR and GEM-CEDAR Modeling Challenges, since 2009. The CEDAR ETI (Electrodynamics Thermosphere Ionosphere) Challenge focused on the ability of ionosphere-thermosphere (IT) models to reproduce basic IT system parameters, such as electron and neutral densities, NmF2, hmF2, and Total Electron Content (TEC). Model-data time series comparisons were performed for a set of selected events with different levels of geomagnetic activity (quiet, moderate, storms). The follow-on CEDAR-GEM Challenge aims to quantify geomagnetic storm impacts on the IT system. Ongoing studies include quantifying the storm energy input, such as the increase in auroral precipitation and Joule heating, and quantifying the storm-time variations of neutral density and TEC. In this paper, we will present lessons learned from the Modeling Challenges led by the CCMC.

  1. Cross-validation analysis of bias models in Bayesian multi-model projections of climate

    NASA Astrophysics Data System (ADS)

    Huttunen, J. M. J.; Räisänen, J.; Nissinen, A.; Lipponen, A.; Kolehmainen, V.

    2016-05-01

    Climate change projections are commonly based on multi-model ensembles of climate simulations. In this paper we consider the choice of bias models in Bayesian multi-model predictions. Buser et al. (Clim Res 44(2-3):227-241, 2010a) introduced a hybrid bias model which combines the commonly used constant bias and constant relation bias assumptions. The hybrid model includes a weighting parameter which balances these bias models. In this study, we use a cross-validation approach to study which bias model or bias parameter leads to, in a specific sense, optimal climate change projections. The analysis is carried out for summer and winter season means of 2-m temperatures spatially averaged over the IPCC SREX regions, using 19 model runs from the CMIP5 data set. The cross-validation approach is applied to calculate optimal bias parameters (in the specific sense) for projecting the temperature change from the control period (1961-2005) to the scenario period (2046-2090). The results are compared to the results of the Buser et al. (Clim Res 44(2-3):227-241, 2010a) method, which includes the bias parameter as one of the unknown parameters to be estimated from the data.
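
    The bias assumptions themselves are not reproduced in the abstract; in the usual terminology, a constant bias assumption carries a model's control-period bias unchanged into the scenario period, while a constant relation assumption lets it scale with the simulated change. The sketch below illustrates a leave-one-model-out cross-validation of a constant-bias correction on synthetic ensemble data; it is not the Buser et al. hybrid formulation, and the numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ensemble: control- and scenario-period mean temperatures for
# M "models" (illustrative values only).
M = 19
ctrl = 14.0 + rng.normal(0.0, 1.0, M)            # control-period means [deg C]
scen = ctrl + 2.5 + rng.normal(0.0, 0.5, M)      # scenario-period means [deg C]

errors = []
for i in range(M):
    obs_ctrl, obs_scen = ctrl[i], scen[i]        # model i plays the role of observations
    ens_ctrl = np.delete(ctrl, i)
    ens_scen = np.delete(scen, i)
    # Constant-bias assumption: each member's control bias persists, so the
    # corrected scenario projection is ens_scen - (ens_ctrl - obs_ctrl).
    projected = np.mean(ens_scen - (ens_ctrl - obs_ctrl))
    errors.append(projected - obs_scen)

print("cross-validated RMSE of the projected scenario mean [deg C]:",
      round(float(np.sqrt(np.mean(np.square(errors)))), 3))
```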

  2. Examining the Reliability and Validity of Clinician Ratings on the Five-Factor Model Score Sheet

    PubMed Central

    Few, Lauren R.; Miller, Joshua D.; Morse, Jennifer Q.; Yaggi, Kirsten E.; Reynolds, Sarah K.; Pilkonis, Paul A.

    2013-01-01

    Despite substantial research use, measures of the five-factor model (FFM) are infrequently used in clinical settings due, in part, to issues related to administration time and a reluctance to use self-report instruments. The current study examines the reliability and validity of the Five-Factor Model Score Sheet (FFMSS), which is a 30-item clinician rating form designed to assess the five domains and 30 facets of one conceptualization of the FFM. In a sample of 130 outpatients, clinical raters demonstrated reasonably good interrater reliability across personality profiles, and the domains manifested good internal consistency with the exception of Neuroticism. The FFMSS ratings also evinced expected relations with self-reported personality traits (e.g., FFMSS Extraversion and Schedule for Nonadaptive and Adaptive Personality Positive Temperament) and consensus-rated personality disorder symptoms (e.g., FFMSS Agreeableness and Narcissistic Personality Disorder). Finally, on average, the FFMSS domains were able to account for approximately 50% of the variance in domains of functioning (e.g., occupational, parental) and were able to account for variance even after controlling for Axis I and Axis II pathology. Given these findings, it is believed that the FFMSS holds promise for clinical use. PMID:20519735

  3. Homology Modeling, Validation and Dynamics of the G Protein-coupled Estrogen Receptor 1 (GPER-1).

    PubMed

    Bruno, Agostino; Aiello, Francesca; Costantino, Gabriele; Radi, Marco

    2016-09-01

    Estrogens exert their action mainly by binding three receptors, namely estrogen receptors α and β (ERα and ERβ) and GPER-1 (G-protein coupled estrogen receptor 1). While the patho-physiological role of both ERα and ERβ has been deeply investigated, the role of GPER-1 in estrogen signaling has not been clearly defined yet. Unfortunately, only a few GPER-1-selective ligands have been discovered so far, and the real efficiency of such compounds is still a matter of debate. To better understand the physiological relevance of GPER-1, new selective chemical probes are highly needed. In this scenario, we report herein the generation and validation of a three-dimensional (3-D) GPER-1 homology model by means of docking studies and molecular dynamics simulations. The model thus generated was employed to (i) decipher the structural basis underlying the ability of estrogens and some Selective Estrogen Receptor Modulators (SERMs) to bind GPER-1 and the classical ERα and ERβ, and (ii) generate a reliable G1/GPER-1 complex useful in rationalizing the pharmacological profile of G1 reported in the literature. The G1/GPER-1 complex herein reported could be further exploited in drug design approaches aimed at improving the pharmacological profile of G1 or at identifying new chemical entities (NCEs) as potential modulators of GPER-1. PMID:27546037

  4. Using Structural Equation Modeling To Test for Differential Reliability and Validity: An Empirical Demonstration.

    ERIC Educational Resources Information Center

    Raines-Eudy, Ruth

    2000-01-01

    Demonstrates empirically a structural equation modeling technique for group comparison of reliability and validity. Data, which are from a study of 495 mothers' attitudes toward pregnancy, have a one-factor measurement model and three sets of subpopulation comparisons. (SLD)

  5. Impact of model development, calibration and validation decisions on hydrological simulations in West Lake Erie Basin

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Watershed simulation models are used extensively to investigate hydrologic processes, landuse and climate change impacts, pollutant load assessments and best management practices (BMPs). Developing, calibrating and validating these models require a number of critical decisions that will influence t...

  6. Testing the validity of the International Atomic Energy Agency (IAEA) safety culture model.

    PubMed

    López de Castro, Borja; Gracia, Francisco J; Peiró, José M; Pietrantoni, Luca; Hernández, Ana

    2013-11-01

    This paper takes the first steps to empirically validate the widely used model of safety culture of the International Atomic Energy Agency (IAEA), composed of five dimensions, further specified by 37 attributes. To do so, three independent and complementary studies are presented. First, 290 students serve to collect evidence about the face validity of the model. Second, 48 experts in organizational behavior judge its content validity. And third, 468 workers in a Spanish nuclear power plant help to reveal how closely the theoretical five-dimensional model can be replicated. Our findings suggest that several attributes of the model may not be related to their corresponding dimensions. According to our results, a one-dimensional structure fits the data better than the five dimensions proposed by the IAEA. Moreover, the IAEA model, as it stands, seems to have rather moderate content validity and low face validity. Practical implications for researchers and practitioners are included. PMID:24076304

  7. An analytical approach to computing biomolecular electrostatic potential. II. Validation and applications.

    PubMed

    Gordon, John C; Fenley, Andrew T; Onufriev, Alexey

    2008-08-21

    An ability to efficiently compute the electrostatic potential produced by molecular charge distributions under realistic solvation conditions is essential for a variety of applications. Here, the simple closed-form analytical approximation to the Poisson equation rigorously derived in Part I for idealized spherical geometry is tested on realistic shapes. The effects of mobile ions are included at the Debye-Hückel level. The accuracy of the resulting closed-form expressions for electrostatic potential is assessed through comparisons with numerical Poisson-Boltzmann (NPB) reference solutions on a test set of 580 representative biomolecular structures under typical conditions of aqueous solvation. For each structure, the deviation from the reference is computed for a large number of test points placed near the dielectric boundary (molecular surface). The accuracy of the approximation, averaged over all test points in each structure, is within 0.6 kcal/mol/|e| ≈ kT per unit charge for all structures in the test set. For 91.5% of the individual test points, the deviation from the NPB potential is within 0.6 kcal/mol/|e|. The deviations from the reference decrease with increasing distance from the dielectric boundary: The approximation is asymptotically exact far away from the source charges. Deviation of the overall shape of a structure from ideal spherical does not, by itself, appear to necessitate decreased accuracy of the approximation. The largest deviations from the NPB reference are found inside very deep and narrow indentations that occur on the dielectric boundaries of some structures. The dimensions of these pockets of locally highly negative curvature are comparable to the size of a water molecule; the applicability of continuum dielectric models in these regions is discussed. The maximum deviations from the NPB are reduced substantially when the boundary is smoothed by using a larger probe radius (3 Å) to generate the

  8. An analytical approach to computing biomolecular electrostatic potential. II. Validation and applications

    NASA Astrophysics Data System (ADS)

    Gordon, John C.; Fenley, Andrew T.; Onufriev, Alexey

    2008-08-01

    An ability to efficiently compute the electrostatic potential produced by molecular charge distributions under realistic solvation conditions is essential for a variety of applications. Here, the simple closed-form analytical approximation to the Poisson equation rigorously derived in Part I for idealized spherical geometry is tested on realistic shapes. The effects of mobile ions are included at the Debye-Hückel level. The accuracy of the resulting closed-form expressions for electrostatic potential is assessed through comparisons with numerical Poisson-Boltzmann (NPB) reference solutions on a test set of 580 representative biomolecular structures under typical conditions of aqueous solvation. For each structure, the deviation from the reference is computed for a large number of test points placed near the dielectric boundary (molecular surface). The accuracy of the approximation, averaged over all test points in each structure, is within 0.6 kcal/mol/|e| ≈ kT per unit charge for all structures in the test set. For 91.5% of the individual test points, the deviation from the NPB potential is within 0.6 kcal/mol/|e|. The deviations from the reference decrease with increasing distance from the dielectric boundary: The approximation is asymptotically exact far away from the source charges. Deviation of the overall shape of a structure from ideal spherical does not, by itself, appear to necessitate decreased accuracy of the approximation. The largest deviations from the NPB reference are found inside very deep and narrow indentations that occur on the dielectric boundaries of some structures. The dimensions of these pockets of locally highly negative curvature are comparable to the size of a water molecule; the applicability of continuum dielectric models in these regions is discussed. The maximum deviations from the NPB are reduced substantially when the boundary is smoothed by using a larger probe radius (3 Å) to generate the molecular surface. A detailed accuracy
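
    For orientation, the Debye-Hückel treatment of mobile ions referred to in both records reduces, for a single point charge in a uniform solvent, to the screened Coulomb potential recalled below; the closed-form expression validated in the paper generalizes this idealized case to realistic molecular shapes.

```latex
% Screened Coulomb (Debye-Hueckel) potential of a point charge q in a solvent of
% relative permittivity eps_w; kappa is the inverse Debye length set by the
% ionic strength I (in mol/m^3) at temperature T.
\phi(r) = \frac{q}{4\pi \varepsilon_0 \varepsilon_w}\,\frac{e^{-\kappa r}}{r},
\qquad
\kappa^2 = \frac{2 N_A e^2 I}{\varepsilon_0 \varepsilon_w k_B T}.
```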

  9. Comprehensive computer model for magnetron sputtering. II. Charged particle transport

    SciTech Connect

    Jimenez, Francisco J.; Dew, Steven K.; Field, David J.

    2014-11-01

    Discharges for magnetron sputter thin film deposition systems involve complex plasmas that are sensitively dependent on magnetic field configuration and strength, working gas species and pressure, chamber geometry, and discharge power. The authors present a numerical formulation for the general solution of these plasmas as a component of a comprehensive simulation capability for planar magnetron sputtering. This is an extensible, fully three-dimensional model supporting realistic magnetic fields and is self-consistently solvable on a desktop computer. The plasma model features a hybrid approach involving a Monte Carlo treatment of energetic electrons and ions, along with a coupled fluid model for thermalized particles. Validation against a well-known one-dimensional system is presented. Various strategies for improving numerical stability are investigated as is the sensitivity of the solution to various model and process parameters. In particular, the effect of magnetic field, argon gas pressure, and discharge power are studied.
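
    The Monte Carlo treatment of energetic electrons mentioned above requires pushing charged particles through the magnetron's electric and magnetic fields; a standard choice for that sub-step is the Boris rotation, sketched below for a single electron in uniform fields. The uniform fields, the field values, and the absence of collision or ionization events are simplifying assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Boris push of one electron in static, uniform E and B fields (illustrative).
q_m = -1.758820e11                     # electron charge-to-mass ratio [C/kg]
E = np.array([0.0, 0.0, -1.0e4])       # electric field [V/m] (assumed)
B = np.array([0.0, 0.02, 0.0])         # magnetic field [T] (assumed)

dt = 1.0e-11                           # time step, small vs. the gyroperiod [s]
x = np.zeros(3)                        # position [m]
v = np.array([1.0e5, 0.0, 0.0])        # velocity [m/s]

for _ in range(10000):
    # Half acceleration by E, rotation about B, second half acceleration by E.
    v_minus = v + 0.5 * q_m * E * dt
    t_vec = 0.5 * q_m * B * dt
    s_vec = 2.0 * t_vec / (1.0 + np.dot(t_vec, t_vec))
    v_prime = v_minus + np.cross(v_minus, t_vec)
    v_plus = v_minus + np.cross(v_prime, s_vec)
    v = v_plus + 0.5 * q_m * E * dt
    x = x + v * dt

print("electron displacement after 0.1 microsecond [m]:", x)
```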

  10. Radiation model for row crops: II. Model evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Relatively few radiation transfer studies have considered the impact of the varying vegetation cover that typifies row crops, and methods to account for partial row crop cover have not been well investigated. Our objective was to evaluate a widely used radiation model that was modified for row crops ha...

  11. Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica

    NASA Astrophysics Data System (ADS)

    Oboh, I.; Aluyor, E.; Audu, T.

    2015-03-01

    The biosorption of Zinc (II) ions onto a biomaterial - Luffa cylindrica has been studied. This biomaterial was characterized by elemental analysis, surface area, pore size distribution, and scanning electron microscopy, and the biomaterial, before and after sorption, was characterized by a Fourier Transform Infra Red (FTIR) spectrometer. The kinetic nonlinear models fitted were pseudo-first order, pseudo-second order and intra-particle diffusion. A comparison of non-linear regression methods for selecting the kinetic model was made. Four error functions, namely the coefficient of determination (R2), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of the errors squared (ERRSQ), were used to predict the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution, particularly in the tropical world, and which occurs as waste material could be put into effective utilization as a biosorbent to address a crucial environmental problem.
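
    As an illustration of the non-linear regression approach described in this record, the sketch below fits the pseudo-first-order and pseudo-second-order kinetic forms to synthetic uptake data and reports R2 and ERRSQ for each. The data points and initial guesses are invented for illustration; they are not the Luffa cylindrica measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Common non-linear kinetic forms for the sorbed amount q(t) [mg/g].
def pseudo_first_order(t, qe, k1):
    return qe * (1.0 - np.exp(-k1 * t))

def pseudo_second_order(t, qe, k2):
    return (qe ** 2 * k2 * t) / (1.0 + qe * k2 * t)

# Synthetic uptake data (t in min, q in mg/g) -- illustrative only.
t = np.array([5, 10, 20, 40, 60, 90, 120, 180], dtype=float)
q = np.array([4.1, 6.8, 9.5, 11.8, 12.9, 13.6, 14.0, 14.3])

def errsq(model, p):
    return float(np.sum((q - model(t, *p)) ** 2))

def r_squared(model, p):
    ss_res = np.sum((q - model(t, *p)) ** 2)
    return float(1.0 - ss_res / np.sum((q - q.mean()) ** 2))

for name, model, p0 in [("pseudo-first order", pseudo_first_order, (15.0, 0.05)),
                        ("pseudo-second order", pseudo_second_order, (15.0, 0.005))]:
    popt, _ = curve_fit(model, t, q, p0=p0)
    print(f"{name}: qe={popt[0]:.2f}, k={popt[1]:.4f}, "
          f"R2={r_squared(model, popt):.4f}, ERRSQ={errsq(model, popt):.3f}")
```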

  12. Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica

    SciTech Connect

    Oboh, I.; Aluyor, E.; Audu, T.

    2015-03-30

    The biosorption of Zinc (II) ions onto a biomaterial - Luffa cylindrica has been studied. This biomaterial was characterized by elemental analysis, surface area, pore size distribution, and scanning electron microscopy, and the biomaterial, before and after sorption, was characterized by a Fourier Transform Infra Red (FTIR) spectrometer. The kinetic nonlinear models fitted were pseudo-first order, pseudo-second order and intra-particle diffusion. A comparison of non-linear regression methods for selecting the kinetic model was made. Four error functions, namely the coefficient of determination (R{sup 2}), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of the errors squared (ERRSQ), were used to predict the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution, particularly in the tropical world, and which occurs as waste material could be put into effective utilization as a biosorbent to address a crucial environmental problem.

  13. Crazy like a fox. Validity and ethics of animal models of human psychiatric disease.

    PubMed

    Rollin, Michael D H; Rollin, Bernard E

    2014-04-01

    Animal models of human disease play a central role in modern biomedical science. Developing animal models for human mental illness presents unique practical and philosophical challenges. In this article we argue that (1) existing animal models of psychiatric disease are not valid, (2) attempts to model syndromes are undermined by current nosology, (3) models of symptoms are rife with circular logic and anthropomorphism, (4) any model must make unjustified assumptions about subjective experience, and (5) any model deemed valid would be inherently unethical, for if an animal adequately models human subjective experience, then there is no morally relevant difference between that animal and a human. PMID:24534739

  14. Comparison of occlusal contact areas of class I and class II molar relationships at finishing using three-dimensional digital models

    PubMed Central

    Lee, Hyejoon; Kim, Minji

    2015-01-01

    Objective This study compared the occlusal contact areas of ideally planned set-up models and accomplished final models against the initial models in class I and class II molar relationships at finishing. Methods Evaluations were performed for 41 post-orthodontic treatment cases, of which 22 were clinically diagnosed as class I and the remainder were diagnosed as full-cusp class II. Class I cases had four first premolars extracted, while class II cases had maxillary first premolars extracted. Occlusal contact areas were measured using a three-dimensional scanner and RapidForm 2004. Independent t-tests were used to validate comparison values between class I and II finishings. Repeated measures analysis of variance was used to compare the initial, set-up, and final models. Results Molars from cases in the class I finishing for the set-up model showed significantly greater contact areas than those from class II finishing (p < 0.05). The final model class I finishing showed significantly larger contact areas for the second molars (p < 0.05). The first molars of the class I finishing for the final model showed a tendency to have larger contact areas than those of class II finishing, although the difference was not statistically significant (p = 0.078). Conclusions In set-up models, posterior occlusal contact was better in class I than in class II finishing. In final models, class I finishing tended to have larger occlusal contact areas than class II finishing. PMID:26023539

  15. Some guidance on preparing validation plans for the DART Full System Models.

    SciTech Connect

    Gray, Genetha Anne; Hough, Patricia Diane; Hills, Richard Guy

    2009-03-01

    Planning is an important part of computational model verification and validation (V&V) and the requisite planning document is vital for effectively executing the plan. The document provides a means of communicating intent to the typically large group of people, from program management to analysts to test engineers, who must work together to complete the validation activities. This report provides guidelines for writing a validation plan. It describes the components of such a plan and includes important references and resources. While the initial target audience is the DART Full System Model teams in the nuclear weapons program, the guidelines are generally applicable to other modeling efforts. Our goal in writing this document is to provide a framework for consistency in validation plans across weapon systems, different types of models, and different scenarios. Specific details contained in any given validation plan will vary according to application requirements and available resources.

  16. A Formal Algorithm for Verifying the Validity of Clustering Results Based on Model Checking

    PubMed Central

    Huang, Shaobin; Cheng, Yuan; Lang, Dapeng; Chi, Ronghua; Liu, Guofeng

    2014-01-01

    The limitations in general methods to evaluate clustering will remain difficult to overcome if verifying the clustering validity continues to be based on clustering results and evaluation index values. This study focuses on a clustering process to analyze crisp clustering validity. First, we define the properties that must be satisfied by valid clustering processes and model clustering processes based on program graphs and transition systems. We then recast the analysis of clustering validity as the problem of verifying whether the model of clustering processes satisfies the specified properties with model checking. That is, we try to build a bridge between clustering and model checking. Experiments on several datasets indicate the effectiveness and suitability of our algorithms. Compared with traditional evaluation indices, our formal method can not only indicate whether the clustering results are valid but, in the case the results are invalid, can also detect the objects that have led to the invalidity. PMID:24608823

  17. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    PubMed

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

    The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities, and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology. PMID:25524862

  18. Modeling and Validation of a Three-Stage Solidification Model for Sprays

    NASA Astrophysics Data System (ADS)

    Tanner, Franz X.; Feigl, Kathleen; Windhab, Erich J.

    2010-09-01

    A three-stage freezing model and its validation are presented. In the first stage, the cooling of the droplet down to the freezing temperature is described as a convective heat transfer process in turbulent flow. In the second stage, when the droplet has reached the freezing temperature, the solidification process is initiated via nucleation and crystal growth. The latent heat release is related to the amount of heat convected away from the droplet and the rate of solidification is expressed with a freezing progress variable. After completion of the solidification process, in stage three, the cooling of the solidified droplet (particle) is described again by a convective heat transfer process until the particle approaches the temperature of the gaseous environment. The model has been validated by experimental data of a single cocoa butter droplet suspended in air. The subsequent spray validations have been performed with data obtained from a cocoa butter melt in an experimental spray tower using the open-source computational fluid dynamics code KIVA-3.
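
    Although the abstract describes the three stages only in words, a minimal lumped-parameter version can be written as a single energy balance with a freezing progress variable, as in the sketch below. The property values are rough, cocoa-butter-like guesses chosen purely for illustration; they are not the paper's data, and the full model also resolves turbulent convection and nucleation kinetics.

```python
import numpy as np

# Lumped sketch of a three-stage droplet freezing history:
#   stage 1: convective cooling of the liquid droplet to the freezing temperature
#   stage 2: solidification at the freezing temperature (latent heat vs. convection)
#   stage 3: convective cooling of the solid particle toward the gas temperature
# All property values are illustrative assumptions.

d, rho, cp = 100e-6, 900.0, 2000.0   # diameter [m], density [kg/m^3], cp [J/(kg K)]
L, h = 150e3, 500.0                  # latent heat [J/kg], heat transfer coeff. [W/(m^2 K)]
T_gas, T_frz = 10.0, 26.0            # gas and freezing temperatures [deg C]

A = np.pi * d ** 2                   # droplet surface area [m^2]
m = rho * np.pi * d ** 3 / 6.0       # droplet mass [kg]

T, f, t, dt = 45.0, 0.0, 0.0, 1e-4   # temperature, frozen fraction, time, step
while t < 2.0:
    q_conv = h * A * (T - T_gas)     # heat convected to the gas [W]
    if T > T_frz and f == 0.0:       # stage 1: liquid cooling
        T -= q_conv / (m * cp) * dt
    elif f < 1.0:                    # stage 2: solidification
        T = T_frz
        f = min(f + q_conv / (m * L) * dt, 1.0)   # freezing progress variable
    else:                            # stage 3: solid cooling
        T -= q_conv / (m * cp) * dt
    t += dt

print(f"after {t:.2f} s: T = {T:.1f} C, frozen fraction = {f:.2f}")
```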

  19. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force--7.

    PubMed

    Eddy, David M; Hollingworth, William; Caro, J Jaime; Tsevat, Joel; McDonald, Kathryn M; Wong, John B

    2012-01-01

    Trust and confidence are critical to the success of health care models. There are two main methods for achieving this: transparency (people can see how the model is built) and validation (how well the model reproduces reality). This report describes recommendations for achieving transparency and validation developed by a taskforce appointed by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making. Recommendations were developed iteratively by the authors. A nontechnical description--including model type, intended applications, funding sources, structure, intended uses, inputs, outputs, other components that determine function, and their relationships, data sources, validation methods, results, and limitations--should be made available to anyone. Technical documentation, written in sufficient detail to enable a reader with necessary expertise to evaluate the model and potentially reproduce it, should be made available openly or under agreements that protect intellectual property, at the discretion of the modelers. Validation involves face validity (wherein experts evaluate model structure, data sources, assumptions, and results), verification or internal validity (check accuracy of coding), cross validity (comparison of results with other models analyzing the same problem), external validity (comparing model results with real-world results), and predictive validity (comparing model results with prospectively observed events). The last two are the strongest form of validation. Each section of this article contains a number of recommendations that were iterated among the authors, as well as among the wider modeling taskforce, jointly set up by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making. PMID:22999134

  20. Principle and validation of modified hysteretic models for magnetorheological dampers

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Xu; Chen, Peng; Qian, Li-Jun

    2015-08-01

    Magnetorheological (MR) dampers, semi-active actuators for vibration and shock control systems, have attracted increasing attention during the past two decades. However, it is difficult to establish a precise mathematical model for the MR dampers and their control systems due to their intrinsic strong nonlinear hysteretic behavior. A phenomenological model based on the Bouc-Wen model can be used to effectively describe the nonlinear hysteretic behavior of the MR dampers, but the structure of the phenomenological model is complex and the Bouc-Wen model is functionally redundant. In this paper, based on the phenomenological model, (1) a normalized phenomenological model is derived through incorporating a ‘normalization’ concept, and (2) a restructured model, also incorporating the ‘normalization’ concept, is proposed and realized. In order to demonstrate this, a multi-islands genetic algorithm (GA) is employed to identify the parameters of the restructured model, the normalized phenomenological model, and the phenomenological model. The performance of the three models for describing and predicting the damping force characteristics of the MR dampers are compared and analyzed using the identified parameters. The research results indicate that, as compared with the phenomenological model and the normalized phenomenological model, (1) the restructured model can not only effectively decrease the number of the model parameters and reduce the complexity of the model, but can also describe the nonlinear hysteretic behavior of MR dampers more accurately, and (2) the meanings of several model parameters of the restructured model are clearer and the initial ranges of the model parameters are more explicit, which is of significance for parameter identification.
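
    As a reference point for the phenomenological family discussed in this record, a minimal Bouc-Wen hysteresis element (the core evolutionary variable only, without the extra spring-dashpot stages or current dependence of the full MR damper models) can be simulated as below. All parameter values are arbitrary illustrations, not identified MR damper parameters.

```python
import numpy as np

# Minimal Bouc-Wen hysteretic force element under sinusoidal displacement.
alpha, c0, k0 = 800.0, 350.0, 50.0               # force scale, damping, stiffness (assumed)
A_bw, beta, gamma, n = 120.0, 300.0, 300.0, 2.0  # Bouc-Wen shape parameters (assumed)

f_exc, amp = 1.0, 0.02                           # excitation frequency [Hz], amplitude [m]
dt = 1e-4
t = np.arange(0.0, 2.0, dt)
x = amp * np.sin(2 * np.pi * f_exc * t)          # imposed displacement
xdot = np.gradient(x, dt)                        # imposed velocity

z = 0.0
force = np.empty_like(t)
for i, v in enumerate(xdot):
    # Bouc-Wen evolution:
    # dz/dt = A*xdot - beta*|xdot|*z*|z|^(n-1) - gamma*xdot*|z|^n
    zdot = A_bw * v - beta * abs(v) * z * abs(z) ** (n - 1) - gamma * v * abs(z) ** n
    z += zdot * dt
    force[i] = c0 * v + k0 * x[i] + alpha * z    # total element force [N]

print("peak force over the cycles [N]:", round(float(force.max()), 1))
```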