Science.gov

Sample records for ii model validation

  1. Filament winding cylinders. II - Validation of the process model

    NASA Technical Reports Server (NTRS)

    Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.

    1990-01-01

    Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured stresses and strains, indicating that the model is a useful representation of the winding and curing processes.

  2. Modeling extracellular electrical stimulation: II. Computational validation and numerical results.

    PubMed

    Tahayori, Bahman; Meffin, Hamish; Dokos, Socrates; Burkitt, Anthony N; Grayden, David B

    2012-12-01

    The validity of approximate equations describing the membrane potential under extracellular electrical stimulation (Meffin et al 2012 J. Neural Eng. 9 065005) is investigated through finite element analysis in this paper. To this end, the finite element method is used to simulate a cylindrical neurite under extracellular stimulation. Laplace's equations with appropriate boundary conditions are solved numerically in three dimensions and the results are compared to the approximate analytic solutions. Simulation results are in agreement with the approximate analytic expressions for longitudinal and transverse modes of stimulation. The range of validity of the equations describing the membrane potential for different values of stimulation and neurite parameters is presented as well. The results indicate that the analytic approach can be used to model extracellular electrical stimulation for realistic physiological parameters with a high level of accuracy.
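
    For readers who want a feel for the numerical step described above, the toy sketch below relaxes Laplace's equation on a small 2D grid with fixed (Dirichlet) boundary values. It is an illustration only: the paper solves the full 3D problem with finite elements around a cylindrical neurite, whereas the grid size, electrode placement and boundary potentials here are invented for the example.

    import numpy as np

    def solve_laplace(phi, fixed, tol=1e-6, max_iter=20000):
        """Jacobi relaxation of Laplace's equation; `fixed` marks Dirichlet boundary nodes."""
        for _ in range(max_iter):
            new = phi.copy()
            new[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                                      + phi[1:-1, :-2] + phi[1:-1, 2:])
            new[fixed] = phi[fixed]                    # re-impose the boundary potentials
            if np.max(np.abs(new - phi)) < tol:        # stop once the update is negligible
                return new
            phi = new
        return phi

    n = 51
    phi = np.zeros((n, n))
    fixed = np.zeros_like(phi, dtype=bool)
    fixed[:, 0] = fixed[:, -1] = fixed[0, :] = fixed[-1, :] = True
    phi[:, 0] = 1.0                                    # a 1 V "electrode" along the left edge (assumed)
    phi = solve_laplace(phi, fixed)
    print(phi[n // 2, :5])                             # potential decaying away from the electrode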

  3. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    SciTech Connect

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more closely than the chosen reference data. In aggregate, the simulations of land-surface latent and sensible
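
    The aggregated root-mean-square error statistic described above can be illustrated with a short sketch. This is not the subproject's code; the array shapes, the random stand-in fields and the cosine-latitude area weighting are assumptions made for the example.

    import numpy as np

    def rmse(simulated, reference, weights=None):
        """Weighted root-mean-square error over all valid (non-NaN) points."""
        diff = simulated - reference
        valid = ~np.isnan(diff)
        if weights is None:
            weights = np.ones_like(diff)
        w = weights[valid]
        return float(np.sqrt(np.sum(w * diff[valid] ** 2) / np.sum(w)))

    rng = np.random.default_rng(0)
    reference = rng.normal(288.0, 10.0, size=(4, 90, 180))              # seasonal-mean reference field (K), synthetic
    simulated = reference + rng.normal(0.0, 1.5, size=reference.shape)  # "model" with ~1.5 K errors
    lat = np.linspace(-89.0, 89.0, 90)
    area_weights = np.broadcast_to(np.cos(np.deg2rad(lat))[None, :, None], reference.shape)
    print(round(rmse(simulated, reference, area_weights), 2), "K")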

  4. Validated Competing Event Model for the Stage I-II Endometrial Cancer Population

    SciTech Connect

    Carmona, Ruben; Gulaya, Sachin; Murphy, James D.; Rose, Brent S.; Wu, John; Noticewala, Sonal; McHale, Michael T.; Yashar, Catheryn M.; Vaida, Florin; Mell, Loren K.

    2014-07-15

    Purpose/Objective(s): Early-stage endometrial cancer patients are at higher risk of noncancer mortality than of cancer mortality. Competing event models incorporating comorbidity could help identify women most likely to benefit from treatment intensification. Methods and Materials: 67,397 women with stage I-II endometrioid adenocarcinoma after total hysterectomy diagnosed from 1988 to 2009 were identified in Surveillance, Epidemiology, and End Results (SEER) and linked SEER-Medicare databases. Using demographic and clinical information, including comorbidity, we sought to develop and validate a risk score to predict the incidence of competing mortality. Results: In the validation cohort, increasing competing mortality risk score was associated with increased risk of noncancer mortality (subdistribution hazard ratio [SDHR], 1.92; 95% confidence interval [CI], 1.60-2.30) and decreased risk of endometrial cancer mortality (SDHR, 0.61; 95% CI, 0.55-0.78). Controlling for other variables, Charlson Comorbidity Index (CCI) = 1 (SDHR, 1.62; 95% CI, 1.45-1.82) and CCI >1 (SDHR, 3.31; 95% CI, 2.74-4.01) were associated with increased risk of noncancer mortality. The 10-year cumulative incidences of competing mortality within low-, medium-, and high-risk strata were 27.3% (95% CI, 25.2%-29.4%), 34.6% (95% CI, 32.5%-36.7%), and 50.3% (95% CI, 48.2%-52.6%), respectively. With increasing competing mortality risk score, we observed a significant decline in omega (ω), indicating a diminishing likelihood of benefit from treatment intensification. Conclusion: Comorbidity and other factors influence the risk of competing mortality among patients with early-stage endometrial cancer. Competing event models could improve our ability to identify patients likely to benefit from treatment intensification.
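
    The 10-year cumulative incidence of competing mortality quoted above is the kind of quantity a nonparametric (Aalen-Johansen-style) estimator produces in the presence of competing events. The sketch below is illustrative only; the event coding and the synthetic follow-up data are assumptions, not the SEER-Medicare analysis.

    import numpy as np

    def cumulative_incidence(time, event, cause, horizon):
        """Aalen-Johansen-style CIF at `horizon`; event: 0 = censored, 1 = cancer death, 2 = other cause."""
        order = np.argsort(time)
        time, event = np.asarray(time)[order], np.asarray(event)[order]
        n_at_risk = len(time)
        surv = 1.0                                     # all-cause survival just before current time
        cif = 0.0
        for t, e in zip(time, event):
            if t > horizon:
                break
            if e == cause:
                cif += surv / n_at_risk                # contribution of this cause-specific event
            if e != 0:
                surv *= 1.0 - 1.0 / n_at_risk          # any death lowers all-cause survival
            n_at_risk -= 1
        return cif

    rng = np.random.default_rng(1)
    follow_up_years = rng.exponential(12.0, size=500)                 # synthetic follow-up times
    event_type = rng.choice([0, 1, 2], size=500, p=[0.3, 0.2, 0.5])   # synthetic outcomes
    print("10-year competing (noncancer) mortality:",
          round(cumulative_incidence(follow_up_years, event_type, cause=2, horizon=10.0), 3))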

  5. Simulation of Potato Late Blight in the Andes. II: Validation of the LATEBLIGHT Model.

    PubMed

    Andrade-Piedra, Jorge L; Hijmans, Robert J; Juárez, Henry S; Forbes, Gregory A; Shtienberg, Dani; Fry, William E

    2005-10-01

    LATEBLIGHT, a mathematical model that simulates the effect of weather, host growth and resistance, and fungicide use on asexual development and growth of Phytophthora infestans on potato foliage, was validated for the Andes of Peru. Validation was needed due to recent modifications made to the model, and because the model had not been formally tested outside of New York State. Prior to validation, procedures to estimate the starting time of the epidemic, the amount of initial inoculum, and leaf wetness duration were developed. Observed data for validation were from field trials with three potato cultivars in the Peruvian locations of Comas and Huancayo in the department of Junín, and Oxapampa in the department of Pasco in 1999 and 2000 for a total of 12 epidemics. These data had not been used previously for estimating model parameters. Observed and simulated epidemics were compared graphically using disease progress curves and numerically using the area under the disease progress curve in a confidence interval test, an equivalence test, and an envelope of acceptance test. The level of agreement between observed and simulated epidemics was high, and the model was found to be valid according to subjective and objective performance criteria. The approach of measuring fitness components of potato cultivars infected with isolates of a certain clonal lineage of P. infestans under controlled conditions and then using the experimental results as parameters of LATEBLIGHT proved to be effective. Fungicide treatments were not considered in this study.
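
    The area under the disease progress curve (AUDPC) used above to compare observed and simulated epidemics is a simple trapezoidal integral of severity over time. The sketch below uses hypothetical severity readings, not the Peruvian trial data.

    import numpy as np

    def audpc(days, severity):
        """Area under the disease progress curve (trapezoidal rule)."""
        return float(np.trapz(np.asarray(severity, float), np.asarray(days, float)))

    days_after_emergence = [20, 30, 40, 50, 60, 70]
    observed_severity = [0.5, 3, 12, 40, 75, 95]      # % foliage affected (hypothetical)
    simulated_severity = [0.2, 2, 15, 45, 80, 97]     # % foliage affected (hypothetical)
    print("observed AUDPC: ", audpc(days_after_emergence, observed_severity))
    print("simulated AUDPC:", audpc(days_after_emergence, simulated_severity))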

  6. Evaluation of Reliability and Validity of the Hendrich II Fall Risk Model in a Chinese Hospital Population

    PubMed Central

    Zhang, Congcong; Wu, Xinjuan; Lin, Songbai; Jia, Zhaoxia; Cao, Jing

    2015-01-01

    To translate, validate and examine the reliability and validity of a Chinese version of the Hendrich II Fall Risk Model (HFRM) in predicting falls in elderly inpatients. A sample of 989 Chinese elderly inpatients was recruited upon admission at the Peking Union Medical College Hospital. The inpatients were assessed for fall risk using the Chinese version of the HFRM at admission. The reliability of the Chinese version of the HFRM was determined using the internal consistency and test-retest methods. Validity was determined using construct validity and convergent validity. Receiver operating characteristic (ROC) curves were created to determine the sensitivity and specificity. The Chinese version of the HFRM showed excellent repeatability with an intra-class correlation coefficient (ICC) of 0.9950 (95% confidence interval (CI): 0.9923–0.9984). The inter-rater reliability was high with an ICC of 0.9950 (95% CI: 0.9923–0.9984). Cronbach’s alpha coefficient was 0.366. Content validity was excellent, with a content validity ratio of 0.9333. The Chinese version of the HFRM had a sensitivity of 72% and a specificity of 69% when using a cut-off of 5 points on the scale. The area under the curve (AUC) was 0.815 (P<0.001). The Chinese version of the HFRM showed good reliability and validity in assessing the risk of falls in Chinese elderly inpatients. PMID:26544961
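
    Sensitivity and specificity at the 5-point cut-off and the ROC area under the curve reported above can be computed as in the following sketch. The scores and fall outcomes are random stand-ins, not the Peking Union Medical College Hospital data.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    fell = rng.integers(0, 2, size=200)                # 1 = inpatient fell during stay (synthetic)
    score = rng.integers(0, 11, size=200) + 3 * fell   # HFRM-like risk score, higher for fallers

    cutoff = 5                                         # cut-off of 5 points, as in the abstract
    high_risk = score >= cutoff
    tp = np.sum(high_risk & (fell == 1))
    fn = np.sum(~high_risk & (fell == 1))
    tn = np.sum(~high_risk & (fell == 0))
    fp = np.sum(high_risk & (fell == 0))

    print("sensitivity:", round(tp / (tp + fn), 2))
    print("specificity:", round(tn / (tn + fp), 2))
    print("AUC:", round(roc_auc_score(fell, score), 3))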

  7. A physical model of the bidirectional reflectance of vegetation canopies. I - Theory. II - Inversion and validation

    NASA Technical Reports Server (NTRS)

    Verstraete, Michel M.; Pinty, Bernard; Dickinson, Robert E.

    1990-01-01

    A new physically based analytical model of the bidirectional reflectance of vegetation canopies is derived. The model expresses the bidirectional reflectance field of a semiinfinite canopy as a combination of functions describing (1) the optical properties of the leaves through their single-scattering albedo and their phase function, (2) the average distribution of leaf orientations, and (3) the architecture of the canopy. The model is validated against laboratory and ground-based measurements in the visible and IR spectral regions, taken over two vegetation covers. The intrinsic optical properties of leaves and the information on the geometrical canopy arrangements in space were obtained using an inversion procedure based on a nonlinear optimization technique. Model predictions of bidirectional reflectances obtained using the inversion procedure compare well with actual observations.

  8. Fluids with competing interactions. II. Validating a free energy model for equilibrium cluster size

    NASA Astrophysics Data System (ADS)

    Bollinger, Jonathan A.; Truskett, Thomas M.

    2016-08-01

    Using computer simulations, we validate a simple free energy model that can be analytically solved to predict the equilibrium size of self-limiting clusters of particles in the fluid state governed by a combination of short-range attractive and long-range repulsive pair potentials. The model is a semi-empirical adaptation and extension of the canonical free energy-based result due to Groenewold and Kegel [J. Phys. Chem. B 105, 11702-11709 (2001)], where we use new computer simulation data to systematically improve the cluster-size scalings with respect to the strengths of the competing interactions driving aggregation. We find that one can adapt a classical nucleation-like theory for small energetically frustrated aggregates provided one appropriately accounts for a size-dependent, microscopic energy penalty of interface formation, which requires new scaling arguments. This framework is verified in part by considering the extensive scaling of intracluster bonding, where we uncover a superlinear scaling regime distinct from (and located between) the known regimes for small and large aggregates. We validate our model based on comparisons against approximately 100 different simulated systems comprising compact spherical aggregates with characteristic (terminal) sizes between six and sixty monomers, which correspond to wide ranges in experimentally controllable parameters.
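
    As an illustration of how an equilibrium (terminal) cluster size emerges from a free energy of this general kind, the sketch below minimizes a toy expression with a bulk attraction, an interfacial penalty and a long-range repulsion term, loosely in the spirit of the Groenewold-Kegel form. The functional form and coefficients are arbitrary choices for the example, not the authors' model.

    import numpy as np

    def toy_free_energy(n, mu=1.0, gamma=1.5, lam=0.03):
        """F(n) = -mu*n + gamma*n^(2/3) + lam*n^(5/3): bulk attraction, interface penalty, repulsion."""
        return -mu * n + gamma * n ** (2.0 / 3.0) + lam * n ** (5.0 / 3.0)

    n = np.arange(1, 500)
    n_star = n[np.argmin(toy_free_energy(n))]          # cluster size minimizing the toy free energy
    print("toy equilibrium cluster size n* =", int(n_star))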

  9. Assessing the wildlife habitat value of New England salt marshes: II. Model testing and validation.

    PubMed

    McKinney, Richard A; Charpentier, Michael A; Wigand, Cathleen

    2009-07-01

    We tested a previously described model to assess the wildlife habitat value of New England salt marshes by comparing modeled habitat values and scores with bird abundance and species richness at sixteen salt marshes in Narragansett Bay, Rhode Island USA. As a group, wildlife habitat value assessment scores for the marshes ranged from 307-509, or 31-67% of the maximum attainable score. We recorded 6 species of wading birds (Ardeidae; herons, egrets, and bitterns) at the sites during biweekly surveys. Species richness (r² = 0.24, F = 4.53, p = 0.05) and abundance (r² = 0.26, F = 5.00, p = 0.04) of wading birds significantly increased with increasing assessment score. We optimized our assessment model for wading birds by using Akaike information criteria (AIC) to compare a series of models comprising specific components and categories of our model that best reflect their habitat use. The model incorporating pre-classification, wading bird habitat categories, and natural land surrounding the sites was substantially supported by AIC analysis as the best model. The abundance of wading birds significantly increased with increasing assessment scores generated with the optimized model (r² = 0.48, F = 12.5, p = 0.003), demonstrating that optimizing models can be helpful in improving the accuracy of the assessment for a given species or species assemblage. In addition to validating the assessment model, our results show that in spite of their urban setting, our study marshes provide substantial wildlife habitat value. This suggests that even small wetlands in highly urbanized coastal settings can provide important wildlife habitat value if key habitat attributes (e.g., natural buffers, habitat heterogeneity) are present.
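
    Model selection by Akaike information criterion (AIC), as used above to choose among candidate habitat models, can be sketched as follows for ordinary least-squares fits (AIC = n ln(RSS/n) + 2k). The predictor sets and data are hypothetical stand-ins for assessment scores and bird abundance, not the Narragansett Bay measurements.

    import numpy as np

    def aic_least_squares(y, X):
        """AIC = n*ln(RSS/n) + 2k for an ordinary least-squares fit (intercept added here)."""
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        n, k = len(y), X.shape[1]
        return n * np.log(rss / n) + 2 * k

    rng = np.random.default_rng(3)
    score = rng.uniform(300.0, 510.0, 16)              # assessment scores for 16 marshes (synthetic)
    natural_land = rng.uniform(0.0, 1.0, 16)           # fraction of natural surrounding land (synthetic)
    abundance = 0.03 * score + 5.0 * natural_land + rng.normal(0.0, 2.0, 16)

    candidates = {
        "score only": score[:, None],
        "natural land only": natural_land[:, None],
        "score + natural land": np.column_stack([score, natural_land]),
    }
    for name, X in candidates.items():                 # lower AIC indicates more support
        print(name, round(aic_least_squares(abundance, X), 1))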

  10. SAGE II aerosol data validation based on retrieved aerosol model size distribution from SAGE II aerosol measurements

    NASA Technical Reports Server (NTRS)

    Wang, Pi-Huan; Mccormick, M. P.; Mcmaster, L. R.; Chu, W. P.; Swissler, T. J.; Osborn, M. T.; Russell, P. B.; Oberbeck, V. R.; Livingston, J.; Rosen, J. M.

    1989-01-01

    Consideration is given to aerosol correlative measurements experiments for the Stratospheric Aerosol and Gas Experiment (SAGE) II, conducted between November 1984 and July 1986. The correlative measurements were taken with an impactor/laser probe, a dustsonde, and an airborne 36-cm lidar system. The primary aerosol quantities measured by the ground-based instruments are compared with those calculated from the aerosol size distributions from SAGE II aerosol extinction measurements. Good agreement is found between the two sets of measurements.

  11. Predictive modeling of infrared radiative heating in tomato dry-peeling process: Part II. Model validation and sensitivity analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...

  12. Person Heterogeneity of the BDI-II-C and Its Effects on Dimensionality and Construct Validity: Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Wu, Pei-Chen; Huang, Tsai-Wei

    2010-01-01

    The purpose of this study was to apply the mixed Rasch model to investigate person heterogeneity of the Beck Depression Inventory-II-Chinese version (BDI-II-C) and its effects on dimensionality and construct validity. Person heterogeneity was reflected by two latent classes that differ qualitatively. Additionally, person heterogeneity adversely affected the…

  13. A new 3D finite element model of the IEC 60318-1 artificial ear: II. Experimental and numerical validation

    NASA Astrophysics Data System (ADS)

    Bravo, Agustín; Barham, Richard; Ruiz, Mariano; López, Juan Manuel; De Arcas, Guillermo; Alonso, Jesus

    2012-12-01

    In part I, the feasibility of using three-dimensional (3D) finite elements (FEs) to model the acoustic behaviour of the IEC 60318-1 artificial ear was studied and the numerical approach compared with classical lumped elements modelling. It was shown that by using a more complex acoustic model that took account of thermo-viscous effects, geometric shapes and dimensions, it was possible to develop a realistic model. This model then had clear advantages in comparison with the models based on equivalent circuits using lumped parameters. In fact, results from FE modelling produce a better understanding of the physical phenomena produced inside ear simulator couplers, facilitating spatial and temporal visualization of the sound fields produced. The objective of this study (part II) is to extend the investigation by validating the numerical calculations against measurements on an ear simulator conforming to IEC 60318-1. For this purpose, an appropriate commercially available device is taken and a complete 3D FE model developed for it. The numerical model is based on key dimensional data obtained with a non-destructive x-ray inspection technique. Measurements of the acoustic transfer impedance have been carried out on the same device at a national measurement institute using the method embodied in IEC 60318-1. Having accounted for the actual device dimensions, the thermo-viscous effects inside narrow slots and holes and environmental conditions, the results of the numerical modelling were found to be in good agreement with the measured values.

  14. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 2 (Appendices I, section 5 and II, section 1)

    SciTech Connect

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquid submodels, and fundamental knowledge to the US kraft pulp industry. Volume 2 contains the last section of Appendix I, Radiative heat transfer in kraft recovery boilers, and the first section of Appendix II, The effect of temperature and residence time on the distribution of carbon, sulfur, and nitrogen between gaseous and condensed phase products from low temperature pyrolysis of kraft black liquor.

  15. Assessing Wildlife Habitat Value of New England Salt Marshes: II. Model Testing and Validation

    EPA Science Inventory

    We test a previously described model to assess the wildlife habitat value of New England salt marshes by comparing modeled habitat values and scores with bird abundance and species richness at sixteen salt marshes in Narragansett Bay, Rhode Island USA. Assessment scores ranged f...

  16. The model SIRANE for atmospheric urban pollutant dispersion; PART II, validation of the model on a real case study

    NASA Astrophysics Data System (ADS)

    Soulhac, L.; Salizzoni, P.; Mejean, P.; Didier, D.; Rios, I.

    2012-03-01

    We analyse the performance of the model SIRANE by comparing its outputs to field data measured within an urban district. SIRANE is the first urban dispersion model based on the concept of a street network, and it contains specific parametrical laws to explicitly simulate the main transfer mechanisms within the urban canopy. The model validation is performed by means of field data collected during a 15-day measurement campaign in an urban district in Lyon, France. The campaign provided information on traffic fluxes and car emissions, meteorological conditions, background pollution levels and pollutant concentrations at different locations within the district. This data set, together with complementary modelling tools needed to estimate the spatial distribution of traffic fluxes, allowed us to estimate the input data required by the model. The data set also provides the information essential to evaluate the accuracy of the model outputs. Comparison between model predictions and field measurements was performed in two ways: by evaluating the reliability of the model in simulating the spatial distribution of the pollutants, and by evaluating their time variability. The study includes a sensitivity analysis to identify the key input parameters influencing the performance of the model, namely the emission rates and the wind velocity. The analysis focuses only on the influence of varying input parameters in the modelling chain on the model predictions, and it complements the analyses provided by wind tunnel studies focussing on the parameterisation implemented in the model. The study also elucidates the critical role of background concentrations, which represent a significant contribution to local pollution levels. The overall model performance, measured using the Chang and Hanna (2004) criteria, can be considered 'good' except for NO and some of the BTX species. The results suggest that improvements of the performance on NO require testing new photochemical models, whereas the improvement on BTX

  17. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models

    EPA Science Inventory

    The second phase of the MicroArray Quality Control (MAQC-II) project evaluated common practices for developing and validating microarray-based models aimed at predicting toxicological and clinical endpoints. Thirty-six teams developed classifiers for 13 endpoints - some easy, som...

  18. TAMDAR Sensor Validation in 2003 AIRS II

    NASA Technical Reports Server (NTRS)

    Daniels, Taumi S.; Murray, John J.; Anderson, Mark V.; Mulally, Daniel J.; Jensen, Kristopher R.; Grainger, Cedric A.; Delene, David J.

    2005-01-01

    This study entails an assessment of TAMDAR in situ temperature, relative humidity and winds sensor data from seven flights of the UND Citation II. These data are undergoing rigorous assessment to determine their viability to significantly augment domestic Meteorological Data Communications Reporting System (MDCRS) and the international Aircraft Meteorological Data Reporting (AMDAR) system observational databases to improve the performance of regional and global numerical weather prediction models. NASA Langley Research Center participated in the Second Alliance Icing Research Study from November 17 to December 17, 2003. TAMDAR data taken during this period are compared with validation data from the UND Citation. The data indicate acceptable performance of the TAMDAR sensor when compared to measurements from the UND Citation research instruments.

  19. Model Validation Status Review

    SciTech Connect

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  20. SOSS ICN Model Validation

    NASA Technical Reports Server (NTRS)

    Zhu, Zhifan

    2016-01-01

    Under the NASA-KAIA-KARI ATM research collaboration agreement, SOSS ICN Model has been developed for Incheon International Airport. This presentation describes the model validation work in the project. The presentation will show the results and analysis of the validation.

  1. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 3 (Appendices II, sections 2--3 and III)

    SciTech Connect

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquid submodels, and fundamental knowledge to the US kraft pulp industry. Volume 3 contains the following appendix sections: Formation and destruction of nitrogen oxides in recovery boilers; Sintering and densification of recovery boiler deposits laboratory data and a rate model; and Experimental data on rates of particulate formation during char bed burning.

  2. Comparison of Analytical Models of Propagation of CMEs and its Validation Using Type II Radio Bursts Observations

    NASA Astrophysics Data System (ADS)

    Perez Alanis, C. A.; Aguilar-Rodriguez, E.; Corona Romero, P.

    2015-12-01

    Coronal mass ejections (CMEs) are large-scale eruptive events arising from the solar corona that are expelled into the interplanetary (IP) medium. CMEs can be associated with interplanetary shocks, which in turn are associated with type II radio-burst emissions. Some CMEs carry a magnetic configuration that can generate geomagnetic storms, the main interest in space weather. It is therefore important to predict the arrival times of CMEs that have the potential to generate a geomagnetic storm. We used a number of hydrodynamic (viscous and inertial) drag force models to approximate the trajectory of a CME, focusing on obtaining proportionality constants that achieve good approximations to CME arrivals. We analyzed a set of fast CMEs by finding the appropriate drag coefficients that simultaneously approximated the in-situ arrivals of the events, their associated type II radio bursts, and satellite observations of these phenomena. Our results suggest that quadratic (inertial) drag is the dynamic agent that prevails in the propagation of fast CMEs. Our studies may contribute to future 'space weather forecasting' at the Earth.
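
    A drag-based propagation model of the general kind referred to above can be sketched as below: the CME speed relaxes toward the ambient solar-wind speed under quadratic drag while the front is advanced to 1 AU. The drag coefficient, solar-wind speed and initial conditions are illustrative values, not the coefficients fitted in the study.

    import numpy as np

    AU_KM = 1.496e8                                    # 1 astronomical unit in km
    R_SUN_KM = 6.96e5

    def propagate_cme(v0, r0=20.0 * R_SUN_KM, w=450.0, gamma=2.0e-8, dt=60.0):
        """Integrate dv/dt = -gamma*(v - w)*|v - w| and dr/dt = v until r reaches 1 AU."""
        r, v, t = r0, v0, 0.0
        while r < AU_KM:
            v += -gamma * (v - w) * abs(v - w) * dt    # quadratic (inertial) drag toward wind speed w
            r += v * dt
            t += dt
        return t / 3600.0, v                           # transit time [h], arrival speed [km/s]

    transit_hours, arrival_speed = propagate_cme(v0=1500.0)   # a fast CME (assumed initial speed)
    print(f"transit time ~{transit_hours:.0f} h, arrival speed ~{arrival_speed:.0f} km/s")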

  3. Circularly-symmetric complex normal ratio distribution for scalar transmissibility functions. Part II: Probabilistic model and validation

    NASA Astrophysics Data System (ADS)

    Yan, Wang-Ji; Ren, Wei-Xin

    2016-12-01

    In Part I of this study, some new theorems, corollaries and lemmas on the circularly-symmetric complex normal ratio distribution have been mathematically proved. This part II paper is dedicated to providing a rigorous treatment of the statistical properties of raw scalar transmissibility functions at an arbitrary frequency line. On the basis of the statistics of raw FFT coefficients and the circularly-symmetric complex normal ratio distribution, explicit closed-form probabilistic models are established for both multivariate and univariate scalar transmissibility functions. Also, remarks on the independence of transmissibility functions at different frequency lines and the shape of the probability density function (PDF) of the univariate case are presented. The statistical structures of the probabilistic models are concise, compact and easily implemented with low computational effort. They hold for general stationary vector processes, either Gaussian stochastic processes or non-Gaussian stochastic processes. The accuracy of the proposed models is verified using a numerical example as well as field test data of a high-rise building and a long-span cable-stayed bridge. This study yields new insights into the qualitative analysis of the uncertainty of scalar transmissibility functions, which paves the way for developing new statistical methodologies for modal analysis, model updating or damage detection using responses only, without input information.

  4. Validation of SAGE II NO2 measurements

    NASA Technical Reports Server (NTRS)

    Cunnold, D. M.; Zawodny, J. M.; Chu, W. P.; Mccormick, M. P.; Pommereau, J. P.; Goutail, F.

    1991-01-01

    The validity of NO2 measurements from the stratospheric aerosol and gas experiment (SAGE) II is examined by comparing the data with climatological distributions of NO2 and by examining the consistency of the observations themselves. The precision at high altitudes is found to be 5 percent, which is also the case at specific low altitudes for certain latitudes where the mixing ratio is 4 ppbv, and the precision is 0.2 ppbv at low altitudes. The autocorrelation distance of the smoothed profile measurement noise is 3-5 km and 10 km for 1-km and 5-km smoothing, respectively. The SAGE II measurements agree with spectroscopic measurements to within 10 percent, and the SAGE measurements are about 20 percent smaller than average limb monitor measurements at the mixing ratio peak. SAGE I and SAGE II measurements are slightly different, but the difference is not attributed to changes in atmospheric NO2.

  5. A colored petri nets based workload evaluation model and its validation through Multi-Attribute Task Battery-II.

    PubMed

    Wang, Peng; Fang, Weining; Guo, Beiyuan

    2017-04-01

    This paper proposes a colored Petri net based workload evaluation model. A formal interpretation of workload is first introduced, based on the mapping of Petri net components onto task elements. A Petri net based description of Multiple Resources theory is then given, approaching the theory from a new angle. A new application of the VACP rating scales, named the V/A-C-P unit, and a definition of colored transitions are proposed to build a model of the task process. The calculation of workload has four main steps: determine the tokens' initial positions and values; calculate the weights of the directed arcs on the basis of the proposed rules; calculate workload from the different transitions; and correct for the influence of repetitive behaviors. Verification experiments were carried out using the Multi-Attribute Task Battery-II software. The results show a strong correlation between the model values and NASA Task Load Index scores (r=0.9513). In addition, the method can also distinguish behavioral characteristics between different people.

  6. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.

  7. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part II: Benchmark comparisons of PUMA core parameters with MCNP5 and improvements due to a simple cell heterogeneity correction

    SciTech Connect

    Grant, C.; Mollerach, R.; Leszczynski, F.; Serra, O.; Marconi, J.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which has been progressing slowly during the last ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design located in Argentina. It has a pressure vessel design with 451 vertical coolant channels and the fuel assemblies (FA) are clusters of 37 natural UO{sub 2} rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons of core parameters of a slightly idealized model of the Atucha-I core obtained with the PUMA reactor code with MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, more symmetric than Atucha-II, and has some experimental data available. To validate the new models benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which improves significantly the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)

  8. Exchange couplings for Mn ions in CdTe: Validity of spin models for dilute magnetic II-VI semiconductors

    NASA Astrophysics Data System (ADS)

    Linneweber, Thorben; Bünemann, Jörg; Löw, Ute; Gebhard, Florian; Anders, Frithjof

    2017-01-01

    We employ density-functional theory (DFT) in the generalized gradient approximation (GGA) and its extensions GGA+U and GGA+Gutzwiller to calculate the magnetic exchange couplings between pairs of Mn ions substituting Cd in a CdTe crystal at very small doping. DFT(GGA) overestimates the exchange couplings by a factor of 3 because it underestimates the charge-transfer gap in Mn-doped II-VI semiconductors. Fixing the nearest-neighbor coupling J1 to its experimental value in GGA+U, in GGA+Gutzwiller, or by a simple scaling of the DFT(GGA) results provides acceptable values for the exchange couplings at second-, third-, and fourth-neighbor distances in Cd(Mn)Te, Zn(Mn)Te, Zn(Mn)Se, and Zn(Mn)S. In particular, we recover the experimentally observed relation J4 > J2, J3. The filling of the Mn 3d shell is not integer, which puts the underlying Heisenberg description into question. However, using a few-ion toy model, the picture of a slightly extended local moment emerges so that an integer 3d-shell filling is not a prerequisite for equidistant magnetization plateaus, as seen in experiment.

  9. Data Assimilation of Photosynthetic Light-use Efficiency using Multi-angular Satellite Data: II Model Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Hilker, Thomas; Hall, Forest G.; Tucker, J.; Coops, Nicholas C.; Black, T. Andrew; Nichol, Caroline J.; Sellers, Piers J.; Barr, Alan; Hollinger, David Y.; Munger, J. W.

    2012-01-01

    Spatially explicit and temporally continuous estimates of photosynthesis will be of great importance for increasing our understanding of and ultimately closing the terrestrial carbon cycle. Current capabilities to model photosynthesis, however, are limited by the lack of sufficiently accurate representations of the complexity of the underlying biochemical processes and of the numerous environmental constraints imposed upon plant primary production. A potentially powerful alternative to model photosynthesis through these indirect observations is the use of multi-angular satellite data to infer light-use efficiency (e) directly from spectral reflectance properties in connection with canopy shadow fractions. Hall et al. (this issue) introduced a new approach for predicting gross ecosystem production that would allow the use of such observations in a data assimilation mode to obtain spatially explicit variations in e from infrequent polar-orbiting satellite observations, while meteorological data are used to account for the more dynamic responses of e to variations in environmental conditions caused by changes in weather and illumination. In this second part of the study we implement and validate the approach of Hall et al. (this issue) across an ecologically diverse array of eight flux-tower sites in North America using data acquired from the Compact High Resolution Imaging Spectroradiometer (CHRIS) and eddy-flux observations. Our results show significantly enhanced estimates of e and therefore cumulative gross ecosystem production (GEP) over the course of one year at all examined sites. We also demonstrate that e is greatly heterogeneous even across small study areas. Data assimilation and direct inference of GEP from space using a new, proposed sensor could therefore be a significant step towards closing the terrestrial carbon cycle.

  10. Development and Validation of Methodology to Model Flow in Ventilation Systems Commonly Found in Nuclear Facilities - Phase II

    SciTech Connect

    Strons, Philip; Bailey, James L.; Davis, John; Grudzinski, James; Hlotke, John

    2016-03-01

    In this report we present the results of the Phase II analysis and testing of the flow patterns encountered in the Alpha Gamma Hot Cell Facility (AGHCF), as well as the results from an opportunity to expand upon field test work from Phase I by the use of a Class IIIb laser. The addition to the Phase I work is covered before proceeding to the results of the Phase II work, followed by a summary of findings.

  11. Characteristic Time Model Validation

    DTIC Science & Technology

    1988-09-01

    combustors (Rizk and Mongia, 1986). The only practical method for ascertaining the validity of these models is to develop a well defined experimental datum ... layer around the recirculation zone found in the primary zone of a gas turbine combustor. Experimental results are used to investigate CTM parameters ...

  12. Validation Studies for the Diet History Questionnaire II

    Cancer.gov

    Data show that the DHQ I instrument provides reasonable nutrient estimates, and three studies were conducted to assess its validity/calibration. There have been no such validation studies with the DHQ II.

  13. Ab initio structural modeling of and experimental validation for Chlamydia trachomatis protein CT296 reveal structural similarity to Fe(II) 2-oxoglutarate-dependent enzymes

    SciTech Connect

    Kemege, Kyle E.; Hickey, John M.; Lovell, Scott; Battaile, Kevin P.; Zhang, Yang; Hefty, P. Scott

    2012-02-13

    Chlamydia trachomatis is a medically important pathogen that encodes a relatively high percentage of proteins with unknown function. The three-dimensional structure of a protein can be very informative regarding the protein's functional characteristics; however, determining protein structures experimentally can be very challenging. Computational methods that model protein structures with sufficient accuracy to facilitate functional studies have had notable successes. To evaluate the accuracy and potential impact of computational protein structure modeling of hypothetical proteins encoded by Chlamydia, a successful computational method termed I-TASSER was utilized to model the three-dimensional structure of a hypothetical protein encoded by open reading frame (ORF) CT296. CT296 has been reported to exhibit functional properties of a divalent cation transcription repressor (DcrA), with similarity to the Escherichia coli iron-responsive transcriptional repressor, Fur. Unexpectedly, the I-TASSER model of CT296 exhibited no structural similarity to any DNA-interacting proteins or motifs. To validate the I-TASSER-generated model, the structure of CT296 was solved experimentally using X-ray crystallography. Impressively, the ab initio I-TASSER-generated model closely matched (2.72-Å Cα root mean square deviation [RMSD]) the high-resolution (1.8-Å) crystal structure of CT296. Modeled and experimentally determined structures of CT296 share structural characteristics of non-heme Fe(II) 2-oxoglutarate-dependent enzymes, although key enzymatic residues are not conserved, suggesting a unique biochemical process is likely associated with CT296 function. Additionally, functional analyses did not support prior reports that CT296 has properties shared with divalent cation repressors such as Fur.
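
    The Cα RMSD after optimal superposition quoted above (2.72 Å) is conventionally computed with the Kabsch algorithm; a sketch follows. The coordinates are random stand-ins rather than the I-TASSER model and crystal structure of CT296.

    import numpy as np

    def kabsch_rmsd(P, Q):
        """RMSD between two (N, 3) coordinate sets after optimal superposition (Kabsch)."""
        P = P - P.mean(axis=0)                         # remove translations
        Q = Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(P.T @ Q)              # SVD of the covariance matrix H = P^T Q
        d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against an improper rotation (reflection)
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # optimal rotation mapping P onto Q
        return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

    rng = np.random.default_rng(4)
    model_ca = rng.normal(size=(180, 3)) * 15.0        # stand-in C-alpha coordinates (angstroms)
    crystal_ca = model_ca + rng.normal(scale=1.5, size=model_ca.shape)
    print(f"C-alpha RMSD = {kabsch_rmsd(model_ca, crystal_ca):.2f} angstroms")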

  14. Validation of performance assessment models

    SciTech Connect

    Bergeron, M.P.; Kincaid, C.T.

    1991-11-01

    The purpose of model validation in a low-level waste site performance assessment is to increase confidence in predictions of the migration and fate of future releases from the wastes. Unlike the process of computer code verification, model validation is a site-specific process that requires site-specific data. This paper provides an overview of the topic of model validation and describes the general approaches, strategies, and limitations of model validation being considered by various researchers concerned with the subject.

  15. Model Valid Prediction Period

    NASA Astrophysics Data System (ADS)

    Chu, P. C.

    2002-12-01

    A new concept, valid prediction period (VPP), is presented here to evaluate model predictability. VPP is defined as the time period when the prediction error first exceeds a pre-determined criterion (i.e., the tolerance level). It depends not only on the instantaneous error growth, but also on the noise level, the initial error, and the tolerance level. The model predictability skill is then represented by a single scalar, VPP. The longer the VPP, the higher the model predictability skill is. A theoretical framework on the basis of the backward Fokker-Planck equation is developed to determine the probability density function (pdf) of VPP. Verification of a Gulf of Mexico nowcast/forecast model is used as an example to demonstrate the usefulness of VPP. Power law scaling is found in the mean square error of displacement between drifting buoy and model trajectories (both at 50 m depth). The pdf of VPP is asymmetric with a long and broad tail on the higher value side, which suggests long-term predictability. The calculations demonstrate that the long-term (extremely long, such as 50-60 days) predictability is not an "outlier" and shares the same statistical properties as the short-term predictions. References: Chu, P. C., L. M. Ivanov, and C. W. Fan, 2002: Backward Fokker-Planck equation for determining model predictability with unknown initial error distribution. J. Geophys. Res., in press. Chu, P. C., L. M. Ivanov, T. M. Margolina, and O. V. Melnichenko, 2002b: On probabilistic stability of an atmospheric model to various amplitude perturbations. J. Atmos. Sci., in press. Chu, P. C., L. M. Ivanov, L. Kantha, O. V. Melnichenko, and Y. A. Poberezhny, 2002c: The long-term correlations and power decay law in model prediction skill. Geophys. Res. Lett., in press.
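
    The VPP definition above (the first time the prediction error exceeds a pre-determined tolerance) translates directly into a short computation; the error series, its power-law growth and the tolerance below are synthetic stand-ins, not the Gulf of Mexico verification data.

    import numpy as np

    def valid_prediction_period(times, errors, tolerance):
        """First time the prediction error exceeds the tolerance (last time if it never does)."""
        exceeded = np.nonzero(np.asarray(errors) > tolerance)[0]
        return times[exceeded[0]] if exceeded.size else times[-1]

    rng = np.random.default_rng(5)
    t_days = np.arange(0.0, 60.0, 0.5)
    error_km = 5.0 * t_days ** 0.8 + rng.normal(0.0, 3.0, t_days.size)  # power-law error growth plus noise
    print("VPP =", valid_prediction_period(t_days, error_km, tolerance=50.0), "days")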

  16. Ecological reality and model validation

    SciTech Connect

    Cale, Jr, W. G.; Shugart, H. H.

    1980-01-01

    Definitions of model realism and model validation are developed. Ecological and mathematical arguments are then presented to show that model equations which explicitly treat ecosystem processes can be systematically improved such that greater realism is attained and the condition of validity is approached. Several examples are presented.

  17. A musculoskeletal model of the equine forelimb for determining surface stresses and strains in the humerus-part II. Experimental testing and model validation.

    PubMed

    Pollock, Sarah; Stover, Susan M; Hull, M L; Galuppo, Larry D

    2008-08-01

    The first objective of this study was to experimentally determine surface bone strain magnitudes and directions at the donor site for bone grafts, the site predisposed to stress fracture, the medial and cranial aspects of the transverse cross section corresponding to the stress fracture site, and the middle of the diaphysis of the humerus of a simplified in vitro laboratory preparation. The second objective was to determine whether computing strains solely in the direction of the longitudinal axis of the humerus in the mathematical model was inherently limited by comparing the strains measured along the longitudinal axis of the bone to the principal strain magnitudes and directions. The final objective was to determine whether the mathematical model formulated in Part I [Pollock et al., 2008, ASME J. Biomech. Eng., 130, p. 041006] is valid for determining the bone surface strains at the various locations on the humerus where experimentally measured longitudinal strains are comparable to principal strains. Triple rosette strain gauges were applied at four locations circumferentially on each of two cross sections of interest using a simplified in vitro laboratory preparation. The muscles included the biceps brachii muscle in addition to loaded shoulder muscles that were predicted active by the mathematical model. Strains from the middle grid of each rosette, aligned along the longitudinal axis of the humerus, were compared with calculated principal strain magnitudes and directions. The results indicated that calculating strains solely in the direction of the longitudinal axis is appropriate at six of eight locations. At the cranial and medial aspects of the middle of the diaphysis, the average minimum principal strain was not comparable to the average experimental longitudinal strain. Further analysis at the remaining six locations indicated that the mathematical model formulated in Part I predicts strains within +/-2 standard deviations of experimental strains at
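
    Recovering principal strain magnitudes and directions from a three-grid rosette, as done above for comparison with the measured longitudinal strains, follows standard strain-transformation relations. The sketch below assumes a 0/45/90-degree rectangular rosette and hypothetical gauge readings, which may differ from the instrumentation actually used.

    import numpy as np

    def principal_strains_rect_rosette(e0, e45, e90):
        """Principal strains and angle (deg from the 0-deg grid) for a 0/45/90 rectangular rosette."""
        ex, ey = e0, e90
        gxy = 2.0 * e45 - e0 - e90                     # engineering shear strain
        center = 0.5 * (ex + ey)
        radius = np.hypot(0.5 * (ex - ey), 0.5 * gxy)
        theta = 0.5 * np.degrees(np.arctan2(gxy, ex - ey))
        return center + radius, center - radius, theta

    # hypothetical microstrain readings from one rosette location
    e_max, e_min, theta = principal_strains_rect_rosette(420e-6, -150e-6, -630e-6)
    print(f"principal strains: {e_max * 1e6:.0f} / {e_min * 1e6:.0f} microstrain at {theta:.1f} deg")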

  18. Validation of SAGE II ozone measurements

    NASA Technical Reports Server (NTRS)

    Cunnold, D. M.; Chu, W. P.; Mccormick, M. P.; Veiga, R. E.; Barnes, R. A.

    1989-01-01

    Five ozone profiles from the Stratospheric Aerosol and Gas Experiment (SAGE) II are compared with coincident ozonesonde measurements obtained at Natal, Brazil, and Wallops Island, Virginia. It is shown that the mean difference between all of the measurements is about 1 percent and that the agreement is within 7 percent at altitudes between 20 and 53 km. Good agreement is also found for ozone mixing ratios on pressure surfaces. It is concluded that the SAGE II profiles provide useful ozone information up to about 60 km altitude.

  19. Empirical agreement in model validation.

    PubMed

    Jebeile, Julie; Barberousse, Anouk

    2016-04-01

    Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion, as a model can be so adjusted as to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation.

  20. SAGE II aerosol validation - Selected altitude measurements, including particle micromeasurements

    NASA Technical Reports Server (NTRS)

    Oberbeck, Verne R.; Russell, Philip B.; Pueschel, Rudolf F.; Snetsinger, Kenneth G.; Ferry, Guy V.; Livingston, John M.; Rosen, James N.; Osborn, Mary T.; Kritz, Mark A.

    1989-01-01

    The validity of particulate extinction coefficients derived from limb path solar radiance measurements obtained during the Stratospheric Aerosol and Gas Experiment (SAGE) II is tested. The SAGE II measurements are compared with correlative aerosol measurements taken during January 1985, August 1985, and July 1986 with impactors, laser spectrometers, and filter samplers on a U-2 aircraft, an upward pointing lidar on a P-3 aircraft, and balloon-borne optical particle counters. The data for July 29, 1986 are discussed in detail. The aerosol measurements taken on this day at an altitude of 20.5 km produce particulate extinction values which validate the SAGE II values for similar wavelengths.

  1. Testing and validating environmental models

    USGS Publications Warehouse

    Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.

    1996-01-01

    Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series

  2. Power Plant Model Validation Tool

    SciTech Connect

    2016-02-12

    The PPMV is used to validate generator models using disturbance recordings. The PPMV tool contains a collection of power plant models and model validation studies, as well as disturbance recordings from a number of historic grid events. The user can import data from a new disturbance into the database, which converts PMU and SCADA data into GE PSLF format, and then run the tool to validate (or invalidate) the model for a specific power plant against its actual performance. The PNNL PPMV tool enables the automation of the process of power plant model validation using disturbance recordings. The tool uses PMU and SCADA measurements as input information. The tool automatically adjusts all required EPCL scripts and interacts with GE PSLF in batch mode. The main tool features include: interaction with GE PSLF; use of the GE PSLF Play-In Function for generator model validation; a database of projects (model validation studies); a database of historic events; a database of power plants; advanced visualization capabilities; and automatic report generation.

  3. Validation of mesoscale models

    NASA Technical Reports Server (NTRS)

    Kuo, Bill; Warner, Tom; Benjamin, Stan; Koch, Steve; Staniforth, Andrew

    1993-01-01

    The topics discussed include the following: verification of cloud prediction from the PSU/NCAR mesoscale model; results from MAPS/NGM verification comparisons and MAPS observation sensitivity tests to ACARS and profiler data; systematic errors and mesoscale verification for a mesoscale model; and the COMPARE Project and the CME.

  4. TUTORIAL: Validating biorobotic models

    NASA Astrophysics Data System (ADS)

    Webb, Barbara

    2006-09-01

    Some issues in neuroscience can be addressed by building robot models of biological sensorimotor systems. What we can conclude from building models or simulations, however, is determined by a number of factors in addition to the central hypothesis we intend to test. These include the way in which the hypothesis is represented and implemented in simulation, how the simulation output is interpreted, how it is compared to the behaviour of the biological system, and the conditions under which it is tested. These issues will be illustrated by discussing a series of robot models of cricket phonotaxis behaviour.

  5. Validation of the Sexual Assault Symptom Scale II (SASS II) using a panel research design.

    PubMed

    Ruch, Libby O; Wang, Chang-Hwai

    2006-11-01

    To examine the utility of a self-report scale of sexual assault trauma, 223 female victims were interviewed with the 43-item Sexual Assault Symptom Scale II (SASS II) at 1, 3, 7, 11, and 15 months postassault. Factor analyses using principal-components extraction with an oblimin rotation yielded 7 common factors with 31 items. The internal consistency was high for 4 factors and moderate for 2 factors. The multitrait-multimethod matrix, correlating the factor subscale scores of self-reported trauma and clinical assessment ratings, demonstrated both convergent and discriminant validity, indicating that the SASS II has construct validity. Correlations between the SASS II subscales and the intrusion subscale of the Impact of Events Scale also indicated the convergent and discriminant validity of the SASS II. Significant positive correlations between current and prior trauma levels further evidence the validity of the SASS.

  6. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models.

    PubMed

    Shi, Leming; Campbell, Gregory; Jones, Wendell D; Campagne, Fabien; Wen, Zhining; Walker, Stephen J; Su, Zhenqiang; Chu, Tzu-Ming; Goodsaid, Federico M; Pusztai, Lajos; Shaughnessy, John D; Oberthuer, André; Thomas, Russell S; Paules, Richard S; Fielden, Mark; Barlogie, Bart; Chen, Weijie; Du, Pan; Fischer, Matthias; Furlanello, Cesare; Gallas, Brandon D; Ge, Xijin; Megherbi, Dalila B; Symmans, W Fraser; Wang, May D; Zhang, John; Bitter, Hans; Brors, Benedikt; Bushel, Pierre R; Bylesjo, Max; Chen, Minjun; Cheng, Jie; Cheng, Jing; Chou, Jeff; Davison, Timothy S; Delorenzi, Mauro; Deng, Youping; Devanarayan, Viswanath; Dix, David J; Dopazo, Joaquin; Dorff, Kevin C; Elloumi, Fathi; Fan, Jianqing; Fan, Shicai; Fan, Xiaohui; Fang, Hong; Gonzaludo, Nina; Hess, Kenneth R; Hong, Huixiao; Huan, Jun; Irizarry, Rafael A; Judson, Richard; Juraeva, Dilafruz; Lababidi, Samir; Lambert, Christophe G; Li, Li; Li, Yanen; Li, Zhen; Lin, Simon M; Liu, Guozhen; Lobenhofer, Edward K; Luo, Jun; Luo, Wen; McCall, Matthew N; Nikolsky, Yuri; Pennello, Gene A; Perkins, Roger G; Philip, Reena; Popovici, Vlad; Price, Nathan D; Qian, Feng; Scherer, Andreas; Shi, Tieliu; Shi, Weiwei; Sung, Jaeyun; Thierry-Mieg, Danielle; Thierry-Mieg, Jean; Thodima, Venkata; Trygg, Johan; Vishnuvajjala, Lakshmi; Wang, Sue Jane; Wu, Jianping; Wu, Yichao; Xie, Qian; Yousef, Waleed A; Zhang, Liang; Zhang, Xuegong; Zhong, Sheng; Zhou, Yiming; Zhu, Sheng; Arasappan, Dhivya; Bao, Wenjun; Lucas, Anne Bergstrom; Berthold, Frank; Brennan, Richard J; Buness, Andreas; Catalano, Jennifer G; Chang, Chang; Chen, Rong; Cheng, Yiyu; Cui, Jian; Czika, Wendy; Demichelis, Francesca; Deng, Xutao; Dosymbekov, Damir; Eils, Roland; Feng, Yang; Fostel, Jennifer; Fulmer-Smentek, Stephanie; Fuscoe, James C; Gatto, Laurent; Ge, Weigong; Goldstein, Darlene R; Guo, Li; Halbert, Donald N; Han, Jing; Harris, Stephen C; Hatzis, Christos; Herman, Damir; Huang, Jianping; Jensen, Roderick V; Jiang, Rui; Johnson, Charles D; Jurman, Giuseppe; Kahlert, Yvonne; Khuder, Sadik A; Kohl, Matthias; Li, Jianying; Li, Li; Li, Menglong; Li, Quan-Zhen; Li, Shao; Li, Zhiguang; Liu, Jie; Liu, Ying; Liu, Zhichao; Meng, Lu; Madera, Manuel; Martinez-Murillo, Francisco; Medina, Ignacio; Meehan, Joseph; Miclaus, Kelci; Moffitt, Richard A; Montaner, David; Mukherjee, Piali; Mulligan, George J; Neville, Padraic; Nikolskaya, Tatiana; Ning, Baitang; Page, Grier P; Parker, Joel; Parry, R Mitchell; Peng, Xuejun; Peterson, Ron L; Phan, John H; Quanz, Brian; Ren, Yi; Riccadonna, Samantha; Roter, Alan H; Samuelson, Frank W; Schumacher, Martin M; Shambaugh, Joseph D; Shi, Qiang; Shippy, Richard; Si, Shengzhu; Smalter, Aaron; Sotiriou, Christos; Soukup, Mat; Staedtler, Frank; Steiner, Guido; Stokes, Todd H; Sun, Qinglan; Tan, Pei-Yi; Tang, Rong; Tezak, Zivana; Thorn, Brett; Tsyganova, Marina; Turpaz, Yaron; Vega, Silvia C; Visintainer, Roberto; von Frese, Juergen; Wang, Charles; Wang, Eric; Wang, Junwei; Wang, Wei; Westermann, Frank; Willey, James C; Woods, Matthew; Wu, Shujian; Xiao, Nianqing; Xu, Joshua; Xu, Lei; Yang, Lun; Zeng, Xiao; Zhang, Jialu; Zhang, Li; Zhang, Min; Zhao, Chen; Puri, Raj K; Scherf, Uwe; Tong, Weida; Wolfinger, Russell D

    2010-08-01

    Gene expression data from microarrays are being applied to predict preclinical and clinical endpoints, but the reliability of these predictions has not been established. In the MAQC-II project, 36 independent teams analyzed six microarray data sets to generate predictive models for classifying a sample with respect to one of 13 endpoints indicative of lung or liver toxicity in rodents, or of breast cancer, multiple myeloma or neuroblastoma in humans. In total, >30,000 models were built using many combinations of analytical methods. The teams generated predictive models without knowing the biological meaning of some of the endpoints and, to mimic clinical reality, tested the models on data that had not been used for training. We found that model performance depended largely on the endpoint and team proficiency and that different approaches generated models of similar performance. The conclusions and recommendations from MAQC-II should be useful for regulatory agencies, study committees and independent investigators that evaluate methods for global gene expression analysis.
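
    To make the practice of testing models on data not used for training concrete, the following is a minimal, hypothetical sketch using scikit-learn on synthetic data; the classifier, the synthetic features, and the choice of the Matthews correlation coefficient as the reported metric are illustrative assumptions, not the MAQC-II protocol itself.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import matthews_corrcoef
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for a gene-expression matrix (samples x features) and a binary endpoint
        X, y = make_classification(n_samples=200, n_features=500, n_informative=20, random_state=0)

        # Hold out an external test set that plays no role in model building
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        mcc = matthews_corrcoef(y_test, clf.predict(X_test))
        print(f"Matthews correlation coefficient on held-out data: {mcc:.2f}")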

  7. Partial Validation of Multibody Program to Optimize Simulated Trajectories II (POST II) Parachute Simulation With Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Ben; Queen, Eric M.

    2002-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed. This capability uses the Program to Optimize Simulated Trajectories II (POST II). Previously, POST II had the ability to simulate multiple bodies without interacting forces. The current implementation is used for the simulation of parachute trajectories, in which the parachute and suspended bodies can be treated as rigid bodies. An arbitrary set of connecting lines can be included in the model and are treated as massless spring-dampers. This paper discusses details of the connection line modeling and results of several test cases used to validate the capability.

  8. A dynamic model of some malaria-transmitting anopheline mosquitoes of the Afrotropical region. II. Validation of species distribution and seasonal variations

    PubMed Central

    2013-01-01

    Background: The first part of this study aimed to develop a model for Anopheles gambiae s.l. with separate parametrization schemes for Anopheles gambiae s.s. and Anopheles arabiensis. The characterizations were constructed based on literature from the past decades. This part of the study focuses on the model's ability to separate the mean state of the two species of the An. gambiae complex in Africa. The model is also evaluated with respect to capturing the temporal variability of An. arabiensis in Ethiopia. Before conclusions and guidance based on models can be made, models need to be validated. Methods: The model used in this paper is described in part one (Malaria Journal 2013, 12:28). For the validation of the model, a database of 5,935 points on the presence of An. gambiae s.s. and An. arabiensis was constructed. An additional 992 points were collected on the presence of An. gambiae s.l. These data were used to assess if the model could recreate the spatial distribution of the two species. The dataset is made available in the public domain. This is followed by a case study from Madagascar where the model's ability to recreate the relative fraction of each species is investigated. In the last section the model's ability to reproduce the temporal variability of An. arabiensis in Ethiopia is tested. The model was compared with data from four papers and one field survey covering two years. Results: Overall, the model has a realistic representation of seasonal and year-to-year variability in mosquito densities in Ethiopia. The model is also able to describe the distribution of An. gambiae s.s. and An. arabiensis in sub-Saharan Africa. This implies this model can be used for seasonal and long-term predictions of changes in the burden of malaria. Before models can be used to improve human health, or to guide which interventions are to be applied where, there is a need to understand the system of interest. Validation is an important part of this process. It is

  9. MODEL VALIDATION AND UNCERTAINTY QUANTIFICATION.

    SciTech Connect

    Hemez, F.M.; Doebling, S.W.

    2000-10-01

    This session offers an open forum to discuss issues and directions of research in the areas of model updating, predictive quality of computer simulations, model validation and uncertainty quantification. Technical presentations review the state-of-the-art in nonlinear dynamics and model validation for structural dynamics. A panel discussion introduces the discussion on technology needs, future trends and challenges ahead, with an emphasis placed on soliciting participation of the audience. One of the goals is to show, through invited contributions, how other scientific communities are approaching and solving difficulties similar to those encountered in structural dynamics. The session also serves the purpose of presenting the on-going organization of technical meetings sponsored by the U.S. Department of Energy and dedicated to health monitoring, damage prognosis, model validation and uncertainty quantification in engineering applications. The session is part of the SD-2000 Forum, a forum to identify research trends, funding opportunities and to discuss the future of structural dynamics.

  10. SAGE II aerosol validation: selected altitude measurements, including particle micromeasurements.

    PubMed

    Oberbeck, V R; Livingston, J M; Russell, P B; Pueschel, R F; Rosen, J N; Osborn, M T; Kritz, M A; Snetsinger, K G; Ferry, G V

    1989-06-20

    Correlative aerosol measurements taken at a limited number of altitudes during coordinated field experiments are used to test the validity of particulate extinction coefficients derived from limb path solar radiance measurements taken by the Stratospheric Aerosol and Gas Experiment (SAGE) II Sun photometer. In particular, results are presented from correlative measurement missions that were conducted during January 1985, August 1985, and July 1986. Correlative sensors included impactors, laser spectrometers, and filter samplers aboard a U-2 airplane, an upward-pointing lidar aboard a P-3 airplane, and balloon-borne optical particle counters (dustsondes). The main body of this paper focuses on the July 29, 1986, validation experiment, which minimized the many difficulties (e.g., spatial and temporal inhomogeneities, imperfect coincidences) that can complicate the validation process. On this day, correlative aerosol measurements taken at an altitude of 20.5 km agreed with each other within their respective uncertainties, and particulate extinction values calculated at SAGE II wavelengths from these measurements validated corresponding SAGE II values. Additional validation efforts on days when measurement and logistical conditions were much less favorable for validation are discussed in an appendix.

  11. Uncertainty Modeling Via Frequency Domain Model Validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Andrisani, Dominick, II

    1999-01-01

    The majority of literature on robust control assumes that a design model is available and that the uncertainty model bounds the actual variations about the nominal model. However, methods for generating accurate design models have not received as much attention in the literature. The influence of the level of accuracy of the uncertainty model on closed loop performance has received even less attention. The research reported herein is an initial step in applying and extending the concept of model validation to the problem of obtaining practical uncertainty models for robust control analysis and design applications. An extension of model validation called 'sequential validation' is presented and applied to a simple spring-mass-damper system to establish the feasibility of the approach and demonstrate the benefits of the new developments.

  12. Simulating long-term dynamics of the coupled North Sea and Baltic Sea ecosystem with ECOSMO II: Model description and validation

    NASA Astrophysics Data System (ADS)

    Daewel, Ute; Schrum, Corinna

    2013-06-01

    The North Sea and the Baltic Sea ecosystems differ substantially in both hydrology and biogeochemical processes. Nonetheless, both systems are closely linked to each other and a coupled modeling approach is indispensable when aiming to simulate and understand long-term ecosystem dynamics in both seas. In this study, we present first an updated version of the fully coupled bio-physical model ECOSMO, a 3D hydrodynamic and N(utrient)P(hytoplankton)Z(ooplankton)D(etritus) model, which is now adapted to the coupled North Sea-Baltic Sea system. To make the model applicable to both ecosystems, processes relevant for the Baltic Sea (e.g. sedimentation, cyanobacteria) were incorporated into the model formulation. Secondly, we assess the validity of the model to describe seasonal, inter-annual and decadal variations in both seas. Our analyses show that the model sufficiently represents the spatial and temporal dynamics in both ecosystems, but with some uncertainties in the coastal areas of the North Sea, likely related to the missing representation of tidal flats in the model, and in the deep-water nutrient pool of the Baltic Sea. Finally, we present results from a 61-year (1948-2008) hindcast of the coupled North Sea and Baltic Sea ecosystem and identify long-term changes in primary and secondary production. The simulated long-term dynamics of primary and secondary production could be corroborated by observations from the available literature and show a general increase in the last three decades of the simulation when compared to the first 30 years. Regime shifts could be identified for both ecosystems, but with differences in both timing and magnitude of the related change.
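
    For readers unfamiliar with the NPZD compartment structure mentioned above, the toy sketch below steps a generic four-compartment nutrient-phytoplankton-zooplankton-detritus system with invented rate constants; it is not the ECOSMO II formulation and serves only to illustrate the type of coupling involved.

        def npzd_step(N, P, Z, D, dt=0.1):
            """One Euler step of a toy NPZD model; rate constants are illustrative only."""
            uptake  = 0.8 * N / (N + 0.5) * P     # nutrient-limited primary production
            grazing = 0.4 * P / (P + 0.3) * Z     # zooplankton grazing on phytoplankton
            p_mort  = 0.05 * P
            z_mort  = 0.05 * Z
            remin   = 0.1 * D                     # remineralization of detritus to nutrients
            dN = -uptake + remin
            dP =  uptake - grazing - p_mort
            dZ =  0.7 * grazing - z_mort          # assumed assimilation efficiency of 0.7
            dD =  0.3 * grazing + p_mort + z_mort - remin
            return N + dt * dN, P + dt * dP, Z + dt * dZ, D + dt * dD

        state = (1.0, 0.1, 0.05, 0.0)             # initial N, P, Z, D (total mass is conserved)
        for _ in range(1000):
            state = npzd_step(*state)
        print("quasi-steady state (N, P, Z, D):", tuple(round(s, 3) for s in state))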

  13. Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2014-01-01

    Computational fluid dynamics (CFD) software that solves the Reynolds-averaged Navier-Stokes (RANS) equations has been in routine use for more than a quarter of a century. It is currently employed not only for basic research in fluid dynamics, but also for the analysis and design processes in many industries worldwide, including aerospace, automotive, power generation, chemical manufacturing, polymer processing, and petroleum exploration. A key feature of RANS CFD is the turbulence model. Because the RANS equations are unclosed, a model is necessary to describe the effects of the turbulence on the mean flow, through the Reynolds stress terms. The turbulence model is one of the largest sources of uncertainty in RANS CFD, and most models are known to be flawed in one way or another. Alternative methods such as direct numerical simulations (DNS) and large eddy simulations (LES) rely less on modeling and hence include more physics than RANS. In DNS all turbulent scales are resolved, and in LES the large scales are resolved and the effects of the smallest turbulence scales are modeled. However, both DNS and LES are too expensive for most routine industrial usage on today's computers. Hybrid RANS-LES, which blends RANS near walls with LES away from walls, helps to moderate the cost while still retaining some of the scale-resolving capability of LES, but for some applications it can still be too expensive. Even considering its associated uncertainties, RANS turbulence modeling has proved to be very useful for a wide variety of applications. For example, in the aerospace field, many RANS models are considered to be reliable for computing attached flows. However, existing turbulence models are known to be inaccurate for many flows involving separation. Research has been ongoing for decades in an attempt to improve turbulence models for separated and other nonequilibrium flows. When developing or improving turbulence models, both verification and validation are important

  14. Statistical validation of stochastic models

    SciTech Connect

    Hunter, N.F.; Barney, P.; Paez, T.L.; Ferregut, C.; Perez, L.

    1996-12-31

    It is common practice in structural dynamics to develop mathematical models for system behavior, and the authors are now capable of developing stochastic models, i.e., models whose parameters are random variables. Such models have random characteristics that are meant to simulate the randomness in characteristics of experimentally observed systems. This paper suggests a formal statistical procedure for the validation of mathematical models of stochastic systems when data taken during operation of the stochastic system are available. The statistical characteristics of the experimental system are obtained using the bootstrap, a technique for the statistical analysis of non-Gaussian data. The authors propose a procedure to determine whether or not a mathematical model is an acceptable model of a stochastic system with regard to user-specified measures of system behavior. A numerical example is presented to demonstrate the application of the technique.
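
    A minimal sketch of the kind of bootstrap comparison described above: resample an experimentally observed feature to characterize its sampling distribution, then ask whether the stochastic model's prediction of that feature falls inside the resulting interval. The data, the feature, and the acceptance rule below are hypothetical; the paper's actual procedure may differ.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical response feature (e.g., peak amplitude) from repeated experiments
        experimental = np.array([4.8, 5.1, 5.4, 4.9, 5.6, 5.0, 5.3, 4.7, 5.2, 5.5])
        model_prediction = 5.9                    # same feature computed from the stochastic model

        # Bootstrap the experimental mean
        boot_means = np.array([rng.choice(experimental, size=experimental.size, replace=True).mean()
                               for _ in range(5000)])
        lo, hi = np.percentile(boot_means, [2.5, 97.5])

        print(f"95% bootstrap interval for the experimental mean: [{lo:.2f}, {hi:.2f}]")
        print("model acceptable for this feature:", lo <= model_prediction <= hi)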

  15. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part I: Benchmark comparisons of WIMS-D5 and DRAGON cell and control rod parameters with MCNP5

    SciTech Connect

    Mollerach, R.; Leszczynski, F.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. For the reactor physics area, a revision and update of calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. As a validation of the new models, some benchmark comparisons were made against Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems, based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes and with MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium.

  16. Factorial and diagnostic validity of the Beck Depression Inventory-II (BDI-II) in Croatian primary health care.

    PubMed

    Jakšić, Nenad; Ivezić, Ena; Jokić-Begić, Nataša; Surányi, Zsuzsanna; Stojanović-Špehar, Stanislava

    2013-09-01

    The aim of this study was to examine the factorial and diagnostic validity of the Beck Depression Inventory-Second Edition (BDI-II) in Croatian primary health care. Data were collected using a medical outpatient sample (N = 314). Reliability measured by internal consistency proved to be high. While the Velicer MAP Test showed that extraction of only one factor is satisfactory, confirmatory factor analysis indicated the best fit for a 3-factor structure model consisting of cognitive, affective and somatic dimensions. Receiver operating characteristics (ROC) analysis demonstrated the BDI-II to have a satisfactory diagnostic validity in differentiating between healthy and depressed individuals in this setting. The area under the curve (AUC), sensitivity and specificity were high with an optimal cut-off score of 15/16. The implications of these findings are discussed regarding the use of the BDI-II as a screening instrument in primary health care settings.
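
    The ROC analysis described above can be reproduced in outline with standard tools. The sketch below uses hypothetical BDI-II totals and clinician diagnoses and selects a cut-off by Youden's J statistic, which is one common convention and not necessarily the criterion used in the study.

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        # Hypothetical BDI-II total scores and clinician diagnoses (1 = depressed)
        scores    = np.array([ 3,  8, 12, 14, 16, 18, 21, 25, 30, 35,  9, 11, 17, 22,  5])
        diagnosis = np.array([ 0,  0,  0,  0,  1,  1,  1,  1,  1,  1,  0,  0,  1,  1,  0])

        auc = roc_auc_score(diagnosis, scores)
        fpr, tpr, thresholds = roc_curve(diagnosis, scores)
        best = np.argmax(tpr - fpr)               # Youden's J = sensitivity + specificity - 1
        print(f"AUC = {auc:.2f}; optimal cut-off = {thresholds[best]:.0f} "
              f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")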

  17. (Validity of environmental transfer models)

    SciTech Connect

    Blaylock, B.G.; Hoffman, F.O.; Gardner, R.H.

    1990-11-07

    BIOMOVS (BIOspheric MOdel Validation Study) is an international cooperative study initiated in 1985 by the Swedish National Institute of Radiation Protection to test models designed to calculate the environmental transfer and bioaccumulation of radionuclides and other trace substances. The objective of the symposium and workshop was to synthesize results obtained during Phase 1 of BIOMOVS (the first five years of the study) and to suggest new directions that might be pursued during Phase 2 of BIOMOVS. The travelers were an instrumental part of the development of BIOMOVS. This symposium allowed the travelers to present a review of past efforts at model validation and a synthesis of current activities and to refine ideas concerning future development of models and data for assessing the fate, effect, and human risks of environmental contaminants. R. H. Gardner also visited the Free University, Amsterdam, and the National Institute of Public Health and Environmental Protection (RIVM) in Bilthoven to confer with scientists about current research in theoretical ecology and the use of models for estimating the transport and effect of environmental contaminants and to learn about the European efforts to map critical loads of acid deposition.

  18. Validation of the Curiosity and Exploration Inventory-II (CEI-II) Among Chinese University Students in Hong Kong.

    PubMed

    Ye, Shengquan; Ng, Ting Kin; Yim, Kin Hang; Wang, Jun

    2015-01-01

    This study aimed at validating the Curiosity and Exploration Inventory-II (CEI-II; Kashdan et al., 2009) in a Chinese context. A total of 294 Chinese first-year undergraduate students in Hong Kong completed the CEI-II and measures of satisfaction with university life, the Big Five personality traits, and human values. The results of exploratory structural equation modeling, parallel analysis, and confirmatory factor analysis supported a 1-factor solution and did not replicate the original 2-factor structure. Time invariance of the 1-factor structure was obtained among 242 participants who completed the questionnaires again after 4 months. The latent means and correlation indicated that curiosity as measured by the CEI-II was quite stable over the period of investigation. The CEI-II was found to be positively correlated with satisfaction with university life, extraversion, agreeableness, conscientiousness, openness to experience, and openness to change values, but negatively with neuroticism and conservation values. The results of hierarchical multiple regression analyses showed that the CEI-II score had incremental validity above and beyond the Big Five personality traits in predicting human values and satisfaction with university life.

  19. Assimilation of Geosat Altimetric Data in a Nonlinear Shallow-Water Model of the Indian Ocean by Adjoint Approach. Part II: Some Validation and Interpretation of the Assimilated Results

    NASA Technical Reports Server (NTRS)

    Greiner, Eric; Perigaud, Claire

    1996-01-01

    This paper examines the results of assimilating Geosat sea level variations relative to the November 1986-November 1988 mean reference in a nonlinear reduced-gravity model of the Indian Ocean. Data have been assimilated during one year starting in November 1986, with the objective of optimizing the initial conditions and the yearly averaged reference surface. The thermocline slope simulated by the model with or without assimilation is validated by comparison with the signal that can be derived from expendable bathythermograph measurements performed in the Indian Ocean at that time. The topography simulated with assimilation on November 1986 is in very good agreement with the hydrographic data. The slopes corresponding to the South Equatorial Current and to the South Equatorial Countercurrent are better reproduced with assimilation than without during the first nine months. The whole circulation of the cyclonic gyre south of the equator is then strongly intensified by assimilation. Another assimilation experiment is run over the following year starting in November 1987. The difference between the two yearly mean surfaces simulated with assimilation is in excellent agreement with Geosat. In the southeastern Indian Ocean, the correction to the yearly mean dynamic topography due to assimilation over the second year is negatively correlated with the one the year before. This correction is also in agreement with hydrographic data. It is likely that the signal corrected by assimilation is not only due to wind error, because simulations driven by various wind forcings present the same features over the two years. Model simulations run with a prescribed throughflow transport anomaly indicate that assimilation is rather correcting the interior of the model domain for inadequate boundary conditions with the Pacific.

  20. SAGE II aerosol data validation - Comparative studies of SAGE II and SAM II data sets

    NASA Technical Reports Server (NTRS)

    Yue, G. K.; Mccormick, M. P.; Chu, W. P.; Wang, P. H.; Osborn, M. T.

    1989-01-01

    Data from the Stratospheric Aerosol and Gas Experiment (SAGE II) satellite are compared with data from the Stratospheric Aerosol Measurement (SAM II) satellite. Both experiments produce aerosol extinction profiles by measuring the attenuation of solar radiation during each sunrise and sunset observed by the satellite. The SAGE II obtains profiles at 1.02 microns and three shorter wavelengths, whereas the SAM II measures in only one radiometric channel, at 1.0 micron. It is found that the differences between the two sets of data are generally within the error bars associated with each measurement. In addition, the sunrise and sunset data from SAGE II are analyzed.

  1. A Comparison and Validation of Two Surface Ship Readiness Models

    DTIC Science & Technology

    1994-09-01

    Two models are used by the U.S. Navy to predict surface ship readiness: the Surface Ship Resources to Material Readiness Model (SRM) and the Surface Ship Inventory to Material Readiness Model (SIM). This thesis examines both models in order to validate the model fit and to ...

  2. Software Validation via Model Animation

    NASA Technical Reports Server (NTRS)

    Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.

    2015-01-01

    This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
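
    In the paper itself the formal models are evaluated with PVSio and compared against the fielded code; the Python sketch below only mimics that comparison pattern on a trivial kinematic example, with both the 'formal model' and the 'implementation' as stand-in functions and an arbitrary tolerance.

        import math
        import random

        def formal_model_position(x0, v0, t):
            """Reference value from the (hypothetical) formal specification: straight-line kinematics."""
            return x0 + v0 * t

        def implementation_position(x0, v0, t):
            """Stand-in for the software implementation under validation."""
            return math.fsum([x0, v0 * t])

        def animate_and_compare(n_cases=1000, tol=1e-9, seed=42):
            """Evaluate both versions on random inputs and check agreement up to a tolerance."""
            rng = random.Random(seed)
            worst = 0.0
            for _ in range(n_cases):
                x0, v0, t = rng.uniform(-1e4, 1e4), rng.uniform(-300, 300), rng.uniform(0, 3600)
                diff = abs(formal_model_position(x0, v0, t) - implementation_position(x0, v0, t))
                worst = max(worst, diff)
                if diff > tol:
                    return False, worst
            return True, worst

        ok, worst = animate_and_compare()
        print(f"all cases within tolerance: {ok}, worst observed difference: {worst:.3e}")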

  3. Factor structure and construct validity of the Behavioral Dyscontrol Scale-II.

    PubMed

    Shura, Robert D; Rowland, Jared A; Yoash-Gantz, Ruth E

    2015-01-01

    The Behavioral Dyscontrol Scale-II (BDS-II) was developed as an improved scoring method to the original BDS, which was designed to evaluate the capacity for independent regulation of behavior and attention. The purpose of this study was to evaluate the factor structure and construct validity of the BDS-II, which had not been adequately re-examined since the development of the new scoring system. In a sample of 164 Veterans with a mean age of 35 years, exploratory factor analysis was used to evaluate BDS-II latent factor structure. Correlations and regressions were used to explore validity against 22 psychometrically sound neurocognitive measures across seven neurocognitive domains of sensation, motor output, processing speed, attention, visual-spatial reasoning, memory, and executive functions. Factor analysis found a two-factor solution for this sample which explained 41% of the variance in the model. Validity analyses found significant correlations among the BDS-II scores and all other cognitive domains except sensation and language (which was not evaluated). Hierarchical regressions revealed that PASAT performance was strongly associated with all three BDS-II scores; dominant hand Finger Tapping Test was also associated with the Total score and Factor 1, and CPT-II Commissions was also associated with Factor 2. These results suggest the BDS-II is both a general test of cerebral functioning, and a more specific test of working memory, motor output, and impulsivity. The BDS-II may therefore show utility with younger populations for measuring frontal lobe abilities and might be very sensitive to neurological injury.

  4. Validation plan for the German CAMAELEON model

    NASA Astrophysics Data System (ADS)

    McManamey, James R.

    1997-06-01

    Engineers and scientists at the US Army's Night Vision and Electronic Sensors Directorate (NVESD) are in the process of evaluating the German CAMAELEON model, a signature evaluation model that was created for use in designing and evaluating camouflage in the visible spectrum and is based on computational vision methodologies. Verification and preliminary validation have been very positive. For this reason, NVESD has planned and is currently in the early execution phase of a more elaborate validation effort using data from an Army field exercise known as DISSTAF-II. The field exercise involved tank gunners, using the currently fielded M1 Abrams tank sights to search for, to target, and to 'fire on' (i.e., to pull the trigger to mark target location) a variety of foreign and domestic vehicles in realistic scenarios. Data from this field exercise will be combined with results of a laboratory measurement of perceptual target detectabilities. The purpose of the laboratory measurement is to separate modeled effects from unmodeled effects in the field data. In the laboratory, observers will be performing a task as similar as possible to that modeled by CAMAELEON. An important feature of this data is that the observers will know where the target is located and will rate the detectability of the targets in a paired comparison experiment utilizing the X-based perceptual experiment testbed developed at the University of Tennessee. For the laboratory measurement the subjects will view exactly the same images as those to be analyzed by CAMAELEON. Three correlations that will be found are expected to be especially important. The correlation between perceptual detectability and model predictions will show the accuracy with which the model predicts human performance of the modeled task (rating target detectabilities). The correlation between laboratory and field data will show how well perceived detectability predicts tank gunner target detection in a realistic scenario. Finally

  5. Inert doublet model and LEP II limits

    SciTech Connect

    Lundstroem, Erik; Gustafsson, Michael; Edsjoe, Joakim

    2009-02-01

    The inert doublet model is a minimal extension of the standard model introducing an additional SU(2) doublet with new scalar particles that could be produced at accelerators. While there exists no LEP II analysis dedicated for these inert scalars, the absence of a signal within searches for supersymmetric neutralinos can be used to constrain the inert doublet model. This translation however requires some care because of the different properties of the inert scalars and the neutralinos. We investigate what restrictions an existing DELPHI Collaboration study of neutralino pair production can put on the inert scalars and discuss the result in connection with dark matter. We find that although an important part of the inert doublet model parameter space can be excluded by the LEP II data, the lightest inert particle still constitutes a valid dark matter candidate.

  6. Verifying and Validating Simulation Models

    SciTech Connect

    Hemez, Francois M.

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state of knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack of knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
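
    As a small illustration of the statement that sampling methods can propagate variability and randomness, the sketch below pushes random input variability through a simple closed-form stand-in for a simulation (a cantilever tip-deflection formula) and summarizes the output distribution; the quantities and distributions are invented for the example.

        import numpy as np

        rng = np.random.default_rng(1)

        def beam_tip_deflection(load, length, E, I):
            """Cantilever tip deflection under an end load: delta = F * L**3 / (3 * E * I)."""
            return load * length**3 / (3.0 * E * I)

        # Aleatoric variability represented as random inputs (hypothetical values)
        load = rng.normal(1000.0, 50.0, size=10_000)    # N
        E    = rng.normal(200e9, 10e9, size=10_000)     # Pa
        delta = beam_tip_deflection(load, length=2.0, E=E, I=8e-6)

        print(f"mean deflection {delta.mean() * 1e3:.2f} mm, "
              f"95th percentile {np.percentile(delta, 95) * 1e3:.2f} mm")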

  7. Obstructive lung disease models: what is valid?

    PubMed

    Ferdinands, Jill M; Mannino, David M

    2008-12-01

    Use of disease simulation models has led to scrutiny of model methods and demand for evidence that models credibly simulate health outcomes. We sought to describe recent obstructive lung disease simulation models and their validation. Medline and EMBASE were used to identify obstructive lung disease simulation models published from January 2000 to June 2006. Publications were reviewed to assess model attributes and four types of validation: first-order (verification/debugging), second-order (comparison with studies used in model development), third-order (comparison with studies not used in model development), and predictive validity. Six asthma and seven chronic obstructive pulmonary disease models were identified. Seven (54%) models included second-order validation, typically by comparing observed outcomes to simulations of source study cohorts. Seven (54%) models included third-order validation, in which modeled outcomes were usually compared qualitatively for agreement with studies independent of the model. Validation endpoints included disease prevalence, exacerbation, and all-cause mortality. Validation was typically described as acceptable, despite near-universal absence of criteria for judging adequacy of validation. Although over half of recent obstructive lung disease simulation models report validation, inconsistencies in validation methods and lack of detailed reporting make assessing adequacy of validation difficult. For simulation modeling to be accepted as a tool for evaluating clinical and public health programs, models must be validated to credibly simulate health outcomes of interest. Defining the required level of validation and providing guidance for quantitative assessment and reporting of validation are important future steps in promoting simulation models as practical decision tools.

  8. Calibration and Validation of the COCOMO II.1997.0 Cost/Schedule Estimating Model to the Space and Missile Systems Center Database

    DTIC Science & Technology

    1997-09-01

    Models seem to be able to estimate more accurately within certain applications; and, 3) The user needs to become as familiar as possible with the...previous AFIT studies which state that model users need to become familiar and experienced with specific models, and that no specific model works best with...models: PRICE-S, SASET, and SEER-SEM (Apgar, Galorath, Maness, and Stukes, 1991). The mission now has been expanded to include other models of interest to

  9. SAGE II aerosol data validation and initial data use - An introduction and overview

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Mccormick, M. P.

    1989-01-01

    The process of validating data from the Stratospheric Aerosol and Gas Experiment (SAGE) II and the initial use of the validated data are reviewed. The instruments developed for the SAGE II, the influence of the eruption of El Chichon on the global stratospheric aerosol, and various data validation experiments are discussed. Consideration is given to methods for deriving aerosol physical and optical properties from SAGE II extinction data and for inferring particle size distribution moments from SAGE II spectral extinction values.

  10. Geochemistry Model Validation Report: External Accumulation Model

    SciTech Connect

    K. Zarrabi

    2001-09-27

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high-pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation

  11. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.
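
    The abstract does not give the connection-line equations, so the following is only a generic sketch of a tension-only spring-damper line of the kind used to couple a parachute and a suspended body; the stiffness, damping, and geometry are hypothetical and this is not the POST 2 implementation.

        import numpy as np

        def line_tension_force(p_parachute, p_payload, v_parachute, v_payload,
                               natural_length, k, c):
            """Force on the payload from one massless spring-damper line (tension only: a slack line exerts no force)."""
            sep = np.asarray(p_parachute, float) - np.asarray(p_payload, float)
            dist = float(np.linalg.norm(sep))
            if dist <= natural_length or dist == 0.0:
                return np.zeros(3)                         # slack line: no force
            unit = sep / dist
            stretch_rate = float(np.dot(np.asarray(v_parachute, float) - np.asarray(v_payload, float), unit))
            tension = k * (dist - natural_length) + c * stretch_rate
            return max(tension, 0.0) * unit                # pulls the payload toward the parachute

        f = line_tension_force([0.0, 0.0, 12.0], [0.0, 0.0, 0.0],
                               [0.0, 0.0, -4.0], [0.0, 0.0, -6.0],
                               natural_length=10.0, k=5000.0, c=200.0)
        print("line force on payload [N]:", f)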

  12. An Aqueous Thermodynamic Model for the Complexation of Sodium and Strontium with Organic Chelators valid to High Ionic Strength. II. N-(2-hydroxyethyl)ethylenedinitrilotriacetic acid (HEDTA)

    SciTech Connect

    Felmy, Andrew R.; Mason, Marvin J.; Qafoku, Odeta

    2003-04-01

    This is the second paper in a two part series on the development of aqueous thermodynamic models for the complexation of Na+ and Sr2+ with organic chelators. In this paper the development of an aqueous thermodynamic model describing the effects of ionic strength, carbonate concentration, and temperature on the complexation of Sr2+ by HEDTA under basic conditions is presented. The thermodynamic model describing the Na+ interactions with the HEDTA3- chelate relies solely on the use of Pitzer ion-interaction parameters. The exclusive use of Pitzer ion-interaction parameters differs significantly from our previous model for EDTA, which required the introduction of a NaEDTA3- ion pair. Estimation of the Pitzer ion-interaction parameters for HEDTA3- and SrHEDTA- with Na+ allows the extrapolation of a standard state equilibrium constant for the SrHEDTA- species which is one order of magnitude greater than the 0.1M reference state value available in the literature. The overall model is developed from data available in the literature on apparent equilibrium constants for HEDTA protonation, the solubility of salts in concentrated HEDTA solutions, and from new data on the solubility of SrCO3(c) obtained as part of this study. The predictions of the final thermodynamic model for the Na-Sr-OH-CO3-NO3-HEDTA-H2O system are tested by application to chemical systems containing competing metal ions (i.e., Ca2+).

  13. Reliability and Validity of the Beck Depression Inventory-II among Korean Adolescents

    PubMed Central

    Lee, Eun-Ho; Lee, Soo-Ji; Hwang, Soon-Taeg; Hong, Sang-Hwang

    2017-01-01

    Objective: The Beck Depression Inventory-II (BDI-II) is one of the most popular scales for evaluating the severity of depression in adolescents as well as adults. The prevalence of depression increases during adolescence, and it has shown a rapid increase with occurrence at an earlier age and a tendency to continue into adulthood. Data from an adolescent nonclinical sample provide more information related to depressive symptoms as potential risk factors. The current study was designed with two objectives: 1) to analyze the reliability and validity of the BDI-II among Korean adolescents and 2) to evaluate the factorial structure in a Korean nonclinical adolescent sample. Methods: The participants included 1072 adolescent boys and girls. We assessed the internal consistency, corrected item-total correlation, and convergent validity of the BDI-II. We also performed confirmatory factor analyses to determine the internal structure of the BDI-II for Korean adolescents using Mplus 6.1. Results: The Cronbach's alpha for the BDI-II total score was 0.89. The correlation between the BDI-II and the PHQ-9 was strong (r=0.75), and correlations with anxiety-related measures were 0.68 and 0.71, also in the high range. Among the five different factor structures, the modified three-factor model demonstrated the best overall fit. Conclusion: The BDI-II is a reliable tool for measuring the severity of depressive symptoms in Korean adolescents. The findings therefore provide basic information for examining the prevalence of, and intervention strategies for, depression in adolescents. PMID:28096872
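
    Internal consistency figures like the 0.89 reported above are usually Cronbach's alpha, which can be computed directly from item and total-score variances. The sketch below uses invented item scores, not the Korean BDI-II data.

        import numpy as np

        def cronbach_alpha(items):
            """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
            items = np.asarray(items, float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

        # Tiny hypothetical example: 6 respondents answering 4 items scored 0-3
        scores = np.array([[0, 1, 0, 1],
                           [1, 1, 1, 2],
                           [2, 2, 1, 2],
                           [3, 2, 3, 3],
                           [1, 0, 1, 1],
                           [2, 3, 2, 2]])
        print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")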

  14. Validation of Groundwater Models: Meaningful or Meaningless?

    NASA Astrophysics Data System (ADS)

    Konikow, L. F.

    2003-12-01

    Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.

  15. Preparation of Power Distribution System for High Penetration of Renewable Energy Part I. Dynamic Voltage Restorer for Voltage Regulation Pat II. Distribution Circuit Modeling and Validation

    NASA Astrophysics Data System (ADS)

    Khoshkbar Sadigh, Arash

    by simulation and experimental tests under various conditions considering all possible cases such as different amounts of voltage sag depth (VSD), different amounts of point-on-wave (POW) at which voltage sag occurs, harmonic distortion, line frequency variation, and phase jump (PJ). Furthermore, the ripple amount of the fundamental voltage amplitude calculated by the proposed method, and its error, is analyzed considering line frequency variation together with harmonic distortion. The best and worst detection times of the proposed method were measured as 1 ms and 8.8 ms, respectively. Finally, the proposed method has been compared with other voltage sag detection methods available in the literature. Part 2: Power System Modeling for Renewable Energy Integration: As power distribution systems are evolving into more complex networks, electrical engineers have to rely on software tools to perform circuit analysis. There are dozens of powerful software tools available in the market to perform power system studies. Although their main functions are similar, there are differences in features and formatting structures to suit specific applications. This creates challenges for transferring power system circuit model data (PSCMD) between different software and rebuilding the same circuit in the second software environment. The objective of this part of the thesis is to develop a Unified Platform (UP) to facilitate transferring PSCMD among different software packages and relieve the challenges of the circuit model conversion process. UP uses a commonly available spreadsheet file with a defined format, for any source software to write data to and for any destination software to read data from, via a script-based application called the PSCMD transfer application. The main considerations in developing the UP are to minimize manual intervention and import a one-line diagram into the destination software or export it from the source software, with all details to allow load flow, short circuit and

  16. Yankee Rowe simulator core model validation

    SciTech Connect

    Napolitano, M.E.

    1990-01-01

    This paper presents the validation of the Yankee Rowe simulator core model. Link-Miles Simulation Corporation is developing the Yankee Rowe simulator and Yankee Atomic Electric Company is involved in input and benchmark data generation, as well as simulator validation. Core model validation by Yankee comprises three tasks: (1) careful generation of fuel reactivity characteristics (B constants); (2) nonintegrated core model testing; and (3) fully integrated core model testing. Simulator core model validation and verification is a multistage process involving input and benchmark data generation as well as interactive debugging. Core characteristics were brought within acceptable criteria by this process. This process was achieved through constant communication between Link-Miles and Yankee engineers. Based on this validation, the Yankee Rowe simulator core model is found to be acceptable for training purposes.

  17. Model validation software -- Theory manual

    SciTech Connect

    Dolin, R.M.

    1997-11-04

    Work began in May of 1991 on the initial Independent Spline (IS) technology. The IS technology was based on research by Dolin showing that numerical topology and geometry could be validated through their topography. A unique contribution of this research is that the IS technology has provided a capability to modify one spline's topology to match another spline's topography. Work began in May of 1996 to extend the original IS capability to allow solid model topologies to be compared with corresponding two-dimensional topologies. Work began in July 1996 to extend the IS capability to allow for tool path and inspection data analyses. Tool path analysis involves spline-spline comparisons. Inspection data analysis involves fitting inspection data with some type of analytical curve and then comparing that curve with the original (i.e., nominal) curve topology. There are three types of curves that the inspection data can be fit with. Using all three types of curve fits helps engineers understand the as-built state of whatever it is that is being interrogated. The ability to compute axisymmetric volumes of revolution for a data set fit with any of the three curve-fitting methods described above will be added later. This involves integrating the area under each curve and then revolving the area through 2π radians to get a volume of revolution. The algorithms for doing this will be taken from the IGVIEW software system. The main IS program module parses out the desired activities into four different logical paths: (1) original IS spline modification; (2) two- or three-dimensional topography evaluated against a 2D spline; (3) tool path analysis with tool path modifications; and (4) tool path and inspection data comparisons with nominal topography. Users have the option of running the traditional IS application software, comparing 3D ASCII data to a Wilson-Fowler spline interpolation of 2D data, comparing a Wilson-Fowler spline interpolation to analytical topology, or
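
    The planned volume-of-revolution capability amounts to integrating a fitted profile and revolving the enclosed area through 2π radians. The manual does not state the numerical scheme; the sketch below uses the disc method, V = π ∫ r(z)² dz, on a hypothetical fitted profile as one plausible realization.

        import numpy as np

        def volume_of_revolution(z, r):
            """Volume of the axisymmetric solid obtained by revolving the profile r(z) about the z-axis."""
            z, r = np.asarray(z, float), np.asarray(r, float)
            integrand = r ** 2
            # trapezoid rule for the integral of r(z)^2 dz, then multiply by pi (disc method)
            integral = float(np.sum(np.diff(z) * (integrand[:-1] + integrand[1:]) / 2.0))
            return np.pi * integral

        # Hypothetical fitted profile, e.g. from inspection data of a turned part
        z = np.linspace(0.0, 10.0, 201)
        r = 2.0 + 0.1 * z                         # slightly tapered cylinder
        print(f"as-built volume of revolution: {volume_of_revolution(z, r):.2f} (length units cubed)")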

  18. NASA GSFC CCMC Recent Model Validation Activities

    NASA Technical Reports Server (NTRS)

    Rastaetter, L.; Pulkkinen, A.; Taktakishvill, A.; Macneice, P.; Shim, J. S.; Zheng, Yihua; Kuznetsova, M. M.; Hesse, M.

    2012-01-01

    The Community Coordinated Modeling Center (CCMC) holds the largest assembly of state-of-the-art physics-based space weather models developed by the international space physics community. In addition to providing the community easy access to these modern space research models to support science research, another primary goal is to test and validate models for transition from research to operations. In this presentation, we provide an overview of the space science models available at CCMC. We then focus on the community-wide model validation efforts led by CCMC in all domains of the Sun-Earth system, and on the internal validation efforts at CCMC to support space weather services/operations provided by its sibling organization, the NASA GSFC Space Weather Center (http://swc.gsfc.nasa.gov). We will also discuss our efforts in operational model validation in collaboration with NOAA/SWPC.

  19. Feature Extraction for Structural Dynamics Model Validation

    SciTech Connect

    Farrar, Charles; Nishio, Mayuko; Hemez, Francois; Stull, Chris; Park, Gyuhae; Cornwell, Phil; Figueiredo, Eloi; Luscher, D. J.; Worden, Keith

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  20. Algorithm for model validation: theory and applications.

    PubMed

    Sornette, D; Davis, A B; Ide, K; Vixie, K R; Pisarenko, V; Kamm, J R

    2007-04-17

    Validation is often defined as the process of determining the degree to which a model is an accurate representation of the real world from the perspective of its intended uses. Validation is crucial as industries and governments depend increasingly on predictions by computer models to justify their decisions. We propose to formulate the validation of a given model as an iterative construction process that mimics the often implicit process occurring in the minds of scientists. We offer a formal representation of the progressive build-up of trust in the model. Thus, we replace static claims on the impossibility of validating a given model by a dynamic process of constructive approximation. This approach is better adapted to the fuzzy, coarse-grained nature of validation. Our procedure factors in the degree of redundancy versus novelty of the experiments used for validation as well as the degree to which the model predicts the observations. We illustrate the methodology first with the maturation of quantum mechanics as the arguably best established physics theory and then with several concrete examples drawn from some of our primary scientific interests: a cellular automaton model for earthquakes, a multifractal random walk model for financial time series, an anomalous diffusion model for solar radiation transport in the cloudy atmosphere, and a computational fluid dynamics code for the Richtmyer-Meshkov instability.

  1. Algorithm for model validation: Theory and applications

    PubMed Central

    Sornette, D.; Davis, A. B.; Ide, K.; Vixie, K. R.; Pisarenko, V.; Kamm, J. R.

    2007-01-01

    Validation is often defined as the process of determining the degree to which a model is an accurate representation of the real world from the perspective of its intended uses. Validation is crucial as industries and governments depend increasingly on predictions by computer models to justify their decisions. We propose to formulate the validation of a given model as an iterative construction process that mimics the often implicit process occurring in the minds of scientists. We offer a formal representation of the progressive build-up of trust in the model. Thus, we replace static claims on the impossibility of validating a given model by a dynamic process of constructive approximation. This approach is better adapted to the fuzzy, coarse-grained nature of validation. Our procedure factors in the degree of redundancy versus novelty of the experiments used for validation as well as the degree to which the model predicts the observations. We illustrate the methodology first with the maturation of quantum mechanics as the arguably best established physics theory and then with several concrete examples drawn from some of our primary scientific interests: a cellular automaton model for earthquakes, a multifractal random walk model for financial time series, an anomalous diffusion model for solar radiation transport in the cloudy atmosphere, and a computational fluid dynamics code for the Richtmyer–Meshkov instability. PMID:17420476

  2. Ground-water models cannot be validated

    USGS Publications Warehouse

    Konikow, L.F.; Bredehoeft, J.D.

    1992-01-01

    Ground-water models are embodiments of scientific hypotheses. As such, the models cannot be proven or validated, but only tested and invalidated. However, model testing and the evaluation of predictive errors lead to improved models and a better understanding of the problem at hand. In applying ground-water models to field problems, errors arise from conceptual deficiencies, numerical errors, and inadequate parameter estimation. Case histories of model applications to the Dakota Aquifer, South Dakota, to bedded salts in New Mexico, and to the upper Coachella Valley, California, illustrate that calibration produces a nonunique solution and that validation, per se, is a futile objective. Although models are definitely valuable tools for analyzing ground-water systems, their predictive accuracy is limited. The terms validation and verification are misleading and their use in ground-water science should be abandoned in favor of more meaningful model-assessment descriptors. © 1992.

  3. Validity of the Sleep Subscale of the Diagnostic Assessment for the Severely Handicapped-II (DASH-II)

    ERIC Educational Resources Information Center

    Matson, Johnny L.; Malone, Carrie J.

    2006-01-01

    Currently there are no available sleep disorder measures for individuals with severe and profound intellectual disability. We, therefore, attempted to establish the external validity of the "Diagnostic Assessment for the Severely Handicapped-II" (DASH-II) sleep subscale by comparing daily observational sleep data with the responses of…

  4. Reliability and Validity of the Beck Depression Inventory--II with Adolescent Psychiatric Inpatients

    ERIC Educational Resources Information Center

    Osman, Augustine; Kopper, Beverly A.; Barrios, Frank; Gutierrez, Peter M.; Bagge, Courtney L.

    2004-01-01

    This investigation was conducted to validate the Beck Depression Inventory--II (BDI-II; A. T. Beck, R. A. Steer, & G. K. Brown, 1996) in samples of adolescent psychiatric inpatients. The sample in each substudy was primarily Caucasian. In Study 1, expert raters (N=7) and adolescent psychiatric inpatients (N=13) evaluated the BDI-II items to assess…

  5. Systematic Independent Validation of Inner Heliospheric Models

    NASA Technical Reports Server (NTRS)

    MacNeice, P. J.; Taktakishvili, Aleksandre

    2008-01-01

    This presentation is the first in a series which will provide independent validation of community models of the outer corona and inner heliosphere. In this work we establish a set of measures to be used in validating this group of models. We use these procedures to generate a comprehensive set of results from the Wang-Sheeley-Arge (WSA) model which will be used as a baseline, or reference, against which to compare all other models. We also run a test of the validation procedures by applying them to a small set of results produced by the ENLIL Magnetohydrodynamic (MHD) model. In future presentations we will validate other models currently hosted by the Community Coordinated Modeling Center (CCMC), including a comprehensive validation of the ENLIL model. The Wang-Sheeley-Arge (WSA) model is widely used to model the Solar wind, and is used by a number of agencies to predict Solar wind conditions at Earth as much as four days into the future. Because it is so important to both the research and space weather forecasting communities, it is essential that its performance be measured systematically and independently. In this paper we offer just such an independent and systematic validation. We report skill scores for the model's predictions of wind speed and IMF polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests line-of-sight magnetograms. For this study we generated model results for monthly magnetograms from the National Solar Observatory (SOLIS), Mount Wilson Observatory and the GONG network, spanning the Carrington rotation range from 1650 to 2068. We compare the influence of the different magnetogram sources, performance at quiet and active times, and estimate the effect of different empirical wind speed tunings. We also consider the ability of the WSA model to identify sharp transitions in wind speed from slow to fast wind. These results will serve as a baseline against which to compare future
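    The report's own skill-score definition is not reproduced here; as a hedged illustration, the sketch below uses a common mean-squared-error skill score relative to a reference (persistence) forecast for wind speed, plus a simple hit rate for IMF polarity. All arrays and the choice of reference are hypothetical.

      import numpy as np

      # Hedged sketch: one common forecast skill score is
      # SS = 1 - MSE(model) / MSE(reference), where the reference could be a
      # persistence or climatological wind-speed forecast. Whether this matches
      # the metric used in the WSA validation above is an assumption.

      def mse(pred, obs):
          pred, obs = np.asarray(pred, float), np.asarray(obs, float)
          return np.mean((pred - obs) ** 2)

      def skill_score(model_pred, reference_pred, obs):
          return 1.0 - mse(model_pred, obs) / mse(reference_pred, obs)

      # Hypothetical solar-wind speeds (km/s) over one interval.
      obs      = [420, 450, 500, 610, 580, 470]
      wsa_pred = [400, 460, 520, 560, 590, 480]
      persist  = [430] * 6   # trivial reference: the last observed value persists

      print(f"WSA skill score vs persistence: {skill_score(wsa_pred, persist, obs):.2f}")

      # IMF polarity prediction can be scored separately as a hit rate.
      obs_pol  = [+1, +1, -1, -1, +1, -1]
      wsa_pol  = [+1, -1, -1, -1, +1, -1]
      hit_rate = np.mean(np.array(obs_pol) == np.array(wsa_pol))
      print(f"IMF polarity hit rate: {hit_rate:.2f}")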

  6. A stochastic approach to model validation

    NASA Astrophysics Data System (ADS)

    Luis, Steven J.; McLaughlin, Dennis

    This paper describes a stochastic approach for assessing the validity of environmental models. In order to illustrate basic concepts we focus on the problem of modeling moisture movement through an unsaturated porous medium. We assume that the modeling objective is to predict the mean distribution of moisture content over time and space. The mean moisture content describes the large-scale flow behavior of most interest in many practical applications. The model validation process attempts to determine whether the model's predictions are acceptably close to the mean. This can be accomplished by comparing small-scale measurements of moisture content to the model's predictions. Differences between these two quantities can be attributed to three distinct 'error sources': (1) measurement error, (2) spatial heterogeneity, and (3) model error. If we adopt appropriate stochastic descriptions for the first two sources of error we can view model validation as a hypothesis testing problem where the null hypothesis states that model error is negligible. We illustrate this concept by comparing the predictions of a simple two-dimensional deterministic model to measurements collected during a field experiment carried out near Las Cruces, New Mexico. Preliminary results from this field test indicate that a stochastic approach to validation can identify model deficiencies and provide objective standards for model performance.
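    A minimal sketch of the hypothesis-testing view described above, assuming residuals between point measurements and the predicted mean moisture content are independent and Gaussian with a known combined variance from measurement error and spatial heterogeneity. The variance, sample size, and data are invented for illustration.

      import numpy as np
      from scipy import stats

      # Sketch of validation as hypothesis testing. Under H0 (model error is
      # negligible), residuals between small-scale measurements and the predicted
      # mean should have zero mean, with scatter from measurement error plus
      # spatial heterogeneity (assumed known here).

      rng = np.random.default_rng(0)
      predicted_mean = 0.25                 # model-predicted mean moisture content
      sigma_combined = 0.03                 # assumed meas.-error + heterogeneity std
      measurements = predicted_mean + rng.normal(0.01, sigma_combined, size=40)

      residuals = measurements - predicted_mean
      z = residuals.mean() / (sigma_combined / np.sqrt(len(residuals)))
      p_value = 2 * (1 - stats.norm.cdf(abs(z)))

      print(f"z = {z:.2f}, p = {p_value:.3f}")
      if p_value < 0.05:
          print("Reject H0: model error is detectable above the noise sources.")
      else:
          print("Fail to reject H0: model error not detectable at this noise level.")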

  7. Empirical assessment of model validity

    SciTech Connect

    Wolfe, R.R.

    1991-05-01

    The metabolism of amino acids is far more complicated than a 1- to 2-pool model. Yet, these simple models have been extensively used with many different isotopically labeled tracers to study protein metabolism. A tracer of leucine and measurement of leucine kinetics has been a favorite choice for following protein metabolism. However, administering a leucine tracer and following it in blood will not adequately reflect the complex multi-pool nature of the leucine system. Using the tracer enrichment of the ketoacid metabolite of leucine, alpha-ketoisocaproate (KIC), to reflect intracellular events of leucine was an important improvement. Whether this approach is adequate to accurately follow leucine metabolism in vivo has not been tested. From data obtained using simultaneous administration of leucine and KIC tracers, we developed a 10-pool model of the in vivo leucine-KIC and bicarbonate kinetic system. Data from this model were compared with conventional measurements of leucine kinetics. The results from the 10-pool model agreed best with the simplified approach using a leucine tracer and measurement of KIC enrichment.
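    For readers unfamiliar with multi-pool kinetic models, the sketch below integrates a drastically reduced two-pool leucine/KIC exchange as linear ODEs. It is only an illustration of the machinery; the 10-pool structure and rate constants of the study above are not reproduced, and every rate value is invented.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Heavily simplified two-pool tracer model (leucine <-> KIC) solved as
      # linear ODEs. All rate constants below are hypothetical.

      k_leu_to_kic = 0.15   # 1/min, transamination (hypothetical)
      k_kic_to_leu = 0.10   # 1/min, reamination (hypothetical)
      k_kic_ox     = 0.05   # 1/min, irreversible oxidation to CO2 (hypothetical)

      def pools(t, y):
          leu, kic = y
          d_leu = -k_leu_to_kic * leu + k_kic_to_leu * kic
          d_kic =  k_leu_to_kic * leu - (k_kic_to_leu + k_kic_ox) * kic
          return [d_leu, d_kic]

      # Tracer bolus placed in the leucine pool at t = 0.
      sol = solve_ivp(pools, (0.0, 120.0), y0=[100.0, 0.0],
                      t_eval=np.linspace(0.0, 120.0, 7))

      for t, leu, kic in zip(sol.t, sol.y[0], sol.y[1]):
          print(f"t={t:5.1f} min  leucine tracer={leu:6.2f}  KIC tracer={kic:6.2f}")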

  8. Social anxiety and fear of negative evaluation: construct validity of the BFNE-II.

    PubMed

    Carleton, R Nicholas; Collimore, Kelsey C; Asmundson, Gordon J G

    2007-01-01

    disorder. Psychological Assessment, 17, 179-190]; however [Carleton, R. N., McCreary, D., Norton, P. J., & Asmundson, G. J. G. (in press-a). The Brief Fear of Negative Evaluation Scale, Revised. Depression & Anxiety; Collins, K. A., Westra, H. A., Dozois, D. J. A., & Stewart, S. H. (2005). The validity of the brief version of the fear of negative evaluation scale. Journal of Anxiety Disorders, 19, 345-359] recommend that these items be reworded to maintain scale sensitivity. The present study examined the reliability and validity of the BFNE-II, a version of the BFNE evaluating revisions of the reverse-worded items in a community sample. A unitary model of the BFNE-II resulted in excellent confirmatory factor analysis fit indices. Moderate convergent and discriminant validity were found when BFNE-II items were correlated with additional independent measures of social anxiety [i.e., Social Interaction Anxiety & Social Phobia Scales; Mattick, R. P., & Clarke, J. C. (1998). Development and validation of measures of social phobia scrutiny fear and social interaction anxiety. Behaviour Research and Therapy, 36, 455-470], and fear [i.e., Anxiety Sensitivity Index; Reiss, S., & McNally, R. J. (1985). The expectancy model of fear. In S. Reiss, R. R. Bootzin (Eds.), Theoretical issues in behaviour therapy (pp. 107--121). New York: Academic Press. and the Illness/Injury Sensitivity Index; Carleton, R. N., Park, I., & Asmundson, G. J. G. (in press-b). The Illness/Injury Sensitivity Index: an examination of construct validity. Depression & Anxiety). These findings support the utility of the revised items and the validity of the BFNE-II as a measure of the fear of negative evaluation. Implications and future research directions are discussed.

  9. Validation of the Hot Strip Mill Model

    SciTech Connect

    Richard Shulkosky; David Rosberg; Jerrud Chapman

    2005-03-30

    The Hot Strip Mill Model (HSMM) is an off-line, PC based software originally developed by the University of British Columbia (UBC) and the National Institute of Standards and Technology (NIST) under the AISI/DOE Advanced Process Control Program. The HSMM was developed to predict the temperatures, deformations, microstructure evolution and mechanical properties of steel strip or plate rolled in a hot mill. INTEG process group inc. undertook the current task of enhancing and validating the technology. With the support of 5 North American steel producers, INTEG process group tested and validated the model using actual operating data from the steel plants and enhanced the model to improve prediction results.

  10. Ground-water models: Validate or invalidate

    USGS Publications Warehouse

    Bredehoeft, J.D.; Konikow, L.F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.

  11. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    SciTech Connect

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.
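    The verification comparisons described in this volume pit the numerical solution against problems with known solutions. The sketch below shows that pattern on a trivially small case, a one-dimensional steady conduction problem solved by finite differences and compared with its exact linear profile; the grid and temperatures are arbitrary stand-ins, not a HYDRA-II case.

      import numpy as np

      # Verification-style check: solve 1-D steady heat conduction with fixed end
      # temperatures by finite differences and compare with the exact solution.

      n = 21                          # grid points
      L = 1.0                         # slab thickness (m)
      T_left, T_right = 400.0, 300.0  # boundary temperatures (K)
      x = np.linspace(0.0, L, n)

      # Assemble the tridiagonal system for d2T/dx2 = 0 with Dirichlet ends.
      A = np.zeros((n, n)); b = np.zeros(n)
      A[0, 0] = A[-1, -1] = 1.0
      b[0], b[-1] = T_left, T_right
      for i in range(1, n - 1):
          A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

      T_numerical = np.linalg.solve(A, b)
      T_exact = T_left + (T_right - T_left) * x / L

      print(f"max |numerical - exact| = {np.max(np.abs(T_numerical - T_exact)):.2e} K")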

  12. Structural system identification: Structural dynamics model validation

    SciTech Connect

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  13. Feature extraction for structural dynamics model validation

    SciTech Connect

    Hemez, Francois; Farrar, Charles; Park, Gyuhae; Nishio, Mayuko; Worden, Keith; Takeda, Nobuo

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing those response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered include sensitivity, dimensionality, type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a comparison method of multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must not only consider the sensitivity of the features being used, but also the correlation of the parameters being compared.
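    A sketch of the Mahalanobis-distance comparison of multivariate feature vectors mentioned above: feature vectors from repeated experiments define a reference distribution, and a candidate simulation's feature vector is scored by its distance from that distribution. The feature names and all values are synthetic.

      import numpy as np

      # Sketch: score a candidate simulation's feature vector against the
      # distribution of experimentally derived feature vectors using the
      # Mahalanobis distance as the outlier metric. All numbers are synthetic.

      rng = np.random.default_rng(1)
      # Rows: repeated experiments; columns: response features (e.g. RMS level,
      # peak frequency, an AR-model coefficient).
      experimental_features = rng.normal([1.0, 12.0, 0.3], [0.05, 0.4, 0.02], size=(30, 3))

      mean = experimental_features.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(experimental_features, rowvar=False))

      def mahalanobis(feature_vector):
          d = feature_vector - mean
          return float(np.sqrt(d @ cov_inv @ d))

      candidate_model_features = np.array([1.02, 13.5, 0.31])   # from a numerical run
      print(f"Mahalanobis distance = {mahalanobis(candidate_model_features):.2f}")
      # A distance far in the tail of the reference distribution would flag the
      # candidate parameter set as inconsistent with the test data.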

  14. Evaluation (not validation) of quantitative models.

    PubMed

    Oreskes, N

    1998-12-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  15. Evaluation (not validation) of quantitative models.

    PubMed Central

    Oreskes, N

    1998-01-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  16. Validation of Space Weather Models at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, M. M.; Pulkkinen, A.; Rastaetter, L.; Hesse, M.; Chulaki, A.; Maddox, M.

    2011-01-01

    The Community Coordinated Modeling Center (CCMC) is a multiagency partnership, which aims at the creation of next generation space weather models. The CCMC's goal is to support the research and developmental work necessary to substantially increase space weather modeling capabilities and to facilitate the deployment of advanced models in forecasting operations. The CCMC conducts unbiased model testing and validation and evaluates model readiness for the operational environment. The presentation will demonstrate the recent progress in CCMC metrics and validation activities.

  17. Validating the Beck Depression Inventory-II for Hong Kong Community Adolescents

    ERIC Educational Resources Information Center

    Byrne, Barbara M.; Stewart, Sunita M.; Lee, Peter W. H.

    2004-01-01

    The primary purpose of this study was to test for the validity of a Chinese version of the Beck Depression Inventory-II (C-BDI-II) for use with Hong Kong community (i.e., nonclinical) adolescents. Based on a randomized triadic split of the data (N = 1460), we conducted exploratory factor analysis on Group1 (n = 486) and confirmatory factor…

  18. Real-time remote scientific model validation

    NASA Technical Reports Server (NTRS)

    Frainier, Richard; Groleau, Nicolas

    1994-01-01

    This paper describes flight results from the use of a CLIPS-based validation facility to compare analyzed data from a space life sciences (SLS) experiment to an investigator's preflight model. The comparison, performed in real-time, either confirms or refutes the model and its predictions. This result then becomes the basis for continuing or modifying the investigator's experiment protocol. Typically, neither the astronaut crew in Spacelab nor the ground-based investigator team is able to react to their experiment data in real time. This facility, part of a larger science advisor system called Principal Investigator in a Box, was flown on the space shuttle in October 1993. The software system aided the conduct of a human vestibular physiology experiment and was able to outperform humans in the tasks of data integrity assurance, data analysis, and scientific model validation. Of twelve preflight hypotheses associated with the investigator's model, seven were confirmed and five were rejected or compromised.

  19. Validity evidence based on internal structure of scores on the Spanish version of the Self-Description Questionnaire-II.

    PubMed

    Ingles, Cándido J; Torregrosa, María S; Hidalgo, María D; Nuñez, Jose C; Castejón, Juan L; García-Fernández, Jose M; Valles, Antonio

    2012-03-01

    The aim of this study was to analyze the reliability and validity evidence of scores on the Spanish version of Self-Description Questionnaire II (SDQ-II). The instrument was administered in a sample of 2022 Spanish students (51.1% boys) from grades 7 to 10. Confirmatory factor analysis (CFA) was used to examine validity evidence based on internal structure drawn from the scores on the SDQ-II. CFA replicated the correlated 11 first-order factor structure. Furthermore, hierarchical confirmatory factor analysis (HCFA) was used to examine the hierarchical ordering of self-concept, as measured by scores on the Spanish version of the SDQ-II. Although a series of HCFA models were tested to assess the organization of academic and non-academic components, support for those hierarchical models was weaker than for the correlated 11 first-order factor structure. Results also indicated that scores on the Spanish version of the SDQ-II had internal consistency and test-retest reliability estimates within an acceptable range.

  20. Validation of Hadronic Models in GEANT4

    SciTech Connect

    Koi, Tatsumi; Wright, Dennis H.; Folger, Gunter; Ivanchenko, Vladimir; Kossov, Mikhail; Starkov, Nikolai; Heikkinen, Aatos; Truscott, Peter; Lei, Fan; Wellisch, Hans-Peter

    2007-09-26

    Geant4 is a software toolkit for the simulation of the passage of particles through matter. It has abundant hadronic models from thermal neutron interactions to ultra relativistic hadrons. An overview of validations in Geant4 hadronic physics is presented based on thin target measurements. In most cases, good agreement is available between Monte Carlo prediction and experimental data; however, several problems have been detected which require some improvement in the models.

  1. Predicting Backdrafting and Spillage for Natural-Draft Gas Combustion Appliances: Validating VENT-II

    SciTech Connect

    Rapp, Vi H.; Pastor-Perez, Albert; Singer, Brett C.; Wray, Craig P.

    2013-04-01

    VENT-II is a computer program designed to provide detailed analysis of natural draft and induced draft combustion appliance vent-systems (i.e., furnace or water heater). This program is capable of predicting house depressurization thresholds that lead to backdrafting and spillage of combustion appliances; however, validation reports of the program being applied for this purpose are not readily available. The purpose of this report is to assess VENT-II’s ability to predict combustion gas spillage events due to house depressurization by comparing VENT-II simulated results with experimental data for four appliance configurations. The results show that VENT-II correctly predicts depressurizations resulting in spillage for natural draft appliances operating in cold and mild outdoor conditions, but not for hot conditions. In the latter case, the predicted depressurizations depend on whether the vent section is defined as part of the vent connector or the common vent when setting up the model. Overall, the VENT-II solver requires further investigation before it can be used reliably to predict spillage caused by depressurization over a full year of weather conditions, especially where hot conditions occur.

  2. Calibration and validation of rockfall models

    NASA Astrophysics Data System (ADS)

    Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.

    2013-04-01

    Calibrating and validating landslide models is extremely difficult due to the particular characteristics of landslides: limited recurrence in time, relatively low frequency of the events, short durability of post-event traces, and poor availability of continuous monitoring data, especially for small landslides and rockfalls. For this reason, most of the rockfall models presented in the literature completely lack calibration and validation of the results. In this contribution, we explore different strategies for rockfall model calibration and validation starting from both an historical event and a full-scale field test. The event occurred in 2012 in Courmayeur (Western Alps, Italy), and caused serious damage to quarrying facilities. This event was studied soon after its occurrence through a field campaign aimed at mapping the blocks arrested along the slope, the shape and location of the detachment area, and the traces of scars associated with impacts of blocks on the slope. The full-scale field test was performed by Geovert Ltd in the Christchurch area (New Zealand) after the 2011 earthquake. During the test, a number of large blocks were mobilized from the upper part of the slope and filmed with high-velocity cameras from different viewpoints. The movies of each released block were analysed to identify the block shape, the propagation path, the location of impacts, the height of the trajectory and the velocity of the block along the path. Both calibration and validation of rockfall models should be based on the optimization of the agreement between the actual trajectories or locations of arrested blocks and the simulated ones. A measure that describes this agreement is therefore needed. For calibration purposes, this measure should be simple enough to allow trial-and-error repetitions of the model for parameter optimization. In this contribution we explore different calibration/validation measures: (1) the percentage of simulated blocks arresting within a buffer of the
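    A sketch of the first agreement measure named above, the percentage of simulated blocks arresting within a buffer distance of an observed arrest position. Coordinates, block counts, and the buffer width are synthetic placeholders.

      import numpy as np

      # Sketch of one calibration/validation measure: the fraction of simulated
      # block arrest positions that fall within a buffer distance of any mapped
      # (observed) arrest position. All coordinates and the buffer are synthetic.

      rng = np.random.default_rng(6)
      observed = rng.uniform(0, 100, size=(25, 2))               # mapped positions (m)
      simulated = observed[rng.integers(0, 25, 60)] + rng.normal(scale=8.0, size=(60, 2))

      buffer_m = 10.0
      # Distance from each simulated block to its nearest observed block.
      dists = np.sqrt(((simulated[:, None, :] - observed[None, :, :]) ** 2).sum(-1)).min(1)
      agreement = np.mean(dists <= buffer_m)
      print(f"simulated blocks within {buffer_m:.0f} m of an observed block: {agreement:.0%}")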

  3. Measuring avoidance of pain: validation of the Acceptance and Action Questionnaire II-pain version.

    PubMed

    Reneman, Michiel F; Kleen, Marco; Trompetter, Hester R; Schiphorst Preuper, Henrica R; Köke, Albère; van Baalen, Bianca; Schreurs, Karlein M G

    2014-06-01

    Psychometric research on widely used questionnaires aimed at measuring experiential avoidance of chronic pain has led to inconclusive results. To test the structural validity, internal consistency, and construct validity of a recently developed short questionnaire: the Acceptance and Action Questionnaire II-pain version (AAQ-II-P). Cross-sectional validation study among 388 adult patients with chronic nonspecific musculoskeletal pain admitted for multidisciplinary pain rehabilitation in four tertiary rehabilitation centers in the Netherlands. Cronbach's α was calculated to analyze internal consistency. Principal component analysis was performed to analyze factor structure. Construct validity was analyzed by examining the association between acceptance of pain and measures of psychological flexibility (two scales and sum), pain catastrophizing (three scales and sum), and mental and physical functioning. Interpretation was based on a-priori defined hypotheses. The compound of the seven items of the AAQ-II-P shows a Cronbach's α of 0.87. The single component explained 56.2% of the total variance. Correlations ranged from r=-0.21 to 0.73. Two of the predefined hypotheses were rejected and seven were not rejected. The AAQ-II-P measures a single component and has good internal consistency, and construct validity is not rejected. Thus, the construct validity of the AAQ-II-P sum scores as indicator of experiential avoidance of pain was supported.
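    The two psychometric quantities named above, Cronbach's α and the variance captured by a single principal component, can be computed as sketched below on a synthetic 7-item response matrix. The data are random placeholders, not the study's responses.

      import numpy as np

      # Sketch: Cronbach's alpha and the variance explained by the first principal
      # component for a 7-item questionnaire. Responses are random placeholders.

      rng = np.random.default_rng(2)
      latent = rng.normal(size=(388, 1))                              # one construct
      items = 4 + 2 * latent + rng.normal(scale=1.0, size=(388, 7))   # 7 correlated items

      def cronbach_alpha(x):
          k = x.shape[1]
          item_var = x.var(axis=0, ddof=1).sum()
          total_var = x.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      # Principal component analysis via eigenvalues of the correlation matrix.
      corr = np.corrcoef(items, rowvar=False)
      eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
      explained = eigvals[0] / eigvals.sum()

      print(f"Cronbach's alpha       = {cronbach_alpha(items):.2f}")
      print(f"PC1 variance explained = {explained:.1%}")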

  4. SELDI Validation Study Phase II — EDRN Public Portal

    Cancer.gov

    This project, "A Comprehensive Program for the Validation of Prostate Cancer Early Detection with Novel Protein Identification Techniques," is divided into three phases. The goal of Phase I was to assess the reproducibility and portability of Surface-Enhanced Laser Desorption and Ionization time-of-flight mass spectrometry (SELDI-TOF-MS) using protein profiles generated from serum. Phase I was recently successfully completed at six institutions using a single source of pooled sera.

  5. Crystallographic Model Validation: from Diagnosis to Healing

    PubMed Central

    Richardson, Jane S.; Prisant, Michael G.; Richardson, David C.

    2013-01-01

    Model validation has evolved from a passive final gatekeeping step to an ongoing diagnosis and healing process that enables significant improvement of accuracy. A recent phase of active development was spurred by the worldwide Protein Data Bank requiring data deposition and establishing Validation Task Force committees, by strong growth in high-quality reference data, by new speed and ease of computations, and by an upswing of interest in large molecular machines and structural ensembles. Progress includes automated correction methods, concise and user-friendly validation reports for referees and on the PDB websites, extension of error correction to RNA and error diagnosis to ligands, carbohydrates, and membrane proteins, and a good start on better methods for low resolution and for multiple conformations. PMID:24064406

  6. Validation of cleaning procedures for highly potent drugs. II. Bisnafide.

    PubMed

    Segretario, J; Cook, S C; Umbles, C L; Walker, J T; Woodeshick, R W; Rubino, J T; Shea, J A

    1998-11-01

    The objective of this work was the development and validation of procedures designed to clean glass and stainless steel surfaces after exposure to the experimental anticancer drug, bisnafide. The cleaning procedures, using 5% acetic acid water, Alconox, and water, were validated using a wipe test and an HPLC method developed to quantitate low levels of bisnafide. The procedure developed for cleaning stainless steel is more stringent than that for glass because of the apparent greater affinity of bisnafide for stainless steel. The HPLC method is shown to be linear and reproducible (RSD 4.4% or less), with a detection limit of 4 ng/ml. Recoveries of 95.1, 83.5, and 70.0% were obtained from the wipe pads, glass plates, and stainless steel plates, respectively, at levels of approximately 0.7-1.7 ng/cm2. The cleaning procedures are shown to clean glass and stainless steel plates to less than 0.19 and 0.33 ng bisnafide/cm2, respectively. These results further demonstrate the need to fully characterize the recovery of drugs from surfaces and swabs in order to properly validate cleaning procedures. In addition, they demonstrate the potential need to develop surface-specific cleaning procedures.

  7. Solar Sail Model Validation from Echo Trajectories

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Brickerhoff, Adam T.

    2007-01-01

    The NASA In-Space Propulsion program has been engaged in a project to increase the technology readiness of solar sails. Recently, these efforts came to fruition in the form of several software tools to model solar sail guidance, navigation and control. Furthermore, solar sails are one of five technologies competing for the New Millennium Program Space Technology 9 flight demonstration mission. The historic Echo 1 and Echo 2 balloons were comprised of aluminized Mylar, which is the near-term material of choice for solar sails. Both spacecraft, but particularly Echo 2, were in low Earth orbits with characteristics similar to the proposed Space Technology 9 orbit. Therefore, the Echo balloons are excellent test cases for solar sail model validation. We present the results of studies of Echo trajectories that validate solar sail models of optics, solar radiation pressure, shape and low-thrust orbital dynamics.

  8. Using Model Checking to Validate AI Planner Domain Models

    NASA Technical Reports Server (NTRS)

    Penix, John; Pecheur, Charles; Havelund, Klaus

    1999-01-01

    This report describes an investigation into using model checking to assist validation of domain models for the HSTS planner. The planner models are specified using a qualitative temporal interval logic with quantitative duration constraints. We conducted several experiments to translate the domain modeling language into the SMV, Spin and Murphi model checkers. This allowed a direct comparison of how the different systems would support specific types of validation tasks. The preliminary results indicate that model checking is useful for finding faults in models that may not be easily identified by generating test plans.

  9. SPR Hydrostatic Column Model Verification and Validation.

    SciTech Connect

    Bettin, Giorgia; Lord, David; Rudeen, David Keith

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for an extended period of time. This report describes the HCM model, its functional requirements, the model structure and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
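    The core of any hydrostatic column calculation is removing the weight of each fluid column from a known pressure (P = ρgh per column). The sketch below does only that for a hypothetical nitrogen/oil/brine stack; densities, depths, and the cavern pressure are invented, and the actual SPR model additionally treats nitrogen compressibility and temperature.

      # Sketch of a hydrostatic column calculation: given a pressure at depth and
      # the fluid columns stacked in the well, estimate wellhead pressure by
      # removing each column's hydrostatic head. All values are hypothetical.

      G = 9.81  # m/s^2

      def wellhead_pressure(pressure_at_depth_pa, columns):
          """columns: list of (fluid_name, density_kg_m3, height_m)."""
          p = pressure_at_depth_pa
          for _, rho, h in columns:
              p -= rho * G * h          # remove this column's hydrostatic head
          return p

      columns = [
          ("nitrogen",  90.0, 300.0),   # compressed gas cap (density is a rough guess)
          ("crude oil", 850.0, 500.0),
          ("brine",    1200.0, 200.0),
      ]
      p_at_depth = 12.0e6               # Pa, hypothetical pressure at the casing shoe
      print(f"estimated wellhead pressure: {wellhead_pressure(p_at_depth, columns)/1e6:.2f} MPa")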

  10. Multi-criteria validation of a precipitation runoff model

    NASA Astrophysics Data System (ADS)

    Beldring, Stein

    2002-02-01

    The multi-criteria calibration procedure MOCOM-UA was used to evaluate the validity of a physically based precipitation-runoff model by forcing the model to simulate several observed system responses simultaneously. The model is based on kinematic wave approximations to saturated subsurface flow and saturation overland flow at the hillslope scale in a landscape with a shallow layer of permeable deposits overlying a relatively impermeable bedrock. The following results were found: (i) the MOCOM-UA method was capable of exploiting information about the physical system contained in the measurement data time series; (ii) the multi-criteria calibration procedure provided estimates of the uncertainty associated with model predictions and parameters; (iii) multi-criteria calibration constraining the behavior of the precipitation-runoff model to observed runoff and groundwater levels reduced the uncertainty of model predictions; (iv) the multi-criteria method reduced the uncertainty of the estimates of model parameters; (v) the precipitation-runoff model was able to reproduce several observed system responses simultaneously during both calibration and validation periods; and (vi) the groundwater table depths exerted a major control on the hydrological response of the investigated catchment.
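    A sketch of the multi-criteria idea: the model is scored against two observed responses at once, here with a Nash-Sutcliffe efficiency for runoff and one for groundwater level. The series are synthetic, and the Pareto search that MOCOM-UA performs over parameter sets is not reproduced; only the objective evaluation is shown.

      import numpy as np

      # Sketch: multi-criteria evaluation of a precipitation-runoff model against
      # two observed responses using the Nash-Sutcliffe efficiency (NSE) for each.
      # A multi-criteria calibrator would search parameter space for the Pareto
      # set of these objectives; the series below are synthetic.

      def nash_sutcliffe(sim, obs):
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      obs_runoff = np.array([1.2, 1.1, 3.5, 2.8, 1.9, 1.4])   # mm/day, synthetic
      sim_runoff = np.array([1.0, 1.2, 3.1, 3.0, 2.1, 1.3])
      obs_gw     = np.array([0.8, 0.9, 0.6, 0.5, 0.6, 0.7])   # m below surface, synthetic
      sim_gw     = np.array([0.9, 0.9, 0.7, 0.5, 0.5, 0.7])

      objectives = {"runoff NSE": nash_sutcliffe(sim_runoff, obs_runoff),
                    "groundwater NSE": nash_sutcliffe(sim_gw, obs_gw)}
      print(objectives)   # a parameter set is judged on both criteria simultaneously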

  11. Establishment and Validation of GV-SAPS II Scoring System for Non-Diabetic Critically Ill Patients

    PubMed Central

    Liu, Wen-Yue; Lin, Shi-Gang; Zhu, Gui-Qi; Poucke, Sven Van; Braddock, Martin; Zhang, Zhongheng; Mao, Zhi; Shen, Fei-Xia

    2016-01-01

    Background and Aims Recently, glucose variability (GV) has been reported as an independent risk factor for mortality in non-diabetic critically ill patients. However, GV is not incorporated in any severity scoring system for critically ill patients currently. The aim of this study was to establish and validate a modified Simplified Acute Physiology Score II scoring system (SAPS II), integrated with GV parameters and named GV-SAPS II, specifically for non-diabetic critically ill patients to predict short-term and long-term mortality. Methods Training and validation cohorts were extracted from the Multiparameter Intelligent Monitoring in Intensive Care database III version 1.3 (MIMIC-III v1.3). The GV-SAPS II score was constructed by Cox proportional hazard regression analysis and compared with the original SAPS II, Sepsis-related Organ Failure Assessment Score (SOFA) and Elixhauser scoring systems using the area under the receiver operating characteristic curve (auROC). Results 4,895 and 5,048 eligible individuals were included in the training and validation cohorts, respectively. The GV-SAPS II score was established with four independent risk factors, including hyperglycemia, hypoglycemia, standard deviation of blood glucose levels (GluSD), and SAPS II score. In the validation cohort, the auROC values of the new scoring system were 0.824 (95% CI: 0.813–0.834, P< 0.001) and 0.738 (95% CI: 0.725–0.750, P< 0.001), respectively for 30 days and 9 months, which were significantly higher than other models used in our study (all P < 0.001). Moreover, Kaplan-Meier plots demonstrated significantly worse outcomes in higher GV-SAPS II score groups both for 30-day and 9-month mortality endpoints (all P< 0.001). Conclusions We established and validated a modified prognostic scoring system that integrated glucose variability for non-diabetic critically ill patients, named GV-SAPS II. It demonstrated a superior prognostic capability and may be an optimal scoring system
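    Two ingredients of the comparison above can be sketched directly: a per-patient glucose-variability term (the standard deviation of glucose readings) and the auROC used to compare scoring systems. Patients, outcomes, and the toy composite score below are synthetic; the real GV-SAPS II weights come from Cox regression on MIMIC-III.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      # Sketch: (1) a glucose-variability term per patient (SD of glucose, GluSD),
      # (2) comparing two risk scores by area under the ROC curve for mortality.
      # All patients, scores and outcomes are synthetic.

      rng = np.random.default_rng(3)
      n = 500
      glucose_sd = rng.gamma(shape=2.0, scale=15.0, size=n)     # GluSD per patient
      saps_ii    = rng.normal(40, 12, size=n)                   # baseline severity score

      # Synthetic 30-day mortality, more likely with high severity and high GV.
      risk = 0.04 * (saps_ii - 40) + 0.02 * (glucose_sd - 30)
      died = rng.random(n) < 1 / (1 + np.exp(-risk))

      gv_saps_ii = saps_ii + 0.5 * glucose_sd                   # toy composite score
      print(f"auROC SAPS II    : {roc_auc_score(died, saps_ii):.3f}")
      print(f"auROC GV-SAPS II : {roc_auc_score(died, gv_saps_ii):.3f}")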

  12. Hierarchical Model Validation of Symbolic Performance Models of Scientific Kernels

    SciTech Connect

    Alam, Sadaf R; Vetter, Jeffrey S

    2006-08-01

    Multi-resolution validation of hierarchical performance models of scientific applications is critical primarily for two reasons. First, the step-by-step validation determines the correctness of all essential components or phases in a science simulation. Second, a model that is validated at multiple resolution levels is the very first step to generate predictive performance models, for not only existing systems but also for emerging systems and future problem sizes. We present the design and validation of hierarchical performance models of two scientific benchmarks using a new technique called the modeling assertions (MA). Our MA prototype framework generates symbolic performance models that can be evaluated efficiently by generating the equivalent model representations in Octave and MATLAB. The multi-resolution modeling and validation is conducted on two contemporary, massively-parallel systems, XT3 and Blue Gene/L system. The workload distribution and the growth rates predictions generated by the MA models are confirmed by the experimental data collected on the MPP platforms. In addition, the physical memory requirements that are generated by the MA models are verified by the runtime values on the Blue Gene/L system, which has 512 MBytes and 256 MBytes physical memory capacity in its two unique execution modes.
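    In the spirit of the modeling-assertions approach, a symbolic performance model is an analytic expression for workload or memory as a function of problem size and process count, checked against measurements. The expressions and numbers below are illustrative assumptions, not the MA models of the benchmarks discussed above.

      # Sketch of a symbolic performance model: analytic expressions for per-process
      # work and memory as functions of problem size N and process count P, compared
      # against (hypothetical) measured values.

      def predicted_flops(N, P, flops_per_cell=120):
          return flops_per_cell * N**3 / P      # assumed O(N^3) compute, ideally split

      def predicted_memory_bytes(N, P, bytes_per_cell=64):
          return bytes_per_cell * N**3 / P      # physical memory per process

      measured_flops = {   # (N, P) -> measured flops per process, hypothetical
          (128, 64): 4.1e6,
          (256, 256): 8.0e6,
      }

      for (N, P), meas in measured_flops.items():
          pred = predicted_flops(N, P)
          rel_err = abs(pred - meas) / meas
          print(f"N={N:4d} P={P:4d}  predicted={pred:.2e}  measured={meas:.2e}  "
                f"relative error={rel_err:.1%}")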

  13. Validation of Space Weather Models at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, M. M.; Hesse, M.; Pulkkinen, A.; Maddox, M.; Rastaetter, L.; Berrios, D.; Zheng, Y.; MacNeice, P. J.; Shim, J.; Taktakishvili, A.; Chulaki, A.

    2011-01-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership to support the research and developmental work necessary to substantially increase space weather modeling capabilities and to facilitate the deployment of advanced models in forecasting operations. Space weather models and coupled model chains hosted at the CCMC range from the solar corona to the Earth's upper atmosphere. CCMC has developed a number of real-time modeling systems, as well as a large number of modeling and data products tailored to address the space weather needs of NASA's robotic missions. The CCMC conducts unbiased model testing and validation and evaluates model readiness for the operational environment. CCMC has been leading recent comprehensive modeling challenges under the GEM, CEDAR and SHINE programs. The presentation will focus on experience in carrying out comprehensive and systematic validation of large sets of space weather models.

  14. Turbulence Modeling Validation, Testing, and Development

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.; Huang, P. G.; Coakley, T. J.

    1997-01-01

    The primary objective of this work is to provide accurate numerical solutions for selected flow fields and to compare and evaluate the performance of selected turbulence models with experimental results. Four popular turbulence models have been tested and validated against experimental data of ten turbulent flows. The models are: (1) the two-equation k-omega model of Wilcox, (2) the two-equation k-epsilon model of Launder and Sharma, (3) the two-equation k-omega/k-epsilon SST model of Menter, and (4) the one-equation model of Spalart and Allmaras. The flows investigated are five free shear flows consisting of a mixing layer, a round jet, a plane jet, a plane wake, and a compressible mixing layer; and five boundary layer flows consisting of an incompressible flat plate, a Mach 5 adiabatic flat plate, a separated boundary layer, an axisymmetric shock-wave/boundary layer interaction, and an RAE 2822 transonic airfoil. The experimental data for these flows are well established and have been extensively used in model developments. The results are shown in the following four sections: Part A describes the equations of motion and boundary conditions; Part B describes the model equations, constants, parameters, boundary conditions, and numerical implementation; and Parts C and D describe the experimental data and the performance of the models in the free-shear flows and the boundary layer flows, respectively.

  15. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
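    One way to provide the "instant experience" described above is to place the observed residual plot next to plots generated from data simulated under the fitted model's own assumptions, so a violated assumption stands out by comparison. The sketch below does this for a simple linear fit; the data and layout are illustrative.

      import numpy as np
      import matplotlib.pyplot as plt

      # Sketch: compare the residuals-vs-fitted plot of the actual fit with plots
      # from datasets simulated under the fitted model's own assumptions.

      rng = np.random.default_rng(4)
      x = rng.uniform(0, 10, 80)
      y = 2.0 + 1.5 * x + rng.normal(scale=2.0, size=x.size)   # a true linear model

      beta = np.polyfit(x, y, 1)
      fitted = np.polyval(beta, x)
      sigma = np.std(y - fitted, ddof=2)

      fig, axes = plt.subplots(1, 4, figsize=(12, 3), sharey=True)
      axes[0].scatter(fitted, y - fitted, s=10)
      axes[0].set_title("observed residuals")
      for ax in axes[1:]:
          y_sim = fitted + rng.normal(scale=sigma, size=x.size)   # data under the null
          ax.scatter(fitted, y_sim - fitted, s=10)
          ax.set_title("simulated under model")
      for ax in axes:
          ax.axhline(0.0, lw=0.8)
      plt.tight_layout()
      plt.savefig("residual_lineup.png")   # compare the real panel with the null panels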

  16. Concepts of Model Verification and Validation

    SciTech Connect

    B.H.Thacker; S.W.Doebling; F.M.Hemez; M.C. Anderson; J.E. Pepin; E.A. Rodriguez

    2004-10-30

    Model verification and validation (V&V) is an enabling methodology for the development of computational models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost, and risk associated with full-scale testing of products, materials, and weapon systems. Quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions. The development of guidelines and procedures for conducting a model V&V program are currently being defined by a broad spectrum of researchers. This report reviews the concepts involved in such a program. Model V&V is a current topic of great interest to both government and industry. In response to a ban on the production of new strategic weapons and nuclear testing, the Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship Program (SSP). An objective of the SSP is to maintain a high level of confidence in the safety, reliability, and performance of the existing nuclear weapons stockpile in the absence of nuclear testing. This objective has challenged the national laboratories to develop high-confidence tools and methods that can be used to provide credible models needed for stockpile certification via numerical simulation. There has been a significant increase in activity recently to define V&V methods and procedures. The U.S. Department of Defense (DoD) Modeling and Simulation Office (DMSO) is working to develop fundamental concepts and terminology for V&V applied to high-level systems such as ballistic missile defense and battle management simulations. The American Society of Mechanical Engineers (ASME) has recently formed a Standards Committee for the development of V&V procedures for computational solid mechanics models. The Defense Nuclear Facilities Safety Board (DNFSB) has been a proponent of model V&V for all

  17. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    USGS Publications Warehouse

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  18. Validation of the Offending-Related Attitudes Questionnaire of CRIME-PICS II Scale (Chinese)

    ERIC Educational Resources Information Center

    Chui, Wing Hong; Wu, Joseph; Kwok, Yan Yuen; Liu, Liu

    2017-01-01

    This study examined the factor structure, reliability, and validity of the first part of the Chinese version of the CRIME-PICS II Scale, a self-administrated instrument assessing offending-related attitudes. Data were collected from three samples: male Hong Kong young offenders, female Mainland Chinese prisoners, and Hong Kong college students.…

  19. Concurrent Validity of the Online Version of the Keirsey Temperament Sorter II.

    ERIC Educational Resources Information Center

    Kelly, Kevin R.; Jugovic, Heidi

    2001-01-01

    Data from the Keirsey Temperament Sorter II online instrument and Myers Briggs Type Indicator (MBTI) for 203 college freshmen were analyzed. Positive correlations appeared between the concurrent MBTI and Keirsey measures of psychological type, giving preliminary support to the validity of the online version of Keirsey. (Contains 28 references.)…

  20. Reliability and validity of the California Verbal Learning Test (CVLT) -II - Japanese version.

    PubMed

    Sumiyoshi, Tomiki; Toyomaki, Atsuhito; Kawano, Naoko; Kitajima, Tomoko; Kusumi, Ichiro; Ozaki, Norio; Iwata, Nakao; Nakagome, Kazuyuki

    2017-03-20

    In line with the recent initiative to establish a standard measure of cognitive decline in bipolar disorder, the present study tested the criterion-related validity and test-retest reliability of the California Verbal Learning Test (CVLT)-II Japanese version.

  1. Validated predictive modelling of the environmental resistome

    PubMed Central

    Amos, Gregory CA; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-01-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532
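    The style of analysis described above, regressing a resistance indicator on catchment variables and then checking predictive ability on held-out sites, can be sketched as below. The explanatory variables, coefficients, and site counts are invented; they only mirror the kind of predictors (treatment-plant influence, land cover, rainfall) reported in the study.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score

      # Sketch: regress a resistance indicator (e.g. class 1 integron prevalence)
      # on catchment variables, then check predictive ability on held-out sites
      # via explained variance. All variables and values below are invented.

      rng = np.random.default_rng(5)
      n_sites = 52
      wwtp_capacity = rng.gamma(2.0, 5.0, n_sites)       # upstream treatment-plant size
      grassland_pct = rng.uniform(0, 60, n_sites)        # surrounding land cover
      rainfall_mm   = rng.uniform(0, 40, n_sites)        # prior rainfall

      X = np.column_stack([wwtp_capacity, grassland_pct, rainfall_mm])
      y = (0.04 * wwtp_capacity - 0.002 * grassland_pct + 0.005 * rainfall_mm
           + rng.normal(scale=0.05, size=n_sites))       # synthetic integron prevalence

      train, test = slice(0, 40), slice(40, None)        # hold out independent sites
      model = LinearRegression().fit(X[train], y[train])
      print(f"variance explained on held-out sites: {r2_score(y[test], model.predict(X[test])):.1%}")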

  2. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, from which the method based on Bayesian networks is most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).
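    A sketch of the analytical-redundancy idea behind the method: each redundancy relation combines sensor readings and should evaluate to roughly zero when the contributing sensors are healthy, and the set of violated relations is matched against each sensor's signature to identify candidate faults. The toy tank relations, signatures, and threshold below are invented, not the paper's formulation.

      # Sketch of sensor validation with analytical redundancy relations (ARRs):
      # each relation should be ~0 when its sensors are healthy; the violated
      # relations are matched against sensor signatures. Toy values throughout.

      THRESHOLD = 0.5

      def evaluate_arrs(r):
          residuals = {
              # mass balance for a unit-area tank: inflow - outflow = level rate
              "ARR1": r["flow_in"] - r["flow_out"] - r["level_rate"],
              # hydrostatic consistency: bottom pressure (kPa) = 9.81 * level (m)
              "ARR2": r["pressure_bottom"] - 9.81 * r["level"],
          }
          return {name: abs(res) > THRESHOLD for name, res in residuals.items()}

      SIGNATURES = {  # which relations each sensor participates in
          "flow_in": {"ARR1"}, "flow_out": {"ARR1"}, "level_rate": {"ARR1"},
          "level": {"ARR2"}, "pressure_bottom": {"ARR2"},
      }

      readings = {"flow_in": 2.0, "flow_out": 1.0, "level_rate": 1.0,
                  "pressure_bottom": 49.05, "level": 3.0}   # level sensor biased low
      violated = {name for name, bad in evaluate_arrs(readings).items() if bad}
      suspects = [s for s, sig in SIGNATURES.items() if violated and violated <= sig]
      print(f"violated: {sorted(violated)}  candidate faulty sensors: {suspects}")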

  3. Integrated Medical Model Verification, Validation, and Credibility

    NASA Technical Reports Server (NTRS)

    Walton, Marlei; Kerstman, Eric; Foy, Millennia; Shah, Ronak; Saile, Lynn; Boley, Lynn; Butler, Doug; Myers, Jerry

    2014-01-01

    The Integrated Medical Model (IMM) was designed to forecast relative changes for a specified set of crew health and mission success risk metrics by using a probabilistic (stochastic process) model based on historical data, cohort data, and subject matter expert opinion. A probabilistic approach is taken since exact (deterministic) results would not appropriately reflect the uncertainty in the IMM inputs. Once the IMM was conceptualized, a plan was needed to rigorously assess input information, framework and code, and output results of the IMM, and ensure that end user requests and requirements were considered during all stages of model development and implementation. METHODS: In 2008, the IMM team developed a comprehensive verification and validation (VV) plan, which specified internal and external review criteria encompassing 1) verification of data and IMM structure to ensure proper implementation of the IMM, 2) several validation techniques to confirm that the simulation capability of the IMM appropriately represents occurrences and consequences of medical conditions during space missions, and 3) credibility processes to develop user confidence in the information derived from the IMM. When the NASA-STD-7009 (7009) was published, the IMM team updated their verification, validation, and credibility (VVC) project plan to meet 7009 requirements and include 7009 tools in reporting VVC status of the IMM. RESULTS: IMM VVC updates are compiled recurrently and include 7009 Compliance and Credibility matrices, IMM VV Plan status, and a synopsis of any changes or updates to the IMM during the reporting period. Reporting tools have evolved over the lifetime of the IMM project to better communicate VVC status. This has included refining original 7009 methodology with augmentation from the NASA-STD-7009 Guidance Document. End user requests and requirements are being satisfied as evidenced by ISS Program acceptance of IMM risk forecasts, transition to an operational model and

  4. Initialization and Validation of a Simulation of Cirrus Using FIRE-II Data.

    NASA Astrophysics Data System (ADS)

    Westphal, D. L.; Kinne, S.; Pilewskie, P.; Alvarez, J. M.; Minnis, P.; Young, D. F.; Benjamin, S. G.; Eberhard, W. L.; Kropfli, R. A.; Matrosov, S. Y.; Snider, J. B.; Uttal, T. A.; Heymsfield, A. J.; Mace, G. G.; Melfi, S. H.; Starr, D. O'c.; Soden, J. J.

    1996-12-01

    Observations from a wide variety of instruments and platforms are used to validate many different aspects of a three-dimensional mesoscale simulation of the dynamics, cloud microphysics, and radiative transfer of a cirrus cloud system observed on 26 November 1991 during the second cirrus field program of the First International Satellite Cloud Climatology Program (ISCCP) Regional Experiment (FIRE-II) located in southeastern Kansas. The simulation was made with a mesoscale dynamical model utilizing a simplified bulk water cloud scheme and a spectral model of radiative transfer. Expressions for cirrus optical properties for solar and infrared wavelength intervals as functions of ice water content and effective particle radius are modified for the midlatitude cirrus observed during FIRE-II and are shown to compare favorably with explicit size-resolving calculations of the optical properties. Rawinsonde, Raman lidar, and satellite data are evaluated and combined to produce a time-height cross section of humidity at the central FIRE-II site for model verification. Due to the wide spacing of rawinsondes and their infrequent release, important moisture features go undetected and are absent in the conventional analyses. The upper-tropospheric humidities used for the initial conditions were generally less than 50% of those inferred from satellite data, yet over the course of a 24-h simulation the model produced a distribution that closely resembles the large-scale features of the satellite analysis. The simulated distribution and concentration of ice compares favorably with data from radar, lidar, satellite, and aircraft. Direct comparison is made between the radiative transfer simulation and data from broadband and spectral sensors and inferred quantities such as cloud albedo, optical depth, and top-of-the-atmosphere 11-µm brightness temperature, and the 6.7-µm brightness temperature. Comparison is also made with theoretical heating rates calculated using the rawinsonde

  5. Boron-10 Lined Proportional Counter Model Validation

    SciTech Connect

    Lintereur, Azaree T.; Siciliano, Edward R.; Kouzes, Richard T.

    2012-06-30

    The Department of Energy Office of Nuclear Safeguards (NA-241) is supporting the project “Coincidence Counting With Boron-Based Alternative Neutron Detection Technology” at Pacific Northwest National Laboratory (PNNL) for the development of an alternative neutron coincidence counter. The goal of this project is to design, build and demonstrate a boron-lined proportional tube-based alternative system in the configuration of a coincidence counter. This report discusses the validation studies performed to establish the degree of accuracy of the computer modeling methods currently used to simulate the response of boron-lined tubes. This is the precursor to developing models for the uranium neutron coincidence collar under Task 2 of this project.

  6. Validation of the filament winding process model

    NASA Technical Reports Server (NTRS)

    Calius, Emilio P.; Springer, George S.; Wilson, Brian A.; Hanson, R. Scott

    1987-01-01

    Tests were performed toward validating the WIND model developed previously for simulating the filament winding of composite cylinders. In these tests, two 24 in. long, 8 in. diameter, and 0.285 in. thick cylinders, made of IM-6G fibers and HBRF-55 resin, were wound at ±45° on steel mandrels. The temperatures on the inner and outer surfaces and inside the composite cylinders were recorded during oven cure. The temperatures inside the cylinders were also calculated by the WIND model. The measured and calculated temperatures were then compared. In addition, the degree of cure and resin viscosity distributions inside the cylinders were calculated for the conditions which existed in the tests.

  7. Modelling and validation of electromechanical shock absorbers

    NASA Astrophysics Data System (ADS)

    Tonoli, Andrea; Amati, Nicola; Girardello Detoni, Joaquim; Galluzzi, Renato; Gasparin, Enrico

    2013-08-01

    Electromechanical vehicle suspension systems represent a promising substitute for conventional hydraulic solutions. However, the design of electromechanical devices that are able to supply high damping forces without exceeding geometric dimension and mass constraints is a difficult task. All these challenges come together in off-road vehicle suspension systems, where the power density of the dampers is a crucial parameter. In this context, the present paper outlines a particular shock absorber configuration where a suitable electric machine and a transmission mechanism are utilised to meet off-road vehicle requirements. A dynamic model is used to represent the device. Subsequently, experimental tests are performed on an actual prototype to verify the functionality of the damper and validate the proposed model.

  8. [Catalonia's primary healthcare accreditation model: a valid model].

    PubMed

    Davins, Josep; Gens, Montserrat; Pareja, Clara; Guzmán, Ramón; Marquet, Roser; Vallès, Roser

    2014-07-01

    There are few experiences of accreditation models validated by primary care teams (EAP). The aim of this study was to detail the process of design, development, and subsequent validation of the consensus EAP accreditation model of Catalonia. An Operating Committee of the Health Department of Catalonia revised models proposed by the European Foundation for Quality Management, the Joint Commission International and the Institut Català de la Salut, and proposed 628 essential standards to the technical group (25 experts in primary care and quality of care) to establish consensus standards. The consensus document was piloted in 30 EAP for the purpose of validating the contents, testing the standards and identifying evidence. Finally, we conducted a survey to assess acceptance and validation of the document. The Technical Group agreed on a total of 414 essential standards. The pilot retained a total of 379. Mean compliance with the standards of the final document in the 30 EAP was 70.4%; the results-related standards had the lowest fulfilment percentage. The survey showed that 83% of the EAP found the document useful and 78% found the content of the accreditation manual suitable as a tool to assess the quality of the EAP and to identify opportunities for improvement. On the downside, they highlighted its complexity and laboriousness. We have a model that fits the reality of the EAP and covers all issues relevant to the functioning of an excellent EAP. The model developed in Catalonia is easy to understand.

  9. Validation of Arabic and English versions of the ARSMA-II Acculturation Rating Scale.

    PubMed

    Jadalla, Ahlam; Lee, Jerry

    2015-02-01

    To translate and adapt the Acculturation Rating Scale of Mexican-Americans II (ARSMA-II) for Arab Americans. A multistage translation process was followed by a pilot and a large study. The translated and adapted versions, the Acculturation Rating Scale for Arabic Americans-II in Arabic and English (ARSAA-IIA, ARSAA-IIE), were validated in a sample of 297 Arab Americans. Factor analyses with principal axis factoring extractions and direct oblimin rotations were used to identify the underlying structure of the ARSAA-II. Factor analysis confirmed the underlying structure of the ARSAA-II and produced two interpretable factors labeled 'Attraction to American Culture' (AAmC) and 'Attraction to Arabic Culture' (AArC). The Cronbach's alphas of AAmC and AArC were .89 and .85, respectively. Findings support the use of the ARSAA-IIA and ARSAA-IIE to assess acculturation among Arab Americans. The emergent factors of the ARSAA-II support the theoretical structure of the original ARSMA-II tool and show high internal consistency.
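
    The internal-consistency figures quoted above come from Cronbach's alpha. As a point of reference only (this is not code from the study), a minimal sketch of that computation, assuming item responses are held in an (n respondents x k items) array; the sample data below are invented:

        import numpy as np

        def cronbach_alpha(items):
            """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)        # variance of each item
            total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
            return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

        # Invented example: 297 respondents x 6 Likert-type items driven by one latent trait
        rng = np.random.default_rng(0)
        latent = rng.normal(size=(297, 1))
        responses = np.clip(np.round(3 + latent + rng.normal(0, 0.7, size=(297, 6))), 1, 5)
        print(f"alpha = {cronbach_alpha(responses):.2f}")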

  10. Kinetic modeling of light limitation and sulfur deprivation effects in the induction of hydrogen production with Chlamydomonas reinhardtii. Part II: Definition of model-based protocols and experimental validation.

    PubMed

    Degrenne, B; Pruvost, J; Titica, M; Takache, H; Legrand, J

    2011-10-01

    Photosynthetic hydrogen production under light by the green microalga Chlamydomonas reinhardtii was investigated in a torus-shaped PBR in sulfur-deprived conditions. Culture conditions, represented by the dry biomass concentration of the inoculum, sulfate concentration, and incident photon flux density (PFD), were optimized based on a previously published model (Fouchard et al., 2009. Biotechnol Bioeng 102:232-245). This allowed a strictly autotrophic production, whereas the sulfur-deprived protocol is usually applied in photoheterotrophic conditions. Experimental results combined with additional information from kinetic simulations emphasize effects of sulfur deprivation and light attenuation in the PBR in inducing anoxia and hydrogen production. A broad range of PFD was tested (up to 500 µmol photons m⁻² s⁻¹). Maximum hydrogen productivities were 1.0 ± 0.2 mL H₂/h/L (or 25 ± 5 mL H₂/m²/h) and 3.1 ± 0.4 mL H₂/h/L (or 77.5 ± 10 mL H₂/m²/h), at 110 and 500 µmol photons m⁻² s⁻¹, respectively. These values approached a maximum specific productivity of approximately 1.9 ± 0.4 mL H₂/h/g of biomass dry weight, clearly indicative of a limitation in cell capacity to produce hydrogen. The efficiency of the process and further optimizations are discussed.

  11. Unit testing, model validation, and biological simulation

    PubMed Central

    Watts, Mark D.; Ghayoomie, S. Vahid; Larson, Stephen D.; Gerkin, Richard C.

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models. PMID:27635225
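
    To illustrate the kind of "model validation test" the article distinguishes from ordinary unit tests, here is a minimal pytest-style sketch (not OpenWorm code; the observable, reference value, and tolerance are hypothetical). The simulation output is compared against published reference data within a stated tolerance, rather than against an implementation detail:

        import pytest

        REFERENCE_RESTING_POTENTIAL_MV = -70.0   # hypothetical published reference value
        TOLERANCE_MV = 5.0

        def simulate_resting_potential():
            """Stand-in for a call into the model under test."""
            return -68.2   # placeholder result

        def test_resting_potential_matches_reference():
            # A model validation test: assert agreement with empirical data
            # within a tolerance, not exact equality with a hard-coded output.
            simulated = simulate_resting_potential()
            assert simulated == pytest.approx(REFERENCE_RESTING_POTENTIAL_MV, abs=TOLERANCE_MV)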

  12. Full-Scale Cookoff Model Validation Experiments

    SciTech Connect

    McClelland, M A; Rattanapote, M K; Heimdahl, E R; Erikson, W E; Curran, P O; Atwood, A I

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of the unreacted explosive was recovered in the end-heated experiment and less than 30 percent was recovered in the side-heated test.

  13. Constructing and Validating a Decadal Prediction Model

    NASA Astrophysics Data System (ADS)

    Foss, I.; Woolf, D. K.; Gagnon, A. S.; Merchant, C. J.

    2010-05-01

    For the purpose of identifying potential sources of predictability of Scottish mean air temperature (SMAT), a redundancy analysis (RA) was accomplished to quantitatively assess the predictability of SMAT from North Atlantic SSTs as well as the temporal consistency of this predictability. The RA was performed between the main principal components of North Atlantic SST anomalies and SMAT anomalies for two time periods: 1890-1960 and 1960-2006. The RA models developed using data from the 1890-1960 period were validated using the 1960-2006 period; in a similar way, the model developed based on the 1960-2006 period was validated using data from the 1890-1960 period. The results indicate the potential to forecast decadal trends in SMAT for all seasons in the 1960-2006 time period and for all seasons with the exception of winter for the period 1890-1960, with the best predictability achieved in summer. The statistical models show the best performance when SST anomalies in the European shelf seas (45°N-65°N, 20°W-20°E) rather than those for the SSTs over the entire North Atlantic (30°N-75°N, 80°W-30°E) were used as a predictor. The results of the RA demonstrated that similar SST modes were responsible for predictions in the first and second half of the 20th century, establishing temporal consistency, though with stronger influence in the more recent half. The SST pattern responsible for explaining the largest amount of variance in SMAT was stronger in the second half of the 20th century and showed increasing influence from the area of the North Sea, possibly due to faster sea-surface warming in that region in comparison with the open North Atlantic. The Wavelet Transform (WT), Cross Wavelet Transform (XWT) and Wavelet Coherence (WTC) techniques were used to analyse RA-model-based forecasts of SMAT in the time-frequency domain. Wavelet-based techniques applied to the predicted and observed time series revealed a good performance of RA models to predict the frequency variability

  14. Some useful statistical methods for model validation.

    PubMed Central

    Marcus, A H; Elias, R W

    1998-01-01

    Although formal hypothesis tests provide a convenient framework for displaying the statistical results of empirical comparisons, standard tests should not be used without consideration of underlying measurement error structure. As part of the validation process, predictions of individual blood lead concentrations from models with site-specific input parameters are often compared with blood lead concentrations measured in field studies that also report lead concentrations in environmental media (soil, dust, water, paint) as surrogates for exposure. Measurements of these environmental media are subject to several sources of variability, including temporal and spatial sampling, sample preparation and chemical analysis, and data entry or recording. Adjustments for measurement error must be made before statistical tests can be used to empirically compare environmental data with model predictions. This report illustrates the effect of measurement error correction using a real dataset of child blood lead concentrations for an undisclosed midwestern community. We illustrate both the apparent failure of some standard regression tests and the success of adjustment of such tests for measurement error using the SIMEX (simulation-extrapolation) procedure. This procedure adds simulated measurement error to model predictions and then subtracts the total measurement error, analogous to the method of standard additions used by analytical chemists. PMID:9860913
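
    The SIMEX idea can be illustrated with a toy example (not the blood lead dataset): refit the model after adding successively larger amounts of simulated measurement error to the error-prone predictor, then extrapolate the fitted coefficient back to the zero-measurement-error case (lambda = -1). A minimal sketch with invented data and an assumed known error variance:

        import numpy as np

        rng = np.random.default_rng(1)

        # Invented data: true predictor x, noisy surrogate w = x + u, outcome y
        n, sigma_u = 500, 0.8
        x = rng.normal(0, 1, n)
        w = x + rng.normal(0, sigma_u, n)
        y = 2.0 * x + rng.normal(0, 0.5, n)

        def slope(pred, resp):
            return np.polyfit(pred, resp, 1)[0]

        # SIMEX: refit with total error variance (1 + lambda) * sigma_u^2,
        # then extrapolate the naive slope back to lambda = -1 (no measurement error).
        lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        mean_slopes = []
        for lam in lambdas:
            reps = [slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y) for _ in range(50)]
            mean_slopes.append(np.mean(reps))

        quad = np.polyfit(lambdas, mean_slopes, 2)   # quadratic extrapolant
        simex_slope = np.polyval(quad, -1.0)
        print(f"naive slope {mean_slopes[0]:.2f}, SIMEX slope {simex_slope:.2f} (true value 2.0)")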

  15. Diurnal ocean surface layer model validation

    NASA Technical Reports Server (NTRS)

    Hawkins, Jeffrey D.; May, Douglas A.; Abell, Fred, Jr.

    1990-01-01

    The diurnal ocean surface layer (DOSL) model at the Fleet Numerical Oceanography Center forecasts the 24-hour change in global sea surface temperature (SST). Validating the DOSL model is a difficult task due to the huge areas involved and the lack of in situ measurements. Therefore, this report details the use of satellite infrared multichannel SST imagery to provide day and night SSTs that can be directly compared to DOSL products. This water-vapor-corrected imagery has the advantages of high thermal sensitivity (0.12 °C), large synoptic coverage (nearly 3000 km across), and high spatial resolution that enables diurnal heating events to be readily located and mapped. Several case studies in the subtropical North Atlantic readily show that DOSL results during extreme heating periods agree very well with satellite-imagery-derived values in terms of the pattern of diurnal warming. The low wind and cloud-free conditions necessary for these events to occur lend themselves well to observation via infrared imagery. Thus, the normally cloud-limited aspects of satellite imagery do not come into play for these particular environmental conditions. The fact that the DOSL model does well in extreme events is beneficial from the standpoint that these cases can be associated with the destruction of the surface acoustic duct. This so-called afternoon effect happens as the afternoon warming of the mixed layer disrupts the sound channel and the propagation of acoustic energy.

  16. Outward Bound Outcome Model Validation and Multilevel Modeling

    ERIC Educational Resources Information Center

    Luo, Yuan-Chun

    2011-01-01

    This study was intended to measure construct validity for the Outward Bound Outcomes Instrument (OBOI) and to predict outcome achievement from individual characteristics and course attributes using multilevel modeling. A sample of 2,340 participants was collected by Outward Bound USA between May and September 2009 using the OBOI. Two phases of…

  17. Validation of HEDR models. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid.

  18. Validating agent based models through virtual worlds.

    SciTech Connect

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOG), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social environment where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior

  19. The SEGUE Stellar Parameter Pipeline. II. Validation with Galactic Globular and Open Clusters

    SciTech Connect

    Lee, Y.S.; Beers, T.C.; Sivarani, T.; Johnson, J.A.; An, D.; Wilhelm, R.; Prieto, C.Allende; Koesterke, L.; Re Fiorentin, P.; Bailer-Jones, C.A.L.; Norris, J.E.

    2007-10-01

    The authors validate the performance and accuracy of the current SEGUE (Sloan Extension for Galactic Understanding and Exploration) Stellar Parameter Pipeline (SSPP), which determines stellar atmospheric parameters (effective temperature, surface gravity, and metallicity) by comparing derived overall metallicities and radial velocities from selected likely members of three globular clusters (M 13, M 15, and M 2) and two open clusters (NGC 2420 and M 67) to the literature values. Spectroscopic and photometric data obtained during the course of the original Sloan Digital Sky Survey (SDSS-I) and its first extension (SDSS-II/SEGUE) are used to determine stellar radial velocities and atmospheric parameter estimates for stars in these clusters. Based on the scatter in the metallicities derived for the members of each cluster, they quantify the typical uncertainty of the SSPP values, σ([Fe/H]) = 0.13 dex for stars in the range 4500 K ≤ Teff ≤ 7500 K and 2.0 ≤ log g ≤ 5.0, at least over the metallicity interval spanned by the clusters studied (-2.3 ≤ [Fe/H] < 0). The surface gravities and effective temperatures derived by the SSPP are also compared with those estimated from the comparison of the color-magnitude diagrams with stellar evolution models; they find satisfactory agreement. At present, the SSPP underestimates [Fe/H] for near-solar-metallicity stars, represented by members of M 67 in this study, by approximately 0.3 dex.

  20. Validation of the Beck Depression Inventory-II in a Low-Income African American Sample of Medical Outpatients

    ERIC Educational Resources Information Center

    Grothe, Karen B.; Dutton, Gareth R.; Jones, Glenn N.; Bodenlos, Jamie; Ancona, Martin; Brantley, Phillip J.

    2005-01-01

    The psychometric properties of the Beck Depression Inventory-II (BDI-II) are well established with primarily Caucasian samples. However, little is known about its reliability and validity with minority groups. This study evaluated the psychometric properties of the BDI-II in a sample of low-income African American medical outpatients (N = 220).…

  1. Reliability and validity of the test of gross motor development-II in Korean preschool children: applying AHP.

    PubMed

    Kim, Chung-Il; Han, Dong-Wook; Park, Il-Hyeok

    2014-04-01

    The Test of Gross Motor Development-II (TGMD-II) is a frequently used assessment tool for measuring motor ability. The purpose of this study was to investigate the reliability and validity of the TGMD-II's weighting scores (by comparing pre-weighted with post-weighted TGMD-II scores) and to examine the applicability of the TGMD-II to Korean preschool children. A total of 121 Korean children from three kindergartens participated in this study: 65 preschoolers who were 5 years old (37 boys and 28 girls) and 56 preschoolers who were 6 years old (34 boys and 22 girls). For internal consistency, reliability, and construct validity, a single researcher evaluated all of the children using the TGMD-II in the following areas: running; galloping; sliding; hopping; leaping; horizontal jumping; overhand throwing; underhand rolling; striking a stationary ball; stationary dribbling; kicking; and catching. For concurrent validity, the evaluator measured physical fitness (strength, flexibility, power, agility, endurance, and balance). The key findings were as follows: first, the reliability coefficient and the validity coefficient between pre-weighted and post-weighted TGMD-II scores were quite similar; second, the research showed adequate reliability and validity of the TGMD-II for Korean preschool children. The TGMD-II is a proper instrument for testing Korean children's motor development. However, applying relative weighting to the TGMD-II remains a point for consideration.

  2. The Impact of Using Documented but Uncharged Offense Data on JSORRAT-II Predictive Validity.

    PubMed

    Ralston, Christopher A; Sarkar, Amar; Philipp, Grace T; Epperson, Douglas L

    2017-03-01

    Following the implementation of sexual offender notification laws, researchers have found a drop in the rate of prosecutions and an increase in plea bargains for sexual offenses committed by male juveniles. This type of prosecutorial hesitation has implications for the predictive validity of sexual recidivism risk assessments, such as the Juvenile Sexual Offender Recidivism Risk Assessment Tool-II (JSORRAT-II), that require data from officially adjudicated offenses in the scoring of several items. The present study sought to test the impact of including data from documented but uncharged (DBU) sexual offenses in the scoring of the JSORRAT-II on its predictive validity, using an exhaustive sample of 1,095 juveniles who offended sexually from the states of Iowa and Utah. Although sexual recidivists had significantly more DBU data, the inclusion of those data did not improve the predictive validity of the tool. The authors discuss additional reasons why changes in prosecutorial practice might remain confounded in risk assessment studies and suggest future research to investigate those hypotheses.

  3. Coupled Disturbance Modelling And Validation Of A Reaction Wheel Model

    NASA Astrophysics Data System (ADS)

    Zhang, Zhe; Aglietti, Gugliemo S.

    2012-07-01

    Microvibrations of a reaction wheel assembly (RWA) are usually studied in either hard-mounted or coupled conditions, although coupled wheel-structure disturbances are more representative than the hard-mounted ones. The coupled analysis method for the wheel-structure is not as well developed as the hard-mounted one. A coupled disturbance analysis method is proposed in this paper. One of the most important factors in coupled disturbance analysis, the accelerance (or dynamic mass) of the wheel, is measured, and the results are validated with an equivalent FE model. The wheel hard-mounted disturbances are also measured on a vibration measurement platform designed specifically for this study. Wheel structural modes are solved from its analytical disturbance model and validated against the test results. A wheel-speed-dependent accelerance analysis method is also proposed.

  4. Design and Development Research: A Model Validation Case

    ERIC Educational Resources Information Center

    Tracey, Monica W.

    2009-01-01

    This is a report of one case of a design and development research study that aimed to validate an overlay instructional design model incorporating the theory of multiple intelligences into instructional systems design. After design and expert review model validation, The Multiple Intelligence (MI) Design Model, used with an Instructional Systems…

  5. SAGE III Aerosol Extinction Validation in the Arctic Winter: Comparisons with SAGE II and POAM III

    NASA Technical Reports Server (NTRS)

    Thomason, L. W.; Poole, L. R.; Randall, C. E.

    2007-01-01

    The use of SAGE III multiwavelength aerosol extinction coefficient measurements to infer PSC type is contingent on the robustness of both the extinction magnitude and its spectral variation. Past validation with SAGE II and other similar measurements has shown that the SAGE III extinction coefficient measurements are reliable, though the comparisons have been greatly weighted toward measurements made at mid-latitudes. Some aerosol comparisons made in the Arctic winter as a part of SOLVE II suggested that SAGE III values, particularly at longer wavelengths, are too small, with the implication that both the magnitude and the wavelength dependence are not reliable. Comparisons with POAM III have also suggested a similar discrepancy. Herein, we use SAGE II data as a common standard for comparison of SAGE III and POAM III measurements in the Arctic winters of 2002/2003 through 2004/2005. During the winter, SAGE II measurements are made infrequently at the same latitudes as these instruments. We have mitigated this problem through the use of potential vorticity as a spatial coordinate and thus greatly increased the number of coincident events. We find that SAGE II and III extinction coefficient measurements show a high degree of compatibility at both 1020 nm and 450 nm, except for a 10-20% bias at both wavelengths. In addition, the 452 to 1020-nm extinction ratio shows a consistent bias of approx. 30% throughout the lower stratosphere. We also find that SAGE II and POAM III are on average consistent, though the comparisons show a much higher variability and larger bias than the SAGE II/III comparisons. In addition, we find that the two data sets are not well correlated below 18 km. Overall, we find both the extinction values and the spectral dependence from SAGE III are robust and we find no evidence of a significant defect within the Arctic vortex.

  6. Techniques and Issues in Agent-Based Modeling Validation

    SciTech Connect

    Pullum, Laura L; Cui, Xiaohui

    2012-01-01

    Validation of simulation models is extremely important. It ensures that the right model has been built and lends confidence to the use of that model to inform critical decisions. Agent-based models (ABMs) have been widely deployed in different fields for studying the collective behavior of large numbers of interacting agents. However, researchers have only recently started to consider the issues of validation. Compared to other simulation models, ABMs differ considerably in model development, usage and validation. An ABM is inherently easier to build than a classical simulation, but more difficult to describe formally, since it is closer to human cognition. Using multi-agent models to study complex systems has attracted criticism because of the challenges involved in their validation [1]. In this report, we describe the challenge of ABM validation and present a novel approach we recently developed for an ABM system.

  7. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    SciTech Connect

    Freeman, J.; Whitmore, J.; Kaffine, L.; Blair, N.; Dobos, A. P.

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed-tilt cases and below 8% for all one-axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one-axis tracking issues are discussed in detail.
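
    The annualized prediction error reported above is, in essence, the relative difference between total modeled and total measured generation over a year. A minimal sketch of that comparison (synthetic hourly data, not the report's field measurements):

        import numpy as np

        def annualized_prediction_error(modeled_kwh, measured_kwh):
            """Relative error of total modeled vs. measured generation over one year."""
            modeled_kwh = np.asarray(modeled_kwh, dtype=float)
            measured_kwh = np.asarray(measured_kwh, dtype=float)
            return (modeled_kwh.sum() - measured_kwh.sum()) / measured_kwh.sum()

        # Synthetic hourly series for one year (8760 hours)
        rng = np.random.default_rng(2)
        measured = np.clip(rng.normal(50.0, 20.0, 8760), 0, None)   # kWh per hour
        modeled = measured * rng.normal(1.02, 0.05, 8760)           # model slightly over-predicts

        print(f"annualized error = {annualized_prediction_error(modeled, measured):+.1%}")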

  8. Model for Use of Sociometry to Validate Attitude Measures.

    ERIC Educational Resources Information Center

    McGuiness, Thomas P.; Stank, Peggy L.

    A study concerning the development and validation of an instrument intended to measure Goal II of quality education is presented. This goal is that quality education should help every child acquire understanding and appreciation of persons belonging to social, cultural and ethnic groups different from his own. The rationale for measurement…

  9. Model Verification and Validation Using Graphical Information Systems Tools

    DTIC Science & Technology

    2013-07-31

    system: (i) shoreline impacts; (ii) shelf seafloor profiles, which influence subsurface ocean currents; (iii) geographic effects on estuarine flows...functions are included within a validation box tool, a shoreline/seafloor profile tool, toolkits for satellite images, historical data, feature...littoral zone (continental shelf) is the result of complex interactions between oceanic flows, the seafloor, the shoreline, water input from rivers

  10. Harmonization of strategies for the validation of quantitative analytical procedures. A SFSTP proposal--part II.

    PubMed

    Hubert, Ph; Nguyen-Huu, J-J; Boulanger, B; Chapuzet, E; Chiap, P; Cohen, N; Compagnon, P-A; Dewé, W; Feinberg, M; Lallier, M; Laurentie, M; Mercier, N; Muzard, G; Nivet, C; Valat, L; Rozet, E

    2007-09-21

    As reported in a previous paper, the main objective of the new commission of the Société Française des Sciences et Techniques Pharmaceutiques (SFSTP) was the harmonisation of approaches for the validation of quantitative analytical procedures. In a series of meetings, members of this Commission first reviewed the objectives of analytical methods and of validation methods, and recommended the use of two-sided beta-expectation tolerance intervals for the total error of validation samples (accuracy profile) in the acceptance/rejection of an analytical method in the validation phase. In the context of the harmonization, the other objectives were: (i) to propose a consensus on the norms usually recognized, while widely incorporating the ISO terminology; (ii) to recommend validating the analytical procedure according to the way it will be used in routine; (iii) to elaborate a rational, practical and statistically reliable strategy to assure the quality of the analytical results generated. This strategy has been formalised in a guide, and the three latter objectives set by the Commission are summarised in the present paper, which is the second part of the summary report of the SFSTP commission. The SFSTP guide has been produced to help analysts validate their analytical methods. It is the result of a consensus between professionals having expertise in analytical and/or statistical fields. The suggestions presented in this paper should therefore help the analyst to design and perform the minimum number of validation experiments needed to obtain all the required information to establish and demonstrate the reliability of the analytical procedure.
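
    For orientation only, here is a simplified sketch of a two-sided beta-expectation tolerance interval on relative error at one concentration level, treating the validation results as a single series (the SFSTP guide itself handles intermediate-precision designs with between-series variance, which this sketch omits); the error values are invented:

        import numpy as np
        from scipy import stats

        def beta_expectation_tolerance_interval(relative_errors, beta=0.95):
            """Two-sided beta-expectation tolerance interval on relative error (%),
            computed like a prediction interval for one future result
            (simplified single-series case)."""
            x = np.asarray(relative_errors, dtype=float)
            n, mean, s = x.size, x.mean(), x.std(ddof=1)
            t = stats.t.ppf((1 + beta) / 2, df=n - 1)
            half_width = t * s * np.sqrt(1 + 1 / n)
            return mean - half_width, mean + half_width

        # Invented validation-sample relative errors (%) at one concentration level
        errors = [-1.2, 0.8, 2.1, -0.5, 1.4, 0.3, -2.0, 1.1]
        low, high = beta_expectation_tolerance_interval(errors)
        print(f"accuracy profile interval at this level: [{low:.1f}%, {high:.1f}%]")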

  11. Validation of the Wisconsin Personality Disorders Inventory-IV with the SCID-II.

    PubMed

    Smith, Tracey L; Klein, Marjorie H; Benjamin, Lorna S

    2003-06-01

    The Wisconsin Personality Disorders Inventory (WISPI-IV; Klein & Benjamin, 1996) is the latest version of a self-report measure of DSM-IV personality disorders (PDs) derived from an interpersonal perspective. When categorical diagnoses derived from the WISPI-IV were compared with independent SCID-II diagnoses, the majority of the kappas were poor (<.40). However, all but one of the effect sizes for the differences in WISPI-IV means between groups with and without SCID-II diagnoses were large (>.80). When SCID-II and WISPI-IV dimensional scores were considered, the average r between profiles was .61 (median = .58) and correlations between corresponding PD scales (mean diagonal r = .48; mean off-diagonal r = .18) indicated good convergent and discriminant validity for five of the WISPI-IV scales. These results add to the accumulating evidence suggesting greater reliability and validity of dimensional over categorical scores for PDs. Researchers and clinicians interested in having an efficient method of assessing PDs may consider using a dimensional approach such as the WISPI-IV as an alternative to diagnostic interview.

  12. Development and Validation of the Juvenile Sexual Offense Recidivism Risk Assessment Tool-II.

    PubMed

    Epperson, Douglas L; Ralston, Christopher A

    2015-12-01

    This article describes the development and initial validation of the Juvenile Sexual Offense Recidivism Risk Assessment Tool-II (JSORRAT-II). Potential predictor variables were extracted from case file information for an exhaustive sample of 636 juveniles in Utah who sexually offended between 1990 and 1992. Simultaneous and hierarchical logistic regression analyses were used to identify the group of variables that was most predictive of subsequent juvenile sexual recidivism. A simple categorical scoring system was applied to these variables without meaningful loss of accuracy in the development sample for any sexual (area under the curve [AUC] = .89) and sexually violent (AUC = .89) juvenile recidivism. The JSORRAT-II was cross-validated on an exhaustive sample of 566 juveniles who had sexually offended in Utah in 1996 and 1997. Reliability of scoring the tool across five coders was quite high (intraclass correlation coefficient [ICC] = .96). Relative to the development sample, however, there was considerable shrinkage in the indices of predictive accuracy for any sexual (AUC = .65) and sexually violent (AUC = .65) juvenile recidivism. The reduced level of accuracy was not explained by severity of the index sexual offense, time at risk, or missing data. Capitalization on chance and other explanations for the possible reduction in predictive accuracy are explored, and potential uses and limitations of the tool are discussed.
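
    The development/cross-validation pattern described above (fit a logistic regression on one exhaustive sample, then evaluate discrimination on a later sample) can be sketched as follows; the predictors and outcomes here are synthetic stand-ins, not JSORRAT-II items or Utah data:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)

        def make_sample(n):
            X = rng.normal(size=(n, 5))                        # 5 hypothetical predictors
            logit = X @ np.array([0.8, 0.5, 0.0, -0.3, 0.2]) - 2.0
            y = rng.binomial(1, 1 / (1 + np.exp(-logit)))      # recidivism indicator
            return X, y

        X_dev, y_dev = make_sample(636)    # development sample size from the abstract
        X_val, y_val = make_sample(566)    # cross-validation sample size

        model = LogisticRegression().fit(X_dev, y_dev)
        print("development AUC:", roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]))
        print("validation AUC: ", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))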

  13. Development and Validation of Reaction Wheel Disturbance Models: Empirical Model

    NASA Astrophysics Data System (ADS)

    Masterson, R. A.; Miller, D. W.; Grogan, R. L.

    2002-01-01

    Accurate disturbance models are necessary to predict the effects of vibrations on the performance of precision space-based telescopes, such as the Space Interferometry Mission (SIM). There are many possible disturbance sources on such spacecraft, but mechanical jitter from the reaction wheel assembly (RWA) is anticipated to be the largest. A method has been developed and implemented in the form of a MATLAB toolbox to extract parameters for an empirical disturbance model from RWA micro-vibration data. The disturbance model is based on one that was used to predict the vibration behaviour of the Hubble Space Telescope (HST) wheels and assumes that RWA disturbances consist of discrete harmonics of the wheel speed with amplitudes proportional to the wheel speed squared. The MATLAB toolbox allows the extension of this empirical disturbance model for application to any reaction wheel given steady state vibration data. The toolbox functions are useful for analyzing RWA vibration data, and the model provides a good estimate of the disturbances over most wheel speeds. However, it is shown that the disturbances are under-predicted by a model of this form over some wheel speed ranges. The poor correlation is due to the fact that the empirical model does not account for disturbance amplifications caused by interactions between the harmonics and the structural modes of the wheel. Experimental data from an ITHACO Space Systems E-type reaction wheel are used to illustrate the model development and validation process.
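
    The empirical disturbance model described above treats the RWA disturbance as a sum of discrete harmonics of the wheel speed whose amplitudes scale with the wheel speed squared. A minimal sketch of that functional form (the harmonic numbers, coefficients, and phases below are invented, not fitted to any wheel's data):

        import numpy as np

        def rwa_disturbance(t, omega_hz, harmonics, coeffs, phases):
            """Empirical RWA disturbance: sum of harmonics of wheel speed, with
            amplitudes proportional to wheel speed squared (invented coefficients)."""
            force = np.zeros_like(t)
            for h, c, phi in zip(harmonics, coeffs, phases):
                force += c * omega_hz**2 * np.sin(2 * np.pi * h * omega_hz * t + phi)
            return force

        t = np.linspace(0.0, 1.0, 10_000)
        harmonics = [1.0, 2.0, 4.4]      # invented harmonic numbers
        coeffs = [1e-4, 3e-5, 5e-6]      # N per (rev/s)^2, invented
        phases = [0.0, 0.7, 1.3]
        f = rwa_disturbance(t, omega_hz=30.0, harmonics=harmonics, coeffs=coeffs, phases=phases)
        print(f"peak disturbance ~ {np.abs(f).max():.3f} N at 30 rev/s")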

  14. Three Dimensional Vapor Intrusion Modeling: Model Validation and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Akbariyeh, S.; Patterson, B.; Rakoczy, A.; Li, Y.

    2013-12-01

    Volatile organic chemicals (VOCs), such as chlorinated solvents and petroleum hydrocarbons, are prevalent groundwater contaminants due to their improper disposal and accidental spillage. In addition to contaminating groundwater, VOCs may partition into the overlying vadose zone and enter buildings through gaps and cracks in foundation slabs or basement walls, a process termed vapor intrusion. Vapor intrusion of VOCs has been recognized as a detrimental source of human exposure to potentially carcinogenic or toxic compounds. The simulation of vapor intrusion from a subsurface source has been the focus of many studies to better understand the process and guide field investigation. While multiple analytical and numerical models have been developed to simulate the vapor intrusion process, detailed validation of these models against well-controlled experiments is still lacking, due to the complexity and uncertainties associated with site characterization and with soil gas flux and indoor air concentration measurement. In this work, we present an effort to validate a three-dimensional vapor intrusion model based on a well-controlled experimental quantification of the vapor intrusion pathways into a slab-on-ground building under varying environmental conditions. Finally, a probabilistic approach based on Monte Carlo simulations is implemented to determine the probability distribution of indoor air concentration based on the most uncertain input parameters.
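
    The probabilistic step described in the last sentence can be sketched as a simple Monte Carlo wrapper: sample the most uncertain inputs from assumed distributions, evaluate the model for each draw, and summarize the resulting indoor air concentration distribution. The inner function below is a crude attenuation-style stand-in for the actual three-dimensional model, and all distributions are illustrative:

        import numpy as np

        rng = np.random.default_rng(4)

        def indoor_air_concentration(source_conc, soil_permeability, crack_ratio):
            """Stand-in for the 3-D vapor intrusion model: a simple attenuation-style
            relation used only to illustrate the Monte Carlo wrapper."""
            attenuation = 1e-4 * (soil_permeability / 1e-12) * (crack_ratio / 1e-3)
            return source_conc * attenuation

        n = 10_000
        samples = indoor_air_concentration(
            source_conc=rng.lognormal(mean=np.log(100.0), sigma=0.5, size=n),   # ug/m^3
            soil_permeability=rng.lognormal(np.log(1e-12), 0.8, size=n),        # m^2
            crack_ratio=rng.uniform(5e-4, 2e-3, size=n),
        )
        print("median:", np.median(samples), "95th percentile:", np.percentile(samples, 95))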

  15. Translation, adaptation and validation of a Portuguese version of the Moorehead-Ardelt Quality of Life Questionnaire II.

    PubMed

    Maciel, João; Infante, Paulo; Ribeiro, Susana; Ferreira, André; Silva, Artur C; Caravana, Jorge; Carvalho, Manuel G

    2014-11-01

    The prevalence of obesity has increased worldwide. An assessment of the impact of obesity on health-related quality of life (HRQoL) requires specific instruments. The Moorehead-Ardelt Quality of Life Questionnaire II (MA-II) is a widely used instrument to assess HRQoL in morbidly obese patients. The objective of this study was to translate and validate a Portuguese version of the MA-II. The study included forward and backward translations of the original MA-II. The reliability of the Portuguese MA-II was estimated using the internal consistency and test-retest methods. For validation purposes, the Spearman's rank correlation coefficient was used to evaluate the correlation between the Portuguese MA-II and the Portuguese versions of two other questionnaires, the 36-item Short Form Health Survey (SF-36) and the Impact of Weight on Quality of Life-Lite (IWQOL-Lite). One hundred and fifty morbidly obese patients were randomly assigned to test the reliability and validity of the Portuguese MA-II. Good internal consistency was demonstrated by a Cronbach's alpha coefficient of 0.80, and a very good agreement in terms of test-retest reliability was recorded, with an overall intraclass correlation coefficient (ICC) of 0.88. The total sums of MA-II scores and each item of the MA-II were significantly correlated with all domains of the SF-36 and IWQOL-Lite. A statistically significant negative correlation was found between the MA-II total score and BMI. Moreover, age, gender and surgical status were independent predictors of MA-II total score. A reliable and valid Portuguese version of the MA-II was produced, thus enabling the routine use of the MA-II in the morbidly obese Portuguese population.

  16. Validation subset selections for extrapolation oriented QSPAR models.

    PubMed

    Szántai-Kis, Csaba; Kövesdi, István; Kéri, György; Orfi, László

    2003-01-01

    One of the most important features of QSPAR models is their predictive ability. The predictive ability of QSPAR models should be checked by external validation. In this work we examined three different types of external validation set selection methods for their usefulness in in-silico screening. The usefulness of the selection methods was studied as follows: 1) we generated thousands of QSPR models and stored them in 'model banks'; 2) we selected a final top model from the model banks based on each of the three validation set selection methods; 3) we predicted large data sets, which we called 'chemical universe sets', and calculated the corresponding standard errors of prediction (SEPs). The models were generated from small fractions of the available water solubility data during a GA Variable Subset Selection procedure. The external validation sets were constructed by random selection, uniformly distributed selection, or perimeter-oriented selection. We found that the best performing models on the perimeter-oriented external validation sets usually gave the best validation results when the remaining part of the available data was overwhelmingly large, i.e., when the model had to make a lot of extrapolations. We also compared the top final models obtained from the external validation set selection methods in three independent 'chemical universe sets' of different sizes.

  17. A framework for biodynamic feedthrough analysis--part II: validation and application.

    PubMed

    Venrooij, Joost; van Paassen, Marinus M; Mulder, Mark; Abbink, David A; Mulder, Max; van der Helm, Frans C T; Bulthoff, Heinrich H

    2014-09-01

    Biodynamic feedthrough (BDFT) is a complex phenomenon that has been studied for several decades. However, there is little consensus on how to approach the BDFT problem in terms of definitions, nomenclature, and mathematical descriptions. In this paper, the framework for BDFT analysis, as presented in Part I of this dual publication, is validated and applied. The goal of this framework is twofold. First of all, it provides some common ground between the seemingly large range of different approaches in the BDFT literature. Secondly, the framework itself allows for gaining new insights into BDFT phenomena. Using recently obtained measurement data, parts of the framework that were not already addressed elsewhere are validated. As an example of a practical application of the framework, it will be demonstrated how the effects of control device dynamics on BDFT can be understood and accurately predicted. Other ways of employing the framework are illustrated by interpreting the results of three selected studies from the literature using the BDFT framework. The presentation of the BDFT framework is divided into two parts. This paper, Part II, addresses the validation and application of the framework. Part I, which is also published in this journal issue, addresses the theoretical foundations of the framework. The work is presented in two separate papers to allow for a detailed discussion of both the framework's theoretical background and its validation.

  18. Development of a Conservative Model Validation Approach for Reliable Analysis

    DTIC Science & Technology

    2015-01-01

    Simulation models are approximations of real-world physical systems. Therefore, simulation model validation is necessary...Validation, Insufficient Experimental Data, Conservative Output PDF, Bayesian Analysis, Model Bias, Calibration Parameter 1. INTRODUCTION Computer...designed product. However, an accurate, safe, and reliable simulation model is not easy to obtain due to the approximate imitation of real systems

  19. Line emission from H II blister models

    NASA Technical Reports Server (NTRS)

    Rubin, R. H.

    1984-01-01

    Numerical techniques to calculate the thermal and geometric properties of line emission from H II 'blister' regions are presented. It is assumed that the density distributions of the H II regions are a function of two dimensions, with rotational symmetry specifying the shape in three dimensions. The thermal and ionization equilibrium equations of the problem are solved by spherical modeling, and a spherical sector approximation is used to simplify the three-dimensional treatment of diffuse ionizing radiation. The global properties of H II 'blister' regions near the edges of a molecular cloud are simulated by means of the geometry/density distribution, and the results are compared with observational data. It is shown that there is a monotonic increase of peak surface brightness from the i = 0 deg (pole-on) observational position to the i = 90 deg (edge-on) position. The enhancement of the line peak intensity from the edge-on to the pole-on positions is found to depend on the density, stratification, ionization, and electron temperature weighting. It is found that as i increases, the position of peak line brightness of the lower-excitation species is displaced to the high-density side of the higher-excitation species.

  20. Uncertainty and validation of health economic decision models.

    PubMed

    Kim, Lois G; Thompson, Simon G

    2010-01-01

    Health economic decision models are based on specific assumptions relating to model structure and parameter estimation. Validation of these models is recommended as an indicator of reliability, but is not commonly reported. Furthermore, models derived from different data and employing different assumptions may produce a variety of results. A Markov model for evaluating the long-term cost-effectiveness of screening for abdominal aortic aneurysm is described. Internal, prospective and external validations are carried out using individual participant data from two randomised trials. Validation is assessed in terms of total numbers and timings of key events, and total costs and life-years. Since the initial model validates well only internally, two further models are developed that better fit the prospective and external validation data. All three models are then extrapolated to a life-time horizon, producing cost-effectiveness estimates ranging from £1600 to £4200 per life-year gained. Parameter uncertainty is now commonly addressed in health economic decision modelling. However, the derivation of models from different data sources adds another level of uncertainty. This extra uncertainty should be recognised in practical decision-making and, where possible, specifically investigated through independent model validation.
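
    As a generic illustration of the kind of Markov cohort calculation involved (this is not the published abdominal aortic aneurysm screening model; the states, transition probabilities, costs and screening cost are all invented), a minimal sketch comparing a screened and an unscreened cohort:

        import numpy as np

        # States: 0 = alive without rupture, 1 = post-rupture, 2 = dead (invented).
        P_SCREEN = np.array([[0.96, 0.01, 0.03],
                             [0.00, 0.85, 0.15],
                             [0.00, 0.00, 1.00]])
        P_NO_SCREEN = np.array([[0.95, 0.02, 0.03],
                                [0.00, 0.85, 0.15],
                                [0.00, 0.00, 1.00]])
        CYCLE_COST = np.array([50.0, 5000.0, 0.0])   # cost per state per year, invented
        CYCLE_LY = np.array([1.0, 1.0, 0.0])         # life-years accrued per state per year

        def run(P, cycles=30, screening_cost=0.0):
            state = np.array([1.0, 0.0, 0.0])        # whole cohort starts alive, no rupture
            cost, ly = screening_cost, 0.0
            for _ in range(cycles):
                state = state @ P                    # one annual Markov cycle
                cost += state @ CYCLE_COST
                ly += state @ CYCLE_LY
            return cost, ly

        c1, l1 = run(P_SCREEN, screening_cost=100.0)
        c0, l0 = run(P_NO_SCREEN)
        print(f"incremental cost per life-year gained ~ {(c1 - c0) / (l1 - l0):.0f}")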

  1. Validation of the Greek translation of the obesity-specific Moorehead-Ardelt Quality-of-Life Questionnaire II.

    PubMed

    Charalampakis, Vasileios; Daskalakis, Markos; Bertsias, Georgios; Papadakis, John A; Melissas, John

    2012-05-01

    Morbid obesity adversely affects quality of life. The assessment of health-related quality of life (HRQoL) needs specific measuring instruments. The Moorehead-Ardelt Quality-of-Life Questionnaire II (MA II) is an obesity-specific instrument widely used in bariatric surgery. The objective of this study was to translate and validate the MA II in Greek language. The study included the translation of the MA II followed by cross-validation with the Greek version of 36-item Short Form Health Survey (SF-36) and a Visual Analogue Scale (VAS) in subjects visiting an obesity clinic. Internal consistency was indicated by Cronbach's alpha coefficient and test-retest reliability by intraclass correlation coefficient (ICC). Construct validity was studied using Pearson's correlations between the MA II, the SF-36 and the VAS. A total of 175 patients were enrolled in the study. Test-retest analysis was applied to 40 patients with a 15-day interval. A very good internal consistency with Cronbach's alpha coefficient of 0.85 was shown. Excellent test-retest reliability was observed with an overall ICC of 0.981. Significant correlations between the Greek MA II and the other instruments as well as of each item of the MA II with the scores of SF-36 and the VAS indicated high construct and convergent validity. A negative correlation between the translated MA II total score and BMI confirmed high clinical validity. The Greek version of the MA II questionnaire has been generated and shown to be valid and reliable in measuring HRQoL in morbidly obese patients before and after bariatric surgery.

  2. Effects of Mg II and Ca II ionization on ab-initio solar chromosphere models

    NASA Technical Reports Server (NTRS)

    Rammacher, W.; Cuntz, M.

    1991-01-01

    Acoustically heated solar chromosphere models are computed considering radiation damping by (non-LTE) emission from H(-) and by Mg II and Ca II emission lines. The radiative transfer equations for the Mg II k and Ca II K emission lines are solved using the core-saturation method with complete redistribution. The Mg II k and Ca II K cooling rates are compared with the VAL model C. Several substantial improvements over the work of Ulmschneider et al. (1987) are included. It is found that the rapid temperature rises caused by the ionization of Mg II are not formed in the middle chromosphere, but occur at larger atmospheric heights. These models represent the temperature structure of the 'real' solar chromosphere much better. This result is a major precondition for the study of ab-initio models for solar flux tubes based on MHD wave propagation and also for ab-initio models for the solar transition layer.

  3. Photon number conserving models of H II bubbles during reionization

    NASA Astrophysics Data System (ADS)

    Paranjape, Aseem; Choudhury, T. Roy; Padmanabhan, Hamsa

    2016-08-01

    Traditional excursion-set-based models of H II bubble growth during the epoch of reionization are known to violate photon number conservation, in the sense that the mass fraction in ionized bubbles in these models does not equal the ratio of the number of ionizing photons produced by sources and the number of hydrogen atoms in the intergalactic medium. E.g. for a Planck13 cosmology with electron scattering optical depth τ ≃ 0.066, the discrepancy is ˜15 per cent for x_{H II}=0.1 and ˜5 per cent for x_{H II}=0.5. We demonstrate that this problem arises from a fundamental conceptual shortcoming of the excursion-set approach (already recognized in the literature on this formalism) which only tracks average mass fractions instead of the exact, stochastic source counts. With this insight, we build an approximately photon number conserving Monte Carlo model of bubble growth based on partitioning regions of dark matter into haloes. Our model, which is formally valid for white noise initial conditions (ICs), shows dramatic improvements in photon number conservation, as well as substantial differences in the bubble size distribution, as compared to traditional models. We explore the trends obtained on applying our algorithm to more realistic ICs, finding that these improvements are robust to changes in the ICs. Since currently popular seminumerical schemes of bubble growth also violate photon number conservation, we argue that it will be worthwhile to pursue new, explicitly photon number conserving approaches. Along the way, we clarify some misconceptions regarding this problem that have appeared in the literature.

  4. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  5. Validation of Numerical Shallow Water Models for Tidal Lagoons

    SciTech Connect

    Eliason, D.; Bourgeois, A.

    1999-11-01

    An analytical solution is presented for the case of a stratified, tidally forced lagoon. This solution, especially its energetics, is useful for the validation of numerical shallow water models under stratified, tidally forced conditions. The utility of the analytical solution for validation is demonstrated for a simple finite difference numerical model. A comparison is presented of the energetics of the numerical and analytical solutions in terms of the convergence of model results to the analytical solution with increasing spatial and temporal resolution.
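
    The validation pattern described, checking that a numerical model converges to a known analytical solution as resolution increases, can be illustrated with a deliberately simple stand-in (a forward-Euler integration of an exponential decay, not the shallow water equations):

        import numpy as np

        def numerical_solution(dt, t_end=1.0, k=2.0, y0=1.0):
            """Forward-Euler integration of dy/dt = -k*y up to t_end."""
            steps = int(round(t_end / dt))
            y = y0
            for _ in range(steps):
                y += dt * (-k * y)       # forward Euler step
            return y

        analytic = np.exp(-2.0 * 1.0)    # exact y(t_end) for k = 2, y0 = 1
        for dt in (0.1, 0.05, 0.025, 0.0125):
            err = abs(numerical_solution(dt) - analytic)
            print(f"dt={dt:7.4f}  error={err:.2e}")
        # Halving dt should roughly halve the error (first-order convergence).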

  6. Psychometric validation of the BDI-II among HIV-positive CHARTER study participants.

    PubMed

    Hobkirk, Andréa L; Starosta, Amy J; De Leo, Joseph A; Marra, Christina M; Heaton, Robert K; Earleywine, Mitch

    2015-06-01

    Rates of depression are high among individuals living with HIV. Accurate assessment of depressive symptoms among this population is important for ensuring proper diagnosis and treatment. The Beck Depression Inventory-II (BDI-II) is a widely used measure for assessing depression; however, its psychometric properties have not yet been investigated for use with HIV-positive populations in the United States. The current study was the first to assess the psychometric properties of the BDI-II among a large cohort of HIV-positive participants sampled at multiple sites across the United States as part of the CNS HIV Antiretroviral Therapy Effects Research (CHARTER) study. The BDI-II test scores showed good internal consistency (α = .93) and adequate test-retest reliability (internal consistency coefficient = 0.83) over a 6-month period. Using a "gold standard" of major depressive disorder determined by the Composite International Diagnostic Interview, sensitivity and specificity were maximized at a total cut-off score of 17, and a receiver operating characteristic analysis confirmed that the BDI-II is an adequate diagnostic measure for the sample (area under the curve = 0.83). The sensitivity and specificity of each score are provided graphically. Confirmatory factor analyses confirmed the best fit for a three-factor model over one-factor and two-factor models and models with a higher-order factor included. The results suggest that the BDI-II is an adequate measure for assessing depressive symptoms among U.S. HIV-positive patients. Cut-off scores should be adjusted to enhance sensitivity or specificity as needed, and the measure can be differentiated into cognitive, affective, and somatic depressive symptoms.
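
    The cut-off selection described above can be sketched with a standard ROC analysis in which the threshold maximizing sensitivity plus specificity (the Youden index) is reported; the scores below are synthetic, not CHARTER data:

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(5)

        # Synthetic BDI-II-like total scores: depressed cases tend to score higher.
        y_true = np.r_[np.zeros(400, dtype=int), np.ones(100, dtype=int)]
        scores = np.r_[rng.normal(10, 6, 400), rng.normal(24, 8, 100)].clip(0, 63).round()

        fpr, tpr, thresholds = roc_curve(y_true, scores)
        youden = tpr - fpr                    # sensitivity + specificity - 1
        best = np.argmax(youden)
        print(f"AUC = {roc_auc_score(y_true, scores):.2f}")
        print(f"cut-off ~ {thresholds[best]:.0f}: sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")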

  7. Ionospheric Modeling: Development, Verification and Validation

    DTIC Science & Technology

    2007-08-15

    Investigation of the Reliability of the ESIR Ionogram Autoscaling Method (Expert System for Ionogram Reduction) (ESIR.book.pdf, Dec 06); Quality...Figures and Error Bars for Autoscaled Vertical Incidence Ionograms. Background and User Documentation for QualScan V2007.2 (AFRL_QualScan.book.pdf, Feb...); Distribution of Ionosonde Locations (USU_old_new.book.pdf, Jul 07); Validation of QualScan when applied to Ionograms scaled by ARTIST 5 (A5V.book.pdf, Jul 07)

  8. Using virtual reality to validate system models

    SciTech Connect

    Winter, V.L.; Caudell, T.P.

    1999-12-09

    To date, most validation techniques are highly biased towards calculations involving symbolic representations of problems. These calculations are either formal (in the case of consistency and completeness checks) or informal (in the case of code inspections). The authors believe that an essential type of evidence of the correctness of the formalization process must be provided by (i.e., must originate from) human-based calculation. They further believe that human calculation can be significantly amplified by shifting from symbolic representations to graphical representations. This paper describes their preliminary efforts in realizing such a representational shift.

  9. Measurements of Humidity in the Atmosphere and Validation Experiments (Mohave, Mohave II): Results Overview

    NASA Technical Reports Server (NTRS)

    Leblanc, Thierry; McDermid, Iain S.; McGee, Thomas G.; Twigg, Laurence W.; Sumnicht, Grant K.; Whiteman, David N.; Rush, Kurt D.; Cadirola, Martin P.; Venable, Demetrius D.; Connell, R.; Demoz, Belay B.; Vomel, Holger; Miloshevich, L.

    2008-01-01

    The Measurements of Humidity in the Atmosphere and Validation Experiments (MOHAVE, MOHAVE-II) inter-comparison campaigns took place at the Jet Propulsion Laboratory (JPL) Table Mountain Facility (TMF, 34.5°N) in October 2006 and 2007, respectively. Both campaigns aimed at evaluating the capability of three Raman lidars for the measurement of water vapor in the upper troposphere and lower stratosphere (UT/LS). During each campaign, more than 200 hours of lidar measurements were compared to balloon-borne measurements obtained from 10 Cryogenic Frost-point Hygrometer (CFH) flights and over 50 Vaisala RS92 radiosonde flights. During MOHAVE, fluorescence in all three lidar receivers was identified, causing a significant wet bias above 10-12 km in the lidar profiles as compared to the CFH. All three lidars were reconfigured after MOHAVE, and no such bias was observed during the MOHAVE-II campaign. The lidar profiles agreed very well with the CFH up to 13-17 km altitude, where the lidar measurements become noise-limited. The results from MOHAVE-II have shown that the water vapor Raman lidar will be an appropriate technique for the long-term monitoring of water vapor in the UT/LS given a slight increase in its power-aperture, as well as careful calibration.

  10. Validity Measures in the Context of Latent Trait Models.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    Test validity is a concept that has often been ignored in the context of latent trait models and in modern test theory, particularly as it relates to computerized adaptive testing. Some considerations about the validity of a test and of a single item are proposed. This paper focuses on measures that are population-free and that will provide local…

  11. Teacher Change Beliefs: Validating a Scale with Structural Equation Modelling

    ERIC Educational Resources Information Center

    Kin, Tai Mei; Abdull Kareem, Omar; Nordin, Mohamad Sahari; Wai Bing, Khuan

    2015-01-01

    The objectives of the study were to validate a substantiated Teacher Change Beliefs Model (TCBM) and an instrument to identify critical components of teacher change beliefs (TCB) in Malaysian secondary schools. Five different pilot test approaches were applied to ensure the validity and reliability of the instrument. A total of 936 teachers from…

  12. The development and validation of a CT-based radiomics signature for the preoperative discrimination of stage I-II and stage III-IV colorectal cancer

    PubMed Central

    He, Lan; Chen, Xin; Ma, Zelan; Dong, Di; Tian, Jie; Liang, Changhong; Liu, Zaiyi

    2016-01-01

    Objectives: To investigate the predictive ability of a radiomics signature for preoperative staging (I-II vs. III-IV) of primary colorectal cancer (CRC). Methods: This study consisted of 494 consecutive patients (training dataset: n=286; validation dataset: n=208) with stage I-IV CRC. A radiomics signature was generated using a LASSO logistic regression model. The association between the radiomics signature and CRC staging was explored. The classification performance of the radiomics signature was assessed with respect to the receiver operating characteristic (ROC) curve. Results: The 16-feature-based radiomics signature was an independent predictor for staging of CRC, which could successfully categorize CRC into stage I-II and III-IV (p < 0.0001) in the training and validation datasets. The median radiomics signature of stage III-IV was higher than that of stage I-II in the training and validation datasets. As for the classification performance of the radiomics signature in CRC staging, the AUC was 0.792 (95% CI: 0.741-0.853) with a sensitivity of 0.629 and a specificity of 0.874. The signature in the validation dataset obtained an AUC of 0.708 (95% CI: 0.698-0.718) with a sensitivity of 0.611 and a specificity of 0.680. Conclusions: A radiomics signature was developed and validated as a significant predictor for discriminating stage I-II from stage III-IV CRC, and may serve as a complementary tool for preoperative tumor staging in CRC. PMID:27120787
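
    The LASSO logistic regression step can be illustrated with a short, hedged Python sketch using scikit-learn; the feature matrices and labels below are random stand-ins, and the penalty strength C is an arbitrary illustrative choice, not the value used by the authors.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.metrics import roc_auc_score

      # Hypothetical stand-ins: X_train/X_val are radiomic feature matrices,
      # y_train/y_val are 0 for stage I-II and 1 for stage III-IV.
      rng = np.random.default_rng(1)
      X_train, y_train = rng.normal(size=(286, 150)), rng.integers(0, 2, 286)
      X_val, y_val = rng.normal(size=(208, 150)), rng.integers(0, 2, 208)

      # L1-penalised (LASSO) logistic regression drives most coefficients to zero,
      # leaving a sparse "radiomics signature" of retained features.
      model = make_pipeline(
          StandardScaler(),
          LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
      )
      model.fit(X_train, y_train)
      n_selected = np.count_nonzero(model[-1].coef_)
      auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
      print(f"features retained: {n_selected}, validation AUC: {auc:.2f}")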

  13. Methodology Investigation of Software Maturity Model Validation

    DTIC Science & Technology

    1989-11-13

    Cassondra Renfro provided helpful comments in the review of the report. SECTION 1. SUMMARY. 1.1 BACKGROUND. Software has become a major part of Command ... example, is that each software fault that is discovered is either corrected or not counted again. Brooks and Motley's models, on the other hand, assume ... Between Error Occurrence; ERROR COUNT MODELS: 1. The Generalized Poisson Model, 2. The Non-Homogeneous Poisson Model, 3. The Brooks and Motley Model, 4. The ...

  14. Exploring the Validity of Valproic Acid Animal Model of Autism

    PubMed Central

    Mabunga, Darine Froy N.; Gonzales, Edson Luck T.; Kim, Ji-woon; Kim, Ki Chan

    2015-01-01

    The valproic acid (VPA) animal model of autism spectrum disorder (ASD) is one of the most widely used animal models in the field. Like any other disease model, it cannot capture the totality of the features seen in autism. Is it, then, valid as a model of autism? This model demonstrates many of the structural and behavioral features that can be observed in individuals with autism. These similarities enable the model to define relevant pathways of developmental dysregulation resulting from environmental manipulation. The uncovering of these complex pathways has resulted in a growing pool of potential therapeutic candidates addressing the core symptoms of ASD. Here, we summarize the validity points of VPA that may or may not qualify it as a valid animal model of ASD. PMID:26713077

  15. Economic analysis of model validation for a challenge problem

    SciTech Connect

    Paez, Paul J.; Paez, Thomas L.; Hasselman, Timothy K.

    2016-02-19

    It is now commonplace for engineers to build mathematical models of the systems they are designing, building, or testing. And, it is nearly universally accepted that phenomenological models of physical systems must be validated prior to use for prediction in consequential scenarios. Yet, there are certain situations in which testing only or no testing and no modeling may be economically viable alternatives to modeling and its associated testing. This paper develops an economic framework within which benefit–cost can be evaluated for modeling and model validation relative to other options. The development is presented in terms of a challenge problem. As a result, we provide a numerical example that quantifies when modeling, calibration, and validation yield higher benefit–cost than a testing only or no modeling and no testing option.

  16. Economic analysis of model validation for a challenge problem

    DOE PAGES

    Paez, Paul J.; Paez, Thomas L.; Hasselman, Timothy K.

    2016-02-19

    It is now commonplace for engineers to build mathematical models of the systems they are designing, building, or testing. And, it is nearly universally accepted that phenomenological models of physical systems must be validated prior to use for prediction in consequential scenarios. Yet, there are certain situations in which testing only or no testing and no modeling may be economically viable alternatives to modeling and its associated testing. This paper develops an economic framework within which benefit–cost can be evaluated for modeling and model validation relative to other options. The development is presented in terms of a challenge problem. As a result, we provide a numerical example that quantifies when modeling, calibration, and validation yield higher benefit–cost than a testing only or no modeling and no testing option.

  17. Gear Windage Modeling Progress - Experimental Validation Status

    NASA Technical Reports Server (NTRS)

    Kunz, Rob; Handschuh, Robert F.

    2008-01-01

    In the Subsonics Rotary Wing (SRW) Project funding propulsion work at NASA Glenn Research Center, performance of the propulsion system is of high importance. In current rotorcraft drive systems, many gearing components operate at high rotational speed (pitch line velocity > 24,000 ft/min). In testing of high-speed helical gear trains at NASA Glenn, we have found that the work done on the air-oil mist within the gearbox can become a significant part of the power loss of the system. This loss mechanism is referred to as windage. The effort described in this presentation is to understand the variables that affect windage and to develop a good experimental database to validate the analytical work being conducted at Penn State University by Dr. Rob Kunz under a NASA SRW NRA. The presentation provides an update on the status of these efforts.

  18. An Investigation of Human Performance Model Validation

    DTIC Science & Technology

    2005-03-01

    from both models and empirical activities, and the construction of scenarios that will sufficiently exercise model and human participants. ... Soar to represent peripheral players/platforms in simulation-based exercises that are used to evaluate system design concepts and tactics. ... We expect ... and operational concepts can best be employed to assist them. As noted above, models used in the JSB exercise must be highly accurate and accredited ...

  19. Mortality prediction in Indian cardiac surgery patients: Validation of European System for Cardiac Operative Risk Evaluation II

    PubMed Central

    Kar, Prachi; Geeta, Kanithi; Gopinath, Ramachandran; Durga, Padmaja

    2017-01-01

    Background and Aims: Risk stratification has an important place in cardiac surgery for identifying high-risk cases and allocating resources optimally; hence, various risk-scoring systems have been used to predict mortality. The aim of the present study was to validate the European System for Cardiac Operative Risk Evaluation II (EuroSCORE II) in Indian cardiac surgical patients. Methods: After obtaining ethics committee clearance, data on EuroSCORE II variables were collected for all patients >18 years undergoing on-pump coronary artery bypass graft (CABG), valve surgery and mixed (CABG + valve) procedures between January 2011 and December 2012. Mortality prediction was done using the online calculator from the site www.euroscore.org. The calibration of the EuroSCORE II model was analysed using the Hosmer–Lemeshow test and discrimination was analysed by plotting receiver operating characteristic (ROC) curves and calculating the area under the curve (AUC). The analysis was done in the total sample, CABG, valve surgery and mixed procedures. Results: The overall observed mortality was 5.7% in the total sample, 6.6% in CABG, 4.2% in valve surgeries and 10.2% in mixed procedures, whereas the predicted mortality was 2.9%, 3.1%, 2.4% and 5.1% in the total sample, CABG, valve surgery and mixed procedures, respectively. The significance (P value) of the Hosmer–Lemeshow test was 0.292, 0.45, 0.56 and 1 for the total sample, CABG, valve surgery and mixed procedures, respectively, indicating good calibration. The AUC of the ROC was 0.76, 0.70, 0.83 and 0.78 for the total sample, CABG, valve surgery and mixed procedures, respectively. Conclusion: Mortality of the sample was under-predicted by EuroSCORE II. Calibration of the EuroSCORE II model was good for the total sample as well as for all surgical subcategories. Discrimination was good in the total sample and in the mixed procedure population, acceptable in CABG patients and excellent in valve surgeries. PMID:28250485
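
    A hedged Python sketch of the two validation checks named above (Hosmer–Lemeshow calibration and ROC discrimination); euroscore_pred and died are hypothetical stand-in arrays, and the degrees-of-freedom convention for the Hosmer–Lemeshow test varies by setting.

      import numpy as np
      from scipy.stats import chi2
      from sklearn.metrics import roc_auc_score

      def hosmer_lemeshow(y, p, groups=10):
          """Hosmer-Lemeshow goodness-of-fit test: bin patients by deciles of
          predicted risk p and compare observed vs expected deaths per bin."""
          order = np.argsort(p)
          bins = np.array_split(order, groups)
          stat = 0.0
          for idx in bins:
              obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
              stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
          # df convention varies; groups - 2 shown here
          return stat, chi2.sf(stat, groups - 2)

      # Hypothetical stand-ins: euroscore_pred = predicted mortality (0-1),
      # died = observed in-hospital mortality (0/1)
      rng = np.random.default_rng(2)
      euroscore_pred = rng.beta(1, 20, 500)
      died = rng.binomial(1, np.clip(euroscore_pred * 2, 0, 1))

      stat, pval = hosmer_lemeshow(died, euroscore_pred)
      print(f"H-L p = {pval:.2f} (p > 0.05 suggests acceptable calibration)")
      print(f"discrimination AUC = {roc_auc_score(died, euroscore_pred):.2f}")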

  20. Spectral modeling of Type II SNe

    NASA Astrophysics Data System (ADS)

    Dessart, Luc

    2015-08-01

    The red supergiant phase represents the final stage of evolution in the life of moderate-mass (8-25 Msun) massive stars. Hidden from view, the core changes its structure considerably, progressing through the advanced stages of nuclear burning, and eventually becomes degenerate. Upon reaching the Chandrasekhar mass, this Fe or ONeMg core collapses, leading to the formation of a proto-neutron star. A Type II supernova results if the shock that forms at core bounce eventually wins over the envelope accretion and reaches the progenitor surface. The electromagnetic display of such core-collapse SNe starts with this shock breakout and persists for months as the ejecta releases the energy deposited initially by the shock or continuously through radioactive decay. Over a timescale of weeks to months, the originally optically-thick ejecta thins out and turns nebular. SN radiation contains a wealth of information about the explosion physics (energy, explosive nucleosynthesis) and the progenitor properties (structure and composition). Polarised radiation also offers signatures that can help constrain the morphology of the ejecta. In this talk, I will review the current status of Type II SN spectral modelling and emphasise that a proper solution requires a time-dependent treatment of the radiative transfer problem. I will discuss the wealth of information that can be gleaned from spectra as well as light curves, from both early times (photospheric phase) and late times (nebular phase). I will discuss the diversity of Type II SNe properties and how they are related to the diversity of the red supergiant stars from which they originate. SN radiation offers an alternate means of constraining the properties of red supergiant stars. To wrap up, I will illustrate how SNe II-P can also be used as probes, for example to constrain the metallicity of their environment.

  1. Validation of a national hydrological model

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Booker, D. J.; Cattoën, C.

    2016-10-01

    Nationwide predictions of flow time-series are valuable for development of policies relating to environmental flows, calculating reliability of supply to water users, or assessing risk of floods or droughts. This breadth of model utility is possible because various hydrological signatures can be derived from simulated flow time-series. However, producing national hydrological simulations can be challenging due to strong environmental diversity across catchments and a lack of data available to aid model parameterisation. A comprehensive and consistent suite of test procedures to quantify spatial and temporal patterns in performance across various parts of the hydrograph is described and applied to quantify the performance of an uncalibrated national rainfall-runoff model of New Zealand. Flow time-series observed at 485 gauging stations were used to calculate Nash-Sutcliffe efficiency and percent bias when simulating between-site differences in daily series, between-year differences in annual series, and between-site differences in hydrological signatures. The procedures were used to assess the benefit of applying a correction to the modelled flow duration curve based on an independent statistical analysis. They were used to aid understanding of climatological, hydrological and model-based causes of differences in predictive performance by assessing multiple hypotheses that describe where and when the model was expected to perform best. As the procedures produce quantitative measures of performance, they provide an objective basis for model assessment that could be applied when comparing observed daily flow series with competing simulated flow series from any region-wide or nationwide hydrological model. Model performance varied in space and time with better scores in larger and medium-wet catchments, and in catchments with smaller seasonal variations. Surprisingly, model performance was not sensitive to aquifer fraction or rain gauge density.
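
    The two headline metrics above can be computed directly from paired observed and simulated flow series; a minimal Python sketch follows, using illustrative numbers rather than data from the 485 stations (note that percent-bias sign conventions vary).

      import numpy as np

      def nash_sutcliffe(obs, sim):
          """Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than
          predicting the observed mean, negative is worse than the mean."""
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def percent_bias(obs, sim):
          """Percent bias (one common sign convention): positive values
          indicate overall overestimation by the model."""
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 100.0 * np.sum(sim - obs) / np.sum(obs)

      # Illustrative daily flows (m3/s) at one hypothetical gauging station
      obs = [12.0, 15.5, 30.2, 22.1, 18.0, 14.3]
      sim = [11.0, 17.0, 26.5, 24.0, 16.8, 15.1]
      print(f"NSE = {nash_sutcliffe(obs, sim):.2f}, PBIAS = {percent_bias(obs, sim):+.1f}%")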

  2. Uncertainty Quantification and Validation for RANS Turbulence Models

    NASA Astrophysics Data System (ADS)

    Oliver, Todd; Moser, Robert

    2011-11-01

    Uncertainty quantification and validation procedures for RANS turbulence models are developed and applied. The procedures used here rely on a Bayesian view of probability. In particular, the uncertainty quantification methodology requires stochastic model development, model calibration, and model comparison, all of which are pursued using tools from Bayesian statistics. Model validation is also pursued in a probabilistic framework. The ideas and processes are demonstrated on a channel flow example. Specifically, a set of RANS models--including Baldwin-Lomax, Spalart-Allmaras, k-ε, k-ω, and v2-f--and uncertainty representations are analyzed using DNS data for fully-developed channel flow. Predictions of various quantities of interest and the validity (or invalidity) of the various models for making those predictions will be examined. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].

  3. Highlights of Transient Plume Impingement Model Validation and Applications

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael

    2011-01-01

    This paper describes highlights of an ongoing validation effort conducted to assess the viability of applying a set of analytic point source transient free molecule equations to model behavior ranging from molecular effusion to rocket plumes. The validation effort includes encouraging comparisons to both steady and transient studies involving experimental data and direct simulation Monte Carlo results. Finally, this model is applied to describe features of two exotic transient scenarios involving NASA Goddard Space Flight Center satellite programs.

  4. Model Validations and Predictions for Water Barrier Defense

    DTIC Science & Technology

    1998-05-12

    simplifying assumptions that were made. Using the Rayleigh-Plesset equation for modeling a spherical adiabatic gas bubble oscillating in an infinite ... [table-of-contents fragment: Validations of the Two-Dimensional Model; Plume Observations; Microwave Data; Probe Data; Optimal Depth Study; Validations of the Three ...] ... maximum volume. If this maximum volume has an equivalent spherical radius that is between approximately one and two times the initial charge depth, a ...

  5. HEDR model validation plan. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  6. Angular Distribution Models for Top-of-Atmosphere Radiative Flux Estimation from the Clouds and the Earth's Radiant Energy System Instrument on the Tropical Rainfall Measuring Mission Satellite. Part II; Validation

    NASA Technical Reports Server (NTRS)

    Loeb, N. G.; Loukachine, K.; Wielicki, B. A.; Young, D. F.

    2003-01-01

    Top-of-atmosphere (TOA) radiative fluxes from the Clouds and the Earth's Radiant Energy System (CERES) are estimated from empirical angular distribution models (ADMs) that convert instantaneous radiance measurements to TOA fluxes. This paper evaluates the accuracy of CERES TOA fluxes obtained from a new set of ADMs developed for the CERES instrument onboard the Tropical Rainfall Measuring Mission (TRMM). The uncertainty in regional monthly mean reflected shortwave (SW) and emitted longwave (LW) TOA fluxes is less than 0.5 W/sq m, based on comparisons with TOA fluxes evaluated by direct integration of the measured radiances. When stratified by viewing geometry, TOA fluxes from different angles are consistent to within 2% in the SW and 0.7% (or 2 W/sq m) in the LW. In contrast, TOA fluxes based on ADMs from the Earth Radiation Budget Experiment (ERBE) applied to the same CERES radiance measurements show a 10% relative increase with viewing zenith angle in the SW and a 3.5% (9 W/sq m) decrease with viewing zenith angle in the LW. Based on multiangle CERES radiance measurements, 18 regional instantaneous TOA flux errors from the new CERES ADMs are estimated to be 10 W/sq m in the SW and 3.5 W/sq m in the LW. The errors show little or no dependence on cloud phase, cloud optical depth, and cloud infrared emissivity. An analysis of cloud radiative forcing (CRF) sensitivity to differences between ERBE and CERES TRMM ADMs, scene identification, and directional models of albedo as a function of solar zenith angle shows that ADM and clear-sky scene identification differences can lead to an 8 W/sq m root-mean-square (rms) difference in 18 daily mean SW CRF and a 4 W/sq m rms difference in LW CRF. In contrast, monthly mean SW and LW CRF differences reach 3 W/sq m. CRF is found to be relatively insensitive to differences between the ERBE and CERES TRMM directional models.

  7. Validation of Modeling Flow Approaching Navigation Locks

    DTIC Science & Technology

    2013-08-01

    instrumentation, direction vernier . ........................................................................ 8  Figure 11. Plan A lock approach, upstream approach...13-9 8 Figure 9. Tools and instrumentation, bracket attached to rail. Figure 10. Tools and instrumentation, direction vernier . Numerical model

  8. VERIFICATION AND VALIDATION OF THE SPARC MODEL

    EPA Science Inventory

    Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values--that is, the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships that allow estimation of some ...

  9. Validating predictions from climate envelope models

    USGS Publications Warehouse

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
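
    A minimal Python sketch of the sensitivity/specificity evaluation described above, assuming hypothetical presence/absence vectors for the later survey period; it is not the authors' code or data.

      import numpy as np

      def sensitivity_specificity(observed, predicted):
          """Sensitivity = correctly classified presences; specificity =
          correctly classified absences (both coded 1 = present, 0 = absent)."""
          observed, predicted = np.asarray(observed), np.asarray(predicted)
          tp = np.sum((observed == 1) & (predicted == 1))
          fn = np.sum((observed == 1) & (predicted == 0))
          tn = np.sum((observed == 0) & (predicted == 0))
          fp = np.sum((observed == 0) & (predicted == 1))
          return tp / (tp + fn), tn / (tn + fp)

      # Hypothetical t2 survey data (1 = species recorded) and model predictions
      observed_t2  = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
      predicted_t2 = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
      sens, spec = sensitivity_specificity(observed_t2, predicted_t2)
      print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")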

  10. Theory and Implementation of Nuclear Safety System Codes - Part II: System Code Closure Relations, Validation, and Limitations

    SciTech Connect

    Glenn A Roth; Fatih Aydogan

    2014-09-01

    This is Part II of two articles describing the details of thermal-hydraulic system codes. In this second part of the article series, the system code closure relationships (used to model thermal and mechanical non-equilibrium and the coupling of the phases) for the governing equations are discussed and evaluated. These include several thermal and hydraulic models, such as heat transfer coefficients for various flow regimes, two phase pressure correlations, two phase friction correlations, drag coefficients and interfacial models between the fields. These models are often developed from experimental data. The experiment conditions should be understood to evaluate the efficacy of the closure models. Code verification and validation, including Separate Effects Tests (SETs) and Integral Effects Tests (IETs), is also assessed. It can be shown from the assessments that the test cases cover a significant section of the system code capabilities, but some of the more advanced reactor designs will push the limits of validation for the codes. Lastly, the limitations of the codes are discussed by considering next generation power plants, such as Small Modular Reactors (SMRs), analyzing not only existing nuclear power plants, but also next generation nuclear power plants. The nuclear industry is developing new, innovative reactor designs, such as Small Modular Reactors (SMRs), High-Temperature Gas-cooled Reactors (HTGRs) and others. Sub-types of these reactor designs utilize pebbles, prismatic graphite moderators, helical steam generators, innovative fuel types, and many other design features that may not be fully analyzed by current system codes. This second part completes the series on the comparison and evaluation of the selected reactor system codes by discussing the closure relations, validation and limitations. These two articles indicate areas where the models can be improved to adequately address issues with new reactor design and development.

  11. Shuttle Spacesuit: Fabric/LCVG Model Validation

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tweed, J.; Zeitlin, C.; Kim, M.-H. Y.; Anderson, B. M.; Cucinotta, F. A.; Ware, J.; Persans, A. E.

    2001-01-01

    A detailed spacesuit computational model is being developed at the Langley Research Center for radiation exposure evaluation studies. The details of the construction of the spacesuit are critical to estimation of exposures and assessing the risk to the astronaut on EVA. Past evaluations of spacesuit shielding properties assumed the basic fabric lay-up (Thermal Micrometeoroid Garment, fabric restraints, and pressure envelope) and Liquid Cooling and Ventilation Garment (LCVG) could be homogenized as a single layer, overestimating the protective properties over 60 percent of the fabric area. The present spacesuit model represents the inhomogeneous distributions of LCVG materials (mainly the water-filled cooling tubes). An experimental test is performed using a 34-MeV proton beam and high-resolution detectors to compare with model-predicted transmission factors. Some suggestions are made on possible improved construction methods to improve the spacesuit's protection properties.

  12. Distinguishing Valid from Invalid Causal Indicator Models

    ERIC Educational Resources Information Center

    Cadogan, John W.; Lee, Nick

    2016-01-01

    In this commentary from Issue 14, n3, authors John Cadogan and Nick Lee applaud the paper by Aguirre-Urreta, Rönkkö, and Marakas "Measurement: Interdisciplinary Research and Perspectives", 14(3), 75-97 (2016), since their explanations and simulations work toward demystifying causal indicator models, which are often used by scholars…

  13. Validation of a Model of Personalised Learning

    ERIC Educational Resources Information Center

    Waldrip, Bruce; Yu, Jeong Jin; Prain, Vaughan

    2016-01-01

    This article focuses on a Personalised Learning model which has 19 scales that were used to evaluate regional students' perceptions of their readiness to learn, assessment processes, engagement, extent to which their learning is personalised and their associations with academic efficacy, academic achievement and student well-being. The data came…

  14. WEPP: Model use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Water Erosion Prediction Project (WEPP) model is a process-based, continuous simulation, distributed parameter, hydrologic and soil erosion prediction system. It has been developed over the past 25 years to allow for easy application to a large number of land management scenarios. Most general o...

  15. WEPP: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Water Erosion Prediction Project (WEPP) model is a process-based, continuous simulation, distributed parameter, hydrologic and soil erosion prediction system. It has been developed over the past 25 years to allow for easy application to a large number of land management scenarios. Most general o...

  16. Validation of the Poisson Stochastic Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Zhuravleva, Tatiana; Marshak, Alexander

    2004-01-01

    A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - the cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it was shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

  17. Towards better clinical prediction models: seven steps for development and an ABCD for validation

    PubMed Central

    Steyerberg, Ewout W.; Vergouwe, Yvonne

    2014-01-01

    Clinical prediction models provide risk estimates for the presence of disease (diagnosis) or an event in the future course of disease (prognosis) for individual patients. Although publications that present and evaluate such models are becoming more frequent, the methodology is often suboptimal. We propose that seven steps should be considered in developing prediction models: (i) consideration of the research question and initial data inspection; (ii) coding of predictors; (iii) model specification; (iv) model estimation; (v) evaluation of model performance; (vi) internal validation; and (vii) model presentation. The validity of a prediction model is ideally assessed in fully independent data, where we propose four key measures to evaluate model performance: calibration-in-the-large, or the model intercept (A); calibration slope (B); discrimination, with a concordance statistic (C); and clinical usefulness, with decision-curve analysis (D). As an application, we develop and validate prediction models for 30-day mortality in patients with an acute myocardial infarction. This illustrates the usefulness of the proposed framework to strengthen the methodological rigour and quality for prediction models in cardiovascular research. PMID:24898551
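
    A hedged Python sketch of the A, B, and C measures listed above (decision-curve analysis, D, is omitted for brevity); the predicted risks and outcomes are simulated stand-ins, and statsmodels and scikit-learn are assumed to be available.

      import numpy as np
      import statsmodels.api as sm
      from sklearn.metrics import roc_auc_score

      # Hypothetical external-validation data: p = predicted 30-day mortality risk
      # from a previously developed model, y = observed outcome (1 = died)
      rng = np.random.default_rng(3)
      p = np.clip(rng.beta(2, 15, 1000), 1e-6, 1 - 1e-6)
      y = rng.binomial(1, p)

      lp = np.log(p / (1 - p))                 # linear predictor (logit of predicted risk)

      # (A) calibration-in-the-large: intercept of a logistic model with lp as offset
      a_model = sm.GLM(y, np.ones((y.size, 1)), family=sm.families.Binomial(), offset=lp).fit()
      # (B) calibration slope: slope of a logistic regression of y on lp
      b_model = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
      # (C) discrimination: concordance (c) statistic, equal to the ROC AUC
      c_stat = roc_auc_score(y, p)

      print(f"A (intercept)   = {a_model.params[0]:+.2f}")  # ~0 when risks are well calibrated
      print(f"B (slope)       = {b_model.params[1]:.2f}")   # ~1 when the spread of risks is right
      print(f"C (c-statistic) = {c_stat:.2f}")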

  18. Analysis of Experimental Data for High Burnup PWR Spent Fuel Isotopic Validation - Vandellos II Reactor

    SciTech Connect

    Ilas, Germina; Gauld, Ian C

    2011-01-01

    This report is one of several recent NUREG/CR reports documenting benchmark-quality radiochemical assay data and the use of the data to validate computer code predictions of isotopic composition for spent nuclear fuel, in order to establish the uncertainty and bias associated with code predictions. The experimental data analyzed in the current report were acquired from a high-burnup fuel program coordinated by Spanish organizations. The measurements included extensive actinide and fission product data of importance to spent fuel safety applications, including burnup credit, decay heat, and radiation source terms. Six unique spent fuel samples from three uranium oxide fuel rods were analyzed. The fuel rods had a 4.5 wt % ²³⁵U initial enrichment and were irradiated in the Vandellos II pressurized water reactor operated in Spain. The burnups of the fuel samples range from 42 to 78 GWd/MTU. The measurements were used to validate the two-dimensional depletion sequence TRITON in the SCALE computer code system.

  19. Cross-Validation for Nonlinear Mixed Effects Models

    PubMed Central

    Colby, Emily; Bair, Eric

    2013-01-01

    Cross-validation is frequently used for model selection in a variety of applications. However, it is difficult to apply cross-validation to mixed effects models (including nonlinear mixed effects models or NLME models) due to the fact that cross-validation requires “out-of-sample” predictions of the outcome variable, which cannot be easily calculated when random effects are present. We describe two novel variants of cross-validation that can be applied to nonlinear mixed effects models. One variant, where out-of-sample predictions are based on post hoc estimates of the random effects, can be used to select the overall structural model. Another variant, where cross-validation seeks to minimize the estimated random effects rather than the estimated residuals, can be used to select covariates to include in the model. We show that these methods produce accurate results in a variety of simulated data sets and apply them to two publicly available population pharmacokinetic data sets. PMID:23532511
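
    As a rough, hedged illustration of out-of-sample comparison by cross-validation, the sketch below applies ordinary k-fold cross-validation to two competing population (fixed-effects-only) pharmacokinetic structural models; it deliberately omits the random-effects machinery that the article's variants are designed to handle, and all data, models, and parameter values are hypothetical.

      import numpy as np
      from scipy.optimize import curve_fit

      # One-compartment oral-absorption curve (population/fixed effects only)
      def one_cpt(t, ka, ke, V):
          return (1.0 / V) * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

      def mono_exp(t, ke, V):                   # competing, simpler structural model
          return (1.0 / V) * np.exp(-ke * t)

      rng = np.random.default_rng(4)
      t = np.tile(np.linspace(0.5, 24, 10), 8)  # 8 hypothetical subjects, 10 samples each
      y = one_cpt(t, 1.2, 0.15, 20.0) * np.exp(rng.normal(0, 0.15, t.size))

      def cv_rmse(model, p0, k=4):
          """k-fold cross-validated RMSE of out-of-sample predictions."""
          idx = rng.permutation(t.size)
          errs = []
          for fold in np.array_split(idx, k):
              train = np.setdiff1d(idx, fold)
              popt, _ = curve_fit(model, t[train], y[train], p0=p0, maxfev=10000)
              errs.append(np.sqrt(np.mean((y[fold] - model(t[fold], *popt)) ** 2)))
          return np.mean(errs)

      print("1-compartment CV-RMSE:  ", round(cv_rmse(one_cpt, [1.0, 0.1, 15.0]), 4))
      print("mono-exponential CV-RMSE:", round(cv_rmse(mono_exp, [0.1, 15.0]), 4))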

  20. Validation of Air Traffic Controller Workload Models

    DTIC Science & Technology

    1979-09-01

    SAR) tapes during the data reduction phase of the project. Kentron International Limited provided the software support for the project. This included ... ETABS) or to revised traffic control procedures. The models also can be used to verify productivity benefits after new configurations have been ... collected and processed manually. A preliminary comparison has been made between standard NAS Stage A and ETABS operations at Miami.

  1. Model Validation for Simulations of Vehicle Systems

    DTIC Science & Technology

    2012-08-01

    a large number of replicate samples via Monte Carlo simulation. The test data, on the other hand, is usually provided as a collection of point ... can be determined by Monte Carlo simulation. Classical hypothesis testing techniques depend on a normality assumption except for the modified ... criteria", Computer Methods in Applied Mechanics and Engineering, 197:2517-2539, 2008. [14] S. Ferson, W. L. Oberkampf and L. Ginzburg, "Model ...

  2. Finite State Machines and Modal Models in Ptolemy II

    DTIC Science & Technology

    2009-11-01

    Finite State Machines and Modal Models in Ptolemy II. Edward A. Lee, Electrical Engineering and Computer Sciences, University of California at Berkeley ... This report describes the usage and semantics of finite-state machines (FSMs) and modal models in Ptolemy II. FSMs are actors whose behavior is described using a ...

  3. A model for the separation of cloud and aerosol in SAGE II occultation data

    NASA Technical Reports Server (NTRS)

    Kent, G. S.; Winker, D. M.; Osborn, M. T.; Skeens, K. M.

    1993-01-01

    The Stratospheric Aerosol and Gas Experiment (SAGE) II satellite experiment measures the extinction due to aerosols and thin cloud, at wavelengths of 0.525 and 1.02 micrometers, down to an altitude of 6 km. The wavelength dependence of the extinction due to aerosols differs from that of the extinction due to cloud and is used as the basis of a model for separating these two components. The model is presented and its validation using airborne lidar data, obtained coincident with SAGE II observations, is described. This comparison shows that smaller SAGE II cloud extinction values correspond to the presence of subvisible cirrus cloud in the lidar record. Examples of aerosol and cloud data products obtained using this model to interpret SAGE II upper tropospheric and lower stratospheric data are also shown.

  4. Experiments for foam model development and validation.

    SciTech Connect

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F.; Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  5. ARCJET plasma modeling with experimental validation

    NASA Astrophysics Data System (ADS)

    Krier, Herman; Burton, Rodney L.; Megli, Thomas W.; Bufton, Scott A.; Tiliakos, Nicholas T.

    1994-09-01

    We report for the first time thermal non-equilibrium (separate electron and gas temperatures) numerical results for a hydrazine arcjet. All viscous flow properties are considered, assuming laminar axisymmetric flow. The model includes anode temperature distribution, and the electrical conductivity is coupled to the flow properties, allowing for a self-consistent current distribution. The numerical solution algorithm employs the compressible form of the PISO algorithm to solve the continuity and momentum equations. Run time is a few hours on a Convex C240 Mainframe with a 44 x 24 grid. Numerical results are presented for low power hydrogen and hydrazine thrusters. Preliminary results of quadruple electrostatic probe measurements at the exit plane of a 1 kW hydrazine arcjet, including ne and Te profiles, are presented. The quadruple probe model includes the effects of Te and Ne gradients across the probe volume to extract Te and Ne radial profiles from the asymmetric raw probe data. A time-of-flight electrostatic probe technique for measuring heavy particle velocities is described which, when coupled with the quadruple probe data, can yield radial profiles of Ne(r), Te(r), Ti(r) and Ui(r). Experimental investigations of the energy deposition processes in the nozzle and constrictor regions of a 1-2 kW hydrazine arcjet are being performed. Electron number density and electron temperature measurements, using an array of flush-mounted Langmuir probes, will be made in the boundary layer.

  6. Validating Requirements for Fault Tolerant Systems Using Model Checking

    NASA Technical Reports Server (NTRS)

    Schneider, Francis; Easterbrook, Steve M.; Callahan, John R.; Holzmann, Gerard J.

    1997-01-01

    Model checking is shown to be an effective tool in validating the behavior of a fault-tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched, allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves, by definition, a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system, one of which is an error in the detailed requirements; the other two are missing/ambiguous requirements. Because the method allows validation of partial specifications, it is also an effective methodology for maintaining fidelity between a co-evolving specification and an implementation.

  7. Modeling Topaz-II system performance

    SciTech Connect

    Lee, H.H.; Klein, A.C. )

    1993-01-01

    The US acquisition of the Topaz-II in-core thermionic space reactor test system from Russia provides a good opportunity to perform a comparison of the Russian reported data and the results from computer codes such as MCNP (Ref. 3) and TFEHX (Ref. 4). The comparison study includes both neutronic and thermionic performance analyses. The Topaz II thermionic reactor is modeled with MCNP using actual Russian dimensions and parameters. The computation of the neutronic performance considers several important aspects such as the fuel enrichment and location of the thermionic fuel elements (TFEs) in the reactor core. The neutronic analysis included the calculation of both radial and axial power distributions, which are then used in the TFEHX code for electrical performance. The reactor modeled consists of 37 single-cell TFEs distributed in a 13-cm-radius zirconium hydride block surrounded by 8 cm of beryllium metal reflector. The TFEs use 90% enriched ²³⁵U and molybdenum coated with a thin layer of ¹⁸⁴W for the emitter surface. Electrons emitted are captured by a collector surface with a gap filled with cesium vapor between the collector and emitter surfaces. The collector surface is electrically insulated with alumina. Liquid NaK provides the cooling system for the TFEs. The axial thermal power distribution is obtained by dividing the TFE into 40 axial nodes. Comparison of the true axial power distribution with that produced by electrical heaters was also performed.

  8. Validation of Orthorectified Interferometric Radar Imagery and Digital Elevation Models

    NASA Technical Reports Server (NTRS)

    Smith, Charles M.

    2004-01-01

    This work was performed under NASA's Verification and Validation (V&V) Program as an independent check of data supplied by EarthWatch, Incorporated, through the Earth Science Enterprise Scientific Data Purchase (SDP) Program. This document serves as the basis of reporting results associated with validation of orthorectified interferometric radar imagery and digital elevation models (DEM). This validation covers all datasets provided under the first campaign (Central America & Virginia Beach) plus three earlier missions (Indonesia, Red River, and Denver) for a total of 13 missions.

  9. Functional state modelling approach validation for yeast and bacteria cultivations

    PubMed Central

    Roeva, Olympia; Pencheva, Tania

    2014-01-01

    In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for different local models are performed using genetic algorithms. The simulation results show high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This fact is a strong structure model verification of the functional state modelling theory not only for a set of yeast cultivations, but also for bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778
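
    A hedged sketch of evolutionary parameter identification in the spirit described above, using SciPy's differential evolution as a stand-in for a genetic algorithm and a hypothetical Monod growth-rate relation; it is not the authors' model or data.

      import numpy as np
      from scipy.optimize import differential_evolution

      # Monod specific-growth-rate relation mu(S) = mu_max * S / (Ks + S)
      def monod(S, mu_max, Ks):
          return mu_max * S / (Ks + S)

      # Hypothetical (substrate, growth-rate) observations from one functional state
      S_obs = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
      mu_obs = monod(S_obs, 0.45, 0.15) + np.random.default_rng(6).normal(0, 0.01, S_obs.size)

      def cost(params):
          mu_max, Ks = params
          return np.sum((monod(S_obs, mu_max, Ks) - mu_obs) ** 2)

      # Evolutionary (GA-style) global search over the parameter bounds
      result = differential_evolution(cost, bounds=[(0.01, 1.0), (0.01, 1.0)], seed=6)
      print("identified mu_max, Ks:", np.round(result.x, 3))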

  10. Functional state modelling approach validation for yeast and bacteria cultivations.

    PubMed

    Roeva, Olympia; Pencheva, Tania

    2014-09-03

    In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for different local models are performed using genetic algorithms. The simulation results show high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This fact is a strong structure model verification of the functional state modelling theory not only for a set of yeast cultivations, but also for bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach.

  11. Lumped Node Thermal Modeling of EMA with FEA Validation (PREPRINT)

    DTIC Science & Technology

    2010-11-01

    minor heat path. For forced convection, the Nusselt number for a cylinder is determined from [6] (equation (3), not reproduced here), with the governing parameters based on the Reynolds ... [report documentation page fragment: Lumped Node Thermal Modeling of EMA with FEA Validation (Preprint), contract FA8650-09-2-2940]

  12. Dynamic Model Validation with Governor Deadband on the Eastern Interconnection

    SciTech Connect

    Kou, Gefei; Hadley, Stanton W; Liu, Yilu

    2014-04-01

    This report documents the efforts to perform dynamic model validation on the Eastern Interconnection (EI) by modeling governor deadband. An on-peak EI dynamic model is modified to represent governor deadband characteristics. Simulation results are compared with synchrophasor measurements collected by the Frequency Monitoring Network (FNET/GridEye). The comparison shows that by modeling governor deadband the simulated frequency response can closely align with the actual system response.

  13. Modeling interactions of Hg(II) and bauxitic soils.

    PubMed

    Weerasooriya, Rohan; Tobschall, Heinz J; Bandara, Atula

    2007-11-01

    The adsorptive interactions of Hg(II) with gibbsite-rich soils (hereafter SOIL-g) were modeled by 1-pK surface complexation theory using the charge distribution multi-site ion competition model (CD MUSIC) incorporating the basic Stern layer model (BSM) to account for electrostatic effects. The model calibrations were performed against experimental data on synthetic gibbsite-Hg(II) adsorption. When [NaNO3] ≥ 0.01 M, the Hg(II) adsorption density values of gibbsite, Gamma(Hg(II)), showed negligible variation with ionic strength. However, Gamma(Hg(II)) values show a marked variation with [Cl-]. When [Cl-] ≥ 0.01 M, the Gamma(Hg(II)) values showed a significant reduction with pH. The Hg(II) adsorption behavior in NaNO3 was modeled assuming a homogeneous solid surface. The introduction of high-affinity sites, i.e., >Al(s)OH, at a low concentration (typically about 0.045 sites nm(-2)) is required to model Hg(II) adsorption in NaCl. According to IR spectroscopic data, the bauxitic soil (SOIL-g) is characterized by gibbsite and bayerite. These mineral phases were not treated discretely in modeling the Hg(II)-soil interactions. The CD MUSIC/BSM model combination can be used to model Hg(II) adsorption on bauxitic soil. Organic matter appears to play a role in Hg(II) binding when pH > 8. Hg(II) adsorption in the presence of excess Cl- ions required the selection of high-affinity sites in modeling.

  14. Comparison with CLPX II airborne data using DMRT model

    USGS Publications Warehouse

    Xu, X.; Liang, D.; Andreadis, K.M.; Tsang, L.; Josberger, E.G.

    2009-01-01

    In this paper, we considered a physically based model that uses numerical solutions of Maxwell's equations in three-dimensional simulations within dense media radiative transfer (DMRT) theory. The model is validated against two specific datasets from the second Cold Land Processes Experiment (CLPX II) in Alaska and Colorado. The data were all obtained from Ku-band (13.95 GHz) observations using the airborne imaging polarimetric scatterometer (POLSCAT). Snow is a densely packed medium. To take into account collective scattering and incoherent scattering, the analytical quasi-crystalline approximation (QCA) and the numerical Maxwell equation method of 3-D simulation (NMM3D) are used to calculate the extinction coefficient and phase matrix. The DMRT equations were solved by an iterative solution up to 2nd order for the case of small optical thickness, and a full multiple-scattering solution, obtained by decomposing the diffuse intensities into Fourier series, was used when the optical thickness exceeded unity. It was shown that the model predictions agree with the field experiment in not only co-polarization but also cross-polarization. For the Alaska region, the input snow structure data were obtained from in situ ground observations, while for the Colorado region we combined the VIC model to get the snow profile. ©2009 IEEE.

  15. Validation of a terrestrial food chain model.

    PubMed

    Travis, C C; Blaylock, B P

    1992-01-01

    An increasingly important topic in risk assessment is the estimation of human exposure to environmental pollutants through pathways other than inhalation. The Environmental Protection Agency (EPA) has recently developed a computerized methodology (EPA, 1990) to estimate indirect exposure to toxic pollutants from Municipal Waste Combustor emissions. This methodology estimates health risks from exposure to toxic pollutants from the terrestrial food chain (TFC), soil ingestion, drinking water ingestion, fish ingestion, and dermal absorption via soil and water. Of these, one of the most difficult to estimate is exposure through the food chain. This paper estimates the accuracy of the EPA methodology for estimating food chain contamination. To our knowledge, no data exist on measured concentrations of pollutants in food grown around Municipal Waste Incinerators, and few field-scale studies have been performed on the uptake of pollutants in the food chain. Therefore, to evaluate the EPA methodology, we compare actual measurements of background contaminant levels in food with estimates made using EPA's computerized methodology. Background levels of contaminants in air, water, and soil were used as input to the EPA food chain model to predict background levels of contaminants in food. These predicted values were then compared with the measured background contaminant levels. Comparisons were performed for dioxin, pentachlorophenol, polychlorinated biphenyls, benzene, benzo(a)pyrene, mercury, and lead.

  16. Composing, Analyzing and Validating Software Models

    NASA Technical Reports Server (NTRS)

    Sheldon, Frederick T.

    1998-01-01

    This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Group). The principal work this summer has been to review and refine the agenda carried forward from last summer. Formal specifications provide good support for designing a functionally correct system; however, they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.

  17. Testing the Testing: Validity of a State Growth Model

    ERIC Educational Resources Information Center

    Brown, Kim Trask

    2008-01-01

    Possible threats to the validity of North Carolina's accountability model used to predict academic growth were investigated in two ways: the state's regression equations were replicated but updated to utilize current testing data and not that from years past as in the state's current model; and the updated equations were expanded to include…

  18. Institutional Effectiveness: A Model for Planning, Assessment & Validation.

    ERIC Educational Resources Information Center

    Truckee Meadows Community Coll., Sparks, NV.

    The report presents Truckee Meadows Community College's (Nevada) model for assessing institutional effectiveness and validating the College's mission and vision, and the strategic plan for carrying out the institutional effectiveness model. It also outlines strategic goals for the years 1999-2001. From the system-wide directive that education…

  19. Validating a Technology Enhanced Student-Centered Learning Model

    ERIC Educational Resources Information Center

    Kang, Myunghee; Hahn, Jungsun; Chung, Warren

    2015-01-01

    The Technology Enhanced Student Centered Learning (TESCL) Model in this study presents the core factors that ensure the quality of learning in a technology-supported environment. Although the model was conceptually constructed using a student-centered learning framework and drawing upon previous studies, it should be validated through real-world…

  20. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
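
    A minimal Python sketch of the max-min idea described above: parameters are estimated by maximizing the smallest requirement-compliance margin. The toy first-order model, the two requirements, and their tolerances are invented for illustration.

      import numpy as np
      from scipy.optimize import minimize

      # Toy first-order model y(t) = K*(1 - exp(-t/tau)) with two illustrative
      # validation requirements, each yielding a compliance margin (>= 0 means met):
      #   R1: max absolute error against a measured step response <= 0.1
      #   R2: predicted steady-state value within 0.05 of the measured one
      t = np.linspace(0, 10, 50)
      y_meas = 2.0 * (1 - np.exp(-t / 1.5)) + np.random.default_rng(5).normal(0, 0.02, t.size)

      def margins(theta):
          K, tau = theta
          tau = max(abs(tau), 1e-6)            # guard against non-physical tau
          y = K * (1 - np.exp(-t / tau))
          m1 = 0.1 - np.max(np.abs(y - y_meas))
          m2 = 0.05 - abs(K - y_meas[-1])
          return np.array([m1, m2])

      # Maximize the smallest margin (equivalently, minimize its negative)
      res = minimize(lambda th: -np.min(margins(th)), x0=[1.0, 1.0], method="Nelder-Mead")
      print("estimate:", np.round(res.x, 3), "worst-case margin:", round(-res.fun, 4))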

  1. Mission Driven Scene Understanding: Candidate Model Training and Validation

    DTIC Science & Technology

    2016-09-01

    Figure captions: top-1 validation accuracy (blue diamonds); Fig. 3, candidate CNN results showing the top-5 most likely… Section 4 (Candidate CNN Model: Results): in this section, we test the candidate model to examine the impact of dynamic environments on…

  2. Validation of 1-D transport and sawtooth models for ITER

    SciTech Connect

    Connor, J.W.; Turner, M.F.; Attenberger, S.E.; Houlberg, W.A.

    1996-12-31

    In this paper the authors describe progress on validating a number of local transport models by comparing their predictions with relevant experimental data from a range of tokamaks in the ITER profile database. This database, the testing procedure and results are discussed. In addition a model for sawtooth oscillations is used to investigate their effect in an ITER plasma with alpha-particles.

  3. Model validation for karst flow using sandbox experiments

    NASA Astrophysics Data System (ADS)

    Ye, M.; Pacheco Castro, R. B.; Tao, X.; Zhao, J.

    2015-12-01

    The study of flow in karst is complex due to the high heterogeneity of the porous media. Several approaches have been proposed in the literature to overcome the natural complexity of karst. Some of these methods are the single continuum, the double continuum, and the discrete network of conduits coupled with the single continuum. Several mathematical and computing models are available in the literature for each approach. In this study, one computer model has been selected for each category to validate its usefulness for modeling flow in karst using a sandbox experiment. The models chosen are Modflow 2005, Modflow CFPV1, and Modflow CFPV2. A sandbox experiment was implemented in such a way that all the parameters required for each model can be measured, and it was repeated several times under different conditions. The model validation will be carried out by comparing the results of the model simulations with the real data. This validation will allow us to compare the accuracy of each model and its applicability in karst. We will also be able to evaluate whether the results of the complex models improve substantially on the simple models, especially because some models require complex parameters that are difficult to measure in the real world.

  4. Low-order dynamic modeling of the Experimental Breeder Reactor II

    SciTech Connect

    Berkan, R.C. . Dept. of Nuclear Engineering); Upadhyaya, B.R.; Kisner, R.A. )

    1990-07-01

    This report describes the development of a low-order, linear model of the Experimental Breeder Reactor II (EBR-II), including the primary system, intermediate heat exchanger, and steam generator subsystems. The linear model is developed to represent full-power steady state dynamics for low-level perturbations. Transient simulations are performed using the model building and simulation capabilities of the computer software MATRIXx. The inherently safe characteristics of the EBR-II are verified through the simulation studies. The results presented in this report also indicate an agreement between the linear model and the actual dynamics of the plant for several transients. Such models play a major role in understanding and improving nuclear reactor dynamics for control and signal validation studies. This research and development is sponsored by the Advanced Controls Program in the Instrumentation and Controls Division of the Oak Ridge National Laboratory. 17 refs., 67 figs., 15 tabs.

  5. On the development and validation of QSAR models.

    PubMed

    Gramatica, Paola

    2013-01-01

    The fundamental and more critical steps that are necessary for the development and validation of QSAR models are presented in this chapter as best practices in the field. These procedures are discussed in the context of predictive QSAR modelling that is focused on achieving models of the highest statistical quality and with external predictive power. The most important and most used statistical parameters needed to verify the real performances of QSAR models (of both linear regression and classification) are presented. Special emphasis is placed on the validation of models, both internally and externally, as well as on the need to define model applicability domains, which should be done when models are employed for the prediction of new external compounds.

  6. Inter-comparison and validation of ozone measurements by SAGE II and SBUV/2 instruments

    NASA Astrophysics Data System (ADS)

    Khatun, Sufia

    Ozone is an important trace gas in the Earth's atmosphere. Its distribution and temporal trends are monitored by a number of instruments. The measurement of atmospheric ozone is complicated, and inter-comparison of measurements using different techniques is important in validating and developing confidence when using these different ozone datasets. In this work, measurements of ozone by two classes of space-based instruments, SAGE II and SBUV/2, are compared. Twenty-one original layers of SBUV/2 ozone data are merged into five thick layers. The agreement in most regions for all four seasons between the SBUV/2 instruments is within 5-10 DU. The behavior of the partial ozone column in layers 2 and 3 has a more dynamic nature. This may be due to the Dobson circulation carrying ozone-rich air from higher altitudes at the equator to lower altitudes at mid-latitudes. Monthly averaged data for the four SBUV/2 instruments show annual cycles and in some cases semi-annual cycles. There is about a six-month phase difference between the peaks and valleys in the northern mid-latitude region relative to the equatorial region, which suggests the dynamic nature of ozone migration from the equatorial region towards the pole. Pearson correlation coefficients for the total column ozone in the equatorial zone between NOAA 17 and the other three SBUV/2 instruments are 0.93, 1.00 and 0.99, respectively. The vast majority of the data falls within +/-1%. The maximum ozone is observed in the equatorial region at altitudes corresponding to pressure levels between 20 and 35 mbar. Moving away from the equatorial region, the maximum ozone is observed at lower altitudes. The very highest values of ozone are concentrated in three or four patches with locations corresponding to the centers of the Hadley and Ferrel cells. The agreement between the SAGE II and SBUV/2 instruments is within 1-3 DU above the 50 mbar pressure level, where the ozone content varies from a few to 65 DU. Time-dependent comparisons between SAGE II and NOAA 09
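
    The layer merging and instrument inter-comparison described above can be sketched in a few lines; the layer grouping, the synthetic profile values, and the instrument pairing used here are purely illustrative.

      import numpy as np

      # Hypothetical partial-column ozone profiles (Dobson Units): 120 monthly records
      # on 21 original SBUV/2 layers for two instruments.
      rng = np.random.default_rng(0)
      inst_a = rng.uniform(2.0, 20.0, size=(120, 21))
      inst_b = inst_a + rng.normal(0.0, 0.5, size=(120, 21))

      # Merge the 21 layers into 5 thick layers by summing assumed groups of layers.
      groups = [slice(0, 4), slice(4, 8), slice(8, 12), slice(12, 16), slice(16, 21)]
      thick_a = np.stack([inst_a[:, g].sum(axis=1) for g in groups], axis=1)
      thick_b = np.stack([inst_b[:, g].sum(axis=1) for g in groups], axis=1)

      # Pearson correlation of the total ozone columns between the two instruments.
      r = np.corrcoef(thick_a.sum(axis=1), thick_b.sum(axis=1))[0, 1]
      print("total-column Pearson r:", round(float(r), 3))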

  7. A prediction model for ocular damage - Experimental validation.

    PubMed

    Heussner, Nico; Vagos, Márcia; Spitzer, Martin S; Stork, Wilhelm

    2015-08-01

    With the increasing number of laser applications in medicine and technology, accidental as well as intentional exposure of the human eye to laser sources has become a major concern. Therefore, a prediction model for ocular damage (PMOD) is presented within this work and validated for long-term exposure. This model is a combination of a raytracing model with a thermodynamical model of the human eye and an application which determines the thermal damage by the implementation of the Arrhenius integral. The model is based on our earlier work and is here validated against temperature measurements taken with porcine eye samples. For this validation, three different powers were used: 50 mW, 100 mW and 200 mW with a spot size of 1.9 mm. The measurements were taken with two different sensing systems, an infrared camera and a fibre optic probe placed within the tissue. The temperatures were measured for up to 60 s and then compared against simulations. The measured temperatures were found to be in good agreement with the values predicted by the PMOD model. To the best of our knowledge, this is the first model which is validated for both short-term and long-term irradiations in terms of temperature, and it thus demonstrates that temperatures can be accurately predicted within the thermal damage regime.

  8. The Validation of Climate Models: The Development of Essential Practice

    NASA Astrophysics Data System (ADS)

    Rood, R. B.

    2011-12-01

    It is possible from both a scientific and philosophical perspective to state that climate models cannot be validated. However, with the realization that the scientific investigation of climate change is as much a subject of politics as of science, maintaining this formal notion of "validation" has significant consequences. For example, it relegates the bulk of work of many climate scientists to an exercise of model evaluation that can be construed as ill-posed. Even within the science community this motivates criticism of climate modeling as an exercise of weak scientific practice. Stepping outside of the science community, statements that validation is impossible are used in political arguments to discredit the scientific investigation of climate, to maintain doubt about projections of climate change, and hence, to prohibit the development of public policy to regulate the emissions of greenhouse gases. With the acceptance of the impossibility of validation, scientists often state that the credibility of models can be established through an evaluation process. A robust evaluation process leads to the quantitative description of the modeling system against a standard set of measures. If this process is standardized as institutional practice, then this provides a measure of model performance from one modeling release to the next. It is argued, here, that such a robust and standardized evaluation of climate models can be structured and quantified as "validation." Arguments about the nuanced meaning of validation and evaluation are a subject about which the climate modeling community needs to develop a standard. It does injustice to a body of science-based knowledge to maintain that validation is "impossible." Rather than following such a premise, which immediately devalues the knowledge base, it is more useful to develop a systematic, standardized approach to robust, appropriate validation. This stands to represent the complexity of the Earth's climate and its

  9. The GEMMA Crustal Model: First Validation and Data Distribution

    NASA Astrophysics Data System (ADS)

    Sampietro, D.; Reguzzoni, M.; Negretti, M.

    2013-12-01

    In the GEMMA project, funded by ESA-STSE and ASI, a new crustal model constrained by GOCE gravity field observations has been developed. This model has a resolution of 0.5°×0.5° and it is composed of seven layers describing geometry and density of oceans, ice sheets, upper, medium and lower sediments, crystalline crust and upper mantle. In the present work the GEMMA model is validated against other global and regional models, showing a good consistency where validation data are reliable. Apart from that the development of a WPS (Web Processing Service) for the distribution of the GEMMA model is also presented. The service gives the possibility to download, interpolate and display the whole crustal model, providing for each layer the depth of its upper and lower boundary, its density as well as its gravitational effect in terms of second radial derivative of the gravitational potential at GOCE altitude.

  10. Validating the Thinking Styles Inventory-Revised II among Chinese University Students with Hearing Impairment through Test Accommodations

    ERIC Educational Resources Information Center

    Cheng, Sanyin; Zhang, Li-Fang

    2014-01-01

    The present study pioneered in adopting test accommodations to validate the Thinking Styles Inventory-Revised II (TSI-R2; Sternberg, Wagner, & Zhang, 2007) among Chinese university students with hearing impairment. A series of three studies were conducted that drew their samples from the same two universities, in which accommodating test…

  11. Concurrent Validity of the WISC-IV and DAS-II in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Kuriakose, Sarah

    2014-01-01

    Cognitive assessments are used for a variety of research and clinical purposes in children with autism spectrum disorder (ASD). This study establishes concurrent validity of the Wechsler Intelligence Scales for Children-fourth edition (WISC-IV) and Differential Ability Scales-second edition (DAS-II) in a sample of children with ASD with a broad…

  12. Validity of NBME Parts I and II for the Selection of Residents: The Case of Orthopaedic Surgery.

    ERIC Educational Resources Information Center

    Case, Susan M.

    The predictive validity of scores on the National Board of Medical Examiners (NBME) Part I and Part II examinations for the selection of residents in orthopaedic surgery was investigated. Use of NBME scores has been criticized because of the time lag between taking Part I and entering residency and because Part I content is not directly linked to…

  13. Open-source MFIX-DEM software for gas-solids flows: Part II Validation studies

    SciTech Connect

    Li, Tingwen; Garg, Rahul; Galvin, Janine; Pannala, Sreekanth

    2012-01-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  14. Open-Source MFIX-DEM Software for Gas-Solids Flows: Part II - Validation Studies

    SciTech Connect

    Li, Tingwen

    2012-04-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas–solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas–solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  15. Sub-nanometer Level Model Validation of the SIM Interferometer

    NASA Technical Reports Server (NTRS)

    Korechoff, Robert P.; Hoppe, Daniel; Wang, Xu

    2004-01-01

    The Space Interferometer Mission (SIM) flight instrument will not undergo a full performance, end-to-end system test on the ground due to a number of constraints. Thus, analysis and physics-based models will play a significant role in providing confidence that SIM will meet its science goals on orbit. The various models themselves are validated against the experimental results obtained from the MicroArcsecond Metrology (MAM) testbed and the Diffraction testbed (DTB). The metric for validation is provided by the SIM astrometric error budget.

  16. Theoretical models of ultrasonic inspection and their validation

    SciTech Connect

    Birchall, D.; Daniels, W.; Hawker, B.M.; Ramsey, A.T.; Temple, J.A.G.

    1994-12-31

    In response to the perception of demands by the public for higher than ever standards of safety, the nuclear industry in Britain embarked on an extensive program of nuclear safety research in support of the safety case for the new Sizewell B pressurized water reactor, which is now approaching completion. A suite of diverse computer models, of various aspects of ultrasonic inspection, is described, ranging from transducer design to ray-tracing in anisotropic stainless steel weldments or complex geometries. These provide aids to inspection design, verification, validation and data analysis, but the models must also be validated.

  17. Climate Model Datasets on Earth System Grid II (ESG II)

    DOE Data Explorer

    Earth System Grid (ESG) is a project that combines the power and capacity of supercomputers, sophisticated analysis servers, and datasets on the scale of petabytes. The goal is to provide a seamless distributed environment that allows scientists in many locations to work with large-scale data, perform climate change modeling and simulation, and share results in innovative ways. Although ESG is more about the computing environment than the data, there are several catalogs of data available at the web site that can be browsed or searched. Most of the datasets are restricted to registered users, but several are open to all.

  18. Human surrogate models of neuropathic pain: validity and limitations.

    PubMed

    Binder, Andreas

    2016-02-01

    Human surrogate models of neuropathic pain in healthy subjects are used to study symptoms, signs, and the hypothesized underlying mechanisms. Although different models are available and different spontaneous and evoked symptoms and signs are inducible, two key questions need to be answered: are human surrogate models conceptually valid, i.e., do they share the sensory phenotype of neuropathic pain states, and are they sufficiently reliable to allow consistent translational research?

  19. A Methodology for Validation of High Resolution Combat Models

    DTIC Science & Technology

    1988-06-01

    …the primary source of model validation within the Army. It would also provide a method of standardizing the comparison of models. Finally, a … improve the modeling of bombing accuracy [Ref. 8, p. 309]. While this effect can never be countered in total, every care must be taken to minimize … reference system; careful and diligent efforts should be made to minimize the impact of these actions. The observed impacts as well as expected impacts…

  20. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing real-world phenomena in the framework of a model in order to simulate them is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of users' growing interest in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, such models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. It therefore seems necessary to find appropriate validation techniques for ABM. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  1. Modeling and Validation of Microwave Ablations with Internal Vaporization

    PubMed Central

    Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L.

    2014-01-01

    Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this work, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10 and 20 mm away from the heating zone of the microwave antenna in homogenized ex vivo bovine liver setup. Cross-sectional area of water vapor transport was validated through intra-procedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard Index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with a Jaccard Index of 0.27, 0.49, 0.61, 0.67 and 0.69 at 1, 2, 3, 4, and 5 minutes. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481
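
    The Jaccard Index comparison used above is straightforward to reproduce; the sketch below uses hypothetical circular masks standing in for the simulated vapor-concentration contour and the CT iso-density contour.

      import numpy as np

      # Hypothetical binary masks: simulated vapor contour vs CT iso-density contour.
      simulated = np.zeros((100, 100), dtype=bool)
      measured = np.zeros((100, 100), dtype=bool)
      yy, xx = np.ogrid[:100, :100]
      simulated[(yy - 50) ** 2 + (xx - 50) ** 2 <= 30 ** 2] = True
      measured[(yy - 52) ** 2 + (xx - 48) ** 2 <= 28 ** 2] = True

      # Jaccard index = area of intersection over area of union of the two masks.
      intersection = np.logical_and(simulated, measured).sum()
      union = np.logical_or(simulated, measured).sum()
      print("Jaccard index:", round(float(intersection / union), 3))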

  2. Experimental Validation of a Thermoelastic Model for SMA Hybrid Composites

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    2001-01-01

    This study presents results from experimental validation of a recently developed model for predicting the thermomechanical behavior of shape memory alloy hybrid composite (SMAHC) structures, composite structures with an embedded SMA constituent. The model captures the material nonlinearity of the material system with temperature and is capable of modeling constrained, restrained, or free recovery behavior from experimental measurement of fundamental engineering properties. A brief description of the model and analysis procedures is given, followed by an overview of a parallel effort to fabricate and characterize the material system of SMAHC specimens. Static and dynamic experimental configurations for the SMAHC specimens are described and experimental results for thermal post-buckling and random response are presented. Excellent agreement is achieved between the measured and predicted results, fully validating the theoretical model for constrained recovery behavior of SMAHC structures.

  3. Electro-thermal modelling of a supercapacitor and experimental validation

    NASA Astrophysics Data System (ADS)

    Berrueta, Alberto; San Martín, Idoia; Hernández, Andoni; Ursúa, Alfredo; Sanchis, Pablo

    2014-08-01

    This paper reports on the electro-thermal modelling of a Maxwell supercapacitor (SC), model BMOD0083 with a rated capacitance of 83 F and rated voltage of 48 V. One electrical equivalent circuit was used to model the electrical behaviour whilst another served to simulate the thermal behaviour. The models were designed to predict the SC operating voltage and temperature, by taking the electric current and ambient temperature as input variables. A five-stage iterative method, applied to three experiments, served to obtain the parameter values for each model. The models were implemented in MATLAB-Simulink®, where they interacted to reciprocally provide information. These models were then validated through a number of tests, subjecting the SC to different current and frequency profiles. These tests included the validation of a bank of supercapacitors integrated into an electric microgrid, in a real operating environment. Satisfactory results were obtained from the electric and thermal models, with RMSE values of less than 0.65 V in all validations.

  4. Validated intraclass correlation statistics to test item performance models.

    PubMed

    Courrieu, Pierre; Brand-D'abrescia, Muriele; Peereman, Ronald; Spieler, Daniel; Rey, Arnaud

    2011-03-01

    A new method, with an application program in Matlab code, is proposed for testing item performance models on empirical databases. This method uses data intraclass correlation statistics as expected correlations to which one compares simple functions of correlations between model predictions and observed item performance. The method rests on a data population model whose validity for the considered data is suitably tested and has been verified for three behavioural measure databases. Contrary to the usual model selection criteria, this method provides an effective way of testing under-fitting and over-fitting, answering the usually neglected question "does this model suitably account for these data?"
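
    A minimal sketch of the core idea is given below, using invented data and a simple one-way random-effects ICC rather than the paper's exact statistics: roughly, the reliability of the item means bounds how strongly any model's predictions can be expected to correlate with the observed item performance.

      import numpy as np

      # Hypothetical data: response times for 40 items from 12 participants.
      rng = np.random.default_rng(0)
      item_effect = rng.normal(600, 80, size=(40, 1))
      data = item_effect + rng.normal(0, 60, size=(40, 12))     # items x participants

      n_items, k = data.shape
      grand = data.mean()
      ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n_items - 1)
      ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n_items * (k - 1))

      # Reliability of the item means (one-way random effects, average measures).
      icc_k = (ms_between - ms_within) / ms_between

      # Correlation between hypothetical model predictions and observed item means.
      predictions = data.mean(axis=1) + rng.normal(0, 30, size=n_items)
      r_model = np.corrcoef(predictions, data.mean(axis=1))[0, 1]
      print("ICC(1,k):", round(float(icc_k), 3), " model r:", round(float(r_model), 3))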

  5. Hydrologic and water quality models: Key calibration and validation topics

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As a continuation of efforts to provide a common background and platform for accordant development of calibration and validation (C/V) engineering practices, ASABE members worked to determine critical topics related to model C/V, perform a synthesis of the Moriasi et al. (2012) special collection of...

  6. Hydrologic and water quality models: Use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper introduces a special collection of 22 research articles that present and discuss calibration and validation concepts in detail for hydrologic and water quality models by their developers and presents a broad framework for developing the American Society of Agricultural and Biological Engi...

  7. MODELS FOR SUBMARINE OUTFALL - VALIDATION AND PREDICTION UNCERTAINTIES

    EPA Science Inventory

    This address reports on some efforts to verify and validate dilution models, including those found in Visual Plumes. This is done in the context of problem experience: a range of problems, including different pollutants such as bacteria; scales, including near-field and far-field...

  8. A Model for Investigating Predictive Validity at Highly Selective Institutions.

    ERIC Educational Resources Information Center

    Gross, Alan L.; And Others

    A statistical model for investigating predictive validity at highly selective institutions is described. When the selection ratio is small, one must typically deal with a data set containing relatively large amounts of missing data on both criterion and predictor variables. Standard statistical approaches are based on the strong assumption that…

  9. Validity of Greenspan's Models of Adaptive and Social Intelligence.

    ERIC Educational Resources Information Center

    Mathias, Jane L.; Nettelbeck, Ted

    1992-01-01

    Two studies assessed the construct validity of Greenspan's models of adaptive and social intelligence with 75 adolescents with mental retardation. Factor analysis measures of conceptual intelligence, adaptive behavior, and social intelligence yielded a practice-interpersonal competence construct. The second study, however, failed to establish the…

  10. Validation of a tuber blight (Phytophthora infestans) prediction model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  11. Validating Work Discrimination and Coping Strategy Models for Sexual Minorities

    ERIC Educational Resources Information Center

    Chung, Y. Barry; Williams, Wendi; Dispenza, Franco

    2009-01-01

    The purpose of this study was to validate and expand on Y. B. Chung's (2001) models of work discrimination and coping strategies among lesbian, gay, and bisexual persons. In semistructured individual interviews, 17 lesbians and gay men reported 35 discrimination incidents and their related coping strategies. Responses were coded based on Chung's…

  12. Validating soil phosphorus routines in the SWAT model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phosphorus transfer from agricultural soils to surface waters is an important environmental issue. Commonly used models like SWAT have not always been updated to reflect improved understanding of soil P transformations and transfer to runoff. Our objective was to validate the ability of the P routin...

  13. Solar swimming pool heating: Description of a validated model

    SciTech Connect

    Haaf, W.; Luboschik, U.; Tesche, B. )

    1994-07-01

    In the framework of a European Demonstration Programme, co-financed by CEC and national bodies, a model was elaborated and validated for open-air swimming pools having a minimal surface of 100 m² and a minimal depth of 0.5 m. The model consists of two parts: the energy balance of the pool and the solar plant. The theoretical background of the energy balance of an open-air swimming pool was found to be poor. Special monitoring campaigns were used to validate the dynamic model using mathematical parameter identification methods. The final model was simplified in order to shorten calculation time and to improve user-friendliness by reducing the input values to the most important ones. The programme is commercially available. However, it requires the hourly meteorological data of a test reference year (TRY) as an input. The users are mainly design engineers.

  14. Development, Selection, and Validation of Tumor Growth Models

    NASA Astrophysics Data System (ADS)

    Shahmoradi, Amir; Lima, Ernesto; Oden, J. Tinsley

    In recent years, a multitude of different mathematical approaches have been taken to develop multiscale models of solid tumor growth. Prime successful examples include the lattice-based, agent-based (off-lattice), and phase-field approaches, or a hybrid of these models applied to multiple scales of tumor, from subcellular to tissue level. Of overriding importance is the predictive power of these models, particularly in the presence of uncertainties. This presentation describes our attempt at developing lattice-based, agent-based and phase-field models of tumor growth and assessing their predictive power through new adaptive algorithms for model selection and model validation embodied in the Occam Plausibility Algorithm (OPAL), that brings together model calibration, determination of sensitivities of outputs to parameter variances, and calculation of model plausibilities for model selection. Institute for Computational Engineering and Sciences.

  15. Finite Element Model Development and Validation for Aircraft Fuselage Structures

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.

    2000-01-01

    The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results. The increased frequency range results in a corresponding increase in the number of modes, modal density and spatial resolution requirements. In this study, conventional modal tests using accelerometers are complemented with Scanning Laser Doppler Velocimetry and Electro-Optic Holography measurements to further resolve the spatial response characteristics. Whenever possible, component and subassembly modal tests are used to validate the finite element models at lower levels of assembly. Normal mode predictions for different finite element representations of components and assemblies are compared with experimental results to assess the most accurate techniques for modeling aircraft fuselage type structures.

  16. Experimentally validated finite element model of electrocaloric multilayer ceramic structures

    SciTech Connect

    Smith, N. A. S.; Correia, T. M.; Rokosz, M. K. (E-mail: maciej.rokosz@npl.co.uk)

    2014-07-28

    A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.

  17. Experimentally validated finite element model of electrocaloric multilayer ceramic structures

    NASA Astrophysics Data System (ADS)

    Smith, N. A. S.; Rokosz, M. K.; Correia, T. M.

    2014-07-01

    A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.

  18. Criteria for Validating Mouse Models of Psychiatric Diseases

    PubMed Central

    Chadman, Kathryn K.; Yang, Mu; Crawley, Jacqueline N.

    2010-01-01

    Animal models of human diseases are in widespread use for biomedical research. Mouse models with a mutation in a single gene or multiple genes are excellent research tools for understanding the role of a specific gene in the etiology of a human genetic disease. Ideally, the mouse phenotypes will recapitulate the human phenotypes exactly. However, exact matches are rare, particularly in mouse models of neuropsychiatric disorders. This article summarizes the current strategies for optimizing the validity of a mouse model of a human brain dysfunction. We address the common question raised by molecular geneticists and clinical researchers in psychiatry, “what is a ‘good enough’ mouse model”? PMID:18484083

  19. Validation and upgrading of physically based mathematical models

    NASA Technical Reports Server (NTRS)

    Duval, Ronald

    1992-01-01

    The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.

  20. Outer atmospheres of cool stars. VI - Models for Epsilon Eridani based on IUE spectra of C II, Mg II, Si II, and Si III

    NASA Technical Reports Server (NTRS)

    Simon, T.; Kelch, W. L.; Linsky, J. L.

    1980-01-01

    Observations of the ultraviolet line spectrum of the active-chromosphere star Epsilon Eridani, obtained with the IUE satellite, have been analyzed. The coupled statistical equilibrium and radiative transfer equations were solved for the prominent transitions of C II, Mg II, Si II, and Si III. A satisfactory fit to all of the line strengths can be achieved with a model similar to that recently proposed to explain bright points on the quiet sun. A surface pressure at the base of the transition region of 0.5 dynes/sq cm is derived, which is a factor of 3 higher than the quiet-sun value but a factor of 3 smaller than predicted by scaling laws assuming a conductively heated stellar transition region. It is found that the surface fluxes of the C II 1334, 1335 and Si III 1892 emission lines are good diagnostics of the pressure at the base of the transition region, but line-ratio techniques using the 1892 line for estimating electron densities may be valid.

  1. Prediction of estrogenicity: validation of a classification model.

    PubMed

    Saliner, A Gallegos; Netzeva, T I; Worth, A P

    2006-04-01

    (Q)SAR models can be used to reduce animal testing as well as to minimise the testing costs. In particular, classification models have been widely used for estimating endpoints with binary activity. The aim of the present study was to develop and validate a classification-based quantitative structure-activity relationship (QSAR) model for endocrine disruption, based on interpretable mechanistic descriptors related to estrogenic gene activation. The model predicts the presence or absence of estrogenic activity according to a pre-defined cut-off in activity as determined in a recombinant yeast assay. The experimental data was obtained from the literature. A two-descriptor classification model was developed that has the form of a decision tree. The predictivity of the model was evaluated by using an external test set and by taking into account the limitations associated with the applicability domain (AD) of the model. The AD was determined as coverage of the model descriptor space. After removing the compounds present in the training set and the compounds outside of the AD, the overall accuracy of classification of the test chemicals was used to assess the predictivity of the model. In addition, the model was shown to meet the OECD Principles for (Q)SAR Validation, making it potentially useful for regulatory purposes.
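
    To illustrate the shape of such a model, the sketch below trains a depth-two decision tree on two hypothetical descriptors and restricts predictions to an applicability domain defined as coverage of the training descriptor space (a simple bounding box here); the data, descriptors, and activity cut-off are all invented for illustration.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      # Hypothetical training set: two mechanistic descriptors per compound and a
      # binary estrogenic-activity label defined by a pre-set potency cut-off.
      rng = np.random.default_rng(1)
      X_train = rng.uniform(0.0, 1.0, size=(60, 2))
      y_train = (0.6 * X_train[:, 0] + 0.4 * X_train[:, 1] > 0.5).astype(int)

      tree = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

      # Applicability domain as coverage of the descriptor space (bounding box here).
      lo, hi = X_train.min(axis=0), X_train.max(axis=0)

      X_test = rng.uniform(-0.2, 1.2, size=(20, 2))
      inside = np.all((X_test >= lo) & (X_test <= hi), axis=1)
      predictions = tree.predict(X_test[inside])
      print(f"{inside.sum()} of {len(X_test)} test compounds inside the AD;",
            "predicted active fraction:", predictions.mean())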

  2. Propeller aircraft interior noise model utilization study and validation

    NASA Technical Reports Server (NTRS)

    Pope, L. D.

    1984-01-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  3. Cross-Cultural Validation of the Beck Depression Inventory-II across U.S. and Turkish Samples

    ERIC Educational Resources Information Center

    Canel-Cinarbas, Deniz; Cui, Ying; Lauridsen, Erica

    2011-01-01

    The purpose of this study was to test the Beck Depression Inventory-II (BDI-II) for factorial invariance across Turkish and U.S. college student samples. The results indicated that (a) a two-factor model has an adequate fit for both samples, thus providing evidence of configural invariance, and (b) there is a metric invariance but "no"…

  4. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and Space Station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  5. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  6. Inducible Mouse Models for Cancer Drug Target Validation

    PubMed Central

    Jeong, Joseph H.

    2016-01-01

    Genetically-engineered mouse (GEM) models have provided significant contributions to our understanding of cancer biology and developing anticancer therapeutic strategies. The development of GEM models that faithfully recapitulate histopathological and clinical features of human cancers is one of the most pressing needs to successfully conquer cancer. In particular, doxycycline-inducible transgenic mouse models allow us to regulate (induce or suppress) the expression of a specific gene of interest within a specific tissue in a temporal manner. Leveraging this mouse model system, we can determine whether the transgene expression is required for tumor maintenance, thereby validating the transgene product as a target for anticancer drug development (target validation study). In addition, there is always a risk of tumor recurrence with cancer therapy. By analyzing recurrent tumors derived from fully regressed tumors after turning off transgene expression in tumor-bearing mice, we can gain an insight into the molecular basis of how tumor cells escape from their dependence on the transgene (tumor recurrence study). Results from such studies will ultimately allow us to predict therapeutic responses in clinical settings and develop new therapeutic strategies against recurrent tumors. The aim of this review is to highlight the significance of doxycycline-inducible transgenic mouse models in studying target validation and tumor recurrence. PMID:28053958

  7. Validation of a finite element model of the human metacarpal.

    PubMed

    Barker, D S; Netherway, D J; Krishnan, J; Hearn, T C

    2005-03-01

    Implant loosening and mechanical failure of components are frequently reported following metacarpophalangeal (MCP) joint replacement. Studies of the mechanical environment of the MCP implant-bone construct are rare. The objective of this study was to evaluate the predictive ability of a finite element model of the intact second human metacarpal to provide a validated baseline for further mechanical studies. A right index human metacarpal was subjected to torsion and combined axial/bending loading using strain gauge (SG) and 3D finite element (FE) analysis. Four different representations of bone material properties were considered. Regression analyses were performed comparing maximum and minimum principal surface strains taken from the SG and FE models. Regression slopes close to unity and high correlation coefficients were found when the diaphyseal cortical shell was modelled as anisotropic and cancellous bone properties were derived from quantitative computed tomography. The inclusion of anisotropy for cortical bone was strongly influential in producing high model validity whereas variation in methods of assigning stiffness to cancellous bone had only a minor influence. The validated FE model provides a tool for future investigations of current and novel MCP joint prostheses.
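
    The strain-gauge versus finite-element comparison reduces to a simple regression; the sketch below uses invented paired principal-strain values to show the slope, intercept, and correlation statistics reported in such validations.

      import numpy as np

      # Hypothetical paired principal strains: strain-gauge measurements vs FE predictions.
      measured = np.array([850, -620, 1100, -930, 400, -210, 760, -540.0])   # microstrain
      predicted = 0.97 * measured + np.array([15, -10, 25, -20, 5, -8, 12, -18.0])

      # Least-squares slope, intercept and correlation; a slope near unity and a high
      # correlation coefficient indicate good predictive ability of the FE model.
      slope, intercept = np.polyfit(measured, predicted, 1)
      r = np.corrcoef(measured, predicted)[0, 1]
      print(f"slope={slope:.3f}, intercept={intercept:.1f} microstrain, r^2={r**2:.3f}")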

  8. Prediction of driving ability: Are we building valid models?

    PubMed

    Hoggarth, Petra A; Innes, Carrie R H; Dalrymple-Alford, John C; Jones, Richard D

    2015-04-01

    The prediction of on-road driving ability using off-road measures is a key aim in driving research. The primary goal in most classification models is to determine a small number of off-road variables that predict driving ability with high accuracy. Unfortunately, classification models are often over-fitted to the study sample, leading to inflation of predictive accuracy, poor generalization to the relevant population and, thus, poor validity. Many driving studies do not report sufficient details to determine the risk of model over-fitting and few report any validation technique, which is critical to test the generalizability of a model. After reviewing the literature, we generated a model using a moderately large sample size (n=279) employing best practice techniques in the context of regression modelling. By then randomly selecting progressively smaller sample sizes we show that a low ratio of participants to independent variables can result in over-fitted models and spurious conclusions regarding model accuracy. We conclude that more stable models can be constructed by following a few guidelines.
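
    The over-fitting risk described here is easy to demonstrate: with pure-noise predictors, apparent (resubstitution) accuracy rises as the ratio of participants to variables falls, while cross-validated accuracy stays near chance. The sample sizes and variable count below are illustrative only.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)

      for n in (279, 40):                          # moderately large vs small sample
          X = rng.normal(size=(n, 20))             # 20 noise "off-road" predictors
          y = rng.integers(0, 2, size=n)           # random pass/fail on-road outcome
          fit_acc = LogisticRegression(max_iter=1000).fit(X, y).score(X, y)
          cv_acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
          print(f"n={n}: apparent accuracy={fit_acc:.2f}, cross-validated={cv_acc:.2f}")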

  9. Potential of Ceilometer Networks for Validation of models

    NASA Astrophysics Data System (ADS)

    Wagner, Frank; Mattis, Ina; Flentje, Harald

    2016-04-01

    Various models can treat aerosol particles. Due to the limited availability of high-quality profiles of particle properties, most models are only validated with ground-based particle measurements and/or with columnar particle amounts, e.g. aerosol optical depth, derived from satellites. Modern ceilometers are capable of providing aerosol vertical profiles and are not too expensive; hence, several national weather services operate networks of ceilometers. The Deutscher Wetterdienst currently operates a ceilometer network of about 75 devices providing aerosol profiles, and within the next few years the number of instruments will double. Each station always has several neighboring stations within a distance of 100 km. Recently, automated routines for quality checks and calibration of the devices were developed and implemented. Such automated tools, together with the good spatial coverage, make the DWD ceilometer network an excellent tool for model validation with respect to aerosol particle properties. The Copernicus Atmosphere service provides operational forecasts of five aerosol species (sea salt, dust, sulphate, and organic and black carbon, the latter two summarized as biomass burning aerosol) and the boundary layer height. These parameters can be compared with the outcome of ceilometer measurements and consequently the model can be validated; in particular, long-range transported aerosol particles above the boundary layer can be investigated. At the conference the network will be presented, the validation strategy for the CAMS models using ceilometer measurements will be explained, and results will be shown. An outlook to international measuring networks will be given.

  10. Numerical modeling, calibration, and validation of an ultrasonic separator.

    PubMed

    Cappon, Hans; Keesman, Karel J

    2013-03-01

    Our overall goal is to apply acoustic separation technology for the recovery of valuable particulate matter from wastewater in industry. Such large-scale separator systems require detailed design and evaluation to optimize the system performance at the earliest stage possible. Numerical models can facilitate and accelerate the design of this application; therefore, a finite element (FE) model of an ultrasonic particle separator is a prerequisite. In our application, the particle separator consists of a glass resonator chamber with a piezoelectric transducer attached to the glass by means of epoxy adhesive. Separation occurs most efficiently when the system is operated at its main eigenfrequency. The goal of the paper is to calibrate and validate a model of a demonstrator ultrasonic separator, preserving known physical parameters and estimating the remaining unknown or less-certain parameters to allow extrapolation of the model beyond the measured system. A two-step approach was applied to obtain a validated model of the separator. The first step involved the calibration of the piezoelectric transducer. The second step, the subject of this paper, involves the calibration and validation of the entire separator using nonlinear optimization techniques. The results show that the approach led to a fully calibrated 2-D model of the empty separator, which was validated with experiments on a filled separator chamber. The large sensitivity of the separator to small variations indicated that such a system should either be made and operated within tight specifications to obtain the required performance or the operation of the system should be adaptable to cope with a slightly off-spec system, requiring a feedback controller.

  11. PASTIS: Bayesian extrasolar planet validation - II. Constraining exoplanet blend scenarios using spectroscopic diagnoses

    NASA Astrophysics Data System (ADS)

    Santerne, A.; Díaz, R. F.; Almenara, J.-M.; Bouchy, F.; Deleuil, M.; Figueira, P.; Hébrard, G.; Moutou, C.; Rodionov, S.; Santos, N. C.

    2015-08-01

    The statistical validation of transiting exoplanets proved to be an efficient technique to secure the nature of small exoplanet signals which cannot be established by purely spectroscopic means. However, the spectroscopic diagnoses are providing us with useful constraints on the presence of blended stellar contaminants. In this paper, we present how a contaminating star affects the measurements of the various spectroscopic diagnoses as a function of the parameters of the target and contaminating stars using the model implemented into the PASTIS planet-validation software. We find particular cases for which a blend might produce a large radial velocity signal but no bisector variation. It might also produce a bisector variation anticorrelated with the radial velocity one, as in the case of stellar spots. In those cases, the full width at half-maximum variation provides complementary constraints. These results can be used to constrain blend scenarios for transiting planet candidates or radial velocity planets. We review all the spectroscopic diagnoses reported in the literature so far, especially the ones to monitor the line asymmetry. We estimate their uncertainty and compare their sensitivity to blends. Based on that, we recommend the use of BiGauss which is the most sensitive diagnosis to monitor line-profile asymmetry. In this paper, we also investigate the sensitivity of the radial velocities to constrain blend scenarios and develop a formalism to estimate the level of dilution of a blended signal. Finally, we apply our blend model to re-analyse the spectroscopic diagnoses of HD 16702, an unresolved face-on binary which exhibits bisector variations.

  12. A verification and validation process for model-driven engineering

    NASA Astrophysics Data System (ADS)

    Delmas, R.; Pires, A. F.; Polacsek, T.

    2013-12-01

    Model Driven Engineering practitioners already benefit from many well established verification tools, for Object Constraint Language (OCL), for instance. Recently, constraint satisfaction techniques have been brought to Model-Driven Engineering (MDE) and have shown promising results on model verification tasks. With all these tools, it becomes possible to provide users with formal support from early model design phases to model instantiation phases. In this paper, a selection of such tools and methods is presented, and an attempt is made to define a verification and validation process for model design and instance creation centered on UML (Unified Modeling Language) class diagrams and declarative constraints, and involving the selected tools. The suggested process is illustrated with a simple example.

  13. PARALLEL MEASUREMENT AND MODELING OF TRANSPORT IN THE DARHT II BEAMLINE ON ETA II

    SciTech Connect

    Chambers, F W; Raymond, B A; Falabella, S; Lee, B S; Richardson, R A; Weir, J T; Davis, H A; Schultze, M E

    2005-05-31

    To successfully tune the DARHT II transport beamline requires the close coupling of a model of the beam transport and the measurement of the beam observables as the beam conditions and magnet settings are varied. For the ETA II experiment using the DARHT II beamline components this was achieved using the SUICIDE (Simple User Interface Connecting to an Integrated Data Environment) data analysis environment and the FITS (Fully Integrated Transport Simulation) model. The SUICIDE environment has direct access to the experimental beam transport data at acquisition and the FITS predictions of the transport for immediate comparison. The FITS model is coupled into the control system where it can read magnet current settings for real time modeling. We find this integrated coupling is essential for model verification and the successful development of a tuning aid for the efficient convergence on a useable tune. We show the real time comparisons of simulation and experiment and explore the successes and limitations of this close coupled approach.

  14. Robust cross-validation of linear regression QSAR models.

    PubMed

    Konovalov, Dmitry A; Llewellyn, Lyndon E; Vander Heyden, Yvan; Coomans, Danny

    2008-10-01

    A quantitative structure-activity relationship (QSAR) model is typically developed to predict the biochemical activity of untested compounds from the compounds' molecular structures. "The gold standard" of model validation is blindfold prediction, in which the model's predictive power is assessed by how well the model predicts the activity values of compounds that were not considered in any way during model development/calibration. However, during the development of a QSAR model, it is necessary to obtain some indication of the model's predictive power. This is often done by some form of cross-validation (CV). In this study, the concepts of the predictive power and fitting ability of a multiple linear regression (MLR) QSAR model were examined in the CV context, allowing for the presence of outliers. Commonly used predictive power and fitting ability statistics were assessed via Monte Carlo cross-validation when applied to percent human intestinal absorption, blood-brain partition coefficient, and toxicity values of saxitoxin QSAR data sets, as well as three benchmark data sets with known outlier contamination. It was found that (1) a robust version of MLR should always be preferred over the ordinary-least-squares MLR, regardless of the degree of outlier contamination, and that (2) the model's predictive power should only be assessed via robust statistics. The MATLAB and Java source code used in this study is freely available from the QSAR-BENCH section of www.dmitrykonovalov.org for academic use. The Web site also contains the Java-based QSAR-BENCH program, which can be run online via Java Web Start technology (supporting Windows, Mac OS X, Linux/Unix) to reproduce most of the reported results or to apply the reported procedures to other data sets.
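
    A minimal sketch of the kind of comparison described above, assuming scikit-learn is available; HuberRegressor and the median absolute error are illustrative stand-ins for a robust MLR variant and a robust statistic, not necessarily the estimators used in the paper:

```python
# Monte Carlo cross-validation comparing ordinary least squares with a robust
# linear regressor on data containing outliers, summarized by a robust statistic.
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor
from sklearn.model_selection import ShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                  # synthetic molecular descriptors
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.3, size=200)
y[:10] += 15.0                                                 # inject a few outliers

cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
errors = {"OLS": [], "robust": []}
for train, test in cv.split(X):
    for name, model in (("OLS", LinearRegression()), ("robust", HuberRegressor(max_iter=1000))):
        model.fit(X[train], y[train])
        resid = y[test] - model.predict(X[test])
        errors[name].append(np.median(np.abs(resid)))          # robust error statistic

for name, errs in errors.items():
    print(f"{name}: median absolute prediction error = {np.median(errs):.3f}")
```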

  15. ESEEM Analysis of Multi-Histidine Cu(II)-Coordination in Model Complexes, Peptides, and Amyloid-β

    PubMed Central

    2015-01-01

    We validate the use of ESEEM to predict the number of 14N nuclei coupled to a Cu(II) ion by the use of model complexes and two small peptides with well-known Cu(II) coordination. We apply this method to gain new insight into less explored aspects of Cu(II) coordination in amyloid-β (Aβ). Aβ has two coordination modes of Cu(II) at physiological pH. A controversy has existed regarding the number of histidine residues coordinated to the Cu(II) ion in component II, which is dominant at high pH (∼8.7) values. Importantly, with an excess amount of Zn(II) ions, as is the case in brain tissues affected by Alzheimer's disease, component II becomes the dominant coordination mode, as Zn(II) selectively substitutes component I bound to Cu(II). We confirm that component II only contains single histidine coordination, using ESEEM and a set of model complexes. The ESEEM experiments carried out on systematically 15N-labeled peptides reveal that, in component II, His 13 and His 14 are more favored as equatorial ligands compared to His 6. Revealing molecular level details of subcomponents in metal ion coordination is critical in understanding the role of metal ions in Alzheimer's disease etiology. PMID:25014537

  16. A pesticide emission model (PEM) Part II: model evaluation

    NASA Astrophysics Data System (ADS)

    Scholtz, M. T.; Voldner, E.; Van Heyst, B. J.; McMillan, A. C.; Pattey, E.

    In the first part of the paper, the development of a numerical pesticide emission model (PEM) is described for predicting the volatilization of pesticides applied to agricultural soils and crops through soil incorporation, surface spraying, or in the furrow at the time of planting. In this paper the results of three steps toward the evaluation of PEM are reported. The evaluation involves: (i) verifying the numerical algorithms and computer code through comparison of PEM simulations with an available analytical solution of the advection/diffusion equation for semi-volatile solutes in soil; (ii) comparing hourly heat, moisture and emission fluxes of trifluralin and triallate modeled by PEM with fluxes measured using the relaxed eddy-accumulation technique; and (iii) comparison of the PEM predictions of persistence half-life for 29 pesticides with the ranges of persistence found in the literature. The overall conclusion from this limited evaluation study is that PEM is a useful model for estimating the volatilization rates of pesticides from agricultural soils and crops. The lack of reliable estimates of chemical and photochemical degradation rates of pesticide on foliage, however, introduces large uncertainties in the estimates from any model of the volatilization of pesticide that impacts the canopy.
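
    For reference, the analytical verification in step (i) concerns the transport equation for a semi-volatile solute in soil; a generic one-dimensional advection/diffusion form (illustrative notation only, since PEM's exact formulation and boundary conditions may differ) is:

```latex
% Generic 1-D advection-diffusion-degradation equation for solute concentration C(z,t) in soil:
\frac{\partial C}{\partial t}
  = D_e \frac{\partial^2 C}{\partial z^2}
  - v \frac{\partial C}{\partial z}
  - \lambda C
% D_e: effective diffusion/dispersion coefficient, v: advective velocity,
% \lambda: first-order degradation rate.
```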

  17. KINEROS2-AGWA: Model Use, Calibration, and Validation

    NASA Technical Reports Server (NTRS)

    Goodrich, D. C.; Burns, I. S.; Unkrich, C. L.; Semmens, D. J.; Guertin, D. P.; Hernandez, M.; Yatheendradas, S.; Kennedy, J. R.; Levick, L. R.

    2013-01-01

    KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.
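
    For readers unfamiliar with the terminology, the kinematic overland flow equations that K2 approximates with finite differences can be written, in a generic one-dimensional form (illustrative notation, not K2's exact implementation), as:

```latex
% Kinematic wave approximation for overland flow:
\frac{\partial h}{\partial t} + \frac{\partial q}{\partial x} = r(x,t) - f(x,t),
\qquad q = \alpha\, h^{m}
% h: flow depth, q: discharge per unit width, r: rainfall rate, f: infiltration rate;
% \alpha and m are set by slope and hydraulic roughness.
```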

  18. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns

    PubMed Central

    Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain

    2015-01-01

    Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917
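
    The core of Novelty Search is replacing an objective score with a novelty score, typically the mean distance from a candidate's pattern to its k nearest neighbours among patterns already encountered. A minimal sketch of that scoring step follows (hypothetical pattern vectors; the paper's Pattern Space Exploration method adds archive management and an evolutionary loop on top of this):

```python
# Novelty score: mean Euclidean distance from a candidate pattern to its k
# nearest neighbours in an archive of previously seen patterns.
import numpy as np

def novelty(candidate, archive, k=15):
    """Return the mean distance from `candidate` to its k nearest archive members."""
    if len(archive) == 0:
        return float("inf")                 # anything is novel w.r.t. an empty archive
    d = np.linalg.norm(np.asarray(archive) - np.asarray(candidate), axis=1)
    k = min(k, len(d))
    return float(np.mean(np.sort(d)[:k]))

# Usage: keep exploring candidates whose patterns lie far from everything seen so far.
archive = [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([0.0, 0.2])]
print(novelty(np.array([1.0, 1.0]), archive, k=2))
```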

  19. Verifying and Validating Proposed Models for FSW Process Optimization

    NASA Technical Reports Server (NTRS)

    Schneider, Judith

    2008-01-01

    This slide presentation reviews Friction Stir Welding (FSW) and the attempts to model the process in order to optimize and improve the process. The studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) Microstructure features, (2) Flow Streamlines, (3) Steady-state Nature, and (4) Grain Refinement Mechanisms

  20. Experimental validation of a solar-chimney power plant model

    NASA Astrophysics Data System (ADS)

    Fathi, Nima; Wayne, Patrick; Trueba Monje, Ignacio; Vorobieff, Peter

    2016-11-01

    In a solar chimney power plant system (SCPPS), the energy of buoyant hot air is converted to electrical energy. SCPPS includes a collector at ground level covered with a transparent roof. Solar radiation heats the air inside and the ground underneath. There is a tall chimney at the center of the collector, and a turbine located at the base of the chimney. Lack of detailed experimental data for validation is one of the important issues in modeling this type of power plant. We present a small-scale experimental prototype developed to perform validation analysis for modeling and simulation of SCPPS. Detailed velocity measurements are acquired using particle image velocimetry (PIV) at a prescribed Reynolds number. Convection is driven by a temperature-controlled hot plate at the bottom of the prototype. Velocity field data are used to perform validation analysis and quantify any mismatch between the experimental results and the CFD data. CFD code verification is also performed to assess the uncertainty of the numerical model with respect to the grid and the applied mathematical model. The dimensionless output power of the prototype is calculated and compared with a recent analytical solution and the experimental results.
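
    One simple way to quantify the mismatch between PIV measurements and CFD predictions is a relative RMS error over co-located points; a sketch, assuming both fields have already been interpolated onto a common grid (a generic validation metric, not necessarily the one used in this study):

```python
# Relative RMS mismatch between an experimental (PIV) velocity field and a CFD
# prediction sampled at the same points.
import numpy as np

def relative_rms_error(u_exp, u_cfd):
    u_exp, u_cfd = np.asarray(u_exp, float), np.asarray(u_cfd, float)
    return np.sqrt(np.mean((u_cfd - u_exp) ** 2)) / np.sqrt(np.mean(u_exp ** 2))

u_piv = np.array([0.10, 0.22, 0.35, 0.41])   # measured velocities (placeholder values, m/s)
u_sim = np.array([0.12, 0.20, 0.33, 0.45])   # simulated velocities at the same points
print(f"relative RMS mismatch: {relative_rms_error(u_piv, u_sim):.2%}")
```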

  1. Development and validation of a building design waste reduction model.

    PubMed

    Llatas, C; Osmani, M

    2016-10-01

    Reduction in construction waste is a pressing need in many countries. The design of building elements is considered a pivotal process to achieve waste reduction at source, which enables an informed prediction of their wastage reduction levels. However the lack of quantitative methods linking design strategies to waste reduction hinders designing out waste practice in building projects. Therefore, this paper addresses this knowledge gap through the design and validation of a Building Design Waste Reduction Strategies (Waste ReSt) model that aims to investigate the relationships between design variables and their impact on onsite waste reduction. The Waste ReSt model was validated in a real-world case study involving 20 residential buildings in Spain. The validation process comprises three stages. Firstly, design waste causes were analyzed. Secondly, design strategies were applied leading to several alternative low waste building elements. Finally, their potential source reduction levels were quantified and discussed within the context of the literature. The Waste ReSt model could serve as an instrumental tool to simulate designing out strategies in building projects. The knowledge provided by the model could help project stakeholders to better understand the correlation between the design process and waste sources and subsequently implement design practices for low-waste buildings.

  2. Finite Element Model and Validation of Nasal Tip Deformation.

    PubMed

    Manuel, Cyrus T; Harb, Rani; Badran, Alan; Ho, David; Wong, Brian J F

    2017-03-01

    Nasal tip mechanical stability is important for functional and cosmetic nasal airway surgery. Palpation of the nasal tip provides information on tip strength to the surgeon, though it is a purely subjective assessment. Providing a means to simulate nasal tip deformation with a validated model can offer a more objective approach in understanding the mechanics and nuances of the nasal tip support and eventual nasal mechanics as a whole. Herein we present validation of a finite element (FE) model of the nose using physical measurements recorded with an ABS plastic-silicone nasal phantom. Three-dimensional photogrammetry was used to capture the geometry of the phantom at rest and while under steady-state load. The silicone used to make the phantom was mechanically tested and characterized using a linear elastic constitutive model. Surface point clouds of the silicone and FE model were compared for both the loaded and unloaded state. The average Hausdorff distance between actual measurements and FE simulations across the nose was 0.39 ± 1.04 mm and deviated up to 2 mm at the outermost boundaries of the model. FE simulation and measurements were in near-complete agreement in the immediate vicinity of the nasal tip with millimeter accuracy. We have demonstrated validation of a two-component nasal FE model, which could be used to model more complex modes of deformation where direct measurement may be challenging. This is the first step in developing a nasal model to simulate nasal mechanics and ultimately the interaction between geometry and airflow.
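
    The surface comparison described above reduces to nearest-neighbour distances between two point clouds; a minimal sketch of the mean deviation and directed Hausdorff distance using SciPy (illustrative of the kind of comparison reported, not the authors' processing pipeline):

```python
# Compare two surface point clouds by nearest-neighbour distances, reporting the
# mean deviation and the directed Hausdorff (maximum) distance.
import numpy as np
from scipy.spatial import cKDTree

def cloud_deviation(measured, simulated):
    """Distances from each measured point to its nearest point on the simulated surface."""
    tree = cKDTree(np.asarray(simulated))
    d, _ = tree.query(np.asarray(measured))
    return d.mean(), d.max()                    # mean deviation, directed Hausdorff distance

rng = np.random.default_rng(1)
measured = rng.normal(size=(1000, 3))           # stand-in point clouds
simulated = measured + rng.normal(scale=0.4, size=(1000, 3))
mean_d, max_d = cloud_deviation(measured, simulated)
print(f"mean deviation {mean_d:.2f}, max deviation {max_d:.2f}")
```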

  3. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it reflects the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications that depend on the same parameters, beyond the validation domain.
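
    One concrete, simplified realization of the "minimal spread containing all observations" idea for IPMs is a linear centre line with a constant half-width, found by a Chebyshev (min-max) fit posed as a linear program; the sketch below assumes SciPy and is only an illustration of the concept, not the paper's formulation:

```python
# Simplified Interval Predictor Model: fit a linear centre line theta and the
# smallest half-width s such that every observation lies in [X theta - s, X theta + s].
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=50)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=50)

X = np.column_stack([np.ones_like(x), x])              # basis functions [1, x]
n, p = X.shape
c = np.zeros(p + 1); c[-1] = 1.0                       # minimize the half-width s
A_ub = np.vstack([np.hstack([X, -np.ones((n, 1))]),    #  X theta - s <= y
                  np.hstack([-X, -np.ones((n, 1))])])  # -X theta - s <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * p + [(0, None)])
theta, s = res.x[:p], res.x[-1]
print(f"centre line: {theta[1]:.2f} x + {theta[0]:.2f}, half-width s = {s:.3f}")
```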

  4. A benchmark for the validation of solidification modelling algorithms

    NASA Astrophysics Data System (ADS)

    Kaschnitz, E.; Heugenhauser, S.; Schumacher, P.

    2015-06-01

    This work presents two three-dimensional solidification models, which were solved by several commercial solvers (MAGMASOFT, FLOW-3D, ProCAST, WinCast, ANSYS, and OpenFOAM). Surprisingly, the results show noticeable differences. The results are analyzed, in a manner similar to a round-robin test procedure, to obtain reference values for temperatures and their uncertainties at selected positions in the model. The first model is similar to an adiabatic calorimeter with an aluminum alloy solidifying in a copper block. For this model, an analytical solution for the overall temperature at steady state can be calculated. The second model implements additional heat transfer boundary conditions at outer faces. The geometry of the models, the initial and boundary conditions, as well as the material properties, are kept as simple as possible but, nevertheless, close to a realistic solidification situation. The resulting reference temperatures can be used to validate self-written solidification solvers and check the accuracy of commercial solidification programs.

  5. Rationality Validation of a Layered Decision Model for Network Defense

    SciTech Connect

    Wei, Huaqiang; Alves-Foss, James; Zhang, Du; Frincke, Deb

    2007-08-31

    We propose a cost-effective network defense strategy built on three key decision layers: security policies, defense strategies, and real-time defense tactics for countering immediate threats. A layered decision model (LDM) can be used to capture this decision process. The LDM helps decision-makers gain insight into the hierarchical relationships among inter-connected entities and decision types, and supports the selection of cost-effective defense mechanisms to safeguard computer networks. To be effective as a business tool, it is first necessary to validate the rationality of the model before applying it to real-world business cases. This paper describes our efforts in validating the LDM rationality through simulation.

  6. Approaches to Validation of Models for Low Gravity Fluid Behavior

    NASA Technical Reports Server (NTRS)

    Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad

    2005-01-01

    This paper details the authors' experiences with the validation of computer models to predict low gravity fluid behavior. It reviews the literature of low gravity fluid behavior as a starting point for developing a baseline set of test cases. It examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues are that most of the data are described by empirical correlations rather than fundamental relations; detailed measurements of the flow field have not been made; free-surface shapes are observed, but through thick plastic cylinders, and are therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days, whereas the zero-gravity time available has been only seconds.

  7. Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models

    SciTech Connect

    Mosher, J.; Guy, J.; Kessler, R.; Astier, P.; Marriner, J.; Betoule, M.; Sako, M.; El-Hage, P.; Biswas, R.; Pain, R.; Kuhlmann, S.; Regnault, N.; Frieman, J. A.; Schneider, D. P.

    2014-08-29

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ~120 low-redshift (z < 0.1) SNe Ia, ~255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ~290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input – w_recovered) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  8. Cosmological parameter uncertainties from SALT-II type Ia supernova light curve models

    SciTech Connect

    Mosher, J.; Sako, M.; Guy, J.; Astier, P.; Betoule, M.; El-Hage, P.; Pain, R.; Regnault, N.; Marriner, J.; Biswas, R.; Kuhlmann, S.; Schneider, D. P.

    2014-09-20

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ∼120 low-redshift (z < 0.1) SNe Ia, ∼255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ∼290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input – w_recovered) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  9. Creation of Robotic Snake to Validate Contact Modeling in Simulation

    DTIC Science & Technology

    2013-12-01

    ...invasive surgery. Given the potential of snake robots as highly maneuverable and dynamic robots, it would be advantageous to have one that could be... (Hoppel, M., Creation of Robotic Snake to Validate Contact Modeling in Simulation, ARL-CR-0725, Army Research Laboratory, Aberdeen Proving Ground, MD 21005, December 2013).

  10. Model Validation for a Noninvasive Arterial Stenosis Detection Problem

    DTIC Science & Technology

    2013-06-09

    Model validation for a noninvasive arterial stenosis detection problem. H. T. Banks, Shuhua Hu and Zackary R. Kenz, Center for Research in Scientific... invasive method for detection, localization, and characterization of an arterial stenosis (a blockage or partial blockage in an artery). A method has been... proposed to detect shear waves in the chest cavity which have been generated by disturbances in the blood flow resulting from a stenosis. In order to

  11. A community diagnostic tool for chemistry climate model validation

    NASA Astrophysics Data System (ADS)

    Gettelman, A.; Eyring, V.; Fischer, C.; Shiona, H.; Cionni, I.; Neish, M.; Morgenstern, O.; Wood, S. W.; Li, Z.

    2012-09-01

    This technical note presents an overview of the Chemistry-Climate Model Validation Diagnostic (CCMVal-Diag) tool for model evaluation. The CCMVal-Diag tool is a flexible and extensible open source package that facilitates the complex evaluation of global models. Models can be compared to other models, ensemble members (simulations with the same model), and/or many types of observations. The initial construction and application is to coupled chemistry-climate models (CCMs) participating in CCMVal, but the evaluation of climate models that submitted output to the Coupled Model Intercomparison Project (CMIP) is also possible. The package has been used to assist with analysis of simulations for the 2010 WMO/UNEP Scientific Ozone Assessment and the SPARC Report on the Evaluation of CCMs. The CCMVal-Diag tool is described and examples of how it functions are presented, along with links to detailed descriptions, instructions and source code. The CCMVal-Diag tool supports model development and quantifies model changes, both for different versions of individual models and for different generations of community-wide collections of models used in international assessments. The code allows further extensions by different users for different applications and types, e.g. to other components of the Earth system. User modifications are encouraged and easy to perform with minimum coding.

  12. A community diagnostic tool for Chemistry Climate Model Validation

    NASA Astrophysics Data System (ADS)

    Gettelman, A.; Eyring, V.; Fischer, C.; Shiona, H.; Cionni, I.; Neish, M.; Morgenstern, O.; Wood, S. W.; Li, Z.

    2012-05-01

    This technical note presents an overview of the Chemistry-Climate Model Validation Diagnostic (CCMVal-Diag) tool for model evaluation. The CCMVal-Diag tool is a flexible and extensible open source package that facilitates the complex evaluation of global models. Models can be compared to other models, ensemble members (simulations with the same model), and/or many types of observations. The tool can also compute quantitative performance metrics. The initial construction and application is to coupled Chemistry-Climate Models (CCMs) participating in CCMVal, but the evaluation of climate models that submitted output to the Coupled Model Intercomparison Project (CMIP) is also possible. The package has been used to assist with analysis of simulations for the 2010 WMO/UNEP Scientific Ozone Assessment and the SPARC Report on the Evaluation of CCMs. The CCMVal-Diag tool is described and examples of how it functions are presented, along with links to detailed descriptions, instructions and source code. The CCMVal-Diag tool is supporting model development as well as quantifying model improvements, both for different versions of individual models and for different generations of community-wide collections of models used in international assessments. The code allows further extensions by different users for different applications and types, e.g. to other components of the Earth System. User modifications are encouraged and easy to perform with a minimum of coding.

  13. Multi-objective model calibration and validation based on runoff and groundwater levels

    NASA Astrophysics Data System (ADS)

    Beldring, S.

    2003-04-01

    The multi-objective calibration procedure MOCOM-UA was used to evaluate the validity of a precipitation-runoff model by forcing the model to simulate several observed system responses simultaneously. The model is based on kinematic wave approximations to saturated subsurface flow and saturation overland flow at the hillslope scale in a landscape with a shallow layer of permeable deposits overlying a relatively impermeable bedrock. Data from a catchment with till deposits in the boreal forest zone in south-east Norway were used in this study. The following results were found: (i) The MOCOM-UA method was capable of exploiting information about the physical system contained in the measurement data time series; (ii) The multi-objective calibration procedure provided estimates of the uncertainty associated with model predictions and parameters; (iii) Multi-objective calibration constraining the behaviour of the precipitation-runoff model to observed runoff and groundwater levels reduced the uncertainty of model predictions; (iv) The multi-objective method reduced the uncertainty of the estimates of model parameters; (v) The precipitation-runoff model was able to reproduce several observed system responses simultaneously during both calibration and validation periods; and (vi) Groundwater table depths exerted a major control on the hydrological response of the investigated catchment.

  14. Estimating the predictive validity of diabetic animal models in rosiglitazone studies.

    PubMed

    Varga, O E; Zsíros, N; Olsson, I A S

    2015-06-01

    For therapeutic studies, predictive validity of animal models - arguably the most important feature of animal models in terms of human relevance - can be calculated retrospectively by obtaining data on treatment efficacy from human and animal trials. Using rosiglitazone as a case study, we aim to determine the predictive validity of animal models of diabetes, by analysing which models perform most similarly to humans during rosiglitazone treatment in terms of changes in standard diabetes diagnosis parameters (glycosylated haemoglobin [HbA1c] and fasting glucose levels). A further objective of this paper was to explore the impact of four covariates on the predictive capacity: (i) diabetes induction method; (ii) drug administration route; (iii) sex of animals and (iv) diet during the experiments. Despite the variable consistency of animal species-based models with the human reference for glucose and HbA1c treatment effects, our results show that glucose and HbA1c treatment effects in rats agreed better with the expected values based on human data than in other species. Induction method was also found to be a substantial factor affecting animal model performance. The study concluded that regular reassessment of animal models can help to identify human relevance of each model and adapt research design for actual research goals.

  15. Higgs potential in the type II seesaw model

    SciTech Connect

    Arhrib, A.; Benbrik, R.; Chabab, M.; Rahili, L.; Ramadan, J.; Moultaka, G.; Peyranere, M. C.

    2011-11-01

    The standard model Higgs sector, extended by one weak gauge triplet of scalar fields with a very small vacuum expectation value, is a very promising setting to account for neutrino masses through the so-called type II seesaw mechanism. In this paper we consider the general renormalizable doublet/triplet Higgs potential of this model. We perform a detailed study of its main dynamical features that depend on five dimensionless couplings and two mass parameters after spontaneous symmetry breaking, and highlight the implications for the Higgs phenomenology. In particular, we determine (i) the complete set of tree-level unitarity constraints on the couplings of the potential and (ii) the exact tree-level boundedness-from-below constraints on these couplings, valid for all directions. When combined, these constraints delineate precisely the theoretically allowed parameter space domain within our perturbative approximation. Among the seven physical Higgs states of this model, the mass of the lighter (heavier) CP-even state h^0 (H^0) will always satisfy a theoretical upper (lower) bound that is reached for a critical value μ_c of μ (the mass parameter controlling triple couplings among the doublet/triplet Higgses). Saturating the unitarity bounds, we find an upper bound on m_h^0 and two regimes, μ ≳ μ_c and μ ≲ μ_c. In the first regime the Higgs sector is typically very heavy, and only h^0, which becomes SM-like, could be accessible to the LHC. In contrast, in the second regime, somewhat overlooked in the literature, most of the Higgs sector is light. In particular, the heaviest state H^0 becomes SM-like, the lighter states being the CP-odd Higgs, the (doubly) charged Higgses, and a decoupled h^0, possibly

  16. Model of the expansion of H II region RCW 82

    SciTech Connect

    Krasnobaev, K. V.; Kotova, G. Yu.; Tagirova, R. R. E-mail: gviana2005@gmail.com

    2014-05-10

    This paper aims to resolve the problem of formation of young objects observed in the RCW 82 H II region. In the framework of a classical trigger model the estimated time of fragmentation is larger than the estimated age of the H II region. Thus the young objects could not have formed during the dynamical evolution of the H II region. We propose a new model that helps resolve this problem. This model suggests that the H II region RCW 82 is embedded in a cloud of limited size that is denser than the surrounding interstellar medium. According to this model, when the ionization-shock front leaves the cloud it causes the formation of an accelerating dense gas shell. In the accelerated shell, the effects of the Rayleigh-Taylor (R-T) instability dominate and the characteristic time of the growth of perturbations with the observed magnitude of about 3 pc is 0.14 Myr, which is less than the estimated age of the H II region. The total time t_Σ, which is the sum of the expansion time of the H II region to the edge of the cloud, the time of the R-T instability growth, and the free fall time, is estimated as 0.44 Myr < t_Σ < 0.78 Myr. We conclude that the young objects in the H II region RCW 82 could be formed as a result of the R-T instability with subsequent fragmentation into large-scale condensations.

  17. Modeling the Arm II core in MicroCap IV

    SciTech Connect

    Dalton, A.C.

    1996-11-01

    This paper reports on how an electrical model for the core of the ARM II machine was created and how to use this model. We wanted a model of the electrical characteristics of the ARM II core in order to simulate this machine and to assist in the design of a future machine. We wanted this model to be able to simulate saturation, variable loss, and reset. Using the Hodgdon model and the circuit analysis program MicroCap IV, this was accomplished. This paper is written in such a way as to allow someone not familiar with the project to understand it.

  18. Methods for Geometric Data Validation of 3d City Models

    NASA Astrophysics Data System (ADS)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is however a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data have a far wider range of aspects which influence their quality, and the idea of quality itself is application-dependent. Thus, concepts for definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point would be to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon level checks to validate the correctness of each polygon, i.e. closeness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
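
    One of the polygon-level checks mentioned above, planarity within a tolerance, can be sketched by fitting the best plane through the ring vertices with an SVD and testing the largest residual; the tolerance value below is illustrative and not CityDoctor's default:

```python
# Planarity check for one polygon of a solid: fit the best plane through its
# vertices and flag the polygon if any vertex deviates by more than a tolerance.
import numpy as np

def is_planar(vertices, tol=0.01):
    """vertices: (n, 3) boundary points of one polygon; tol in model units (e.g. metres)."""
    pts = np.asarray(vertices, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]                          # direction of least variance = plane normal
    return float(np.max(np.abs(centred @ normal))) <= tol

roof = [(0, 0, 10.0), (5, 0, 10.0), (5, 4, 10.02), (0, 4, 10.0)]
print(is_planar(roof, tol=0.05))             # True: a 2 cm warp passes a 5 cm tolerance
```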

  19. A validated approach for modeling collapse of steel structures

    NASA Astrophysics Data System (ADS)

    Saykin, Vitaliy Victorovich

    A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during a lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state-of-the-art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as a function of triaxiality is employed. The models to determine fracture initiation, softening and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are
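
    For readers unfamiliar with the quantity, stress triaxiality is the ratio of the mean (hydrostatic) stress to the von Mises equivalent stress, and the triaxiality dependence of fracture strain is often represented by exponential-type fits; the forms below are generic illustrations, not the specific fracture models calibrated in this work:

```latex
% Stress triaxiality:
\eta = \frac{\sigma_m}{\sigma_{\mathrm{eq}}},
\qquad \sigma_m = \tfrac{1}{3}\,(\sigma_1 + \sigma_2 + \sigma_3)
% Exponential (Johnson--Cook-type) dependence of the fracture strain on triaxiality,
% with material constants d_1, d_2, d_3:
\bar{\varepsilon}_f(\eta) = d_1 + d_2\, e^{-d_3 \eta}
```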

  20. Validation of an Urban Parameterization in a Mesoscale Model

    SciTech Connect

    Leach, M.J.; Chin, H.

    2001-07-19

    The Atmospheric Science Division at Lawrence Livermore National Laboratory uses the Naval Research Laboratory's Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) for both operations and research. COAMPS is a non-hydrostatic model, designed as a multi-scale simulation system ranging from synoptic down to meso, storm and local terrain scales. As model resolution increases, the forcing due to small-scale complex terrain features including urban structures and surfaces, intensifies. An urban parameterization has been added to the Naval Research Laboratory's mesoscale model, COAMPS. The parameterization attempts to incorporate the effects of buildings and urban surfaces without explicitly resolving them, and includes modeling the mean flow to turbulence energy exchange, radiative transfer, the surface energy budget, and the addition of anthropogenic heat. The Chemical and Biological National Security Program's (CBNP) URBAN field experiment was designed to collect data to validate numerical models over a range of length and time scales. The experiment was conducted in Salt Lake City in October 2000. The scales ranged from circulation around single buildings to flow in the entire Salt Lake basin. Data from the field experiment includes tracer data as well as observations of mean and turbulence atmospheric parameters. Wind and turbulence predictions from COAMPS are used to drive a Lagrangian particle model, the Livermore Operational Dispersion Integrator (LODI). Simulations with COAMPS and LODI are used to test the sensitivity to the urban parameterization. Data from the field experiment, including the tracer data and the atmospheric parameters, are also used to validate the urban parameterization.

  1. Full-scale validation of a model of algal productivity.

    PubMed

    Béchet, Quentin; Shilton, Andy; Guieysse, Benoit

    2014-12-02

    While modeling algal productivity outdoors is crucial to assess the economic and environmental performance of full-scale cultivation, most of the models hitherto developed for this purpose have not been validated under fully relevant conditions, especially with regard to temperature variations. The objective of this study was to independently validate a model of algal biomass productivity accounting for both light and temperature and constructed using parameters experimentally derived from short-term indoor experiments. To do this, the accuracy of a model developed for Chlorella vulgaris was assessed against data collected from photobioreactors operated outdoors (New Zealand) over different seasons, years, and operating conditions (temperature-control/no temperature-control, batch, and fed-batch regimes). The model accurately predicted experimental productivities under all conditions tested, yielding an overall accuracy of ±8.4% over 148 days of cultivation. For the purpose of assessing the feasibility of full-scale algal cultivation, the use of the productivity model was therefore shown to markedly reduce uncertainty in the cost of biofuel production while also eliminating uncertainties in water demand, a critical element of environmental impact assessments. Simulations at five climatic locations demonstrated that temperature-control in outdoor photobioreactors would require tremendous amounts of energy without a considerable increase in algal biomass. Prior assessments neglecting the impact of temperature variations on algal productivity in photobioreactors may therefore be erroneous.

  2. Construct Validity of an Inanimate Training Model for Laparoscopic Appendectomy

    PubMed Central

    Sanchez-Ismayel, Alexis; Sanchez, Renata; Pena, Romina; Salamo, Oriana

    2013-01-01

    Background and Objective: The use of training models in laparoscopic surgery allows the surgical team to practice procedures in a safe environment. The aim of this study was to determine the capability of an inanimate laparoscopic appendectomy model to discriminate between different levels of surgical experience (construct validity). Methods: The performance of 3 groups with different levels of expertise in laparoscopic surgery—experts (Group A), intermediates (Group B), and novices (Group C)—was evaluated. The groups were instructed on the task to perform in the model by means of a video tutorial. Procedures were recorded in a digital format for later analysis using the Global Operative Assessment of Laparoscopic Skills (GOALS) score; procedure time was registered. The data were analyzed using the analysis of variance test. Results: Twelve subjects were evaluated, 4 in each group, using the GOALS score and time required to finish the task. Higher scores were observed in the expert group, followed by the intermediate and novice groups, with a statistically significant difference. Regarding procedure time, a significant difference was also found between the groups, with the experts having the shortest time. The proposed model is able to discriminate among individuals with different levels of expertise, indicating that the abilities that the model evaluates are relevant in the surgeon's performance. Conclusions: Construct validity for the inanimate full-task laparoscopic appendectomy training model was demonstrated. Therefore, it is a useful tool in the development and evaluation of the resident in training. PMID:24018084

  3. Validation of the WATEQ4 geochemical model for uranium

    SciTech Connect

    Krupka, K.M.; Jenne, E.A.; Deutsch, W.J.

    1983-09-01

    As part of the Geochemical Modeling and Nuclide/Rock/Groundwater Interactions Studies Program, a study was conducted to partially validate the WATEQ4 aqueous speciation-solubility geochemical model for uranium. The solubility controls determined with the WATEQ4 geochemical model were in excellent agreement with those laboratory studies in which the solids schoepite (UO2(OH)2·H2O), UO2(OH)2, and rutherfordine (UO2CO3) were identified as actual solubility controls for uranium. The results of modeling solution analyses from laboratory studies of uranyl phosphate solids, however, identified possible errors in the characterization of solids in the original solubility experiments. As part of this study, significant deficiencies in the WATEQ4 thermodynamic data base for uranium solutes and solids were corrected. Revisions included recalculation of selected uranium reactions. Additionally, thermodynamic data for the hydroxyl complexes of U(VI), including anionic U(VI) species, were evaluated (to the extent permitted by the available data). Vanadium reactions were also added to the thermodynamic data base because uranium-vanadium solids can exist in natural ground-water systems. This study is only a partial validation of the WATEQ4 geochemical model because the available laboratory solubility studies do not cover the range of solid phases, alkaline pH values, and concentrations of inorganic complexing ligands needed to evaluate the potential solubility of uranium in ground waters associated with various proposed nuclear waste repositories. Further validation of this or other geochemical models for uranium will require careful determinations of uraninite solubility over the pH range of 7 to 10 under highly reducing conditions and of uranyl hydroxide and phosphate solubilities over the pH range of 7 to 10 under oxygenated conditions.

  4. Model selection, identification and validation in anaerobic digestion: a review.

    PubMed

    Donoso-Bravo, Andres; Mailier, Johan; Martin, Cristina; Rodríguez, Jorge; Aceves-Lara, César Arturo; Vande Wouwer, Alain

    2011-11-01

    Anaerobic digestion enables waste (water) treatment and energy production in the form of biogas. The successful implementation of this process has led to an increasing interest worldwide. However, anaerobic digestion is a complex biological process, where hundreds of microbial populations are involved, and whose start-up and operation are delicate issues. In order to better understand the process dynamics and to optimize the operating conditions, the availability of dynamic models is of paramount importance. Such models have to be inferred from prior knowledge and experimental data collected from real plants. Modeling and parameter identification are vast subjects, offering a realm of approaches and methods, which can be difficult to fully understand by scientists and engineers dedicated to the plant operation and improvements. This review article discusses existing modeling frameworks and methodologies for parameter estimation and model validation in the field of anaerobic digestion processes. The point of view is pragmatic, intentionally focusing on simple but efficient methods.

  5. Validation of a Hertzian contact model with nonlinear damping

    NASA Astrophysics Data System (ADS)

    Sierakowski, Adam

    2015-11-01

    Due to limited spatial resolution, most disperse particle simulation methods rely on simplified models for incorporating short-range particle interactions. In this presentation, we introduce a contact model that combines the Hertz elastic restoring force with a nonlinear damping force, requiring only material properties and no tunable parameters. We have implemented the model in a resolved-particle flow solver that implements the Physalis method, which accurately captures hydrodynamic interactions by analytically enforcing the no-slip condition on the particle surface. We summarize the results of a few numerical studies that suggest the validity of the contact model over a range of particle interaction intensities (i.e., collision Stokes numbers) when compared with experimental data. This work was supported by the National Science Foundation under Grant Number CBET1335965 and the Johns Hopkins University Modeling Complex Systems IGERT program.
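
    A common way to write such a contact force combines the Hertzian restoring term with a damping term that scales with the same power of the overlap (a Hunt-Crossley-type form; the exact expression and coefficients used in this work may differ):

```latex
% Hertzian normal contact force with nonlinear damping, for overlap \delta >= 0
% and overlap rate \dot{\delta} (illustrative Hunt--Crossley-type form):
F_n = k_H\, \delta^{3/2} + \lambda\, \delta^{3/2}\, \dot{\delta}
% k_H follows from Hertz theory (elastic moduli and radii of the contacting bodies);
% \lambda controls the energy dissipated during a collision.
```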

  6. Molprobity's ultimate rotamer-library distributions for model validation.

    PubMed

    Hintze, Bradley J; Lewis, Steven M; Richardson, Jane S; Richardson, David C

    2016-09-01

    Here we describe the updated MolProbity rotamer-library distributions derived from an order-of-magnitude larger and more stringently quality-filtered dataset of about 8000 (vs. 500) protein chains, and we explain the resulting changes and improvements to model validation as seen by users. To include only side-chains with satisfactory justification for their given conformation, we added residue-specific filters for electron-density value and model-to-density fit. The combined new protocol retains a million residues of data, while cleaning up false-positive noise in the multi-χ datapoint distributions. It enables unambiguous characterization of conformational clusters nearly 1000-fold less frequent than the most common ones. We describe examples of local interactions that favor these rare conformations, including the role of authentic covalent bond-angle deviations in enabling presumably strained side-chain conformations. Further, along with favored and outlier, an allowed category (0.3-2.0% occurrence in reference data) has been added, analogous to Ramachandran validation categories. The new rotamer distributions are used for current rotamer validation in MolProbity and PHENIX, and for rotamer choice in PHENIX model-building and refinement. The multi-dimensional χ distributions and Top8000 reference dataset are freely available on GitHub. These rotamers are termed "ultimate" because data sampling and quality are now fully adequate for this task, and also because we believe the future of conformational validation should integrate side-chain with backbone criteria. Proteins 2016; 84:1177-1189. © 2016 Wiley Periodicals, Inc.

  7. Experimental validation of flexible robot arm modeling and control

    NASA Technical Reports Server (NTRS)

    Ulsoy, A. Galip

    1989-01-01

    Flexibility is important for high speed, high precision operation of lightweight manipulators. Accurate dynamic modeling of flexible robot arms is needed. Previous work has mostly been based on linear elasticity with prescribed rigid body motions (i.e., no effect of flexible motion on rigid body motion). Little or no experimental validation of dynamic models for flexible arms is available. Experimental results are also limited for flexible arm control. Researchers include the effects of prismatic as well as revolute joints. They investigate the effect of full coupling between the rigid and flexible motions, and of axial shortening, and consider the control of flexible arms using only additional sensors.

  8. A 2-stage phase II design with direct assignment option in stage II for initial marker validation.

    PubMed

    An, Ming-Wen; Mandrekar, Sumithra J; Sargent, Daniel J

    2012-08-15

    Biomarkers are critical to targeted therapies, as they may identify patients more likely to benefit from a treatment. Several prospective designs for biomarker-directed therapy have been previously proposed, differing primarily in the study population, randomization scheme, or both. Recognizing the need for randomization, yet acknowledging the possibility of promising but inconclusive results after a stage I cohort of randomized patients, we propose a 2-stage phase II design on marker-positive patients that allows for direct assignment in a stage II cohort. In stage I, marker-positive patients are equally randomized to receive experimental treatment or control. Stage II has the option to adopt "direct assignment" whereby all patients receive experimental treatment. Through simulation, we studied the power and type I error rate of our design compared with a balanced randomized two-stage design, and conducted sensitivity analyses to study the effect of timing of stage I analysis, population shift effects, and unbalanced randomization. Our proposed design has minimal loss in power (<1.8%) and increased type I error rate (<2.1%) compared with a balanced randomized design. The maximum increase in type I error rate in the presence of a population shift was between 3.1% and 5%, and the loss in power across possible timings of stage I analysis was less than 1.2%. Our proposed design has desirable statistical properties with potential appeal in practice. The direct assignment option, if adopted, provides for an "extended confirmation phase" as an alternative to stopping the trial early for evidence of efficacy in stage I.

  9. Validating a spatially distributed hydrological model with soil morphology data

    NASA Astrophysics Data System (ADS)

    Doppler, T.; Honti, M.; Zihlmann, U.; Weisskopf, P.; Stamm, C.

    2013-10-01

    Spatially distributed hydrological models are popular tools in hydrology and they are claimed to be useful to support management decisions. Despite the high spatial resolution of the computed variables, calibration and validation is often carried out only on discharge time-series at specific locations due to the lack of spatially distributed reference data. Because of this restriction, the predictive power of these models, with regard to predicted spatial patterns, can usually not be judged. An example of spatial predictions in hydrology is the prediction of saturated areas in agricultural catchments. These areas can be important source areas for the transport of agrochemicals to the stream. We set up a spatially distributed model to predict saturated areas in a 1.2 km2 catchment in Switzerland with moderate topography. Around 40% of the catchment area are artificially drained. We measured weather data, discharge and groundwater levels in 11 piezometers for 1.5 yr. For broadening the spatially distributed data sets that can be used for model calibration and validation, we translated soil morphological data available from soil maps into an estimate of the duration of soil saturation in the soil horizons. We used redox-morphology signs for these estimates. This resulted in a data set with high spatial coverage on which the model predictions were validated. In general, these saturation estimates corresponded well to the measured groundwater levels. We worked with a model that would be applicable for management decisions because of its fast calculation speed and rather low data requirements. We simultaneously calibrated the model to the groundwater levels in the piezometers and discharge. The model was able to reproduce the general hydrological behavior of the catchment in terms of discharge and absolute groundwater levels. However, the accuracy of the groundwater level predictions was not high enough to be used for the prediction of saturated areas. The groundwater

  10. Traveling with Cognitive Tests: Testing the Validity of a KABC-II Adaptation in India

    ERIC Educational Resources Information Center

    Malda, Maike; van de Vijver, Fons J. R.; Srinivasan, Krishnamachari; Transler, Catherine; Sukumar, Prathima

    2010-01-01

    The authors evaluated the adequacy of an extensive adaptation of the American Kaufman Assessment Battery for Children, second edition (KABC-II), for 6- to 10-year-old Kannada-speaking children of low socioeconomic status in Bangalore, South India. The adapted KABC-II was administered to 598 children. Subtests showed high reliabilities, the…

  11. Calibration and validation of DRAINMOD to model bioretention hydrology

    NASA Astrophysics Data System (ADS)

    Brown, R. A.; Skaggs, R. W.; Hunt, W. F.

    2013-04-01

    Previous field studies have shown that the hydrologic performance of bioretention cells varies greatly because of factors such as underlying soil type, physiographic region, drainage configuration, surface storage volume, drainage area to bioretention surface area ratio, and media depth. To more accurately describe bioretention hydrologic response, a long-term hydrologic model that generates a water balance is needed. Some current bioretention models lack the ability to perform long-term simulations and others have never been calibrated from field monitored bioretention cells with underdrains. All peer-reviewed models lack the ability to simultaneously perform both of the following functions: (1) model an internal water storage (IWS) zone drainage configuration and (2) account for soil-water content using the soil-water characteristic curve. DRAINMOD, a widely-accepted agricultural drainage model, was used to simulate the hydrologic response of runoff entering a bioretention cell. The concepts of water movement in bioretention cells are very similar to those of agricultural fields with drainage pipes, so many bioretention design specifications corresponded directly to DRAINMOD inputs. Detailed hydrologic measurements were collected from two bioretention field sites in Nashville and Rocky Mount, North Carolina, to calibrate and test the model. Each field site had two sets of bioretention cells with varying media depths, media types, drainage configurations, underlying soil types, and surface storage volumes. After 12 months, one of these characteristics was altered - surface storage volume at Nashville and IWS zone depth at Rocky Mount. At Nashville, during the second year (post-repair period), the Nash-Sutcliffe coefficients for drainage and exfiltration/evapotranspiration (ET) both exceeded 0.8 during the calibration and validation periods. During the first year (pre-repair period), the Nash-Sutcliffe coefficients for drainage, overflow, and exfiltration
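
    The Nash-Sutcliffe coefficient quoted above compares the residual variance of the simulation against the variance of the observations; a minimal sketch of the statistic:

```python
# Nash-Sutcliffe efficiency (NSE): 1 is a perfect match; values <= 0 mean the
# model predicts no better than the mean of the observations.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = [1.2, 3.4, 2.8, 0.9, 4.1]   # e.g. monthly drainage volumes (placeholder values)
sim = [1.0, 3.6, 2.5, 1.1, 3.9]
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```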

  12. Organic acid modeling and model validation: Workshop summary. Final report

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9-10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled "Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources." The workshop was attended by a team of internationally recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  13. Organic acid modeling and model validation: Workshop summary

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9-10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled "Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources." The workshop was attended by a team of internationally recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  14. Validation of coupled atmosphere-fire behavior models

    SciTech Connect

    Bossert, J.E.; Reisner, J.M.; Linn, R.R.; Winterkamp, J.L.; Schaub, R.; Riggan, P.J.

    1998-12-31

    Recent advances in numerical modeling and computer power have made it feasible to simulate the dynamical interaction and feedback between the heat and turbulence induced by wildfires and the local atmospheric wind and temperature fields. At Los Alamos National Laboratory, the authors have developed a modeling system that includes this interaction by coupling a high resolution atmospheric dynamics model, HIGRAD, with a fire behavior model, BEHAVE, to predict the spread of wildfires. The HIGRAD/BEHAVE model is run at very high resolution to properly resolve the fire/atmosphere interaction. At present, these coupled wildfire model simulations are computationally intensive. The additional complexity of these models requires sophisticated methods for assuring their reliability in real world applications. With this in mind, a substantial part of the research effort is directed at model validation. Several instrumented prescribed fires have been conducted, with multi-agency support and participation, in chaparral, marsh, and scrub environments in coastal areas of Florida and inland California. In this paper, the authors first describe the data required to initialize the components of the wildfire modeling system. Then they present results from one of the Florida fires, and discuss a strategy for further testing and improvement of coupled weather/wildfire models.

  15. Validating and Verifying Biomathematical Models of Human Fatigue

    NASA Technical Reports Server (NTRS)

    Martinez, Siera Brooke; Quintero, Luis Ortiz; Flynn-Evans, Erin

    2015-01-01

    Airline pilots experience acute and chronic sleep deprivation, sleep inertia, and circadian desynchrony due to the need to schedule flight operations around the clock. This sleep loss and circadian desynchrony gives rise to cognitive impairments, reduced vigilance and inconsistent performance. Several biomathematical models, based principally on patterns observed in circadian rhythms and homeostatic drive, have been developed to predict a pilot's level of fatigue or alertness. These models allow the Federal Aviation Administration (FAA) and commercial airlines to make decisions about pilot capabilities and flight schedules. Although these models have been validated in a laboratory setting, they have not been thoroughly tested in operational environments where uncontrolled factors, such as environmental sleep disrupters, caffeine use and napping, may impact actual pilot alertness and performance. We will compare the predictions of three prominent biomathematical fatigue models (McCauley Model, Harvard Model, and the privately-sold SAFTE-FAST Model) to actual measures of alertness and performance. We collected sleep logs, movement and light recordings, psychomotor vigilance task (PVT) data, and urinary melatonin (a marker of circadian phase) from 44 pilots in a short-haul commercial airline over one month. We will statistically compare the model predictions to lapses on the PVT and to circadian phase. We will calculate the sensitivity and specificity of each model prediction under different scheduling conditions. Our findings will aid operational decision-makers in determining the reliability of each model under real-world scheduling situations.
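
    Sensitivity and specificity in this setting measure how often a model's predicted fatigue state agrees with observed PVT lapses. A minimal sketch of those two statistics, using hypothetical binary labels rather than the study's data:

      def sensitivity_specificity(predicted, actual):
          """predicted/actual are lists of booleans: True = fatigued (e.g. a PVT lapse)."""
          tp = sum(p and a for p, a in zip(predicted, actual))
          tn = sum((not p) and (not a) for p, a in zip(predicted, actual))
          fp = sum(p and (not a) for p, a in zip(predicted, actual))
          fn = sum((not p) and a for p, a in zip(predicted, actual))
          return tp / (tp + fn), tn / (tn + fp)

      # Hypothetical flight-segment labels: model flags fatigue vs. observed lapses.
      pred = [True, True, False, False, True, False]
      obsv = [True, False, False, False, True, True]
      sens, spec = sensitivity_specificity(pred, obsv)
      print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")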

  16. A turbulence model for iced airfoils and its validation

    NASA Technical Reports Server (NTRS)

    Shin, Jaiwon; Chen, Hsun H.; Cebeci, Tuncer

    1992-01-01

    A turbulence model, based on the extension of the algebraic eddy viscosity formulation of Cebeci and Smith developed for two-dimensional flows over smooth and rough surfaces, is described for iced airfoils and validated for computed ice shapes obtained for a range of total temperatures varying from 28 to -15 °F. The validation is made with an interactive boundary layer method which uses a panel method to compute the inviscid flow and an inverse finite difference boundary layer method to compute the viscous flow. The interaction between inviscid and viscous flows is established by the use of the Hilbert integral. The calculated drag coefficients compare well with recent experimental data taken at the NASA-Lewis Icing Research Tunnel (IRT) and show that, in general, the drag increase due to ice accretion can be predicted well and efficiently.

  17. Validation and Verification with Applications to a Kinetic Global Model

    NASA Astrophysics Data System (ADS)

    Verboncoeur, J. P.

    2014-10-01

    As scientific software matures, verification, validation, benchmarking, and error estimation are becoming increasingly important to ensure predictable operation. Having well-described and consistent data is critical for consistent results. This presentation briefly addresses the motivation for V&V, the history and goals of the workshop series. A roadmap of the current workshop is presented. Finally, examples of V&V are applied to a novel kinetic global model for a series of low temperature plasma problems ranging from verification of specific rate equations to benchmarks and validation with other codes and experimental data for Penning breakdown and hydrocarbon plasmas. The results are included in the code release to ensure repeatability following code modifications. In collaboration with G. Parsey, J. Kempf, and A. Christlieb, Michigan State University. This work is supported in part by a U.S. Air Force Office of Scientific Research Basic Research Initiative and a Michigan State University Strategic Partnership grant.

  18. Validation of chemistry models employed in a particle simulation method

    NASA Technical Reports Server (NTRS)

    Haas, Brian L.; Mcdonald, Jeffrey D.

    1991-01-01

    The chemistry models employed in a statistical particle simulation method, as implemented in the Intel iPSC/860 multiprocessor computer, are validated and applied. Chemical relaxation of five-species air in adiabatic gas reservoirs involves 34 simultaneous dissociation, recombination, and atomic-exchange reactions. The reaction rates employed in the analytic solutions are obtained from Arrhenius experimental correlations as functions of temperature for adiabatic gas reservoirs in thermal equilibrium. Favorable agreement with the analytic solutions validates the simulation when applied to relaxation of O2 toward equilibrium in reservoirs dominated by dissociation and recombination, respectively, and when applied to relaxation of air in the temperature range 5000 to 30,000 K. A flow of O2 over a circular cylinder at high Mach number is simulated to demonstrate application of the method to multidimensional reactive flows.
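
    The Arrhenius correlations referred to give each reaction rate coefficient as a function of temperature. A minimal sketch of evaluating a modified Arrhenius expression over the quoted temperature range (the constants below are illustrative stand-ins, not the paper's values):

      import math

      def arrhenius(T, A, n, Ea_over_k):
          """Modified Arrhenius form k(T) = A * T**n * exp(-Ea/(k_B*T)),
          with the activation energy expressed as a temperature Ea/k_B in K."""
          return A * T**n * math.exp(-Ea_over_k / T)

      # Illustrative (not source) parameters; units depend on the reaction order.
      for T in (5000.0, 10000.0, 30000.0):
          print(T, arrhenius(T, A=2.0e21, n=-1.5, Ea_over_k=59500.0))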

  19. Standard solar model. II - g-modes

    NASA Technical Reports Server (NTRS)

    Guenther, D. B.; Demarque, P.; Pinsonneault, M. H.; Kim, Y.-C.

    1992-01-01

    The paper presents the g-mode oscillations for a set of modern solar models. Each solar model is based on a single modification or improvement to the physics of a reference solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The error in the predicted g-mode periods associated with the uncertainties in the model physics is estimated, and the specific sensitivities of the g-mode periods and their period spacings to the different model structures are described. In addition, these models are compared to a sample of published observations. A remarkably good agreement is found between the 'best' solar model and the observations of Hill and Gu (1990).

  20. Validation of high displacement piezoelectric actuator finite element models

    NASA Astrophysics Data System (ADS)

    Taleghani, Barmac K.

    2000-08-01

    The paper presents the results obtained by using NASTRAN and ANSYS finite element codes to predict doming of the THUNDER piezoelectric actuators during the manufacturing process and subsequent straining due to an applied input voltage. To effectively use such devices in engineering applications, modeling and characterization are essential. Length, width, dome height, and thickness are important parameters for users of such devices. Therefore, finite element models were used to assess the effects of these parameters. NASTRAN and ANSYS used different methods for modeling piezoelectric effects. In NASTRAN, a thermal analogy was used to represent voltage at nodes as equivalent temperatures, while ANSYS processed the voltage directly using piezoelectric finite elements. The results of finite element models were validated by using the experimental results.

  1. Validation of High Displacement Piezoelectric Actuator Finite Element Models

    NASA Technical Reports Server (NTRS)

    Taleghani, B. K.

    2000-01-01

    The paper presents the results obtained by using NASTRAN® and ANSYS® finite element codes to predict doming of the THUNDER piezoelectric actuators during the manufacturing process and subsequent straining due to an applied input voltage. To effectively use such devices in engineering applications, modeling and characterization are essential. Length, width, dome height, and thickness are important parameters for users of such devices. Therefore, finite element models were used to assess the effects of these parameters. NASTRAN® and ANSYS® used different methods for modeling piezoelectric effects. In NASTRAN®, a thermal analogy was used to represent voltage at nodes as equivalent temperatures, while ANSYS® processed the voltage directly using piezoelectric finite elements. The results of finite element models were validated by using the experimental results.

  2. MODEL VALIDATION FOR A NONINVASIVE ARTERIAL STENOSIS DETECTION PROBLEM

    PubMed Central

    BANKS, H. THOMAS; HU, SHUHUA; KENZ, ZACKARY R.; KRUSE, CAROLA; SHAW, SIMON; WHITEMAN, JOHN; BREWIN, MARK P.; GREENWALD, STEPHEN E.; BIRCH, MALCOLM J.

    2014-01-01

    A current thrust in medical research is the development of a non-invasive method for detection, localization, and characterization of an arterial stenosis (a blockage or partial blockage in an artery). A method has been proposed to detect shear waves in the chest cavity which have been generated by disturbances in the blood flow resulting from a stenosis. In order to develop this methodology further, we use one-dimensional shear wave experimental data from novel acoustic phantoms to validate a corresponding viscoelastic mathematical model. We estimate model parameters which give a good fit (in a sense to be precisely defined) to the experimental data, and use asymptotic error theory to provide confidence intervals for parameter estimates. Finally, since a robust error model is necessary for accurate parameter estimates and confidence analysis, we include a comparison of absolute and relative models for measurement error. PMID:24506547
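
    The comparison of absolute and relative measurement-error models mentioned at the end amounts to two different weightings of the residuals in the least-squares objective. A minimal sketch of the two objectives with generic data (not the phantom measurements or the viscoelastic model itself):

      import numpy as np

      def weighted_ssq(observed, modeled, error_model="absolute"):
          """Sum-of-squares objective under an absolute error model (constant variance)
          or a relative error model (variance proportional to the squared model value)."""
          observed = np.asarray(observed, dtype=float)
          modeled = np.asarray(modeled, dtype=float)
          residuals = observed - modeled
          if error_model == "relative":
              residuals = residuals / modeled   # assumes modeled values are nonzero
          return float(np.sum(residuals ** 2))

      obs = np.array([1.02, 2.10, 3.95, 8.30])   # illustrative measurements
      mod = np.array([1.00, 2.00, 4.00, 8.00])   # illustrative model values
      print(weighted_ssq(obs, mod, "absolute"), weighted_ssq(obs, mod, "relative"))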

  3. Coupled Thermal-Chemical-Mechanical Modeling of Validation Cookoff Experiments

    SciTech Connect

    ERIKSON,WILLIAM W.; SCHMITT,ROBERT G.; ATWOOD,A.I.; CURRAN,P.D.

    2000-11-27

    The cookoff of energetic materials involves the combined effects of several physical and chemical processes. These processes include heat transfer, chemical decomposition, and mechanical response. The interaction and coupling between these processes influence both the time-to-event and the violence of reaction. The prediction of the behavior of explosives during cookoff, particularly with respect to reaction violence, is a challenging task. To this end, a joint DoD/DOE program has been initiated to develop models for cookoff, and to perform experiments to validate those models. In this paper, a series of cookoff analyses are presented and compared with data from a number of experiments for the aluminized, RDX-based, Navy explosive PBXN-109. The traditional thermal-chemical analysis is used to calculate time-to-event and characterize the heat transfer and boundary conditions. A reaction mechanism based on Tarver and McGuire's work on RDX was adjusted to match the spherical one-dimensional time-to-explosion data. The predicted time-to-event using this reaction mechanism compares favorably with the validation tests. Coupled thermal-chemical-mechanical analysis is used to calculate the mechanical response of the confinement and the energetic material state prior to ignition. The predicted state of the material includes the temperature, stress-field, porosity, and extent of reaction. There is little experimental data for comparison to these calculations. The hoop strain in the confining steel tube gives an estimation of the radial stress in the explosive. The inferred pressure from the measured hoop strain and calculated radial stress agree qualitatively. However, validation of the mechanical response model and the chemical reaction mechanism requires more data. A post-ignition burn dynamics model was applied to calculate the confinement dynamics. The burn dynamics calculations suffer from a lack of characterization of the confinement for the flaw

  4. Validation of thermal models for a prototypical MEMS thermal actuator.

    SciTech Connect

    Gallis, Michail A.; Torczynski, John Robert; Piekos, Edward Stanley; Serrano, Justin Raymond; Gorby, Allen D.; Phinney, Leslie Mary

    2008-09-01

    This report documents technical work performed to complete the ASC Level 2 Milestone 2841: validation of thermal models for a prototypical MEMS thermal actuator. This effort requires completion of the following task: the comparison between calculated and measured temperature profiles of a heated stationary microbeam in air. Such heated microbeams are prototypical structures in virtually all electrically driven microscale thermal actuators. This task is divided into four major subtasks. (1) Perform validation experiments on prototypical heated stationary microbeams in which material properties such as thermal conductivity and electrical resistivity are measured if not known and temperature profiles along the beams are measured as a function of electrical power and gas pressure. (2) Develop a noncontinuum gas-phase heat-transfer model for typical MEMS situations including effects such as temperature discontinuities at gas-solid interfaces across which heat is flowing, and incorporate this model into the ASC FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (3) Develop a noncontinuum solid-phase heat transfer model for typical MEMS situations including an effective thermal conductivity that depends on device geometry and grain size, and incorporate this model into the FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (4) Perform combined gas-solid heat-transfer simulations using Calore with these models for the experimentally investigated devices, and compare simulation and experimental temperature profiles to assess model accuracy. These subtasks have been completed successfully, thereby completing the milestone task. Model and experimental temperature profiles are found to be in reasonable agreement for all cases examined. Modest systematic differences appear to be related to uncertainties in the geometric dimensions of the test structures and in the thermal conductivity of the

  5. Modeling, Robust Control, and Experimental Validation of a Supercavitating Vehicle

    NASA Astrophysics Data System (ADS)

    Escobar Sanabria, David

    This dissertation considers the mathematical modeling, control under uncertainty, and experimental validation of an underwater supercavitating vehicle. By traveling inside a gas cavity, a supercavitating vehicle reduces hydrodynamic drag, increases speed, and minimizes power consumption. The attainable speed and power efficiency make these vehicles attractive for undersea exploration, high-speed transportation, and defense. However, the benefits of traveling inside a cavity come with difficulties in controlling the vehicle dynamics. The main challenge is the nonlinear force that arises when the back-end of the vehicle pierces the cavity. This force, referred to as planing, leads to oscillatory motion and instability. Control technologies that are robust to planing and suited for practical implementation need to be developed. To enable these technologies, a low-order vehicle model that accounts for inaccuracy in the characterization of planing is required. Additionally, an experimental method to evaluate possible pitfalls in the models and controllers is necessary before undersea testing. The major contribution of this dissertation is a unified framework for mathematical modeling, robust control synthesis, and experimental validation of a supercavitating vehicle. First, we introduce affordable experimental methods for mathematical modeling and controller testing under planing and realistic flow conditions. Then, using experimental observations and physical principles, we create a low-order nonlinear model of the longitudinal vehicle motion. This model quantifies the planing uncertainty and is suitable for robust controller synthesis. Next, based on the vehicle model, we develop automated tools for synthesizing controllers that deliver a certificate of performance in the face of nonlinear and uncertain planing forces. We demonstrate theoretically and experimentally that the proposed controllers ensure higher performance when the uncertain planing dynamics are

  6. Deviatoric constitutive model: domain of strain rate validity

    SciTech Connect

    Zocher, Marvin A

    2009-01-01

    A case is made for using an enhanced methodology in determining the parameters that appear in a deviatoric constitutive model. Predictability rests on our ability to solve a properly posed initial boundary value problem (IBVP), which incorporates an accurate reflection of material constitutive behavior. That reflection is provided through the constitutive model. Moreover, the constitutive model is required for mathematical closure of the IBVP. Common practice in the shock physics community is to divide the Cauchy tensor into spherical and deviatoric parts, and to develop separate models for spherical and deviatoric constitutive response. Our focus shall be on the Cauchy deviator and deviatoric constitutive behavior. Discussions related to the spherical part of the Cauchy tensor are reserved for another time. A number of deviatoric constitutive models have been developed for utilization in the solution of IBVPs that are of interest to those working in the field of shock physics. All of these models are phenomenological and contain a number of parameters that must be determined in light of experimental data. The methodology employed in determining these parameters dictates the loading regime over which the model can be expected to be accurate. The focus of this paper is the methodology employed in determining model parameters and the consequences of that methodology as it relates to the domain of strain rate validity. We shall begin by describing the methodology that is typically employed. We shall discuss limitations imposed upon predictive capability by the typically employed methodology. We shall propose a modification to the typically employed methodology that significantly extends the domain of strain rate validity.

  7. Development and Validation of a 3-Dimensional CFB Furnace Model

    NASA Astrophysics Data System (ADS)

    Vepsäläinen, Ari; Myöhänen, Kari; Hyppänen, Timo; Leino, Timo; Tourunen, Antti

    At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development of the CFB furnace process regarding solid mixing, combustion, emission formation and heat transfer. Results of laboratory and pilot scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results in industrial-scale CFB boilers, including furnace profile measurements, are carried out simultaneously with the development of 3-dimensional process modeling, which provides a chain of knowledge that is utilized as feedback for phenomenon research. Knowledge gathered by model validation studies and up-to-date parameter databases are utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to modeling of combustion and formation of char and volatiles of various fuel types in CFB conditions. Also a new model for predicting the formation of nitrogen oxides is presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and bio-fuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window for characterization of fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace and are used, together with lateral temperature profiles at the bed and upper parts of the furnace, for determination of solid mixing and combustion model parameters. Modeling of char- and volatile-based formation of NO profiles is followed by analysis of the oxidizing and reducing regions formed by the lower furnace design and the mixing characteristics of fuel and combustion air, which affect the NO furnace profile through reduction and volatile-nitrogen reactions. This paper presents

  8. Aqueous Solution Vessel Thermal Model Development II

    SciTech Connect

    Buechler, Cynthia Eileen

    2015-10-28

    The work presented in this report is a continuation of the work described in the May 2015 report, “Aqueous Solution Vessel Thermal Model Development”. This computational fluid dynamics (CFD) model aims to predict the temperature and bubble volume fraction in an aqueous solution of uranium. These values affect the reactivity of the fissile solution, so it is important to be able to calculate them and determine their effects on the reaction. Part A of this report describes some of the parameter comparisons performed on the CFD model using Fluent. Part B describes the coupling of the Fluent model with a Monte-Carlo N-Particle (MCNP) neutron transport model. The fuel tank geometry is the same as it was in the May 2015 report, annular with a thickness-to-height ratio of 0.16. An accelerator-driven neutron source provides the excitation for the reaction, and internal and external water cooling channels remove the heat. The model used in this work incorporates the Eulerian multiphase model with lift, wall lubrication, turbulent dispersion and turbulence interaction. The buoyancy-driven flow is modeled using the Boussinesq approximation, and the flow turbulence is determined using the k-ω Shear-Stress-Transport (SST) model. The dispersed turbulence multiphase model is employed to capture the multiphase turbulence effects.
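
    For reference, the Boussinesq approximation named above treats the density as constant everywhere except in the buoyancy term, which in its usual textbook form (not quoted from the report) reads

      (\rho - \rho_0)\,\mathbf{g} \;\approx\; -\rho_0\,\beta\,(T - T_0)\,\mathbf{g},

    where \rho_0 and T_0 are the reference density and temperature and \beta is the thermal expansion coefficient, so temperature differences drive the buoyant flow while the continuity and inertia terms keep the constant density \rho_0.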

  9. Vibrations inside buildings due to subway railway traffic. Experimental validation of a comprehensive prediction model.

    PubMed

    Lopes, Patrícia; Ruiz, Jésus Fernández; Alves Costa, Pedro; Medina Rodríguez, L; Cardoso, António Silva

    2016-10-15

    The present paper focuses on the experimental validation of a numerical approach previously proposed by the authors for the prediction of vibrations inside buildings due to railway traffic in tunnels. The numerical model is based on the concept of dynamic substructuring and is composed of three autonomous models that simulate the following main parts of the problem: i) generation of vibrations (train-track interaction); ii) propagation of vibrations (track-tunnel-ground system); iii) reception of vibrations (building coupled to the ground). The experimental validation consists in the comparison between the results predicted by the proposed numerical model and the measurements performed inside a building due to the railway traffic in a shallow tunnel located in Madrid. Apart from a brief description of the numerical model and of the case study, the main options and simplifications adopted in the numerical modeling strategy are discussed. The balance adopted between accuracy and simplicity of the numerical approach proved to be a path to follow in order to transfer knowledge to engineering practice. Finally, the comparison between numerical and experimental results showed good agreement between the two, which confirms the ability of the proposed modeling strategy to deal with practical engineering problems.

  10. Validated models for predicting skin penetration from different vehicles.

    PubMed

    Ghafourian, Taravat; Samaras, Eleftherios G; Brooks, James D; Riviere, Jim E

    2010-12-23

    The permeability of a penetrant through skin is controlled by the properties of the penetrant and the mixture components, which in turn relate to the molecular structures. Despite the well-investigated models for compound permeation through skin, the effect of vehicles and mixture components has not received much attention. The aim of this Quantitative Structure Activity Relationship (QSAR) study was to develop a statistically validated model for the prediction of skin permeability coefficients of compounds dissolved in different vehicles. Furthermore, the model can help with the elucidation of the mechanisms involved in the permeation process. With this goal in mind, the skin permeability of four different penetrants, each blended in 24 different solvent mixtures, was determined from diffusion cell studies using porcine skin. The resulting 96 kp values were combined with a previous dataset of 288 kp data for QSAR analysis. Stepwise regression analysis was used for the selection of the most significant molecular descriptors and development of several regression models. The selected QSAR employed two penetrant descriptors, the Wiener topological index and the total lipole moment, the boiling point of the solvent, and the difference between the melting point of the penetrant and the melting point of the solvent. The QSAR was validated internally, using a leave-many-out procedure, giving a mean absolute error of 0.454 for the log kp value of the test set.
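
    The leave-many-out validation quoted above repeatedly refits the regression on a random subset of the data and scores the held-out compounds, here summarized by the mean absolute error on log kp. A minimal sketch of that loop with synthetic descriptors and a plain linear model standing in for the published QSAR:

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(96, 4))   # stand-ins for the four QSAR descriptors
      y = X @ np.array([0.5, -0.3, 0.2, 0.1]) + rng.normal(scale=0.3, size=96)  # synthetic log kp

      errors = []
      for _ in range(50):            # repeated leave-many-out splits
          test = rng.choice(len(y), size=20, replace=False)
          train = np.setdiff1d(np.arange(len(y)), test)
          coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
          errors.append(np.mean(np.abs(y[test] - X[test] @ coef)))
      print("mean absolute error over held-out sets:", np.mean(errors))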

  11. A proposal to use geoid slope validation lines to validate models of geoid change

    NASA Astrophysics Data System (ADS)

    Smith, D. A.

    2010-12-01

    The United States National Geodetic Survey (NGS) has embarked on a ten year project called GRAV-D (Gravity for the Redefinition of the American Vertical Datum). The purpose of this project is to replace the current official vertical datum, NAVD 88 (the North American Vertical Datum of 1988) with a geopotential reference system based on a new survey of the gravity field and a gravimetric geoid. As part of GRAV-D, the National Geodetic Survey will develop a set of “geoid slope validation lines” at various locations of the country. These lines will be surveys designed to independently measure the slope of the geoid to provide a check against both the data and theory used to create the final gravimetric geoid which will be used in the geopotential reference system. The first of these lines is proposed to be established in the Autumn of 2011 in the west central region of Texas. The survey will be approximately 300 kilometers long, consisting of GPS, geodetic leveling, deflections of the vertical, surface absolute and relative gravity, including the use of relative meters for low-high surface gradient determination. This region was chosen for many factors including the availability of GRAV-D airborne gravity over the area, its relatively low elevation (220 meter orthometric height max), its geoid slope (from the latest high resolution models being a few decimeters over 300 km), lack of significant topographic relief, lack of large forestation, availability of good roads, clarity of weather and lack of large water crossings. Further lines are planned in the out-years, in more difficult areas, though their locations are not yet determined. Although the original intent of these lines was to serve as calibrations against geoid modeling data and theory, there may be additional uses relevant to geoid monitoring. A gap is being anticipated between the GRACE and GRACE-Follow On missions. GRACE has shown a quantifiable change (millimeters per year) in the geoid over parts of

  12. Long-term ELBARA-II Assistance to SMOS Land Product and Algorithm Validation at the Valencia Anchor Station (MELBEX Experiment 2010-2013)

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto; Wigneron, Jean-Pierre; Schwank, Mike; Miernecki, Maciej; Kerr, Yann; Casal, Tania; Delwart, Steven; Fernandez-Moran, Roberto; Mecklenburg, Susanne; Coll Pajaron, M. Amparo; Salgado Hernanz, Paula

    The main activity of the Valencia Anchor Station (VAS) is currently to support the validation of SMOS (Soil Moisture and Ocean Salinity) Level 2 and 3 land products (soil moisture, SM, and vegetation optical depth, TAU). With this aim, the European Space Agency (ESA) has provided the Climatology from Satellites Group of the University of Valencia with an ELBARA-II microwave radiometer under a loan agreement since September 2009. During this time, brightness temperatures (TB) have continuously been acquired, except during normal maintenance or minor repair interruptions. ELBARA-II is an L-band dual-polarization radiometer with two channels (1400-1418 MHz, 1409-1427 MHz). It is continuously measuring over a vineyard field (El Renegado, Caudete de las Fuentes, Valencia) from a 15 m platform with a constant protocol for calibration and angular scanning measurements, with the aim of assisting the validation of SMOS land products and the calibration of the L-MEB (L-Band Emission of the Biosphere) model, the basis for the SMOS Level 2 Land Processor, over the VAS validation site. One of the advantages of using the VAS site is the possibility of studying two different environmental conditions throughout the year. While the vine cycle extends mainly between April and October, during the rest of the year the area remains under bare soil conditions, adequate for the calibration of the soil model. The measurement protocol currently running has proven to be robust during the whole operation time and will be extended in time as much as possible to continue providing a long-term data set of ELBARA-II TB measurements and retrieved SM and TAU. This data set is also proving useful in support of SMOS scientific activities: the VAS area and, specifically, the ELBARA-II site offer good conditions to control the long-term evolution of SMOS Level 2 and Level 3 land products and to interpret possible anomalies that may point to hidden sensor biases. In addition, SM and TAU that are currently

  13. Gain scheduled continuous-time model predictive controller with experimental validation on AC machine

    NASA Astrophysics Data System (ADS)

    Wang, Liuping; Gan, Lu

    2013-08-01

    Linear controllers with gain scheduling have been successfully used in the control of nonlinear systems for the past several decades. This paper proposes the design of gain scheduled continuous-time model predictive controller with constraints. Using induction machine as an illustrative example, the paper will show the four steps involved in the design of a gain scheduled predictive controller: (i) linearisation of a nonlinear plant according to operating conditions; (ii) the design of linear predictive controllers for the family of linear models; (iii) gain scheduled predictive control law that will optimise a multiple model objective function with constraints, which will also ensure smooth transitions (i.e. bumpless transfer) between the predictive controllers; (iv) experimental validation of the gain scheduled predictive control system with constraints.
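
    Step (iii), the bumpless transfer between the linear predictive controllers, is commonly realized by blending controller gains according to the measured scheduling variable. A minimal sketch of such a blend with hypothetical gains and operating points (not the induction-machine controller of the paper):

      import numpy as np

      # Hypothetical controller gains designed at three operating speeds (rad/s).
      operating_points = np.array([50.0, 150.0, 300.0])
      gains = np.array([[2.0, 0.10],   # [Kp, Ki] at each operating point
                        [1.4, 0.06],
                        [0.9, 0.03]])

      def scheduled_gain(speed):
          """Linearly interpolate each gain between neighbouring operating points,
          which keeps the control law continuous (bumpless) as the speed changes."""
          kp = np.interp(speed, operating_points, gains[:, 0])
          ki = np.interp(speed, operating_points, gains[:, 1])
          return kp, ki

      print(scheduled_gain(100.0))   # blend of the 50 and 150 rad/s designs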

  14. Validation of the conceptual anatomical model of the lung airway.

    PubMed

    Fleming, John S; Sauret, Veronique; Conway, Joy H; Martonen, Ted B

    2004-01-01

    The conceptual anatomical model of the lung airway considers each lung volume divided into ten concentric shells. It specifies the volume of each airway generation in each shell, using Weibel morphometry. This study updates and validates the model and evaluates the errors obtained when using it to estimate inhaled aerosol deposition per generation from spatial imaging data. A comparison of different airway models describing the volume per generation, including data from CT images of a lung cast and a human subject, was performed. A revised version of the conceptual model was created, using the average volume per generation from these data. The new model was applied to derive the aerosol deposition per generation from 24 single photon emission computed tomography (SPECT) studies. Analysis errors were assessed by applying the same calculations but using airway models based on the minimum and maximum volumes per generation. The mean shell position of each generation in the average model was not significantly different from either CT model. However, there were differences between the volumes per generation of the different models. The root mean square difference in bronchial airways deposition fraction (generations 2-8) between the maximum and minimum models and the new average model was 0.66 percentage points (14%). For the conducting airways deposition fraction (generations 2-15) it was 1.66 percentage points (12%). The conceptual model is consistent with CT measurements of airway geometry. The errors resulting from using a generic airway model to interpret 3D radionuclide image data have been defined.

  15. Validation of landsurface processes in the AMIP models

    SciTech Connect

    Phillips, T J

    1999-10-01

    The Atmospheric Model Intercomparison Project (AMIP) is a commonly accepted protocol for testing the performance of the world's atmospheric general circulation models (AGCMs) under common specifications of radiative forcings (in solar constant and carbon dioxide concentration) and observed ocean boundary conditions (Gates 1992, Gates et al. 1999). From the standpoint of landsurface specialists, the AMIP affords an opportunity to investigate the behaviors of a wide variety of land-surface schemes (LSS) that are coupled to their "native" AGCMs (Phillips et al. 1995, Phillips 1999). In principle, therefore, the AMIP permits consideration of an overarching question: "To what extent does an AGCM's performance in simulating continental climate depend on the representations of land-surface processes by the embedded LSS?" There are, of course, some formidable obstacles to satisfactorily addressing this question. First, there is the dilemma of how to effectively validate simulation performance, given the present dearth of global land-surface data sets. Even if this data problem were to be alleviated, some inherent methodological difficulties would remain: in the context of the AMIP, it is not possible to validate a given LSS per se, since the associated land-surface climate simulation is a product of the coupled AGCM/LSS system. Moreover, aside from the intrinsic differences in LSS across the AMIP models, the varied representations of land-surface characteristics (e.g. vegetation properties, surface albedos and roughnesses, etc.) and related variations in land-surface forcings further complicate such an attribution process. Nevertheless, it may be possible to develop validation methodologies/statistics that are sufficiently penetrating to reveal "signatures" of particular LSS representations (e.g. "bucket" vs. more complex parameterizations of hydrology) in the AMIP land-surface simulations.

  16. Experimental Validation of a Pulse Tube Cfd Model

    NASA Astrophysics Data System (ADS)

    Taylor, R. P.; Nellis, G. F.; Klein, S. A.; Radebaugh, R.; Lewis, M.; Bradley, P.

    2010-04-01

    Computational fluid dynamic (CFD) analysis has been applied by various authors to study the processes occurring in the pulse tube cryocooler and carry out parametric design and optimization. However, a thorough and quantitative validation of the CFD model predictions against experimental data has not been accomplished. This is in part due to the difficulty associated with measuring the specific quantities of interest (e.g., internal enthalpy flows and acoustic power) rather than generic system performance (e.g., cooling power). This paper presents the experimental validation of a previously published two-dimensional, axisymmetric CFD model of the pulse tube and its associated flow transitions. The test facility designed for this purpose is unique in that it allows the precise measurement of the cold end acoustic power, regenerator loss, and cooling power. Therefore, it allows the separate and precise measurement of both the pulse tube loss and the regenerator loss. The experimental results are presented for various pulse tube and flow transition configurations operating at a cold end temperature of 80 K over a range of pressure ratios. The comparison of the model prediction to the experimental data is presented with discussion.

  17. Statistical validation of structured population models for Daphnia magna.

    PubMed

    Adoteye, Kaska; Banks, H T; Cross, Karissa; Eytcheson, Stephanie; Flores, Kevin B; LeBlanc, Gerald A; Nguyen, Timothy; Ross, Chelsea; Smith, Emmaline; Stemkovski, Michael; Stokely, Sarah

    2015-08-01

    In this study we use statistical validation techniques to verify density-dependent mechanisms hypothesized for populations of Daphnia magna. We develop structured population models that exemplify specific mechanisms and use multi-scale experimental data in order to test their importance. We show that fecundity and survival rates are affected by both time-varying density-independent factors, such as age, and density-dependent factors, such as competition. We perform uncertainty analysis and show that our parameters are estimated with a high degree of confidence. Furthermore, we perform a sensitivity analysis to understand how changes in fecundity and survival rates affect population size and age-structure.

  18. A validation study of a stochastic model of human interaction

    NASA Astrophysics Data System (ADS)

    Burchfield, Mitchel Talmadge

    The purpose of this dissertation is to validate a stochastic model of human interactions which is part of a developmentalism paradigm. Incorporating elements of ancient and contemporary philosophy and science, developmentalism defines human development as a progression of increasing competence and utilizes compatible theories of developmental psychology, cognitive psychology, educational psychology, social psychology, curriculum development, neurology, psychophysics, and physics. To validate a stochastic model of human interactions, the study addressed four research questions: (a) Does attitude vary over time? (b) What are the distributional assumptions underlying attitudes? (c) Does the stochastic model, $N\int_{-\infty}^{\infty}\varphi(\chi,\tau)\,\Psi(\tau)\,d\tau$, have utility for the study of attitudinal distributions and dynamics? (d) Are the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein theories applicable to human groups? Approximately 25,000 attitude observations were made using the Semantic Differential Scale. Positions of individuals varied over time and the logistic model predicted observed distributions with correlations between 0.98 and 1.0, with estimated standard errors significantly less than the magnitudes of the parameters. The results bring into question the applicability of Fisherian research designs (Fisher, 1922, 1928, 1938) for behavioral research, based on the apparent failure of two fundamental assumptions: the noninteractive nature of the objects being studied and the normal distribution of attributes. The findings indicate that individual belief structures are representable in terms of a psychological space which has the same or similar properties as physical space. The psychological space not only has dimension, but individuals interact by force equations similar to those described in theoretical physics models. Nonlinear regression techniques were used to estimate Fermi-Dirac parameters from the data. The model explained a high degree
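
    The Fermi-Dirac parameter estimation mentioned at the end amounts to a nonlinear least-squares fit of an occupation-like curve to the observed attitude distribution. A minimal sketch of such a fit on synthetic data (the functional form and numbers are illustrative, not those of the dissertation):

      import numpy as np
      from scipy.optimize import curve_fit

      def fermi_dirac(x, mu, kT):
          """Fermi-Dirac-like occupation curve: 1 / (exp((x - mu)/kT) + 1)."""
          return 1.0 / (np.exp((x - mu) / kT) + 1.0)

      # Synthetic "attitude position" data following the curve plus noise.
      x = np.linspace(-3, 3, 60)
      rng = np.random.default_rng(1)
      y = fermi_dirac(x, mu=0.5, kT=0.4) + rng.normal(scale=0.02, size=x.size)

      params, cov = curve_fit(fermi_dirac, x, y, p0=(0.0, 1.0))
      print("estimated mu, kT:", params, "standard errors:", np.sqrt(np.diag(cov)))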

  19. Recommendations for the validation of flow cytometric testing during drug development: II assays.

    PubMed

    O'Hara, Denise M; Xu, Yuanxin; Liang, Zhiyan; Reddy, Manjula P; Wu, Dianna Y; Litwin, Virginia

    2011-01-05

    Flow cytometry-based assays serve as valuable tools for various aspects of the drug development process ranging from target discovery and characterization to evaluation of responses in a clinical setting. The integrity of the samples and the appropriate selection and characterization of the reagents used in these assays are in themselves challenging. These concerns, taken together with the flow-based technology, make the validation of flow cytometry assays a challenging effort. Therefore, apart from summarizing the role of flow cytometry technology in various stages of drug development, this manuscript focuses on recommendations for the validation of methods applying flow cytometry. Information is also provided on the relevant validation parameters for different types of flow cytometry assays to guide the users of this platform. Together, the recommendations and the information on regulatory guidelines provided in this manuscript represent the consensus of all the authors and can assist the flow cytometry user in implementing the appropriate method validation strategies.

  20. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  1. Nyala and Bushbuck II: A Harvesting Model.

    ERIC Educational Resources Information Center

    Fay, Temple H.; Greeff, Johanna C.

    1999-01-01

    Adds a cropping or harvesting term to the animal overpopulation model developed in Part I of this article. Investigates various harvesting strategies that might suggest a solution to the overpopulation problem without actually culling any animals. (ASK)

  2. On The Modeling of Educational Systems: II

    ERIC Educational Resources Information Center

    Grauer, Robert T.

    1975-01-01

    A unified approach to model building is developed from the separate techniques of regression, simulation, and factorial design. The methodology is applied in the context of a suburban school district. (Author/LS)

  3. Coarse Grained Model for Biological Simulations: Recent Refinements and Validation

    PubMed Central

    Vicatos, Spyridon; Rychkova, Anna; Mukherjee, Shayantani; Warshel, Arieh

    2014-01-01

    Exploring the free energy landscape of proteins and modeling the corresponding functional aspects presents a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of various simplified coarse grained (CG) models offers an effective way of sampling the landscape, but most current models are not expected to give a reliable description of protein stability and functional aspects. The main problem is associated with insufficient focus on the electrostatic features of the model. In this respect our recent CG model offers a significant advantage as it has been refined while focusing on its electrostatic free energy. Here we review the current state of our model, describing recent refinements, extensions and validation studies while focusing on demonstrating key applications. These include studies of protein stability, extending the model to include membranes, electrolytes and electrodes, as well as studies of voltage activated proteins, protein insertion through the translocon, the action of molecular motors and even the coupling of the stalled ribosome and the translocon. These examples illustrate the general potential of our approach in overcoming major challenges in studies of structure-function correlation in proteins and large macromolecular complexes. PMID:25050439

  4. Sound Transmission Validation and Sensitivity Studies in Numerical Models.

    PubMed

    Oberrecht, Steve P; Krysl, Petr; Cranford, Ted W

    2016-01-01

    In 1974, Norris and Harvey published an experimental study of sound transmission into the head of the bottlenose dolphin. We used this rare source of data to validate our Vibroacoustic Toolkit, an array of numerical modeling simulation tools. Norris and Harvey provided measurements of received sound pressure in various locations within the dolphin's head from a sound source that was moved around the outside of the head. Our toolkit was used to predict the curves of pressure with the best-guess input data (material properties, transducer and hydrophone locations, and geometry of the animal's head). In addition, we performed a series of sensitivity analyses (SAs). SA is concerned with understanding how input changes to the model influence the outputs. SA can enhance understanding of a complex model by finding and analyzing unexpected model behavior, discriminating which inputs have a dominant effect on particular outputs, exploring how inputs combine to affect outputs, and gaining insight as to what additional information improves the model's ability to predict. Even when a computational model does not adequately reproduce the behavior of a physical system, its sensitivities may be useful for developing inferences about key features of the physical system. Our findings may become a valuable source of information for modeling the interactions between sound and anatomy.
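
    One common way to carry out sensitivity analyses of this kind, although the Vibroacoustic Toolkit's own procedure is not described here, is a one-at-a-time perturbation of each input followed by comparison of the output change. A minimal sketch with a stand-in model function:

      def one_at_a_time_sensitivity(model, baseline, rel_step=0.05):
          """Perturb each input by +rel_step and report the relative output change."""
          base_out = model(**baseline)
          sensitivities = {}
          for name, value in baseline.items():
              perturbed = dict(baseline, **{name: value * (1.0 + rel_step)})
              sensitivities[name] = (model(**perturbed) - base_out) / base_out
          return sensitivities

      # Stand-in model: received pressure as an arbitrary function of two inputs.
      def toy_model(sound_speed, density):
          return density * sound_speed ** 2

      print(one_at_a_time_sensitivity(toy_model, {"sound_speed": 1500.0, "density": 1025.0}))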

  5. Volumetric Intraoperative Brain Deformation Compensation: Model Development and Phantom Validation

    PubMed Central

    DeLorenzo, Christine; Papademetris, Xenophon; Staib, Lawrence H.; Vives, Kenneth P.; Spencer, Dennis D.; Duncan, James S.

    2012-01-01

    During neurosurgery, nonrigid brain deformation may affect the reliability of tissue localization based on preoperative images. To provide accurate surgical guidance in these cases, preoperative images must be updated to reflect the intraoperative brain. This can be accomplished by warping these preoperative images using a biomechanical model. Due to the possible complexity of this deformation, intraoperative information is often required to guide the model solution. In this paper, a linear elastic model of the brain is developed to infer volumetric brain deformation associated with measured intraoperative cortical surface displacement. The developed model relies on known material properties of brain tissue, and does not require further knowledge about intraoperative conditions. To provide an initial estimation of volumetric model accuracy, as well as determine the model’s sensitivity to the specified material parameters and surface displacements, a realistic brain phantom was developed. Phantom results indicate that the linear elastic model significantly reduced localization error due to brain shift, from >16 mm to under 5 mm, on average. In addition, though in vivo quantitative validation is necessary, preliminary application of this approach to images acquired during neocortical epilepsy cases confirms the feasibility of applying the developed model to in vivo data. PMID:22562728

  6. Evaluation and cross-validation of Environmental Models

    NASA Astrophysics Data System (ADS)

    Lemaire, Joseph

    Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a Commission of professional experts appointed by an established International Union or Association (e.g. IAGA for Geomagnetism and Aeronomy, . . . ) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given, indicating that different values for the Earth radius have been employed in different data processing laboratories, institutes or agencies, to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to another, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model which had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more of them to be uncovered by careful, independent examination and benchmarking. A meter prototype, the standard unit of length, was determined on 20 May 1875 during the Diplomatic Conference of the Meter and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent the wild, uncontrolled dissemination of pseudo Environmental Models and Standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the marketplace have been built consistently with the same unit system, and that they are based on identical definitions for the coordinate systems, etc... Therefore

  7. The Atmospheric Radionuclide Transport Model (ARTM) - Validation of a long-term atmospheric dispersion model

    NASA Astrophysics Data System (ADS)

    Hettrich, Sebastian; Wildermuth, Hans; Strobl, Christopher; Wenig, Mark

    2016-04-01

    In the last couple of years, the Atmospheric Radionuclide Transport Model (ARTM) has been developed by the German Federal Office for Radiation Protection (BfS) and the Society for Plant and Reactor Security (GRS). ARTM is an atmospheric dispersion model for continuous long-term releases of radionuclides into the atmosphere, based on the Lagrangian particle model. This model, developed in the first place as a more realistic replacement for the outdated Gaussian plume models, is currently being optimised for further scientific purposes to study atmospheric dispersion in short-range scenarios. It includes a diagnostic wind field model, allows for the inclusion of building structures and multiple sources (including linear, 2- and 3-dimensional source geometries), and considers orography and surface roughness. As output it calculates the activity concentration and dry and wet deposition, and it can also model the radioactive decay of Rn-222. As such, ARTM needs to undergo an intensive validation process. While for short-term and short-range models, which were mainly developed for examining nuclear accidents or explosions, a few measurement data sets are available for validation, data sets for validating long-term models are very sparse and the existing ones mostly prove not to be applicable for validation. Here we present a strategy for the validation of long-term Lagrangian particle models based on the work with ARTM. The first part of our validation study is a comprehensive analysis of the model's sensitivity to different parameters (e.g. simulation grid resolution, initial random number seed, number of simulation particles). This study provides a good estimate of the uncertainties of the simulation results and consequently can be used to generate model outputs comparable to the available measurement data at various distances from the emission source. This comparison between measurement data from selected scenarios and simulation results

  8. Validation study of air-sea gas transfer modeling

    SciTech Connect

    Asher, W.E.; Farley, P.J.; Leifer, I.S.

    1995-07-01

    Laboratory results have demonstrated the importance of bubble plumes to air-water gas transfer (Asher et al., 1994). Bubble plumes enhance gas transfer by disrupting surface films, by directly transporting a gas, and by the creation of turbulence. Models of bubble gas transfer have been developed by different authors (Atkinson, 1973; Memery and Merlivat, 1985; Woolf and Thorpe, 1991) to determine the magnitude of gas transfer due to bubbles. Laboratory measurements of both the gas transfer rate k_L and the bubble distribution φ in a whitecap simulation tank (WST) have allowed these models to be validated and deficiencies in the theoretical assumptions to be explored. In the WST, each bucket tip simulates a wave breaking event. Important tests of these models include whether they can explain the experimentally determined solubility and Schmidt number dependency of k_L, predict the time-varying bubble concentrations, predict the evasion-invasion asymmetry, and predict the fraction of k_L due to bubble plumes. Four different models were tested: a steady state model (Atkinson, 1973), a non-turbulence model with constant bubble radius (Memery and Merlivat, 1985), a turbulence model with constant bubble radius (Woolf and Thorpe, 1991), and a turbulence model with varying bubble radius. All models simulated multiple bucket tip cycles. The two turbulence models were run for sufficient tip cycles to generate a statistically significant number of eddies (>50) for bubbles affected by turbulence (V_B ≤ V_T), found to be at least four tip cycles. The models allowed up to nine gases simultaneously and were run under different conditions of trace and major gas concentrations and partial pressures.

  9. A Report on the Validation of Beryllium Strength Models

    SciTech Connect

    Armstrong, Derek Elswick

    2016-02-05

    This report discusses work on validating beryllium strength models with flyer plate and Taylor rod experimental data. Strength models are calibrated with Hopkinson bar and quasi-static data. The Hopkinson bar data for beryllium provide strain rates up to about 4000 per second. A limitation of the Hopkinson bar data for beryllium is that they only provide information on strain up to about 0.15. The lack of high-strain data at high strain rates makes it difficult to distinguish between various strength model settings. The PTW model has been calibrated many different times over the last 12 years. The lack of high-strain data for high strain rates has resulted in these calibrated PTW models for beryllium exhibiting significantly different behavior when extrapolated to high strain. For beryllium, the α parameter of PTW has recently been calibrated to high-precision shear modulus data. In the past the α value for beryllium was set based on expert judgment. The new α value for beryllium was used in a calibration of the beryllium PTW model by Sky Sjue. The calibration by Sjue used EOS table information to model the temperature dependence of the heat capacity. Also, the calibration by Sjue used EOS table information to model the density changes of the beryllium sample during the Hopkinson bar and quasi-static experiments. In this paper, the calibrated PTW model by Sjue is compared against experimental data and other strength models. The other strength models being considered are a PTW model calibrated by Shuh-Rong Chen and a Steinberg-Guinan type model by John Pedicini. The three strength models are used in a comparison against flyer plate and Taylor rod data. The results show that the Chen PTW model provides better agreement with these data. The Chen PTW model settings have been previously adjusted to provide a better fit to flyer plate data, whereas the Sjue PTW model has not been changed based on flyer plate data. However, the Sjue model provides a reasonable fit to

  10. Experimental validation of a numerical model for subway induced vibrations

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Degrande, G.; Lombaert, G.

    2009-04-01

    This paper presents the experimental validation of a coupled periodic finite element-boundary element model for the prediction of subway induced vibrations. The model fully accounts for the dynamic interaction between the train, the track, the tunnel and the soil. The periodicity or invariance of the tunnel and the soil in the longitudinal direction is exploited using the Floquet transformation, which allows for an efficient formulation in the frequency-wavenumber domain. A general analytical formulation is used to compute the response of three-dimensional invariant or periodic media that are excited by moving loads. The numerical model is validated by means of several experiments that have been performed at a site in Regent's Park on the Bakerloo line of London Underground. Vibration measurements have been performed on the axle boxes of the train, on the rail, the tunnel invert and the tunnel wall, and in the free field, both at the surface and at a depth of 15 m. Prior to these vibration measurements, the dynamic soil characteristics and the track characteristics have been determined. The Bakerloo line tunnel of London Underground has been modelled using the coupled periodic finite element-boundary element approach and free field vibrations due to the passage of a train at different speeds have been predicted and compared to the measurements. The correspondence between the predicted and measured response in the tunnel is reasonably good, although some differences are observed in the free field. The discrepancies are explained on the basis of various uncertainties involved in the problem. The variation in the response with train speed is similar for the measurements as well as the predictions. This study demonstrates the applicability of the coupled periodic finite element-boundary element model to make realistic predictions of the vibrations from underground railways.

  11. Validating a spatially distributed hydrological model with soil morphology data

    NASA Astrophysics Data System (ADS)

    Doppler, T.; Honti, M.; Zihlmann, U.; Weisskopf, P.; Stamm, C.

    2014-09-01

    Spatially distributed models are popular tools in hydrology claimed to be useful to support management decisions. Despite the high spatial resolution of the computed variables, calibration and validation is often carried out only on discharge time series at specific locations due to the lack of spatially distributed reference data. Because of this restriction, the predictive power of these models, with regard to predicted spatial patterns, usually cannot be judged. An example of spatial predictions in hydrology is the prediction of saturated areas in agricultural catchments. These areas can be important source areas for inputs of agrochemicals to the stream. We set up a spatially distributed model to predict saturated areas in a 1.2 km2 catchment in Switzerland with moderate topography and artificial drainage. We translated soil morphological data available from soil maps into an estimate of the duration of soil saturation in the soil horizons. This resulted in a data set with high spatial coverage on which the model predictions were validated. In general, these saturation estimates corresponded well to the measured groundwater levels. We worked with a model that would be applicable for management decisions because of its fast calculation speed and rather low data requirements. We simultaneously calibrated the model to observed groundwater levels and discharge. The model was able to reproduce the general hydrological behavior of the catchment in terms of discharge and absolute groundwater levels. However, the groundwater level predictions were not accurate enough to be used for the prediction of saturated areas. Groundwater level dynamics were not adequately reproduced and the predicted spatial saturation patterns did not correspond to those estimated from the soil map. Our results indicate that an accurate prediction of the groundwater level dynamics of the shallow groundwater in our catchment that is subject to artificial drainage would require a model that

  12. Non-Linear Slosh Damping Model Development and Validation

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; West, Jeff

    2015-01-01

    Propellant tank slosh dynamics are typically represented by a mechanical model of spring mass damper. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially-filled smooth wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is a potential source of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only in the linear regime where the slosh amplitude is small. With the increase of slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and to quantify the dependence of slosh damping on slosh amplitude. Accurately predicting the extremely low damping value of a smooth wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool. One must resolve thin boundary layers near the wall and limit numerical damping to a minimum. This computational study demonstrates that with proper grid resolution, CFD can indeed accurately predict the low damping physics from smooth walls in the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreement. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of the slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship. This discovery can
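    The amplitude dependence described above can be written as a simple piecewise law: a constant damping ratio below a critical amplitude and a linear increase above it. The sketch below illustrates that form; the 0.05% baseline is quoted above, while the critical amplitude and slope are placeholder values, not the correlation developed in the study.

        # Piecewise amplitude-dependent slosh damping law (illustrative sketch).
        def slosh_damping_ratio(amplitude, zeta0=5e-4, a_crit=0.1, slope=5e-3):
            # amplitude : slosh amplitude (e.g. normalised by tank radius)
            # zeta0     : linear-regime damping ratio (0.05% quoted above)
            # a_crit    : onset of nonlinear growth (placeholder value)
            # slope     : damping increase per unit amplitude (placeholder value)
            if amplitude <= a_crit:
                return zeta0
            return zeta0 + slope * (amplitude - a_crit)

        for a in (0.02, 0.1, 0.3, 0.5):
            print(f"amplitude {a:4.2f} -> damping ratio {slosh_damping_ratio(a):.5f}")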

  13. Engineer Modeling Study. Volume II. Users Manual.

    DTIC Science & Technology

    1982-09-01

    to have some experience with automated data processing (ADP) systems. Instructions are given for model operation on the Boeing Computer system.

  14. VALIDATION OF THE CORONAL THICK TARGET SOURCE MODEL

    SciTech Connect

    Fleishman, Gregory D.; Xu, Yan; Nita, Gelu N.; Gary, Dale E.

    2016-01-10

    We present detailed 3D modeling of a dense, coronal thick-target X-ray flare using the GX Simulator tool, photospheric magnetic measurements, and microwave imaging and spectroscopy data. The developed model offers a remarkable agreement between the synthesized and observed spectra and images in both X-ray and microwave domains, which validates the entire model. The flaring loop parameters are chosen to reproduce the emission measure, temperature, and the nonthermal electron distribution at low energies derived from the X-ray spectral fit, while the remaining parameters, unconstrained by the X-ray data, are selected such as to match the microwave images and total power spectra. The modeling suggests that the accelerated electrons are trapped in the coronal part of the flaring loop, but away from where the magnetic field is minimal, and, thus, demonstrates that the data are clearly inconsistent with electron magnetic trapping in the weak diffusion regime mediated by the Coulomb collisions. Thus, the modeling supports the interpretation of the coronal thick-target sources as sites of electron acceleration in flares and supplies us with a realistic 3D model with physical parameters of the acceleration region and flaring loop.

  15. Validation of the Coronal Thick Target Source Model

    NASA Astrophysics Data System (ADS)

    Fleishman, Gregory D.; Xu, Yan; Nita, Gelu N.; Gary, Dale E.

    2016-01-01

    We present detailed 3D modeling of a dense, coronal thick-target X-ray flare using the GX Simulator tool, photospheric magnetic measurements, and microwave imaging and spectroscopy data. The developed model offers a remarkable agreement between the synthesized and observed spectra and images in both X-ray and microwave domains, which validates the entire model. The flaring loop parameters are chosen to reproduce the emission measure, temperature, and the nonthermal electron distribution at low energies derived from the X-ray spectral fit, while the remaining parameters, unconstrained by the X-ray data, are selected such as to match the microwave images and total power spectra. The modeling suggests that the accelerated electrons are trapped in the coronal part of the flaring loop, but away from where the magnetic field is minimal, and, thus, demonstrates that the data are clearly inconsistent with electron magnetic trapping in the weak diffusion regime mediated by the Coulomb collisions. Thus, the modeling supports the interpretation of the coronal thick-target sources as sites of electron acceleration in flares and supplies us with a realistic 3D model with physical parameters of the acceleration region and flaring loop.

  16. Nonlinear ultrasound modelling and validation of fatigue damage

    NASA Astrophysics Data System (ADS)

    Fierro, G. P. Malfense; Ciampa, F.; Ginzburg, D.; Onder, E.; Meo, M.

    2015-05-01

    Nonlinear ultrasound techniques have shown greater sensitivity to microcracks and they can be used to detect structural damage at its early stages. However, there is still a lack of numerical models available in commercial finite element analysis (FEA) tools that are able to simulate the interaction of elastic waves with the material's nonlinear behaviour. In this study, a nonlinear constitutive material model was developed to predict the structural response under continuous harmonic excitation of a fatigued isotropic sample that showed anharmonic effects. Particularly, by means of Landau's theory and Kelvin tensorial representation, this model provided an understanding of elastic nonlinear phenomena such as second harmonic generation in three-dimensional solid media. The numerical scheme was implemented and evaluated using the commercially available FEA software LS-DYNA, and it showed a good numerical characterisation of the second harmonic amplitude generated by the damaged region, known as the nonlinear response area (NRA). Since this process requires only the experimental second-order nonlinear parameter and a rough damage size estimation as input, it does not need any baseline testing with the undamaged structure or any dynamic modelling of the fatigue crack growth. To validate this numerical model, the second-order nonlinear parameter was experimentally evaluated at various points over the fatigue life of an aluminium (AA6082-T6) coupon and the crack propagation was measured using an optical microscope. A good correlation was achieved between the experimental set-up and the nonlinear constitutive model.
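    As a rough illustration of how a second-order nonlinearity parameter is extracted in practice, the sketch below takes the fundamental and second-harmonic amplitudes of a received signal from an FFT and forms the commonly used relative parameter proportional to A2/A1² (the absolute calibration factors involving wavenumber and propagation distance are omitted). The synthetic waveform and the amplitudes are assumptions standing in for a measured signal, not the procedure used in the paper.

        # Relative second-order nonlinearity parameter from a received waveform (sketch).
        import numpy as np

        fs = 50e6                      # sampling rate [Hz]
        f0 = 1e6                       # excitation (fundamental) frequency [Hz]
        t = np.arange(0, 200e-6, 1 / fs)

        # Synthetic signal: fundamental plus a small second harmonic (placeholder data).
        signal = np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 2 * f0 * t)

        spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
        freqs = np.fft.rfftfreq(len(signal), 1 / fs)

        A1 = spectrum[np.argmin(np.abs(freqs - f0))]       # fundamental amplitude
        A2 = spectrum[np.argmin(np.abs(freqs - 2 * f0))]   # second-harmonic amplitude

        beta_rel = A2 / A1**2          # relative (uncalibrated) nonlinearity parameter
        print(f"A1={A1:.4f}, A2={A2:.5f}, relative beta={beta_rel:.4f}")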

  17. Modeling and Validation of a Propellant Mixer for Controller Design

    NASA Technical Reports Server (NTRS)

    Richter, Hanz; Barbieri, Enrique; Figueroa, Fernando

    2003-01-01

    A mixing chamber used in rocket engine testing at the NASA Stennis Space Center is modelled by a system of two nonlinear ordinary differential equations. The mixer is used to condition the thermodynamic properties of cryogenic liquid propellant by controlled injection of the same substance in the gaseous phase. The three inputs of the mixer are the positions of the valves regulating the liquid and gas flows at the inlets, and the position of the exit valve regulating the flow of conditioned propellant. Mixer operation during a test requires the regulation of its internal pressure, exit mass flow, and exit temperature. A mathematical model is developed to facilitate subsequent controller designs. The model must be simple enough to lend itself to subsequent feedback controller design, yet its accuracy must be tested against real data. For this reason, the model includes function calls to thermodynamic property data. Some structural properties of the resulting model that pertain to controller design, such as uniqueness of the equilibrium point, feedback linearizability and local stability are shown to hold under conditions having direct physical interpretation. The existence of fixed valve positions that attain a desired operating condition is also shown. Validation of the model against real data is likewise provided.

  18. Validation of hydrogen gas stratification and mixing models

    SciTech Connect

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling based one-dimensional method to achieve large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreement is observed. The entrainment coefficients of 0.09 and 0.08 are found to fit the experimental data for hydrogen leaks with the Froude number of 99 and 268 best, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.
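    The entrainment coefficients quoted above enter through the classical entrainment hypothesis for a round turbulent jet, in which the volume flux grows at a rate proportional to the local jet width and velocity. The sketch below integrates that relation for a simple momentum-driven, top-hat jet; it is a textbook-style illustration of the role of the entrainment coefficient, not the scaling-based one-dimensional formulation implemented in BMIX++, and the source diameter and velocity are placeholders.

        # Entrainment-hypothesis sketch for a round momentum jet (top-hat profiles).
        import math

        alpha = 0.08          # entrainment coefficient (one of the fitted values above)
        d0 = 0.01             # source diameter [m], placeholder
        u0 = 10.0             # source velocity [m/s], placeholder

        Q0 = math.pi * (d0 / 2) ** 2 * u0        # initial volume flux [m^3/s]
        M = math.pi * (d0 / 2) ** 2 * u0 ** 2    # kinematic momentum flux (conserved)

        def volume_flux(z, dz=1e-3):
            # March dQ/dz = 2*pi*alpha*b*u downstream, with u = M/Q and b = Q/sqrt(pi*M).
            Q, s = Q0, 0.0
            while s < z:
                u = M / Q
                b = Q / math.sqrt(math.pi * M)
                Q += 2 * math.pi * alpha * b * u * dz
                s += dz
            return Q

        for z in (0.1, 0.5, 1.0):
            print(f"z = {z:.1f} m: dilution Q/Q0 = {volume_flux(z) / Q0:.1f}")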

  19. Validation of hydrogen gas stratification and mixing models

    DOE PAGES

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling based one-dimensional method to achieve large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreement is observed. The entrainment coefficients of 0.09 and 0.08 are found to fit the experimental data for hydrogen leaks with the Froude number of 99 and 268 best, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.

  20. Markov models of molecular kinetics: generation and validation.

    PubMed

    Prinz, Jan-Hendrik; Wu, Hao; Sarich, Marco; Keller, Bettina; Senne, Martin; Held, Martin; Chodera, John D; Schütte, Christof; Noé, Frank

    2011-05-07

    Markov state models of molecular kinetics (MSMs), in which the long-time statistical dynamics of a molecule is approximated by a Markov chain on a discrete partition of configuration space, have seen widespread use in recent years. This approach has many appealing characteristics compared to straightforward molecular dynamics simulation and analysis, including the potential to mitigate the sampling problem by extracting long-time kinetic information from short trajectories and the ability to straightforwardly calculate expectation values and statistical uncertainties of various stationary and dynamical molecular observables. In this paper, we summarize the current state of the art in generation and validation of MSMs and give some important new results. We describe an upper bound for the approximation error made by modeling molecular dynamics with a MSM and we show that this error can be made arbitrarily small with surprisingly little effort. In contrast to previous practice, it becomes clear that the best MSM is not obtained by the most metastable discretization, but the MSM can be much improved if non-metastable states are introduced near the transition states. Moreover, we show that it is not necessary to resolve all slow processes by the state space partitioning, but individual dynamical processes of interest can be resolved separately. We also present an efficient estimator for reversible transition matrices and a robust test to validate that a MSM reproduces the kinetics of the molecular dynamics data.
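    A minimal illustration of MSM construction from a discretised trajectory: count transitions at a chosen lag time, row-normalise to obtain a transition matrix, and read implied timescales off its eigenvalues. The sketch below uses a plain (non-reversible) maximum-likelihood count estimate and a toy two-state trajectory, rather than the reversible estimator and clustered molecular dynamics data discussed in the paper.

        # Simple Markov state model estimate from a discrete trajectory (sketch).
        import numpy as np

        def estimate_msm(dtraj, n_states, lag):
            # Count matrix and row-stochastic transition matrix at the given lag.
            C = np.zeros((n_states, n_states))
            for i, j in zip(dtraj[:-lag], dtraj[lag:]):
                C[i, j] += 1
            T = C / C.sum(axis=1, keepdims=True)
            return C, T

        def implied_timescales(T, lag):
            # t_i = -lag / ln(lambda_i) for eigenvalues below the stationary one.
            evals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
            return -lag / np.log(evals[1:])

        # Toy two-state trajectory standing in for clustered MD data.
        rng = np.random.default_rng(0)
        dtraj, state = [], 0
        for _ in range(10_000):
            if rng.random() < (0.02 if state == 0 else 0.05):
                state = 1 - state
            dtraj.append(state)

        C, T = estimate_msm(np.array(dtraj), n_states=2, lag=10)
        print("transition matrix:\n", T.round(3))
        print("implied timescales (steps):", implied_timescales(T, lag=10).round(1))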

  1. A geomagnetically induced current warning system: model development and validation

    NASA Astrophysics Data System (ADS)

    McKay, A.; Clarke, E.; Reay, S.; Thomson, A.

    Geomagnetically Induced Currents (GIC), which can flow in technological systems at the Earth's surface, are a consequence of magnetic storms and Space Weather. A well-documented practical problem for the power transmission industry is that GIC can affect the lifetime and performance of transformers within the power grid. Operational mitigation is widely considered to be one of the best strategies to manage the Space Weather and GIC risk. Therefore in the UK a magnetic storm warning and GIC monitoring and analysis programme has been under development by the British Geological Survey and Scottish Power plc (the power grid operator for Central Scotland) since 1999. Under the auspices of the European Space Agency's service development activities BGS is developing the capability to meet two key user needs that have been identified. These needs are, firstly, the development of a near real-time solar wind shock/geomagnetic storm warning, based on L1 solar wind data and, secondly, the development of an integrated surface geo-electric field and power grid network model that should allow prediction of GIC throughout the power grid in near real time. While the final goal is a 'seamless package', the components of the package utilise diverse scientific techniques. We review progress to date with particular regard to the validation of the individual components of the package. The Scottish power grid response to the October 2003 magnetic storms is also discussed and model and validation data are presented.

  2. Validating the Thinking Styles Inventory-Revised II among Chinese university students with hearing impairment through test accommodations.

    PubMed

    Cheng, Sanyin; Zhang, Li-Fang

    2014-01-01

    The present study pioneered in adopting test accommodations to validate the Thinking Styles Inventory-Revised II (TSI-R2; Sternberg, Wagner, & Zhang, 2007) among Chinese university students with hearing impairment. A series of three studies were conducted that drew their samples from the same two universities, in which accommodating test directions (N = 213), combining test directions with language accommodations from students' perspectives (N = 366), and integrating test directions with language accommodations from teachers' perspectives (N = 129) were used. The accommodated TSI-R2 generally indicated acceptable internal scale reliabilities and factorial validity for Chinese university students with hearing loss. Limitations in relation to the study participants are discussed, as well as test accommodations and the significance and implications of the study.

  3. Quality-control plan for intravenous admixture programs. II: Validation of operator technique.

    PubMed

    Morris, B G; Avis, K E; Bowles, G C

    1980-05-01

    A plan for the validation of aseptic-operator technique in i.v. admixture programs and two test methods for evaluating the plan are proposed. After a new operator has been trained, the plan involves qualification of the operator through the preparation of statistically valid samples, to be followed by the random selection of samples for in-process monitoring. To test the plan, trypticase soy broth transfers were used in one hospital and Addi-Chek (Millipore Corp.) filtrations were used in another. The participants, all trained operators, initially prepared 40 test samples as a validation step. The finding of no microbial growth in these test samples permitted continuation into the monitoring phase, during which test samples were prepared randomly, one test sample out of every 25 i.v. admixtures prepared for patient use. All samples were negative for microbial growth, indicating that the operators maintained aseptic technique. These findings give evidence that the proposed testing plan is valid. The authors propose the plan as a phase of a quality control program, based on valid statistical principles, to give assurance that i.v. room operators are qualified to prepare sterile parenteral medications.

  4. Ultrasonic transducers for cure monitoring: design, modelling and validation

    NASA Astrophysics Data System (ADS)

    Lionetto, Francesca; Montagna, Francesco; Maffezzoli, Alfonso

    2011-12-01

    The finite element method (FEM) has been applied to simulate the ultrasonic wave propagation in a multilayered transducer, expressly designed for high-frequency dynamic mechanical analysis of polymers. The FEM model includes an electro-acoustic (active element) and some acoustic (passive elements) transmission lines. The simulation of the acoustic propagation accounts for the interaction between the piezoceramic and the materials in the buffer rod and backing, and the coupling between the electric and mechanical properties of the piezoelectric material. As a result of the simulations, the geometry and size of the modelled ultrasonic transducer have been optimized and used for the realization of a prototype transducer for cure monitoring. The transducer performance has been validated by measuring the velocity changes during the polymerization of a thermosetting matrix of composite materials.

  5. A validated predictive model of coronary fractional flow reserve

    PubMed Central

    Huo, Yunlong; Svendsen, Mark; Choy, Jenny Susana; Zhang, Z.-D.; Kassab, Ghassan S.

    2012-01-01

    Myocardial fractional flow reserve (FFR), an important index of coronary stenosis, is measured by a pressure sensor guidewire. The determination of FFR, only based on the dimensions (lumen diameters and length) of stenosis and hyperaemic coronary flow with no other ad hoc parameters, is currently not possible. We propose an analytical model derived from conservation of energy, which considers various energy losses along the length of a stenosis, i.e. convective and diffusive energy losses as well as energy loss due to sudden constriction and expansion in lumen area. In vitro (constrictions were created in isolated arteries using symmetric and asymmetric tubes as well as an inflatable occluder cuff) and in vivo (constrictions were induced in coronary arteries of eight swine by an occluder cuff) experiments were used to validate the proposed analytical model. The proposed model agreed well with the experimental measurements. A least-squares fit showed a linear relation as (Δp or FFR)experiment = a(Δp or FFR)theory + b, where a and b were 1.08 and −1.15 mmHg (r2 = 0.99) for in vitro Δp, 0.96 and 1.79 mmHg (r2 = 0.75) for in vivo Δp, and 0.85 and 0.1 (r2 = 0.7) for FFR. Flow pulsatility and stenosis shape (e.g. eccentricity, exit angle divergence, etc.) had a negligible effect on myocardial FFR, while the entrance effect in a coronary stenosis was found to contribute significantly to the pressure drop. We present a physics-based experimentally validated analytical model of coronary stenosis, which allows prediction of FFR based on stenosis dimensions and hyperaemic coronary flow with no empirical parameters. PMID:22112650
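    The agreement statistics quoted above (slope a, intercept b, and r²) come from an ordinary least-squares fit of measured against model-predicted values. A generic version of that comparison, with invented numbers in place of the study's data, looks like the sketch below.

        # Least-squares comparison of measured vs. predicted FFR (illustrative data).
        import numpy as np

        ffr_theory = np.array([0.95, 0.88, 0.80, 0.72, 0.65, 0.55])    # model predictions (placeholder)
        ffr_measured = np.array([0.93, 0.86, 0.77, 0.71, 0.62, 0.57])  # guidewire measurements (placeholder)

        # Fit: measured = a * predicted + b
        a, b = np.polyfit(ffr_theory, ffr_measured, 1)
        residuals = ffr_measured - (a * ffr_theory + b)
        r2 = 1 - residuals.var() / ffr_measured.var()

        print(f"slope a = {a:.2f}, intercept b = {b:.2f}, r^2 = {r2:.2f}")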

  6. Satellite information of sea ice for model validation

    NASA Astrophysics Data System (ADS)

    Saheed, P. P.; Mitra, Ashis K.; Momin, Imranali M.; Mahapatra, Debasis K.; Rajagopal, E. N.

    2016-05-01

    The emergence of extensively large computational facilities has enabled the scientific world to use earth system models for understanding the prevailing dynamics of the earth's atmosphere, ocean and cryosphere and their interrelations. The sea ice in the Arctic and the Antarctic has been identified as one of the main proxies to study climate change. The rapid sea-ice melting in the Arctic and the disappearance of multi-year sea ice have become a matter of concern. The earth system models couple the ocean, atmosphere and sea ice in order to bring out the possible interconnections between these three very important components and their role in the changing climate. The Indian monsoon is seen to be subject to nonlinear changes in recent years. The rapid melt of Arctic sea ice is apparently linked to changes in the weather and climate of the Indian subcontinent. Recent findings reveal a relation between extreme events occurring in the Indian subcontinent and Arctic sea-ice melt episodes. The coupled models are being used in order to study the depth of these relations. However, the models have to be validated extensively using measured parameters. Satellite measurements of sea ice date back to 1979, and many data sets have become available since then. In this study, an evaluation of the existing data sets is conducted. There are some uncertainties in these data sets, which could be associated with the absence of a single sensor for a long period of time and also the absence of accurate in-situ measurements with which to validate the satellite measurements.

  7. Argonne Bubble Experiment Thermal Model Development II

    SciTech Connect

    Buechler, Cynthia Eileen

    2016-07-01

    This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels, 6, 12 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations. The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte-Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements.

  8. Oxygen and seizure dynamics: II. Computational modeling

    PubMed Central

    Wei, Yina; Ullah, Ghanim; Ingram, Justin

    2014-01-01

    Electrophysiological recordings show intense neuronal firing during epileptic seizures leading to enhanced energy consumption. However, the relationship between oxygen metabolism and seizure patterns has not been well studied. Recent studies have developed fast and quantitative techniques to measure oxygen microdomain concentration during seizure events. In this article, we develop a biophysical model that accounts for these experimental observations. The model is an extension of the Hodgkin-Huxley formalism and includes the neuronal microenvironment dynamics of sodium, potassium, and oxygen concentrations. Our model accounts for metabolic energy consumption during and following seizure events. We can further account for the experimental observation that hypoxia can induce seizures, with seizures occurring only within a narrow range of tissue oxygen pressure. We also reproduce the interplay between excitatory and inhibitory neurons seen in experiments, accounting for the different oxygen levels observed during seizures in excitatory vs. inhibitory cell layers. Our findings offer a more comprehensive understanding of the complex interrelationship among seizures, ion dynamics, and energy metabolism. PMID:24671540

  9. Development and validation of a liquid composite molding model

    NASA Astrophysics Data System (ADS)

    Bayldon, John Michael

    2007-12-01

    In composite manufacturing, Vacuum Assisted Resin Transfer Molding (VARTM) is becoming increasingly important as a cost effective manufacturing method of structural composites. In this process the dry preform (reinforcement) is placed on a rigid tool and covered by a flexible film to form an airtight vacuum bag. Liquid resin is drawn under vacuum through the preform inside the vacuum bag. Modeling of this process relies on a good understanding of closely coupled phenomena. The resin flow depends on the preform permeability, which in turn depends on the local fluid pressure and the preform compaction behavior. VARTM models for predicting the flow rate in this process do exist, however, they are not able to properly predict the flow for all classes of reinforcement material. In this thesis, the continuity equation used in VARTM models is reexamined and a modified form proposed. In addition, the compaction behavior of the preform in both saturated and dry states is studied in detail and new models are proposed for the compaction behavior. To assess the validity of the proposed models, the shadow moire method was adapted and used to perform full field measurement of the preform thickness during infusion, in addition to the usual measurements of flow front position. A new method was developed and evaluated for the analysis of the moire data related to the VARTM process, however, the method has wider applicability to other full field thickness measurements. The use of this measurement method demonstrated that although the new compaction models work well in the characterization tests, they do not properly describe all the preform features required for modeling the process. In particular the effect of varying saturation on the preform's behavior requires additional study. The flow models developed did, however, improve the prediction of the flow rate for the more compliant preform material tested, and the experimental techniques have shown where additional test methods

  10. Literature-derived bioaccumulation models for earthworms: Development and validation

    SciTech Connect

    Sample, B.E.; Suter, G.W. II; Beauchamp, J.J.; Efroymson, R.A.

    1999-09-01

    Estimation of contaminant concentrations in earthworms is a critical component in many ecological risk assessments. Without site-specific data, literature-derived uptake factors or models are frequently used. Although considerable research has been conducted on contaminant transfer from soil to earthworms, most studies focus on only a single location. External validation of transfer models has not been performed. The authors developed a database of soil and tissue concentrations for nine inorganic and two organic chemicals. Only studies that presented total concentrations in depurated earthworms were included. Uptake factors and simple and multiple regression models of natural-log-transformed concentrations of each analyte in soil and earthworms were developed using data from 26 studies. These models were then applied to data from six additional studies. Estimated and observed earthworm concentrations were compared using nonparametric Wilcoxon signed-rank tests. Relative accuracy and quality of different estimation methods were evaluated by calculating the proportional deviation of the estimate from the measured value. With the exception of Cr, significant, single-variable (e.g., soil concentration) regression models were fit for each analyte. Inclusion of soil Ca improved model fits for Cd and Pb. Soil pH only marginally improved model fits. The best general estimates of chemical concentrations in earthworms were generated by simple ln-ln regression models for As, Cd, Cu, Hg, Mn, Pb, Zn, and polychlorinated biphenyls. No method accurately estimated Cr or Ni in earthworms. Although multiple regression models including pH generated better estimates for a few analytes, in general, the predictive utility gained by incorporating environmental variables was marginal.
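    The single-variable models described above are ordinary least-squares regressions of ln-transformed tissue concentration on ln-transformed soil concentration. A minimal version of fitting and applying such a model, with invented soil/tissue pairs in place of the compiled literature data, is sketched below.

        # ln-ln soil-to-earthworm regression (illustrative data, not the study's).
        import numpy as np

        soil = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # soil Cd, mg/kg (placeholder)
        worm = np.array([1.1, 1.9, 3.5, 7.0, 11.0, 19.0])   # tissue Cd, mg/kg (placeholder)

        # Fit ln(worm) = b0 + b1 * ln(soil)
        b1, b0 = np.polyfit(np.log(soil), np.log(worm), 1)

        def predict_worm(soil_conc):
            # Back-transformed prediction of tissue concentration (mg/kg).
            return np.exp(b0 + b1 * np.log(soil_conc))

        print(f"ln-ln slope = {b1:.2f}, intercept = {b0:.2f}")
        print("predicted worm conc at 8 mg/kg soil:", round(float(predict_worm(8.0)), 2))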

  11. PEP-II vacuum system pressure profile modeling using EXCEL

    SciTech Connect

    Nordby, M.; Perkins, C.

    1994-06-01

    A generic, adaptable Microsoft EXCEL program to simulate molecular flow in beam line vacuum systems is introduced. Modeling using finite-element approximation of the governing differential equation is discussed, as well as error estimation and program capabilities. The ease of use and flexibility of the spreadsheet-based program is demonstrated. PEP-II vacuum system models are reviewed and compared with analytical models.
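    The governing equation being approximated is, presumably, the commonly used one-dimensional molecular-flow balance along a beam pipe, c·d²P/dz² − s·P + q = 0, where c is the specific conductance, s the distributed pumping speed per unit length, and q the outgassing rate per unit length. The sketch below solves a finite-difference version of that balance with NumPy rather than a spreadsheet; the geometry and coefficients are placeholders, not PEP-II values.

        # 1-D molecular-flow pressure profile by finite differences (illustrative).
        import numpy as np

        L = 10.0     # beam-pipe section length [m]
        n = 200      # number of grid intervals
        dz = L / n
        c = 20.0     # specific conductance [l*m/s], placeholder
        s = 0.5      # distributed pumping speed [l/(s*m)], placeholder
        q = 1e-8     # outgassing per unit length [Torr*l/(s*m)], placeholder

        # Discretise c*P'' - s*P + q = 0 with fixed pressures at both ends
        # (representing lumped pumps at the section boundaries).
        A = np.zeros((n + 1, n + 1))
        b = np.full(n + 1, -q)
        for i in range(1, n):
            A[i, i - 1] = c / dz**2
            A[i, i] = -2 * c / dz**2 - s
            A[i, i + 1] = c / dz**2
        A[0, 0] = A[n, n] = 1.0
        b[0] = b[n] = 1e-9            # boundary pressures [Torr], placeholder

        P = np.linalg.solve(A, b)
        print(f"peak pressure {P.max():.2e} Torr at z = {P.argmax() * dz:.2f} m")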

  12. Mortality Probability Model III and Simplified Acute Physiology Score II

    PubMed Central

    Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams

    2009-01-01

    Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM0) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IVrecal) [R2 = 0.422], mortality probability model III at zero hours (MPM0 III) [R2 = 0.279], and simplified acute physiology score (SAPS II) [R2 = 0.008]. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IVrecal, MPM0 III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM0 III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM0 III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210

  13. Systematic approach to verification and validation: High explosive burn models

    SciTech Connect

    Menikoff, Ralph; Scovel, Christina A.

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implemented their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code
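    The automation described above amounts to reading key experimental parameters out of a data file's header and emitting a templated input deck for the simulation. The sketch below shows that pattern in a generic form; the header keys, template fields, and file name are hypothetical, not the actual HED metadata format or any particular hydro code's input syntax.

        # Generic sketch: header metadata -> templated simulation input deck.
        header = {
            "experiment": "shock_initiation_042",      # hypothetical record name
            "explosive": "PBX-9502",
            "initial_density_g_cc": 1.89,
            "impact_velocity_km_s": 1.2,
            "run_distance_mm": 10.0,
        }

        input_template = """\
        # auto-generated input deck for {experiment} (hypothetical keyword syntax)
        material   {explosive}
        density    {initial_density_g_cc}
        piston_vel {impact_velocity_km_s}
        mesh_len   {run_distance_mm}
        """

        deck = input_template.format(**header)
        with open(header["experiment"] + ".in", "w") as f:
            f.write(deck)
        print(deck)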

  14. A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.

    ERIC Educational Resources Information Center

    Edmonston, Leon P.; Randall, Robert S.

    A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…

  15. Validity of the KABC-II Culture-Language Interpretive Matrix: A Comparison of Native English Speakers and Spanish-Speaking English Language Learners

    ERIC Educational Resources Information Center

    Van Deth, Leah M.

    2013-01-01

    The purpose of the present study was to investigate the validity of the Culture-Language Interpretive Matrix (C-LIM; Flanagan, Ortiz, & Alfonso, 2013) when applied to scores from the Kaufman Assessment Battery for Children, 2nd Edition (KABC-II; Kaufman & Kaufman, 2004). Data were analyzed from the KABC-II standardization sample as well as…

  16. Drilling forces model for lunar regolith exploration and experimental validation

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Ding, Xilun

    2017-02-01

    China's Chang'e lunar exploration project aims to sample and return lunar regolith samples at a minimum penetration depth of 2 m in 2017. Unlike such tasks on the Earth, automated drilling and sampling missions on the Moon are more complicated. Therefore, a delicately designed drill tool is required to minimize operational cost and enhance reliability. Penetration force and rotational torque are two critical parameters in designing the drill tool. In this paper, a novel numerical model for predicting penetration force and rotational torque in the drilling of lunar regolith is proposed. The model is based on quasi-static Mohr-Coulomb soil mechanics and explicitly describes the interaction between drill tool and lunar regolith. Geometric features of drill tool, mechanical properties of lunar regolith, and drilling parameters are taken into consideration in the model. Consequently, a drilling test bed was developed, and experimental penetration force and rotational torque were obtained in penetrating a lunar regolith simulant with different drilling parameters. Finally, theoretical and experimental results were compared to validate the proposed model. Experimental results indicated that the numerical model had good accuracy and was effective in predicting the penetration force and rotational torque in drilling the lunar regolith simulant.

  17. Development and validation of a habitat suitability model for ...

    EPA Pesticide Factsheets

    We developed a spatially-explicit, flexible 3-parameter habitat suitability model that can be used to identify and predict areas at higher risk for non-native dwarf eelgrass (Zostera japonica) invasion. The model uses simple environmental parameters (depth, nearshore slope, and salinity) to quantitatively describe habitat suitable for Z. japonica invasion based on ecology and physiology from the primary literature. Habitat suitability is defined with values ranging from zero to one, where one denotes areas most conducive to Z. japonica and zero denotes areas not likely to support Z. japonica growth. The model was applied to Yaquina Bay, Oregon, USA, an area that has well documented Z. japonica expansion over the last two decades. The highest suitability values for Z. japonica occurred in the mid to upper portions of the intertidal zone, with larger expanses occurring in the lower estuary. While the upper estuary did contain suitable habitat, most areas were not as large as in the lower estuary, due to inappropriate depth, a steeply sloping intertidal zone, and lower salinity. The lowest suitability values occurred below the lower intertidal zone, within the Yaquina River channel. The model was validated by comparison to a multi-year time series of Z. japonica maps, revealing a strong predictive capacity. Sensitivity analysis performed to evaluate the contribution of each parameter to the model prediction revealed that depth was the most important factor. Sh
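    One common way to combine a small number of environmental responses into a single 0-1 suitability index is to score each parameter on a 0-1 scale and take their geometric mean (or minimum). The sketch below follows that pattern for depth, slope, and salinity; the response curves, thresholds, and the choice of geometric mean are placeholders for illustration, not the calibrated functions of the published model.

        # Three-parameter habitat suitability index (placeholder response curves).
        def ramp(x, lo, hi):
            # 0 below lo, 1 above hi, linear in between (simple suitability curve).
            return min(1.0, max(0.0, (x - lo) / (hi - lo)))

        def suitability(depth_m, slope_pct, salinity_psu):
            # Geometric mean of three 0-1 scores; all curves are illustrative only.
            s_depth = ramp(depth_m, -0.5, 0.5) * ramp(2.0 - depth_m, 0.0, 1.0)
            s_slope = ramp(5.0 - slope_pct, 0.0, 3.0)
            s_sal = ramp(salinity_psu, 15.0, 25.0)
            return (s_depth * s_slope * s_sal) ** (1.0 / 3.0)

        for depth, slope, sal in [(1.0, 1.0, 30.0), (1.0, 1.0, 10.0), (3.0, 1.0, 30.0)]:
            print(depth, slope, sal, "->", round(suitability(depth, slope, sal), 2))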

  18. Validation of a Global Hydrodynamic Flood Inundation Model

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Smith, A.; Sampson, C. C.; Alfieri, L.; Neal, J. C.

    2014-12-01

    In this work we present first validation results for a hyper-resolution global flood inundation model. We use a true hydrodynamic model (LISFLOOD-FP) to simulate flood inundation at 1km resolution globally and then use downscaling algorithms to determine flood extent and depth at 90m spatial resolution. Terrain data are taken from a custom version of the SRTM data set that has been processed specifically for hydrodynamic modelling. Return periods of flood flows along the entire global river network are determined using: (1) empirical relationships between catchment characteristics and index flood magnitude in different hydroclimatic zones derived from global runoff data; and (2) an index flood growth curve, also empirically derived. Bankful return period flow is then used to set channel width and depth, and flood defence impacts are modelled using empirical relationships between GDP, urbanization and defence standard of protection. The results of these simulations are global flood hazard maps for a number of different return period events from 1 in 5 to 1 in 1000 years. We compare these predictions to flood hazard maps developed by national government agencies in the UK and Germany using similar methods but employing detailed local data, and to observed flood extent at a number of sites including St. Louis, USA and Bangkok in Thailand. Results show that global flood hazard models can have considerable skill given careful treatment to overcome errors in the publicly available data that are used as their input.
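    The regionalisation step described above can be summarised as estimating an index flood from catchment descriptors and scaling it by a dimensionless growth curve to obtain the T-year flow. The sketch below shows that two-step structure with invented regression coefficients and a generic Gumbel-type growth curve; neither the regression nor the growth factors are those derived in the study.

        # Index flood x growth curve: illustrative return-period flow estimate.
        import math

        def index_flood(area_km2, mean_annual_precip_mm, coeff=0.2, a=0.8, b=1.1):
            # Illustrative power-law regression on catchment descriptors
            # (coefficients are invented placeholders, not fitted regional values).
            return coeff * area_km2**a * mean_annual_precip_mm**b / 1000.0

        def growth_factor(return_period_yr, beta=0.35):
            # Generic Gumbel-type growth curve normalised to 1 at T = 2 years.
            def reduced_variate(T):
                return -math.log(-math.log(1.0 - 1.0 / T))
            return 1.0 + beta * (reduced_variate(return_period_yr) - reduced_variate(2.0))

        q_index = index_flood(area_km2=500.0, mean_annual_precip_mm=1200.0)
        for T in (5, 100, 1000):
            print(f"T = {T:4d} yr: Q = {q_index * growth_factor(T):.1f} m^3/s")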

  19. Canyon building ventilation system dynamic model -- Parameters and validation

    SciTech Connect

    Moncrief, B.R. ); Chen, F.F.K. )

    1993-01-01

    Plant system simulation crosses many disciplines. At the core is the mimic of key components in the form of mathematical "models." These component models are functionally integrated to represent the plant. With today's low cost high capacity computers, the whole plant can be truly and effectively reproduced in a computer model. Dynamic simulation has its roots in "single loop" design, which is still a common objective in the employment of simulation. The other common objectives are the ability to preview plant operation, to anticipate problem areas, and to test the impact of design options. As plant system complexity increases and our ability to simulate the entire plant grows, the objective to optimize plant system design becomes practical. This shift in objectives from problem avoidance to total optimization by far offers the most rewarding potential. Even a small reduction in bulk materials and space can sufficiently justify the application of this technology. Furthermore, to realize an optimal plant starts from a tight and disciplined design. We believe the assurance required to execute such a design strategy can partly be derived from a plant model. This paper reports on the application of a dynamic model to evaluate the capacity of an existing production plant ventilation system. This study met the practical objectives of capacity evaluation under present and future conditions, and under normal and accidental situations. More importantly, the description of this application, in its methods and its utility, aims to validate the technology of dynamic simulation in the environment of plant system design and safe operation.

  20. Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions

    NASA Technical Reports Server (NTRS)

    Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR(TradeMark) mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR(TradeMark). In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. Also the Modal Response (dynamic disturbance) was measured and compared to model. We will discuss the fabrication approach and optomechanical design of the ZERODUR(TradeMark) mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  1. Stable Eutectoid Transformation in Nodular Cast Iron: Modeling and Validation

    NASA Astrophysics Data System (ADS)

    Carazo, Fernando D.; Dardati, Patricia M.; Celentano, Diego J.; Godoy, Luis A.

    2017-01-01

    This paper presents a new microstructural model of the stable eutectoid transformation in a spheroidal cast iron. The model takes into account the nucleation and growth of ferrite grains and the growth of graphite spheroids. Different laws are assumed for the growth of both phases during and below the intercritical stable eutectoid. At a microstructural level, the initial conditions for the phase transformations are obtained from the microstructural simulation of solidification of the material, which considers the divorced eutectic and the subsequent growth of graphite spheroids up to the initiation of the stable eutectoid transformation. The temperature field is obtained by solving the energy equation by means of finite elements. The microstructural (phase change) and macrostructural (energy balance) models are coupled by a sequential multiscale procedure. Experimental validation of the model is achieved by comparison with measured values of the fraction and radius of ferrite grains in 2D views. Agreement with these experiments indicates that the present model is capable of predicting ferrite phase fraction and grain size with reasonable accuracy.

  2. Validation of DWPF Melter Off-Gas Combustion Model

    SciTech Connect

    Choi, A.S.

    2000-08-23

    The empirical melter off-gas combustion model currently used in the DWPF safety basis calculations is valid at melter vapor space temperatures above 570 degrees C, as measured in the thermowell. This lower temperature bound coincides with that of the off-gas data used as the basis of the model. In this study, the applicability of the empirical model in a wider temperature range was assessed using the off-gas data collected during two small-scale research melter runs. The first data set came from the Small Cylindrical Melter-2 run in 1985 with the sludge feed coupled with the precipitate hydrolysis product. The second data set came from the 774-A melter run in 1996 with the sludge-only feed prepared with the modified acid addition strategy during the feed pretreatment step. The results of the assessment showed that the data from these two melter runs agreed well with the existing model, and further provided the basis for extending the lower temperature bound of the model to the measured melter vapor space temperature of 445 degrees C.

  3. Lightweight ZERODUR®: Validation of mirror performance and mirror modeling predictions

    NASA Astrophysics Data System (ADS)

    Hull, Anthony B.; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. Also the Modal Response (dynamic disturbance) was measured and compared to model. We will discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  4. Results of site validation experiments. Volume II. Supporting documents 5 through 14

    SciTech Connect

    Not Available

    1983-01-01

    Volume II contains the following supporting documents: Summary of Geologic Mapping of Underground Investigations; Logging of Vertical Coreholes - "Double Box" Area and Exploratory Drift; WIPP High Precision Gravity Survey; Basic Data Reports for Drillholes; Brine Content of Facility Interval Strata; Mineralogical Content of Facility Interval Strata; Location and Characterization of Interbedded Materials; Characterization of Aquifers at Shaft Locations; and Permeability of Facility Interval Strata.

  5. Science Motivation Questionnaire II: Validation with Science Majors and Nonscience Majors

    ERIC Educational Resources Information Center

    Glynn, Shawn M.; Brickman, Peggy; Armstrong, Norris; Taasoobshirazi, Gita

    2011-01-01

    From the perspective of social cognitive theory, the motivation of students to learn science in college courses was examined. The students--367 science majors and 313 nonscience majors--responded to the Science Motivation Questionnaire II, which assessed five motivation components: intrinsic motivation, self-determination, self-efficacy, career…

  6. Validation Analysis of the Shoal Groundwater Flow and Transport Model

    SciTech Connect

    A. Hassan; J. Chapman

    2008-11-01

    Environmental restoration at the Shoal underground nuclear test is following a process prescribed by a Federal Facility Agreement and Consent Order (FFACO) between the U.S. Department of Energy, the U.S. Department of Defense, and the State of Nevada. Characterization of the site included two stages of well drilling and testing in 1996 and 1999, and development and revision of numerical models of groundwater flow and radionuclide transport. Agreement on a contaminant boundary for the site and a corrective action plan was reached in 2006. Later that same year, three wells were installed for the purposes of model validation and site monitoring. The FFACO prescribes a five-year proof-of-concept period for demonstrating that the site groundwater model is capable of producing meaningful results with an acceptable level of uncertainty. The corrective action plan specifies a rigorous seven step validation process. The accepted groundwater model is evaluated using that process in light of the newly acquired data. The conceptual model of ground water flow for the Project Shoal Area considers groundwater flow through the fractured granite aquifer comprising the Sand Springs Range. Water enters the system by the infiltration of precipitation directly on the surface of the mountain range. Groundwater leaves the granite aquifer by flowing into alluvial deposits in the adjacent basins of Fourmile Flat and Fairview Valley. A groundwater divide is interpreted as coinciding with the western portion of the Sand Springs Range, west of the underground nuclear test, preventing flow from the test into Fourmile Flat. A very low conductivity shear zone east of the nuclear test roughly parallels the divide. The presence of these lateral boundaries, coupled with a regional discharge area to the northeast, is interpreted in the model as causing groundwater from the site to flow in a northeastward direction into Fairview Valley. Steady-state flow conditions are assumed given the absence of

  7. Criterion Validity, Severity Cut Scores, and Test-Retest Reliability of the Beck Depression Inventory-II in a University Counseling Center Sample

    ERIC Educational Resources Information Center

    Sprinkle, Stephen D.; Lurie, Daphne; Insko, Stephanie L.; Atkinson, George; Jones, George L.; Logan, Arthur R.; Bissada, Nancy N.

    2002-01-01

    The criterion validity of the Beck Depression Inventory-II (BDI-II; A. T. Beck, R. A. Steer, & G. K. Brown, 1996) was investigated by pairing blind BDI-II administrations with the major depressive episode portion of the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I; M. B. First, R. L. Spitzer, M. Gibbon, & J. B. W.…

  8. The African American Acculturation Scale II: Cross-Validation and Short Form.

    ERIC Educational Resources Information Center

    Landrine, Hope; Klonoff, Elizabeth A.

    1995-01-01

    Studied African American culture, using a new, shortened, 33-item African American Acculturation Scale (AAAS-33) to assess the scale's validity and reliability. Comparisons between the original form and AAAS-33 reveal high correlations; however, the longer form may be sensitive to some beliefs, practices, and attitudes not assessed by the short…

  9. Effects of Risperidone on Aberrant Behavior in Persons with Developmental Disabilities: II. Social Validity Measures.

    ERIC Educational Resources Information Center

    McAdam, David B.; Zarcone, Jennifer R.; Hellings, Jessica; Napolitano, Deborah A.; Schroeder, Stephen R.

    2002-01-01

    Consumer satisfaction and social validity were measured during a double-blind, placebo-controlled evaluation of risperidone in treating aberrant behaviors of persons with developmental disabilities. A survey showed all 17 caregivers felt participation was positive. Community members (n=52) also indicated that when on medication, the 5 participants…

  10. Validity and Utility of the Parent--Teacher Relationship Scale-II

    ERIC Educational Resources Information Center

    Dawson, Anne E.; Wymbs, Brian T.

    2016-01-01

    Preliminary findings indicate that positive relations between parents and teachers are associated with successful school outcomes for children. However, measures available to assess parent-teacher relations are scant. The current study examined validity evidence for the Parent-Teacher Relationship Scale-II (PTRS). Specifically, the internal…

  11. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture and that extreme formation damage occurred, as indicated by an estimated 65% permeability reduction around the wellbore. The minifracture had been designed to extend 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation, as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. An induced fracture half-length of 100 feet was determined, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests showing that extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  12. Geophysical Monitoring for Validation of Transient Permafrost Models (Invited)

    NASA Astrophysics Data System (ADS)

    Hauck, C.; Hilbich, C.; Marmy, A.; Scherler, M.

    2013-12-01

    Permafrost is a widespread phenomenon at high latitudes and high altitudes and describes the permanently frozen state of the subsurface in lithospheric material. In the context of climate change, both new monitoring and modelling techniques are required to observe and predict potential permafrost changes, e.g. the warming and degradation which may lead to the liberation of carbon (Arctic) and the destabilisation of permafrost slopes (mountains). Mountain permafrost occurrences in the European Alps are characterised by temperatures only a few degrees below zero and are therefore particularly sensitive to projected climate changes in the 21st century. Traditional permafrost observation techniques are mainly based on thermal monitoring in the vertical and horizontal dimensions, but they provide only weak indications of physical properties such as ice or liquid water content. Geophysical techniques can be used to characterise permafrost occurrences and to monitor their changes, as the physical properties of frozen and unfrozen ground measured by geophysical techniques are markedly different. In recent years, electromagnetic, seismic, and especially electrical methods have been used to continuously monitor permafrost occurrences and to detect long-term changes within the active layer and in the ice content within the permafrost layer. On the other hand, coupled transient thermal/hydraulic models are used to predict the evolution of permafrost occurrences under different climate change scenarios. These models rely on suitable validation data for a certain observation period, which is usually restricted to data sets of ground temperature and active layer depth. Very important initialisation and validation data for permafrost models are, however, ground ice content and unfrozen water content in the active layer. In this contribution we will present a geophysical monitoring application to estimate ice and water content and their evolution in time at a permafrost station in

  13. Development and validation of a realistic head model for EEG

    NASA Astrophysics Data System (ADS)

    Bangera, Nitin Bhalchandra

    The utility of extracranial electrical or magnetic field recordings (EEG or MEG) is greatly enhanced if the generators of the bioelectromagnetic fields can be determined accurately from the measured fields. This procedure, known as the 'inverse method,' depends critically on calculations of the projection from generators in the brain to the EEG and MEG sensors. Improving and validating this calculation, known as the 'forward solution,' is the focus of this dissertation. The improvements involve more accurate modeling of the structures of the brain and thus understanding how current flows within the brain as a result of addition of structures in the forward model. Validation compares calculations using different forward models to the experimental results obtained by stimulating with implanted dipole electrodes. The human brain tissue displays inhomogeneity in electrical conductivity and also displays anisotropy, notably in the skull and brain white matter. In this dissertation, a realistic head model has been implemented using the finite element method to calculate the effects of inhomogeneity and anisotropy in the human brain. Accurate segmentation of the brain tissue type is implemented using a semi-automatic method to segment multimodal imaging data from multi-spectral MRI scans (different flip angles) in conjunction with the regular T1-weighted scans and computed x-ray tomography images. The electrical conductivity in the anisotropic white matter tissue is quantified from diffusion tensor MRI. The finite element model is constructed using AMIRA, a commercial segmentation and visualization tool and solved using ABAQUS, a commercial finite element solver. The model is validated using experimental data collected from intracranial stimulation in medically intractable epileptic patients. Depth electrodes are implanted in medically intractable epileptic patients in order to direct surgical therapy when the foci cannot be localized with the scalp EEG. These patients

  14. PIV validation of blood-heart valve leaflet interaction modelling.

    PubMed

    Kaminsky, R; Dumont, K; Weber, H; Schroll, M; Verdonck, P

    2007-07-01

    The aim of this study was to validate the 2D computational fluid dynamics (CFD) results of a moving heart valve based on a fluid-structure interaction (FSI) algorithm with experimental measurements. Firstly, a pulsatile laminar flow through a monoleaflet valve model with a stiff leaflet was visualized by means of Particle Image Velocimetry (PIV). The inflow data sets were applied to a CFD simulation including blood-leaflet interaction. The measurement section with a fixed leaflet was enclosed in a standard mock loop in series with a Harvard Apparatus Pulsatile Blood Pump, a compliance chamber and a reservoir. Standard 2D PIV measurements were made at a frequency of 60 bpm. Average velocity magnitude results of 36 phase-locked measurements were evaluated at every 10 degrees of the pump cycle. For the CFD flow simulation, a commercially available package from Fluent Inc. was used in combination with an in-house developed FSI code based on the Arbitrary Lagrangian-Eulerian (ALE) method. Then the CFD code was applied to the leaflet to quantify the shear stress on it. Generally, the CFD results are in agreement with the PIV-evaluated data in major flow regions, thereby validating the FSI simulation of a monoleaflet valve with a flexible leaflet. The applicability of the new CFD code for quantifying the shear stress on a flexible leaflet is thus demonstrated.

  15. Richards model revisited: validation by and application to infection dynamics.

    PubMed

    Wang, Xiang-Sheng; Wu, Jianhong; Yang, Yong

    2012-11-21

    Ever since Richards proposed his flexible growth function more than half a century ago, it has been a mystery why this empirical function fits real ecological and epidemic data so well, even though one of its parameters (the exponential term) does not seem to have a clear biological meaning. It is therefore a natural challenge for mathematical biologists to explain these coincidences and to provide a biological interpretation of the parameter. Here we start from a simple epidemic SIR model to revisit the Richards model via an intrinsic relation between the two models. In particular, we prove that the exponential term in the Richards model has a one-to-one nonlinear correspondence to the basic reproduction number of the SIR model. This one-to-one relation provides an explicit formula for calculating the basic reproduction number. Another biological significance of our study is the observation that the peak time is approximately one serial interval after the turning point. Moreover, we provide an explicit relation between the final outbreak size, the basic reproduction number and the peak epidemic size, which means that the final outbreak size can be predicted shortly after the peak time. Finally, we introduce a constraint in the Richards model to address the overfitting problem observed in existing studies, and then apply the constrained method to validation analyses using data from recent outbreaks of prototype infectious diseases such as the Canada 2009 H1N1 outbreak, the GTA 2003 SARS outbreak, the Singapore 2005 dengue outbreak, and the Taiwan 2003 SARS outbreak. The new formula gives much more stable and precise estimates of the model parameters and key epidemic characteristics such as the final outbreak size, the basic reproduction number, and the turning point, compared with earlier simulations without constraints.
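
    The constrained fitting step described above can be illustrated with a small, hedged sketch: a Richards curve is fit to synthetic cumulative case counts, with parameter bounds playing the role of the paper's constraint. The parameterization (K, r, a, tm) and the synthetic data are illustrative assumptions; the paper's exact formulation and its mapping from the exponent to the basic reproduction number are not reproduced here.

```python
# Hedged sketch: fitting a Richards growth curve to cumulative epidemic counts.
# The functional form and parameter names (K, r, a, tm) are one common
# parameterization, not necessarily the one used by Wang, Wu and Yang (2012).
import numpy as np
from scipy.optimize import curve_fit

def richards(t, K, r, a, tm):
    """Cumulative cases: K / (1 + a*exp(-r*(t - tm)))**(1/a)."""
    return K / (1.0 + a * np.exp(-r * (t - tm))) ** (1.0 / a)

# Synthetic "observed" cumulative counts, for illustration only.
t_obs = np.arange(0, 60)
clean = richards(t_obs, K=1000.0, r=0.25, a=0.5, tm=30.0)
rng = np.random.default_rng(0)
y_obs = clean + rng.normal(0.0, 10.0, size=t_obs.size)

# Constrained fit: the bounds keep parameters in a plausible range (the role
# played by the constraint introduced in the paper to avoid overfitting).
p0 = [y_obs.max(), 0.1, 1.0, t_obs.mean()]
bounds = ([0.0, 1e-3, 1e-3, 0.0], [1e5, 2.0, 5.0, t_obs.max()])
popt, _ = curve_fit(richards, t_obs, y_obs, p0=p0, bounds=bounds)
K, r, a, tm = popt
print(f"final size K={K:.0f}, growth rate r={r:.3f}, exponent a={a:.3f}, turning point tm={tm:.1f}")
```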

  16. Validation of modeled pharmacoeconomic claims in formulary submissions.

    PubMed

    Langley, Paul C

    2015-01-01

    Modeled or simulated claims for costs and outcomes are a key element in formulary submissions and comparative assessments of drug products and devices; however, all too often these claims are presented in a form that is either unverifiable or potentially verifiable but in a time frame that is of no practical use to formulary committees and others who may be committed to ongoing disease-area and therapeutic-class reviews. On the assumption that formulary committees are interested in testable predictions for product performance in target populations and ongoing disease area and therapeutic reviews, the methodological standards that should be applied are those that are accepted in the natural sciences. Claims should be presented in a form that is amenable to falsification. If not, they have no scientific standing. Certainly one can follow ISPOR-SMDM standards for validating the assumptions underpinning a model or simulation. There is clearly an important role for simulations as an input to policy initiatives and developing claims for healthcare interventions and testable hypotheses; however, one would not evaluate such claims on the realism or otherwise of the model. The only standard is one of the model's ability to predict outcomes successfully in a time frame that is practical and useful. No other standard is acceptable. This sets the stage for an active research agenda.

  17. Validation of elastic cross section models for space radiation applications

    NASA Astrophysics Data System (ADS)

    Werneth, C. M.; Xu, X.; Norman, R. B.; Ford, W. P.; Maung, K. M.

    2017-02-01

    The space radiation field is composed of energetic particles that pose both acute and long-term risks for astronauts in low earth orbit and beyond. In order to estimate radiation risk to crew members, the fluence of particles and biological response to the radiation must be known at tissue sites. Given that the spectral fluence at the boundary of the shielding material is characterized, radiation transport algorithms may be used to find the fluence of particles inside the shield and body, and the radio-biological response is estimated from experiments and models. The fidelity of the radiation spectrum inside the shield and body depends on radiation transport algorithms and the accuracy of the nuclear cross sections. In a recent study, self-consistent nuclear models based on multiple scattering theory that include the option to study relativistic kinematics were developed for the prediction of nuclear cross sections for space radiation applications. The aim of the current work is to use uncertainty quantification to ascertain the validity of the models as compared to a nuclear reaction database and to identify components of the models that can be improved in future efforts.

  18. ADOPT: A Historically Validated Light Duty Vehicle Consumer Choice Model

    SciTech Connect

    Brooker, A.; Gonder, J.; Lopp, S.; Ward, J.

    2015-05-04

    The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy’s Vehicle Technologies Office. It estimates technology improvement impacts on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit and mixed logit methods to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes, including vehicle price, fuel cost, acceleration, range and usable volume. The average importance of several attributes changes nonlinearly across their ranges and changes with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model, which captures the key aspects needed to sum petroleum use and greenhouse gas emissions. These include the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, limits on the rate at which new vehicle options are introduced, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data; it matches in key dimensions including sales by fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use, managing the inputs, simulation, and results.
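
    As a rough illustration of the logit-based sales estimation described above, the sketch below computes market shares from weighted vehicle attributes via a multinomial-logit (softmax) form. The attribute set, scaling and weights are hypothetical placeholders, not ADOPT's calibrated inputs.

```python
# Hedged sketch of a multinomial-logit share calculation over weighted vehicle
# attributes. Negative entries encode "lower is better" attributes so that a
# higher weighted sum means a more attractive vehicle.
import numpy as np

def logit_shares(attributes, weights):
    """attributes: (n_vehicles, n_attrs); weights: (n_attrs,). Returns market shares."""
    utility = attributes @ weights                # weighted value of each vehicle
    expu = np.exp(utility - utility.max())        # subtract max for numerical stability
    return expu / expu.sum()

# Columns: -price ($10k), -annual fuel cost ($k), -0-60 time (s), range (100 mi), volume (10 cu ft)
vehicles = np.array([
    [-2.5, -1.2, -8.0, 3.5, 12.0],   # conventional
    [-3.0, -0.6, -7.5, 3.0, 11.5],   # hybrid
    [-3.5, -0.4, -6.5, 2.5, 11.0],   # battery electric
])
weights = np.array([0.8, 0.6, 0.3, 0.4, 0.1])     # hypothetical importance weights
print(logit_shares(vehicles, weights))            # estimated shares summing to 1
```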

  19. First principles Candu fuel model and validation experimentation

    SciTech Connect

    Corcoran, E.C.; Kaye, M.H.; Lewis, B.J.; Thompson, W.T.; Akbari, F.; Higgs, J.D.; Verrall, R.A.; He, Z.; Mouris, J.F.

    2007-07-01

    are added mixing terms associated with the appearance of the component species in particular phases. To validate the model, coulometric titration experiments relating the chemical potential of oxygen to the moles of oxygen introduced to SIMFUEL are underway. A description of the apparatus to oxidize and reduce samples in a controlled way in a H{sub 2}/H{sub 2}O mixture is presented. Existing measurements for irradiated fuel, both defected and non-defected, are also being incorporated into the validation process. (authors)

  20. Case study for model validation : assessing a model for thermal decomposition of polyurethane foam.

    SciTech Connect

    Dowding, Kevin J.; Leslie, Ian H.; Hobbs, Michael L.; Rutherford, Brian Milne; Hills, Richard Guy; Pilch, Martin M.

    2004-10-01

    A case study is reported to document the details of a validation process to assess the accuracy of a mathematical model to represent experiments involving thermal decomposition of polyurethane foam. The focus of the report is to work through a validation process. The process addresses the following activities. The intended application of the mathematical model is discussed to better understand the pertinent parameter space. The parameter space of the validation experiments is mapped to the application parameter space. The mathematical models, the computer code used to solve them, and the verification of that code are presented. Experimental data from two activities are used to validate the mathematical models. The first experiment assesses the chemistry model alone and the second experiment assesses the model of coupled chemistry, conduction, and enclosure radiation. The model results of both experimental activities are summarized and the uncertainty in the model's representation of each experimental activity is estimated. The comparison between the experimental data and model results is quantified with various metrics. After addressing these activities, an assessment of the process for the case study is given. Weaknesses in the process are discussed and lessons learned are summarized.
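
    The quantitative comparison between experiment and model mentioned above can be illustrated with a minimal sketch of typical discrepancy metrics; the metrics and data used in the actual case study are not reproduced here, and the numbers below are placeholders.

```python
# Hedged sketch of simple model-vs-experiment comparison metrics.
import numpy as np

def validation_metrics(measured, predicted):
    """Return a few common discrepancy measures between model and experiment."""
    measured = np.asarray(measured, float)
    predicted = np.asarray(predicted, float)
    resid = predicted - measured
    rmse = np.sqrt(np.mean(resid ** 2))
    return {
        "rmse": float(rmse),
        "max_abs_error": float(np.max(np.abs(resid))),
        "relative_rmse": float(rmse / np.mean(np.abs(measured))),
    }

# Illustrative data: measured vs. modeled foam mass fraction remaining over time.
measured = [1.00, 0.95, 0.80, 0.55, 0.30, 0.12]
predicted = [1.00, 0.93, 0.76, 0.58, 0.34, 0.15]
print(validation_metrics(measured, predicted))
```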

  1. Development, validation and application of numerical space environment models

    NASA Astrophysics Data System (ADS)

    Honkonen, Ilja

    2013-10-01

    Currently the majority of space-based assets are located inside the Earth's magnetosphere where they must endure the effects of the near-Earth space environment, i.e. space weather, which is driven by the supersonic flow of plasma from the Sun. Space weather refers to the day-to-day changes in the temperature, magnetic field and other parameters of the near-Earth space, similarly to ordinary weather which refers to changes in the atmosphere above ground level. Space weather can also cause adverse effects on the ground, for example, by inducing large direct currents in power transmission systems. The performance of computers has been growing exponentially for many decades and as a result the importance of numerical modeling in science has also increased rapidly. Numerical modeling is especially important in space plasma physics because there are no in-situ observations of space plasmas outside of the heliosphere and it is not feasible to study all aspects of space plasmas in a terrestrial laboratory. With the increasing number of computational cores in supercomputers, the parallel performance of numerical models on distributed memory hardware is also becoming crucial. This thesis consists of an introduction and four peer-reviewed articles, and describes the process of developing numerical space environment/weather models and the use of such models to study near-Earth space. A complete model development chain is presented starting from initial planning and design to distributed memory parallelization and optimization, and finally testing, verification and validation of numerical models. A grid library that provides good parallel scalability on distributed memory hardware and several novel features, the distributed cartesian cell-refinable grid (DCCRG), is designed and developed. DCCRG is presently used in two numerical space weather models being developed at the Finnish Meteorological Institute. The first global magnetospheric test particle simulation based on the

  2. Competitive sorption of Pb(II), Cu(II) and Ni(II) on carbonaceous nanofibers: A spectroscopic and modeling approach.

    PubMed

    Ding, Congcong; Cheng, Wencai; Wang, Xiangxue; Wu, Zhen-Yu; Sun, Yubing; Chen, Changlun; Wang, Xiangke; Yu, Shu-Hong

    2016-08-05

    The competitive sorption of Pb(II), Cu(II) and Ni(II) on uniform carbonaceous nanofibers (CNFs) was investigated in binary/ternary-metal systems. The pH-dependent sorption of Pb(II), Cu(II) and Ni(II) on CNFs was independent of ionic strength, indicating that inner-sphere surface complexation dominated the sorption of Pb(II), Cu(II) and Ni(II) on CNFs. The maximum sorption capacities of Pb(II), Cu(II) and Ni(II) on CNFs in single-metal systems at pH 5.5±0.2 and 25±1°C were 3.84 mmol/g (795.65 mg/g), 3.21 mmol/g (204.00 mg/g) and 2.67 mmol/g (156.70 mg/g), respectively. In equimolar binary/ternary-metal systems, Pb(II) exhibited greater inhibition of the sorption of Cu(II) and Ni(II), demonstrating the stronger affinity of CNFs for Pb(II). The competitive sorption of heavy metals in ternary-metal systems was predicted quite well by surface complexation modeling derived from single-metal data. According to FTIR, XPS and EXAFS analyses, Pb(II), Cu(II) and Ni(II) were specifically adsorbed on CNFs via covalent bonding. These observations provide an essential starting point for the simultaneous removal of multiple heavy metals from aquatic environments by CNFs, and open the door to the application of CNFs.
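
    For illustration only, the sketch below estimates a maximum sorption capacity by fitting a Langmuir isotherm to hypothetical single-metal batch data. The study itself used surface complexation modeling rather than a Langmuir fit, so this is merely a common, simpler way to obtain the same kind of capacity estimate; the data values are invented placeholders.

```python
# Hedged sketch: maximum sorption capacity from a Langmuir isotherm fit.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Ce: equilibrium concentration (mmol/L); returns sorbed amount q (mmol/g)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Illustrative single-metal batch data (not the published CNF values).
Ce = np.array([0.05, 0.1, 0.3, 0.6, 1.0, 2.0])   # mmol/L at equilibrium
q = np.array([1.2, 1.9, 2.9, 3.3, 3.6, 3.8])     # mmol/g sorbed

popt, _ = curve_fit(langmuir, Ce, q, p0=[4.0, 5.0])
print(f"qmax = {popt[0]:.2f} mmol/g, KL = {popt[1]:.2f} L/mmol")
```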

  3. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, the model selection is usually based on some measures of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has been widely adopted in chemometrics and econometrics) in RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
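
    A minimal sketch of the MCCV-versus-LOO comparison is given below, assuming an ordinary least-squares regression in place of the paper's Generalised Least Squares formulation and using synthetic catchment data; only the cross-validation mechanics are illustrated.

```python
# Hedged sketch: Monte Carlo Cross Validation (MCCV) vs. leave-one-out (LOO)
# for a simple regional regression (OLS stands in for GLSR to keep this short).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))                                  # e.g. log area, rainfall, slope
y = X @ np.array([0.7, 0.3, -0.2]) + rng.normal(scale=0.3, size=60)  # log flood quantile

model = LinearRegression()

# LOO: one site left out at a time.
loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()

# MCCV: many random splits, each leaving out a larger fraction (here 30%).
mccv = ShuffleSplit(n_splits=200, test_size=0.3, random_state=1)
mccv_mse = -cross_val_score(model, X, y, cv=mccv,
                            scoring="neg_mean_squared_error").mean()

print(f"LOO MSE = {loo_mse:.3f}, MCCV MSE = {mccv_mse:.3f}")
```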

  4. Comparison of LIDAR system performance for alternative single-mode receiver architectures: modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Toliver, Paul; Ozdur, Ibrahim; Agarwal, Anjali; Woodward, T. K.

    2013-05-01

    In this paper, we describe a detailed performance comparison of alternative single-pixel, single-mode LIDAR architectures including (i) linear-mode APD-based direct-detection, (ii) optically-preamplified PIN receiver, (iii) PIN-based coherent-detection, and (iv) Geiger-mode single-photon-APD counting. Such a comparison is useful when considering next-generation LIDAR on a chip, which would allow one to leverage extensive waveguide-based structures and processing elements developed for telecom and apply them to small form-factor sensing applications. Models of four LIDAR transmit and receive systems are described in detail, which include not only the dominant sources of receiver noise commonly assumed in each of the four detection limits, but also additional noise terms present in realistic implementations. These receiver models are validated through the analysis of detection statistics collected from an experimental LIDAR testbed. The receiver is reconfigurable into four modes of operation, while transmit waveforms and channel characteristics are held constant. The use of a diffuse hard target highlights the importance of including speckle noise terms in the overall system analysis. All measurements are done at 1550 nm, which offers multiple system advantages including less stringent eye safety requirements and compatibility with available telecom components, optical amplification, and photonic integration. Ultimately, the experimentally-validated detection statistics can be used as part of an end-to-end system model for projecting rate, range, and resolution performance limits and tradeoffs of alternative integrated LIDAR architectures.

  5. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Statistical Energy Analysis (SEA) methods are not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of the structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.

  6. Modeling and Validating Chronic Pharmacological Manipulation of Circadian Rhythms

    PubMed Central

    Kim, J K; Forger, D B; Marconi, M; Wood, D; Doran, A; Wager, T; Chang, C; Walton, K M

    2013-01-01

    Circadian rhythms can be entrained by a light-dark (LD) cycle and can also be reset pharmacologically, for example, by the CK1δ/ε inhibitor PF-670462. Here, we determine how these two independent signals affect circadian timekeeping from the molecular to the behavioral level. By developing a systems pharmacology model, we predict and experimentally validate that chronic CK1δ/ε inhibition during the earlier hours of a LD cycle can produce a constant stable delay of rhythm. However, chronic dosing later during the day, or in the presence of longer light intervals, is not predicted to yield an entrained rhythm. We also propose a simple method based on phase response curves (PRCs) that predicts the effects of a LD cycle and chronic dosing of a circadian drug. This work indicates that dosing timing and environmental signals must be carefully considered for accurate pharmacological manipulation of circadian phase. PMID:23863866

  7. Use of Synchronized Phasor Measurements for Model Validation in ERCOT

    NASA Astrophysics Data System (ADS)

    Nuthalapati, Sarma; Chen, Jian; Shrestha, Prakash; Huang, Shun-Hsien; Adams, John; Obadina, Diran; Mortensen, Tim; Blevins, Bill

    2013-05-01

    This paper discusses experiences in the use of synchronized phasor measurement technology in Electric Reliability Council of Texas (ERCOT) interconnection, USA. Implementation of synchronized phasor measurement technology in the region is a collaborative effort involving ERCOT, ONCOR, AEP, SHARYLAND, EPG, CCET, and UT-Arlington. As several phasor measurement units (PMU) have been installed in ERCOT grid in recent years, phasor data with the resolution of 30 samples per second is being used to monitor power system status and record system events. Post-event analyses using recorded phasor data have successfully verified ERCOT dynamic stability simulation studies. Real time monitoring software "RTDMS"® enables ERCOT to analyze small signal stability conditions by monitoring the phase angles and oscillations. The recorded phasor data enables ERCOT to validate the existing dynamic models of conventional and/or wind generator.

  8. Utilizing Chamber Data for Developing and Validating Climate Change Models

    NASA Technical Reports Server (NTRS)

    Monje, Oscar

    2012-01-01

    Controlled environment chambers (e.g. growth chambers, SPAR chambers, or open-top chambers) are useful for measuring plant ecosystem responses to climatic variables and CO2 that affect plant water relations. However, data from chambers have been found to overestimate the responses of carbon fluxes to CO2 enrichment. Chamber data may be confounded by numerous artifacts (e.g. sidelighting, edge effects, increased temperature and VPD), and this limits what can be measured accurately. Chambers can be used to measure canopy-level energy balance under controlled conditions, and plant transpiration responses to CO2 concentration can be elucidated. However, these measurements cannot be used directly in model development or validation. The response of stomatal conductance to CO2 will be the same as in the field, but the measured response must be recalculated in a manner that accounts for differences in aerodynamic conductance, temperature and VPD between the chamber and the field.

  9. THE HYDRODYNAMICAL MODELS OF THE COMETARY COMPACT H ii REGION

    SciTech Connect

    Zhu, Feng-Yao; Zhu, Qing-Feng; Li, Juan; Wang, Jun-Zhi; Zhang, Jiang-Shui E-mail: zhuqf@ustc.edu.cn E-mail: jzwang@shao.ac.cn

    2015-10-10

    We have developed a full numerical method to study the gas dynamics of cometary ultracompact H ii regions, and associated photodissociation regions (PDRs). The bow-shock and champagne-flow models with a 40.9/21.9 M⊙ star are simulated. In the bow-shock models, the massive star is assumed to move through dense (n = 8000 cm⁻³) molecular material with a stellar velocity of 15 km s⁻¹. In the champagne-flow models, an exponential distribution of density with a scale height of 0.2 pc is assumed. The profiles of the [Ne ii] 12.81 μm and H₂ S(2) lines from the ionized regions and PDRs are compared for two sets of models. In champagne-flow models, emission lines from the ionized gas clearly show the effect of acceleration along the direction toward the tail due to the density gradient. The kinematics of the molecular gas inside the dense shell are mainly due to the expansion of the H ii region. However, in bow-shock models the ionized gas mainly moves in the same direction as the stellar motion. The kinematics of the molecular gas inside the dense shell simply reflects the motion of the dense shell with respect to the star. These differences can be used to distinguish two sets of models.

  10. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
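
    The bootstrap-based convergence check described above can be sketched as follows. The sensitivity index used here is a crude correlation measure on a toy linear model, chosen only to keep the illustration short; it is not one of the three methods examined in the study, and the convergence criterion is simplified to a mean rank correlation across bootstrap resamples.

```python
# Hedged sketch: bootstrap check of the stability (convergence) of a sensitivity ranking.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

def toy_model(x):
    # Output depends on the inputs with decreasing strength; x5 is inert.
    return 1.0 * x[:, 0] + 0.6 * x[:, 1] + 0.3 * x[:, 2] + 0.1 * x[:, 3] + 0.0 * x[:, 4]

n_samples = 500
X = rng.uniform(-1.0, 1.0, size=(n_samples, 5))
Y = toy_model(X)

def sensitivity_indices(X, Y):
    # Crude absolute-correlation index, used only to illustrate the bootstrap check.
    return np.array([abs(np.corrcoef(X[:, j], Y)[0, 1]) for j in range(X.shape[1])])

base = sensitivity_indices(X, Y)
rank_corrs = []
for _ in range(200):                                   # bootstrap resamples
    idx = rng.integers(0, n_samples, size=n_samples)
    boot = sensitivity_indices(X[idx], Y[idx])
    rho, _ = spearmanr(base, boot)                     # agreement of the factor ranking
    rank_corrs.append(rho)

print("mean rank correlation over bootstraps:", np.mean(rank_corrs))
```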

  11. A technique for global monitoring of net solar irradiance at the ocean surface. II - Validation

    NASA Technical Reports Server (NTRS)

    Chertock, Beth; Frouin, Robert; Gautier, Catherine

    1992-01-01

    The generation and validation of the first satellite-based long-term record of surface solar irradiance over the global oceans are addressed. The record is generated using Nimbus-7 earth radiation budget (ERB) wide-field-of-view planetary-albedo data as input to a numerical algorithm designed and implemented based on radiative transfer theory. The mean monthly values of net surface solar irradiance are computed on a 9-deg latitude-longitude spatial grid for November 1978-October 1985. The new data set is validated in comparisons with short-term, regional, high-resolution, satellite-based records. The ERB-based values of net surface solar irradiance are compared with corresponding values based on radiance measurements taken by the Visible-Infrared Spin Scan Radiometer aboard GOES series satellites. Errors in the new data set are estimated to lie between 10 and 20 W/sq m on monthly time scales.

  12. Gravitropic responses of the Avena coleoptile in space and on clinostats. II. Is reciprocity valid?

    NASA Technical Reports Server (NTRS)

    Johnsson, A.; Brown, A. H.; Chapman, D. K.; Heathcote, D.; Karlsson, C.

    1995-01-01

    Experiments were undertaken to determine if the reciprocity rule is valid for gravitropic responses of oat coleoptiles in the acceleration region below 1 g. The rule predicts that the gravitropic response should be proportional to the product of the applied acceleration and the stimulation time. Seedlings were cultivated on 1 g centrifuges and transferred to test centrifuges to apply a transverse g-stimulation. Since responses occurred in microgravity, the uncertainties about the validity of clinostat simulation of weightlessness were avoided. Plants at two stages of coleoptile development were tested. Plant responses were obtained using time-lapse video recordings that were analyzed after the flight. Stimulus intensities and durations were varied and ranged from 0.1 to 1.0 g and from 2 to 130 min, respectively. For threshold g-doses the reciprocity rule was obeyed. The threshold dose was of the order of 55 g s and 120 g s, respectively, for the two groups of plants investigated. Reciprocity was also studied for bending responses ranging from just above the detectable level to about 10 degrees. The validity of the rule could not be confirmed for higher g-doses, chiefly because the data were more variable. It was investigated whether the uniformity of the overall response data increased when the gravitropic dose was defined as (g^m x t) with m-values different from unity. This was not the case and the reciprocity concept is, therefore, valid also in the hypogravity region. The concept of gravitropic dose, the product of the transverse acceleration and the stimulation time, is also well-defined in the acceleration region studied. With the same hardware, tests were done on earth where responses occurred on clinostats. The results did not contradict the reciprocity rule but scatter in the data was large.

  13. EEG-neurofeedback for optimising performance. II: creativity, the performing arts and ecological validity.

    PubMed

    Gruzelier, John H

    2014-07-01

    As a continuation of a review of evidence of the validity of cognitive/affective gains following neurofeedback in healthy participants, including correlations in support of the gains being mediated by feedback learning (Gruzelier, 2014a), the focus here is on the impact on creativity, especially in the performing arts including music, dance and acting. The majority of research involves alpha/theta (A/T), sensory-motor rhythm (SMR) and heart rate variability (HRV) protocols. There is evidence of reliable benefits from A/T training with advanced musicians especially for creative performance, and reliable benefits from both A/T and SMR training for novice music performance in adults and in a school study with children with impact on creativity, communication/presentation and technique. Making the SMR ratio training context ecologically relevant for actors enhanced creativity in stage performance, with added benefits from the more immersive training context. A/T and HRV training have benefitted dancers. The neurofeedback evidence adds to the rapidly accumulating validation of neurofeedback, while performing arts studies offer an opportunity for ecological validity in creativity research for both creative process and product.

  14. Validation of conducting wall models using magnetic measurements

    NASA Astrophysics Data System (ADS)

    Hanson, J. M.; Bialek, J.; Turco, F.; Navratil, G. A.; King, J.; Strait, E. J.; Turnbull, A. D.

    2016-10-01

    Comparisons between magnetic measurements of the DIII-D wall eddy current response to applied ac, non-axisymmetric fields and MHD stability code predictions favor a fully 3D treatment of the conducting elements in the system. We describe validation studies of the wall models in the MARS-F and VALEN stability codes, using coil-sensor vacuum coupling measurements. The measurements are sensitive to induced wall currents, owing to the close proximities of the sensors and coils to the wall. VALEN treats conducting structures with arbitrary 3D geometries, while MARS-F uses an axisymmetric wall model and spectral decomposition of the problem geometry. Straightforward improvements to the VALEN model, such as refining the wall mesh and sensor geometry, lead to good agreement with single-channel measurements. Comparisons of couplings to multi-coil toroidal mode perturbations for both codes favor the 3D simulation approach, likely because it naturally treats sidebands generated by the coils and wall currents. Work supported by the US Department of Energy under DE-FG02-04ER54761, DE-FC02-04ER54698, and DE-FG02-95ER54309.

  15. Tyre tread-block friction: modelling, simulation and experimental validation

    NASA Astrophysics Data System (ADS)

    Wallaschek, Jörg; Wies, Burkard

    2013-07-01

    Pneumatic tyres have been used in vehicles since the beginning of the last century. They generate braking and steering forces for bicycles, motorcycles, cars, buses, trucks, agricultural vehicles and aircraft. These forces are generated in the usually very small contact area between tyre and road, and their performance characteristics are of eminent importance for safety and comfort. Much research has been devoted to optimising tyre design with respect to footprint pressure and friction. In this context, the development of virtual tyre prototypes, that is, simulation models for the tyre, has grown into a science in its own right. While the modelling of the structural dynamics of the tyre has reached a very advanced level, which makes it possible to take into account effects like the rate-independent inelasticity of filled elastomers or the transient 3D deformations of the ply-reinforced tread, shoulder and sidewalls, little is known about the friction between tread-block elements and road. This is particularly obvious in the case when snow, ice, water or a third-body layer is present in the tyre-road contact. In the present paper, we give a survey of the present state of knowledge in the modelling, simulation and experimental validation of tyre tread-block friction processes. We concentrate on experimental techniques.

  16. Validation of an Acoustic Impedance Prediction Model for Skewed Resonators

    NASA Technical Reports Server (NTRS)

    Howerton, Brian M.; Parrott, Tony L.

    2009-01-01

    An impedance prediction model was validated experimentally to determine the composite impedance of a series of high-aspect ratio slot resonators incorporating channel skew and sharp bends. Such structures are useful for packaging acoustic liners into constrained spaces for turbofan noise control applications. A formulation of the Zwikker-Kosten Transmission Line (ZKTL) model, incorporating the Richards correction for rectangular channels, is used to calculate the composite normalized impedance of a series of six multi-slot resonator arrays with constant channel length. Experimentally, acoustic data was acquired in the NASA Langley Normal Incidence Tube over the frequency range of 500 to 3500 Hz at 120 and 140 dB OASPL. Normalized impedance was reduced using the Two-Microphone Method for the various combinations of channel skew and sharp 90° and 180° bends. Results show that the presence of skew and/or sharp bends does not significantly alter the impedance of a slot resonator as compared to a straight resonator of the same total channel length. ZKTL predicts the impedance of such resonators very well over the frequency range of interest. The model can be used to design arrays of slot resonators that can be packaged into complex geometries heretofore unsuitable for effective acoustic treatment.

  17. Circulation Control Model Experimental Database for CFD Validation

    NASA Technical Reports Server (NTRS)

    Paschal, Keith B.; Neuhart, Danny H.; Beeler, George B.; Allan, Brian G.

    2012-01-01

    A 2D circulation control wing was tested in the Basic Aerodynamic Research Tunnel at the NASA Langley Research Center. A traditional circulation control wing employs tangential blowing along the span over a trailing-edge Coanda surface for the purpose of lift augmentation. This model has been tested extensively at the Georgia Tech Research Institute for the purpose of performance documentation at various blowing rates. The current study seeks to expand on the previous work by documenting additional flow-field data needed for validation of computational fluid dynamics. Two jet momentum coefficients were tested during this entry: 0.047 and 0.114. Boundary-layer transition was investigated and turbulent boundary layers were established on both the upper and lower surfaces of the model. Chordwise and spanwise pressure measurements were made, and tunnel sidewall pressure footprints were documented. Laser Doppler Velocimetry measurements were made on both the upper and lower surface of the model at two chordwise locations (x/c = 0.8 and 0.9) to document the state of the boundary layers near the spanwise blowing slot.

  18. Vibroacoustic Model Validation for a Curved Honeycomb Composite Panel

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Robinson, Jay H.; Grosveld, Ferdinand W.

    2001-01-01

    Finite element and boundary element models are developed to investigate the vibroacoustic response of a curved honeycomb composite sidewall panel. Results from vibroacoustic tests conducted in the NASA Langley Structural Acoustic Loads and Transmission facility are used to validate the numerical predictions. The sidewall panel is constructed from a flexible honeycomb core sandwiched between carbon fiber reinforced composite laminate face sheets. This type of construction is being used in the development of an all-composite aircraft fuselage. In contrast to conventional rib-stiffened aircraft fuselage structures, the composite panel has nominally uniform thickness resulting in a uniform distribution of mass and stiffness. Due to differences in the mass and stiffness distribution, the noise transmission mechanisms for the composite panel are expected to be substantially different from those of a conventional rib-stiffened structure. The development of accurate vibroacoustic models will aid in the understanding of the dominant noise transmission mechanisms and enable optimization studies to be performed that will determine the most beneficial noise control treatments. Finite element and boundary element models of the sidewall panel are described. Vibroacoustic response predictions are presented for forced vibration input and the results are compared with experimental data.

  19. Modal testing for model validation of structures with discrete nonlinearities

    PubMed Central

    Ewins, D. J.; Weekes, B.; delli Carri, A.

    2015-01-01

    Model validation using data from modal tests is now widely practiced in many industries for advanced structural dynamic design analysis, especially where structural integrity is a primary requirement. These industries tend to demand highly efficient designs for their critical structures which, as a result, are increasingly operating in regimes where traditional linearity assumptions are no longer adequate. In particular, many modern structures are found to contain localized areas, often around joints or boundaries, where the actual mechanical behaviour is far from linear. Such structures need to have appropriate representation of these nonlinear features incorporated into the otherwise largely linear models that are used for design and operation. This paper proposes an approach to this task which is an extension of existing linear techniques, especially in the testing phase, involving only just as much nonlinear analysis as is necessary to construct a model which is good enough, or ‘valid’: i.e. capable of predicting the nonlinear response behaviour of the structure under all in-service operating and test conditions with a prescribed accuracy. A short-list of methods described in the recent literature categorized using our framework is given, which identifies those areas in which further development is most urgently required. PMID:26303924

  20. Using remote sensing for validation of a large scale hydrologic and hydrodynamic model in the Amazon

    NASA Astrophysics Data System (ADS)

    Paiva, R. C.; Bonnet, M.; Buarque, D. C.; Collischonn, W.; Frappart, F.; Mendes, C. B.

    2011-12-01

    We present the validation of the large-scale, catchment-based hydrological MGB-IPH model in the Amazon River basin. In this model, physically-based equations are used to simulate the hydrological processes, such as the Penman-Monteith method to estimate evapotranspiration, or the Moore and Clarke infiltration model. A new feature recently introduced in the model is a 1D hydrodynamic module for river routing. It uses the full Saint-Venant equations and a simple floodplain storage model. River and floodplain geometry parameters are extracted from the SRTM DEM using specially developed GIS algorithms that provide catchment discretization, estimation of river cross-section geometry and water storage volume variations in the floodplains. The model was forced using satellite-derived daily rainfall TRMM 3B42, calibrated against discharge data and first validated using daily discharges and water levels from 111 and 69 stream gauges, respectively. Then, we performed a validation against remote-sensing-derived hydrological products, including (i) monthly Terrestrial Water Storage (TWS) anomalies derived from GRACE, (ii) river water levels derived from ENVISAT satellite altimetry data (212 virtual stations from Santos da Silva et al., 2010) and (iii) a multi-satellite monthly global inundation extent dataset at ~25 x 25 km spatial resolution (Papa et al., 2010). Validation against river discharges shows good performance of the MGB-IPH model. For 70% of the stream gauges, the Nash-Sutcliffe efficiency index (ENS) is higher than 0.6, and at Óbidos, close to the Amazon River outlet, ENS equals 0.9 and the model bias equals -4.6%. The largest errors are located in drainage areas outside Brazil; we speculate that this is due to the poor quality of rainfall datasets in these poorly monitored and/or mountainous areas. Validation against water levels shows that the model performs well in the major tributaries. For 60% of virtual stations, ENS is higher than 0.6. But, similarly, largest
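
    As a small illustration of the performance measures quoted above, the sketch below computes the Nash-Sutcliffe efficiency (ENS) and a relative volume bias for a pair of observed and simulated discharge series; the values are placeholders, not MGB-IPH output.

```python
# Hedged sketch: Nash-Sutcliffe efficiency and relative bias for discharge validation.
import numpy as np

def nash_sutcliffe(obs, sim):
    """ENS = 1 - sum((sim-obs)^2) / sum((obs-mean(obs))^2); 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def relative_bias(obs, sim):
    """Percent difference between simulated and observed total volume."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (sim.sum() - obs.sum()) / obs.sum()

obs = [110.0, 150.0, 240.0, 300.0, 220.0, 160.0, 120.0]   # observed discharge (m3/s)
sim = [100.0, 160.0, 230.0, 280.0, 240.0, 150.0, 115.0]   # simulated discharge (m3/s)
print(f"ENS = {nash_sutcliffe(obs, sim):.2f}, bias = {relative_bias(obs, sim):.1f}%")
```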

  1. Transient PVT measurements and model predictions for vessel heat transfer. Part II.

    SciTech Connect

    Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.; Evans, Gregory Herbert; Rice, Steven F.

    2010-07-01

    Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.

  2. Stratospheric Heterogeneous Chemistry and Microphysics: Model Development, Validation and Applications

    NASA Technical Reports Server (NTRS)

    Turco, Richard P.

    1996-01-01

    The objectives of this project are to: define the chemical and physical processes leading to stratospheric ozone change that involve polar stratospheric clouds (PSCs) and the reactions occurring on the surfaces of PSC particles; study the formation processes, and the physical and chemical properties of PSCs, that are relevant to atmospheric chemistry and to the interpretation of field measurements taken during polar stratosphere missions; develop quantitative models describing PSC microphysics and heterogeneous chemical processes; assimilate laboratory and field data into these models; and calculate the extent of chemical processing on PSCs and the impact of specific microphysical processes on polar composition and ozone depletion. During the course of the project, a new coupled microphysics/physical-chemistry/photochemistry model for stratospheric sulfate aerosols and nitric acid and ice PSCs was developed and applied to analyze data collected during NASA's Arctic Airborne Stratospheric Expedition-II (AASE-II) and other missions. In this model, detailed treatments of multicomponent sulfate aerosol physical chemistry, sulfate aerosol microphysics, polar stratospheric cloud microphysics, PSC ice surface chemistry, as well as homogeneous gas-phase chemistry were included for the first time. In recent studies focusing on AASE measurements, the PSC model was used to analyze specific measurements from an aircraft deployment of an aerosol impactor, FSSP, and NO(y) detector. The calculated results are in excellent agreement with observations for particle volumes as well as NO(y) concentrations, thus confirming the importance of supercooled sulfate/nitrate droplets in PSC formation. The same model has been applied to perform a statistical study of PSC properties in the Northern Hemisphere using several hundred high-latitude air parcel trajectories obtained from Goddard. The rates of ozone depletion along trajectories with different meteorological histories are presently

  3. Canyon building ventilation system dynamic model -- Parameters and validation

    SciTech Connect

    Moncrief, B.R.; Chen, F.F.K.

    1993-02-01

    Plant system simulation crosses many disciplines. At the core is the mimicking of key components in the form of mathematical "models." These component models are functionally integrated to represent the plant. With today's low-cost, high-capacity computers, the whole plant can be truly and effectively reproduced in a computer model. Dynamic simulation has its roots in "single loop" design, which is still a common objective in the employment of simulation. The other common objectives are the ability to preview plant operation, to anticipate problem areas, and to test the impact of design options. As plant system complexity increases and our ability to simulate the entire plant grows, the objective to optimize plant system design becomes practical. This shift in objectives from problem avoidance to total optimization by far offers the most rewarding potential. Even a small reduction in bulk materials and space can sufficiently justify the application of this technology. Furthermore, realizing an optimal plant starts from a tight and disciplined design. We believe the assurance required to execute such a design strategy can partly be derived from a plant model. This paper reports on the application of a dynamic model to evaluate the capacity of an existing production plant ventilation system. This study met the practical objectives of capacity evaluation under present and future conditions, and under normal and accidental situations. More importantly, the description of this application, in its methods and its utility, aims to validate the technology of dynamic simulation in the environment of plant system design and safe operation.

  4. An approach to model validation and model-based prediction -- polyurethane foam case study.

    SciTech Connect

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and

  5. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

    SciTech Connect

    Dojcsak, L.; Marriner, J.; /Fermilab

    2010-08-01

    In this study we look at the SALT-II model of Type Ia supernova analysis, which determines the distance moduli based on the known absolute standard candle magnitude of Type Ia supernovae. We examine the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error that is determined from the data. Using the SNANA software package provided for the analysis of Type Ia supernovae, we use a standard Monte Carlo simulation to generate data with known parameters to use as a tool for analyzing the trends in the model based on certain assumptions about the intrinsic error. In order to find the best standard candle model, we try to minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We can estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1. We can use the simulation to estimate the amount of color smearing as indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that we are able to use the RMS distribution in the variables as one method of estimating the correct intrinsic errors needed by the data to obtain the correct results for α and β. We then apply the resultant intrinsic error matrix to the real data and show our results.
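
    The following sketch illustrates, outside of SNANA and with entirely synthetic data and made-up parameter values, the standardization fit the abstract describes: minimize Hubble-diagram residuals over α, β and the absolute magnitude, while iteratively inflating an intrinsic-scatter term until χ² per degree of freedom reaches unity.

```python
# Illustrative standardization fit (not SNANA): minimize Hubble residuals over
# alpha, beta and M, inflating an intrinsic-scatter term until chi^2/dof ~ 1.
# All "data" are synthetic and all numbers are made up.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_sn = 200
z = rng.uniform(0.05, 0.4, n_sn)
x1 = rng.normal(0.0, 1.0, n_sn)               # shape (stretch) parameter
c = rng.normal(0.0, 0.1, n_sn)                # color parameter
sigma_m = np.full(n_sn, 0.10)                 # measurement error on m_B

alpha_true, beta_true, M_true, sig_int_true = 0.14, 3.1, -19.3, 0.12
mu_cosmo = 5.0 * np.log10(3.0e5 * z / 70.0) + 25.0       # crude low-z distance modulus
m_B = (mu_cosmo + M_true - alpha_true * x1 + beta_true * c
       + rng.normal(0.0, np.hypot(sigma_m, sig_int_true)))

def chi2(params, sigma_int):
    alpha, beta, M = params
    mu_obs = m_B - M + alpha * x1 - beta * c
    var = sigma_m**2 + sigma_int**2            # x1 and c errors omitted for brevity
    return np.sum((mu_obs - mu_cosmo)**2 / var)

# Outer loop: raise sigma_int until chi^2 per degree of freedom drops to ~1,
# re-fitting alpha, beta, M at each step.
sigma_int, dof = 0.0, n_sn - 3
for _ in range(30):
    res = minimize(chi2, x0=[0.1, 3.0, -19.0], args=(sigma_int,), method="Nelder-Mead")
    if res.fun / dof <= 1.0:
        break
    sigma_int += 0.01

alpha_fit, beta_fit, M_fit = res.x
print(f"alpha = {alpha_fit:.3f}, beta = {beta_fit:.3f}, "
      f"sigma_int = {sigma_int:.2f}, chi2/dof = {res.fun / dof:.2f}")
```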

  6. Predictive data modeling of human type II diabetes related statistics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Kristina L.; Jaenisch, Holger M.; Handley, James W.; Albritton, Nathaniel G.

    2009-04-01

    During the course of routine Type II diabetes treatment of one of the authors, it was decided to derive predictive analytical Data Models of the daily sampled vital statistics, namely weight, blood pressure, and blood sugar, to determine whether the covariance among the observed variables could yield a descriptive equation-based model or, better still, a predictive analytical model that could forecast the expected future trend of the variables and possibly reduce the number of finger sticks required to monitor blood sugar levels. The personal history and analysis with the resulting models are presented.
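
    One simple, hypothetical form such a covariance-based predictive model could take is an ordinary least-squares forecast of the next day's blood sugar from the current day's vitals; the sketch below uses synthetic data and is not the authors' actual Data Model.

```python
# Hypothetical sketch: forecast tomorrow's blood sugar from today's vitals
# (weight, systolic blood pressure, blood sugar) via ordinary least squares.
# The series are synthetic; this is not the authors' Data Model.
import numpy as np

rng = np.random.default_rng(2)
days = 120
weight = 85.0 + np.cumsum(rng.normal(0, 0.05, days))        # kg
systolic = 130.0 + rng.normal(0, 5, days)                   # mmHg
glucose = (110.0 + 0.8 * (weight - 85.0) + 0.1 * (systolic - 130.0)
           + rng.normal(0, 8, days))                        # mg/dL

# Design matrix: today's vitals predict tomorrow's glucose.
X = np.column_stack([np.ones(days - 1), weight[:-1], systolic[:-1], glucose[:-1]])
y = glucose[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

x_today = np.array([1.0, weight[-1], systolic[-1], glucose[-1]])
print(f"one-step-ahead glucose forecast: {x_today @ coef:.1f} mg/dL")
```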

  7. Using Laboratory Magnetospheres to Develop and Validate Space Weather Models

    NASA Astrophysics Data System (ADS)

    Mauel, M. E.; Garnier, D.; Kesner, J.

    2012-12-01

    Reliable space weather predictions can be used to plan satellite operations, predict radio outages, and protect the electrical transmission grid. While direct observation of the solar corona and satellite measurements of the solar wind give warnings of possible subsequent geomagnetic activity, more accurate and reliable models of how solar fluxes affect the earth's space environment are needed. Recent developments in laboratory magnetic dipoles have yielded well confined high-beta plasmas with intense energetic electron belts similar to magnetospheres. With plasma diagnostics spanning global to small spatial scales and user-controlled experiments, these devices can be used to study current issues in space weather such as fast particle excitation and rapid depolarization events. In levitated dipole experiments, which remove the collisional loss along field lines that normally dominates laboratory dipole plasmas, slow radial convection processes can be observed. We describe ongoing experiments and investigations that (i) control interchange mixing through application of vorticity injection, (ii) make whole-plasma, high-speed images of turbulent plasma dynamics, (iii) simulate nonlinear gyrokinetic dynamics of bounded driven dipole plasma, and (iv) compare laboratory plasma measurements and global convection models. (Figure caption: photographs of the LDX and CTX laboratory magnetospheres, in which trapped plasma and energetic particles are created and studied with a variety of imaging diagnostics, including multiple probes for simultaneous measurements of plasma structures and turbulent mixing.)

  8. NAIRAS aircraft radiation model development, dose climatology, and initial validation

    PubMed Central

    Mertens, Christopher J; Meier, Matthias M; Brown, Steven; Norman, Ryan B; Xu, Xiaojing

    2013-01-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis

  9. NAIRAS aircraft radiation model development, dose climatology, and initial validation.

    PubMed

    Mertens, Christopher J; Meier, Matthias M; Brown, Steven; Norman, Ryan B; Xu, Xiaojing

    2013-10-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis

  10. Validation model for Raman based skin carotenoid detection.

    PubMed

    Ermakov, Igor V; Gellermann, Werner

    2010-12-01

    measurement, and which can be easily removed for subsequent biochemical measurements. Excellent correlation (coefficient R=0.95) is obtained for this tissue site which could serve as a model site for scaled up future validation studies of large populations. The obtained results provide proof that resonance Raman spectroscopy is a valid non-invasive objective methodology for the quantitative assessment of carotenoid antioxidants in human skin in vivo.

  11. Structure Modeling and Validation applied to Source Physics Experiments (SPEs)

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Rowe, C. A.; Patton, H. J.

    2012-12-01

    The U.S. Department of Energy's Source Physics Experiments (SPEs) comprise a series of small chemical explosions used to develop a better understanding of seismic energy generation and wave propagation for low-yield explosions. In particular, we anticipate improved understanding of the processes through which shear waves are generated by the explosion source. Three tests, with yields of 100, 1000 and 1000 kg respectively, were detonated in the same emplacement hole and recorded on the same networks of ground motion sensors in the granites of Climax Stock at the Nevada National Security Site. We present results for the analysis and modeling of seismic waveforms recorded close-in on five linear geophone lines extending radially from ground zero, with offsets from 100 to 2000 m and station spacing of 100 m. These records exhibit azimuthal variations in P-wave arrival times, and in the phase velocity, spreading and attenuation properties of high-frequency Rg waves. We construct a 1D seismic body-wave model starting from a refraction analysis of P-waves and adjusting it to address time-domain and frequency-domain dispersion measurements of Rg waves between 2 and 9 Hz. We address the shallowest part of the structure using the arrival times recorded by near-field accelerometers residing within 200 m of the shot hole. We additionally perform a 2D modeling study with the Spectral Element Method (SEM) to investigate which structural features are most responsible for the observed variations, in particular the anomalously weak amplitude decay in some directions in this topographically complicated locality. We find that a near-surface, thin, weathered layer of varying thickness and low wave speeds plays a major role in the observed waveforms. We anticipate performing full 3D modeling of the seismic near-field through analysis and validation of waveforms on the 5 radial receiver arrays.

  12. Validating the topographic climatology logic of the MTCLIM model

    SciTech Connect

    Glassy, J.M.; Running, S.W.

    1995-06-01

    The topographic climatology logic of the MTCLIM model was validated by comparing modeled air temperatures against remotely sensed, thermal infrared (TIR) surface temperatures from three Daedalus Thematic Mapper Simulator scenes. The TIR data were taken in 1990 near Sisters, Oregon, as part of the NASA OTTER project. The original air temperature calculation method was modified for the spatial context of this study. After stratifying by canopy closure and relative solar loading, r² values of 0.74, 0.89, and 0.97 were obtained for the March, June, and August scenes, respectively, using the modified air temperature algorithm. Consistently lower coefficients of determination were obtained using the original air temperature algorithm on the same data: r² values of 0.70, 0.52, and 0.66 for the March, June, and August samples, respectively. The difficulties of comparing screen-height air temperatures with remotely sensed surface temperatures are discussed, and several ideas for follow-on studies are suggested.
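
    For reference, the validation statistic itself can be computed as sketched below: the coefficient of determination (here taken as the squared Pearson correlation) between modeled air temperatures and TIR-derived surface temperatures within a single canopy-closure/solar-loading stratum. The arrays are synthetic placeholders.

```python
# Minimal sketch of the validation statistic: r^2 (squared Pearson correlation)
# between modeled air temperature and TIR surface temperature within one stratum.
# The arrays below are synthetic placeholders for one canopy/solar-loading class.
import numpy as np

def r_squared(x, y):
    return np.corrcoef(x, y)[0, 1] ** 2

rng = np.random.default_rng(3)
t_surface_tir = rng.uniform(5.0, 30.0, 50)                        # deg C, from the TIR scene
t_air_modeled = 0.9 * t_surface_tir + 1.5 + rng.normal(0, 2, 50)  # deg C, from MTCLIM-like logic
print(f"r^2 = {r_squared(t_surface_tir, t_air_modeled):.2f}")
```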

  13. Strategies for carbohydrate model building, refinement and validation

    PubMed Central

    2017-01-01

    Sugars are the most stereochemically intricate family of biomolecules and present substantial challenges to anyone trying to understand their nomenclature, reactions or branched structures. Current crystallographic programs provide an abstraction layer allowing inexpert structural biologists to build complete protein or nucleic acid model components automatically either from scratch or with little manual intervention. This is, however, still not generally true for sugars. The need for carbohydrate-specific building and validation tools has been highlighted a number of times in the past, concomitantly with the introduction of a new generation of experimental methods that have been ramping up the production of protein–sugar complexes and glycoproteins for the past decade. While some incipient advances have been made to address these demands, correctly modelling and refining carbohydrates remains a challenge. This article will address many of the typical difficulties that a structural biologist may face when dealing with carbohydrates, with an emphasis on problem solving in the resolution range where X-ray crystallography and cryo-electron microscopy are expected to overlap in the next decade. PMID:28177313

  14. Narrowband VLF observations as validation of Plasmaspheric model

    NASA Astrophysics Data System (ADS)

    Collier, Andrew; Clilverd, Mark; Rodger, C. J.; Delport, Brett; Lichtenberger, János

    2012-07-01

    PLASMON is a European Union FP7 project which will use observations of whistlers and field line resonances to construct a data-assimilative model of the plasmasphere. This model will be validated by comparison with electron precipitation data derived from narrowband VLF observations of subionospheric propagation from the AARDDVARK network. A VLF receiver on Marion Island, located at 46.9° S 37.1° E (L = 2.60), is able to observe the powerful NWC transmitter in Australia over a 1.4 < L < 3.0 path which passes exclusively over the ocean. The signal is thus very strong and exhibits an excellent signal-to-noise ratio. Data from the UltraMSK narrowband VLF receiver on Marion Island are used to examine evidence of particle precipitation along this path, thereby inferring the rate at which electrons are scattered into the bounce loss cone. This path covers a small range of L-values, so there is little ambiguity in the source of any perturbations. Perturbations detected on the path during geomagnetic storms should predominantly be responses to energetic electron precipitation processes occurring inside the plasmasphere. Comparisons will be made to preliminary plasmaspheric results from the PLASMON project.

  15. Validated Analytical Model of a Pressure Compensation Drip Irrigation Emitter

    NASA Astrophysics Data System (ADS)

    Shamshery, Pulkit; Wang, Ruo-Qian; Taylor, Katherine; Tran, Davis; Winter, Amos

    2015-11-01

    This work is focused on analytically characterizing the behavior of pressure-compensating drip emitters in order to design low-cost, low-power irrigation solutions appropriate for off-grid communities in developing countries. There are 2.5 billion small-acreage farmers worldwide who rely solely on their land for sustenance. Compared to flood irrigation, drip irrigation leads to up to a 70% reduction in water consumption while increasing yields by 90%, which is important in countries like India that are quickly running out of water. To design a low-power drip system, there is a need to decrease the pumping pressure requirement at the emitters, as pumping power is the product of pressure and flow rate. To design such an emitter efficiently, the fluid-structure interactions that occur in an emitter need to be understood. In this study, a 2D analytical model that captures the behavior of a common drip emitter was developed and validated through experiments. The effects of independently changing the channel depth, channel width, channel length and land height on the performance were studied. The model and the key parametric insights presented have the potential to be optimized in order to guide the design of low-pressure, clog-resistant, pressure-compensating emitters.

  16. Strategies for carbohydrate model building, refinement and validation.

    PubMed

    Agirre, Jon

    2017-02-01

    Sugars are the most stereochemically intricate family of biomolecules and present substantial challenges to anyone trying to understand their nomenclature, reactions or branched structures. Current crystallographic programs provide an abstraction layer allowing inexpert structural biologists to build complete protein or nucleic acid model components automatically either from scratch or with little manual intervention. This is, however, still not generally true for sugars. The need for carbohydrate-specific building and validation tools has been highlighted a number of times in the past, concomitantly with the introduction of a new generation of experimental methods that have been ramping up the production of protein-sugar complexes and glycoproteins for the past decade. While some incipient advances have been made to address these demands, correctly modelling and refining carbohydrates remains a challenge. This article will address many of the typical difficulties that a structural biologist may face when dealing with carbohydrates, with an emphasis on problem solving in the resolution range where X-ray crystallography and cryo-electron microscopy are expected to overlap in the next decade.

  17. Validating a Dynamic Earthquake Model to Produce Realistic Ground Motion

    NASA Astrophysics Data System (ADS)

    Andrews, D. J.; Ma, S.

    2015-12-01

    A dynamic earthquake model is validated by finding good agreement with an empirical ground motion prediction equation. The model replaces detailed deterministic processes on the fault with a stochastic emergent law. Initial stress on a fault plane is heterogeneous with a power-law spectrum that is self-similar. Rupture stops naturally. Rupture extent and moment are determined primarily by the specified lowest Fourier mode of initial stress. Higher modes are random with a self-similar spectrum that is tied to the amplitude of the lowest mode. Ten random realizations are calculated with a velocity structure for a hard rock site. The calculated mean response spectrum for M7 at a distance of 10 km agrees with the GMPE of Boore et al. (2013) within 0.25 of one standard deviation at periods from 0.3 seconds to 10 seconds. The agreement could be improved by using a more refined relation of the spatial stress spectrum to the amplitude of the lowest mode. The standard deviation of the calculated ground motion is somewhat smaller than that of the GMPE, but it depends on other rupture parameters and needs more investigation.
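
    A hedged sketch of the comparison metric follows: the offset of the mean simulated log spectral acceleration from a GMPE median, expressed in units of the GMPE standard deviation at each period. The GMPE medians and sigmas below are placeholders, not the Boore et al. (2013) coefficients.

```python
# Hedged sketch of the comparison metric: offset of the mean simulated response
# spectrum from a GMPE median, in units of the GMPE standard deviation, per period.
# The medians and sigmas are placeholders, not the Boore et al. (2013) coefficients.
import numpy as np

periods = np.array([0.3, 1.0, 3.0, 10.0])                        # seconds
gmpe_median_ln_sa = np.log(np.array([0.30, 0.15, 0.04, 0.008]))  # ln spectral accel. (g), placeholder
gmpe_sigma_ln = np.array([0.65, 0.70, 0.75, 0.80])               # placeholder total sigma

rng = np.random.default_rng(4)
n_realizations = 10
sim_ln_sa = gmpe_median_ln_sa + rng.normal(0.0, 0.2, (n_realizations, periods.size))

misfit_in_sigma = (sim_ln_sa.mean(axis=0) - gmpe_median_ln_sa) / gmpe_sigma_ln
for T, d in zip(periods, misfit_in_sigma):
    print(f"T = {T:5.1f} s: mean offset = {d:+.2f} sigma")
```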

  18. Validation of the REBUS-3/RCT methodologies for EBR-II core-follow analysis

    SciTech Connect

    McKnight, R.D.

    1992-01-01

    One of the many tasks to be completed at EBR-II/FCF (Fuel Cycle Facility) regarding fuel cycle closure for the Integral Fast Reactor (IFR) is to develop and install the systems to be used for fissile material accountancy and control. The IFR fuel cycle and pyrometallurgical process scheme determine the degree of actinide buildup in the reload fuel assemblies. Inventories of curium, americium and neptunium in the fuel will affect the radiation and thermal environmental conditions at the fuel fabrication stations, the chemistry of reprocessing, and the neutronic performance of the core. Thus, it is important that validated calculational tools be put in place for accurately determining isotopic mass and neutronic inputs to FCF for both operational and material control and accountancy purposes. The primary goal of this work is to validate the REBUS-3/RCT codes as tools which can adequately compute the burnup and isotopic distribution in binary- and ternary-fueled Mark-3, Mark-4, and Mark-5 subassemblies. 6 refs.

  19. Adhesion ability of angiotensin II with model membranes.

    PubMed

    Preu, Julia; Tiefenauer, Louis; Gutberlet, Thomas

    2017-02-01

    The octapeptide angiotensin II (Ang II, H2N-Asp-Arg-Val-Tyr-Ile-His-Pro-Phe-COOH) is one of the key players in blood pressure regulation in mammals. Binding predominantly to the angiotensin type 1 and 2 receptors, the hormone is one of several peptide ligands that bind to G protein coupled receptors (GPCRs). The active hormone derives from a high molecular weight precursor that is sequentially cleaved by the proteases renin and the angiotensin converting enzyme (ACE). The chemical nature of the amino acid sequence has an impact on the peptide's behavior in the proximity of membranes, as demonstrated using different membrane model systems and biophysical methods. Applying electrochemical impedance spectroscopy and small angle X-ray scattering, a detailed view of the adhesion of the peptide to model membrane surfaces was obtained. The role of specific amino acids involved in the interaction with the phospholipid head group was investigated and, by studying a truncated version of Ang II, Ang (1-7), the key role of the C-terminal phenylalanine was proven. Truncation of the C-terminal amino acid abolishes the binding of the peptide to the membrane surface. A shift in pH that alters the protonation state of the central histidine residue impairs the adhesion of Ang II.

  20. Parental modelling of eating behaviours: observational validation of the Parental Modelling of Eating Behaviours scale (PARM).

    PubMed

    Palfreyman, Zoe; Haycraft, Emma; Meyer, Caroline

    2015-03-01

    Parents are important role models for their children's eating behaviours. This study aimed to further validate the recently developed Parental Modelling of Eating Behaviours Scale (PARM) by examining the relationships between maternal self-reports on the PARM and the modelling practices exhibited by these mothers during three family mealtime observations. Relationships between observed maternal modelling and maternal reports of children's eating behaviours were also explored. Seventeen mothers with children aged between 2 and 6 years were video recorded at home on three separate occasions whilst eating a meal with their child. Mothers also completed the PARM and the Children's Eating Behaviour Questionnaire, and provided demographic information about themselves and their child. Findings provided validation for all three PARM subscales, which were positively associated with their observed counterparts on the observational coding scheme (PARM-O). The results also indicate that habituation to observations did not change the feeding behaviours displayed by mothers. In addition, observed maternal modelling was significantly related to children's food responsiveness (i.e., their interest in and desire for foods), enjoyment of food, and food fussiness. This study makes three important contributions to the literature. It provides construct validation for the PARM measure and further observational support for maternal modelling being related to lower levels of food fussiness and higher levels of food enjoyment in children. These findings also suggest that maternal feeding behaviours remain consistent across repeated observations of family mealtimes, providing validation for previous research which has used single observations.

  1. Validation of population-based disease simulation models: a review of concepts and methods

    PubMed Central

    2010-01-01

    Background Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results Evidence of model credibility derives from examining: 1) the process of model development, 2) the performance of a model, and 3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility. PMID:21087466

  2. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    PubMed

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
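
    A rough sketch of the envelope-check idea, under simplifying assumptions (an AR(1) observable, a kernel conditional-density estimate, and a crude pointwise bootstrap band standing in for the paper's simultaneous envelope), is given below; it is illustrative only and does not reproduce the paper's construction.

```python
# Illustrative envelope check (not the paper's exact construction): kernel estimate
# of the one-step conditional density of an AR(1) "observable", a crude pointwise
# bootstrap band (a true simultaneous envelope would be wider), and a test of
# whether the parametric AR(1)-Gaussian transition density stays inside the band.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
phi, sigma, n = 0.5, 1.0, 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)

x0 = 0.0                                   # conditioning value x_{t-1} = x0
grid = np.linspace(-3.0, 3.0, 61)
h = 0.3                                    # kernel bandwidth (illustrative)

def cond_density(prev, nxt):
    """Nadaraya-Watson style estimate of p(x_t = grid | x_{t-1} = x0)."""
    w = norm.pdf((prev - x0) / h)
    return np.array([np.sum(w * norm.pdf((g - nxt) / h)) / (h * np.sum(w)) for g in grid])

estimate = cond_density(x[:-1], x[1:])

# Bootstrap band from resampled (x_{t-1}, x_t) pairs.
boot = np.empty((200, grid.size))
for b in range(200):
    idx = rng.choice(n - 1, size=n - 1, replace=True)
    boot[b] = cond_density(x[:-1][idx], x[1:][idx])
lower, upper = np.percentile(boot, [0.5, 99.5], axis=0)

parametric = norm.pdf(grid, loc=phi * x0, scale=sigma)   # AR(1)-Gaussian transition density
inside = bool(np.all((parametric >= lower) & (parametric <= upper)))
print("parametric transition density inside the band:", inside)
```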

  3. Nonparametric model validations for hidden Markov models with applications in financial econometrics

    PubMed Central

    Zhao, Zhibiao

    2011-01-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601

  4. Alaska North Slope Tundra Travel Model and Validation Study

    SciTech Connect

    Harry R. Bader; Jacynthe Guimond

    2006-03-01

    of variability in snow depth cover throughout the period of field experimentation. The amount of change in disturbance indicators was greater in the tundra communities of the Foothills than in those of the Coastal Plain. However, the overall level of change in both community types was less than expected. In Coastal Plain communities, ground hardness and snow slab thickness were found to play an important role in the change in active layer depth and soil moisture as a result of treatment. In the Foothills communities, snow cover had the most influence on active layer depth and soil moisture as a result of treatment. Once certain minimum thresholds for ground hardness, snow slab thickness, and snow depth were attained, it appeared that little or no additive effect was realized regarding increased resistance to disturbance in the tundra communities studied. DNR used the results of this modeling project to set a standard for maximum permissible disturbance from cross-country tundra travel, with the threshold set below the widely accepted standard of Low Disturbance levels (as determined by the U.S. Fish and Wildlife Service). DNR followed the modeling project with a validation study, which seemed to support the field trial conclusions and indicated that the standard set for maximum permissible disturbance exhibits a conservative bias in favor of environmental protection. Finally, DNR established a quick and efficient tool for visual estimation of disturbance to determine when investment in field measurements is warranted. This Visual Assessment System (VAS) seemed to support the plot disturbance measurements taken during the modeling and validation phases of this project.

  5. SPORE/EDRN/PRE-PLCO Ovarian Phase II Validation Study — EDRN Public Portal

    Cancer.gov

    Create a new set of phase II specimens (160 cases with pre-operative bloods representing major histologic types, including 80 early-stage and 80 late-stage cases; 160 controls with benign disease; 480 general population controls; and a small set of serial samples collected either at least 3 months apart but not more than 6 months apart, or between 10 months and no more than 14 months apart, in 40 healthy controls) that will be used to evaluate markers identified in preliminary work. The top 5-10 markers, plus an expanded panel of Luminex markers, will comprise a “working consensus panel” for subsequent analysis in PLCO specimens.

  6. Validation of numerical ground water models used to guide decision making.

    PubMed

    Hassan, Ahmed E

    2004-01-01

    Many sites of ground water contamination rely heavily on complex numerical models of flow and transport to develop closure plans. This complexity has created a need for tools and approaches that can build confidence in model predictions and provide evidence that these predictions are sufficient for decision making. Confidence building is a long-term, iterative process and the author believes that this process should be termed model validation. Model validation is a process, not an end result. That is, the process of model validation cannot ensure acceptable prediction or quality of the model. Rather, it provides an important safeguard against faulty models or inadequately developed and tested models. If model results become the basis for decision making, then the validation process provides evidence that the model is valid for making decisions (not necessarily a true representation of reality). Validation, verification, and confirmation are concepts associated with ground water numerical models that not only do not represent established and generally accepted practices, but there is not even widespread agreement on the meaning of the terms as applied to models. This paper presents a review of model validation studies that pertain to ground water flow and transport modeling. Definitions, literature debates, previously proposed validation strategies, and conferences and symposia that focused on subsurface model validation are reviewed and discussed. The review is general and focuses on site-specific, predictive ground water models used for making decisions regarding remediation activities and site closure. The aim is to provide a reasonable starting point for hydrogeologists facing model validation for ground water systems, thus saving a significant amount of time, effort, and cost. This review is also aimed at reviving the issue of model validation in the hydrogeologic community and stimulating the thinking of researchers and practitioners to develop practical and

  7. The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation

    DTIC Science & Technology

    2015-08-20

    confidence  Use a combination of human-driven data visualization and analysis, automated data analysis, and machine learning to leverage human...model construction and refinement. Our objective is to design and implement a prototype mechanism that learns from data how factors interact in non...create models that describe the world around us. Models help scientists organize their theories and suggest additional experiments to run. Validated

  8. Reservoir modeling of the Phase II Hot Dry Rock System

    SciTech Connect

    Zyvoloski, G.

    1984-01-01

    The Phase II system has been created with a series of hydraulic fracturing experiments at the Fenton Hill Hot Dry Rock site. Experiment 2032, the largest of the fracturing operations, involved injecting 5.6 million gallons (21,200 m³) of water into wellbore EE-2 over the period December 6-9, 1983. The experiment has been modeled using the geothermal simulator FEHM developed at Los Alamos National Laboratory. The modeling effort has produced strong evidence of a large, highly fractured reservoir. Two long-term heat extraction schemes for the reservoir are studied with the model.

  9. Tsunami-HySEA model validation for tsunami current predictions

    NASA Astrophysics Data System (ADS)

    Macías, Jorge; Castro, Manuel J.; González-Vida, José Manuel; Ortega, Sergio

    2016-04-01

    Model ability to compute and predict tsunami flow velocities is of importance in risk assessment and hazard mitigation. Substantial damage can be produced by high-velocity flows, particularly in harbors and bays, even when the wave height is small. Besides, an accurate simulation of tsunami flow velocities and accelerations is fundamental for advancing the study of tsunami sediment transport. These considerations led the National Tsunami Hazard Mitigation Program (NTHMP) to propose a benchmark exercise focused on modeling and simulating tsunami currents. Until recently, few direct measurements of tsunami velocities were available to compare with and validate model results. After Tohoku 2011, many current-meter measurements were made, mainly in harbors and channels. In this work we present part of the contribution made by the EDANYA group from the University of Malaga to the NTHMP workshop organized at Portland (USA), 9-10 February 2015. We have selected three out of the five proposed benchmark problems. Two of them consist of real observed data from the Tohoku 2011 event, one at Hilo Harbour (Hawaii) and the other at Tauranga Bay (New Zealand). The third consists of laboratory experimental data for the inundation of Seaside City in Oregon. Acknowledgements: This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government research project DAIFLUID (MTM2012-38383-C02-01) and Universidad de Málaga, Campus de Excelencia Andalucía TECH. The GPU and multi-GPU computations were performed at the Unit of Numerical Methods (UNM) of the Research Support Central Services (SCAI) of the University of Malaga.

  10. The role of global cloud climatologies in validating numerical models

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN

    1991-01-01

    Reliable estimates of the components of the surface radiation budget are important in studies of ocean-atmosphere interaction, land-atmosphere interaction, ocean circulation and in the validation of radiation schemes used in climate models. The methods currently under consideration must necessarily make certain assumptions regarding both the presence of clouds and their vertical extent. Because of the uncertainties in assumed cloudiness, all these methods involve perhaps unacceptable uncertainties. Here, a theoretical framework that avoids the explicit computation of cloud fraction and the location of cloud base in estimating the surface longwave radiation is presented. Estimates of the global surface downward fluxes and the oceanic surface net upward fluxes were made for four months (April, July, October and January) in 1985 to 1986. These estimates are based on a relationship between cloud radiative forcing at the top of the atmosphere and the surface obtained from a general circulation model. The radiation code is the version used in the UCLA/GLA general circulation model (GCM). The longwave cloud radiative forcing at the top of the atmosphere as obtained from Earth Radiation Budget Experiment (ERBE) measurements is used to compute the forcing at the surface by means of the GCM-derived relationship. This, along with clear-sky fluxes from the computations, yield maps of the downward longwave fluxes and net upward longwave fluxes at the surface. The calculated results are discussed and analyzed. The results are consistent with current meteorological knowledge and explainable on the basis of previous theoretical and observational works; therefore, it can be concluded that this method is applicable as one of the ways to obtain the surface longwave radiation fields from currently available satellite data.

  11. Validation of transport models using additive flux minimization technique

    NASA Astrophysics Data System (ADS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-10-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V&V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V&V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V&V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
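
    The core idea can be illustrated with a toy one-dimensional diffusion problem (not FACETS or DAKOTA): an additional effective diffusivity is parameterized and optimized until the predicted steady-state density profile matches the "experimental" one, and the location where that additional diffusivity turns out to be large indicates where the tested transport model is missing flux. Everything below is a hypothetical stand-in.

```python
# Toy 1-D illustration of the additive-flux idea (not FACETS or DAKOTA): optimize
# an extra effective diffusivity D_add, added on top of the tested model's D, so
# the predicted steady-state density profile matches the "experimental" profile.
# Where D_add comes out large, the tested transport model is missing transport.
import numpy as np
from scipy.optimize import minimize

r = np.linspace(0.0, 1.0, 51)                 # normalized radius
dr = r[1] - r[0]
source = 1.0 - r                              # toy particle source profile
D_model = 0.2 + 0.8 * r**2                    # diffusivity from the tested transport model

def steady_profile(D_total):
    """Toy slab: Gamma(r) = integral of source; Gamma = -D dn/dr; n(edge) = 0."""
    gamma = np.cumsum(source) * dr
    dndr = -gamma / D_total
    return -np.cumsum(dndr[::-1])[::-1] * dr   # integrate inward from the edge

# "Experimental" profile generated with extra edge transport the model lacks.
n_exp = steady_profile(D_model + 0.6 * np.exp(-((r - 0.9) / 0.05) ** 2))

def mismatch(p):
    amp, loc, width = p
    D_add = abs(amp) * np.exp(-((r - loc) / (abs(width) + 1e-3)) ** 2)
    return np.sum((steady_profile(D_model + D_add) - n_exp) ** 2)

res = minimize(mismatch, x0=[0.1, 0.5, 0.1], method="Nelder-Mead")
amp, loc, width = res.x
print(f"additional diffusivity: amplitude {abs(amp):.2f}, centered near r = {loc:.2f}")
```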

  12. Active control strategy on a catenary-pantograph validated model

    NASA Astrophysics Data System (ADS)

    Sanchez-Rebollo, C.; Jimenez-Octavio, J. R.; Carnicero, A.

    2013-04-01

    Dynamic simulation methods have become essential in the design process and control of the catenary-pantograph system, above all since high-speed trains and interoperability criteria are becoming increasingly demanding. This paper presents an original hardware-in-the-loop (HIL) strategy aimed at integrating a multicriteria active control within the catenary-pantograph dynamic interaction. The relevance of HIL control systems applied to the pantograph is undoubtedly increasing due to the recent and more demanding requirements for high-speed railway systems. Since the loss of contact between the catenary and the pantograph leads to arcing and electrical wear, and too-high contact forces cause mechanical wear of both the catenary wires and the strips of the pantograph, not only prescribed but also economic and performance criteria confirm this relevance. Different configurations of the proportional-integral-derivative (PID) controller are proposed and applied to two different plant systems. Since this paper is mainly focused on the control strategy, both plant systems are simulation models, though the methodology is suitable for a laboratory bench. The control strategy involves a multicriteria optimisation of the contact force and the consumption of the energy supplied by the control force; a genetic algorithm has been applied for this purpose. Thus, the PID controller is fitted according to these conflicting objectives and tested within a nonlinear lumped model and a nonlinear finite element model, the latter being validated against the European Standard EN 50318. Finally, certain tests have been carried out in order to analyse the robustness of the control strategy. In particular, the relevance of the plant simulation, the running speed and the instrumentation time delay are studied in this paper.

  13. Childhood behavioural disorders in Ambo district, western Ethiopia. II. Validation of the RQC.

    PubMed

    Tadesse, B; Kebede, D; Tegegne, T; Alem, A

    1999-01-01

    We report here on a study conducted to measure the validity of an Amharic version of the Reporting Questionnaire for Children (RQC), which was used in a survey of childhood behavioural disorders in a predominantly rural district in western Ethiopia. Mothers of 196 children aged 5-15 years who had initially been interviewed with the RQC were re-interviewed by a psychiatrist who was unaware of the RQC status of these children. The re-interview was conducted using a DSM-IV checklist. The study showed that a cut-off point of one or more positive responses to any of the 10 questions on the RQC maximized sensitivity (87.5%) and specificity (65%). The discriminatory power of each item was also computed, and the item dealing with wetting/soiling oneself was found to have the highest ability to distinguish cases from non-cases. The item on abnormal speech was found to have the least discriminating power.
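
    The screening-test arithmetic behind these figures is simple; the sketch below computes sensitivity and specificity for an RQC cutoff of one or more positive items against the psychiatrist's DSM-IV checklist diagnosis, using an illustrative 2x2 table chosen only to be consistent with the reported 87.5%, roughly 65% and N = 196 (it is not the study's actual table).

```python
# Screening-test arithmetic: sensitivity and specificity of the RQC cutoff
# (>= 1 positive item) against the psychiatrist's DSM-IV checklist diagnosis.
# The 2x2 counts are illustrative, chosen only to be consistent with the reported
# 87.5% sensitivity, ~65% specificity and N = 196; they are not the study's data.
def screen_performance(true_pos, false_neg, false_pos, true_neg):
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

sens, spec = screen_performance(true_pos=35, false_neg=5, false_pos=55, true_neg=101)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```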

  14. [Diagnostic validity of attention deficit/hyperactivity disorder: from phenomenology to neurobiology (II)].

    PubMed

    Trujillo-Orrego, Natalia; Ibáñez, Agustín; Pineda, David A

    2012-03-16

    The diagnostic criteria for attention deficit/hyperactivity disorder (ADHD) were defined by the American Psychiatric Association in the Diagnostic and Statistical Manual of Mental Disorders, fourth edition. ADHD is a neuropsychiatric disorder associated with impairments in everyday life and behavioral dysregulation (i.e., inattention, hyperactivity and impulsivity), and it is supported by empirical evidence from clinical, pharmacological, and psychometric studies. Nevertheless, the role of neurobiological impairments in the presentation of the symptoms remains unclear. For this paper, the authors reviewed the Spanish- and English-language literature that supports the neurobiological validity of the disorder, aiming to present evidence associated with its cognitive and behavioral phenotype (e.g., in neuropsychology, electrophysiology, structural and functional magnetic resonance imaging, neurochemistry and genetics). Additionally, an integrative theoretical, clinical and scientific proposal is presented. Finally, the introduction of neurobiological markers as part of the definitive diagnosis is suggested as a starting point for the identification of therapeutic targets.

  15. Modeling short wave radiation and ground surface temperature: a validation experiment in the Western Alps

    NASA Astrophysics Data System (ADS)

    Pogliotti, P.; Cremonese, E.; Dallamico, M.; Gruber, S.; Migliavacca, M.; Morra di Cella, U.

    2009-12-01

    Permafrost distribution in high-mountain areas is influenced by topography (micro-climate) and by the high variability of ground cover conditions. Its monitoring is very difficult due to logistical problems such as accessibility, costs, weather conditions and reliability of instrumentation. For these reasons, physically based modeling of surface rock/ground temperatures (GST) is fundamental for the study of mountain permafrost dynamics. With this awareness, a 1D version of the GEOtop model (www.geotop.org) is tested at several high-mountain sites and its accuracy in reproducing GST and incoming shortwave radiation (SWin) is evaluated using independent field measurements. In order to describe the influence of topography, both flat and near-vertical sites with different aspects are considered. Since the validation of SWin is difficult on steep rock faces (due to the lack of direct measurements) and the validation of GST is difficult on flat sites (due to the presence of snow), the two parameters are validated as independent experiments: SWin only on flat morphologies, GST only on the steep ones. The main purpose is to investigate the effect of: (i) the distance between the driving meteo station location and the simulation point location, (ii) cloudiness, (iii) simulation point aspect, and (iv) winter/summer period. The temporal duration of the model runs varies from 3 years for the SWin experiment to 8 years for the validation of GST. The model parameterization is constant and tuned for a common massive bedrock of crystalline rock like granite. The ground temperature profile is not initialized because rock temperature is measured at only 10 cm depth. A set of 9 performance measures is used for comparing model predictions and observations (including: fractional mean bias (FB), coefficient of residual mass (CMR), mean absolute error (MAE), modelling efficiency (ME), and coefficient of determination (R²)). Results are very encouraging. For both experiments the distance (km) between the location of the driving meteo
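
    A few of the listed agreement statistics can be sketched as below, using common textbook forms (the paper's exact definitions may differ in detail) applied to synthetic daily ground surface temperature series.

```python
# Sketch of a few of the listed agreement statistics between observed and modeled
# ground surface temperature series, using common textbook definitions (the paper's
# exact formulas may differ). The daily series below are synthetic.
import numpy as np

def fractional_bias(obs, mod):
    return 2.0 * (mod.mean() - obs.mean()) / (mod.mean() + obs.mean())

def mean_absolute_error(obs, mod):
    return np.mean(np.abs(mod - obs))

def modelling_efficiency(obs, mod):
    return 1.0 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, mod):
    return np.corrcoef(obs, mod)[0, 1] ** 2

rng = np.random.default_rng(6)
t = np.arange(365)
gst_obs = 5.0 + 8.0 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1.0, t.size)
gst_mod = 5.5 + 7.5 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1.0, t.size)

print(f"FB  = {fractional_bias(gst_obs, gst_mod):+.3f}")
print(f"MAE = {mean_absolute_error(gst_obs, gst_mod):.2f} degC")
print(f"ME  = {modelling_efficiency(gst_obs, gst_mod):.2f}")
print(f"R^2 = {r_squared(gst_obs, gst_mod):.2f}")
```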

  16. Mitigation of turbidity currents in reservoirs with passive retention systems: validation of CFD modeling

    NASA Astrophysics Data System (ADS)

    Ferreira, E.; Alves, E.; Ferreira, R. M. L.

    2012-04-01

    Sediment deposition by continuous turbidity currents may affect eco-environmental river dynamics in natural reservoirs and hinder the maneuverability of bottom discharge gates in dam reservoirs. In recent years, innovative techniques have been proposed to force the deposition of turbidity currents further upstream in the reservoir (and away from the dam), namely the use of solid and permeable obstacles such as water jet screens and geotextile screens. The main objective of this study is to validate a computational fluid dynamics (CFD) code applied to the simulation of the interaction between a turbidity current and a passive retention system designed to induce sediment deposition. To accomplish the proposed objective, laboratory tests were conducted in which a simple obstacle configuration was subjected to the passage of currents with different initial sediment concentrations. The experimental data were used to build benchmark cases to validate the 3D CFD software ANSYS-CFX. Sensitivity tests of mesh design, turbulence models and discretization requirements were performed. The validation consisted of comparing experimental and numerical results, involving instantaneous and time-averaged sediment concentrations and velocities. In general, a good agreement between the numerical and the experimental values is achieved when: (i) realistic outlet conditions are specified, (ii) channel roughness is properly calibrated, (iii) two-equation k-ε models are employed, and (iv) a fine mesh is employed near the bottom boundary. Acknowledgements: This study was funded by the Portuguese Foundation for Science and Technology through the project PTDC/ECM/099485/2008. The first author thanks Professor Moitinho de Almeida from ICIST for his assistance, as well as all members of the project and of the Fluvial Hydraulics group of CEHIDRO.

  17. Validation of Community Models: Identifying Events in Space Weather Model Timelines

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    I develop and document a set of procedures which test the quality of predictions of solar wind speed and polarity of the interplanetary magnetic field (IMF) made by coupled models of the ambient solar corona and heliosphere. The Wang-Sheeley-Arge (WSA) model is used to illustrate the application of these validation procedures. I present an algorithm which detects transitions of the solar wind from slow to high speed. I also present an algorithm which processes the measured polarity of the outward directed component of the IMF. This removes high-frequency variations to expose the longer-scale changes that reflect IMF sector changes. I apply these algorithms to WSA model predictions made using a small set of photospheric synoptic magnetograms obtained by the Global Oscillation Network Group as input to the model. The results of this preliminary validation of the WSA model (version 1.6) are summarized.
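
    The two processing steps named above can be sketched as follows, with illustrative thresholds (a 500 km/s speed threshold and a 2-day smoothing window) that are assumptions rather than the paper's values: flag upward crossings of the speed threshold as slow-to-fast transitions, and smooth the IMF polarity with a running mean so that only longer-lived sector changes remain.

```python
# Sketch of the two processing steps: (1) flag slow-to-fast solar wind transitions
# with a speed threshold, and (2) smooth IMF polarity with a running mean so only
# longer-lived sector changes survive. The 500 km/s threshold and 2-day window are
# illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(7)
hours = np.arange(27 * 24)                                         # one solar rotation, hourly
speed = 350.0 + 250.0 * (np.sin(2 * np.pi * hours / (27 * 24)) > 0.3) + rng.normal(0, 20, hours.size)
polarity = np.sign(np.sin(2 * np.pi * hours / (13.5 * 24)) + rng.normal(0, 0.4, hours.size))

# (1) Slow-to-fast transitions: upward crossings of 500 km/s.
fast = speed > 500.0
transitions = np.where(fast[1:] & ~fast[:-1])[0] + 1
print("slow-to-fast transitions at hours:", transitions[:10])

# (2) Running mean (2-day window) of polarity, then take its sign to expose sectors.
window = 48
smoothed = np.sign(np.convolve(polarity, np.ones(window) / window, mode="same"))
sector_changes = np.where(smoothed[1:] != smoothed[:-1])[0] + 1
print("sector boundary crossings at hours:", sector_changes[:10])
```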

  18. Simplified modeling of the EBR-II control rods

    SciTech Connect

    Angelo, P.L.

    1995-06-25

    Simplified models of EBR-II control and safety rods have been developed for core modeling under various operational and shutdown conditions. A parametric study was performed on normal worth, high worth, and safety rod type control rods. A summary of worth changes due to individual modeling approximations is tabulated. Worth effects due to structural modeling simplification are negligible. Fuel region homogenization and burnup compression contributes more than any other factor. Reference case C/E values (ratio of calculated worth from detailed model to measured worth) of 1.072 and 1.142 for safety and normal worth rods indicate acceptable errors when the approximations are used. Fuel burnup effect illustrates rod worth sensitivity to the modeling approximation. Aggregate effects are calculated under a reduced mesh.

  19. Comprehensive Modeling of the Apache with CAMRAD II

    NASA Technical Reports Server (NTRS)

    Jones, Henry E.; Kunz, Donald L.

    2001-01-01

    This paper presents a report of a multi-year study of the U.S. Army LONGBOW APACHE (AH-64D) aircraft. The goals of this study were to provide the Apache Project Manager's Office (PMO) with a broad spectrum of calibrated comprehensive and CFD models of the AH-64D aircraft. The goal of this paper is to present an overview of the comprehensive model which has been developed. The CAMRAD II computer code was chosen to complete this task. The paper first discusses issues that must be addressed when modeling the Apache using CAMRAD. The work required the acquisition of a database for the aircraft and the development and application of a multidisciplinary computer model. Sample results from various parts of the model are presented. Conclusions with regard to the strengths and weaknesses of simulations based on this model are discussed.

  20. Comprehensive Modeling of the Apache With CAMRAD II

    NASA Technical Reports Server (NTRS)

    Jones, Henry E.; Kunz, Donald L.

    2001-01-01

    This paper presents a report of a multi-year study of the U.S. Army LONGBOW APACHE (AH-64D) aircraft. The goals of this study were to provide the Apache Project Manager's Office (PMO) with a broad spectrum of calibrated comprehensive and CFD models of the AH-64D aircraft. The goal of this paper is to present an overview of the comprehensive model which has been developed. The CAMRAD II computer code was chosen to complete this task. The paper first discusses issues that must be addressed when modeling the Apache using CAMRAD. The work required the acquisition of a database for the aircraft and the development and application of a multidisciplinary computer model. Sample results from various parts of the model are presented. Conclusions with regard to the strengths and weaknesses of simulations based on this model are discussed.

  1. Geoid model computation and validation over Alaska/Yukon

    NASA Astrophysics Data System (ADS)

    Li, X.; Huang, J.; Roman, D. R.; Wang, Y.; Veronneau, M.

    2012-12-01

    The Alaska and Yukon area consists of very complex and dynamic geology. It features the two highest mountains in North America, Mount McKinley (20,320 ft) in Alaska, USA and Mount Logan (19,541 ft) in Yukon, Canada, along with the Alaska trench along the plate boundaries. On the one hand, this complex geology gives rise to large horizontal geoid gradients across this area. On the other hand, geoid time variation is much stronger than in most other areas of the world due to tectonic movement, post-glacial rebound and ice melting effects in this region. This type of geology poses great challenges for the determination of the North American geoid over this area, which demands proper gravity data coverage in both space and time on both the Alaska and Yukon sides. However, the coverage of the local gravity data is inhomogeneous in this area. The terrestrial gravity data are sparse in Alaska and span a century in time. In contrast, the terrestrial gravity data are relatively well distributed in Yukon but with data gaps. In this paper, various new satellite models along with the newly acquired airborne data will be incorporated to augment the middle-to-long wavelength geoid components. Initial tests show clear geoid improvements at the local GPS benchmarks in the Yukon area after crustal motion is accounted for. Similar approaches will be employed on the Alaska side for a better validation to determine a continuous vertical datum across the US and Canada.

  2. Validation and Application of Concentrated Cesium Eluate Physical Property Models

    SciTech Connect

    Choi, A.S.

    2004-03-18

    This work had two objectives: (1) to verify the mathematical equations developed for the physical properties of concentrated cesium eluate solutions against experimental test results obtained with simulated feeds, and (2) to estimate the physical properties of the radioactive AW-101 cesium eluate at saturation using the validated models. The Hanford River Protection Project (RPP) Waste Treatment and Immobilization Plant (WTP) is currently being built to extract radioisotopes from the vast inventory of Hanford tank wastes and immobilize them in a silicate glass matrix for eventual disposal at a geological repository. The baseline flowsheet for the pretreatment of supernatant liquid wastes includes removal of cesium using regenerative ion-exchange resins. The loaded cesium ion-exchange columns will be eluted with nitric acid, nominally at 0.5 molar, and the resulting eluate solution will be concentrated in a forced-convection evaporator to reduce the storage volume and to recover the acid for reuse. The reboiler pot is initially charged with a concentrated nitric acid solution and kept under a controlled vacuum during feeding so that the pot contents boil at 50 degrees Celsius. The liquid level in the pot is maintained constant by controlling both the feed and boilup rates. The feeding will continue with no bottom removal until the solution in the pot reaches the target endpoint of 80 percent saturation with respect to any one of the major salt species present.

  3. On the verification and validation of detonation models

    NASA Astrophysics Data System (ADS)

    Quirk, James

    2013-06-01

    This talk will consider the verification and validation of detonation models, such as Wescott-Stewart-Davis (Journal of Applied Physics, 2005), from the perspective of the American Institute of Aeronautics and Astronautics policy on numerical accuracy (AIAA J., Vol. 36, No. 1, 1998). A key aspect of the policy is that accepted documentation procedures must be used for journal articles, with the aim of allowing the reported work to be reproduced by the interested reader. With the rise of electronic documents since the policy was formulated, it is now possible to satisfy this mandate in its strictest sense: that is, it is now possible to run a computational verification study directly in a PDF, thereby allowing a technical author to report numerical subtleties that traditionally have been ignored. The motivation for this document-centric approach is discussed elsewhere (Quirk 2003, Adaptive Mesh Refinement Theory and Practice, Springer), leaving the talk to concentrate on specific detonation examples that should be of broad interest to the shock-compression community.

  4. An independent verification and validation of the Future Theater Level Model conceptual model

    SciTech Connect

    Hartley, D.S. III; Kruse, K.L.; Martellaro, A.J.; Packard, S.L.; Thomas, B. Jr.; Turley, V.K.

    1994-08-01

    This report describes the methodology and results of independent verification and validation performed on a combat model in its design stage. The combat model is the Future Theater Level Model (FTLM), under development by The Joint Staff/J-8. J-8 has undertaken its development to provide an analysis tool that addresses the uncertainties of combat more directly than previous models and yields more rapid study results. The methodology adopted for this verification and validation consisted of document analyses, including detailed examination of the FTLM design documents (at all stages of development), the FTLM Mission Needs Statement, and selected documentation for other theater-level combat models. These documents were compared to assess the FTLM as to its design stage, its purpose as an analytical combat model, and its capabilities as specified in the Mission Needs Statement. The conceptual design passed those tests. The recommendations included specific modifications as well as a recommendation for continued development. The methodology is significant because independent verification and validation have not previously been reported as being performed on a combat model in its design stage. The results are significant because The Joint Staff/J-8 will be using the recommendations from this study in determining whether to proceed with development of the model.

  5. Atomic Data and Spectral Model for Fe II

    NASA Astrophysics Data System (ADS)

    Bautista, Manuel A.; Fivet, Vanessa; Ballance, Connor; Quinet, Pascal; Ferland, Gary; Mendoza, Claudio; Kallman, Timothy R.

    2015-08-01

    We present extensive calculations of radiative transition rates and electron impact collision strengths for Fe II. The data sets involve 52 levels from the 3d^7, 3d^6 4s, and 3d^5 4s^2 configurations. Computations of A-values are carried out with a combination of state-of-the-art multiconfiguration approaches, namely the relativistic Hartree-Fock, Thomas-Fermi-Dirac potential, and Dirac-Fock methods, while the R-matrix plus intermediate coupling frame transformation, Breit-Pauli R-matrix, and Dirac R-matrix packages are used to obtain collision strengths. We examine the advantages and shortcomings of each of these methods, and estimate rate uncertainties from the resulting data dispersion. We proceed to construct excitation balance spectral models, and compare the predictions from each data set with observed spectra from various astronomical objects. We are thus able to establish benchmarks in the spectral modeling of [Fe II] emission in the IR and optical regions as well as in the UV Fe II absorption spectra. Finally, we provide diagnostic line ratios and line emissivities for emission spectroscopy as well as column densities for absorption spectroscopy. All atomic data and models are available online and through the AtomPy atomic data curation environment.

  6. Validation of the integration of CFD and SAS4A/SASSYS-1: Analysis of EBR-II shutdown heat removal test 17

    SciTech Connect

    Thomas, J. W.; Fanning, T. H.; Vilim, R.; Briggs, L. L.

    2012-07-01

    Recent analyses have demonstrated the need to model multidimensional phenomena, particularly thermal stratification in outlet plena, during safety analyses of loss-of-flow transients of certain liquid-metal cooled reactor designs. Therefore, Argonne's reactor systems safety code SAS4A/SASSYS-1 is being enhanced by integrating 3D computational fluid dynamics models of the plena. A validation exercise of the new tool is being performed by analyzing the protected loss-of-flow event demonstrated by the EBR-II Shutdown Heat Removal Test 17. In this analysis, the behavior of the coolant in the cold pool is modeled using the CFD code STAR-CCM+, while the remainder of the cooling system and the reactor core are modeled with SAS4A/SASSYS-1. This paper summarizes the code integration strategy and provides the predicted 3D temperature and velocity distributions inside the cold pool during SHRT-17. The results of the coupled analysis should be considered preliminary at this stage, as the exercise pointed to the need to improve the CFD model of the cold pool tank. (authors)
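    A schematic of the kind of explicit code-coupling loop described above is sketched below. The systems_code and plenum_cfd objects are placeholders standing in for the 1-D systems code and the 3-D cold-pool model; this is not the actual SAS4A/SASSYS-1 and STAR-CCM+ interface.

```python
# Schematic of an explicit domain-coupling loop of the kind described above.
# systems_code and plenum_cfd are duck-typed placeholder objects, not the actual
# SAS4A/SASSYS-1 or STAR-CCM+ interfaces.
def couple(systems_code, plenum_cfd, t_end, dt):
    t = 0.0
    while t < t_end:
        # 1-D systems code advances the core and loops, handing inlet conditions to the CFD domain
        inlet_flow, inlet_temp = systems_code.advance(dt)
        # 3-D CFD model of the cold pool advances with those boundary conditions
        outlet_temp = plenum_cfd.advance(dt, inlet_flow, inlet_temp)
        # CFD outlet state is fed back as the systems-code boundary condition for the next step
        systems_code.set_plenum_outlet(outlet_temp)
        t += dt
```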

  7. Integral Reactor Containment Condensation Model and Experimental Validation

    SciTech Connect

    Wu, Qiao; Corradini, Michael

    2016-05-02

    This NEUP-funded project, NEUP 12-3630, comprises experimental, numerical, and analytical studies of high-pressure steam condensation phenomena in a steel containment vessel connected to a water cooling tank, carried out at Oregon State University (OrSU) and the University of Wisconsin at Madison (UW-Madison). During the three-year investigation, the tasks planned in the original proposal were completed: (1) Performed a scaling study for the full-pressure test facility applicable to the reference design for the condensation heat transfer process during design basis accidents (DBAs), modified the existing test facility to route the steady-state secondary steam flow into the high-pressure containment for controllable condensation tests, and extended the operations to negative gage pressure conditions (OrSU). (2) Conducted a series of DBA and quasi-steady experiments using the full-pressure test facility to provide a reliable high-pressure condensation database (OrSU). (3) Analyzed experimental data, evaluated the condensation model for the experimental conditions, and predicted the prototypic containment performance under accident conditions (UW-Madison). A film flow model was developed for the scaling analysis, and the results suggest that the 1/3-scale test facility covers a large portion of the laminar film flow regime, leading to a lower average heat transfer coefficient compared to the prototypic value. Although this is conservative in reactor safety analysis, the significant reduction of the heat transfer coefficient (50%) could underestimate the prototypic condensation heat transfer rate, resulting in inaccurate prediction of the decay heat removal capability. Further investigation is thus needed to quantify the scaling distortion for safety analysis code validation. Experimental investigations were performed in the existing MASLWR test facility at OrSU with minor modifications. A total of 13 containment condensation tests were conducted for pressure
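    For readers unfamiliar with the laminar-film baseline that such scaling analyses start from, the sketch below evaluates the classical Nusselt expression for the average condensation heat transfer coefficient on a vertical surface. The property values are rough numbers for saturated water near atmospheric pressure, not data from the project, and this is not the project's film flow model.

```python
# Classical Nusselt result for laminar film condensation on a vertical surface, shown
# only as the laminar-film baseline referenced in such scaling analyses.
def nusselt_film_h(rho_l, rho_v, k_l, mu_l, h_fg, dT, L, g=9.81):
    """Average condensation heat transfer coefficient (W/m^2-K) over a vertical surface of height L (m)."""
    return 0.943 * ((g * rho_l * (rho_l - rho_v) * h_fg * k_l**3)
                    / (mu_l * dT * L)) ** 0.25

# Saturated water near 1 atm condensing on a wall 10 K below saturation (rough property values)
h = nusselt_film_h(rho_l=958.0, rho_v=0.6, k_l=0.68, mu_l=2.8e-4, h_fg=2.257e6, dT=10.0, L=1.0)
print(f"h ~ {h:.0f} W/m^2-K")
```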

  8. Gas dynamics modeling of the HYLIFE-II reactor

    SciTech Connect

    Jantzen, C.

    1995-08-01

    Gas dynamics in the IFE reactor HYLIFE-II is modeled using the code TSUNAMI, a 2-D shock solver that uses the Godunov method with operator splitting. Results from a cylindrically symmetric simulation indicate that an initial, low-density burst of high-energy particles enters the final focus transport lens within 40 microseconds after the blast, much faster than the proposed 1 millisecond shutter closing time. After approximately 100 microseconds the chamber debris flux levels off to one eighth of its peak value and maintains this intensity until the shutter closes. Although initial protective jet ablation is considered, neither secondary radiation nor condensation is modeled. Therefore, the results are conservative.

  9. An integrated model of the TOPAZ-II electromagnetic pump

    SciTech Connect

    El-Genk, M.S.; Paramonov, D.V. (Inst. of Space Nuclear Power Studies)

    1994-11-01

    A detailed model of the electromagnetic pump of the TOPAZ-II space nuclear reactor power system is developed and compared with experimental data. The magnetic field strength in the pump depends not only on the current supplied by the pump thermionic fuel elements in the reactor core but also on the temperature of the coolant, the magnetic coil, and the pump structure. All electric and thermal properties of the coolant, wall material of the pump ducts, and electric leads are taken to be temperature dependent. The model predictions are in good agreement with experimental data.

  10. A Validity-Based Model for the Evaluation of a Criterion-Referenced Test.

    ERIC Educational Resources Information Center

    Schattgen, Sharon; And Others

    This paper describes a model for the evaluation and approval of a test battery for compliance with a midwestern state law mandating criterion-referenced testing of specific objectives. Standards specifying that the test scores must demonstrate content validity and criterion-related validity form the foundation of the model. The model also…

  11. Experiments for calibration and validation of plasticity and failure material modeling: 304L stainless steel.

    SciTech Connect

    Lee, Kenneth L.; Korellis, John S.; McFadden, Sam X.

    2006-01-01

    Experimental data for material plasticity and failure model calibration and validation were obtained from 304L stainless steel. Model calibration data were taken from smooth tension, notched tension, and compression tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path dependent combinations of internal pressure, extension, and torsion.

  12. The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation

    DTIC Science & Technology

    2015-05-20

    sensor are faulty by learning how the actual temperature and thermostat setting relate to each other and then detecting inconsistencies. The second ... visualization and analysis, automated data analysis, and machine learning to leverage human expertise in model building with automated analyses of ... design and implement a prototype mechanism that learns from data how factors interact in non-trivial ways in scientific models. Data validation

  13. Structural Vibration Modeling and Validation. Modeling Uncertainty and Stochastic Control for Structural Control

    DTIC Science & Technology

    2005-12-30

    decade or so, there has been increasing interest in probabilistic, or stochastic, robust control theory. Monte Carlo simulation methods have been... AFRL-VS-PS-TR-2005-1174, Structural Vibration Modeling & Validation: Modeling Uncertainty and Stochastic Control for Structural Control, Dr. Vít Babuška, Dr. Delano Carter, and Dr. Steven Lane, 30 December 2005, Final Report

  14. Validation of Improved Research Integration Tool (IMPRINT) Driving Model for Workload Analysis

    DTIC Science & Technology

    2004-04-01

    is a large contributor to mental workload (Mitchell, Samms, Henthorn, and Wojciechowski, 2003). Allocating additional functions to the driver... regulation 5-11 (AR 5-11) states, "Validation is the process of determining the extent to which the M&S (model and simulation) adequately represents... second component of validation is output validation. This type of validation compares the output of the M&S to the perceived real world. The

  15. The Development and Validation of a New Land Surface Model for Regional and Global Climate Modeling

    NASA Astrophysics Data System (ADS)

    Lynch-Stieglitz, Marc

    1995-11-01

    A new land-surface scheme intended for use in mesoscale and global climate models has been developed and validated. The ground scheme consists of 6 soil layers; diffusion and a modified tipping-bucket model govern heat and water flow, respectively. A 3-layer snow model has been incorporated into a modified BEST vegetation scheme. TOPMODEL equations and Digital Elevation Model data are used to generate baseflow that supports lowland saturated zones. Soil moisture heterogeneity represented by saturated lowlands subsequently impacts watershed evapotranspiration, the partitioning of surface fluxes, and the development of the storm hydrograph. Five years of meteorological and hydrological data from the Sleepers River watershed, located in the eastern highlands of Vermont where winter snow cover is significant, were then used to drive and validate the new scheme. Site validation data were sufficient to evaluate model performance with regard to various aspects of the watershed water balance, including snowpack growth/ablation, the spring snowmelt hydrograph, storm hydrographs, and the seasonal development of watershed evapotranspiration and soil moisture. By including topographic effects, not only are the main spring hydrographs and individual storm hydrographs adequately resolved, but the mechanisms generating runoff are consistent with current views of hydrologic processes. The seasonal movement of the mean water table depth and the saturated area of the watershed are consistent with site data, and the overall model hydroclimatology, including the surface fluxes, seems reasonable.

  16. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    NASA Technical Reports Server (NTRS)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15 to NOAA-18 and NOAA-19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfalls measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams, where the 183-WSL
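    The POD, FAR, and HK indices quoted above are standard dichotomous verification scores; the sketch below computes them from a 2x2 contingency table of satellite rain/no-rain flags against radar, using toy inputs rather than the NIMROD dataset.

```python
import numpy as np

# Dichotomous (rain / no-rain) verification scores of the kind quoted above,
# computed from a 2x2 contingency table of satellite retrieval vs. ground radar.
def categorical_scores(retrieved_rain, radar_rain):
    retrieved_rain = np.asarray(retrieved_rain, bool)
    radar_rain = np.asarray(radar_rain, bool)
    hits = np.sum(retrieved_rain & radar_rain)
    false_alarms = np.sum(retrieved_rain & ~radar_rain)
    misses = np.sum(~retrieved_rain & radar_rain)
    corr_neg = np.sum(~retrieved_rain & ~radar_rain)
    pod = hits / (hits + misses)                     # probability of detection
    far = false_alarms / (hits + false_alarms)       # false alarm ratio
    pofd = false_alarms / (false_alarms + corr_neg)  # probability of false detection
    hk = pod - pofd                                  # Hanssen-Kuipers discriminant
    return pod, far, hk

# Toy example
pod, far, hk = categorical_scores([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 1, 0])
print(f"POD={pod:.2f}, FAR={far:.2f}, HK={hk:.2f}")
```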

  17. Second-Moment RANS Model Verification and Validation Using the Turbulence Modeling Resource Website (Invited)

    NASA Technical Reports Server (NTRS)

    Eisfeld, Bernhard; Rumsey, Chris; Togiti, Vamshi

    2015-01-01

    The implementation of the SSG/LRR-omega differential Reynolds stress model into the NASA flow solvers CFL3D and FUN3D and the DLR flow solver TAU is verified by studying the grid convergence of the solution of three different test cases from the Turbulence Modeling Resource Website. The model's predictive capabilities are assessed based on four basic and four extended validation cases also provided on this website, involving attached and separated boundary layer flows, effects of streamline curvature and secondary flow. Simulation results are compared against experimental data and predictions by the eddy-viscosity models of Spalart-Allmaras (SA) and Menter's Shear Stress Transport (SST).
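    Grid-convergence studies of this kind typically report an observed order of accuracy via Richardson extrapolation; the sketch below shows that standard calculation for a scalar output on three systematically refined grids, using made-up values rather than results from the cited solvers.

```python
import math

# Observed order of accuracy from solutions on three systematically refined grids,
# the standard check behind grid-convergence (verification) studies.
# f1/f2/f3: fine/medium/coarse values of a scalar output (e.g. a drag coefficient);
# r: constant grid refinement ratio.
def observed_order(f1, f2, f3, r=2.0):
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    f_rx = f1 + (f1 - f2) / (r**p - 1.0)   # Richardson-extrapolated estimate
    return p, f_rx

# Made-up drag coefficients on fine/medium/coarse grids
p, f_rx = observed_order(f1=0.02841, f2=0.02855, f3=0.02912)
print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_rx:.5f}")
```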

  18. Distinguishing 'new' from 'old' organic carbon in reclaimed coal mine sites using thermogravimetry: II. Field validation

    SciTech Connect

    Maharaj, S.; Barton, C.D.; Karathanasis, T.A.D.; Rowe, H.D.; Rimmer, S.M.

    2007-04-15

    Thermogravimetry was used under laboratory conditions to differentiate 'new' and 'old' organic carbon (C) by using grass litter, coal, and limestone to represent the different C fractions. Thermogravimetric and derivative thermogravimetry curves showed pyrolysis peaks at distinctively different temperatures, with the peak for litter occurring at 270 to 395°C, for coal at 415 to 520°C, and for limestone at 700 to 785°C. To validate this method in a field setting, we studied four reforested coal mine sites in Kentucky representing a chronosequence since reclamation: 0 and 2 years, located at Bent Mountain, and 3 and 8 years, located at the Starfire mine. A nonmined mature (approximately 80 years old) stand at Robinson Forest, Kentucky, was selected as a reference location. Results indicated a general peak increase in the 270 to 395°C region with increased time, signifying an increase in the 'new' organic matter (OM) fraction. For the Bent Mountain site, the OM fraction increased from 0.03 to 0.095% between years 0 and 2, whereas the Starfire site showed an increase from 0.095 to 1.47% between years 3 and 8. This equates to a C sequestration rate of 2.92 Mg ha^-1 yr^-1 for 'new' OM in the upper 10-cm layer during the 8 years of reclamation on eastern Kentucky reclaimed coal mine sites. Results suggest that stable isotopes and elemental data can be used as proxy tools for qualifying soil organic C (SOC) changes over time on the reclaimed coal mine sites but cannot be used to determine the exact SOC accumulation rate. However, the results suggest that the thermogravimetric and derivative thermogravimetry methods can be used to quantify SOC accumulation and have the potential to be a more reliable, cost-effective, and rapid means of determining the new organic C fraction in mixed geological material, especially in areas dominated by coal and carbonate materials.

  19. A MODEL STUDY OF TRANSVERSE MODE COUPLING INSTABILITY AT NATIONAL SYNCHROTRON LIGHT SOURCE-II (NSLS-II).

    SciTech Connect

    BLEDNYKH, A.; WANG, J.M.

    2005-05-15

    The vertical impedances of the preliminary designs of the National Synchrotron Light Source II (NSLS-II) Mini Gap Undulators (MGU) are calculated by means of the GdfidL code. The Transverse Mode Coupling Instability (TMCI) thresholds corresponding to these impedances are estimated using an analytically solvable model.

  20. Dynamical modeling of elliptical galaxies. II. numerical prolate models

    SciTech Connect

    Lake, G.

    1981-01-01

    The analytical solutions of Paper I are generalized using the self-consistent field method. These prolate models are constructed using only two integrals of motion, the energy (E) and the angular momentum about the axis of symmetry (L_z). They are the first models with flattening greater than E4 which possess elliptical isophotes and realistic density profiles. The singularity in the surface brightness which characterized the models of Paper I has been removed by smoothing the extreme suppression of L_z. The new models (like those of Paper I) still show a sharp rise in the velocity dispersion at the center. This feature is due to the strongly anisotropic velocity dispersions, rather than the existence of a supermassive object.

  1. Importance of Sea Ice for Validating Global Climate Models

    NASA Technical Reports Server (NTRS)

    Geiger, Cathleen A.

    1997-01-01

    Reproduction of current day large-scale physical features and processes is a critical test of global climate model performance. Without this benchmark, prognoses of future climate conditions are at best speculation. A fundamental question relevant to this issue is: which processes and observations are both robust and sensitive enough to be used for model validation, and furthermore, are they also indicators of the problem at hand? In the case of global climate, one of the problems at hand is to distinguish between anthropogenic and naturally occurring climate responses. The polar regions provide an excellent testing ground to examine this problem because few humans make their livelihood there, such that anthropogenic influences in the polar regions usually spawn from global redistribution of a source originating elsewhere. Concomitantly, polar regions are one of the few places where responses to climate are non-anthropogenic. Thus, if an anthropogenic effect has reached the polar regions (e.g. the case of upper atmospheric ozone sensitivity to CFCs), it has most likely had an impact globally but is more difficult to sort out from local effects in areas where anthropogenic activity is high. Within this context, sea ice has served as both a monitoring platform and sensitivity parameter of polar climate response since the time of Fridtjof Nansen. Sea ice resides in the polar regions at the air-sea interface such that changes in either the global atmospheric or oceanic circulation set up complex non-linear responses in sea ice which are uniquely determined. Sea ice currently covers a maximum of about 7% of the earth's surface but was completely absent during the Jurassic Period and far more extensive during the various ice ages. It is also geophysically very thin (typically <10 m in the Arctic, <3 m in the Antarctic) compared to the troposphere (roughly 10 km) and deep ocean (roughly 3 to 4 km). Because of these unique conditions, polar researchers regard sea ice as one of the

  2. MATSIM -The Development and Validation of a Numerical Voxel Model based on the MATROSHKA Phantom

    NASA Astrophysics Data System (ADS)

    Beck, Peter; Rollet, Sofia; Berger, Thomas; Bergmann, Robert; Hajek, Michael; Latocha, Marcin; Vana, Norbert; Zechner, Andrea; Reitz, Guenther

    The AIT Austrian Institute of Technology coordinates the project MATSIM (MATROSHKA Simulation) in collaboration with the Vienna University of Technology and the German Aerospace Center. The aim of the project is to develop a voxel-based model of the MATROSHKA anthropomorphic torso used at the International Space Station (ISS) as a foundation to perform Monte Carlo high-energy particle transport simulations for different irradiation conditions. Funded by the Austrian Space Applications Programme (ASAP), MATSIM is a co-investigation with the European Space Agency (ESA) ELIPS project MATROSHKA, an international collaboration of more than 18 research institutes and space agencies from all over the world, under the science and project lead of the German Aerospace Center. The MATROSHKA facility is designed to determine the radiation exposure of an astronaut onboard the ISS and especially during an extravehicular activity. The numerical model developed in the frame of MATSIM is validated by reference measurements. In this report we give an overview of the model development and compare photon and neutron irradiations of the detector-equipped phantom torso with Monte Carlo simulations using FLUKA. Exposure to Co-60 photons was realized in the standard irradiation laboratory at Seibersdorf, while investigations with neutrons were performed at the thermal column of the Vienna TRIGA Mark-II reactor. The phantom was loaded with passive thermoluminescence dosimeters. In addition, first results of the calculated dose distribution within the torso are presented for a simulated exposure in low-Earth orbit.

  3. Models of TCP in high-BDP environments and their experimental validation

    SciTech Connect

    Vardoyan, G.; Rao, Nageswara S; Towlsey, D.

    2016-01-01

    In recent years, the computer networking community has seen a steady growth in bandwidth-delay products (BDPs). Several TCP variants were created to combat the shortcomings of legacy TCP when it comes to operation in high-BDP environments. These variants, among which are CUBIC, STCP, and H-TCP, have been extensively studied in some empirical contexts, and some analytical models exist for CUBIC and STCP. However, since these studies have been conducted, BDPs have risen even more, and new bulk data transfer tools have emerged that utilize multiple parallel TCP streams. In view of these new developments, it is imperative to revisit the question: Which congestion control algorithms are best adapted to current networking environments? In order to help resolve this question, we contribute the following: (i) using first principles, we develop a general throughput-prediction framework that takes into account buffer sizes and maximum window constraints; (ii) we validate the models using measurements and achieve low prediction errors; (iii) we note differences in TCP dynamics between two experimental configurations and find one of them to be significantly more deterministic than the other; we also find that CUBIC and H-TCP outperform STCP, especially when multiple streams are used; and (iv) we present preliminary results for modelling multiple TCP streams for CUBIC and STCP.
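    One ingredient of such a throughput-prediction framework is the congestion-window growth law of each TCP variant; as an illustration, the sketch below evaluates the CUBIC window function in its RFC 8312 form. The paper's own model details are not reproduced here, and the example numbers are made up.

```python
# CUBIC congestion-window growth (RFC 8312 form), one ingredient of the kind of
# throughput-prediction framework discussed above; times in seconds, window in packets.
def cubic_window(t, w_max, C=0.4, beta=0.7):
    """Window size t seconds after a loss event that occurred at window w_max."""
    K = ((w_max * (1.0 - beta)) / C) ** (1.0 / 3.0)   # time to return to w_max
    return C * (t - K) ** 3 + w_max

# Window trajectory after a loss at w_max = 10000 packets (a high-BDP regime)
for t in (0.0, 5.0, 10.0, 20.0):
    print(t, round(cubic_window(t, w_max=10000.0)))
```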

  4. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of the country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for the client market portfolio align with the
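    Using the rebuilding cost factors quoted above, a mean damage ratio and expected loss can be formed from a damage-grade distribution; the sketch below does this with made-up grade probabilities and a placeholder replacement value, purely to illustrate the arithmetic.

```python
import numpy as np

# Mean damage ratio from a damage-grade distribution, using the rebuilding cost
# factors quoted above (EMS-98 grades 1-5); the grade probabilities are made-up
# inputs of the kind a vulnerability model would produce for one event.
COST_FACTOR = np.array([0.10, 0.20, 0.35, 0.75, 1.00])   # grades D1..D5

def expected_loss(grade_probs, replacement_value):
    grade_probs = np.asarray(grade_probs)      # P(D1)..P(D5) for a building class
    mdr = np.dot(grade_probs, COST_FACTOR)     # mean damage ratio
    return mdr * replacement_value

loss = expected_loss([0.30, 0.25, 0.20, 0.10, 0.02], replacement_value=1.0e6)
print(f"expected loss ~ {loss:,.0f}")
```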

  5. Predicting Pilot Error in Nextgen: Pilot Performance Modeling and Validation Efforts

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky

    2012-01-01

    We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and the relevance of the modeling and validation efforts to Next Gen technology and procedures.

  6. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    NASA Technical Reports Server (NTRS)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  7. 2013 CEF RUN - PHASE 1 DATA ANALYSIS AND MODEL VALIDATION

    SciTech Connect

    Choi, A.

    2014-05-08

    Phase 1 of the 2013 Cold cap Evaluation Furnace (CEF) test was completed on June 3, 2013 after a 5-day round-the-clock feeding and pouring operation. The main goal of the test was to characterize the CEF off-gas produced from a nitric-formic acid flowsheet feed and confirm whether the CEF platform is capable of producing scalable off-gas data necessary for the revision of the DWPF melter off-gas flammability model; the revised model will be used to define new safety controls on the key operating parameters for the nitric-glycolic acid flowsheet feeds, including total organic carbon (TOC). Whether the CEF off-gas data were scalable for the purpose of predicting the potential flammability of the DWPF melter exhaust was determined by comparing the predicted H2 and CO concentrations using the current DWPF melter off-gas flammability model to those measured during Phase 1; data were deemed scalable if the calculated fractional conversions of TOC-to-H2 and TOC-to-CO at varying melter vapor space temperatures were found to trend with and further bound the respective measured data with some margin of safety. Being scalable thus means that for a given feed chemistry the instantaneous flow rates of H2 and CO in the DWPF melter exhaust can be estimated with some degree of conservatism by multiplying those of the respective gases from a pilot-scale melter by the feed rate ratio. This report documents the results of the Phase 1 data analysis and the necessary calculations performed to determine the scalability of the CEF off-gas data. A total of six steady-state runs were made during Phase 1 under non-bubbled conditions by varying the CEF vapor space temperature from near 700 to below 300°C, as measured in a thermowell (T_tw). At each steady-state temperature, the off-gas composition was monitored continuously for two hours using MS, GC, and FTIR in order to track mainly H2, CO, CO2, NOx, and organic gases such as CH4. The standard
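    The scalability argument above amounts to scaling pilot-scale gas generation rates by the feed-rate ratio and comparing fractional TOC conversions; the sketch below illustrates that arithmetic with placeholder numbers and function names, not values or methods from the report.

```python
# Sketch of the scalability check described above: pilot-scale (CEF) off-gas flows are
# scaled to the full-scale melter by the feed-rate ratio, and fractional TOC conversions
# are compared.  All numbers and names are illustrative placeholders.
def scale_flammable_gas_rate(pilot_gas_rate, pilot_feed_rate, full_feed_rate):
    """Estimate a full-scale gas generation rate (e.g. H2) from pilot-scale data."""
    return pilot_gas_rate * (full_feed_rate / pilot_feed_rate)

def fractional_conversion(gas_molar_rate, toc_molar_feed_rate):
    """Fraction of fed TOC carbon appearing as the given gas (placeholder definition)."""
    return gas_molar_rate / toc_molar_feed_rate

h2_full = scale_flammable_gas_rate(pilot_gas_rate=0.02, pilot_feed_rate=1.0, full_feed_rate=12.0)
frac = fractional_conversion(gas_molar_rate=0.02, toc_molar_feed_rate=5.0)
print(f"estimated full-scale H2 rate ~ {h2_full:.2f} (same units as the pilot rate), "
      f"fractional conversion ~ {frac:.3f}")
```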

  8. An analytical approach to computing biomolecular electrostatic potential. II. Validation and applications

    NASA Astrophysics Data System (ADS)

    Gordon, John C.; Fenley, Andrew T.; Onufriev, Alexey

    2008-08-01

    An ability to efficiently compute the electrostatic potential produced by molecular charge distributions under realistic solvation conditions is essential for a variety of applications. Here, the simple closed-form analytical approximation to the Poisson equation rigorously derived in Part I for idealized spherical geometry is tested on realistic shapes. The effects of mobile ions are included at the Debye-Hückel level. The accuracy of the resulting closed-form expressions for electrostatic potential is assessed through comparisons with numerical Poisson-Boltzmann (NPB) reference solutions on a test set of 580 representative biomolecular structures under typical conditions of aqueous solvation. For each structure, the deviation from the reference is computed for a large number of test points placed near the dielectric boundary (molecular surface). The accuracy of the approximation, averaged over all test points in each structure, is within 0.6 kcal/mol/|e| (about kT per unit charge) for all structures in the test set. For 91.5% of the individual test points, the deviation from the NPB potential is within 0.6 kcal/mol/|e|. The deviations from the reference decrease with increasing distance from the dielectric boundary: the approximation is asymptotically exact far away from the source charges. Deviation of the overall shape of a structure from ideal spherical does not, by itself, appear to necessitate decreased accuracy of the approximation. The largest deviations from the NPB reference are found inside very deep and narrow indentations that occur on the dielectric boundaries of some structures. The dimensions of these pockets of locally highly negative curvature are comparable to the size of a water molecule; the applicability of continuum dielectric models in these regions is discussed. The maximum deviations from the NPB are reduced substantially when the boundary is smoothed by using a larger probe radius (3 Å) to generate the molecular surface. A detailed accuracy
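    The paper's closed-form expression is not reproduced here; for orientation, the sketch below evaluates the textbook Debye-Hückel potential outside a charged sphere, the idealized case that Part I generalizes. The unit conventions and illustrative parameter values are assumptions, not the paper's.

```python
import math

# Textbook Debye-Hueckel potential outside a charged sphere -- the idealized limit that
# the paper's closed-form approximation generalizes; not the paper's actual expression.
# q in elementary charges, distances in Angstroms, eps is the solvent relative
# permittivity, kappa the inverse Debye length (1/A).
def dh_sphere_potential(q, r, a, eps=80.0, kappa=0.1):
    """Potential in kcal/(mol*|e|) at distance r >= a from the sphere centre."""
    coulomb_kcal = 332.0636   # e^2/(4*pi*eps0) in kcal*A/(mol*e^2)
    return coulomb_kcal * q * math.exp(-kappa * (r - a)) / (eps * r * (1.0 + kappa * a))

# Potential 5 A outside a 15 A sphere carrying a single negative charge
print(dh_sphere_potential(q=-1.0, r=20.0, a=15.0))
```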

  9. Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica

    SciTech Connect

    Oboh, I.; Aluyor, E.; Audu, T.

    2015-03-30

    The biosorption of Zinc (II) ions onto a biomaterial, Luffa cylindrica, has been studied. This biomaterial was characterized by elemental analysis, surface area, pore size distribution, and scanning electron microscopy, and the biomaterial before and after sorption was characterized by Fourier Transform Infrared (FTIR) spectrometry. The kinetic nonlinear models fitted were pseudo-first order, pseudo-second order, and intra-particle diffusion. A comparison of non-linear regression methods for selecting the kinetic model was made. Four error functions, namely the coefficient of determination (R^2), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of the errors squared (ERRSQ), were used to predict the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution, particularly in the tropical world, which occurs as waste material, could be put into effective utilization as a biosorbent to address a crucial environmental problem.
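    As an illustration of the non-linear kinetic fitting described above, the sketch below fits the pseudo-first-order and pseudo-second-order models to made-up uptake data and reports R^2 and ERRSQ; it is not the paper's dataset or code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Non-linear fits of the pseudo-first-order and pseudo-second-order kinetic models
# mentioned above; the uptake data below are made up, not the paper's measurements.
def pfo(t, qe, k1):      # pseudo-first order: q(t) = qe * (1 - exp(-k1 t))
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):      # pseudo-second order: q(t) = qe^2 k2 t / (1 + qe k2 t)
    return qe**2 * k2 * t / (1.0 + qe * k2 * t)

t = np.array([5, 10, 20, 40, 60, 90, 120.0])           # contact time (min)
q = np.array([3.1, 5.2, 7.6, 9.4, 10.1, 10.5, 10.6])   # Zn(II) uptake (mg/g), illustrative

for name, model in (("PFO", pfo), ("PSO", pso)):
    params, _ = curve_fit(model, t, q, p0=(q.max(), 0.05))
    resid = q - model(t, *params)
    errsq = np.sum(resid**2)                            # sum of squared errors
    r2 = 1.0 - errsq / np.sum((q - q.mean())**2)        # coefficient of determination
    print(f"{name}: qe={params[0]:.2f} mg/g, k={params[1]:.4f}, R^2={r2:.3f}, ERRSQ={errsq:.3f}")
```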

  10. Effects of noise suppression on intelligibility. II: An attempt to validate physical metrics.

    PubMed

    Hilkhuysen, Gaston; Gaubitch, Nikolay; Brookes, Mike; Huckvale, Mark

    2014-01-01

    Using the data presented in the accompanying paper [Hilkhuysen et al., J. Acoust. Soc. Am. 131, 531-539 (2012)], the ability of six metrics to predict the intelligibility of speech in noise before and after noise suppression was studied. The metrics considered were the Speech Intelligibility Index (SII), the fractional Articulation Index (fAI), the coherence intelligibility index based on the mid-levels in speech (CSIImid), an extension of the Normalized Coherence Metric (NCM+), a part of the speech-based envelope power model (pre-sEPSM), and the Short Term Objective Intelligibility measure (STOI). Three of the measures, SII, CSIImid, and NCM+, overpredicted intelligibility after noise reduction, whereas fAI underpredicted these intelligibilities. The pre-sEPSM metric worked well for speech in babble but failed with car noise. STOI gave the best predictions, but overall the size of the intelligibility prediction errors was greater than the change in intelligibility caused by noise suppression. Suggestions for improvements of the metrics are discussed.

  11. Model light curves of linear Type II supernovae

    SciTech Connect

    Swartz, D.A.; Wheeler, J.C.; Harkness, R.P. )

    1991-06-01

    Light curves computed from hydrodynamic models of supernovae are compared graphically with the average observed B- and V-band light curves of linear Type II supernovae. Models are based on the following explosion scenarios: carbon deflagration within a C + O core near the Chandrasekhar mass, electron-capture-induced core collapse of an O-Ne-Mg core of the Chandrasekhar mass, and collapse of an Fe core in a massive star. A range of envelope mass, initial radius, and composition is investigated. Only a narrow range of values of these parameters is consistent with observations. Within this narrow range, most of the observed light curve properties can be obtained in part, but none of the models can reproduce the entire light curve shape and absolute magnitude over the full 200 day comparison period. The observed lack of a plateau phase is explained in terms of a combination of small envelope mass and envelope helium enhancement. The final cobalt tail phase of the light curve can be reproduced only if the mass of explosively synthesized radioactive Ni-56 is small. The results presented here, in conjunction with the observed homogeneity among individual members of the supernova subclass, argue favorably for the O-Ne-Mg core collapse mechanism as an explanation for linear Type II supernovae. The Crab Nebula may have arisen from such an explosion. Carbon deflagrations may lead to brighter events like SN 1979C. 62 refs.

  12. Reconceptualizing the Learning Transfer Conceptual Framework: Empirical Validation of a New Systemic Model

    ERIC Educational Resources Information Center

    Kontoghiorghes, Constantine

    2004-01-01

    The main purpose of this study was to examine the validity of a new systemic model of learning transfer and thus determine if a more holistic approach to training transfer could better explain the phenomenon. In all