Science.gov

Sample records for ii model validation

  1. Filament winding cylinders. II - Validation of the process model

    NASA Technical Reports Server (NTRS)

    Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.

    1990-01-01

    Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured stresses and strains, indicating that the model is a useful representation of the winding and curing processes.

  2. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    SciTech Connect

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more than they resemble the chosen reference data. In aggregate, the simulations of land-surface latent and sensible heat fluxes appear to occupy intermediate positions between these extremes, but the existing large observational uncertainties in these processes make this a provisional assessment. In all selected processes as well, the error statistics are found to be sensitive to season and latitude sector, confirming the need for finer-scale analyses which also are in progress.
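
    The aggregated root-mean-square comparison described above can be illustrated with a short sketch. The array shapes, variable names, and the cosine-latitude area weighting below are illustrative assumptions, not the subproject's actual code.

```python
import numpy as np

def area_weighted_rmse(sim, ref, lat, land_mask):
    """RMS difference between a simulated and a reference monthly field,
    aggregated over land points with cos(latitude) area weights.

    sim, ref  : arrays of shape (n_months, n_lat, n_lon)
    lat       : latitudes in degrees, shape (n_lat,)
    land_mask : boolean array of shape (n_lat, n_lon), True over land
    """
    w = np.cos(np.deg2rad(lat))[:, None] * land_mask        # (n_lat, n_lon) weights
    diff2 = (sim - ref) ** 2                                 # squared differences
    mse = (diff2 * w).sum(axis=(1, 2)) / w.sum()             # per-month weighted MSE
    return np.sqrt(mse.mean())                               # aggregate over months

# Observational uncertainty can be estimated the same way by replacing `sim`
# with an alternative validation data set (e.g., a reanalysis product).
```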

  3. Validated Competing Event Model for the Stage I-II Endometrial Cancer Population

    SciTech Connect

    Carmona, Ruben; Gulaya, Sachin; Murphy, James D.; Rose, Brent S.; Wu, John; Noticewala, Sonal; McHale, Michael T.; Yashar, Catheryn M.; Vaida, Florin; Mell, Loren K.

    2014-07-15

    Purpose/Objective(s): Early-stage endometrial cancer patients are at higher risk of noncancer mortality than of cancer mortality. Competing event models incorporating comorbidity could help identify women most likely to benefit from treatment intensification. Methods and Materials: 67,397 women with stage I-II endometrioid adenocarcinoma after total hysterectomy diagnosed from 1988 to 2009 were identified in Surveillance, Epidemiology, and End Results (SEER) and linked SEER-Medicare databases. Using demographic and clinical information, including comorbidity, we sought to develop and validate a risk score to predict the incidence of competing mortality. Results: In the validation cohort, increasing competing mortality risk score was associated with increased risk of noncancer mortality (subdistribution hazard ratio [SDHR], 1.92; 95% confidence interval [CI], 1.60-2.30) and decreased risk of endometrial cancer mortality (SDHR, 0.61; 95% CI, 0.55-0.78). Controlling for other variables, Charlson Comorbidity Index (CCI) = 1 (SDHR, 1.62; 95% CI, 1.45-1.82) and CCI >1 (SDHR, 3.31; 95% CI, 2.74-4.01) were associated with increased risk of noncancer mortality. The 10-year cumulative incidences of competing mortality within low-, medium-, and high-risk strata were 27.3% (95% CI, 25.2%-29.4%), 34.6% (95% CI, 32.5%-36.7%), and 50.3% (95% CI, 48.2%-52.6%), respectively. With increasing competing mortality risk score, we observed a significant decline in omega (ω), indicating a diminishing likelihood of benefit from treatment intensification. Conclusion: Comorbidity and other factors influence the risk of competing mortality among patients with early-stage endometrial cancer. Competing event models could improve our ability to identify patients likely to benefit from treatment intensification.
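
    As a rough illustration of the competing-mortality framework, the sketch below computes a nonparametric cumulative incidence function for one cause of death in the presence of a competing cause. The data layout and variable names are hypothetical; this is not the authors' risk-score model.

```python
import numpy as np

def cumulative_incidence(time, event, cause, t_eval):
    """Nonparametric cumulative incidence of `cause` with competing events.

    time   : event or censoring times
    event  : 0 = censored, 1 = cancer death, 2 = noncancer death
    cause  : event code whose cumulative incidence is wanted
    t_eval : time at which to evaluate the cumulative incidence
    """
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]

    n = len(time)
    surv = 1.0   # overall (all-cause) survival just before the current time
    cif = 0.0
    for i in range(n):
        if time[i] > t_eval:
            break
        at_risk = n - i
        if event[i] == cause:
            cif += surv * (1.0 / at_risk)
        if event[i] != 0:                 # any event reduces overall survival
            surv *= 1.0 - 1.0 / at_risk
    return cif

# Hypothetical example: 10-year cumulative incidence of noncancer death (cause 2)
times  = [2.0, 3.5, 4.0, 6.1, 7.2, 8.0, 9.5, 10.0]
events = [2,   0,   1,   2,   0,   2,   1,   0]
print(cumulative_incidence(times, events, cause=2, t_eval=10.0))
```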

  4. A study of the collapse of spherical shells, Part II: Model Validation.

    SciTech Connect

    Thacker, B. H.; McKeighan, P. C.; Pepin, J. E.

    2005-01-01

    There is a growing need to quantify the level of credibility that can be associated with model predictions. Model verification and validation (V&V) is a methodology for the development of models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost and risk associated with component and full-scale testing of products, materials, and weapons. Consequently, the development of guidelines and procedures for conducting a V&V program are currently being defined by a broad spectrum of researchers. This talk will discuss an on-going effort to validate a model that predicts the collapse load of a spherical shell structure. Inherent variations in geometric shape and material parameters are included in the uncertainty model. Results from a recently completed probabilistic validation test to measure the variation in collapse load are compared to the predicted collapse load variation.

  5. Fatigue crack growth under variable-amplitude loading: Part II Code development and model validation

    E-print Network

    Ray, Asok

    Received ... 2001; accepted 12 February 2001. Abstract (excerpt): A state-space model of fatigue crack growth has been ... because the plant dynamic models are usually formulated in the state-space setting or autoregressive ...

  6. Evaluation of Reliability and Validity of the Hendrich II Fall Risk Model in a Chinese Hospital Population

    PubMed Central

    Zhang, Congcong; Wu, Xinjuan; Lin, Songbai; Jia, Zhaoxia; Cao, Jing

    2015-01-01

    To translate, validate and examine the reliability and validity of a Chinese version of the Hendrich II Fall Risk Model (HFRM) in predicting falls in elderly inpatients. A sample of 989 Chinese elderly inpatients was recruited upon admission at the Peking Union Medical College Hospital. The inpatients were assessed for fall risk using the Chinese version of the HFRM at admission. The reliability of the Chinese version of the HFRM was determined using the internal consistency and test-retest methods. Validity was determined using construct validity and convergent validity. Receiver operating characteristic (ROC) curves were created to determine the sensitivity and specificity. The Chinese version of the HFRM showed excellent repeatability with an intra-class correlation coefficient (ICC) of 0.9950 (95% confidence interval (CI): 0.9923–0.9984). The inter-rater reliability was high with an ICC of 0.9950 (95%CI: 0.9923–0.9984). Cronbach’s alpha coefficient was 0.366. Content validity was excellent, with a content validity ratio of 0.9333. The Chinese version of the HFRM had a sensitivity of 72% and a specificity of 69% when using a cut-off of 5 points on the scale. The area under the curve (AUC) was 0.815 (P<0.001). The Chinese version of the HFRM showed good reliability and validity in assessing the risk of falls in Chinese elderly inpatients. PMID:26544961
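
    A minimal sketch of the sensitivity, specificity, and ROC computations reported above. The cut-off of 5 points is taken from the abstract; the score and outcome arrays are invented for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# scores: HFRM total score per inpatient; fell: 1 if the patient fell, else 0
scores = np.array([3, 7, 5, 9, 2, 6, 4, 8, 1, 5])
fell   = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 0])

cutoff = 5                                   # cut-off of 5 points, as in the study
pred   = (scores >= cutoff).astype(int)

tp = np.sum((pred == 1) & (fell == 1))
fn = np.sum((pred == 0) & (fell == 1))
tn = np.sum((pred == 0) & (fell == 0))
fp = np.sum((pred == 1) & (fell == 0))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(fell, scores)            # area under the ROC curve
fpr, tpr, thresholds = roc_curve(fell, scores)   # full ROC curve if needed
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, AUC={auc:.2f}")
```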

  7. A physical model of the bidirectional reflectance of vegetation canopies. I - Theory. II - Inversion and validation

    NASA Technical Reports Server (NTRS)

    Verstraete, Michel M.; Pinty, Bernard; Dickinson, Robert E.

    1990-01-01

    A new physically based analytical model of the bidirectional reflectance of vegetation canopies is derived. The model expresses the bidirectional reflectance field of a semiinfinite canopy as a combination of functions describing (1) the optical properties of the leaves through their single-scattering albedo and their phase function, (2) the average distribution of leaf orientations, and (3) the architecture of the canopy. The model is validated against laboratory and ground-based measurements in the visible and IR spectral regions, taken over two vegetation covers. The intrinsic optical properties of leaves and the information on the geometrical canopy arrangements in space were obtained using an inversion procedure based on a nonlinear optimization technique. Model predictions of bidirectional reflectances obtained using the inversion procedure compare well with actual observations.
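
    The inversion procedure described above, fitting model parameters to measured bidirectional reflectances with a nonlinear optimizer, can be sketched as follows. The two-parameter reflectance function used here is a stand-in for illustration only, not the Verstraete-Pinty-Dickinson model itself.

```python
import numpy as np
from scipy.optimize import least_squares

def toy_brdf(params, sun_zen, view_zen):
    """Stand-in two-parameter bidirectional reflectance function:
    a single-scattering albedo scaled by a simple angular term."""
    albedo, b = params
    return albedo * (1.0 + b * np.cos(sun_zen) * np.cos(view_zen))

def invert(measured, sun_zen, view_zen):
    """Retrieve the parameters that best reproduce the measured reflectances."""
    def residuals(p):
        return toy_brdf(p, sun_zen, view_zen) - measured
    fit = least_squares(residuals, x0=[0.1, 0.5], bounds=([0.0, -1.0], [1.0, 2.0]))
    return fit.x

# Synthetic check: generate "measurements" from known parameters and recover them.
rng = np.random.default_rng(0)
sun = np.deg2rad(rng.uniform(20, 60, 50))
view = np.deg2rad(rng.uniform(0, 60, 50))
truth = [0.3, 0.8]
meas = toy_brdf(truth, sun, view) + rng.normal(0, 0.005, 50)
print(invert(meas, sun, view))   # should be close to [0.3, 0.8]
```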

  8. Person Heterogeneity of the BDI-II-C and Its Effects on Dimensionality and Construct Validity: Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Wu, Pei-Chen; Huang, Tsai-Wei

    2010-01-01

    This study was to apply the mixed Rasch model to investigate person heterogeneity of Beck Depression Inventory-II-Chinese version (BDI-II-C) and its effects on dimensionality and construct validity. Person heterogeneity was reflected by two latent classes that differ qualitatively. Additionally, person heterogeneity adversely affected the…

  9. Development and validation of an evaporation duct model. Part II: Evaluation and improvement of stability functions

    NASA Astrophysics Data System (ADS)

    Ding, Juli; Fei, Jianfang; Huang, Xiaogang; Cheng, Xiaoping; Hu, Xiaohua; Ji, Liang

    2015-06-01

    This study aims to validate and improve the universal evaporation duct (UED) model through a further analysis of the stability function (ψ). A large number of hydrometeorological observations obtained from a tower platform near Xisha Island of the South China Sea are employed, together with the latest variations in the ψ function. Applicability of different ψ functions for specific sea areas and stratification conditions is investigated based on three objective criteria. The results show that, under unstable conditions, the ψ function of Fairall et al. (1996) (i.e., Fairall96, similar for abbreviations of other function names) in general offers the best performance. However, strictly speaking, this holds true only for the stability (represented by bulk Richardson number R_iB) range -2.6 ≤ R_iB < -0.1; when conditions become weakly unstable (-0.1 ≤ R_iB < -0.01), Fairall96 offers the second best performance after Hu and Zhang (1992) (HYQ92). Conversely, for near-neutral but slightly unstable conditions (-0.01 ≤ R_iB < 0.0), the effects of Edson04, Fairall03, Grachev00, and Fairall96 are similar, with Edson04 being the best function but offering only a weak advantage. Under stable conditions, HYQ92 is the optimal and offers a pronounced advantage, followed by the newly introduced SHEBA07 (by Grachev et al., 2007) function. Accordingly, the most favorable functions, i.e., Fairall96 and HYQ92, are incorporated into the UED model to obtain an improved version of the model. With the new functions, the mean root-mean-square (rms) errors of the modified refractivity (M), 0-5-m M slope, 5-40-m M slope, and the rms errors of evaporation duct height (EDH) are reduced by 21.65%, 9.12%, 38.79%, and 59.06%, respectively, compared to the classical Naval Postgraduate School model.
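
    A compact sketch of the regime selection described above: compute the bulk Richardson number from near-surface measurements, then pick a stability-function family by R_iB range. The thresholds follow the abstract; the bulk formula is the standard one, while the measurement values are made up.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def bulk_richardson(theta_air, theta_sea, wind_speed, z):
    """Bulk Richardson number from air potential temperature (K), sea-surface
    potential temperature (K), wind speed (m/s), and measurement height z (m)."""
    dtheta = theta_air - theta_sea
    return G * z * dtheta / (theta_air * wind_speed ** 2)

def choose_stability_function(rib):
    """Select the stability-function family favored in the abstract
    for each bulk-Richardson-number regime."""
    if rib >= 0.0:
        return "HYQ92"          # stable: Hu and Zhang (1992)
    if rib >= -0.01:
        return "Edson04"        # near-neutral, slightly unstable
    if rib >= -0.1:
        return "HYQ92"          # weakly unstable
    return "Fairall96"          # unstable (tested down to R_iB = -2.6)

rib = bulk_richardson(theta_air=300.2, theta_sea=301.0, wind_speed=5.0, z=10.0)
print(rib, choose_stability_function(rib))
```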

  10. Predictive modeling of infrared radiative heating in tomato dry-peeling process: Part II. Model validation and sensitivity analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...

  11. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 2 (Appendices I, section 5 and II, section 1)

    SciTech Connect

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquor submodels, and fundamental knowledge to the US kraft pulp industry. Volume 2 contains the last section of Appendix I, Radiative heat transfer in kraft recovery boilers, and the first section of Appendix II, The effect of temperature and residence time on the distribution of carbon, sulfur, and nitrogen between gaseous and condensed phase products from low temperature pyrolysis of kraft black liquor.

  12. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models

    EPA Science Inventory

    The second phase of the MicroArray Quality Control (MAQC-II) project evaluated common practices for developing and validating microarray-based models aimed at predicting toxicological and clinical endpoints. Thirty-six teams developed classifiers for 13 endpoints - some easy, som...

  13. Assessing Wildlife Habitat Value of New England Salt Marshes: II. Model Testing and Validation

    EPA Science Inventory

    We test a previously described model to assess the wildlife habitat value of New England salt marshes by comparing modeled habitat values and scores with bird abundance and species richness at sixteen salt marshes in Narragansett Bay, Rhode Island USA. Assessment scores ranged f...

  14. Model Validation Status Review

    SciTech Connect

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and engineered barriers, plus the TSPA model itself. Description of the model areas is provided in Section 3, and the documents reviewed are described in Section 4. The responsible manager for the Model Validation Status Review was the Chief Science Officer (CSO) for Bechtel-SAIC Co. (BSC). The team lead was assigned by the CSO. A total of 32 technical specialists were engaged to evaluate model validation status in the 21 model areas. The technical specialists were generally independent of the work reviewed, meeting technical qualifications as discussed in Section 5.

  15. TAMDAR Sensor Validation in 2003 AIRS II

    NASA Technical Reports Server (NTRS)

    Daniels, Taumi S.; Murray, John J.; Anderson, Mark V.; Mulally, Daniel J.; Jensen, Kristopher R.; Grainger, Cedric A.; Delene, David J.

    2005-01-01

    This study entails an assessment of TAMDAR in situ temperature, relative humidity and winds sensor data from seven flights of the UND Citation II. These data are undergoing rigorous assessment to determine their viability to significantly augment domestic Meteorological Data Communications Reporting System (MDCRS) and the international Aircraft Meteorological Data Reporting (AMDAR) system observational databases to improve the performance of regional and global numerical weather prediction models. NASA Langley Research Center participated in the Second Alliance Icing Research Study from November 17 to December 17, 2003. TAMDAR data taken during this period are compared with validation data from the UND Citation. The data indicate acceptable performance of the TAMDAR sensor when compared to measurements from the UND Citation research instruments.

  16. Maui Electrical System Simulation Model Validation

    E-print Network

    Maui Electrical System Simulation Model Validation. Prepared for the U.S. Department of Energy (Baseline Model Validation) by GE Global Research, Niskayuna, New York, and the University of Hawaii, Hawaii Natural ... to build the models and are summarized in this report.

  17. INACTIVATION OF CRYPTOSPORIDIUM OOCYSTS IN A PILOT-SCALE OZONE BUBBLE-DIFFUSER CONTACTOR - II: MODEL VALIDATION AND APPLICATION

    EPA Science Inventory

    The ADR model developed in Part I of this study was successfully validated with experimenta data obtained for the inactivation of C. parvum and C. muris oocysts with a pilot-scale ozone-bubble diffuser contactor operated with treated Ohio River water. Kinetic parameters, required...

  18. Groundwater Model Validation

    SciTech Connect

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov Chain Monte Carlo. The approach shows a great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.
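
    The use of validation data to update an input-parameter distribution via Markov Chain Monte Carlo, as described above, can be sketched with a simple Metropolis sampler. The one-parameter "model", the Gaussian prior, and the noise level are all illustrative assumptions, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(k):
    """Toy groundwater model: predicted head drop as a function of a
    log-conductivity parameter k (stand-in for the real flow/transport model)."""
    return 2.0 * k + 1.0

# Hypothetical validation observations and their noise level
obs = np.array([4.1, 3.9, 4.2])
sigma_obs = 0.2

# Gaussian prior on k from the calibration stage
prior_mean, prior_sd = 1.0, 0.5

def log_posterior(k):
    log_prior = -0.5 * ((k - prior_mean) / prior_sd) ** 2
    log_like = -0.5 * np.sum(((obs - forward_model(k)) / sigma_obs) ** 2)
    return log_prior + log_like

# Metropolis sampler
k, samples = prior_mean, []
lp = log_posterior(k)
for _ in range(20000):
    k_new = k + rng.normal(0, 0.1)
    lp_new = log_posterior(k_new)
    if np.log(rng.uniform()) < lp_new - lp:
        k, lp = k_new, lp_new
    samples.append(k)

posterior = np.array(samples[5000:])          # discard burn-in
print(posterior.mean(), posterior.std())      # posterior for the input parameter
```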

  19. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitude of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.

  20. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 3 (Appendices II, sections 2--3 and III)

    SciTech Connect

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquor submodels, and fundamental knowledge to the US kraft pulp industry. Volume 3 contains the following appendix sections: Formation and destruction of nitrogen oxides in recovery boilers; Sintering and densification of recovery boiler deposits: laboratory data and a rate model; and Experimental data on rates of particulate formation during char bed burning.

  1. Validation of SAGE II NO2 measurements

    NASA Technical Reports Server (NTRS)

    Cunnold, D. M.; Zawodny, J. M.; Chu, W. P.; Mccormick, M. P.; Pommereau, J. P.; Goutail, F.

    1991-01-01

    The validity of NO2 measurements from the stratospheric aerosol and gas experiment (SAGE) II is examined by comparing the data with climatological distributions of NO2 and by examining the consistency of the observations themselves. The precision at high altitudes is found to be 5 percent, which is also the case at specific low altitudes for certain latitudes where the mixing ratio is 4 ppbv, and the precision is 0.2 ppbv at low altitudes. The autocorrelation distance of the smoothed profile measurement noise is 3-5 km and 10 km for 1-km and 5-km smoothing, respectively. The SAGE II measurements agree with spectroscopic measurements to within 10 percent, and the SAGE measurements are about 20 percent smaller than average limb monitor measurements at the mixing ratio peak. SAGE I and SAGE II measurements are slightly different, but the difference is not attributed to changes in atmospheric NO2.
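
    The autocorrelation distance quoted above (here taken as the vertical lag at which the noise correlation falls below 1/e, a common but not universal convention) could be estimated from a profile of measurement noise roughly as follows. The noise series and the threshold are illustrative assumptions.

```python
import numpy as np

def autocorrelation_distance(noise, dz, threshold=1.0 / np.e):
    """Smallest vertical lag at which the autocorrelation of a noise
    profile drops below `threshold`.  `dz` is the vertical spacing in km."""
    noise = np.asarray(noise, float) - np.mean(noise)
    var = np.dot(noise, noise)
    for lag in range(1, len(noise)):
        r = np.dot(noise[:-lag], noise[lag:]) / var
        if r < threshold:
            return lag * dz
    return len(noise) * dz

# Hypothetical smoothed-profile noise with 1 km vertical spacing
rng = np.random.default_rng(2)
noise = np.convolve(rng.normal(size=200), np.ones(4) / 4, mode="same")
print(autocorrelation_distance(noise, dz=1.0), "km")
```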

  2. Fast, efficient generation of high-quality atomic charges. AM1-BCC model: II. Parameterization and validation.

    PubMed

    Jakalian, Araz; Jack, David B; Bayly, Christopher I

    2002-12-01

    We present the first global parameterization and validation of a novel charge model, called AM1-BCC, which quickly and efficiently generates high-quality atomic charges for computer simulations of organic molecules in polar media. The goal of the charge model is to produce atomic charges that emulate the HF/6-31G* electrostatic potential (ESP) of a molecule. Underlying electronic structure features, including formal charge and electron delocalization, are first captured by AM1 population charges; simple additive bond charge corrections (BCCs) are then applied to these AM1 atomic charges to produce the AM1-BCC charges. The parameterization of BCCs was carried out by fitting to the HF/6-31G* ESP of a training set of >2700 molecules. Most organic functional groups and their combinations were sampled, as well as an extensive variety of cyclic and fused bicyclic heteroaryl systems. The resulting BCC parameters allow the AM1-BCC charging scheme to handle virtually all types of organic compounds listed in The Merck Index and the NCI Database. Validation of the model was done through comparisons of hydrogen-bonded dimer energies and relative free energies of solvation using AM1-BCC charges in conjunction with the 1994 Cornell et al. force field for AMBER. Homo- and hetero-dimer hydrogen-bond energies of a diverse set of organic molecules were reproduced to within 0.95 kcal/mol RMS deviation from the ab initio values, and for DNA dimers the energies were within 0.9 kcal/mol RMS deviation from ab initio values. The calculated relative free energies of solvation for a diverse set of monofunctional isosteres were reproduced to within 0.69 kcal/mol of experiment. In all these validation tests, AMBER with the AM1-BCC charge model maintained a correlation coefficient above 0.96. Thus, the parameters presented here for use with the AM1-BCC method present a fast, accurate, and robust alternative to HF/6-31G* ESP-fit charges for general use with the AMBER force field in computer simulations involving organic small molecules. PMID:12395429
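
    The additive correction step described above can be written compactly: each atom's final charge is its AM1 population charge plus the signed BCC increments of the bonds it participates in. The sketch below assumes a small lookup table of BCC parameters; the numerical values and bond-type keys are placeholders, not the published parameter set.

```python
import numpy as np

def am1_bcc_charges(am1_charges, bonds, bcc_params):
    """Apply additive bond charge corrections (BCCs) to AM1 population charges.

    am1_charges : array of AM1 charges, one per atom
    bonds       : list of (i, j, bond_type) tuples; the correction for
                  bond_type is added to atom i and subtracted from atom j
    bcc_params  : dict mapping bond_type -> correction (elementary charges)
    """
    q = np.array(am1_charges, float)
    for i, j, bond_type in bonds:
        bcc = bcc_params[bond_type]
        q[i] += bcc
        q[j] -= bcc      # corrections are antisymmetric, so total charge is conserved
    return q

# Placeholder example: a carbonyl C=O pair
am1 = [0.25, -0.35]
bonds = [(0, 1, "C=O")]
bcc_params = {"C=O": 0.10}   # hypothetical value, not the published parameter
print(am1_bcc_charges(am1, bonds, bcc_params))   # total charge is unchanged
```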

  3. Lidar measurements during a haze episode in Penang, Malaysia and validation of the ECMWF MACC-II model

    NASA Astrophysics Data System (ADS)

    Khor, Wei Ying; Lolli, Simone; Hee, Wan Shen; Lim, Hwee San; Jafri, M. Z. Mat; Benedetti, Angela; Jones, Luke

    2015-04-01

    Haze occurs when a large amount of fine particulate matter is suspended in the atmosphere. During March 2014, a prolonged haze event occurred in Penang, Malaysia. The haze condition was measured and monitored using a ground-based Lidar system. Using these measurements, we evaluated the performance of the ECMWF MACC-II model. Lidar measurements showed that there was a thick aerosol layer confined in the planetary boundary layer (PBL), with extinction coefficients exceeding 0.3 km^-1. The model, however, underestimated these aerosol conditions over Penang. Backward trajectory analysis was performed to identify aerosol sources and transport. It is speculated that the aerosols came from the north-east, carried by the North-East monsoon wind, and that some originated from the central eastern coast of Sumatra along the Straits of Malacca.

  4. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part II: Benchmark comparisons of PUMA core parameters with MCNP5 and improvements due to a simple cell heterogeneity correction

    SciTech Connect

    Grant, C.; Mollerach, R.; Leszczynski, F.; Serra, O.; Marconi, J.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which has been progressing slowly during the last ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design located in Argentina. It has a pressure vessel design with 451 vertical coolant channels and the fuel assemblies (FA) are clusters of 37 natural UO{sub 2} rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons of core parameters of a slightly idealized model of the Atucha-I core obtained with the PUMA reactor code with MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, more symmetric than Atucha-II, and has some experimental data available. To validate the new models benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which improves significantly the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)
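
    Benchmark comparisons of this kind typically reduce to a reactivity difference and a channel-power error statistic. The sketch below shows one generic way such figures of merit might be computed from two codes' outputs; the array contents are invented and the metrics are not specific to the Atucha benchmark.

```python
import numpy as np

def keff_difference_pcm(keff_a, keff_b):
    """Reactivity difference between two k-effective values, in pcm."""
    return 1e5 * (1.0 / keff_b - 1.0 / keff_a)

def channel_power_errors(power_a, power_b):
    """Relative channel-power differences (%) and their RMS, after
    normalizing each distribution to a mean of 1."""
    a = np.asarray(power_a, float); a /= a.mean()
    b = np.asarray(power_b, float); b /= b.mean()
    rel = 100.0 * (a - b) / b
    return rel, np.sqrt(np.mean(rel ** 2))

# Invented example values (e.g., PUMA vs. MCNP5 channel powers)
print(keff_difference_pcm(1.00123, 1.00045))
rel, rms = channel_power_errors([1.02, 0.98, 1.05, 0.95], [1.00, 1.00, 1.03, 0.97])
print(rel, rms)
```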

  5. Data Assimilation of Photosynthetic Light-use Efficiency using Multi-angular Satellite Data: II Model Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Hilker, Thomas; Hall, Forest G.; Tucker, J.; Coops, Nicholas C.; Black, T. Andrew; Nichol, Caroline J.; Sellers, Piers J.; Barr, Alan; Hollinger, David Y.; Munger, J. W.

    2012-01-01

    Spatially explicit and temporally continuous estimates of photosynthesis will be of great importance for increasing our understanding of and ultimately closing the terrestrial carbon cycle. Current capabilities to model photosynthesis, however, are limited by the difficulty of representing accurately enough the complexity of the underlying biochemical processes and the numerous environmental constraints imposed upon plant primary production. A potentially powerful alternative to modeling photosynthesis through these indirect observations is the use of multi-angular satellite data to infer light-use efficiency (e) directly from spectral reflectance properties in connection with canopy shadow fractions. Hall et al. (this issue) introduced a new approach for predicting gross ecosystem production that would allow the use of such observations in a data assimilation mode to obtain spatially explicit variations in e from infrequent polar-orbiting satellite observations, while meteorological data are used to account for the more dynamic responses of e to variations in environmental conditions caused by changes in weather and illumination. In this second part of the study we implement and validate the approach of Hall et al. (this issue) across an ecologically diverse array of eight flux-tower sites in North America using data acquired from the Compact High Resolution Imaging Spectroradiometer (CHRIS) and eddy-flux observations. Our results show significantly enhanced estimates of e and therefore cumulative gross ecosystem production (GEP) over the course of one year at all examined sites. We also demonstrate that e is greatly heterogeneous even across small study areas. Data assimilation and direct inference of GEP from space using a new, proposed sensor could therefore be a significant step towards closing the terrestrial carbon cycle.

  6. Applied model validation

    NASA Astrophysics Data System (ADS)

    Davies, A. D.

    1985-07-01

    The NBS Center for Fire Research (CFR) conducts scientific research bearing on the fire safety of buildings, vehicles, tunnels and other inhabited structures. Data from controlled fire experiments are collected, analyzed and reduced to the analytical formulas that appear to underlie the observed phenomena. These results and more general physical principles are then combined into models to predict the development of environments that may be hostile to humans. This is a progress report of an applied model validation case study. The subject model is Transport of Fire, Smoke and Gases (FAST). Products from a fire in a burn room exit through a connected corridor to outdoors. Cooler counterflow air in a lower layer feeds the fire. The model predicts corridor layer temperatures and thicknesses vs. time, given enclosure, fire and ambient specifications. Data have been collected from 38 tests using several fire sizes, but have not been reduced. Corresponding model results, and model and test documentation are yet to come. Considerable modeling and calculation is needed to convert instrument readings to test results comparable with model outputs so that residual differences may be determined.

  7. Validation Studies for the Diet History Questionnaire II

    Cancer.gov

    Data show that the DHQ I instrument provides reasonable nutrient estimates, and three studies were conducted to assess its validity/calibration. There have been no such validation studies with the DHQ II.

  8. Model Fe-Al Steel with Exceptional Resistance to High Temperature Coarsening. Part II: Experimental Validation and Applications

    NASA Astrophysics Data System (ADS)

    Zhou, Tihe; Zhang, Peng; O'Malley, Ronald J.; Zurob, Hatem S.; Subramanian, Mani

    2015-01-01

    In order to achieve a fine uniform grain-size distribution using the process of thin slab casting and direct rolling (TSCDR), it is necessary to control the grain-size prior to the onset of thermomechanical processing. In the companion paper, Model Fe-Al Steel with Exceptional Resistance to High Temperature Coarsening. Part I: Coarsening Mechanism and Particle Pinning Effects, a new steel composition which uses a small volume fraction of austenite particles to pin the growth of delta-ferrite grains at high temperature was proposed and grain growth was studied in reheated samples. This paper will focus on the development of a simple laboratory-scale setup to simulate thin-slab casting of the newly developed steel and demonstrate the potential for grain size control under industrial conditions. Steel bars with different diameters are briefly dipped into the molten steel to create a shell of solidified material. These are then cooled down to room temperature at different cooling rates. During cooling, the austenite particles nucleate along the delta-ferrite grain boundaries and greatly retard grain growth. With decreasing temperature, more austenite particles precipitate, and grain growth can be completely arrested in the holding furnace. Additional applications of the model alloy are discussed including grain-size control in the heat affected zone in welds and grain-growth resistance at high temperature.

  9. Probabilistic Methods for Model Validation 

    E-print Network

    Halder, Abhishek

    2014-05-01

    This dissertation develops a probabilistic method for validation and verification (V&V) of uncertain nonlinear systems. The existing systems-control literature on model and controller V&V either deals with linear systems with ...

  10. ON PREDICTION AND MODEL VALIDATION

    SciTech Connect

    M. MCKAY; R. BECKMAN; K. CAMPBELL

    2001-02-01

    Quantification of prediction uncertainty is an important consideration when using mathematical models of physical systems. This paper proposes a way to incorporate ''validation data'' in a methodology for quantifying uncertainty of the mathematical predictions. The report outlines a theoretical framework.

  11. MODEL VALIDATION REPORT FOR THE HOUSATONIC RIVER

    EPA Science Inventory

    The Model Validation Report will present a comparison of model validation runs to existing data for the model validation period. The validation period spans twenty years to test the predictive capability of the model over a longer time period, similar to that which wil...

  12. Partial Validation of Multibody Program to Optimize Simulated Trajectories II (POST II) Parachute Simulation With Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Ben; Queen, Eric M.

    2002-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed. This capability uses the Program to Optimize Simulated Trajectories II (POST II). Previously, POST II had the ability to simulate multiple bodies without interacting forces. The current implementation is used for the simulation of parachute trajectories, in which the parachute and suspended bodies can be treated as rigid bodies. An arbitrary set of connecting lines can be included in the model; these lines are treated as massless spring-dampers. This paper discusses details of the connection line modeling and results of several test cases used to validate the capability.
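
    The massless spring-damper treatment of the connection lines mentioned above can be sketched as a simple tension law: a line produces force only when stretched beyond its natural length. The stiffness, damping, and geometry values below are arbitrary placeholders, not POST II inputs.

```python
import numpy as np

def line_force(p_a, v_a, p_b, v_b, natural_length, k, c):
    """Tension force exerted on body A by a massless spring-damper line
    attached between points p_a (on body A) and p_b (on body B).

    Returns the zero vector when the line is slack (lines cannot push).
    """
    p_a, v_a, p_b, v_b = map(np.asarray, (p_a, v_a, p_b, v_b))
    delta = p_b - p_a
    length = np.linalg.norm(delta)
    if length <= natural_length or length == 0.0:
        return np.zeros(3)
    unit = delta / length
    stretch = length - natural_length
    stretch_rate = np.dot(v_b - v_a, unit)
    tension = k * stretch + c * stretch_rate
    return max(tension, 0.0) * unit          # force on A points toward B when taut

# Placeholder example: parachute confluence point above a suspended body
print(line_force([0, 0, 0], [0, 0, 0], [0, 0, 12.0], [0, 0, -1.0],
                 natural_length=10.0, k=5000.0, c=50.0))
```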

  13. Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2014-01-01

    Computational fluid dynamics (CFD) software that solves the Reynolds-averaged Navier-Stokes (RANS) equations has been in routine use for more than a quarter of a century. It is currently employed not only for basic research in fluid dynamics, but also for the analysis and design processes in many industries worldwide, including aerospace, automotive, power generation, chemical manufacturing, polymer processing, and petroleum exploration. A key feature of RANS CFD is the turbulence model. Because the RANS equations are unclosed, a model is necessary to describe the effects of the turbulence on the mean flow, through the Reynolds stress terms. The turbulence model is one of the largest sources of uncertainty in RANS CFD, and most models are known to be flawed in one way or another. Alternative methods such as direct numerical simulations (DNS) and large eddy simulations (LES) rely less on modeling and hence include more physics than RANS. In DNS all turbulent scales are resolved, and in LES the large scales are resolved and the effects of the smallest turbulence scales are modeled. However, both DNS and LES are too expensive for most routine industrial usage on today's computers. Hybrid RANS-LES, which blends RANS near walls with LES away from walls, helps to moderate the cost while still retaining some of the scale-resolving capability of LES, but for some applications it can still be too expensive. Even considering its associated uncertainties, RANS turbulence modeling has proved to be very useful for a wide variety of applications. For example, in the aerospace field, many RANS models are considered to be reliable for computing attached flows. However, existing turbulence models are known to be inaccurate for many flows involving separation. Research has been ongoing for decades in an attempt to improve turbulence models for separated and other nonequilibrium flows. When developing or improving turbulence models, both verification and validation are important steps in the process. Verification insures that the CFD code is solving the equations as intended (no errors in the implementation). This is typically done either through the method of manufactured solutions (MMS) or through careful step-by-step comparisons with other verified codes. After the verification step is concluded, validation is performed to document the ability of the turbulence model to represent different types of flow physics. Validation can involve a large number of test case comparisons with experiments, theory, or DNS. Organized workshops have proved to be valuable resources for the turbulence modeling community in its pursuit of turbulence modeling verification and validation. Workshop contributors using different CFD codes run the same cases, often according to strict guidelines, and compare results. Through these comparisons, it is often possible to (1) identify codes that have likely implementation errors, and (2) gain insight into the capabilities and shortcomings of different turbulence models to predict the flow physics associated with particular types of flows. These are valuable lessons because they help bring consistency to CFD codes by encouraging the correction of faulty programming and facilitating the adoption of better models. They also sometimes point to specific areas needed for improvement in the models. In this paper, several recent workshops are summarized primarily from the point of view of turbulence modeling verification and validation. 
Furthermore, the NASA Langley Turbulence Modeling Resource website is described. The purpose of this site is to provide a central location where RANS turbulence models are documented, and test cases, grids, and data are provided. The goal of this paper is to provide an abbreviated survey of turbulence modeling verification and validation efforts, summarize some of the outcomes, and give some ideas for future endeavors in this area.
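
    As a concrete illustration of the verification step mentioned above, the sketch below applies the method of manufactured solutions to a one-dimensional steady diffusion solver and checks the observed order of accuracy on two grids. The manufactured solution and the second-order finite-difference scheme are generic choices, unrelated to any particular CFD code.

```python
import numpy as np

def solve_poisson(n):
    """Second-order finite-difference solution of -u'' = f on (0, 1), u(0)=u(1)=0,
    with the source term manufactured from u_exact(x) = sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)              # interior nodes
    f = np.pi ** 2 * np.sin(np.pi * x)           # manufactured source: -u_exact''
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))  # error vs. exact solution

e_coarse, e_fine = solve_poisson(31), solve_poisson(63)   # grid spacing halves
observed_order = np.log(e_coarse / e_fine) / np.log(2.0)
print(observed_order)   # should be close to 2 for a correctly implemented scheme
```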

  14. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models

    PubMed Central

    2012-01-01

    Gene expression data from microarrays are being applied to predict preclinical and clinical endpoints, but the reliability of these predictions has not been established. In the MAQC-II project, 36 independent teams analyzed six microarray data sets to generate predictive models for classifying a sample with respect to one of 13 endpoints indicative of lung or liver toxicity in rodents, or of breast cancer, multiple myeloma or neuroblastoma in humans. In total, >30,000 models were built using many combinations of analytical methods. The teams generated predictive models without knowing the biological meaning of some of the endpoints and, to mimic clinical reality, tested the models on data that had not been used for training. We found that model performance depended largely on the endpoint and team proficiency and that different approaches generated models of similar performance. The conclusions and recommendations from MAQC-II should be useful for regulatory agencies, study committees and independent investigators that evaluate methods for global gene expression analysis. PMID:20676074

  15. (Validity of environmental transfer models)

    SciTech Connect

    Blaylock, B.G.; Hoffman, F.O.; Gardner, R.H.

    1990-11-07

    BIOMOVS (BIOspheric MOdel Validation Study) is an international cooperative study initiated in 1985 by the Swedish National Institute of Radiation Protection to test models designed to calculate the environmental transfer and bioaccumulation of radionuclides and other trace substances. The objective of the symposium and workshop was to synthesize results obtained during Phase 1 of BIOMOVS (the first five years of the study) and to suggest new directions that might be pursued during Phase 2 of BIOMOVS. The travelers were an instrumental part of the development of BIOMOVS. This symposium allowed the travelers to present a review of past efforts at model validation and a synthesis of current activities and to refine ideas concerning future development of models and data for assessing the fate, effect, and human risks of environmental contaminants. R. H. Gardner also visited the Free University, Amsterdam, and the National Institute of Public Health and Environmental Protection (RIVM) in Bilthoven to confer with scientists about current research in theoretical ecology and the use of models for estimating the transport and effect of environmental contaminants and to learn about the European efforts to map critical loads of acid deposition.

  16. Validating Computational Models Kathleen M. Carley

    E-print Network

    Tesfatsion, Leigh

    ... calls for model validation without understanding what validation entails. ... The goal of this paper, then, is not to teach programming or even how to build models; rather, the goal is to provide ...

  17. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part I: Benchmark comparisons of WIMS-D5 and DRAGON cell and control rod parameters with MCNP5

    SciTech Connect

    Mollerach, R.; Leszczynski, F.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which has been progressing slowly during the last ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO{sub 2} rods with an active length of 530 cm. For the reactor physics area, a revision and update of calculation methods and models (cell, supercell and reactor) was recently carried out covering cell, supercell (control rod) and core calculations. As a validation of the new models some benchmark comparisons were done with Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems based on a slightly idealized model of the Atucha-I core obtained with the WIMS-D5 and DRAGON codes with MCNP5 results. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium. (authors)

  18. Validation for a recirculation model.

    PubMed

    LaPuma, P T

    2001-04-01

    Recent Clean Air Act regulations designed to reduce volatile organic compound (VOC) emissions have placed new restrictions on painting operations. Treating large volumes of air which contain dilute quantities of VOCs can be expensive. Recirculating some fraction of the air allows an operator to comply with environmental regulations at reduced cost. However, there is a potential impact on employee safety because indoor pollutants will inevitably increase when air is recirculated. A computer model was developed, written in Microsoft Excel 97, to predict compliance costs and indoor air concentration changes with respect to changes in the level of recirculation for a given facility. The model predicts indoor air concentrations based on product usage and mass balance equations. This article validates the recirculation model using data collected from a C-130 aircraft painting facility at Hill Air Force Base, Utah. Air sampling data and air control cost quotes from vendors were collected for the Hill AFB painting facility and compared to the model's predictions. The model's predictions for strontium chromate and isocyanate air concentrations were generally between the maximum and minimum air sampling points with a tendency to predict near the maximum sampling points. The model's capital cost predictions for a thermal VOC control device ranged from a 14 percent underestimate to a 50 percent overestimate of the average cost quotes. A sensitivity analysis of the variables is also included. The model is demonstrated to be a good evaluation tool in understanding the impact of recirculation. PMID:11318387
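
    The mass-balance idea behind the model can be sketched with a steady-state, well-mixed single-zone balance in which a fraction of the exhaust air is recirculated through a control device. The variable names, the filter-efficiency treatment, and the numbers below are illustrative assumptions, not the model's actual equations.

```python
def steady_state_concentration(G, Q_total, recirc_fraction, removal_efficiency):
    """Steady-state, well-mixed indoor concentration with partial recirculation.

    G                  : contaminant generation rate (mg/min)
    Q_total            : total supply airflow through the booth (m^3/min)
    recirc_fraction    : fraction of the exhaust that is recirculated (0-1)
    removal_efficiency : fraction of contaminant removed per pass (0-1)

    Balance: G + Q_r*(1 - eta)*C = Q_total*C  =>  C = G / (Q_oa + eta*Q_r)
    """
    Q_r = recirc_fraction * Q_total
    Q_oa = Q_total - Q_r
    return G / (Q_oa + removal_efficiency * Q_r)

# Illustrative comparison: no recirculation vs. 70 % recirculation through a
# device that removes 95 % of the VOCs per pass.
print(steady_state_concentration(G=500.0, Q_total=1000.0,
                                 recirc_fraction=0.0, removal_efficiency=0.95))
print(steady_state_concentration(G=500.0, Q_total=1000.0,
                                 recirc_fraction=0.7, removal_efficiency=0.95))
```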

  19. Obstructive lung disease models: what is valid?

    PubMed

    Ferdinands, Jill M; Mannino, David M

    2008-12-01

    Use of disease simulation models has led to scrutiny of model methods and demand for evidence that models credibly simulate health outcomes. We sought to describe recent obstructive lung disease simulation models and their validation. Medline and EMBASE were used to identify obstructive lung disease simulation models published from January 2000 to June 2006. Publications were reviewed to assess model attributes and four types of validation: first-order (verification/debugging), second-order (comparison with studies used in model development), third-order (comparison with studies not used in model development), and predictive validity. Six asthma and seven chronic obstructive pulmonary disease models were identified. Seven (54%) models included second-order validation, typically by comparing observed outcomes to simulations of source study cohorts. Seven (54%) models included third-order validation, in which modeled outcomes were usually compared qualitatively for agreement with studies independent of the model. Validation endpoints included disease prevalence, exacerbation, and all-cause mortality. Validation was typically described as acceptable, despite near-universal absence of criteria for judging adequacy of validation. Although over half of recent obstructive lung disease simulation models report validation, inconsistencies in validation methods and lack of detailed reporting make assessing adequacy of validation difficult. For simulation modeling to be accepted as a tool for evaluating clinical and public health programs, models must be validated to credibly simulate health outcomes of interest. Defining the required level of validation and providing guidance for quantitative assessment and reporting of validation are important future steps in promoting simulation models as practical decision tools. PMID:19353353

  20. Atlas II and IIA analyses and environments validation

    NASA Astrophysics Data System (ADS)

    Martin, Richard E.

    1995-06-01

    General Dynamics has now flown all four versions of the Atlas commercial launch vehicle, which cover a payload weight capability to geosynchronous transfer orbit (GTO) in the range of 5000-8000 lb. The key analyses to set design and environmental test parameters for the vehicle modifications and the ground and flight test data that validated them were prepared in paper IAF-91-170 for the first version, Atlas I. This paper presents similar data for the next two versions, Atlas II and IIA. The Atlas II has propellant tanks lengthened by 12 ft and is boosted by MA-5A rocket engines uprated to 474,000 lb liftoff thrust. GTO payload capability is 6225 lb with the 11-ft fairing. The Atlas IIA is an Atlas II with uprated RL10A-4 engines on the lengthened Centaur II upper stage. The two 20,800 lb thrust, 449 s specific impulse engines with an optional extendible nozzle increase payload capability to GTO to 6635 lb. The paper describes design parameters and validated test results for many other improvements that have generally provided greater capability at less cost, weight and complexity and better reliability. Those described include: moving the MA-5A start system to the ground, replacing the vernier engines with a simple 50 lb thrust on-off hydrazine roll control system, addition of a POGO suppressor, replacement of Centaur jettisonable insulation panels with fixed foam, a new inertial navigation unit (INU) that combines in one package a ring-laser gyro based strapdown guidance system with two MIL-STD-1750A processors, redundant MIL-STD-1553 data bus interfaces, robust Ada-based software and a new Al-Li payload adapter. Payload environment is shown to be essentially unchanged from previous Atlas vehicles. Validation of load, stability, control and pressurization requirements for the larger vehicle is discussed. All flights to date (five Atlas II, one Atlas IIA) have been successful in launching satellites for EUTELSAT, the U.S. Air Force and INTELSAT. Significant design parameters validated by these flights are presented. Particularly noteworthy has been the performance of the INU, which has provided average GTO insertion errors of only 10 miles apogee, 0.2 miles perigee and 0.004 degrees inclination. It is concluded that Atlas II/IIA have successfully demonstrated probably the largest number of current state-of-the-art components of any expendable launch vehicle flying today.

  1. Validity of Thin Shell Models

    NASA Astrophysics Data System (ADS)

    Gao, Sijie

    The purpose of this paper is to test the validity of the thin shell formalism. Firstly, we construct a dust thick shell collapsing to a Schwarzschild black hole. From this exact solution, we show that the two sides of the shell satisfy different equations of motion. Moreover, we show that the inner side and the outer side always cross each other right after the formation of the thin shell, causing a breakdown of the model. Secondly, we establish a class of wormholes with non-zero thickness and extremal Reissner-Nordström exterior. In the thin shell limit, we find that the surface stress-energy tensor contains the contribution of electromagnetic field, which contradicts the assumption in previous literature.
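
    For reference, the thin shell formalism being tested relates the jump in extrinsic curvature across the shell to its surface stress-energy. A standard form of the Israel junction condition (in geometrized units, G = c = 1) is shown below, stated here from the general literature rather than from this paper.

```latex
% Israel junction condition for a thin shell with induced metric h_{ab}:
% [X] denotes the jump of X across the shell, K_{ab} the extrinsic curvature.
S_{ab} = -\frac{1}{8\pi}\left( [K_{ab}] - [K]\, h_{ab} \right)
```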

  2. Assimilation of Geosat Altimetric Data in a Nonlinear Shallow-Water Model of the Indian Ocean by Adjoint Approach. Part II: Some Validation and Interpretation of the Assimilated Results

    NASA Technical Reports Server (NTRS)

    Greiner, Eric; Perigaud, Claire

    1996-01-01

    This paper examines the results of assimilating Geosat sea level variations relative to the November 1986-November 1988 mean reference, in a nonlinear reduced-gravity model of the Indian Ocean. Data have been assimilated during one year starting in November 1986 with the objective of optimizing the initial conditions and the yearly averaged reference surface. The thermocline slope simulated by the model with or without assimilation is validated by comparison with the signal, which can be derived from expendable bathythermograph (XBT) measurements performed in the Indian Ocean at that time. The topography simulated with assimilation in November 1986 is in very good agreement with the hydrographic data. The slopes corresponding to the South Equatorial Current and to the South Equatorial Countercurrent are better reproduced with assimilation than without during the first nine months. The whole circulation of the cyclonic gyre south of the equator is then strongly intensified by assimilation. Another assimilation experiment is run over the following year starting in November 1987. The difference between the two yearly mean surfaces simulated with assimilation is in excellent agreement with Geosat. In the southeastern Indian Ocean, the correction to the yearly mean dynamic topography due to assimilation over the second year is negatively correlated to the one the year before. This correction is also in agreement with hydrographic data. It is likely that the signal corrected by assimilation is not only due to wind error, because simulations driven by various wind forcings present the same features over the two years. Model simulations run with a prescribed throughflow transport anomaly indicate that assimilation is instead correcting the interior of the model domain for inadequate boundary conditions with the Pacific.
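
    The adjoint (variational) assimilation outlined above amounts to minimizing a cost function that penalizes departure from the background state and misfit to the altimetric observations. The toy sketch below uses a linear observation operator and small matrices purely for illustration; the real system optimizes initial conditions of a nonlinear shallow-water model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

n, m = 4, 6                          # state and observation dimensions (toy sizes)
xb = np.zeros(n)                     # background (prior) state
B_inv = np.eye(n)                    # inverse background-error covariance
R_inv = np.eye(m) / 0.1**2           # inverse observation-error covariance
H = rng.normal(size=(m, n))          # linear observation operator (stand-in)
x_true = rng.normal(size=n)
y = H @ x_true + rng.normal(0, 0.1, m)   # synthetic sea-level observations

def cost(x):
    db = x - xb
    dy = H @ x - y
    return 0.5 * db @ B_inv @ db + 0.5 * dy @ R_inv @ dy

def gradient(x):
    # In a real system this gradient is supplied by the adjoint model.
    return B_inv @ (x - xb) + H.T @ R_inv @ (H @ x - y)

analysis = minimize(cost, xb, jac=gradient, method="L-BFGS-B").x
print(analysis, x_true)              # analysis should move toward the true state
```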

  3. Validation studies of a computational model for molten material freezing

    SciTech Connect

    Sawada, Tetsuo; Ninokata, Hisashi; Shimizu, Akinao

    1996-02-01

    Validation studies are described of a computational model for the freezing of molten core materials under core disruptive accident conditions of fast breeder reactors. A series of out-of-pile experiments named SIMBATH, performed at Forschungszentrum Karlsruhe in Germany, has already been analyzed with the SIMMER-II code. In the current study, TRAN simulation tests in the SIMBATH facility are analyzed by SIMMER-II to validate its modeling of molten material freezing. The original TRAN experiments were performed at Sandia National Laboratories to examine the freezing behavior of molten UO{sub 2} injected into annular channels. In the TRAN simulation experiments of the SIMBATH series, similar freezing phenomena are investigated for molten thermite, a mixture of Al{sub 2}O{sub 3} and iron, instead of UO{sub 2}. Two typical TRAN simulation tests are analyzed that aim at clarifying the applicability of the code to the freezing process during the experiments. The distribution of molten materials deposited in the test section according to the experimental measurements and in calculations by SIMMER-II is compared. These studies confirm that the conduction-limited freezing model combined with the rudimentary bulk freezing (particle-jamming) model of SIMMER-II could be used to reproduce the TRAN simulation experiments satisfactorily. This finding encourages the extrapolation of the results of previous validation research for SIMMER-II based on other SIMBATH tests to reactor case analyses. The calculations by SIMMER-II suggest that further improvements of the model, such as freezing on a convex surface of pin cladding and the scraping of crusts, would make possible more accurate simulation of freezing phenomena.
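    For orientation only (a textbook idealization, not the SIMMER-II conduction-limited model itself), crust growth on a cold wall in a purely conduction-limited regime follows the classical Stefan scaling,

```latex
\delta(t) = 2\lambda\sqrt{\alpha_s t},
\qquad
\lambda\,e^{\lambda^{2}}\operatorname{erf}(\lambda) = \frac{c_{p,s}\,(T_m - T_w)}{L\sqrt{\pi}},
```

    where δ is the crust thickness, α_s and c_{p,s} are the solid thermal diffusivity and specific heat, T_m the freezing temperature, T_w the wall temperature, and L the latent heat of fusion; bulk (particle-jamming) freezing departs from this square-root-of-time behaviour.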

  4. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.
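    The force interaction is described above only qualitatively; as a minimal sketch (not the POST 2 implementation, with assumed stiffness and damping values), a flexible suspension line between the parachute and a suspended body can be modeled as a tension-only spring-damper:

```python
import numpy as np

def line_force(p_a, p_b, v_a, v_b, L0, k, c):
    """Tension-only spring-damper force on body A from a line to body B.

    p_a, p_b : 3-vectors, attachment point positions
    v_a, v_b : 3-vectors, attachment point velocities
    L0       : unstretched line length
    k, c     : line stiffness and damping (assumed values)
    """
    d = p_b - p_a
    L = np.linalg.norm(d)
    if L < 1e-12:
        return np.zeros(3)
    u = d / L                       # unit vector from A toward B
    stretch = L - L0
    if stretch <= 0.0:              # a slack line carries no compression
        return np.zeros(3)
    rate = np.dot(v_b - v_a, u)     # elongation rate along the line
    tension = k * stretch + c * rate
    return max(tension, 0.0) * u    # force on A points toward B when taut

# Example: hypothetical 30 m line stretched by 0.5 m
f = line_force(np.zeros(3), np.array([0.0, 0.0, 30.5]),
               np.zeros(3), np.zeros(3), L0=30.0, k=2.0e4, c=50.0)
print(f)
```

    A slack line transmits no force, which is what allows the bodies to separate and then re-tension during deployment transients.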

  5. NASA GSFC CCMC Recent Model Validation Activities

    NASA Technical Reports Server (NTRS)

    Rastaetter, L.; Pulkkinen, A.; Taktakishvili, A.; Macneice, P.; Shim, J. S.; Zheng, Yihua; Kuznetsova, M. M.; Hesse, M.

    2012-01-01

    The Community Coordinated Modeling Center (CCMC) holds the largest assembly of state-of-the-art physics-based space weather models developed by the international space physics community. In addition to providing the community easy access to these modern space research models to support science research, another primary goal is to test and validate models for transition from research to operations. In this presentation, we provide an overview of the space science models available at CCMC. Then we will focus on the community-wide model validation efforts led by CCMC in all domains of the Sun-Earth system and the internal validation efforts at CCMC to support space weather services/operations provided by its sibling organization, the NASA GSFC Space Weather Center (http://swc.gsfc.nasa.gov). We will also discuss our efforts in operational model validation in collaboration with NOAA/SWPC.

  6. Factorial validity and measurement invariance across intelligence levels and gender of the overexcitabilities questionnaire-II (OEQ-II).

    PubMed

    Van den Broeck, Wim; Hofmans, Joeri; Cooremans, Sven; Staels, Eva

    2014-03-01

    The concept of overexcitability, derived from Dabrowski's theory of personality development, offers a promising approach for the study of the developmental dynamics of giftedness. The present study aimed at (a) examining the factorial structure of the Overexcitabilities Questionnaire-II scores (OEQ-II) and (b) testing measurement invariance of these scores across intelligence and gender. A sample of 641 Dutch-speaking adolescents from 11 to 15 years old, 363 girls and 278 boys, participated in this study. Results showed that a model without cross-loadings did not fit the data well (using confirmatory factor analysis), whereas a factor model in which all cross-loadings were included yielded fit statistics that were in support of the factorial structure of the OEQ-II scores (using exploratory structural equation modeling). Furthermore, our findings supported the assumption of (partial) strict measurement invariance of the OEQ-II scores across intelligence levels and across gender. Such levels of measurement invariance allow valid comparisons between factor means and factor relationships across groups. In particular, the gifted group scored significantly higher on intellectual and sensual overexcitability (OE) than the nongifted group, girls scored higher on emotional and sensual OE than boys, and boys scored higher on intellectual and psychomotor OE than girls. PMID:24079958

  7. Ecological Validity of the Conners' Continuous Performance Test II in a School-Based Sample

    ERIC Educational Resources Information Center

    Weis, Robert; Totten, Sara J.

    2004-01-01

    The ecological validity of the Conners' Continuous Performance Test II (CPT-II) was examined using a sample of 206 first- and second-grade children. Children's CPT-II scores were correlated with observations of inattentive/hyperactive behavior during CPT-II administration, observations of children's behavior during analogue academic task,…

  8. Algorithm for model validation: theory and applications.

    PubMed

    Sornette, D; Davis, A B; Ide, K; Vixie, K R; Pisarenko, V; Kamm, J R

    2007-04-17

    Validation is often defined as the process of determining the degree to which a model is an accurate representation of the real world from the perspective of its intended uses. Validation is crucial as industries and governments depend increasingly on predictions by computer models to justify their decisions. We propose to formulate the validation of a given model as an iterative construction process that mimics the often implicit process occurring in the minds of scientists. We offer a formal representation of the progressive build-up of trust in the model. Thus, we replace static claims on the impossibility of validating a given model by a dynamic process of constructive approximation. This approach is better adapted to the fuzzy, coarse-grained nature of validation. Our procedure factors in the degree of redundancy versus novelty of the experiments used for validation as well as the degree to which the model predicts the observations. We illustrate the methodology first with the maturation of quantum mechanics as the arguably best established physics theory and then with several concrete examples drawn from some of our primary scientific interests: a cellular automaton model for earthquakes, a multifractal random walk model for financial time series, an anomalous diffusion model for solar radiation transport in the cloudy atmosphere, and a computational fluid dynamics code for the Richtmyer-Meshkov instability. PMID:17420476

  9. Inert doublet model and LEP II limits

    SciTech Connect

    Lundstroem, Erik; Gustafsson, Michael; Edsjoe, Joakim

    2009-02-01

    The inert doublet model is a minimal extension of the standard model introducing an additional SU(2) doublet with new scalar particles that could be produced at accelerators. While there exists no LEP II analysis dedicated for these inert scalars, the absence of a signal within searches for supersymmetric neutralinos can be used to constrain the inert doublet model. This translation however requires some care because of the different properties of the inert scalars and the neutralinos. We investigate what restrictions an existing DELPHI Collaboration study of neutralino pair production can put on the inert scalars and discuss the result in connection with dark matter. We find that although an important part of the inert doublet model parameter space can be excluded by the LEP II data, the lightest inert particle still constitutes a valid dark matter candidate.

  10. Description and validation of realistic and structured endourology training model

    PubMed Central

    Soria, Federico; Morcillo, Esther; Sanz, Juan Luis; Budia, Alberto; Serrano, Alvaro; Sanchez-Margallo, Francisco M

    2014-01-01

    Purpose: The aim of the present study was to validate a model of training, which combines the use of non-biological and ex vivo biological bench models, as well as the modelling of urological injuries for endourological treatment in a porcine animal model. Material and Methods: A total of 40 participants took part in this study. The duration of the activity was 16 hours. The model of training was divided into 3 levels: level I, concerning the acquisition of basic theoretical knowledge; level II, involving practice with the bench models; and level III, concerning practice in the porcine animal model. First, trainees practiced with animals without using an injury model (ureteroscopy, management of guide wires and catheters under fluoroscopic control) and later practiced in a lithiasic animal model. During the activity, an evaluation of the face and content validity was conducted, as well as constructive validation provided by the trainees versus experts. Evolution of the variables during the course within each group was analysed using the Student’s t test for paired samples, while comparisons between groups were performed using the Student’s t test for unpaired samples. Results: The assessments of face and content validity were satisfactory. The constructive validation, “within one trainee”, shows that there were statistically significant differences between the first time the trainees performed the tasks in the animal model and the last time, mainly in the knowledge of procedure and Holmium laser lithotripsy categories. At the beginning of level III, there are also statistically significant differences between the trainees’ scores and the experts’ scores. Conclusions: This realistic Endourology training model allows the acquisition of knowledge and technical and non-technical skills as evidenced by the face, content and constructive validity. Structured use of bench models (biological and non-biological) and animal model simulators increases the endourological basic skills. PMID:25374928
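    As an illustrative sketch of the statistical comparisons described above (hypothetical scores, not the study's data), SciPy's paired and unpaired Student's t tests can be applied as follows:

```python
import numpy as np
from scipy import stats

# Hypothetical global rating scores (not the study's data)
trainee_first = np.array([2.1, 2.8, 3.0, 2.5, 2.7])   # first animal-model task
trainee_last  = np.array([3.4, 3.9, 4.1, 3.6, 3.8])   # last repetition, same trainees
experts       = np.array([4.5, 4.7, 4.6, 4.8])

# Paired t test: evolution within the same trainees across the course
t_paired, p_paired = stats.ttest_rel(trainee_first, trainee_last)

# Unpaired (independent) Student's t test: trainees at the start of level III vs. experts
t_ind, p_ind = stats.ttest_ind(trainee_first, experts)

print(f"paired: t={t_paired:.2f}, p={p_paired:.3f}; "
      f"independent: t={t_ind:.2f}, p={p_ind:.3f}")
```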

  11. An Aqueous Thermodynamic Model for the Complexation of Sodium and Strontium with Organic Chelators valid to High Ionic Strength. II. N-(2-hydroxyethyl)ethylenedinitrilotriacetic acid (HEDTA)

    SciTech Connect

    Felmy, Andrew R.; Mason, Marvin J.; Qafoku, Odeta

    2003-04-01

    This is the second paper in a two part series on the development of aqueous thermodynamic models for the complexation of Na+ and Sr2+ with organic chelators. In this paper the development of an aqueous thermodynamic model describing the effects of ionic strength, carbonate concentration, and temperature on the complexation of Sr2+ by HEDTA under basic conditions is presented. The thermodynamic model describing the Na+ interactions with the HEDTA3- chelate relies solely on the use of Pitzer ion-interaction parameters. The exclusive use of Pitzer ion-interaction parameters differs significantly from our previous model for EDTA, which required the introduction of a NaEDTA3- ion pair. Estimation of the Pitzer ion-interaction parameters for HEDTA3- and SrHEDTA- with Na+ allows the extrapolation of a standard state equilibrium constant for the SrHEDTA- species which is one order of magnitude greater than the 0.1M reference state value available in the literature. The overall model is developed from data available in the literature on apparent equilibrium constants for HEDTA protonation, the solubility of salts in concentrated HEDTA solutions, and from new data on the solubility of SrCO3(c) obtained as part of this study. The predictions of the final thermodynamic model for the Na-Sr-OH-CO3-NO3-HEDTA-H2O system are tested by application to chemical systems containing competing metal ions (i.e., Ca2+).

  12. Systematic Independent Validation of Inner Heliospheric Models

    NASA Technical Reports Server (NTRS)

    MacNeice, P. J.; Taktakishvili, Alexandre

    2008-01-01

    This presentation is the first in a series which will provide independent validation of community models of the outer corona and inner heliosphere. In this work we establish a set of measures to be used in validating this group of models. We use these procedures to generate a comprehensive set of results from the Wang-Sheeley-Arge (WSA) model which will be used as a baseline, or reference, against which to compare all other models. We also run a test of the validation procedures by applying them to a small set of results produced by the ENLIL Magnetohydrodynamic (MHD) model. In future presentations we will validate other models currently hosted by the Community Coordinated Modeling Center (CCMC), including a comprehensive validation of the ENLIL model. The Wang-Sheeley-Arge (WSA) model is widely used to model the Solar wind, and is used by a number of agencies to predict Solar wind conditions at Earth as much as four days into the future. Because it is so important to both the research and space weather forecasting communities, it is essential that its performance be measured systematically, and independently. In this paper we offer just such an independent and systematic validation. We report skill scores for the model's predictions of wind speed and IMF polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests line of sight magnetograms. For this study we generated model results for monthly magnetograms from the National Solar Observatory (SOLIS), Mount Wilson Observatory and the GONG network, spanning the Carrington rotation range from 1650 to 2068. We compare the influence of the different magnetogram sources, performance at quiet and active times, and estimate the effect of different empirical wind speed tunings. We also consider the ability of the WSA model to identify sharp transitions in wind speed from slow to fast wind. These results will serve as a baseline against which to compare future versions of the model.
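    The skill scores themselves are not defined in the abstract; a hedged sketch of two commonly used metrics (illustrative definitions that may differ from those adopted in the study) is:

```python
import numpy as np

def mse_skill_score(predicted, observed, reference):
    """Skill = 1 - MSE(model)/MSE(reference); 1 is perfect, 0 matches the reference."""
    mse_model = np.mean((predicted - observed) ** 2)
    mse_ref = np.mean((reference - observed) ** 2)
    return 1.0 - mse_model / mse_ref

def polarity_hit_rate(predicted_bx, observed_bx):
    """Fraction of samples for which the predicted IMF polarity matches the observed one."""
    return np.mean(np.sign(predicted_bx) == np.sign(observed_bx))

# Hypothetical hourly solar wind speeds (km/s) over part of a Carrington rotation
obs  = np.array([380., 420., 520., 610., 560., 450.])
wsa  = np.array([400., 430., 480., 590., 600., 470.])
clim = np.full_like(obs, obs.mean())          # climatology as the reference prediction

print(mse_skill_score(wsa, obs, clim))
```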

  13. Preparation of Power Distribution System for High Penetration of Renewable Energy Part I. Dynamic Voltage Restorer for Voltage Regulation Part II. Distribution Circuit Modeling and Validation

    NASA Astrophysics Data System (ADS)

    Khoshkbar Sadigh, Arash

    Part I: Dynamic Voltage Restorer. In present power grids, voltage sags are recognized as a serious threat and a frequently occurring power-quality problem, with costly consequences such as sensitive-load tripping and production loss. Consequently, the demand for high power quality and voltage stability has become a pressing issue. The dynamic voltage restorer (DVR), as a custom power device, is a more effective and direct solution for "restoring" the quality of voltage at its load-side terminals when the quality of voltage at its source-side terminals is disturbed. In the first part of this thesis, a DVR configuration with no need for a bulky dc-link capacitor or energy storage is proposed. This reduces the size of the DVR and increases the reliability of the circuit. In addition, the proposed DVR topology is based on a high-frequency isolation transformer, which reduces the transformer size. The proposed DVR circuit, which is suitable for both low- and medium-voltage applications, is based on dc-ac converters connected in series to split the main dc link between the inputs of the dc-ac converters. This feature makes it possible to use modular dc-ac converters and to utilize low-voltage components in these converters whenever the DVR must be used in a medium-voltage application. The proposed configuration is tested under different conditions of load power factor and grid voltage harmonics. It is shown that the proposed DVR can compensate voltage sags effectively and protect sensitive loads. Following the proposition of the DVR topology, a fundamental voltage amplitude detection method, applicable to both single- and three-phase systems for DVR applications, is proposed. The advantages of the proposed method include applicability in a distorted power grid with no need for any low-pass filter, precise and reliable detection, and simple computation and implementation without using a phase-locked loop or lookup table. The proposed method has been verified by simulation and experimental tests under various conditions covering all practical cases, such as different voltage sag depths (VSD), different points-on-wave (POW) at which the voltage sag occurs, harmonic distortion, line frequency variation, and phase jump (PJ). Furthermore, the ripple of the fundamental voltage amplitude calculated by the proposed method, and its error, are analyzed considering line frequency variation together with harmonic distortion. The best and worst detection times of the proposed method were measured as 1 ms and 8.8 ms, respectively. Finally, the proposed method has been compared with other voltage sag detection methods available in the literature. Part 2: Power System Modeling for Renewable Energy Integration. As power distribution systems evolve into more complex networks, electrical engineers have to rely on software tools to perform circuit analysis. There are dozens of powerful software tools available on the market to perform power system studies. Although their main functions are similar, there are differences in features and formatting structures to suit specific applications. This creates challenges for transferring power system circuit model data (PSCMD) between different software packages and rebuilding the same circuit in the second software environment. The objective of this part of the thesis is to develop a Unified Platform (UP) to facilitate transferring PSCMD among different software packages and relieve the challenges of the circuit model conversion process. 
UP uses a commonly available spreadsheet file with a defined format, to which any source software can write data and from which any destination software can read data, via a script-based application called the PSCMD transfer application. The main considerations in developing the UP are to minimize manual intervention and to import a one-line diagram into the destination software, or export it from the source software, with all details needed to allow load flow, short-circuit, and other analyses. In this study, ETAP, OpenDSS, and GridLab-D are considered, and PSCMD trans
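    Returning to the sag-detection theme of Part I above: the thesis' algorithm is not reproduced here, but a common PLL-free way to track the fundamental amplitude of a single-phase voltage, sketched below, uses the signal and its quarter-cycle-delayed copy as quadrature components. It assumes the line frequency stays near nominal; frequency drift and harmonics degrade it, which is precisely what a more robust detection method must handle.

```python
import numpy as np

def fundamental_amplitude(v, fs, f_line=50.0):
    """Estimate the fundamental amplitude of a single-phase voltage signal.

    Uses v(t) and v(t - T/4) as quadrature components, so no PLL or lookup
    table is required.  Generic textbook method, not the detection scheme
    proposed in the thesis; assumes the line frequency is close to f_line.
    """
    quarter = int(round(fs / f_line / 4.0))       # samples in a quarter period
    v_delayed = v[:-quarter]
    v_now = v[quarter:]
    return np.sqrt(v_now ** 2 + v_delayed ** 2)   # instantaneous amplitude estimate

# Example: 30% sag starting halfway through a 0.2 s record sampled at 10 kHz
fs, f = 10_000, 50.0
t = np.arange(0, 0.2, 1.0 / fs)
amp = np.where(t < 0.1, 1.0, 0.7)
v = amp * np.sin(2 * np.pi * f * t)
est = fundamental_amplitude(v, fs, f)
print(est.min(), est.max())   # roughly 0.7 and 1.0
```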

  14. Validation of the Hot Strip Mill Model

    SciTech Connect

    Richard Shulkosky; David Rosberg; Jerrud Chapman

    2005-03-30

    The Hot Strip Mill Model (HSMM) is an off-line, PC based software originally developed by the University of British Columbia (UBC) and the National Institute of Standards and Technology (NIST) under the AISI/DOE Advanced Process Control Program. The HSMM was developed to predict the temperatures, deformations, microstructure evolution and mechanical properties of steel strip or plate rolled in a hot mill. INTEG process group inc. undertook the current task of enhancing and validating the technology. With the support of 5 North American steel producers, INTEG process group tested and validated the model using actual operating data from the steel plants and enhanced the model to improve prediction results.

  15. Minimal type II seesaw model

    SciTech Connect

    Gu Peihong; Zhang He; Zhou Shun

    2006-10-01

    We propose a minimal type II seesaw model by introducing only one right-handed neutrino besides the SU(2){sub L} triplet Higgs to the standard model. In the usual type II seesaw models with several right-handed neutrinos, the contributions of the right-handed neutrinos and the triplet Higgs to the CP asymmetry, which stems from the decay of the lightest right-handed neutrino, are proportional to their respective contributions to the light neutrino mass matrix. However, in our minimal type II seesaw model, this CP asymmetry is just given by the one-loop vertex correction involving the triplet Higgs, even though the contribution of the triplet Higgs does not dominate the light neutrino masses. For illustration, the Fritzsch-type lepton mass matrices are considered.
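    For context, in standard (textbook) notation the type II seesaw light-neutrino mass matrix receives both a triplet and a right-handed-neutrino contribution; the normalization below is conventional and may differ from the paper's:

```latex
m_\nu = m_\nu^{\rm II} + m_\nu^{\rm I}
      = Y_\Delta\, v_\Delta \;-\; \frac{v^{2}}{2}\, Y_\nu\, M_R^{-1}\, Y_\nu^{\mathsf T},
```

    where Y_Δ and v_Δ are the triplet Yukawa matrix and vacuum expectation value, Y_ν the Dirac Yukawa coupling, v the electroweak vacuum expectation value, and M_R the right-handed neutrino mass; in the minimal model M_R is a single mass scale and the second term is a rank-one matrix.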

  16. Lidar validation of SAGE II aerosol measurements after the 1991 Mount Pinatubo eruption

    E-print Network

    Robock, Alan

    After the 1991 Mount Pinatubo eruption, the Stratospheric Aerosol and Gas Experiment (SAGE) II instrument made extensive aerosol extinction retrievals using the limb-viewing technique. In regions of high aerosol loading, SAGE II was not able to make measurements, resulting in large information gaps

  17. Validity of the Sleep Subscale of the Diagnostic Assessment for the Severely Handicapped-II (DASH-II)

    ERIC Educational Resources Information Center

    Matson, Johnny L.; Malone, Carrie J.

    2006-01-01

    Currently there are no available sleep disorder measures for individuals with severe and profound intellectual disability. We, therefore, attempted to establish the external validity of the "Diagnostic Assessment for the Severely Handicapped-II" (DASH-II) sleep subscale by comparing daily observational sleep data with the responses of direct care…

  18. Systematic Independent Validation of Inner Heliospheric Models

    NASA Astrophysics Data System (ADS)

    MacNeice, P. J.; Taktakishvili, A.

    2008-12-01

    This presentation is the first in a series which will provide independent validation of community models of the outer corona and inner heliosphere. In this work we establish a set of measures to be used in validating this group of models. We use these procedures to generate a comprehensive set of results from the Wang-Sheeley-Arge (WSA) model which will be used as a baseline, or reference, against which to compare all other models. We also run a test of the validation procedures by applying them to a small set of results produced by the ENLIL Magnetohydrodynamic (MHD) model. In future presentations we will validate other models currently hosted by the Community Coordinated Modeling Center (CCMC), including a comprehensive validation of the ENLIL model. The Wang-Sheeley-Arge (WSA) model is widely used to model the Solar wind, and is used by a number of agencies to predict Solar wind conditions at Earth as much as four days into the future. Because it is so important to both the research and space weather forecasting communities, it is essential that its performance be measured systematically, and independently. In this paper we offer just such an independent and systematic validation. We report skill scores for the model's predictions of wind speed and IMF polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests line of sight magnetograms. For this study we generated model results for monthly magnetograms from the National Solar Observatory (SOLIS), Mount Wilson Observatory and the GONG network, spanning the Carrington rotation range from 1650 to 2068. We compare the influence of the different magnetogram sources, performance at quiet and active times, and estimate the effect of different empirical wind speed tunings. We also consider the ability of the WSA model to identify sharp transitions in wind speed from slow to fast wind. These results will serve as a baseline against which to compare future versions of the model as well as the current and future generation of MHD models under development for use in forecasting.

  19. Feature extraction for structural dynamics model validation

    SciTech Connect

    Hemez, Francois; Farrar, Charles; Park, Gyuhae; Nishio, Mayuko; Worden, Keith; Takeda, Nobuo

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing those response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered were sensitivity, dimensionality, type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a comparison method of multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must not only consider the sensitivity of the features being used, but also the correlation of the parameters being compared.
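    A minimal sketch of the outlier-detection step described above (synthetic feature vectors, not the study's data): the Mahalanobis distance of each candidate feature vector is computed with respect to a reference set of feature vectors extracted from experimental data.

```python
import numpy as np

def mahalanobis_distances(reference_features, candidates):
    """Distance of each candidate feature vector to the reference distribution.

    reference_features : (n_samples, n_features) array from experimental data
    candidates         : (m, n_features) array, e.g. from numerical simulations
    """
    mean = reference_features.mean(axis=0)
    cov = np.cov(reference_features, rowvar=False)
    cov_inv = np.linalg.inv(cov)
    diff = candidates - mean
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

# Synthetic example: 200 reference feature vectors, 3 simulated candidates
rng = np.random.default_rng(0)
ref = rng.normal(size=(200, 4))
cand = np.array([[0.1, 0.0, -0.2, 0.3],     # consistent with the reference data
                 [5.0, 5.0, 5.0, 5.0],      # likely flagged as an outlier
                 [0.5, -0.4, 0.2, 0.1]])
print(mahalanobis_distances(ref, cand))
```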

  20. Structural system identification: Structural dynamics model validation

    SciTech Connect

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  1. Evaluation (not validation) of quantitative models.

    PubMed Central

    Oreskes, N

    1998-01-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid. Modelers and policymakers must continue to work toward finding effective ways to evaluate and judge the quality of their models, and to develop appropriate terminology to communicate these judgments to the public whose health and safety may be at stake. PMID:9860904

  2. Evaluation (not validation) of quantitative models.

    PubMed

    Oreskes, N

    1998-12-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid. Modelers and policymakers must continue to work toward finding effective ways to evaluate and judge the quality of their models, and to develop appropriate terminology to communicate these judgments to the public whose health and safety may be at stake. PMID:9860904

  3. Validation of an Experimentally Derived Uncertainty Model

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Cox, D. E.; Balas, G. J.; Juang, J.-N.

    1996-01-01

    The results show that uncertainty models can be obtained directly from system identification data by using a minimum norm model validation approach. The error between the test data and an analytical nominal model is modeled as a combination of unstructured additive and structured input multiplicative uncertainty. Robust controllers which use the experimentally derived uncertainty model show significant stability and performance improvements over controllers designed with assumed ad hoc uncertainty levels. Use of the identified uncertainty model also allowed a strong correlation between design predictions and experimental results.
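    A hedged sketch of the uncertainty structure described (generic robust-control notation; the paper's weights and normalization may differ): the identified system is written as the nominal model perturbed by a structured input-multiplicative block and an unstructured additive block,

```latex
P(s) = P_0(s)\bigl(I + W_i(s)\,\Delta_i(s)\bigr) + W_a(s)\,\Delta_a(s),
\qquad \|\Delta_i\|_\infty \le 1,\quad \|\Delta_a\|_\infty \le 1,
```

    and minimum-norm model validation then seeks the smallest perturbations (and noise) consistent with the identification data.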

  4. VALIDATION OF IMPROVED 3D ATR MODEL

    SciTech Connect

    Soon Sam Kim; Bruce G. Schnitzler

    2005-11-01

    A full-core Monte Carlo based 3D model of the Advanced Test Reactor (ATR) was previously developed. [1] An improved 3D model has been developed by the International Criticality Safety Benchmark Evaluation Project (ICSBEP) to eliminate homogeneity of fuel plates of the old model, incorporate core changes into the new model, and to validate against a newer, more complicated core configuration. This new 3D model adds capability for fuel loading design and azimuthal power peaking studies of the ATR fuel elements.

  5. Validating Mediator Cost Models with Disco

    E-print Network

    Tomasic, Anthony

    Hubert Naacke, Anthony Tomasic, Patrick Valduriez. Disco is a mediator system developed at INRIA for accessing heterogeneous data sources over the Internet. In Disco, mediators accept

  6. Predicting Backdrafting and Spillage for Natural-Draft Gas Combustion Appliances: Validating VENT-II

    SciTech Connect

    Rapp, Vi H.; Pastor-Perez, Albert; Singer, Brett C.; Wray, Craig P.

    2013-04-01

    VENT-II is a computer program designed to provide detailed analysis of natural draft and induced draft combustion appliance vent systems (e.g., a furnace or water heater). This program is capable of predicting house depressurization thresholds that lead to backdrafting and spillage of combustion appliances; however, validation reports of the program being applied for this purpose are not readily available. The purpose of this report is to assess VENT-II’s ability to predict combustion gas spillage events due to house depressurization by comparing VENT-II simulated results with experimental data for four appliance configurations. The results show that VENT-II correctly predicts depressurizations resulting in spillage for natural draft appliances operating in cold and mild outdoor conditions, but not for hot conditions. In the latter case, the predicted depressurizations depend on whether the vent section is defined as part of the vent connector or the common vent when setting up the model. Overall, the VENT-II solver requires further investigation before it can be used reliably to predict spillage caused by depressurization over a full year of weather conditions, especially where hot conditions occur.

  7. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    SciTech Connect

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  8. Validation of Hadronic Models in GEANT4

    SciTech Connect

    Koi, Tatsumi; Wright, Dennis H.; Folger, Gunter; Ivanchenko, Vladimir; Kossov, Mikhail; Starkov, Nikolai; Heikkinen, Aatos; Truscott, Peter; Lei, Fan; Wellisch, Hans-Peter

    2007-09-26

    Geant4 is a software toolkit for the simulation of the passage of particles through matter. It has abundant hadronic models from thermal neutron interactions to ultra relativistic hadrons. An overview of validations in Geant4 hadronic physics is presented based on thin target measurements. In most cases, good agreement is available between Monte Carlo prediction and experimental data; however, several problems have been detected which require some improvement in the models.

  9. Pain Documentation: Validation of a Reference Model.

    PubMed

    Gesner, Emily; Collins, Sarah A; Rocha, Roberto

    2015-01-01

    Over the last decade, interoperability of the Electronic Health Record (EHR) has become more of a reality. However, inconsistencies in documentation such as pain are considered a barrier to achieving this goal. In order to be able to remedy this issue, it is necessary to validate reference models that have been created based upon requirements defined by Health Level 7 (HL7), Logical Observation Identifiers Names and Codes (LOINC) and the Intermountain Clinical Element Model using external published sources and guidelines. Using pain as an example of complex and inconsistent documentation, it was found that the reference model based upon these standards is valid because the data elements identified are broad and can meet the needs of each sub-domain within the primary domain of pain. PMID:26262163

  10. Solar Sail Model Validation from Echo Trajectories

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Brickerhoff, Adam T.

    2007-01-01

    The NASA In-Space Propulsion program has been engaged in a project to increase the technology readiness of solar sails. Recently, these efforts came to fruition in the form of several software tools to model solar sail guidance, navigation and control. Furthermore, solar sails are one of five technologies competing for the New Millennium Program Space Technology 9 flight demonstration mission. The historic Echo 1 and Echo 2 balloons were comprised of aluminized Mylar, which is the near-term material of choice for solar sails. Both spacecraft, but particularly Echo 2, were in low Earth orbits with characteristics similar to the proposed Space Technology 9 orbit. Therefore, the Echo balloons are excellent test cases for solar sail model validation. We present the results of studies of Echo trajectories that validate solar sail models of optics, solar radiation pressure, shape and low-thrust orbital dynamics.
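    For reference, the idealized flat, perfectly reflecting sail gives the familiar solar-radiation-pressure acceleration below; the validated models add optical (absorption, specular/diffuse reflection) and shape corrections beyond this idealization:

```latex
\mathbf{a}_{\rm SRP} = \frac{2 P_\odot A}{m}\left(\frac{R_0}{R}\right)^{2}\cos^{2}\alpha\;\hat{\mathbf{n}},
\qquad P_\odot \approx 4.56\times 10^{-6}\ \mathrm{N\,m^{-2}}\ \text{at}\ R_0 = 1\ \mathrm{AU},
```

    where A is the sail (or balloon cross-section) area, m the spacecraft mass, R the Sun distance, and α the angle between the surface normal n̂ and the Sun line.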

  11. Using airborne laser scanning profiles to validate marine geoid models

    NASA Astrophysics Data System (ADS)

    Julge, Kalev; Gruno, Anti; Ellmann, Artu; Liibusk, Aive; Oja, Tõnis

    2014-05-01

    Airborne laser scanning (ALS) is a remote sensing method which utilizes LiDAR (Light Detection And Ranging) technology. The datasets collected are important sources for a large range of scientific and engineering applications. Mostly, ALS is used to measure terrain surfaces for the compilation of Digital Elevation Models, but it can also be used in other applications. This contribution focuses on the usage of an ALS system for measuring sea surface heights and validating gravimetric geoid models over marine areas. This is based on the ability of ALS to register echoes of the LiDAR pulse from the water surface. A case study was carried out to analyse the possibilities for validating marine geoid models by using ALS profiles. A test area at the southern shores of the Gulf of Finland was selected for regional geoid validation. ALS measurements were carried out by the Estonian Land Board in spring 2013 at different altitudes and using different scan rates. The single-wavelength Leica ALS50-II laser scanner on board a small aircraft was used to determine the sea level (with respect to the GRS80 reference ellipsoid), which follows roughly the equipotential surface of the Earth's gravity field. For the validation, a high-resolution (1'x2') regional gravimetric GRAV-GEOID2011 model was used. This geoid model covers the entire area of Estonia and surrounding waters of the Baltic Sea. The fit between the geoid model and GNSS/levelling data within the Estonian dry land revealed RMS of residuals of ±1… ±2 cm. Note that such fitting validation cannot proceed over marine areas. Therefore, an ALS observation-based methodology was developed to evaluate the GRAV-GEOID2011 quality over marine areas. The accuracy of the acquired ALS datasets was analyzed, and an optimal width of the nadir corridor containing good-quality ALS data was determined. The impact of ALS scan angle range and flight altitude on the obtainable vertical accuracy was investigated as well. The quality of the point cloud is analysed by cross validation between overlapping flight lines and by comparison with tide gauge readings. The comparisons revealed that the ALS-based profiles of sea level heights agree reasonably with the regional geoid model (within the accuracy of the ALS data and after applying corrections due to sea level variations). Thus, ALS measurements are suitable for measuring sea surface heights and validating marine geoid models.

  12. Concepts of Model Verification and Validation

    SciTech Connect

    B.H.Thacker; S.W.Doebling; F.M.Hemez; M.C. Anderson; J.E. Pepin; E.A. Rodriguez

    2004-10-30

    Model verification and validation (V&V) is an enabling methodology for the development of computational models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost, and risk associated with full-scale testing of products, materials, and weapon systems. Quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions. The development of guidelines and procedures for conducting a model V&V program are currently being defined by a broad spectrum of researchers. This report reviews the concepts involved in such a program. Model V&V is a current topic of great interest to both government and industry. In response to a ban on the production of new strategic weapons and nuclear testing, the Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship Program (SSP). An objective of the SSP is to maintain a high level of confidence in the safety, reliability, and performance of the existing nuclear weapons stockpile in the absence of nuclear testing. This objective has challenged the national laboratories to develop high-confidence tools and methods that can be used to provide credible models needed for stockpile certification via numerical simulation. There has been a significant increase in activity recently to define V&V methods and procedures. The U.S. Department of Defense (DoD) Modeling and Simulation Office (DMSO) is working to develop fundamental concepts and terminology for V&V applied to high-level systems such as ballistic missile defense and battle management simulations. The American Society of Mechanical Engineers (ASME) has recently formed a Standards Committee for the development of V&V procedures for computational solid mechanics models. The Defense Nuclear Facilities Safety Board (DNFSB) has been a proponent of model V&V for all safety-related nuclear facility design, analyses, and operations. In fact, DNFSB 2002-1 recommends to the DOE and National Nuclear Security Administration (NNSA) that a V&V process be performed for all safety related software and analysis. Model verification and validation are the primary processes for quantifying and building credibility in numerical models. Verification is the process of determining that a model implementation accurately represents the developer's conceptual description of the model and its solution. Validation is the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model. Both verification and validation are processes that accumulate evidence of a model's correctness or accuracy for a specific scenario; thus, V&V cannot prove that a model is correct and accurate for all possible scenarios, but, rather, it can provide evidence that the model is sufficiently accurate for its intended use. Model V&V is fundamentally different from software V&V. Code developers developing computer programs perform software V&V to ensure code correctness, reliability, and robustness. In model V&V, the end product is a predictive model based on fundamental physics of the problem being solved. In all applications of practical interest, the calculations involved in obtaining solutions with the model require a computer code, e.g., finite element or finite difference analysis. 
Therefore, engineers seeking to develop credible predictive models critically need model V&V guidelines and procedures. The expected outcome of the model V&V process is the quantified level of agreement between experimental data and model prediction, as well as the predictive accuracy of the model. This report attempts to describe the general philosophy, definitions, concepts, and processes for conducting a successful V&V program. This objective is motivated by the need for highly accurate numerical models for making predictions to s

  13. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    USGS Publications Warehouse

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead of promoting passive or self-righteous decisions.

  14. Using Model Checking to Validate AI Planner Domain Models

    NASA Technical Reports Server (NTRS)

    Penix, John; Pecheur, Charles; Havelund, Klaus

    1999-01-01

    This report describes an investigation into using model checking to assist validation of domain models for the HSTS planner. The planner models are specified using a qualitative temporal interval logic with quantitative duration constraints. We conducted several experiments to translate the domain modeling language into the SMV, Spin and Murphi model checkers. This allowed a direct comparison of how the different systems would support specific types of validation tasks. The preliminary results indicate that model checking is useful for finding faults in models that may not be easily identified by generating test plans.

  15. Morphodynamic model validation for tropical river junctions

    NASA Astrophysics Data System (ADS)

    Dixon, Simon; Nicholas, Andrew; Sambrook Smith, Greg

    2015-04-01

    The use of morphodynamic numerical modelling as an exploratory tool for understanding tropical braided river evolution and processes is well established. However, there remains a challenge in confirming how well complex numerical models are representing reality. Complete validation of morphodynamic models is likely to prove impossible, with confirmation of model predictions inherently partial and validation only ever possible in relative terms. Within these limitations it is still vital for researchers to confirm that models are accurately representing morphodynamic processes and that model output is shown to match a variety of field observations, to increase the probability the model is performing correctly. To date the majority of morphodynamic model validation has focused on comparing planform features or statistics from a single time slice. Furthermore, these approaches have also usually only discriminated between "wet" and "dry" parts of the system with no account for vegetation. There is therefore a need for a robust method to compare the morphological evolution of tropical braided rivers to model output. In this presentation we describe a method for extracting land cover classification data from Landsat imagery using a supervised classification system. By generating land cover classifications, including vegetation, for multiple years we are then able to generate areas of erosion and deposition between years. These data allow comparison between the predictions generated by an established morphodynamic model (HSTAR) and field data between time-steps, as well as for individual time steps. This effectively allows the "dynamic" aspect of the morphodynamic model predictions to be compared to observations. We further advance these comparisons by using image analysis techniques to compare the planform, erosional, and depositional shapes generated by the model and observed in the field. Using this suite of techniques we are able to dramatically increase the number and detail of our observational data and the robustness of resulting comparisons to model predictions. By increasing our confidence in model output we are able to subsequently use numerical modelling as a heuristic tool to investigate tropical river processes and morphodynamics at large river junctions.
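    A minimal sketch of the change-detection step described above (hypothetical class codes; the supervised classification itself and the HSTAR comparison are not reproduced): given land-cover rasters from two years, cells switching to or from the water class can be binned into erosion and deposition maps.

```python
import numpy as np

# Hypothetical class codes after supervised classification of Landsat scenes
WATER, BARE, VEGETATION = 0, 1, 2

def erosion_deposition(class_y1, class_y2):
    """Return boolean erosion and deposition masks between two classified years.

    Erosion:    bare or vegetated cells in year 1 that become water in year 2.
    Deposition: water cells in year 1 that become bare or vegetated in year 2.
    """
    was_dry = np.isin(class_y1, [BARE, VEGETATION])
    was_wet = class_y1 == WATER
    erosion = was_dry & (class_y2 == WATER)
    deposition = was_wet & np.isin(class_y2, [BARE, VEGETATION])
    return erosion, deposition

# Tiny synthetic example (3 x 4 raster)
y1 = np.array([[0, 0, 1, 2],
               [0, 1, 1, 2],
               [0, 0, 2, 2]])
y2 = np.array([[0, 1, 0, 2],
               [1, 1, 0, 2],
               [0, 0, 2, 0]])
ero, dep = erosion_deposition(y1, y2)
print(ero.sum(), dep.sum())   # number of eroded and deposited cells
```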

  16. Validation of model improvements for the GISS GCM

    SciTech Connect

    Marengo, J.A.; Druyan, L.M.

    1994-08-01

    The general circulation model of the NASA/Goddard Institute for Space Studies (GISS GCM) was designed primarily for global climate change and climate sensitivity applications. The modelling group at GISS has developed new and more physically appropriate parameterizations of meteorological/hydrological processes which are being validated in an effort to improve the performance of the Model II version of the GISS GCM. This study discusses some preliminary evaluations of this testing based on multiple-year simulations at 4° latitude by 5° longitude horizontal resolution. These runs individually incorporate new formulations of the planetary boundary layer (PBL), the moist cumulus convection scheme and the ground hydrology, and compare results using B-grid and C-grid numerics. The new PBL produces a realistically stronger tropical surface circulation, while the new cumulus scheme generates more realistic distributions of tropical convection and moisture. The main impact of the more sophisticated ground hydrology model is to increase surface air temperatures. Improvements in modelled sea level pressure and rainfall features by the C-grid are somewhat offset by increases in speed excesses at the cores of the summer hemisphere westerly jets. Each modelling innovation targeted a different aspect of the climate not adequately represented by Model II. However, since the various modelling changes were tested individually, the present evaluation could not demonstrate many dramatic improvements in the simulated climates. This documentation of impacts should, however, serve as a benchmark for the validation of future simulations of the GISS GCM that combine all of the modelling improvements. 34 refs., 8 figs.

  17. VALIDITY OF THE STANDARD CROSS-CORRELATION TEST FOR MODEL STRUCTURE VALIDATION

    E-print Network

    Van den Hof, Paul

    Sippe G. Douma. Model structure validation concerns the (in)validation of the assumption that the model structure is rich enough to contain the true system. It is shown that the standard cross-correlation test itself is valid only under exactly those assumptions it is meant to verify. As a result

  18. Model validation in soft systems practice

    SciTech Connect

    Checkland, P.

    1995-03-01

    The concept of "a model" usually evokes the connotation "model of part of the real world". That is an almost automatic response. It makes sense especially in relation to the way the concept has been developed and used in natural science. Classical operational research (OR), with its scientific aspirations, and systems engineering, use the concept in the same way and in addition use models as surrogates for the real world, on which experimentation is cheap. In these fields the key feature of a model is representativeness. In soft systems methodology (SSM) models are not of part of the world; they are only relevant to debate about the real world and are used in a cyclic learning process. The paper shows how the different concepts of validation in classical OR and SSM lead to a way of sharply defining the nature of "soft OR". 21 refs.

  19. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
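    In the spirit of that approach (a lineup-style sketch with synthetic data, not the paper's implementation), the observed residual plot can be hidden among plots generated from data simulated under the fitted linear normal model; if it cannot be picked out, the plot offers no evidence against the assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Hypothetical data and a simple linear fit
x = rng.uniform(0, 10, 80)
y = 1.5 + 0.8 * x + rng.normal(scale=1.0, size=x.size)
beta = np.polyfit(x, y, 1)
fitted = np.polyval(beta, x)
resid = y - fitted
sigma = resid.std(ddof=2)

# Surround the real residual plot with plots of residuals from data
# simulated under the fitted linear-normal model ("instant experience").
fig, axes = plt.subplots(2, 4, figsize=(12, 5), sharex=True, sharey=True)
real_panel = rng.integers(axes.size)          # hide the real plot among the nulls
for i, ax in enumerate(axes.ravel()):
    if i == real_panel:
        fit_i, r = fitted, resid
    else:
        y_sim = fitted + rng.normal(scale=sigma, size=x.size)
        b_sim = np.polyfit(x, y_sim, 1)
        fit_i = np.polyval(b_sim, x)
        r = y_sim - fit_i
    ax.scatter(fit_i, r, s=8)
    ax.axhline(0.0, lw=0.8)
plt.show()
```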

  20. Validation of Space Weather Models at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, M. M.; Hesse, M.; Pulkkinen, A.; Maddox, M.; Rastaetter, L.; Berrios, D.; Zheng, Y.; MacNeice, P. J.; Shim, J.; Taktakishvili, A.; Chulaki, A.

    2011-01-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership to support the research and developmental work necessary to substantially increase space weather modeling capabilities and to facilitate the deployment of advanced models in forecasting operations. Space weather models and coupled model chains hosted at the CCMC range from the solar corona to the Earth's upper atmosphere. CCMC has developed a number of real-time modeling systems, as well as a large number of modeling and data products tailored to address the space weather needs of NASA's robotic missions. The CCMC conducts unbiased model testing and validation and evaluates model readiness for the operational environment. CCMC has been leading recent comprehensive modeling challenges under the GEM, CEDAR and SHINE programs. The presentation will focus on experience in carrying out comprehensive and systematic validation of large sets of space weather models.

  1. Turbulence Modeling Validation, Testing, and Development

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.; Huang, P. G.; Coakley, T. J.

    1997-01-01

    The primary objective of this work is to provide accurate numerical solutions for selected flow fields and to compare and evaluate the performance of selected turbulence models with experimental results. Four popular turbulence models have been tested and validated against experimental data of ten turbulent flows. The models are: (1) the two-equation k-omega model of Wilcox, (2) the two-equation k-epsilon model of Launder and Sharma, (3) the two-equation k-omega/k-epsilon SST model of Menter, and (4) the one-equation model of Spalart and Allmaras. The flows investigated are five free shear flows consisting of a mixing layer, a round jet, a plane jet, a plane wake, and a compressible mixing layer; and five boundary layer flows consisting of an incompressible flat plate, a Mach 5 adiabatic flat plate, a separated boundary layer, an axisymmetric shock-wave/boundary layer interaction, and an RAE 2822 transonic airfoil. The experimental data for these flows are well established and have been extensively used in model developments. The results are shown in the following four sections: Part A describes the equations of motion and boundary conditions; Part B describes the model equations, constants, parameters, boundary conditions, and numerical implementation; and Parts C and D describe the experimental data and the performance of the models in the free-shear flows and the boundary layer flows, respectively.

  2. Validation of cleaning procedures for highly potent drugs. II. Bisnafide.

    PubMed

    Segretario, J; Cook, S C; Umbles, C L; Walker, J T; Woodeshick, R W; Rubino, J T; Shea, J A

    1998-11-01

    The objective of this work was the development and validation of procedures designed to clean glass and stainless steel surfaces after exposure to the experimental anticancer drug, bisnafide. The cleaning procedures, using 5% acetic acid water, Alconox, and water, were validated using a wipe test and an HPLC method developed to quantitate low levels of bisnafide. The procedure developed for cleaning stainless steel is more stringent than that for glass because of the apparent greater affinity of bisnafide for stainless steel. The HPLC method is shown to be linear and reproducible (RSD 4.4% or less), with a detection limit of 4 ng/ml. Recoveries of 95.1, 83.5, and 70.0% were obtained from the wipe pads, glass plates, and stainless steel plates, respectively, at levels of approximately 0.7-1.7 ng/cm2. The cleaning procedures are shown to clean glass and stainless steel plates to less than 0.19 and 0.33 ng bisnafide/cm2, respectively. These results further demonstrate the need to fully characterize the recovery of drugs from surfaces and swabs in order to properly validate cleaning procedures. In addition, they demonstrate the potential need to develop surface-specific cleaning procedures. PMID:9834949

  3. Validation of Kp Estimation and Prediction Models

    NASA Astrophysics Data System (ADS)

    McCollough, J. P., II; Young, S. L.; Frey, W.

    2014-12-01

    Specification and forecast of geomagnetic indices is an important capability for space weather operations. The University Partnering for Operational Support (UPOS) effort at the Applied Physics Laboratory of Johns Hopkins University (JHU/APL) produced many space weather models, including the Kp Predictor and Kp Estimator. We perform a validation of index forecast products against definitive indices computed by the Deutsches GeoForschungsZentrum Potsdam (GFZ). We compute continuous predictand skill scores, as well as 2x2 contingency tables and associated scalar quantities for different index thresholds. We also compute a skill score against a nowcast persistence model. We discuss various sources of error for the models and how they may potentially be improved.
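    The abstract does not give the formulas; the sketch below shows, under assumed but conventional definitions, how a 2x2 contingency table for exceeding a Kp threshold can be scored (probability of detection, false alarm ratio, Heidke skill score) and how a mean-squared-error skill score against a persistence nowcast can be formed. The Kp values are invented placeholders, not GFZ data.

```python
# Sketch only (definitions assumed, not taken from the UPOS/JHU-APL models):
# 2x2 contingency-table scores for exceeding a Kp threshold, and an MSE-based
# skill score relative to a persistence nowcast.
import numpy as np

def contingency_scores(forecast, observed, threshold):
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits   = np.sum(f & o)
    misses = np.sum(~f & o)
    false  = np.sum(f & ~o)
    nulls  = np.sum(~f & ~o)
    pod = hits / (hits + misses)            # probability of detection
    far = false / (hits + false)            # false alarm ratio
    n = hits + misses + false + nulls
    expected = ((hits + false) * (hits + misses) + (nulls + false) * (nulls + misses)) / n
    hss = (hits + nulls - expected) / (n - expected)   # Heidke skill score
    return pod, far, hss

def persistence_skill(forecast, observed):
    """1 - MSE(model)/MSE(persistence); persistence repeats the previous observation."""
    forecast, observed = np.asarray(forecast, float), np.asarray(observed, float)
    persistence = observed[:-1]             # forecast for step k+1 is the value at step k
    mse_model = np.mean((forecast[1:] - observed[1:]) ** 2)
    mse_pers  = np.mean((persistence - observed[1:]) ** 2)
    return 1.0 - mse_model / mse_pers

kp_obs  = [2, 3, 5, 6, 4, 3, 2, 7, 5, 4]    # illustrative values only
kp_fcst = [2, 4, 5, 5, 4, 3, 3, 6, 6, 4]
print(contingency_scores(kp_fcst, kp_obs, threshold=5))
print(persistence_skill(kp_fcst, kp_obs))
```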

  4. Plasma Reactor Modeling and Validation Experiments

    NASA Technical Reports Server (NTRS)

    Meyyappan, M.; Bose, D.; Hash, D.; Hwang, H.; Cruden, B.; Sharma, S. P.; Rao, M. V. V. S.; Arnold, Jim (Technical Monitor)

    2001-01-01

    Plasma processing is a key processing step in integrated circuit manufacturing. Low pressure, high density plasma reactors are widely used for etching and deposition. The inductively coupled plasma (ICP) source has become popular recently in many processing applications. In order to accelerate equipment and process design, an understanding of the physics and chemistry, particularly of plasma power coupling, plasma and processing uniformity, and mechanisms, is important. This understanding is facilitated by comprehensive modeling and simulation as well as plasma diagnostics to provide the necessary data for model validation, which are addressed in this presentation. We have developed a complete code for simulating an ICP reactor; the model consists of transport of electrons, ions, and neutrals, Poisson's equation, and Maxwell's equations along with gas flow and energy equations. Results will be presented for chlorine and fluorocarbon plasmas and compared with data from Langmuir probes, mass spectrometry, and FTIR.

  5. Bayes factor of model selection validates FLMP.

    PubMed

    Massaro, D W; Cohen, M M; Campbell, C S; Rodriguez, T

    2001-03-01

    The fuzzy logical model of perception (FLMP; Massaro, 1998) has been extremely successful at describing performance across a wide range of ecological domains as well as for a broad spectrum of individuals. An important issue is whether this descriptive ability is theoretically informative or whether it simply reflects the model's ability to describe a wider range of possible outcomes. Previous tests and contrasts of this model with others have been adjudicated on the basis of both a root mean square deviation (RMSD) for goodness-of-fit and an observed RMSD relative to a benchmark RMSD if the model was indeed correct. We extend the model evaluation by another technique called Bayes factor (Kass & Raftery, 1995; Myung & Pitt, 1997). The FLMP maintains its significant descriptive advantage with this new criterion. In a series of simulations, the RMSD also accurately recovers the correct model under actual experimental conditions. When additional variability was added to the results, the models continued to be recoverable. In addition to its descriptive accuracy, RMSD should not be ignored in model testing because it can be justified theoretically and provides a direct and meaningful index of goodness-of-fit. We also make the case for the necessity of free parameters in model testing. Finally, using Newton's law of universal gravitation as an analogy, we argue that it might not be valid to expect a model's fit to be invariant across the whole range of possible parameter values for the model. We advocate that model selection should be analogous to perceptual judgment, which is characterized by the optimal use of multiple sources of information (e.g., the FLMP). Conclusions about models should be based on several selection criteria. PMID:11340853
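    For readers unfamiliar with the two criteria mentioned, the sketch below illustrates an RMSD goodness-of-fit and a BIC-based approximation to the Bayes factor for comparing two fitted models under Gaussian error assumptions; it is not the FLMP code, and the data and model predictions are invented placeholders.

```python
# Illustrative sketch (not the FLMP code): RMSD goodness-of-fit and a BIC-based
# approximation to the Bayes factor, assuming Gaussian errors.
import numpy as np

def rmsd(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.sqrt(np.mean((pred - obs) ** 2))

def bic(pred, obs, n_params):
    n = len(obs)
    rss = np.sum((np.asarray(obs, float) - np.asarray(pred, float)) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

def approx_bayes_factor(pred_a, pred_b, obs, k_a, k_b):
    """exp((BIC_b - BIC_a)/2): values > 1 favour model A over model B."""
    return np.exp((bic(pred_b, obs, k_b) - bic(pred_a, obs, k_a)) / 2.0)

obs    = np.array([0.10, 0.30, 0.55, 0.75, 0.90])   # observed response proportions (invented)
pred_a = np.array([0.12, 0.28, 0.52, 0.78, 0.88])   # model A predictions, 2 free parameters
pred_b = np.array([0.20, 0.35, 0.50, 0.65, 0.80])   # model B predictions, 2 free parameters
print("RMSD A:", rmsd(pred_a, obs), "RMSD B:", rmsd(pred_b, obs))
print("approximate Bayes factor A vs B:", approx_bayes_factor(pred_a, pred_b, obs, 2, 2))
```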

  6. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA s key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, from which the method based on Bayesian networks is most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for the systems when it is hard, or even impossible, to find the probability functions of the system. The method starts by a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).

  7. Integrated Medical Model Verification, Validation, and Credibility

    NASA Technical Reports Server (NTRS)

    Walton, Marlei; Kerstman, Eric; Foy, Millennia; Shah, Ronak; Saile, Lynn; Boley, Lynn; Butler, Doug; Myers, Jerry

    2014-01-01

    The Integrated Medical Model (IMM) was designed to forecast relative changes for a specified set of crew health and mission success risk metrics by using a probabilistic (stochastic process) model based on historical data, cohort data, and subject matter expert opinion. A probabilistic approach is taken since exact (deterministic) results would not appropriately reflect the uncertainty in the IMM inputs. Once the IMM was conceptualized, a plan was needed to rigorously assess input information, framework and code, and output results of the IMM, and ensure that end user requests and requirements were considered during all stages of model development and implementation. METHODS: In 2008, the IMM team developed a comprehensive verification and validation (VV) plan, which specified internal and external review criteria encompassing 1) verification of data and IMM structure to ensure proper implementation of the IMM, 2) several validation techniques to confirm that the simulation capability of the IMM appropriately represents occurrences and consequences of medical conditions during space missions, and 3) credibility processes to develop user confidence in the information derived from the IMM. When the NASA-STD-7009 (7009) was published, the IMM team updated their verification, validation, and credibility (VVC) project plan to meet 7009 requirements and include 7009 tools in reporting VVC status of the IMM. RESULTS: IMM VVC updates are compiled recurrently and include 7009 Compliance and Credibility matrices, IMM VV Plan status, and a synopsis of any changes or updates to the IMM during the reporting period. Reporting tools have evolved over the lifetime of the IMM project to better communicate VVC status. This has included refining original 7009 methodology with augmentation from the NASA-STD-7009 Guidance Document. End user requests and requirements are being satisfied as evidenced by ISS Program acceptance of IMM risk forecasts, transition to an operational model and simulation tool, and completion of service requests from a broad end user consortium including Operations, Science and Technology Planning, and Exploration Planning. CONCLUSIONS: The VVC approach established by the IMM project of combining the IMM VV Plan with 7009 requirements is comprehensive and includes the involvement of end users at every stage in IMM evolution. Methods and techniques used to quantify the VVC status of the IMM have not only received approval from the local NASA community but have also garnered recognition by other federal agencies seeking to develop similar guidelines in the medical modeling community.

  8. Validated predictive modelling of the environmental resistome

    PubMed Central

    Amos, Gregory CA; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-01-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  9. Validated predictive modelling of the environmental resistome.

    PubMed

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532
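    As a hedged illustration of the validation strategy described (fit on calibration sites, report variance explained on independent sites), the sketch below uses synthetic placeholder data rather than the River Thames measurements; the explanatory variables and their effects are assumptions for demonstration only.

```python
# Hedged sketch: variance explained on calibration sites versus independent sites.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# Columns stand in for wastewater-treatment-plant influence, land-cover extent, prior rainfall
X_cal = rng.uniform(0, 1, (40, 3))
y_cal = 0.5 * X_cal[:, 0] + 0.2 * X_cal[:, 1] + 0.1 * X_cal[:, 2] + rng.normal(0, 0.05, 40)
X_val = rng.uniform(0, 1, (12, 3))            # independent ("beta test") sites
y_val = 0.5 * X_val[:, 0] + 0.2 * X_val[:, 1] + 0.1 * X_val[:, 2] + rng.normal(0, 0.05, 12)

model = LinearRegression().fit(X_cal, y_cal)
print("variance explained, calibration sites: %.1f%%" % (100 * model.score(X_cal, y_cal)))
print("variance explained, independent sites: %.1f%%" % (100 * r2_score(y_val, model.predict(X_val))))
```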

  10. A decision support system (GesCoN) for managing fertigation in vegetable crops. Part II—model calibration and validation under different environmental growing conditions on field grown tomato

    PubMed Central

    Conversa, Giulia; Bonasia, Anna; Di Gioia, Francesco; Elia, Antonio

    2015-01-01

    The GesCoN model was evaluated for its capability to simulate growth, nitrogen uptake, and productivity of open field tomato grown under different environmental and cultural conditions. Five datasets collected from experimental trials carried out in Foggia (IT) were used for calibration and 13 datasets collected from trials conducted in Foggia, Perugia (IT), and Florida (USA) were used for validation. The goodness of fit was assessed by comparing the observed and simulated shoot dry weight (SDW) and N crop uptake during crop seasons, total dry weight (TDW), N uptake and fresh yield (TFY). In SDW model calibration, the relative RMSE values fell within the good 10–15% range, and percent BIAS (PBIAS) ranged between −11.5 and 7.4%. The Nash-Sutcliffe efficiency (NSE) was very close to the optimal value 1. In the N uptake calibration RRMSE and PBIAS were very low (7% and −1.78, respectively) and NSE close to 1. The validation of SDW (RRMSE = 16.7%; NSE = 0.96) and N uptake (RRMSE = 16.8%; NSE = 0.96) showed the good accuracy of GesCoN. A model under- or overestimation of the SDW and N uptake occurred when higher or lower N rates and/or a more or less efficient system were used compared to the calibration trial. The in-season adjustment, using the “SDWcheck” procedure, greatly improved model simulations both in the calibration and in the validation phases. The TFY prediction was quite good except in Florida, where a large overestimation (+16%) was linked to a different harvest index (0.53) compared to the cultivars used for model calibration and validation in Italian areas. The soil water content at the 10–30 cm depth appears to be well-simulated by the software, and the GesCoN proved to be able to adaptively control potential yield and DW accumulation under limited N soil availability scenarios and consequently to modify fertilizer application. The DSS can simulate SDW accumulation and N uptake of different tomato genotypes grown under Mediterranean and subtropical conditions.
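    The goodness-of-fit statistics quoted above can be reproduced with the short sketch below, using standard definitions of RRMSE, NSE, and PBIAS (sign conventions for PBIAS vary between authors); the observed and simulated values are invented, not the GesCoN datasets.

```python
# Sketch of the goodness-of-fit statistics (RRMSE, NSE, PBIAS), assumed standard definitions.
import numpy as np

def rrmse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * np.sqrt(np.mean((sim - obs) ** 2)) / np.mean(obs)

def nse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)   # positive = overestimation here

obs = np.array([1.2, 2.5, 4.1, 6.0, 7.8])   # e.g. observed shoot dry weight (t/ha), invented
sim = np.array([1.1, 2.7, 4.4, 5.7, 8.1])   # simulated values, invented
print("RRMSE = %.1f%%, NSE = %.3f, PBIAS = %.2f%%" % (rrmse(sim, obs), nse(sim, obs), pbias(sim, obs)))
```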

  11. Boron-10 Lined Proportional Counter Model Validation

    SciTech Connect

    Lintereur, Azaree T.; Siciliano, Edward R.; Kouzes, Richard T.

    2012-06-30

    The Department of Energy Office of Nuclear Safeguards (NA-241) is supporting the project “Coincidence Counting With Boron-Based Alternative Neutron Detection Technology” at Pacific Northwest National Laboratory (PNNL) for the development of an alternative neutron coincidence counter. The goal of this project is to design, build and demonstrate a boron-lined proportional tube-based alternative system in the configuration of a coincidence counter. This report discusses the validation studies performed to establish the degree of accuracy of the computer modeling methods current used to simulate the response of boron-lined tubes. This is the precursor to developing models for the uranium neutron coincidence collar under Task 2 of this project.

  12. [Catalonia's primary healthcare accreditation model: a valid model].

    PubMed

    Davins, Josep; Gens, Montserrat; Pareja, Clara; Guzmán, Ramón; Marquet, Roser; Vallès, Roser

    2014-07-01

    There are few experiences of accreditation models validated by primary care teams (EAP). The aim of this study was to detail the process of design, development, and subsequent validation of the consensus EAP accreditation model of Catalonia. An Operating Committee of the Health Department of Catalonia revised models proposed by the European Foundation for Quality Management, the Joint Commission International and the Institut Català de la Salut and proposed 628 essential standards to the technical group (25 experts in primary care and quality of care), to establish consensus standards. The consensus document was piloted in 30 EAP for the purpose of validating the contents, testing standards and identifying evidence. Finally, we conducted a survey to assess acceptance and validation of the document. The Technical Group agreed on a total of 414 essential standards. The pilot selected a total of 379. Mean compliance with the standards of the final document in the 30 EAP was 70.4%. The standards concerning results had the lowest fulfilment percentage. The survey showed that 83% of the EAP found it useful and 78% found the content of the accreditation manual suitable as a tool to assess the quality of the EAP and identify opportunities for improvement. On the downside they highlighted its complexity and laboriousness. We have a model that fits the reality of the EAP, and covers all relevant issues for the functioning of an excellent EAP. The model developed in Catalonia is easy to understand.

  13. DISCRETE EVENT MODELING IN PTOLEMY II

    E-print Network

    California at Berkeley, University of

    This report describes the discrete-event semantics and its implementation in the Ptolemy II software architecture. The discrete-event system representation is appropriate for time…

  14. Validation of HEDR models. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid.

  15. Controller validation for a validated model set Xavier Bombois(1)

    E-print Network

    Gevers, Michel

    …regions is also considered. A control law C has been designed from the model G using the usual design methods; the validated model set can be used for the design of a new control law when the available controller C is not satisfactory. The authors acknowledge the Belgian Programme on Inter-university Poles of Attraction.

  16. Concurrent and Predictive Validity of the Phelps Kindergarten Readiness Scale-II

    ERIC Educational Resources Information Center

    Duncan, Jennifer; Rafter, Erin M.

    2005-01-01

    The purpose of this research was to establish the concurrent and predictive validity of the Phelps Kindergarten Readiness Scale, Second Edition (PKRS-II; L. Phelps, 2003). Seventy-four kindergarten students of diverse ethnic backgrounds enrolled in a northeastern suburban school participated in the study. The concurrent administration of the…

  17. Diurnal ocean surface layer model validation

    NASA Technical Reports Server (NTRS)

    Hawkins, Jeffrey D.; May, Douglas A.; Abell, Fred, Jr.

    1990-01-01

    The diurnal ocean surface layer (DOSL) model at the Fleet Numerical Oceanography Center forecasts the 24-hour change in global sea surface temperature (SST). Validating the DOSL model is a difficult task due to the huge areas involved and the lack of in situ measurements. Therefore, this report details the use of satellite infrared multichannel SST imagery to provide day and night SSTs that can be directly compared to DOSL products. This water-vapor-corrected imagery has the advantages of high thermal sensitivity (0.12 °C), large synoptic coverage (nearly 3000 km across), and high spatial resolution that enables diurnal heating events to be readily located and mapped. Several case studies in the subtropical North Atlantic readily show that DOSL results during extreme heating periods agree very well with satellite-imagery-derived values in terms of the pattern of diurnal warming. The low wind and cloud-free conditions necessary for these events to occur lend themselves well to observation via infrared imagery. Thus, the normally cloud-limited aspects of satellite imagery do not come into play for these particular environmental conditions. The fact that the DOSL model does well in extreme events is beneficial from the standpoint that these cases can be associated with the destruction of the surface acoustic duct. This so-called afternoon effect happens as the afternoon warming of the mixed layer disrupts the sound channel and the propagation of acoustic energy.

  18. An upgraded track structure model: experimental validation.

    PubMed

    Grosswendt, B; Conte, V; Colautti, P

    2014-10-01

    The track nanodosemeter developed at the National Laboratories of Legnaro (LNL), Italy, allows the direct investigation of the properties of particle tracks, by measuring ionisation-cluster-size distributions caused by ionising particles within a 'nanometre-sized' target volume while passing it at a well-specified impact parameter. To supplement the measurements, a dedicated Monte Carlo code was developed which is able to reproduce the general shape of measured cluster-size distributions with satisfactory quality. To reduce the still existing quantitative differences between measured and simulated data, the validity of cross sections used in the Monte Carlo model was revisited, taking into account the large amount of data available now from recent track structure measurements at LNL. Here, special emphasis was laid on a deeper and more detailed investigation of the cross sections applied to calculate the energy of secondary electrons after impact ionisation of primary particles: the cross sections due to the HKS model and the so-called Rudd model. Representative results for 240 MeV (12)C-ions are presented.

  19. Boron-10 Lined Proportional Counter Model Validation

    SciTech Connect

    Lintereur, Azaree T.; Ely, James H.; Kouzes, Richard T.; Rogers, Jeremy L.; Siciliano, Edward R.

    2012-11-18

    The decreasing supply of 3He is stimulating a search for alternative neutron detectors; one potential 3He replacement is 10B-lined proportional counters. Simulations are being performed to predict the performance of systems designed with 10B-lined tubes. Boron-10-lined tubes are challenging to model accurately because the neutron capture material is not the same as the signal generating material. Thus, to simulate the efficiency, the neutron capture reaction products that escape the lining and enter the signal generating fill gas must be tracked. The tube lining thickness and composition are typically proprietary vendor information, and therefore add additional variables to the system simulation. The modeling methodologies used to predict the neutron detection efficiency of 10B-lined proportional counters were validated by comparing simulated to measured results. The measurements were made with a 252Cf source positioned at several distances from a moderated 2.54-cm diameter 10B-lined tube. Models were constructed of the experimental configurations using the Monte Carlo transport code MCNPX, which is capable of tracking the reaction products from the (n,10B) reaction. Several different lining thicknesses and compositions were simulated for comparison with the measured data. This paper presents the results of the evaluation of the experimental and simulated data, and a summary of how the different linings affect the performance of a coincidence counter configuration designed with 10B-lined proportional counters.

  20. Outward Bound Outcome Model Validation and Multilevel Modeling

    ERIC Educational Resources Information Center

    Luo, Yuan-Chun

    2011-01-01

    This study was intended to measure construct validity for the Outward Bound Outcomes Instrument (OBOI) and to predict outcome achievement from individual characteristics and course attributes using multilevel modeling. A sample of 2,340 participants was collected by Outward Bound USA between May and September 2009 using the OBOI. Two phases of…

  1. Simultaneous heat and water model: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A discussion of calibration and validation procedures used for the Simultaneous Heat and Water model is presented. Three calibration approaches are presented and compared for simulating soil water content. Approaches included a stepwise local search methodology, trial-and-error calibration, and an...

  2. Validating agent based models through virtual worlds.

    SciTech Connect

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOG), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social environment where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior. Results from our work indicate that virtual worlds have the potential for serving as a proxy in allocating and populating behaviors that would be used within further agent-based modeling studies.

  3. A High-Performance Approach to Model Calibration and Validation

    E-print Network

    Ritter, Frank

    A new model validation approach uses genetic algorithms to fit cognitive models to human performance data; the efficiency, accuracy, and non-biasedness of the approach are examined, and it is available and extendable to other parameterized models, search algorithms, and cognitive… Keywords: model validation, cognitive models, behavior moderators, genetic algorithms.
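    A minimal sketch of the general approach named in this fragment, a genetic algorithm fitting a parameterized model to human performance data by minimizing RMSD, is given below; the toy model, parameter bounds, and data are invented and are not the authors' code.

```python
# Minimal illustration: a simple genetic algorithm fitting a toy two-parameter model
# to synthetic "human performance" data by minimizing RMSD.
import numpy as np

rng = np.random.default_rng(0)

def model(params, x):
    a, b = params
    return a * (1 - np.exp(-b * x))           # toy learning-curve model, a stand-in

x = np.arange(1, 11)
human = model((0.9, 0.35), x) + rng.normal(0, 0.02, x.size)   # synthetic "data"

def rmsd(params):
    return np.sqrt(np.mean((model(params, x) - human) ** 2))

def fit_ga(pop_size=40, generations=60, bounds=((0, 2), (0, 1))):
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, (pop_size, len(bounds)))
    for _ in range(generations):
        fitness = np.array([rmsd(p) for p in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]        # truncation selection
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))].copy()
        children += rng.normal(0, 0.05, children.shape)            # Gaussian mutation
        pop = np.clip(np.vstack([parents, children]), lo, hi)
    best = pop[np.argmin([rmsd(p) for p in pop])]
    return best, rmsd(best)

print(fit_ga())
```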

  4. Nonaxisymmetric turbine end wall design: Part II -- Experimental validation

    SciTech Connect

    Hartland, J.C.; Gregory-Smith, D.G.; Harvey, N.W.; Rose, M.G.

    2000-04-01

    The Durham Linear Cascade has been redesigned with the nonaxisymmetric profiled end wall described in the first part of this paper, with the aim of reducing the effects of secondary flow. The design intent was to reduce the passage vortex strength and to produce a more uniform exit flow angle profile in the radial direction with less overturning at the wall. The new end wall has been tested in the linear cascade and a comprehensive set of measurements taken. These include traverses of the flow field at a number of axial planes and surface static pressure distributions on the end wall. Detailed comparisons have been made with the CFD design predictions, and also for the results with a planar end wall. In this way an improved understanding of the effects of end wall profiling has been obtained. The experimental results generally agree with the design predictions, showing a reduction in the strength of the secondary flow at the exit and a more uniform flow angle profile. In a turbine stage these effects would be expected to improve the performance of any downstream blade row. There is also a reduction in the overall loss, which was not given by the CFD design predictions. Areas where there are discrepancies between the CFD calculations and measurement are likely to be due to the turbulence model used. Conclusions for how the three-dimensional linear design system should be used to define end wall geometries for improved turbine performance are presented.

  5. Collaborative Infrastructure for Test-Driven Scientific Model Validation

    E-print Network

    Aldrich, Jonathan

    …be valuable to scientific communities as they seek to validate increasingly complex models against growing repositories of empirical data. Scientific communities differ from software communities in several key ways; quantitative models are validated by peer review. For a model to be accepted by a scientific community, its ad…

  6. Design and Development Research: A Model Validation Case

    ERIC Educational Resources Information Center

    Tracey, Monica W.

    2009-01-01

    This is a report of one case of a design and development research study that aimed to validate an overlay instructional design model incorporating the theory of multiple intelligences into instructional systems design. After design and expert review model validation, The Multiple Intelligence (MI) Design Model, used with an Instructional Systems…

  7. Industrial validation models 1 4/23/03 Experimental validation of new software technology

    E-print Network

    Zelkowitz, Marvin V.

    This chapter presents a discussion of the set of methods that industrial organizations use before … and then a definition of the set of industrial methods. A comparison of the two sets leads into the perspectives…

  8. Micromachined accelerometer design, modeling and validation

    SciTech Connect

    Davies, B.R.; Bateman, V.I.; Brown, F.A.; Montague, S.; Murray, J.R.; Rey, D.; Smith, J.H.

    1998-04-01

    Micromachining technologies enable the development of low-cost devices capable of sensing motion in a reliable and accurate manner. The development of various surface micromachined accelerometers and gyroscopes to sense motion is an ongoing activity at Sandia National Laboratories. In addition, Sandia has developed a fabrication process for integrating both the micromechanical structures and microelectronics circuitry of Micro-Electro-Mechanical Systems (MEMS) on the same chip. This integrated surface micromachining process provides substantial performance and reliability advantages in the development of MEMS accelerometers and gyros. A Sandia MEMS team developed a single-axis, micromachined silicon accelerometer capable of surviving and measuring very high accelerations, up to 50,000 times the acceleration due to gravity or 50 k-G (actually measured to 46,000 G). The Sandia integrated surface micromachining process was selected for fabrication of the sensor due to the extreme measurement sensitivity potential associated with integrated microelectronics. Measurement electronics capable of measuring attofarad (10⁻¹⁸ Farad) changes in capacitance were required due to the very small accelerometer proof mass (< 200 × 10⁻⁹ gram) used in this surface micromachining process. The small proof mass corresponded to small sensor deflections which in turn required very sensitive electronics to enable accurate acceleration measurement over a range of 1 to 50 k-G. A prototype sensor, based on a suspended plate mass configuration, was developed and the details of the design, modeling, and validation of the device will be presented in this paper. The device was analyzed using both conventional lumped parameter modeling techniques and finite element analysis tools. The device was tested and performed well over its design range.

  9. ExodusII Finite Element Data Model

    Energy Science and Technology Software Center (ESTSC)

    2005-05-14

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as code to code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface. (EXODUS II is based on netCDF.)

  10. Bayesian-based simulation model validation for spacecraft thermal systems

    E-print Network

    Stout, Kevin Dale

    2015-01-01

    Over the last several decades of space flight, spacecraft thermal system modeling software has advanced significantly, but the model validation process, in general, has changed very little. Although most thermal systems ...

  11. Cost model validation: a technical and cultural approach

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Rosenberg, L.; Roust, K.; Warfield, K.

    2001-01-01

    This paper summarizes how JPL's parametric mission cost model (PMCM) has been validated using both formal statistical methods and a variety of peer and management reviews in order to establish organizational acceptance of the cost model estimates.

  12. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    SciTech Connect

    Freeman, J.; Whitmore, J.; Kaffine, L.; Blair, N.; Dobos, A. P.

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.
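    The headline metric can be illustrated as follows; the definition (modeled minus measured annual energy, as a percentage of measured energy) is an assumption, and the monthly energy values are invented rather than taken from the report.

```python
# Sketch of an annualized prediction error between modeled and measured PV generation.
import numpy as np

def annualized_error_pct(modeled_kwh, measured_kwh):
    modeled_kwh = np.asarray(modeled_kwh, float)    # e.g. monthly energy totals
    measured_kwh = np.asarray(measured_kwh, float)
    return 100.0 * (modeled_kwh.sum() - measured_kwh.sum()) / measured_kwh.sum()

measured = [9.1, 10.4, 13.2, 14.8, 16.0, 16.5, 16.2, 15.1, 13.0, 11.2, 9.0, 8.5]  # MWh, invented
modeled  = [9.4, 10.1, 13.5, 15.2, 15.7, 16.9, 16.0, 15.5, 12.6, 11.5, 9.2, 8.3]
print("annualized prediction error: %.2f%%" % annualized_error_pct(modeled, measured))
```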

  13. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  14. Validation of Numerical Shallow Water Models for Tidal Lagoons

    SciTech Connect

    Eliason, D.; Bourgeois, A.

    1999-11-01

    An analytical solution is presented for the case of a stratified, tidally forced lagoon. This solution, especially its energetics, is useful for the validation of numerical shallow water models under stratified, tidally forced conditions. The utility of the analytical solution for validation is demonstrated for a simple finite difference numerical model. A comparison is presented of the energetics of the numerical and analytical solutions in terms of the convergence of model results to the analytical solution with increasing spatial and temporal resolution.
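    The sketch below illustrates the kind of convergence check described, comparing a numerical solution against an analytical one at increasing resolution and estimating the observed order of accuracy from successive L2 errors; the "solver" here is a stand-in with a manufactured error term, not the paper's shallow water model.

```python
# Illustrative convergence check against an analytical solution.
import numpy as np

def analytical(x):
    return np.sin(2 * np.pi * x)               # stand-in for the analytical solution

def numerical(n):
    """Pretend solver: second-order accurate approximation on n points."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    return x, analytical(x) + 0.5 * h ** 2 * np.cos(2 * np.pi * x)   # O(h^2) error term

errors, resolutions = [], [32, 64, 128, 256]
for n in resolutions:
    x, u = numerical(n)
    errors.append(np.sqrt(np.mean((u - analytical(x)) ** 2)))        # discrete L2 error

for (n1, e1), (n2, e2) in zip(zip(resolutions, errors), zip(resolutions[1:], errors[1:])):
    order = np.log(e1 / e2) / np.log(n2 / n1)
    print(f"n={n2:4d}  L2 error={e2:.3e}  observed order ~ {order:.2f}")
```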

  15. End-to-end modelling of He II flow systems

    NASA Technical Reports Server (NTRS)

    Mord, A. J.; Snyder, H. A.; Newell, D. A.

    1992-01-01

    A practical computer code has been developed which uses the accepted two-fluid model to simulate He II flow in complicated systems. The full set of equations are used, retaining the coupling between the pressure, temperature and velocity fields. This permits modeling He II flow over the full range of conditions, from strongly or weakly driven flow through large pipes, narrow channels and porous media. The system may include most of the components used in modern superfluid flow systems: non-ideal thermomechanical pumps, tapered sections, constrictions, lines with heated side walls and heat exchangers. The model is validated by comparison with published experimental data. It is applied to a complex system to show some of the non-intuitive feedback effects that can occur. This code is ready to be used as a design tool for practical applications of He II. It can also be used for the design of He II experiments and as a tool for comparison of experimental data with the standard two-fluid model.

  16. Teacher Change Beliefs: Validating a Scale with Structural Equation Modelling

    ERIC Educational Resources Information Center

    Kin, Tai Mei; Abdull Kareem, Omar; Nordin, Mohamad Sahari; Wai Bing, Khuan

    2015-01-01

    The objectives of the study were to validate a substantiated Teacher Change Beliefs Model (TCBM) and an instrument to identify critical components of teacher change beliefs (TCB) in Malaysian secondary schools. Five different pilot test approaches were applied to ensure the validity and reliability of the instrument. A total of 936 teachers from…

  17. Verification and Validation 3D Free-Surface Flow Models

    E-print Network

    Topics include: 1) Terminology and Basic Methodology; 2) Analytical Solutions for Mathematical Verification; 3) … and Validation Methodology. Edited by Richard A. Schmalz, Jr., Ph.D., P.E., F.ASCE, M.AGU, M.TOS. … the Methodology for Verification and Validation of 3-D Free Surface Flow Models was established with a duration…

  18. SAGE III Aerosol Extinction Validation in the Arctic Winter: Comparisons with SAGE II and POAM III

    NASA Technical Reports Server (NTRS)

    Thomason, L. W.; Poole, L. R.; Randall, C. E.

    2007-01-01

    The use of SAGE III multiwavelength aerosol extinction coefficient measurements to infer PSC type is contingent on the robustness of both the extinction magnitude and its spectral variation. Past validation with SAGE II and other similar measurements has shown that the SAGE III extinction coefficient measurements are reliable, though the comparisons have been greatly weighted toward measurements made at mid-latitudes. Some aerosol comparisons made in the Arctic winter as a part of SOLVE II suggested that SAGE III values, particularly at longer wavelengths, are too small with the implication that both the magnitude and the wavelength dependence are not reliable. Comparisons with POAM III have also suggested a similar discrepancy. Herein, we use SAGE II data as a common standard for comparison of SAGE III and POAM III measurements in the Arctic winters of 2002/2003 through 2004/2005. During the winter, SAGE II measurements are made infrequently at the same latitudes as these instruments. We have mitigated this problem through the use of potential vorticity as a spatial coordinate and thus greatly increased the number of coincident events. We find that SAGE II and III extinction coefficient measurements show a high degree of compatibility at both 1020 nm and 450 nm except for a 10-20% bias at both wavelengths. In addition, the 452 to 1020-nm extinction ratio shows a consistent bias of approx. 30% throughout the lower stratosphere. We also find that SAGE II and POAM III are on average consistent though the comparisons show a much higher variability and larger bias than SAGE II/III comparisons. In addition, we find that the two data sets are not well correlated below 18 km. Overall, we find both the extinction values and the spectral dependence from SAGE III are robust and we find no evidence of a significant defect within the Arctic vortex.
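    A hedged sketch of the comparison arithmetic (median percent difference of coincident extinction measurements and of the 452/1020-nm extinction ratio) is shown below; the values are invented, and the matching of coincident pairs by altitude and potential vorticity is assumed to have been done already.

```python
# Sketch only (invented numbers, not SAGE/POAM data): percent differences between
# coincident extinction measurements from two instruments.
import numpy as np

def median_percent_diff(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 100.0 * np.median((a - b) / b)

# Coincident pairs (already matched), units km^-1, invented
sage3_1020 = np.array([2.1e-4, 1.8e-4, 2.5e-4, 3.0e-4])
sage2_1020 = np.array([2.4e-4, 2.0e-4, 2.7e-4, 3.4e-4])
sage3_452  = np.array([7.9e-4, 6.6e-4, 9.1e-4, 1.1e-3])
sage2_452  = np.array([8.1e-4, 7.0e-4, 9.6e-4, 1.2e-3])

print("1020 nm bias: %.1f%%" % median_percent_diff(sage3_1020, sage2_1020))
print("452/1020 ratio bias: %.1f%%" %
      median_percent_diff(sage3_452 / sage3_1020, sage2_452 / sage2_1020))
```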

  19. Development and Validation of the Juvenile Sexual Offense Recidivism Risk Assessment Tool-II.

    PubMed

    Epperson, Douglas L; Ralston, Christopher A

    2015-12-01

    This article describes the development and initial validation of the Juvenile Sexual Offense Recidivism Risk Assessment Tool-II (JSORRAT-II). Potential predictor variables were extracted from case file information for an exhaustive sample of 636 juveniles in Utah who sexually offended between 1990 and 1992. Simultaneous and hierarchical logistic regression analyses were used to identify the group of variables that was most predictive of subsequent juvenile sexual recidivism. A simple categorical scoring system was applied to these variables without meaningful loss of accuracy in the development sample for any sexual (area under the curve [AUC] = .89) and sexually violent (AUC = .89) juvenile recidivism. The JSORRAT-II was cross-validated on an exhaustive sample of 566 juveniles who had sexually offended in Utah in 1996 and 1997. Reliability of scoring the tool across five coders was quite high (intraclass correlation coefficient [ICC] = .96). Relative to the development sample, however, there was considerable shrinkage in the indices of predictive accuracy for any sexual (AUC = .65) and sexually violent (AUC = .65) juvenile recidivism. The reduced level of accuracy was not explained by severity of the index sexual offense, time at risk, or missing data. Capitalization on chance and other explanations for the possible reduction in predictive accuracy are explored, and potential uses and limitations of the tool are discussed. PMID:24492618
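    The AUC statistic reported above can be illustrated with the following sketch on synthetic scores (not JSORRAT-II data); it simply measures how well total scores rank recidivists above non-recidivists in a development-style and a cross-validation-style sample.

```python
# Hedged sketch: AUC for a development sample and a weaker cross-validation sample.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
# Development-style sample: scores separate the groups well (AUC roughly .9)
dev_scores = np.concatenate([rng.normal(7.5, 2, 60), rng.normal(4.0, 2, 540)])
dev_labels = np.concatenate([np.ones(60), np.zeros(540)])
# Cross-validation-style sample: weaker separation (AUC roughly .65)
val_scores = np.concatenate([rng.normal(5.4, 2.5, 50), rng.normal(4.0, 2.5, 500)])
val_labels = np.concatenate([np.ones(50), np.zeros(500)])

print("development AUC:      %.2f" % roc_auc_score(dev_labels, dev_scores))
print("cross-validation AUC: %.2f" % roc_auc_score(val_labels, val_scores))
```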

  20. Validating Computational Cognitive Process Models across Multiple Timescales

    NASA Astrophysics Data System (ADS)

    Myers, Christopher; Gluck, Kevin; Gunzelmann, Glenn; Krusmark, Michael

    2010-12-01

    Model comparison is vital to evaluating progress in the fields of artificial general intelligence (AGI) and cognitive architecture. As they mature, AGI and cognitive architectures will become increasingly capable of providing a single model that completes a multitude of tasks, some of which the model was not specifically engineered to perform. These models will be expected to operate for extended periods of time and serve functional roles in real-world contexts. Questions arise regarding how to evaluate such models appropriately, including issues pertaining to model comparison and validation. In this paper, we specifically address model validation across multiple levels of abstraction, using an existing computational process model of unmanned aerial vehicle basic maneuvering to illustrate the relationship between validity and timescales of analysis.

  1. Exploring the Validity of Valproic Acid Animal Model of Autism

    PubMed Central

    Mabunga, Darine Froy N.; Gonzales, Edson Luck T.; Kim, Ji-woon; Kim, Ki Chan

    2015-01-01

    The valproic acid (VPA) animal model of autism spectrum disorder (ASD) is one of the most widely used animal models in the field. Like any other disease model, it cannot model the totality of the features seen in autism. Is it, then, a valid model of autism? This model demonstrates many of the structural and behavioral features that can be observed in individuals with autism. These similarities enable the model to define relevant pathways of developmental dysregulation resulting from environmental manipulation. The uncovering of these complex pathways resulted in the growing pool of potential therapeutic candidates addressing the core symptoms of ASD. Here, we summarize the validity points of VPA that may or may not qualify it as a valid animal model of ASD.

  2. EXODUS II: A finite element data model

    SciTech Connect

    Schoof, L.A.; Yarberry, V.R.

    1994-09-01

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as code to code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).
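    Because EXODUS II files are netCDF underneath, one quick way to inspect one from Python is the generic netCDF4 library, as sketched below; the file name is a placeholder, and the dimension and variable names mentioned in the comments are the conventional Exodus names, stated here as assumptions rather than taken from this record.

```python
# Hedged sketch: inspect an EXODUS II file through its netCDF layer.
from netCDF4 import Dataset

with Dataset("mesh.exo", "r") as exo:        # path is a placeholder
    print("dimensions:", {name: len(dim) for name, dim in exo.dimensions.items()})
    # Typical entries include num_nodes, num_elem, num_el_blk, time_step (assumed names)
    print("variables:", list(exo.variables))
    # Nodal coordinates are usually stored as "coord" or "coordx"/"coordy"/"coordz"
```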

  3. Using virtual reality to validate system models

    SciTech Connect

    Winter, V.L.; Caudell, T.P.

    1999-12-09

    To date, most validation techniques are highly biased towards calculations involving symbolic representations of problems. These calculations are either formal (in the case of consistency and completeness checks) or informal (in the case of code inspections). The authors believe that an essential type of evidence of the correctness of the formalization process must be provided by (i.e., must originate from) human-based calculation. They further believe that human calculation can be significantly amplified by shifting from symbolic representations to graphical representations. This paper describes their preliminary efforts in realizing such a representational shift.

  4. Systematic Verification, Validation and Calibration of Traffic Simulation Models

    E-print Network

    Hellinga, Bruce

    …of vehicle stops, total fuel consumption, vehicle emissions of HC, CO, and NOx, and vehicle accident risk.

  5. HEDR model validation plan. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational ``tools`` for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to ``validate`` these tools. In the sense of the HEDR Project, ``validation`` is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  6. Highlights of Transient Plume Impingement Model Validation and Applications

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael

    2011-01-01

    This paper describes highlights of an ongoing validation effort conducted to assess the viability of applying a set of analytic point source transient free molecule equations to model behavior ranging from molecular effusion to rocket plumes. The validation effort includes encouraging comparisons to both steady and transient studies involving experimental data and direct simulation Monte Carlo results. Finally, this model is applied to describe features of two exotic transient scenarios involving NASA Goddard Space Flight Center satellite programs.

  7. Gear Windage Modeling Progress - Experimental Validation Status

    NASA Technical Reports Server (NTRS)

    Kunz, Rob; Handschuh, Robert F.

    2008-01-01

    In the Subsonic Rotary Wing (SRW) Project being funded for propulsion work at NASA Glenn Research Center, performance of the propulsion system is of high importance. In current rotorcraft drive systems many gearing components operate at high rotational speed (pitch line velocity > 24000 ft/min). In our testing of high speed helical gear trains at NASA Glenn we have found that the work done on the air-oil mist within the gearbox can become a significant part of the power loss of the system. This loss mechanism is referred to as windage. The effort described in this presentation is to try to understand the variables that affect windage and to develop a good experimental database to validate the analytical project being conducted at Penn State University by Dr. Rob Kunz under a NASA SRW NRA. The presentation provides an update to the status of these efforts.

  8. VALIDATING COMPLEX CONSTRUCTION SIMULATION MODELS USING 3D VISUALIZATION

    E-print Network

    Kamat, Vineet R.

    The state-of-the-art construction simulation systems allow the modeling of complex construction operations … of the simulation modeling system, of mental plans that are often complex and elaborate. Differences between…

  9. CMOS Transistor Mismatch Model valid from Weak to Strong Inversion

    E-print Network

    Barranco, Bernabe Linares

    …and PMOS transistors for 30 different geometries has been done with this continuous model. … of transistor mismatch is crucial for precision analog design. Using very reduced transistor geometries produces…

  10. Validation and Application of Empirical Liquefaction Models Thomas Oommen1

    E-print Network

    Vogel, Richard M.

    Empirical liquefaction models (ELMs) are the standard approach for predicting the occurrence of soil liquefaction. These models are typically based on in situ…

  11. Validation of Model Forecasts of the Ambient Solar Wind

    NASA Technical Reports Server (NTRS)

    Macneice, P. J.; Hesse, M.; Kuznetsova, M. M.; Rastaetter, L.; Taktakishvili, A.

    2009-01-01

    Independent and automated validation is a vital step in the progression of models from the research community into operational forecasting use. In this paper we describe a program in development at the CCMC to provide just such a comprehensive validation for models of the ambient solar wind in the inner heliosphere. We have built upon previous efforts published in the community, sharpened their definitions, and completed a baseline study. We also provide first results from this program of the comparative performance of the MHD models available at the CCMC against that of the Wang-Sheeley-Arge (WSA) model. An important goal of this effort is to provide a consistent validation to all available models. Clearly exposing the relative strengths and weaknesses of the different models will enable forecasters to craft more reliable ensemble forecasting strategies. Models of the ambient solar wind are developing rapidly as a result of improvements in data supply, numerical techniques, and computing resources. It is anticipated that in the next five to ten years, the MHD based models will supplant semi-empirical potential based models such as the WSA model, as the best available forecast models. We anticipate that this validation effort will track this evolution and so assist policy makers in gauging the value of past and future investment in modeling support.

  12. Validation of Erosion Modeling: Physical and Numerical Mehrad Kamalzare1

    E-print Network

    Franklin, W. Randolph

    1 Validation of Erosion Modeling: Physical and Numerical Mehrad Kamalzare1 , Christopher Stuetzle2-3590 ABSTRACT The overall intent of this research is to develop numerical models of erosion of levees, dams a geotechnical centrifuge. The erosion is modeled in detail, from beginning to end, that is from the time

  13. VALIDATION OF EROSION MODELING: PHYSICAL AND Mehrad Kamalzare1

    E-print Network

    VALIDATION OF EROSION MODELING: PHYSICAL AND NUMERICAL Mehrad Kamalzare1 , Christopher Stuetzle2-3590 ABSTRACT The overall intent of this research is to develop numerical models of erosion of levees, dams a geotechnical centrifuge. The erosion is modeled in detail, from beginning to end, that is from the time

  14. SWAT: Model use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...

  15. VERIFICATION AND VALIDATION OF THE SPARC MODEL

    EPA Science Inventory

    Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values--that is, the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships that allow estimation of some ...

  16. Development and validation of model for sand

    NASA Astrophysics Data System (ADS)

    Church, P.; Ingamells, V.; Wood, A.; Gould, P.; Perry, J.; Jardine, A.; Tyas, A.

    2015-09-01

    There is a growing requirement within QinetiQ to develop models for assessments when there is very little experimental data. A theoretical approach to developing equations of state for geological materials has been developed using Quantitative Structure Property Modelling based on the Porter-Gould model approach. This has been applied to well-controlled sand with different moisture contents and particle shapes. The Porter-Gould model describes an elastic response and gives good agreement at high impact pressures with experiment indicating that the response under these conditions is dominated by the molecular response. However at lower pressures the compaction behaviour is dominated by a micro-mechanical response which drives the need for additional theoretical tools and experiments to separate the volumetric and shear compaction behaviour. The constitutive response is fitted to existing triaxial cell data and Quasi-Static (QS) compaction data. This data is then used to construct a model in the hydrocode. The model shows great promise in predicting plate impact, Hopkinson bar, fragment penetration and residual velocity of fragments through a finite thickness of sand.

  17. Validating Requirements for Fault Tolerant Systems Using Model Checking

    NASA Technical Reports Server (NTRS)

    Schneider, Francis; Easterbrook, Steve M.; Callahan, John R.; Holzmann, Gerard J.

    1997-01-01

    Model checking is shown to be an effective tool in validating the behavior of a fault tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves by definition a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system one of which is an error in the detailed requirements, and the other two are missing/ambiguous requirements. Because the method allows validation of partial specifications, it also is an effective methodology towards maintaining fidelity between a co-evolving specification and an implementation.

  18. Solution Verification Linked to Model Validation, Reliability, and Confidence

    SciTech Connect

    Logan, R W; Nitta, C K

    2004-06-16

    The concepts of Verification and Validation (V&V) can be oversimplified in a succinct manner by saying that 'verification is doing things right' and 'validation is doing the right thing'. In the world of the Finite Element Method (FEM) and computational analysis, it is sometimes said that 'verification means solving the equations right' and 'validation means solving the right equations'. In other words, if one intends to give an answer to the equation '2+2=', then one must run the resulting code to assure that the answer '4' results. However, if the nature of the physics or engineering problem being addressed with this code is multiplicative rather than additive, then even though Verification may succeed (2+2=4 etc), Validation may fail because the equations coded are not those needed to address the real world (multiplicative) problem. We have previously provided a 4-step 'ABCD' implementation for a quantitative V&V process: (A) Plan the analyses and validation testing that may be needed along the way. Assure that the code[s] chosen have sufficient documentation of software quality and Code Verification (i.e., does 2+2=4?). Perform some calibration analyses and calibration based sensitivity studies (these are not validated sensitivities but are useful for planning purposes). Outline the data and validation analyses that will be needed to turn the calibrated model (and calibrated sensitivities) into validated quantities. (B) Solution Verification: For the system or component being modeled, quantify the uncertainty and error estimates due to spatial, temporal, and iterative discretization during solution. (C) Validation over the data domain: Perform a quantitative validation to provide confidence-bounded uncertainties on the quantity of interest over the domain of available data. (D) Predictive Adequacy: Extend the model validation process of 'C' out to the application domain of interest, which may be outside the domain of available data in one or more planes of multi-dimensional space. Part 'D' should provide the numerical information about the model and its predictive capability such that, given a requirement, an adequacy assessment can be made to determine if more validation analyses or data are needed.
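    As a concrete illustration of step (B), solution verification, the short Python sketch below applies Richardson extrapolation to three systematically refined grids to obtain an observed order of accuracy and a discretization-error estimate. The grid values and refinement ratio are invented, and this is offered only as one common way of quantifying spatial discretization uncertainty, not as the authors' own procedure.

      import numpy as np

      # Richardson-extrapolation estimate of spatial discretization error from three
      # grid levels with a constant refinement ratio r (hypothetical values).
      r = 2.0                                            # refinement ratio between grids
      f_coarse, f_medium, f_fine = 10.80, 10.35, 10.21   # quantity of interest per grid

      p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)  # observed order
      f_extrap = f_fine + (f_fine - f_medium) / (r**p - 1.0)               # extrapolated value
      error_estimate = abs(f_fine - f_extrap)

      print(f"observed order of accuracy p = {p:.2f}")
      print(f"extrapolated value = {f_extrap:.3f}, error estimate = {error_estimate:.3f}")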

  19. Validation of the MULCH-II code for thermal-hydraulic safety analysis of the MIT research reactor conversion to LEU

    SciTech Connect

    Ko, Y.-C.; Hu, L.-W.; Olson, Arne P.; Dunn, Floyd E.

    2008-07-15

    An in-house thermal hydraulics code was developed for the steady-state and loss of primary flow analysis of the MIT Research Reactor (MITR). This code is designated as MULti-CHannel-II or MULCH-II. The MULCH-II code is being used for the MITR LEU conversion design study. Features of the MULCH-II code include a multi-channel analysis, the capability to model the transition from forced to natural convection during a loss of primary flow transient, and the ability to calculate safety limits and limiting safety system settings for licensing applications. This paper describes the validation of the code against PLTEMP/ANL 3.0 for steady-state analysis, and against RELAP5-3D for loss of primary coolant transient analysis. Coolant temperature measurements obtained from loss of primary flow transients as part of the MITR-II startup testing were also used for validating this code. The agreement between MULCH-II and the other computer codes is satisfactory. (author)

  20. Measurements of Humidity in the Atmosphere and Validation Experiments (Mohave, Mohave II): Results Overview

    NASA Technical Reports Server (NTRS)

    Leblanc, Thierry; McDermid, Iain S.; McGee, Thomas G.; Twigg, Laurence W.; Sumnicht, Grant K.; Whiteman, David N.; Rush, Kurt D.; Cadirola, Martin P.; Venable, Demetrius D.; Connell, R.; Demoz, Belay B.; Vomel, Holger; Miloshevich, L.

    2008-01-01

    The Measurements of Humidity in the Atmosphere and Validation Experiments (MOHAVE, MOHAVE-II) inter-comparison campaigns took place at the Jet Propulsion Laboratory (JPL) Table Mountain Facility (TMF, 34.5(sup o)N) in October 2006 and 2007 respectively. Both campaigns aimed at evaluating the capability of three Raman lidars for the measurement of water vapor in the upper troposphere and lower stratosphere (UT/LS). During each campaign, more than 200 hours of lidar measurements were compared to balloon borne measurements obtained from 10 Cryogenic Frost-point Hygrometer (CFH) flights and over 50 Vaisala RS92 radiosonde flights. During MOHAVE, fluorescence in all three lidar receivers was identified, causing a significant wet bias above 10-12 km in the lidar profiles as compared to the CFH. All three lidars were reconfigured after MOHAVE, and no such bias was observed during the MOHAVE-II campaign. The lidar profiles agreed very well with the CFH up to 13-17 km altitude, where the lidar measurements become noise limited. The results from MOHAVE-II have shown that the water vapor Raman lidar will be an appropriate technique for the long-term monitoring of water vapor in the UT/LS given a slight increase in its power-aperture, as well as careful calibration.

  1. Shuttle Spacesuit: Fabric/LCVG Model Validation

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tweed, J.; Zeitlin, C.; Kim, M.-H.; Anderson, B. M.; Cucinotta, F. A.; Ware, J.; Persans, A. E.

    2001-01-01

    A detailed spacesuit computational model is being developed at the Langley Research Center for radiation exposure evaluation studies. The details of the construction of the spacesuit are critical to estimation of exposures and assessing the risk to the astronaut on EVA. Past evaluations of spacesuit shielding properties assumed the basic fabric lay-up (Thermal Micrometeoroid Garment, fabric restraints, and pressure envelope) and Liquid Cooling and Ventilation Garment (LCVG) could be homogenized as a single layer, overestimating the protective properties over 60 percent of the fabric area. The present spacesuit model represents the inhomogeneous distributions of LCVG materials (mainly the water filled cooling tubes). An experimental test is performed using a 34-MeV proton beam and high-resolution detectors to compare with model-predicted transmission factors. Some suggestions are made on possible improved construction methods to improve the spacesuit's protection properties.

  2. Shuttle Spacesuit: Fabric/LCVG Model Validation

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tweed, J.; Zeitlin, C.; Kim, M.-H. Y.; Anderson, B. M.; Cucinotta, F. A.; Ware, J.; Persans, A. E.

    2001-01-01

    A detailed spacesuit computational model is being developed at the Langley Research Center for radiation exposure evaluation studies. The details of the construction of the spacesuit are critical to estimation of exposures and assessing the risk to the astronaut on EVA. Past evaluations of spacesuit shielding properties assumed the basic fabric lay-up (Thermal Micrometeoroid Garment, fabric restraints, and pressure envelope) and Liquid Cooling and Ventilation Garment (LCVG) could be homogenized as a single layer, overestimating the protective properties over 60 percent of the fabric area. The present spacesuit model represents the inhomogeneous distributions of LCVG materials (mainly the water filled cooling tubes). An experimental test is performed using a 34-MeV proton beam and high-resolution detectors to compare with model-predicted transmission factors. Some suggestions are made on possible improved construction methods to improve the spacesuit's protection properties.

  3. Validation of nuclear models used in space radiation shielding applications

    SciTech Connect

    Norman, Ryan B.; Blattnig, Steve R.

    2013-01-15

    A program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as these models are developed over time. In this work, simple validation metrics applicable to testing both model accuracy and consistency with experimental data are developed. The developed metrics treat experimental measurement uncertainty as an interval and are therefore applicable to cases in which epistemic uncertainty dominates the experimental data. To demonstrate the applicability of the metrics, nuclear physics models used by NASA for space radiation shielding applications are compared to an experimental database consisting of over 3600 experimental cross sections. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by examining subsets of the model parameter space.
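    The abstract does not reproduce the metric definitions, so the following Python sketch is only a plausible reading of an interval-based discrepancy measure: a prediction falling inside the measurement interval counts as zero error, otherwise the normalized distance to the nearest interval endpoint is used, and the cumulative (mean) and median statistics are then formed from the per-point values. The cross-section numbers are invented.

      import numpy as np

      def interval_discrepancy(pred, lo, hi):
          """Distance from each prediction to the experimental interval [lo, hi],
          zero inside the interval, normalized by the interval midpoint."""
          below = np.clip(lo - pred, 0.0, None)
          above = np.clip(pred - hi, 0.0, None)
          return (below + above) / (0.5 * (lo + hi))

      # hypothetical cross sections (mb): model predictions and measurement intervals
      pred = np.array([105.0, 98.0, 122.0, 87.0])
      lo = np.array([100.0, 95.0, 110.0, 90.0])
      hi = np.array([110.0, 105.0, 118.0, 96.0])

      d = interval_discrepancy(pred, lo, hi)
      print("cumulative (mean) metric:", d.mean())   # overall accuracy view
      print("median metric:", np.median(d))          # robust view for model development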

  4. Functional state modelling approach validation for yeast and bacteria cultivations

    PubMed Central

    Roeva, Olympia; Pencheva, Tania

    2014-01-01

    In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This constitutes a strong structural verification of the functional state modelling theory not only for a set of yeast cultivations, but also for bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach.
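    As an illustration of genetic-algorithm parameter identification for a single local model, the sketch below fits Monod-type kinetics to synthetic data with a bare-bones selection, crossover and mutation loop. The kinetic form, the data, the bounds and the algorithm settings are placeholders and are not taken from the paper.

      import numpy as np

      # Bare-bones genetic algorithm identifying Monod kinetics mu(S) = mu_max*S/(Ks+S)
      # from noisy synthetic data; everything here is illustrative.
      rng = np.random.default_rng(3)
      S = np.linspace(0.1, 10.0, 30)                          # substrate concentrations
      mu_obs = 0.45 * S / (1.2 + S) + 0.01 * rng.standard_normal(S.size)

      def fitness(pop):
          mu_max, Ks = pop[:, 0:1], pop[:, 1:2]
          pred = mu_max * S / (Ks + S)
          return -np.mean((pred - mu_obs) ** 2, axis=1)       # negative SSE, maximized

      bounds = np.array([[0.05, 1.0], [0.1, 5.0]])            # ranges for [mu_max, Ks]
      pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 2))
      for generation in range(100):
          keep = pop[np.argsort(fitness(pop))[-30:]]          # selection of the fitter half
          moms = keep[rng.integers(0, 30, 60)]
          dads = keep[rng.integers(0, 30, 60)]
          w = rng.random((60, 1))
          pop = w * moms + (1.0 - w) * dads                   # arithmetic crossover
          pop += 0.05 * rng.standard_normal(pop.shape)        # mutation
          pop = np.clip(pop, bounds[:, 0], bounds[:, 1])

      best = pop[np.argmax(fitness(pop))]
      print("identified [mu_max, Ks]:", best)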

  5. WEPP: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Water Erosion Prediction Project (WEPP) model is a process-based, continuous simulation, distributed parameter, hydrologic and soil erosion prediction system. It has been developed over the past 25 years to allow for easy application to a large number of land management scenarios. Most general o...

  6. WEPP: Model use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Water Erosion Prediction Project (WEPP) model is a process-based, continuous simulation, distributed parameter, hydrologic and soil erosion prediction system. It has been developed over the past 25 years to allow for easy application to a large number of land management scenarios. Most general o...

  7. Qualifying geospatial workflow models for adaptive controlled validity and accuracy

    E-print Network

    Stock, Kristin

    Qualifying geospatial workflow models for adaptive controlled validity and accuracy (Didier Leibovici, Gobe Hobona, Kristin Stock and Mike Jackson, Centre for Geospatial Sciences, University of Nottingham). Sharing geospatial data and geoprocessing models within a system like GEOSS (Global Earth

  8. Validation of 1-D transport and sawtooth models for ITER

    SciTech Connect

    Connor, J.W.; Turner, M.F.; Attenberger, S.E.; Houlberg, W.A.

    1996-12-31

    In this paper the authors describe progress on validating a number of local transport models by comparing their predictions with relevant experimental data from a range of tokamaks in the ITER profile database. This database, the testing procedure and results are discussed. In addition a model for sawtooth oscillations is used to investigate their effect in an ITER plasma with alpha-particles.

  9. Making Validated Educational Models Central in Preschool Standards.

    ERIC Educational Resources Information Center

    Schweinhart, Lawrence J.

    This paper presents some ideas to preschool educators and policy makers about how to make validated educational models central in standards for preschool education and care programs that are available to all 3- and 4-year-olds. Defining an educational model as a coherent body of program practices, curriculum content, program and child, and teacher…

  10. Validation, Optimization and Simulation of Solar Thermoelectric Generator Model

    E-print Network

    Lee, Ho Sung

    Validation, Optimization and Simulation of Solar Thermoelectric Generator Model, by Ali Hamil. [...] to environmental problems such as climate change, acid rain and gas emissions. [...] In this project, a model of a solar thermoelectric generator (STEG) is analyzed based on the concept of converting

  11. Modeling HIV Immune Response and Validation with Clinical Data

    E-print Network

    Modeling HIV Immune Response and Validation with Clinical Data (H. T. Banks, M. Davidian [...]). [...] equations is formulated to describe the pathogenesis of HIV infection, wherein certain important features [...] and stimulation by antigens other than HIV. A stability analysis illustrates the capability of this model

  12. A STANDARDIZED APPROACH TO PV SYSTEM PERFORMANCE MODEL VALIDATION

    E-print Network

    [...] to know how one model may perform relative to another for a given site, or to choose between different technologies and array designs (e.g., fixed tilt vs. tracking) for a given site. This paper suggests a validation [...] for setting up these models and reporting results. This paper describes the basic elements for a standardized

  13. EXPLICIT CONTOUR MODEL FOR VEHICLE TRACKING WITH AUTOMATIC HYPOTHESIS VALIDATION

    E-print Network

    Wong, Kenneth K.Y.

    EXPLICIT CONTOUR MODEL FOR VEHICLE TRACKING WITH AUTOMATIC HYPOTHESIS VALIDATION (Boris Wai-Sing Yiu [...]). [...] dynamics in its parameterized template. We integrate the model into a Bayesian framework with multiple cues for vehicle tracking, and evaluate the correctness of a target hypothesis with the infor[...]

  14. A DESIGN-DRIVEN VALIDATION APPROACH USING BAYESIAN PREDICTION MODELS

    E-print Network

    Chen, Wei

    A DESIGN-DRIVEN VALIDATION APPROACH USING BAYESIAN PREDICTION MODELS. In most [...] at very limited test points. However, from the design perspective, a good model should be considered the one that can provide the discrimination (with good resolution) between competing design candidates

  15. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
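    Read literally, the estimation rule above is a max-min problem over the requirement-compliance margins. The Python sketch below sets up a toy version with a two-parameter exponential model, two synthetic input-output data sets, and admissible RMS error limits standing in for the validation requirements; the estimate is the parameter realization that makes the smallest margin as large as possible. All numbers are invented, and the actual framework handles far richer requirement sets in both time and frequency domains.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      def model(t, p):                                  # simple empirical model: a * exp(-t / tau)
          a, tau = p
          return a * np.exp(-t / max(abs(tau), 1e-6))   # guard against non-physical tau

      # two hypothetical data sets and their admissible RMS prediction errors
      t1 = np.linspace(0.0, 5.0, 60); y1 = model(t1, (2.0, 1.5)) + 0.05 * rng.standard_normal(60)
      t2 = np.linspace(0.0, 8.0, 80); y2 = model(t2, (2.0, 1.5)) + 0.08 * rng.standard_normal(80)
      limits = (0.10, 0.15)

      def margins(p):
          """Requirement-compliance margins: admissible limit minus achieved RMS error."""
          e1 = np.sqrt(np.mean((model(t1, p) - y1) ** 2))
          e2 = np.sqrt(np.mean((model(t2, p) - y2) ** 2))
          return np.array([limits[0] - e1, limits[1] - e2])

      # estimate the parameters by maximizing the smallest compliance margin
      res = minimize(lambda p: -margins(p).min(), x0=[1.0, 1.0], method="Nelder-Mead")
      print("estimate:", res.x, " worst-case margin:", margins(res.x).min())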

  16. NONLINEAR COMMON SOURCE MESFET BEHAVIOUR AND MODEL VALIDATION

    E-print Network

    NONLINEAR COMMON SOURCE MESFET BEHAVIOUR AND MODEL VALIDATION (G. Passiopoulos, D. R. Webster, A. E. [...]). [...] conditions and load resistance on the nonlinear behaviour of a MESFET Common Source (CS) amplifier at medium [...]. Nonlinear device modelling of MESFETs is of primary importance in the design of high frequency circuits

  17. Resilience in an ocean model Strategy, implementation and validation

    E-print Network

    Resilience in an ocean model: Strategy, implementation and validation (TR-CMGC-13-110, Eric Maisonnave). [...] in the NEMO ocean model, without the help of spare resources and avoiding having to handle the associated [...] of ocean circulation are well preserved, including a case when failures continuously affect calculations

  18. Theory and Implementation of Nuclear Safety System Codes - Part II: System Code Closure Relations, Validation, and Limitations

    SciTech Connect

    Glenn A Roth; Fatih Aydogan

    2014-09-01

    This is Part II of two articles describing the details of thermal-hydraulic system codes. In this second part of the article series, the system code closure relationships (used to model thermal and mechanical non-equilibrium and the coupling of the phases) for the governing equations are discussed and evaluated. These include several thermal and hydraulic models, such as heat transfer coefficients for various flow regimes, two phase pressure correlations, two phase friction correlations, drag coefficients and interfacial models between the fields. These models are often developed from experimental data. The experiment conditions should be understood to evaluate the efficacy of the closure models. Code verification and validation, including Separate Effects Tests (SETs) and Integral Effects Tests (IETs), is also assessed. It can be shown from the assessments that the test cases cover a significant section of the system code capabilities, but some of the more advanced reactor designs will push the limits of validation for the codes. Lastly, the limitations of the codes are discussed by considering next generation power plants, such as Small Modular Reactors (SMRs), analyzing not only existing nuclear power plants, but also next generation nuclear power plants. The nuclear industry is developing new, innovative reactor designs, such as Small Modular Reactors (SMRs), High-Temperature Gas-cooled Reactors (HTGRs) and others. Sub-types of these reactor designs utilize pebbles, prismatic graphite moderators, helical steam generators, innovative fuel types, and many other design features that may not be fully analyzed by current system codes. This second part completes the series on the comparison and evaluation of the selected reactor system codes by discussing the closure relations, validation and limitations. These two articles indicate areas where the models can be improved to adequately address issues with new reactor design and development.

  19. Predicting the ungauged basin: Model validation and realism assessment

    NASA Astrophysics Data System (ADS)

    van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert

    2015-10-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this paper we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. This paper does not present a generic approach that can be transferred to other ungauged catchments, but it aims to show how clever model design and alternative data acquisition can result in a valuable hydrological model for an ungauged catchment.

  20. ESEEM Analysis of Multi-Histidine Cu(II)-Coordination in Model Complexes, Peptides, and Amyloid-β

    PubMed Central

    2015-01-01

    We validate the use of ESEEM to predict the number of 14N nuclei coupled to a Cu(II) ion by the use of model complexes and two small peptides with well-known Cu(II) coordination. We apply this method to gain new insight into less explored aspects of Cu(II) coordination in amyloid-β (Aβ). Aβ has two coordination modes of Cu(II) at physiological pH. A controversy has existed regarding the number of histidine residues coordinated to the Cu(II) ion in component II, which is dominant at high pH (?8.7) values. Importantly, with an excess amount of Zn(II) ions, as is the case in brain tissues affected by Alzheimer's disease, component II becomes the dominant coordination mode, as Zn(II) selectively substitutes component I bound to Cu(II). We confirm that component II only contains single histidine coordination, using ESEEM and a set of model complexes. The ESEEM experiments carried out on systematically 15N-labeled peptides reveal that, in component II, His 13 and His 14 are more favored as equatorial ligands compared to His 6. Revealing molecular level details of subcomponents in metal ion coordination is critical in understanding the role of metal ions in Alzheimer's disease etiology. PMID:25014537

  1. A prediction model for ocular damage - Experimental validation.

    PubMed

    Heussner, Nico; Vagos, Márcia; Spitzer, Martin S; Stork, Wilhelm

    2015-08-01

    With the increasing number of laser applications in medicine and technology, accidental as well as intentional exposure of the human eye to laser sources has become a major concern. Therefore, a prediction model for ocular damage (PMOD) is presented within this work and validated for long-term exposure. This model is a combination of a raytracing model with a thermodynamical model of the human eye and an application which determines the thermal damage by the implementation of the Arrhenius integral. The model is based on our earlier work and is here validated against temperature measurements taken with porcine eye samples. For this validation, three different powers were used: 50 mW, 100 mW and 200 mW, with a spot size of 1.9 mm. Also, the measurements were taken with two different sensing systems, an infrared camera and a fibre-optic probe placed within the tissue. The temperatures were measured for up to 60 s and then compared against simulations. The measured temperatures were found to be in good agreement with the values predicted by the PMOD. To our best knowledge, this is the first model which is validated for both short-term and long-term irradiations in terms of temperature, and it thus demonstrates that temperatures can be accurately predicted within the thermal damage regime. PMID:26267496
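    The damage criterion mentioned above is the Arrhenius integral, Omega = integral of A*exp(-Ea/(R*T(t))) dt, with Omega of about 1 commonly taken as the damage threshold. The sketch below evaluates it on a made-up temperature history; the frequency factor and activation energy are illustrative placeholders rather than the values used in the PMOD.

      import numpy as np

      # Arrhenius thermal-damage integral over a temperature history T(t) in kelvin.
      # A (1/s) and Ea (J/mol) are illustrative placeholders, not the PMOD values.
      A, Ea, R = 3.1e99, 6.28e5, 8.314

      def arrhenius_damage(t, T):
          """Omega = integral of A * exp(-Ea / (R * T)) dt; Omega >= 1 is taken as damage."""
          rate = A * np.exp(-Ea / (R * T))
          return np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))   # trapezoid rule

      t = np.linspace(0.0, 60.0, 6001)                  # 60 s exposure, as in the experiments
      T = 310.0 + 12.0 * (1.0 - np.exp(-t / 10.0))      # hypothetical tissue temperature history
      print("damage integral Omega:", arrhenius_damage(t, T))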

  2. Experiments for foam model development and validation.

    SciTech Connect

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F.; Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  3. Tutorial: Building Ptolemy II Models Graphically Edward A. Lee

    E-print Network

    Tutorial: Building Ptolemy II Models Graphically (Edward A. Lee and Stephen Neuendorffer). This tutorial document explains how to build Ptolemy II models using Vergil

  4. Modeling of a Foamed Emulsion Bioreactor: II. Model Parametric Sensitivity

    E-print Network

    Modeling of a Foamed Emulsion Bioreactor: II. Model Parametric Sensitivity (Eunsung Kan [...]). The sensitivity of a conceptual model of a foamed emulsion bioreactor (FEBR) used for the control of toluene vapors [...] high performance bioreactor system called the foamed emulsion bioreactor (FEBR). The FEBR consists

  5. Validation of the Serpent 2 code on TRIGA Mark II benchmark experiments.

    PubMed

    ?ali?, Dušan; Žerovnik, Gašper; Trkov, Andrej; Snoj, Luka

    2016-01-01

    The main aim of this paper is the development and validation of a 3D computational model of the TRIGA research reactor using the Serpent 2 code. The calculated parameters were compared to the experimental results and to calculations performed with the MCNP code. The results show that the calculated normalized reaction rates and flux distribution within the core are in good agreement with MCNP and experiment, while in the reflector the flux distribution differs by up to 3% from the measurements. PMID:26516989

  6. Validation and Calibration in ACE Models: An Investigation on the CATS model.

    E-print Network

    Tesfatsion, Leigh

    Validation and Calibration in ACE Models: An Investigation on the CATS model (Carlo Bianchi [...]). [...] deal with some validation (and a first calibration) experiments on the CATS model proposed in Gallegati et al. (2003a, 2004b). The CATS model has been intensively used (see, for example, Delli Gatti et

  7. Closed Form Solution for Minimum Norm Model-Validating Uncertainty

    NASA Technical Reports Server (NTRS)

    Lim, Kyong Been

    1997-01-01

    A methodology in which structured uncertainty models are directly constructed from measurement data for use in robust control design of multivariable systems is proposed. The formulation allows a general linear fractional transformation uncertainty structure connections with respect to a given nominal model. Existence conditions are given, and under mild assumptions, a closed-form expression for the smallest norm structured uncertainty that validates the model is given. The uncertainty bound computation is simple and is formulated for both open and closed loop systems.
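    The closed form for the general structured LFT case is not reproduced in the abstract, but the flavour of the result can be seen in the simplest unstructured, additive special case: if y_meas = (G + Delta) u, the smallest-norm Delta reproducing the residual r = y_meas - G u is the rank-one matrix r u+ (u+ denoting the pseudoinverse of u), with spectral norm ||r||/||u||. The Python sketch below checks this numerically on invented data; it is not the paper's general construction.

      import numpy as np

      # Minimum-norm additive uncertainty reproducing a measured output residual.
      rng = np.random.default_rng(1)
      G = rng.standard_normal((3, 2))                   # nominal model (hypothetical)
      u = np.array([1.0, -0.5])                         # applied input
      y_meas = G @ u + np.array([0.02, -0.01, 0.03])    # measurement with model error

      r = y_meas - G @ u                                # output residual
      Delta = np.outer(r, u) / (u @ u)                  # rank-one minimum-norm solution r u^+
      print("||Delta||_2 =", np.linalg.norm(Delta, 2),
            " equals ||r||/||u|| =", np.linalg.norm(r) / np.linalg.norm(u))
      print("model validated by Delta:", np.allclose((G + Delta) @ u, y_meas))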

  8. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of users' growing interest in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods, so appropriate validation techniques for ABM are needed. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  9. Antibody modeling assessment II. Structures and models.

    PubMed

    Teplyakov, Alexey; Luo, Jinquan; Obmolova, Galina; Malia, Thomas J; Sweet, Raymond; Stanfield, Robyn L; Kodangattil, Sreekumar; Almagro, Juan Carlos; Gilliland, Gary L

    2014-08-01

    To assess the state-of-the-art in antibody structure modeling, a blinded study was conducted. Eleven unpublished Fab crystal structures were used as a benchmark to compare Fv models generated by seven structure prediction methodologies. In the first round, each participant submitted three non-ranked complete Fv models for each target. In the second round, CDR-H3 modeling was performed in the context of the correct environment provided by the crystal structures with CDR-H3 removed. In this report we describe the reference structures and present our assessment of the models. Some of the essential sources of errors in the predictions were traced to the selection of the structure template, both in terms of the CDR canonical structures and VL/VH packing. On top of this, the errors present in the Protein Data Bank structures were sometimes propagated in the current models, which emphasized the need for the curated structural database devoid of errors. Modeling non-canonical structures, including CDR-H3, remains the biggest challenge for antibody structure prediction. PMID:24633955

  10. Analysis of Experimental Data for High Burnup PWR Spent Fuel Isotopic Validation - Vandellos II Reactor

    SciTech Connect

    Ilas, Germina; Gauld, Ian C

    2011-01-01

    This report is one of the several recent NUREG/CR reports documenting benchmark-quality radiochemical assay data and the use of the data to validate computer code predictions of isotopic composition for spent nuclear fuel, to establish the uncertainty and bias associated with code predictions. The experimental data analyzed in the current report were acquired from a high-burnup fuel program coordinated by Spanish organizations. The measurements included extensive actinide and fission product data of importance to spent fuel safety applications, including burnup credit, decay heat, and radiation source terms. Six unique spent fuel samples from three uranium oxide fuel rods were analyzed. The fuel rods had a 4.5 wt % {sup 235}U initial enrichment and were irradiated in the Vandellos II pressurized water reactor operated in Spain. The burnups of the fuel samples range from 42 to 78 GWd/MTU. The measurements were used to validate the two-dimensional depletion sequence TRITON in the SCALE computer code system.

  11. Using the split Hopkinson pressure bar to validate material models

    PubMed Central

    Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian

    2014-01-01

    This paper gives a discussion of the use of the split-Hopkinson bar with particular reference to the requirements of materials modelling at QinetiQ. This is to deploy validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The split Hopkinson pressure bar (SHPB) is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock wave loading. This means understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals comparison with an output stress v strain curve is not sufficient as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from deployed instrumentation including the raw strain gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer–Chree oscillations and their effect on the specimen and recognize that this is itself also a valuable validation test of the material model. PMID:25071238
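    For readers unfamiliar with the classical reduction being criticized, the sketch below performs the standard one-dimensional SHPB analysis: specimen strain rate from the reflected bar strain and specimen stress from the transmitted bar strain. Bar properties and gauge signals are made up, and the reduction embodies exactly the stress-equilibrium assumptions that the paper notes can be violated for non-metals.

      import numpy as np

      # Classical (1-wave) SHPB reduction from bar strain-gauge signals to a specimen
      # stress-strain curve; bar/specimen properties and pulses are hypothetical.
      E_b, rho_b = 200e9, 7800.0                   # bar modulus (Pa) and density (kg/m^3)
      c0 = np.sqrt(E_b / rho_b)                    # bar wave speed
      A_b, A_s, L_s = 3.14e-4, 7.85e-5, 5e-3       # bar area, specimen area and length (m)

      t = np.linspace(0.0, 200e-6, 2001)           # 200 microseconds of record
      eps_R = -4e-4 * np.exp(-((t - 80e-6) / 40e-6) ** 2)   # reflected pulse (made up)
      eps_T = 6e-4 * np.exp(-((t - 80e-6) / 40e-6) ** 2)    # transmitted pulse (made up)

      strain_rate = -2.0 * c0 * eps_R / L_s
      strain = np.cumsum(0.5 * (strain_rate[1:] + strain_rate[:-1]) * np.diff(t))
      stress = E_b * (A_b / A_s) * eps_T[1:]       # aligned with the integrated strain samples
      print("peak strain rate [1/s]:", strain_rate.max())
      print("final strain:", strain[-1], " peak stress [MPa]:", stress.max() / 1e6)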

  12. Validation of Computer Models for Homeland Security Purposes

    SciTech Connect

    Schweppe, John E.; Ely, James H.; Kouzes, Richard T.; McConn, Ronald J.; Pagh, Richard T.; Robinson, Sean M.; Siciliano, Edward R.; Borgardt, James D.; Bender, Sarah E.; Earnhart, Alison H.

    2005-10-23

    At Pacific Northwest National Laboratory, we are developing computer models of radiation portal monitors for screening vehicles and cargo. Detailed models of the radiation detection equipment, vehicles, cargo containers, cargos, and radioactive sources have been created. These are used to determine the optimal configuration of detectors and the best alarm algorithms for the detection of items of interest while minimizing nuisance alarms due to the presence of legitimate radioactive material in the commerce stream. Most of the modeling is done with the Monte Carlo code MCNP to describe the transport of gammas and neutrons from extended sources through large, irregularly shaped absorbers to large detectors. A fundamental prerequisite is the validation of the computational models against field measurements. We describe the first step of this validation process, the comparison of the models to measurements with bare static sources.

  13. Validation of Hydrological Models Using Stable Isotope Tracers.

    NASA Astrophysics Data System (ADS)

    Stadnyk, T. A.; Kouwen, N.; Edwards, T.

    2004-05-01

    The delineation of source areas for groundwater recharge is the first step in protecting groundwater resources as a source of water for human consumption and ecological preservation. To accomplish this task, a thorough understanding of water pathways from precipitation to streamflow is required. The rainfall-runoff process can be modelled using hydrological models, in which conservative tracers can be incorporated and used to disaggregate streamflow into its various origins and pathways. The measurement of naturally occurring isotopes in streamflow can then provide a relatively simple and inexpensive validation tool by verifying that flow paths and residence times are being correctly modelled. The objective of this research is to validate flowpaths in hydrological models by comparing modelled conservative tracers to measured isotopic data, where they are available. A tracer module has been integrated with the WATFLOOD model, a fully distributed, physically based, meso-scale hydrologic model for watersheds having response times larger than one hour. Conservative tracers are used to track water through the model by quantifying and segregating the various contributions to the total streamflow. Groundwater flow separation is accomplished using simplified storage routing of groundwater through the subsurface and into the stream. A specified concentration of tracer is added to the groundwater at its origin and, upon reaching the stream, a mass balance is performed to determine the concentration of tracer in the stream, allowing for a separation of groundwater from streamflow. Other flow tracers have also been modelled, including ones for surface water, interflow, flows from different landcovers, and flows from different sub-basins. Validation of the WATFLOOD model's flowpaths will be made using the flow separation tracers and measured isotope data from the lower Liard River Basin near Fort Simpson, Northwest Territories. Examples of flow separations using additional tracers will be presented for the Grand River watershed, where isotope data is not yet available for validation purposes, but other baseflow separation techniques have been applied and can be used for comparison.
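    The flow separation described above reduces to a tracer mass balance: each contribution enters the stream carrying a concentration of 1.0 for the component being tracked and 0.0 otherwise, so the mixed in-stream concentration equals that component's share of total streamflow. A minimal Python sketch of this bookkeeping, assuming steady mixing and a conservative tracer (it is not the WATFLOOD implementation):

      # Tracer mass balance for streamflow separation (minimal sketch).
      def mix(flows, concentrations):
          q_total = sum(flows)
          c_mixed = sum(q * c for q, c in zip(flows, concentrations)) / q_total
          return q_total, c_mixed

      # hypothetical contributions (m^3/s): surface runoff, interflow, groundwater
      flows = [12.0, 5.0, 3.0]
      groundwater_tracer = [0.0, 0.0, 1.0]      # tag only the groundwater component

      q_total, c = mix(flows, groundwater_tracer)
      print(f"total streamflow: {q_total} m^3/s, groundwater fraction: {c:.2f},",
            f"groundwater discharge: {c * q_total:.1f} m^3/s")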

  14. Technical Report 0524 Validation of SECI Model in Education

    E-print Network

    New South Wales, University of

    Technical Report 0524: Validation of SECI Model in Education (Cat Kutay and Aybüke Aurum; School of Computer Science and Engineering, National ICT Australia, and School of Information Systems, UNSW). The use of Knowledge Management (KM) is increasingly relevant to education as our knowledge

  15. MODELS FOR SUBMARINE OUTFALL - VALIDATION AND PREDICTION UNCERTAINTIES

    EPA Science Inventory

    This address reports on some efforts to verify and validate dilution models, including those found in Visual Plumes. This is done in the context of problem experience: a range of problems, including different pollutants such as bacteria; scales, including near-field and far-field...

  16. Foundation Heat Exchanger Model and Design Tool Development and Validation

    E-print Network

    Foundation Heat Exchanger Model and Design Tool Development and Validation. The attached document [...] heat exchanger design. In the case of net zero energy homes or homes approaching net zero energy [...] the following: Lee, E.S., D.E. Fisher and J.D. Spitler. 2013. Efficient Horizontal Ground Heat Exchanger

  17. Validating Work Discrimination and Coping Strategy Models for Sexual Minorities

    ERIC Educational Resources Information Center

    Chung, Y. Barry; Williams, Wendi; Dispenza, Franco

    2009-01-01

    The purpose of this study was to validate and expand on Y. B. Chung's (2001) models of work discrimination and coping strategies among lesbian, gay, and bisexual persons. In semistructured individual interviews, 17 lesbians and gay men reported 35 discrimination incidents and their related coping strategies. Responses were coded based on Chung's…

  18. ID Model Construction and Validation: A Multiple Intelligences Case

    ERIC Educational Resources Information Center

    Tracey, Monica W.; Richey, Rita C.

    2007-01-01

    This is a report of a developmental research study that aimed to construct and validate an instructional design (ID) model that incorporates the theory and practice of multiple intelligences (MI). The study consisted of three phases. In phase one, the theoretical foundations of multiple Intelligences and ID were examined to guide the development…

  19. A Model for Investigating Predictive Validity at Highly Selective Institutions.

    ERIC Educational Resources Information Center

    Gross, Alan L.; And Others

    A statistical model for investigating predictive validity at highly selective institutions is described. When the selection ratio is small, one must typically deal with a data set containing relatively large amounts of missing data on both criterion and predictor variables. Standard statistical approaches are based on the strong assumption that…

  20. Modeling and Validation of Biased Human Trust Mark Hoogendoorn1

    E-print Network

    Treur, Jan

    Modeling and Validation of Biased Human Trust (Mark Hoogendoorn, S. Waqar Jaffry, and Peter [...]; De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands; TNO Human Factors). [...] with such a criterion in mind. Especially when developing an agent that interacts with humans, providing the agent

  1. Validation of a tuber blight (Phytophthora infestans) prediction model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  2. Angular Distribution Models for Top-of-Atmosphere Radiative Flux Estimation from the Clouds and the Earth's Radiant Energy System Instrument on the Tropical Rainfall Measuring Mission Satellite. Part II; Validation

    NASA Technical Reports Server (NTRS)

    Loeb, N. G.; Loukachine, K.; Wielicki, B. A.; Young, D. F.

    2003-01-01

    Top-of-atmosphere (TOA) radiative fluxes from the Clouds and the Earth's Radiant Energy System (CERES) are estimated from empirical angular distribution models (ADMs) that convert instantaneous radiance measurements to TOA fluxes. This paper evaluates the accuracy of CERES TOA fluxes obtained from a new set of ADMs developed for the CERES instrument onboard the Tropical Rainfall Measuring Mission (TRMM). The uncertainty in regional monthly mean reflected shortwave (SW) and emitted longwave (LW) TOA fluxes is less than 0.5 W/sq m, based on comparisons with TOA fluxes evaluated by direct integration of the measured radiances. When stratified by viewing geometry, TOA fluxes from different angles are consistent to within 2% in the SW and 0.7% (or 2 W/sq m) in the LW. In contrast, TOA fluxes based on ADMs from the Earth Radiation Budget Experiment (ERBE) applied to the same CERES radiance measurements show a 10% relative increase with viewing zenith angle in the SW and a 3.5% (9 W/sq m) decrease with viewing zenith angle in the LW. Based on multiangle CERES radiance measurements, 18 regional instantaneous TOA flux errors from the new CERES ADMs are estimated to be 10 W/sq m in the SW and 3.5 W/sq m in the LW. The errors show little or no dependence on cloud phase, cloud optical depth, and cloud infrared emissivity. An analysis of cloud radiative forcing (CRF) sensitivity to differences between ERBE and CERES TRMM ADMs, scene identification, and directional models of albedo as a function of solar zenith angle shows that ADM and clear-sky scene identification differences can lead to an 8 W/sq m root-mean-square (rms) difference in 18 daily mean SW CRF and a 4 W/sq m rms difference in LW CRF. In contrast, monthly mean SW and LW CRF differences reach 3 W/sq m. CRF is found to be relatively insensitive to differences between the ERBE and CERES TRMM directional models.
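    At the core of both the ERBE and the CERES TRMM ADMs is the same radiance-to-flux step: the measured radiance is divided by an empirical anisotropic factor for the identified scene and sun-view geometry, F = pi * I / R. The sketch below uses an invented anisotropic-factor shape for a single hypothetical scene type, purely to show where the viewing-zenith dependence discussed above enters; the real CERES factors are empirical tables stratified by scene.

      import numpy as np

      # Radiance-to-flux conversion with an angular distribution model (ADM):
      #   F = pi * I / R(scene, solar zenith, viewing zenith, relative azimuth)
      # The anisotropic factor below is invented for illustration, not a CERES value.
      def adm_factor(viewing_zenith_deg):
          # crude limb-brightening shape for one hypothetical scene type
          return 1.0 + 0.15 * (1.0 - np.cos(np.radians(viewing_zenith_deg)))

      def radiance_to_flux(radiance, viewing_zenith_deg):
          """radiance in W m^-2 sr^-1 mapped to a TOA flux in W m^-2."""
          return np.pi * radiance / adm_factor(viewing_zenith_deg)

      for vza in (0.0, 30.0, 60.0):
          print(vza, "deg ->", round(radiance_to_flux(80.0, vza), 1), "W/m^2")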

  3. PASTIS: Bayesian extrasolar planet validation II. Constraining exoplanet blend scenarios using spectroscopic diagnoses

    E-print Network

    Santerne, A; Almenara, J -M; Bouchy, F; Deleuil, M; Figueira, P; Hébrard, G; Moutou, C; Rodionov, S; Santos, N C

    2015-01-01

    The statistical validation of transiting exoplanets proved to be an efficient technique to secure the nature of small exoplanet signals which cannot be established by purely spectroscopic means. However, the spectroscopic diagnoses are providing us with useful constraints on the presence of blended stellar contaminants. In this paper, we present how a contaminating star affects the measurements of the various spectroscopic diagnoses as a function of the parameters of the target and contaminating stars using the model implemented into the PASTIS planet-validation software. We find particular cases for which a blend might produce a large radial velocity signal but no bisector variation. It might also produce a bisector variation anti-correlated with the radial velocity one, as in the case of stellar spots. In those cases, the full width at half maximum variation provides complementary constraints. These results can be used to constrain blend scenarios for transiting planet candidates or radial velocity planets. We r...

  4. Noncommutative Geometry models for Particle Physics and Cosmology, Lecture II

    E-print Network

    Marcolli, Matilde

    Noncommutative Geometry models for Particle Physics and Cosmology, Lecture II (Matilde Marcolli). Villa de Leyva school, July 2011.

  5. Solar swimming pool heating: Description of a validated model

    SciTech Connect

    Haaf, W.; Luboschik, U.; Tesche, B. )

    1994-07-01

    In the framework of a European Demonstration Programme, co-financed by the CEC and national bodies, a model was elaborated and validated for open-air swimming pools having a minimum surface of 100 m² and a minimum depth of 0.5 m. The model consists of two parts, the energy balance of the pool and the solar plant. The theoretical background of the energy balance of an open-air swimming pool was found to be poor. Special monitoring campaigns were used to validate the dynamic model using mathematical parameter identification methods. The final model was simplified in order to shorten calculation time and to improve user-friendliness by reducing the input values to the most important ones. The programme is commercially available. However, it requires the hourly meteorological data of a test reference year (TRY) as an input. The users are mainly design engineers.

  6. A Bayesian Tool for Validating Process-Based Models

    NASA Astrophysics Data System (ADS)

    Abban, B. K.; Papanicolaou, T.; Cowles, M. K.; Jiao, F.

    2012-12-01

    Process-based models provide an effective means of studying and evaluating the response of watersheds to varying hydrological and ecological conditions. The conventional approach to modeling watersheds is first calibrating the model with some observed data, and then validating it with additional observed data. The calibration process typically involves varying some of the independent input parameters within their plausible ranges until the model results match observations in the real world. The model is deemed validated if it is able to match the additional observed data. The problem, however, is that most models are validated at the same locations for which they were calibrated, typically the watershed outlet. This means that any inferences for a sub-area within the watershed, for which a calibration was not performed, may not be justified as there is no guarantee that the correct non-linear processes are being captured. In soil erosion studies, although landscapes are heterogeneous with regards to soil properties and land use, calibration and validation usually involves the use of aggregated soil samples, making it difficult to assess whether or not model predictions for the different land uses are correct. These aggregated samples are the result of non-linear mixing that occurs between soils from different land uses as they travel from their points of origin to the point of sampling. To provide a greater degree of confidence in calibrated models, therefore, this study presents a Bayesian statistical tool that can be used to provide an added level of validation to process-based models. The presented tool is an un-mixing model that is furnished with probability distributions capable of simulating the non-linear mixing of soils as they travel to sampling points. The output from the model is the proportion of eroded soil that originates from each land use in the watershed. This allows for the direct evaluation of process-based models to determine if they are able to predict adequately the proportion of eroded sediment from each land use. In addition, the Bayesian tool provides a direct means of quantifying the uncertainty related to simulated results.
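    A stripped-down version of the un-mixing idea, assuming a single mixing proportion, two known end-member tracer signatures and Gaussian measurement noise, can be sampled with a basic Metropolis loop as below. The signatures, noise level and single-proportion setup are invented; the authors' tool uses richer distributions and more land uses, so this is only a structural sketch.

      import numpy as np

      # Infer the proportion p of sediment from land use A (vs. B) given a tracer
      # signature of the mixed sample: mixed = p*source_A + (1-p)*source_B + noise.
      rng = np.random.default_rng(7)
      source_A = np.array([12.0, 3.5, 0.8])        # tracer signature of land use A (invented)
      source_B = np.array([6.0, 7.0, 2.1])         # tracer signature of land use B (invented)
      mixed = 0.7 * source_A + 0.3 * source_B + 0.1 * rng.standard_normal(3)
      sigma = 0.2                                   # assumed measurement noise

      def log_post(p):
          if not 0.0 <= p <= 1.0:
              return -np.inf                        # uniform prior on [0, 1]
          resid = mixed - (p * source_A + (1.0 - p) * source_B)
          return -0.5 * np.sum(resid ** 2) / sigma ** 2

      samples, p = [], 0.5
      for _ in range(20000):                        # Metropolis random walk
          proposal = p + 0.05 * rng.standard_normal()
          if np.log(rng.random()) < log_post(proposal) - log_post(p):
              p = proposal
          samples.append(p)

      post = np.array(samples[5000:])               # discard burn-in
      print(f"proportion from land use A: {post.mean():.2f} +/- {post.std():.2f}")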

  7. Concurrent Validity of the WISC-IV and DAS-II in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Kuriakose, Sarah

    2014-01-01

    Cognitive assessments are used for a variety of research and clinical purposes in children with autism spectrum disorder (ASD). This study establishes concurrent validity of the Wechsler Intelligence Scales for Children-fourth edition (WISC-IV) and Differential Ability Scales-second edition (DAS-II) in a sample of children with ASD with a broad…

  8. Validating the Thinking Styles Inventory-Revised II among Chinese University Students with Hearing Impairment through Test Accommodations

    ERIC Educational Resources Information Center

    Cheng, Sanyin; Zhang, Li-Fang

    2014-01-01

    The present study pioneered in adopting test accommodations to validate the Thinking Styles Inventory-Revised II (TSI-R2; Sternberg, Wagner, & Zhang, 2007) among Chinese university students with hearing impairment. A series of three studies were conducted that drew their samples from the same two universities, in which accommodating test…

  9. Propeller aircraft interior noise model utilization study and validation

    NASA Technical Reports Server (NTRS)

    Pope, L. D.

    1984-01-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  10. Experimentally validated finite element model of electrocaloric multilayer ceramic structures

    SciTech Connect

    Smith, N. A. S.; Correia, T. M.; Rokosz, M. K. (E-mail: maciej.rokosz@npl.co.uk)

    2014-07-28

    A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data, suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.
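    The structure described, 2-D transient conduction with the electrocaloric effect as a volumetric source term and convective losses at the boundary, can be prototyped with an explicit finite-difference update as below. Material properties, grid, source magnitude and the crude edge treatment are all illustrative, and radiation is omitted, so this is a sketch of the model structure rather than the paper's finite element model.

      import numpy as np

      # Explicit 2-D transient conduction with a volumetric (electrocaloric) source
      # and a crude convective edge loss; all values are illustrative.
      nx, ny, dx = 40, 40, 1e-4                     # grid points and spacing (m)
      k, rho, cp = 3.0, 6000.0, 500.0               # W/m/K, kg/m^3, J/kg/K
      alpha = k / (rho * cp)
      dt = 0.2 * dx * dx / alpha                    # stable explicit time step
      h, T_inf = 20.0, 293.15                       # convection coefficient, ambient (K)

      T = np.full((nx, ny), 293.15)
      source = np.zeros((nx, ny))
      source[15:25, 15:25] = 2e7                    # electrocaloric heating pulse (W/m^3)

      for step in range(200):
          lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                 np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
          T = T + dt * (alpha * lap + source / (rho * cp))
          for edge in (T[0, :], T[-1, :], T[:, 0], T[:, -1]):
              edge -= dt * h * (edge - T_inf) / (rho * cp * dx)   # convective edge loss
          if step == 100:
              source[:] = 0.0                       # field removed: cooling phase

      print("peak temperature rise [K]:", T.max() - 293.15)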

  11. Validating the BHR RANS model for variable density turbulence

    SciTech Connect

    Israel, Daniel M; Gore, Robert A; Stalsberg - Zarling, Krista L

    2009-01-01

    The BHR RANS model is a turbulence model for multi-fluid flows in which density variation plays a strong role in the turbulence processes. In this paper they demonstrate the usefulness of BHR over a wide range of flows which include the effects of shear, buoyancy, and shocks. The results are in good agreement with experimental and DNS data across the entire set of validation cases, with no need to retune model coefficients between cases. The model has potential application to a number of aerospace related flow problems.

  12. The validity and reliability of the Knowledge of Women's Issues and Epilepsy (KOWIE) Questionnaires I and II.

    PubMed

    Long, Lucretia; McAuley, James W; Shneker, Bassel; Moore, J Layne

    2005-04-01

    The Knowledge of Women's Issues in Epilepsy (KOWIE) Questionnaires I and II were developed to assess what women with epilepsy (WWE) and practitioners know about relevant topics and concerns. Prior to disseminating any tool, an instrument should be both valid and reliable. The purpose of this study was to report the validity and reliability of the KOWIE Questionnaires I and II. To establish validity, the original KOWIE was sent to five experts who critiqued the relevance of each item. A content validity inventory (CVI) was developed later and sent to 20 additional epilepsy experts across the country. Tool stability was evaluated by test-retest procedures. Patients and practitioners completed corresponding tools on day one, and 24 hours later, on day two. Participants were asked to not review information on the topic of interest until after study procedures were completed. Sixteen of 20 expert responses were included in data analysis; 4 were excluded due to incomplete data. The CVI correlation coefficient was 0.92. Test-retest results from all 9 patients and 18 of 20 healthcare professionals were included in data analysis. Correlation coefficients were 0.88 and 0.83 for the KOWIE I and II, respectively, confirming these questionnaires are valid and reliable. While future knowledge may require altering both tools, the current instrument may be used as an assessment tool and guide intervention as it pertains to outcomes in WWE. PMID:15902950

  13. II. 1d elastic models curve II.1 Introduction

    E-print Network

    Amit, Yali

    Finite expansion: $\gamma_j(t;\theta_j)=\sum_{k=1}^{d}\theta_{j,k}\,\phi_k(t)$ (II.3), for $j=1,2$. Penalty: $E(\theta)=\alpha\sum_k\cdots$ Gradients of the data term with respect to the expansion coefficients: $\partial D/\partial\theta_{1,k}=\int_0^1\big[G_{\mathrm{out}}(\gamma(t;\theta))-G_{\mathrm{in}}(\gamma(t;\theta))\big]\,\dot{\gamma}_2\,\phi_k(t)\,dt$ (II.6) and $\partial D/\partial\theta_{2,k}=\int_0^1\big[G_{\mathrm{in}}(\gamma(t;\theta))-G_{\mathrm{out}}(\gamma(t;\theta))\big]\,\dot{\gamma}_1\,\phi_k(t)\,dt$ (II.7), for $k=1,\ldots,d$.

  14. Validation of a Model for Teaching Canine Fundoscopy.

    PubMed

    Nibblett, Belle Marie D; Pereira, Mary Mauldin; Williamson, Julie A; Sithole, Fortune

    2015-01-01

    A validated teaching model for canine fundoscopic examination was developed to improve Day One fundoscopy skills while at the same time reducing use of teaching dogs. This novel eye model was created from a hollow plastic ball with a cutout for the pupil, a suspended 20-diopter lens, and paint and paper simulation of relevant eye structures. This eye model was mounted on a wooden stand with canine head landmarks useful in performing fundoscopy. Veterinary educators performed fundoscopy using this model and completed a survey to establish face and content validity. Subsequently, veterinary students were randomly assigned to pre-laboratory training with or without the use of this teaching model. After completion of an ophthalmology laboratory on teaching dogs, student outcome was assessed by measuring students' ability to see a symbol inserted on the simulated retina in the model. Students also completed a survey regarding their experience with the model and the laboratory. Overall, veterinary educators agreed that this eye model was well constructed and useful in teaching good fundoscopic technique. Student performance of fundoscopy was not negatively impacted by the use of the model. This novel canine model shows promise as a teaching and assessment tool for fundoscopy. PMID:25769909

  15. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns

    PubMed Central

    Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain

    2015-01-01

    Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917
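
    The core idea of rewarding output novelty rather than closeness to data can be sketched in a few lines; the toy model, descriptor, and mutation scheme below are assumptions for illustration and not the authors' Pattern Space Exploration implementation.

      # Sketch of a novelty-search loop: individuals are scored by how different
      # their model outputs are from everything seen so far, not by fit to data.
      import numpy as np

      def simulate(params):
          # Hypothetical model: maps two parameters to a 2-D pattern descriptor.
          x, y = params
          return np.array([np.sin(3 * x) + y, x * y])

      def novelty(desc, archive, k=5):
          # Novelty = mean distance to the k nearest previously seen descriptors.
          if not archive:
              return np.inf
          d = np.sort([np.linalg.norm(desc - a) for a in archive])
          return d[:k].mean()

      rng = np.random.default_rng(0)
      pop = rng.uniform(-1, 1, size=(20, 2))
      archive = []
      for gen in range(50):
          descs = [simulate(p) for p in pop]
          scores = [novelty(d, archive) for d in descs]
          archive.extend(descs)
          # keep the most novel individuals and mutate them
          best = pop[np.argsort(scores)[-10:]]
          pop = np.vstack([best, best + rng.normal(0, 0.1, best.shape)])
      print(f"{len(archive)} patterns explored")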

  16. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and Space Station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  18. The TIGGE Model Validation Portal: An Improvement In Data Interoperability

    NASA Astrophysics Data System (ADS)

    Cram, T.; Schuster, D. C.; Wilcox, H.; Worley, S. J.

    2011-12-01

    The THORPEX Interactive Grand Global Ensemble (TIGGE), a major component of the World Weather Research Programme, was created to help foster and accelerate improvements in the accuracy of 1-day to 2-week high-impact weather forecasts for the benefit of humanity. A key element of this effort is the ability of weather researchers to perform model forecast validation, a statistical procedure by which observational data are used to evaluate how well a numerical model forecast performs as a function of forecast time and model fields. The current methods available for obtaining model forecast verification data can be time-consuming. For example, a user may need to obtain observational, in-situ, and model forecast data from multiple providers and sources in order to carry out the verification process. In most cases, the user is required to download a set of data covering a larger domain and a longer period of time than is necessary for the user's research. The data preparation challenge is exacerbated if the requested data sets are provided in inconsistent formats, requiring the user to convert the multiple datasets into a preferred common data format. The TIGGE model validation portal, a new product developed for the NCAR Research Data Archive (RDA), strives to solve this data interoperability problem by bringing together observational, model forecast, and in-situ data into a single data package in a common data format. Developed to help augment TIGGE research and facilitate researchers' ability to validate TIGGE model forecasts, the portal allows users to submit a delayed-mode data request for the observational and model parameters of their choosing. Additionally, users have the option of requesting a temporal and spatial subset from the global dataset to fit their research needs. This convenience saves both time and storage resources, and allows users to focus their efforts on model verification and research.

  19. Prediction of driving ability: Are we building valid models?

    PubMed

    Hoggarth, Petra A; Innes, Carrie R H; Dalrymple-Alford, John C; Jones, Richard D

    2015-04-01

    The prediction of on-road driving ability using off-road measures is a key aim in driving research. The primary goal in most classification models is to determine a small number of off-road variables that predict driving ability with high accuracy. Unfortunately, classification models are often over-fitted to the study sample, leading to inflation of predictive accuracy, poor generalization to the relevant population and, thus, poor validity. Many driving studies do not report sufficient details to determine the risk of model over-fitting and few report any validation technique, which is critical to test the generalizability of a model. After reviewing the literature, we generated a model using a moderately large sample size (n=279), employing best-practice techniques in the context of regression modelling. By then randomly selecting progressively smaller sample sizes, we show that a low ratio of participants to independent variables can result in over-fitted models and spurious conclusions regarding model accuracy. We conclude that more stable models can be constructed by following a few guidelines. PMID:25667204
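
    The over-fitting effect described above can be illustrated with a small sketch (not the study's data or model): a classifier fitted to pure-noise predictors looks accurate when scored on its own fitting sample, but cross-validation exposes chance-level performance, and the gap widens as the sample shrinks.

      # Apparent (fit-sample) accuracy versus cross-validated accuracy on random
      # noise predictors, for progressively smaller sample sizes.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      for n in (279, 60, 30):                 # progressively smaller sample sizes
          X = rng.normal(size=(n, 20))        # 20 candidate off-road predictors (noise)
          y = rng.integers(0, 2, size=n)      # pass/fail on-road outcome (random)
          model = LogisticRegression(max_iter=1000).fit(X, y)
          apparent = model.score(X, y)                            # fit-sample accuracy
          validated = cross_val_score(model, X, y, cv=5).mean()   # generalization estimate
          print(f"n={n:4d}  apparent={apparent:.2f}  cross-validated={validated:.2f}")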

  20. Experimentally Validated Bounding Models for the Scout II Quadrupedal Robot

    E-print Network

    Poulakakis, Ioannis

    ... hopping height, forward speed, and body posture, making complex gaits possible on monopedal and bipedal platforms. Earlier work implemented a bounding gait in the Patrush robot using a neural-oscillator-based controller [5]; each three-degree-of-freedom (DOF) leg featured an actuated hip and knee and an unactuated, compliant foot joint.

  1. A verification and validation process for model-driven engineering

    NASA Astrophysics Data System (ADS)

    Delmas, R.; Pires, A. F.; Polacsek, T.

    2013-12-01

    Model Driven Engineering practitioners already benefit from many well established verification tools, for Object Constraint Language (OCL), for instance. Recently, constraint satisfaction techniques have been brought to Model-Driven Engineering (MDE) and have shown promising results on model verification tasks. With all these tools, it becomes possible to provide users with formal support from early model design phases to model instantiation phases. In this paper, a selection of such tools and methods is presented, and an attempt is made to define a verification and validation process for model design and instance creation centered on UML (Unified Modeling Language) class diagrams and declarative constraints, and involving the selected tools. The suggested process is illustrated with a simple example.

  2. Validation results of wind diesel simulation model TKKMOD

    NASA Astrophysics Data System (ADS)

    Manninen, L. M.

    The document summarizes the results of the TKKMOD validation procedure. TKKMOD is a simulation model developed at Helsinki University of Technology for a specific wind-diesel system layout. The model has been included in the European wind-diesel modeling software package WDLTOOLS under the CEC JOULE project Engineering Design Tools for Wind-Diesel Systems (JOUR-0078). The simulation model is utilized to calculate the long-term performance of the reference system configuration for given wind and load conditions. The main results are energy flows, energy losses in the system components, diesel fuel consumption, and the number of diesel engine starts. The work has been funded through the Finnish Advanced Energy System R&D Programme (NEMO). The validation has been performed using data from EFI (Norwegian Electric Power Institute), since data from the Finnish reference system are not yet available. The EFI system has a slightly different configuration with similar overall operating principles and approximately the same battery capacity. The validation data set, 394 hours of measured data, is from the first prototype wind-diesel system on the island FROYA off the Norwegian coast.

  3. KINEROS2-AGWA: Model Use, Calibration, and Validation

    NASA Technical Reports Server (NTRS)

    Goodrich, D. C.; Burns, I. S.; Unkrich, C. L.; Semmens, D. J.; Guertin, D. P.; Hernandez, M.; Yatheendradas, S.; Kennedy, J. R.; Levick, L. R.

    2013-01-01

    KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.
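
    As a rough illustration of the kind of computation K2 performs (not the KINEROS2 code itself), the sketch below advances a one-dimensional kinematic-wave overland-flow equation with constant, assumed rainfall and infiltration rates.

      # Explicit kinematic-wave overland flow: dh/dt + dq/dx = rain - infiltration,
      # with q = alpha * h**(5/3) from Manning's equation. All values are assumed.
      import numpy as np

      nx, dx, dt = 100, 1.0, 0.5          # nodes, spacing (m), time step (s)
      n_man, slope = 0.05, 0.02           # Manning roughness, bed slope
      alpha = np.sqrt(slope) / n_man      # q = alpha * h**(5/3)
      rain, infil = 5e-6, 2e-6            # rainfall and infiltration rates (m/s)

      h = np.zeros(nx)                    # flow depth (m)
      for step in range(3600):
          q = alpha * h ** (5.0 / 3.0)
          dqdx = np.diff(q, prepend=0.0) / dx     # upwind difference (flow downslope)
          h = np.maximum(h + dt * (rain - infil - dqdx), 0.0)
      print(f"outlet depth {h[-1] * 1000:.2f} mm, outlet flow {q[-1]:.2e} m^2/s")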

  4. Validation of the SUNY Satellite Model in a Meteosat Environment

    SciTech Connect

    Perez, R.; Schlemmer, J.; Renne, D.; Cowlin, S.; George, R.; Bandyopadhyay, B.

    2009-01-01

    The paper presents a validation of the SUNY satellite-to-irradiance model against four ground-truth stations from the Indian solar radiation network located in and around the province of Rajasthan, India. The SUNY model had initially been developed and tested to process US weather satellite data from the GOES series and has been used as part of the production of the US National Solar Radiation Data Base (NSRDB). Here the model is applied to process data from the European weather satellites Meteosat 5 and 7.

  5. A model for the separation of cloud and aerosol in SAGE II occultation data

    NASA Technical Reports Server (NTRS)

    Kent, G. S.; Winker, D. M.; Osborn, M. T.; Skeens, K. M.

    1993-01-01

    The Stratospheric Aerosol and Gas Experiment (SAGE) II satellite experiment measures the extinction due to aerosols and thin cloud, at wavelengths of 0.525 and 1.02 micrometers, down to an altitude of 6 km. The wavelength dependence of the extinction due to aerosols differs from that of the extinction due to cloud and is used as the basis of a model for separating these two components. The model is presented and its validation using airborne lidar data, obtained coincident with SAGE II observations, is described. This comparison shows that smaller SAGE II cloud extinction values correspond to the presence of subvisible cirrus cloud in the lidar record. Examples of aerosol and cloud data products obtained using this model to interpret SAGE II upper tropospheric and lower stratospheric data are also shown.
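
    The separation principle can be sketched as follows; the extinction values and the ratio threshold are illustrative assumptions, not the SAGE II operational algorithm. Aerosol extinction is strongly wavelength dependent, whereas extinction by the large particles in cirrus is nearly neutral, so the 0.525/1.02 micrometer extinction ratio can flag cloud-dominated samples.

      # Two-wavelength extinction ratio as a crude cloud/aerosol discriminator.
      import numpy as np

      ext_525 = np.array([2.0e-4, 1.1e-3, 8.0e-4, 3.0e-3])   # km^-1, 0.525 um channel
      ext_1020 = np.array([6.0e-5, 9.5e-4, 2.5e-4, 2.8e-3])  # km^-1, 1.02 um channel

      ratio = ext_525 / ext_1020
      is_cloud = ratio < 1.5          # near-neutral wavelength dependence -> cloud (assumed cut)
      for r, c in zip(ratio, is_cloud):
          print(f"ratio {r:4.2f} -> {'cloud' if c else 'aerosol'}")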

  6. Validation of orthopaedic bench models for trauma surgery.

    PubMed

    Leong, J J H; Leff, D R; Das, A; Aggarwal, R; Reilly, P; Atkinson, H D E; Emery, R J; Darzi, A W

    2008-07-01

    The aim of this study was to validate the use of three models of fracture fixation in the assessment of technical skills. We recruited 21 subjects (six experts, seven intermediates, and eight novices) to perform three procedures: application of a dynamic compression plate on a cadaver porcine model, insertion of an unreamed tibial intramedullary nail, and application of a forearm external fixator, the latter two on synthetic bone models. The primary outcome measures were the Objective Structured Assessment of Technical Skills global rating scale, applied to video recordings of the procedures scored by two independent expert observers, and the hand movements of the surgeons, which were analysed using the Imperial College Surgical Assessment Device. The video scores were significantly different for the three groups in all three procedures (p < 0.05), with excellent inter-rater reliability (alpha = 0.88). The novice and intermediate groups specifically were significantly different in their performance with the dynamic compression plate and intramedullary nails (p < 0.05). Movement analysis distinguished between the three groups in the dynamic compression plate model, but a ceiling effect was demonstrated in the intramedullary nail and external fixator procedures, where intermediates and experts performed to comparable standards (p > 0.6). A total of 85% (18 of 21) of the subjects found the dynamic compression plate model, and 57% (12 of 21) found all the models, acceptable tools of assessment. This study has validated a low-cost, high-fidelity porcine dynamic compression plate model using video rating scores for skills assessment and movement analysis. It has also demonstrated that Synbone models for the application of an intramedullary nail and an external fixator are less sensitive and should be improved for further assessment of surgical skills in trauma. The availability of valid objective tools of assessment of surgical skills allows further studies into improving methods of training. PMID:18591610

  7. Rationality Validation of a Layered Decision Model for Network Defense

    SciTech Connect

    Wei, Huaqiang; Alves-Foss, James; Zhang, Du; Frincke, Deb

    2007-08-31

    We propose a cost-effective network defense strategy built on three key decision layers: security policies, defense strategies, and real-time defense tactics for countering immediate threats. A layered decision model (LDM) can be used to capture this decision process. The LDM helps decision-makers gain insight into the hierarchical relationships among inter-connected entities and decision types, and supports the selection of cost-effective defense mechanisms to safeguard computer networks. To be effective as a business tool, it is first necessary to validate the rationality of the model before applying it to real-world business cases. This paper describes our efforts in validating the LDM rationality through simulation.

  8. Daily validation procedure of chromatographic assay using gaussoexponential modelling.

    PubMed

    Tamisier-Karolak, S L; Tod, M; Bonnardel, P; Czok, M; Cardot, P

    1995-07-01

    High performance liquid chromatography is one of the most successful analytical methods used for the quantitative determination of drugs in biological samples. However, this method is marked by a lack of performance reproducibility: chromatographic peaks become wider and even asymmetrical as the column ages. These progressive changes in the chromatographic parameters have to be taken into account when evaluating the validation criteria for the method. These criteria change as the column ages, leading to the need for new estimations to assure the quality of the results. Procedures are proposed for the daily determination of some validation criteria using the exponentially modified Gaussian (EMG) model of the chromatographic peak. This modelling has been studied on simulated chromatographic peaks in order to obtain the relationships between chromatographic measurements and EMG parameters. PMID:8580155
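
    For reference, a sketch of the exponentially modified Gaussian peak shape is given below; the parameter values are illustrative, not taken from the paper.

      # Exponentially modified Gaussian (EMG): a Gaussian of width sigma centred at
      # mu, convolved with an exponential decay of time constant tau.
      import numpy as np
      from scipy.special import erfc

      def emg(t, area, mu, sigma, tau):
          z = (sigma / tau - (t - mu) / sigma) / np.sqrt(2.0)
          return (area / (2.0 * tau)
                  * np.exp(sigma**2 / (2.0 * tau**2) - (t - mu) / tau)
                  * erfc(z))

      t = np.linspace(0, 10, 500)           # retention time (min), illustrative
      peak = emg(t, area=1.0, mu=4.0, sigma=0.15, tau=0.4)
      print(f"peak maximum at t = {t[np.argmax(peak)]:.2f} min")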

  9. Approaches to Validation of Models for Low Gravity Fluid Behavior

    NASA Technical Reports Server (NTRS)

    Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad

    2005-01-01

    This paper details the authors' experiences with the validation of computer models to predict low-gravity fluid behavior. It reviews the literature on low-gravity fluid behavior as a starting point for developing a baseline set of test cases. It then examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues are that most of the data are described by empirical correlations rather than fundamental relations; detailed measurements of the flow field have not been made; free-surface shapes are observed only through thick plastic cylinders and are therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days, whereas the zero-gravity time available has been only seconds.

  10. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it reflects the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.
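
    The IPM calculation can be illustrated with a minimal sketch (not the authors' NASA formulation): lower and upper linear bounds are sought that enclose every observation while minimizing the mean interval width, which is a linear program.

      # Interval Predictor Model sketch: enclosing linear bounds of minimal mean
      # spread, solved as a linear program over synthetic data.
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(2)
      x = np.linspace(0.0, 1.0, 40)
      y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)     # synthetic "experiments"

      # decision vector: [a_lo, b_lo, a_hi, b_hi] for bounds a + b*x
      n = x.size
      c = np.array([-1.0, -x.mean(), 1.0, x.mean()])       # minimize mean(hi - lo)
      A_ub = np.zeros((2 * n, 4))
      A_ub[:n, 0], A_ub[:n, 1] = 1.0, x                    # lower bound <= y_i
      A_ub[n:, 2], A_ub[n:, 3] = -1.0, -x                  # upper bound >= y_i
      b_ub = np.concatenate([y, -y])
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)
      a_lo, b_lo, a_hi, b_hi = res.x
      print(f"interval at x=0.5: [{a_lo + 0.5 * b_lo:.2f}, {a_hi + 0.5 * b_hi:.2f}]")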

  11. A benchmark for the validation of solidification modelling algorithms

    NASA Astrophysics Data System (ADS)

    Kaschnitz, E.; Heugenhauser, S.; Schumacher, P.

    2015-06-01

    This work presents two three-dimensional solidification models, which were solved by several commercial and open-source solvers (MAGMASOFT, FLOW-3D, ProCAST, WinCast, ANSYS, and OpenFOAM). Surprisingly, the results show noticeable differences. The results are analyzed in a manner similar to a round-robin test procedure to obtain reference values for temperatures and their uncertainties at selected positions in the model. The first model is similar to an adiabatic calorimeter with an aluminum alloy solidifying in a copper block. For this model, an analytical solution for the overall temperature at steady state can be calculated. The second model implements additional heat transfer boundary conditions at outer faces. The geometry of the models, the initial and boundary conditions, as well as the material properties are kept as simple as possible but, nevertheless, close to a realistic solidification situation. The resulting temperature data can be used to validate self-written solidification solvers and to check the accuracy of commercial solidification programs.

  12. Finite element modeling for validation of structural damage identification experimentation.

    SciTech Connect

    Stinemates, D. W.; Bennett, J. G.

    2001-01-01

    The project described in this report was performed to couple experimental and analytical techniques in the field of structural health monitoring and damage identification. To do this, a finite element model was constructed of a simulated three-story building used for damage identification experiments. The model was used in conjunction with data from the physical structure to research damage identification algorithms. Of particular interest was modeling slip in joints as a function of bolt torque and predicting the smallest change of torque that could be detected experimentally. After being validated with results from the physical structure, the model was used to produce data to test the capabilities of damage identification algorithms. This report describes the finite element model constructed, the results obtained, and proposed future use of the model.

  13. In-Drift Microbial Communities Model Validation Calculations

    SciTech Connect

    D. M. Jolley

    2001-09-24

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and in the laboratory, and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  14. Pharmacophore modeling studies of type I and type II kinase inhibitors of Tie2.

    PubMed

    Xie, Qing-Qing; Xie, Huan-Zhang; Ren, Ji-Xia; Li, Lin-Li; Yang, Sheng-Yong

    2009-02-01

    In this study, chemical-feature-based pharmacophore models of type I and type II kinase inhibitors of Tie2 have been developed with the aid of the HipHop and HypoRefine modules within the Catalyst program package. The best HipHop pharmacophore model Hypo1_I for type I kinase inhibitors contains one hydrogen-bond acceptor, one hydrogen-bond donor, one general hydrophobic, one hydrophobic aromatic, and one ring aromatic feature. The best HypoRefine model Hypo1_II for type II kinase inhibitors, which was characterized by the best correlation coefficient (0.976032) and the lowest RMSD (0.74204), consists of two hydrogen-bond donors, one hydrophobic aromatic, and two general hydrophobic features, as well as two excluded volumes. These pharmacophore models have been validated by using test set and/or cross-validation methods, which shows that both Hypo1_I and Hypo1_II have good predictive ability. The spatial arrangement of the pharmacophore features in Hypo1_II is consistent with the locations of the three portions making up a typical type II kinase inhibitor, namely, the portion occupying the ATP binding region (ATP-binding-region portion, AP), that occupying the hydrophobic region (hydrophobic-region portion, HP), and that linking AP and HP (bridge portion, BP). Our study also reveals that the ATP-binding-region portion plays an important role in the bioactivity of type II kinase inhibitors. Structural modifications on this portion should be helpful to further improve the inhibitory potency of type II kinase inhibitors. PMID:19138543

  15. A validated approach for modeling collapse of steel structures

    NASA Astrophysics Data System (ADS)

    Saykin, Vitaliy Victorovich

    A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during the lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state of the art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as a function of triaxiality is employed. The models used to determine fracture initiation, softening, and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are shown. The calibration is performed using a particle swarm optimization algorithm, calibrated against circumferentially notched tensile coupons, to establish accurate parameters. It is shown that consistent, accurate predictions are attained using the chosen models. The variation of triaxiality in steel material during plastic hardening and softening is reported. The range of triaxiality in steel structures undergoing collapse is investigated in detail and the accuracy of the chosen finite element deletion approaches is discussed. This is done through validation of different structural components and structural frames undergoing severe fracture and collapse.
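
    A minimal particle swarm optimization loop of the kind used for the calibration is sketched below; the two-parameter objective is a hypothetical stand-in for the misfit to the notched-coupon data, not the author's actual fracture model.

      # Basic particle swarm optimization: particles track personal and global bests
      # and are pulled toward both while retaining inertia.
      import numpy as np

      def misfit(params):
          # Hypothetical calibration objective: distance to assumed "true" parameters.
          return np.sum((params - np.array([1.5, 0.3])) ** 2)

      rng = np.random.default_rng(3)
      n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
      pos = rng.uniform(0.0, 3.0, size=(n_particles, 2))
      vel = np.zeros_like(pos)
      pbest = pos.copy()
      pbest_val = np.array([misfit(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)]

      for _ in range(n_iter):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          vals = np.array([misfit(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[np.argmin(pbest_val)]
      print("calibrated parameters:", np.round(gbest, 3))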

  16. Validation of a transparent decision model to rate drug interactions

    PubMed Central

    2012-01-01

    Background Multiple databases provide ratings of drug-drug interactions. The ratings are often based on different criteria and lack background information on the decision making process. User acceptance of rating systems could be improved by providing a transparent decision path for each category. Methods We rated 200 randomly selected potential drug-drug interactions using a transparent decision model developed by our team. The cases were generated from ward round observations and physicians’ queries from an outpatient setting. We compared our ratings to those assigned by a senior clinical pharmacologist and by a standard interaction database, and thus validated the model. Results The decision model rated consistently with the standard database and the pharmacologist in 94 and 156 cases, respectively. In two cases the model decision required correction. Following removal of systematic model construction differences, the decision model was fully consistent with the other rating systems. Conclusion The decision model reproducibly rates interactions and elucidates systematic differences. We propose to supply validated decision paths alongside the interaction rating to improve comprehensibility and to enable physicians to interpret the ratings in a clinical context. PMID:22950884

  17. Validation of the WATEQ4 geochemical model for uranium

    SciTech Connect

    Krupka, K.M.; Jenne, E.A.; Deutsch, W.J.

    1983-09-01

    As part of the Geochemical Modeling and Nuclide/Rock/Groundwater Interactions Studies Program, a study was conducted to partially validate the WATEQ4 aqueous speciation-solubility geochemical model for uranium. The solubility controls determined with the WATEQ4 geochemical model were in excellent agreement with those laboratory studies in which the solids schoepite (UO2(OH)2·H2O), UO2(OH)2, and rutherfordine (UO2CO3) were identified as actual solubility controls for uranium. The results of modeling solution analyses from laboratory studies of uranyl phosphate solids, however, identified possible errors in the characterization of solids in the original solubility experiments. As part of this study, significant deficiencies in the WATEQ4 thermodynamic data base for uranium solutes and solids were corrected. Revisions included recalculation of selected uranium reactions. Additionally, thermodynamic data for the hydroxyl complexes of U(VI), including anionic U(VI) species, were evaluated (to the extent permitted by the available data). Vanadium reactions were also added to the thermodynamic data base because uranium-vanadium solids can exist in natural ground-water systems. This study is only a partial validation of the WATEQ4 geochemical model because the available laboratory solubility studies do not cover the range of solid phases, alkaline pH values, and concentrations of inorganic complexing ligands needed to evaluate the potential solubility of uranium in ground waters associated with various proposed nuclear waste repositories. Further validation of this or other geochemical models for uranium will require careful determinations of uraninite solubility over the pH range of 7 to 10 under highly reducing conditions and of uranyl hydroxide and phosphate solubilities over the pH range of 7 to 10 under oxygenated conditions.

  18. Model selection, identification and validation in anaerobic digestion: a review.

    PubMed

    Donoso-Bravo, Andres; Mailier, Johan; Martin, Cristina; Rodríguez, Jorge; Aceves-Lara, César Arturo; Vande Wouwer, Alain

    2011-11-01

    Anaerobic digestion enables waste (water) treatment and energy production in the form of biogas. The successful implementation of this process has led to increasing interest worldwide. However, anaerobic digestion is a complex biological process, in which hundreds of microbial populations are involved, and whose start-up and operation are delicate issues. In order to better understand the process dynamics and to optimize the operating conditions, the availability of dynamic models is of paramount importance. Such models have to be inferred from prior knowledge and experimental data collected from real plants. Modeling and parameter identification are vast subjects, offering a wide range of approaches and methods, which can be difficult for scientists and engineers dedicated to plant operation and improvement to fully understand. This review article discusses existing modeling frameworks and methodologies for parameter estimation and model validation in the field of anaerobic digestion processes. The point of view is pragmatic, intentionally focusing on simple but efficient methods. PMID:21920578

  19. Validating a spatially distributed hydrological model with soil morphology data

    NASA Astrophysics Data System (ADS)

    Doppler, T.; Honti, M.; Zihlmann, U.; Weisskopf, P.; Stamm, C.

    2013-10-01

    Spatially distributed hydrological models are popular tools in hydrology and they are claimed to be useful to support management decisions. Despite the high spatial resolution of the computed variables, calibration and validation are often carried out only on discharge time-series at specific locations due to the lack of spatially distributed reference data. Because of this restriction, the predictive power of these models, with regard to predicted spatial patterns, can usually not be judged. An example of spatial predictions in hydrology is the prediction of saturated areas in agricultural catchments. These areas can be important source areas for the transport of agrochemicals to the stream. We set up a spatially distributed model to predict saturated areas in a 1.2 km2 catchment in Switzerland with moderate topography. Around 40% of the catchment area is artificially drained. We measured weather data, discharge and groundwater levels in 11 piezometers for 1.5 yr. To broaden the spatially distributed data sets that can be used for model calibration and validation, we translated soil morphological data available from soil maps into an estimate of the duration of soil saturation in the soil horizons. We used redox-morphology signs for these estimates. This resulted in a data set with high spatial coverage on which the model predictions were validated. In general, these saturation estimates corresponded well to the measured groundwater levels. We worked with a model that would be applicable for management decisions because of its fast calculation speed and rather low data requirements. We simultaneously calibrated the model to the groundwater levels in the piezometers and discharge. The model was able to reproduce the general hydrological behavior of the catchment in terms of discharge and absolute groundwater levels. However, the accuracy of the groundwater level predictions was not high enough to be used for the prediction of saturated areas. The groundwater level dynamics were not adequately reproduced and the predicted spatial patterns of soil saturation did not correspond to the patterns estimated from the soil map. Our results indicate that an accurate prediction of the groundwater level dynamics of the shallow groundwater in our catchment that is subject to artificial drainage would require a more complex model. In particular, high spatial resolution and very detailed process representations at the boundary between the unsaturated and the saturated zone are expected to be crucial. The data needed for such a detailed model are not generally available. The high computational demand and the complex model setup would require more resources than the direct identification of saturated areas in the field. This severely hampers the practical use of such models despite their usefulness for scientific purposes.

  20. Infrared ship signature prediction, model validation, and sky radiance

    NASA Astrophysics Data System (ADS)

    Neele, Filip

    2005-05-01

    The increased interest during the last decade in the infrared signature of (new) ships results in a clear need for validated infrared signature prediction codes. This paper presents the results of comparing an in-house developed signature prediction code with measurements made in the 3-5 μm band in both clear-sky and overcast conditions. During the measurements, sensors measured the short-wave and long-wave irradiation from sun and sky, which forms a significant part of the heat flux exchange between ship and environment, but is linked weakly to the standard meteorological data measured routinely (e.g., air temperature, relative humidity, wind speed, pressure, cloud cover). The aim of the signature model validation is to check the heat flux balance algorithm in the model and the representation of the target. Any uncertainties in the prediction of the radiative properties of the environment (which are usually computed with a code like MODTRAN) must be minimised. It is shown that for the validation of signature prediction models the standard meteorological data are insufficient for the computation of sky radiance and solar irradiation with atmospheric radiation models (MODTRAN). Comparisons between model predictions and data are shown for predictions computed with and without global irradiation data. The results underline the necessity of measuring the irradiation (from sun, sky, sea or land environment) on the target during a signature measurement trial. Only then does the trial produce the data needed as a reference for the computation of the infrared signature of the ship in conditions other than those during the trial.

  1. Validation Analysis of the Shoal Groundwater Flow and Transport Model

    SciTech Connect

    A. Hassan; J. Chapman

    2008-11-01

    Environmental restoration at the Shoal underground nuclear test is following a process prescribed by a Federal Facility Agreement and Consent Order (FFACO) between the U.S. Department of Energy, the U.S. Department of Defense, and the State of Nevada. Characterization of the site included two stages of well drilling and testing in 1996 and 1999, and development and revision of numerical models of groundwater flow and radionuclide transport. Agreement on a contaminant boundary for the site and a corrective action plan was reached in 2006. Later that same year, three wells were installed for the purposes of model validation and site monitoring. The FFACO prescribes a five-year proof-of-concept period for demonstrating that the site groundwater model is capable of producing meaningful results with an acceptable level of uncertainty. The corrective action plan specifies a rigorous seven-step validation process. The accepted groundwater model is evaluated using that process in light of the newly acquired data. The conceptual model of ground water flow for the Project Shoal Area considers groundwater flow through the fractured granite aquifer comprising the Sand Springs Range. Water enters the system by the infiltration of precipitation directly on the surface of the mountain range. Groundwater leaves the granite aquifer by flowing into alluvial deposits in the adjacent basins of Fourmile Flat and Fairview Valley. A groundwater divide is interpreted as coinciding with the western portion of the Sand Springs Range, west of the underground nuclear test, preventing flow from the test into Fourmile Flat. A very low conductivity shear zone east of the nuclear test roughly parallels the divide. The presence of these lateral boundaries, coupled with a regional discharge area to the northeast, is interpreted in the model as causing groundwater from the site to flow in a northeastward direction into Fairview Valley. Steady-state flow conditions are assumed given the absence of groundwater withdrawal activities in the area. The conceptual and numerical models were developed based upon regional hydrogeologic investigations conducted in the 1960s, site characterization investigations (including ten wells and various geophysical and geologic studies) at Shoal itself prior to and immediately after the test, and two site characterization campaigns in the 1990s for environmental restoration purposes (including eight wells and a year-long tracer test). The new wells are denoted MV-1, MV-2, and MV-3, and are located to the north-northeast of the nuclear test. The groundwater model was generally lacking data in the north-northeastern area; only HC-1 and the abandoned PM-2 wells existed in this area. The wells provide data on fracture orientation and frequency, water levels, hydraulic conductivity, and water chemistry for comparison with the groundwater model. A total of 12 real-number validation targets were available for the validation analysis, including five values of hydraulic head, three hydraulic conductivity measurements, three hydraulic gradient values, and one angle value for the lateral gradient in radians. In addition, the fracture dip and orientation data provide comparisons to the distributions used in the model and radiochemistry is available for comparison to model output. Goodness-of-fit analysis indicates that some of the model realizations correspond well with the newly acquired conductivity, head, and gradient data, while others do not.
Other tests indicated that additional model realizations may be needed to test if the model input distributions need refinement to improve model performance. This approach (generating additional realizations) was not followed because it was realized that there was a temporal component to the data disconnect: the new head measurements are on the high side of the model distributions, but the heads at the original calibration locations themselves have also increased over time. This indicates that the steady-state assumption of the groundwater model is in error. To test the robustness of the model d

  2. Calibration and validation of DRAINMOD to model bioretention hydrology

    NASA Astrophysics Data System (ADS)

    Brown, R. A.; Skaggs, R. W.; Hunt, W. F.

    2013-04-01

    Previous field studies have shown that the hydrologic performance of bioretention cells varies greatly because of factors such as underlying soil type, physiographic region, drainage configuration, surface storage volume, drainage area to bioretention surface area ratio, and media depth. To more accurately describe bioretention hydrologic response, a long-term hydrologic model that generates a water balance is needed. Some current bioretention models lack the ability to perform long-term simulations and others have never been calibrated against field-monitored bioretention cells with underdrains. All peer-reviewed models lack the ability to simultaneously perform both of the following functions: (1) model an internal water storage (IWS) zone drainage configuration and (2) account for soil-water content using the soil-water characteristic curve. DRAINMOD, a widely-accepted agricultural drainage model, was used to simulate the hydrologic response of runoff entering a bioretention cell. The concepts of water movement in bioretention cells are very similar to those of agricultural fields with drainage pipes, so many bioretention design specifications corresponded directly to DRAINMOD inputs. Detailed hydrologic measurements were collected from two bioretention field sites in Nashville and Rocky Mount, North Carolina, to calibrate and test the model. Each field site had two sets of bioretention cells with varying media depths, media types, drainage configurations, underlying soil types, and surface storage volumes. After 12 months, one of these characteristics was altered - surface storage volume at Nashville and IWS zone depth at Rocky Mount. At Nashville, during the second year (post-repair period), the Nash-Sutcliffe coefficients for drainage and exfiltration/evapotranspiration (ET) both exceeded 0.8 during the calibration and validation periods. During the first year (pre-repair period), the Nash-Sutcliffe coefficients for drainage, overflow, and exfiltration/ET ranged from 0.6 to 0.9 during both the calibration and validation periods. The bioretention cells at Rocky Mount included an IWS zone. For both the calibration and validation periods, the modeled volume of exfiltration/ET was within 1% and 5% of the estimated volume for the cells with sand (Sand cell) and sandy clay loam (SCL cell) underlying soils, respectively. Nash-Sutcliffe coefficients for the SCL cell during both the calibration and validation periods were 0.92.
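
    For reference, the Nash-Sutcliffe efficiency used above to judge the fits can be computed as sketched below; the observed and modeled values are illustrative, not the study's records.

      # Nash-Sutcliffe efficiency: 1 minus the ratio of model error variance to the
      # variance of the observations about their mean (1.0 is a perfect fit).
      import numpy as np

      def nash_sutcliffe(observed, modeled):
          observed, modeled = np.asarray(observed), np.asarray(modeled)
          return 1.0 - np.sum((observed - modeled) ** 2) / np.sum(
              (observed - observed.mean()) ** 2)

      obs = [12.0, 30.5, 8.2, 0.0, 15.1]   # e.g. event drainage volumes (m^3), assumed
      sim = [10.8, 28.9, 9.0, 0.5, 16.3]
      print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")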

  3. A turbulence model for iced airfoils and its validation

    NASA Technical Reports Server (NTRS)

    Shin, Jaiwon; Chen, Hsun H.; Cebeci, Tuncer

    1992-01-01

    A turbulence model based on the extension of the algebraic eddy viscosity formulation of Cebeci and Smith developed for two dimensional flows over smooth and rough surfaces is described for iced airfoils and validated for computed ice shapes obtained for a range of total temperatures varying from 28 to -15 F. The validation is made with an interactive boundary layer method which uses a panel method to compute the inviscid flow and an inverse finite difference boundary layer method to compute the viscous flow. The interaction between inviscid and viscous flows is established by the use of the Hilbert integral. The calculated drag coefficients compare well with recent experimental data taken at the NASA-Lewis Icing Research Tunnel (IRT) and show that, in general, the drag increase due to ice accretion can be predicted well and efficiently.

  4. Spectral modeling of Type II SNe

    NASA Astrophysics Data System (ADS)

    Dessart, Luc

    2015-08-01

    The red supergiant phase represents the final stage of evolution in the life of moderate-mass massive stars (8-25 Msun). Hidden from view, the core changes its structure considerably, progressing through the advanced stages of nuclear burning, and eventually becomes degenerate. Upon reaching the Chandrasekhar mass, this Fe or ONeMg core collapses, leading to the formation of a proto-neutron star. A type II supernova results if the shock that forms at core bounce eventually wins over the envelope accretion and reaches the progenitor surface. The electromagnetic display of such core-collapse SNe starts with this shock breakout, and persists for months as the ejecta releases the energy deposited initially by the shock or continuously through radioactive decay. Over a timescale of weeks to months, the originally optically-thick ejecta thins out and turns nebular. SN radiation contains a wealth of information about the explosion physics (energy, explosive nucleosynthesis) and the progenitor properties (structure and composition). Polarised radiation also offers signatures that can help constrain the morphology of the ejecta. In this talk, I will review the current status of type II SN spectral modelling, and emphasise that a proper solution requires a time-dependent treatment of the radiative transfer problem. I will discuss the wealth of information that can be gleaned from spectra as well as light curves, from both the early times (photospheric phase) and late times (nebular phase). I will discuss the diversity of Type II SNe properties and how they are related to the diversity of red supergiant stars from which they originate. SN radiation offers an alternate means of constraining the properties of red-supergiant stars. To wrap up, I will illustrate how SNe II-P can also be used as probes, for example to constrain the metallicity of their environment.

  5. Image quality assessment in digital mammography: part II. NPWE as a validated alternative for contrast detail analysis

    NASA Astrophysics Data System (ADS)

    Monnin, P.; Marshall, N. W.; Bosmans, H.; Bochud, F. O.; Verdun, F. R.

    2011-07-01

    Assessment of image quality for digital x-ray mammography systems used in European screening programs relies mainly on contrast-detail CDMAM phantom scoring and requires the acquisition and analysis of many images in order to reduce variability in threshold detectability. Part II of this study proposes an alternative method based on the detectability index (d') calculated for a non-prewhitened model observer with an eye filter (NPWE). The detectability index was calculated from the normalized noise power spectrum and image contrast, both measured from an image of a 5 cm poly(methyl methacrylate) phantom containing a 0.2 mm thick aluminium square, and the pre-sampling modulation transfer function. This was performed as a function of air kerma at the detector for 11 different digital mammography systems. These calculated d' values were compared against threshold gold thickness (T) results measured with the CDMAM test object and against derived theoretical relationships. A simple relationship was found between T and d', as a function of detector air kerma; a linear relationship was found between d' and contrast-to-noise ratio. The values of threshold thickness used to specify acceptable performance in the European Guidelines for 0.10 and 0.25 mm diameter discs were equivalent to threshold calculated detectability indices of 1.05 and 6.30, respectively. The NPWE method is a validated alternative to CDMAM scoring for use in the image quality specification, quality control and optimization of digital x-ray systems for screening mammography.
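
    The detectability-index calculation described above can be sketched as a radial integration; the MTF, normalized NPS, and eye-filter shapes below are illustrative stand-ins rather than measured system data, and the constants are assumptions.

      # NPWE detectability index d' for a disc detection task, via radial integration
      # of the task spectrum S, detector MTF, eye filter E, and normalized NPS.
      import numpy as np
      from scipy.special import j1

      contrast = 0.05                     # object contrast (assumed)
      r0 = 0.1                            # disc radius in mm (0.2 mm detail)
      f = np.linspace(1e-3, 10.0, 2000)   # spatial frequency (cycles/mm)

      S = (r0 / f) * j1(2.0 * np.pi * r0 * f)        # disc task spectrum
      MTF = np.exp(-f / 4.0)                         # stand-in detector MTF
      NNPS = 1e-6 * (1.0 + 0.5 / f)                  # stand-in normalized NPS (mm^2)
      E = f ** 1.3 * np.exp(-0.35 * f ** 2)          # stand-in eye filter

      num = (2.0 * np.pi * np.trapz(S**2 * MTF**2 * E**2 * f, f)) ** 2
      den = 2.0 * np.pi * np.trapz(S**2 * MTF**2 * E**4 * NNPS * f, f)
      d_prime = contrast * np.sqrt(num / den)
      print(f"d' = {d_prime:.2f}")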

  6. Validating the Framingham Hypertension Risk Score: results from the Whitehall II study.

    PubMed

    Kivimäki, Mika; Batty, G David; Singh-Manoux, Archana; Ferrie, Jane E; Tabak, Adam G; Jokela, Markus; Marmot, Michael G; Smith, George Davey; Shipley, Martin J

    2009-09-01

    A promising hypertension risk prediction score using data from the US Framingham Offspring Study has been developed, but this score has not been tested in other cohorts. We examined the predictive performance of the Framingham hypertension risk score in a European population, the Whitehall II Study. Participants were 6704 London-based civil servants aged 35 to 68 years, 31% women, free from prevalent hypertension, diabetes mellitus, and coronary heart disease. Standard clinical examinations of blood pressure, weight and height, current cigarette smoking, and parental history of hypertension were undertaken every 5 years for a total of 4 times. We recorded a total of 2043 incident (new-onset) cases of hypertension in three 5-year baseline follow-up data cycles. Both discrimination (C statistic: 0.80) and calibration (Hosmer-Lemeshow chi(2): 11.5) of the Framingham hypertension risk score were good. Agreement between the predicted and observed hypertension incidences was excellent across the risk score distribution. The overall predicted:observed ratio was 1.08, slightly better among individuals >50 years of age (0.99 in men and 1.02 in women) than in younger participants (1.16 in men and 1.18 in women). Reclassification with a modified score on the basis of our study population did not improve the prediction (net reclassification improvement: -0.5%; 95% CI: -2.5% to 1.5%). These data suggest that the Framingham hypertension risk score provides a valid tool with which to estimate near-term risk of developing hypertension. PMID:19597041

  7. PASTIS: Bayesian extrasolar planet validation - II. Constraining exoplanet blend scenarios using spectroscopic diagnoses

    NASA Astrophysics Data System (ADS)

    Santerne, A.; Díaz, R. F.; Almenara, J.-M.; Bouchy, F.; Deleuil, M.; Figueira, P.; Hébrard, G.; Moutou, C.; Rodionov, S.; Santos, N. C.

    2015-08-01

    The statistical validation of transiting exoplanets proved to be an efficient technique to secure the nature of small exoplanet signals which cannot be established by purely spectroscopic means. However, the spectroscopic diagnoses are providing us with useful constraints on the presence of blended stellar contaminants. In this paper, we present how a contaminating star affects the measurements of the various spectroscopic diagnoses as a function of the parameters of the target and contaminating stars using the model implemented into the PASTIS planet-validation software. We find particular cases for which a blend might produce a large radial velocity signal but no bisector variation. It might also produce a bisector variation anticorrelated with the radial velocity one, as in the case of stellar spots. In those cases, the full width at half-maximum variation provides complementary constraints. These results can be used to constrain blend scenarios for transiting planet candidates or radial velocity planets. We review all the spectroscopic diagnoses reported in the literature so far, especially the ones to monitor the line asymmetry. We estimate their uncertainty and compare their sensitivity to blends. Based on that, we recommend the use of BiGauss which is the most sensitive diagnosis to monitor line-profile asymmetry. In this paper, we also investigate the sensitivity of the radial velocities to constrain blend scenarios and develop a formalism to estimate the level of dilution of a blended signal. Finally, we apply our blend model to re-analyse the spectroscopic diagnoses of HD 16702, an unresolved face-on binary which exhibits bisector variations.

  8. Validation of High Displacement Piezoelectric Actuator Finite Element Models

    NASA Technical Reports Server (NTRS)

    Taleghani, B. K.

    2000-01-01

    The paper presents the results obtained by using NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) finite element codes to predict doming of the THUNDER piezoelectric actuators during the manufacturing process and subsequent straining due to an applied input voltage. To effectively use such devices in engineering applications, modeling and characterization are essential. Length, width, dome height, and thickness are important parameters for users of such devices. Therefore, finite element models were used to assess the effects of these parameters. NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) used different methods for modeling piezoelectric effects. In NASTRAN(Registered Trademark), a thermal analogy was used to represent voltage at nodes as equivalent temperatures, while ANSYS(Registered Trademark) processed the voltage directly using piezoelectric finite elements. The results of the finite element models were validated against the experimental results.

  9. Atmospheric forcing validation for modeling the central Arctic

    NASA Astrophysics Data System (ADS)

    Makshtas, A.; Atkinson, D.; Kulakov, M.; Shutilin, S.; Krishfield, R.; Proshutinsky, A.

    2007-10-01

    We compare daily data from the National Center for Atmospheric Research and National Centers for Environmental Prediction "Reanalysis 1" project with observational data obtained from the North Pole drifting stations in order to validate the atmospheric forcing data used in coupled ice-ocean models. This analysis is conducted to assess the role of errors associated with model forcing before performing model verifications against observed ocean variables. Our analysis shows an excellent agreement between observed and reanalysis sea level pressures and a relatively good correlation between observed and reanalysis surface winds. The observed temperature is in good agreement with reanalysis data only in winter. Specific air humidity and cloudiness are not reproduced well by reanalysis and are not recommended for model forcing. An example sensitivity study demonstrates that the equilibrium ice thickness obtained using NP forcing is twice as thick as that obtained using reanalysis forcing.

  10. MODEL VALIDATION FOR A NONINVASIVE ARTERIAL STENOSIS DETECTION PROBLEM

    PubMed Central

    BANKS, H. THOMAS; HU, SHUHUA; KENZ, ZACKARY R.; KRUSE, CAROLA; SHAW, SIMON; WHITEMAN, JOHN; BREWIN, MARK P.; GREENWALD, STEPHEN E.; BIRCH, MALCOLM J.

    2014-01-01

    A current thrust in medical research is the development of a non-invasive method for detection, localization, and characterization of an arterial stenosis (a blockage or partial blockage in an artery). A method has been proposed to detect shear waves in the chest cavity which have been generated by disturbances in the blood flow resulting from a stenosis. In order to develop this methodology further, we use one-dimensional shear wave experimental data from novel acoustic phantoms to validate a corresponding viscoelastic mathematical model. We estimate model parameters which give a good fit (in a sense to be precisely defined) to the experimental data, and use asymptotic error theory to provide confidence intervals for parameter estimates. Finally, since a robust error model is necessary for accurate parameter estimates and confidence analysis, we include a comparison of absolute and relative models for measurement error. PMID:24506547
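
    The distinction drawn above between absolute and relative measurement-error models can be illustrated with a small least-squares sketch: the two assumptions differ only in whether residuals are weighted by the model value. The exponential "model" and the data below are hypothetical stand-ins, not the authors' viscoelastic model or phantom data.

        import numpy as np
        from scipy.optimize import least_squares

        def model(t, a, tau):
            # stand-in forward model; the real one is the viscoelastic shear-wave model
            return a * np.exp(-t / tau)

        t = np.linspace(0.0, 1.0, 50)
        rng = np.random.default_rng(1)
        y = model(t, 2.0, 0.3) * (1.0 + 0.05 * rng.standard_normal(t.size))  # relative noise

        def residuals(theta, relative):
            f = model(t, *theta)
            return (y - f) / f if relative else (y - f)   # weight by model value for relative error

        fit_abs = least_squares(residuals, x0=[1.0, 0.5], args=(False,))
        fit_rel = least_squares(residuals, x0=[1.0, 0.5], args=(True,))
        print("absolute-error fit:", fit_abs.x)
        print("relative-error fit:", fit_rel.x)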

  11. Characterization and Validation of a Canine Pruritic Model.

    PubMed

    Aberg, Gunnar A K; Arulnesan, Nada; Bolger, Gordon T; Ciofalo, Vincent B; Pucaj, Kresimir

    2015-08-01

    Preclinical Research The mechanisms mediating canine pruritus are poorly understood with few models due to limited methods for inducing pruritus in dogs. Chloroquine (CQ) is a widely used antimalarial drug that causes pruritus in humans and mice. We have developed a canine model of pruritus where CQ reliably induced pruritus in all dogs tested following intravenous administration. This model is presently being used to test antipruritic activity of drug candidate molecules. The model has been validated in a blinded cross-over study in eight beagle dogs using the reference standards, oclacitinib and prednisolone, and has been used to test a new compound, norketotifen. All compounds reduced CQ-induced pruritus in the dog. The sensitivity of the model was demonstrated using norketotifen, which, at three dose levels, dose-dependently inhibited scratching events compared with placebo. PMID:26220424

  12. Validation of coupled atmosphere-fire behavior models

    SciTech Connect

    Bossert, J.E.; Reisner, J.M.; Linn, R.R.; Winterkamp, J.L.; Schaub, R.; Riggan, P.J.

    1998-12-31

    Recent advances in numerical modeling and computer power have made it feasible to simulate the dynamical interaction and feedback between the heat and turbulence induced by wildfires and the local atmospheric wind and temperature fields. At Los Alamos National Laboratory, the authors have developed a modeling system that includes this interaction by coupling a high resolution atmospheric dynamics model, HIGRAD, with a fire behavior model, BEHAVE, to predict the spread of wildfires. The HIGRAD/BEHAVE model is run at very high resolution to properly resolve the fire/atmosphere interaction. At present, these coupled wildfire model simulations are computationally intensive. The additional complexity of these models requires sophisticated methods for assuring their reliability in real world applications. With this in mind, a substantial part of the research effort is directed at model validation. Several instrumented prescribed fires have been conducted, with multi-agency support and participation, in chaparral, marsh, and scrub environments in coastal areas of Florida and inland California. In this paper, the authors first describe the data required to initialize the components of the wildfire modeling system. Then they present results from one of the Florida fires, and discuss a strategy for further testing and improvement of coupled weather/wildfire models.

  13. Validating and Verifying Biomathematical Models of Human Fatigue

    NASA Technical Reports Server (NTRS)

    Martinez, Siera Brooke; Quintero, Luis Ortiz; Flynn-Evans, Erin

    2015-01-01

    Airline pilots experience acute and chronic sleep deprivation, sleep inertia, and circadian desynchrony due to the need to schedule flight operations around the clock. This sleep loss and circadian desynchrony gives rise to cognitive impairments, reduced vigilance and inconsistent performance. Several biomathematical models, based principally on patterns observed in circadian rhythms and homeostatic drive, have been developed to predict a pilot's level of fatigue or alertness. These models allow the Federal Aviation Administration (FAA) and commercial airlines to make decisions about pilot capabilities and flight schedules. Although these models have been validated in a laboratory setting, they have not been thoroughly tested in operational environments where uncontrolled factors, such as environmental sleep disrupters, caffeine use and napping, may impact actual pilot alertness and performance. We will compare the predictions of three prominent biomathematical fatigue models (McCauley Model, Harvard Model, and the privately-sold SAFTE-FAST Model) to actual measures of alertness and performance. We collected sleep logs, movement and light recordings, psychomotor vigilance task (PVT) data, and urinary melatonin (a marker of circadian phase) from 44 pilots in a short-haul commercial airline over one month. We will statistically compare the model predictions to lapses on the PVT and to circadian phase. We will calculate the sensitivity and specificity of each model prediction under different scheduling conditions. Our findings will aid operational decision-makers in determining the reliability of each model under real-world scheduling situations.
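
    The planned sensitivity/specificity comparison can be sketched as below: a model alert (here, an effectiveness score falling below a threshold) is scored against observed PVT lapses. The score, threshold, and lapse data are synthetic placeholders, not the study's data or the commercial models' outputs.

        import numpy as np

        def sensitivity_specificity(alert, lapse):
            tp = np.sum(alert & lapse)
            tn = np.sum(~alert & ~lapse)
            fp = np.sum(alert & ~lapse)
            fn = np.sum(~alert & lapse)
            return tp / (tp + fn), tn / (tn + fp)

        rng = np.random.default_rng(2)
        score = rng.uniform(0.0, 100.0, size=500)                 # synthetic model effectiveness scores
        p_lapse = np.clip((70.0 - score) / 100.0, 0.02, None)     # synthetic lapse probability
        lapse = rng.random(500) < p_lapse
        sens, spec = sensitivity_specificity(score < 70.0, lapse)
        print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")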

  14. Coupled Thermal-Chemical-Mechanical Modeling of Validation Cookoff Experiments

    SciTech Connect

    ERIKSON,WILLIAM W.; SCHMITT,ROBERT G.; ATWOOD,A.I.; CURRAN,P.D.

    2000-11-27

    The cookoff of energetic materials involves the combined effects of several physical and chemical processes. These processes include heat transfer, chemical decomposition, and mechanical response. The interaction and coupling between these processes influence both the time-to-event and the violence of reaction. The prediction of the behavior of explosives during cookoff, particularly with respect to reaction violence, is a challenging task. To this end, a joint DoD/DOE program has been initiated to develop models for cookoff, and to perform experiments to validate those models. In this paper, a series of cookoff analyses are presented and compared with data from a number of experiments for the aluminized, RDX-based, Navy explosive PBXN-109. The traditional thermal-chemical analysis is used to calculate time-to-event and characterize the heat transfer and boundary conditions. A reaction mechanism based on Tarver and McGuire's work on RDX was adjusted to match the spherical one-dimensional time-to-explosion data. The predicted time-to-event using this reaction mechanism compares favorably with the validation tests. Coupled thermal-chemical-mechanical analysis is used to calculate the mechanical response of the confinement and the energetic material state prior to ignition. The predicted state of the material includes the temperature, stress-field, porosity, and extent of reaction. There is little experimental data for comparison to these calculations. The hoop strain in the confining steel tube gives an estimation of the radial stress in the explosive. The inferred pressure from the measured hoop strain and calculated radial stress agree qualitatively. However, validation of the mechanical response model and the chemical reaction mechanism requires more data. A post-ignition burn dynamics model was applied to calculate the confinement dynamics. The burn dynamics calculations suffer from a lack of characterization of the confinement for the flaw-dominated failure mode experienced in the tests. High-pressure burning rates are needed for more detailed post-ignition studies. Sub-models for chemistry, mechanical response and burn dynamics need to be validated against data from less complex experiments. The sub-models can then be used in integrated analysis for comparison with experimental data taken during integrated tests.
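
    For orientation, the simplest relative of the time-to-event calculation described above is the adiabatic induction time for single-step Arrhenius kinetics; the sketch below uses representative RDX-like constants, not the PBXN-109 mechanism or the coupled thermal-chemical-mechanical codes.

        import numpy as np

        E = 2.0e5        # activation energy, J/mol (assumed)
        A = 1.0e16       # pre-exponential factor, 1/s (assumed)
        Q = 2.1e6        # heat of reaction, J/kg (assumed)
        cp = 1.1e3       # specific heat, J/kg-K (assumed)
        R = 8.314        # gas constant, J/mol-K

        def induction_time(T0):
            """Adiabatic thermal-explosion induction time for an initial temperature T0."""
            return (cp * R * T0**2) / (Q * E * A) * np.exp(E / (R * T0))

        for T0 in (450.0, 475.0, 500.0):
            print(f"T0 = {T0:.0f} K  ->  induction time ~ {induction_time(T0):.0f} s")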

  15. Cross-Cultural Validation of the Beck Depression Inventory-II across U.S. and Turkish Samples

    ERIC Educational Resources Information Center

    Canel-Cinarbas, Deniz; Cui, Ying; Lauridsen, Erica

    2011-01-01

    The purpose of this study was to test the Beck Depression Inventory-II (BDI-II) for factorial invariance across Turkish and U.S. college student samples. The results indicated that (a) a two-factor model has an adequate fit for both samples, thus providing evidence of configural invariance, and (b) there is a metric invariance but "no" sufficient…

  16. Validation of thermal models for a prototypical MEMS thermal actuator.

    SciTech Connect

    Gallis, Michail A.; Torczynski, John Robert; Piekos, Edward Stanley; Serrano, Justin Raymond; Gorby, Allen D.; Phinney, Leslie Mary

    2008-09-01

    This report documents technical work performed to complete the ASC Level 2 Milestone 2841: validation of thermal models for a prototypical MEMS thermal actuator. This effort requires completion of the following task: the comparison between calculated and measured temperature profiles of a heated stationary microbeam in air. Such heated microbeams are prototypical structures in virtually all electrically driven microscale thermal actuators. This task is divided into four major subtasks. (1) Perform validation experiments on prototypical heated stationary microbeams in which material properties such as thermal conductivity and electrical resistivity are measured if not known and temperature profiles along the beams are measured as a function of electrical power and gas pressure. (2) Develop a noncontinuum gas-phase heat-transfer model for typical MEMS situations including effects such as temperature discontinuities at gas-solid interfaces across which heat is flowing, and incorporate this model into the ASC FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (3) Develop a noncontinuum solid-phase heat transfer model for typical MEMS situations including an effective thermal conductivity that depends on device geometry and grain size, and incorporate this model into the FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (4) Perform combined gas-solid heat-transfer simulations using Calore with these models for the experimentally investigated devices, and compare simulation and experimental temperature profiles to assess model accuracy. These subtasks have been completed successfully, thereby completing the milestone task. Model and experimental temperature profiles are found to be in reasonable agreement for all cases examined. Modest systematic differences appear to be related to uncertainties in the geometric dimensions of the test structures and in the thermal conductivity of the polycrystalline silicon test structures, as well as uncontrolled nonuniform changes in this quantity over time and during operation.
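
    A much-reduced version of the gas-solid heat-transfer problem described above is a one-dimensional heated beam losing heat to the substrate through the surrounding gas; the finite-difference sketch below uses assumed dimensions and properties and is not the Calore model.

        import numpy as np

        L, thick = 400e-6, 2.25e-6     # beam length and thickness, m (assumed)
        k_si = 60.0                    # polysilicon thermal conductivity, W/m-K (assumed)
        g_gas = 5.0e3                  # gas conductance to substrate, W/m^2-K (assumed)
        q_joule = 1.0e11               # volumetric Joule heating, W/m^3 (assumed)
        T_anchor = 300.0               # anchor temperature, K

        n = 201
        dx = L / (n - 1)
        A = np.zeros((n, n))
        b = np.zeros(n)
        for i in range(1, n - 1):
            # k d2T/dx2 + q - (g/thick)(T - T_anchor) = 0
            A[i, i - 1] = A[i, i + 1] = k_si / dx**2
            A[i, i] = -2.0 * k_si / dx**2 - g_gas / thick
            b[i] = -q_joule - g_gas * T_anchor / thick
        A[0, 0] = A[-1, -1] = 1.0      # fixed-temperature anchors
        b[0] = b[-1] = T_anchor
        T = np.linalg.solve(A, b)
        print("peak temperature rise:", T.max() - T_anchor, "K")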

  17. Development and Validation of a 3-Dimensional CFB Furnace Model

    NASA Astrophysics Data System (ADS)

    Vepsäläinen, Ari; Myöhänen, Kari; Hyppänen, Timo; Leino, Timo; Tourunen, Antti

    At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process regarding solid mixing, combustion, emission formation and heat transfer. Results of laboratory and pilot scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results in industrial-scale CFB boilers, including furnace profile measurements, are carried out simultaneously with the development of 3-dimensional process modeling, which provides a chain of knowledge that is utilized as feedback for phenomenon research. Knowledge gathered by model validation studies and up-to-date parameter databases is utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to modeling of combustion and formation of char and volatiles of various fuel types in CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and bio-fuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window for characterization of fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace, and are used together with lateral temperature profiles at the bed and in the upper furnace to determine solid mixing and combustion model parameters. Modeling of char- and volatile-based NO formation is followed by analysis of the oxidizing and reducing regions that form as a result of the lower-furnace design and the mixing characteristics of fuel and combustion air, which affect the NO furnace profile through reduction and volatile-nitrogen reactions. This paper presents a CFB process analysis focused on combustion and NO profiles in pilot- and industrial-scale bituminous coal combustion.

  18. Deviatoric constitutive model: domain of strain rate validity

    SciTech Connect

    Zocher, Marvin A

    2009-01-01

    A case is made for using an enhanced methodology in determining the parameters that appear in a deviatoric constitutive model. Predictability rests on our ability to solve a properly posed initial boundary value problem (IBVP), which incorporates an accurate reflection of material constitutive behavior. That reflection is provided through the constitutive model. Moreover, the constitutive model is required for mathematical closure of the IBVP. Common practice in the shock physics community is to divide the Cauchy tensor into spherical and deviatoric parts, and to develop separate models for spherical and deviatoric constitutive response. Our focus shall be on the Cauchy deviator and deviatoric constitutive behavior. Discussions related to the spherical part of the Cauchy tensor are reserved for another time. A number of deviatoric constitutive models have been developed for utilization in the solution of IBVPs that are of interest to those working in the field of shock physics. All of these models are phenomenological and contain a number of parameters that must be determined in light of experimental data. The methodology employed in determining these parameters dictates the loading regime over which the model can be expected to be accurate. The focus of this paper is the methodology employed in determining model parameters and the consequences of that methodology as it relates to the domain of strain rate validity. We shall begin by describing the methodology that is typically employed. We shall discuss limitations imposed upon predictive capability by the typically employed methodology. We shall propose a modification to the typically employed methodology that significantly extends the domain of strain rate validity.

  19. Alkaline membrane fuel cell (AMFC) modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Sommer, E. M.; Martins, L. S.; Vargas, J. V. C.; Gardolinski, J. E. F. C.; Ordonez, J. C.; Marino, C. E. B.

    2012-09-01

    This paper aims to produce a dynamic model that is computationally fast to predict the response of the single AMFC according to variations of physical properties of the materials, and operating and design parameters. The model is based on electrochemical principles, and mass, momentum, energy and species conservation. It also takes into account pressure drop in the gas channels and the temperature gradient with respect to space in the flow direction. The simulation results comprise temperature distribution, net power and polarization curves, which were experimentally validated by direct comparison to voltage and current measurements performed in a cellulose-based AMFC prototype for different electrolyte (KOH) solution concentrations (y), showing good quantitative and qualitative agreement. It is concluded that the startup transient is short and that there are optimal values of y (≈40 wt.%) which lead to maximum power; these optima are shown experimentally here for the first time. In the process, the model was used to formulate empirical correlations for the exchange current density (i0) in the electrodes with respect to the electrolyte concentration for future fuel cell development. Therefore, the adjusted and validated model is expected to be a useful tool for AMFC control, design and optimization purposes.

  20. Validation of two air quality models for Indian mining conditions.

    PubMed

    Chaulya, S K; Ahmad, M; Singh, R S; Bandopadhyay, L K; Bondyopadhay, C; Mondal, G C

    2003-02-01

    All major mining activity, particularly opencast mining, contributes to the problem of suspended particulate matter (SPM) directly or indirectly. Assessment and prediction are therefore required to prevent and minimize the deterioration of air quality caused by SPM from various opencast mining operations. Determination of the SPM emission rate for these activities and validation of air quality models are the first and foremost concerns. In view of the above, this study was taken up to determine the SPM emission rates of various opencast mining activities and to validate two commonly used air quality models for Indian mining conditions. To achieve these objectives, eight coal and three iron ore mining sites were selected to generate site-specific emission data, considering the type of mining, method of working, geographical location, accessibility and, above all, resource availability. The study covers various mining activities and locations, including drilling, overburden loading and unloading, coal/mineral loading and unloading, coal handling or screening plants, exposed overburden dumps, stock yards, workshops, exposed pit surfaces, transport roads and haul roads. Validation was carried out with the Fugitive Dust Model (FDM) and the Point, Area and Line sources model (PAL2) by assigning the measured emission rate for each mining activity, meteorological data and other details of the respective mine as inputs to the models. Both models were run separately on the same set of input data for each mine to obtain the predicted SPM concentration at three receptor locations per mine. The receptor locations were selected so that they coincided with the places where actual field measurements of SPM concentration were carried out. Statistical analysis was carried out to assess the performance of the models based on the set of measured and predicted SPM concentration data. The coefficient of correlation for PAL2 and FDM was calculated to be 0.990-0.994 and 0.966-0.997, respectively, which shows fairly good agreement between measured and predicted values of SPM concentration. The average index-of-agreement values for PAL2 and FDM were found to be 0.665 and 0.752, respectively, indicating that the predictions of the PAL2 and FDM models are accurate to 66.5 and 75.2%, respectively. This indicates that the FDM model is better suited to Indian mining conditions. PMID:12602620
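
    The two performance statistics quoted above can be reproduced with a few lines once measured and predicted concentrations are available; the values below are hypothetical SPM concentrations, not the study's measurements.

        import numpy as np

        def index_of_agreement(obs, pred):
            """Willmott's index of agreement, d."""
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            num = np.sum((pred - obs) ** 2)
            den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
            return 1.0 - num / den

        measured = np.array([310.0, 420.0, 505.0, 610.0, 720.0])    # hypothetical SPM, ug/m3
        predicted = np.array([290.0, 450.0, 470.0, 640.0, 700.0])   # hypothetical model output
        print("coefficient of correlation:", np.corrcoef(measured, predicted)[0, 1])
        print("index of agreement:", index_of_agreement(measured, predicted))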

  1. Modal testing for model validation of structures with discrete nonlinearities

    PubMed Central

    Ewins, D. J.; Weekes, B.; delli Carri, A.

    2015-01-01

    Model validation using data from modal tests is now widely practiced in many industries for advanced structural dynamic design analysis, especially where structural integrity is a primary requirement. These industries tend to demand highly efficient designs for their critical structures which, as a result, are increasingly operating in regimes where traditional linearity assumptions are no longer adequate. In particular, many modern structures are found to contain localized areas, often around joints or boundaries, where the actual mechanical behaviour is far from linear. Such structures need to have appropriate representation of these nonlinear features incorporated into the otherwise largely linear models that are used for design and operation. This paper proposes an approach to this task which is an extension of existing linear techniques, especially in the testing phase, involving only just as much nonlinear analysis as is necessary to construct a model which is good enough, or ‘valid’: i.e. capable of predicting the nonlinear response behaviour of the structure under all in-service operating and test conditions with a prescribed accuracy. A short-list of methods described in the recent literature categorized using our framework is given, which identifies those areas in which further development is most urgently required. PMID:26303924

  2. Experimental Validation of a Pulse Tube Cfd Model

    NASA Astrophysics Data System (ADS)

    Taylor, R. P.; Nellis, G. F.; Klein, S. A.; Radebaugh, R.; Lewis, M.; Bradley, P.

    2010-04-01

    Computational fluid dynamic (CFD) analysis has been applied by various authors to study the processes occurring in the pulse tube cryocooler and carry out parametric design and optimization. However, a thorough and quantitative validation of the CFD model predictions against experimental data has not been accomplished. This is in part due to the difficulty associated with measuring the specific quantities of interest (e.g., internal enthalpy flows and acoustic power) rather than generic system performance (e.g., cooling power). This paper presents the experimental validation of a previously published two-dimensional, axisymmetric CFD model of the pulse tube and its associated flow transitions. The test facility designed for this purpose is unique in that it allows the precise measurement of the cold end acoustic power, regenerator loss, and cooling power. Therefore, it allows the separate and precise measurement of both the pulse tube loss and the regenerator loss. The experimental results are presented for various pulse tube and flow transition configurations operating at a cold end temperature of 80 K over a range of pressure ratios. The comparison of the model prediction to the experimental data is presented with discussion.

  3. Modal testing for model validation of structures with discrete nonlinearities.

    PubMed

    Ewins, D J; Weekes, B; Delli Carri, A

    2015-09-28

    Model validation using data from modal tests is now widely practiced in many industries for advanced structural dynamic design analysis, especially where structural integrity is a primary requirement. These industries tend to demand highly efficient designs for their critical structures which, as a result, are increasingly operating in regimes where traditional linearity assumptions are no longer adequate. In particular, many modern structures are found to contain localized areas, often around joints or boundaries, where the actual mechanical behaviour is far from linear. Such structures need to have appropriate representation of these nonlinear features incorporated into the otherwise largely linear models that are used for design and operation. This paper proposes an approach to this task which is an extension of existing linear techniques, especially in the testing phase, involving only just as much nonlinear analysis as is necessary to construct a model which is good enough, or 'valid': i.e. capable of predicting the nonlinear response behaviour of the structure under all in-service operating and test conditions with a prescribed accuracy. A short-list of methods described in the recent literature categorized using our framework is given, which identifies those areas in which further development is most urgently required. PMID:26303924

  4. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology

    PubMed Central

    Krishnamoorthi, Shankarjee; Perotti, Luigi E.; Borgstrom, Nils P.; Ajijola, Olujimi A.; Frid, Anna; Ponnaluri, Aditya V.; Weiss, James N.; Qu, Zhilin; Klug, William S.; Ennis, Daniel B.; Garfinkel, Alan

    2014-01-01

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations. PMID:25493967

  5. Multiscale Modeling of Gastrointestinal Electrophysiology and Experimental Validation

    PubMed Central

    Du, Peng; O'Grady, Greg; Davidson, John B.; Cheng, Leo K.; Pullan, Andrew J.

    2011-01-01

    Normal gastrointestinal (GI) motility results from the coordinated interplay of multiple cooperating mechanisms, both intrinsic and extrinsic to the GI tract. A fundamental component of this activity is an omnipresent electrical activity termed slow waves, which is generated and propagated by the interstitial cells of Cajal (ICCs). The role of ICC loss and network degradation in GI motility disorders is a significant area of ongoing research. This review examines recent progress in the multiscale modeling framework for effectively integrating a vast range of experimental data in GI electrophysiology, and outlines the prospect of how modeling can provide new insights into GI function in health and disease. The review begins with an overview of the GI tract and its electrophysiology, and then focuses on recent work on modeling GI electrical activity, spanning from cell to body biophysical scales. Mathematical cell models of the ICCs and smooth muscle cell are presented. The continuum framework of monodomain and bidomain models for tissue and organ models are then considered, and the forward techniques used to model the resultant body surface potential and magnetic field are discussed. The review then outlines recent progress in experimental support and validation of modeling, and concludes with a discussion on potential future research directions in this field. PMID:21133835

  6. Image based validation of dynamical models for cell reorientation.

    PubMed

    Lockley, Robert; Ladds, Graham; Bretschneider, Till

    2015-06-01

    A key feature of directed cell movement is the ability of cells to reorient quickly in response to changes in the direction of an extracellular stimulus. Mathematical models have suggested quite different regulatory mechanisms to explain reorientation, raising the question of how we can validate these models in a rigorous way. In this study, we fit three reaction-diffusion models to experimental data of Dictyostelium amoebae reorienting in response to alternating gradients of mechanical shear flow. The experimental readouts we use to fit are spatio-temporal distributions of a fluorescent reporter for cortical F-actin labeling the cell front. Experiments performed under different conditions are fitted simultaneously to challenge the models with different types of cellular dynamics. Although the model proposed by Otsuji is unable to provide a satisfactory fit, those suggested by Meinhardt and Levchenko fit equally well. Further, we show that reduction of the three-variable Meinhardt model to a two-variable model also provides an excellent fit, but has the advantage of all parameters being uniquely identifiable. Our work demonstrates that model selection and identifiability analysis, commonly applied to temporal dynamics problems in systems biology, can be a powerful tool when extended to spatio-temporal imaging data. PMID:25492625

  7. A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.

    ERIC Educational Resources Information Center

    Edmonston, Leon P.; Randall, Robert S.

    A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…

  8. A validation study of a stochastic model of human interaction

    NASA Astrophysics Data System (ADS)

    Burchfield, Mitchel Talmadge

    The purpose of this dissertation is to validate a stochastic model of human interactions which is part of a developmentalism paradigm. Incorporating elements of ancient and contemporary philosophy and science, developmentalism defines human development as a progression of increasing competence and utilizes compatible theories of developmental psychology, cognitive psychology, educational psychology, social psychology, curriculum development, neurology, psychophysics, and physics. To validate a stochastic model of human interactions, the study addressed four research questions: (a) Does attitude vary over time? (b) What are the distributional assumptions underlying attitudes? (c) Does the stochastic model, N∫_{-∞}^{∞} φ(χ,τ)Ψ(τ)dτ, have utility for the study of attitudinal distributions and dynamics? (d) Are the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein theories applicable to human groups? Approximately 25,000 attitude observations were made using the Semantic Differential Scale. Positions of individuals varied over time and the logistic model predicted observed distributions with correlations between 0.98 and 1.0, with estimated standard errors significantly less than the magnitudes of the parameters. The results bring into question the applicability of Fisherian research designs (Fisher, 1922, 1928, 1938) for behavioral research, based on the apparent failure of two fundamental assumptions: the noninteractive nature of the objects being studied and the normal distribution of attributes. The findings indicate that individual belief structures are representable in terms of a psychological space which has the same or similar properties as physical space. The psychological space not only has dimension, but individuals interact by force equations similar to those described in theoretical physics models. Nonlinear regression techniques were used to estimate Fermi-Dirac parameters from the data. The model explained a high degree of the variance in each probability distribution. The correlation between predicted and observed probabilities ranged from a low of 0.955 to a high value of 0.998, indicating that humans behave in psychological space as Fermions behave in momentum space.
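
    The Fermi-Dirac fitting mentioned above amounts to nonlinear regression of an occupation-like curve; a sketch on synthetic data is given below, with the scale positions and parameter values chosen purely for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def fermi_dirac(x, mu, kT):
            """Fermi-Dirac-like occupation probability."""
            return 1.0 / (np.exp((x - mu) / kT) + 1.0)

        x = np.linspace(-3.0, 3.0, 40)                      # synthetic attitude-scale positions
        rng = np.random.default_rng(3)
        p_obs = fermi_dirac(x, 0.4, 0.6) + 0.02 * rng.standard_normal(x.size)

        (mu, kT), cov = curve_fit(fermi_dirac, x, p_obs, p0=[0.0, 1.0])
        print("mu =", mu, "kT =", kT)
        print("standard errors:", np.sqrt(np.diag(cov)))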

  9. Radiative transfer model for contaminated slabs: experimental validations

    NASA Astrophysics Data System (ADS)

    Andrieu, F.; Schmidt, F.; Schmitt, B.; Douté, S.; Brissaud, O.

    2015-09-01

    This article presents a set of spectro-goniometric measurements of different water ice samples and the comparison with an approximated radiative transfer model. The experiments were done using the spectro-radiogoniometer described in Brissaud et al. (2004). The radiative transfer model assumes an isotropization of the flux after the second interface and is fully described in Andrieu et al. (2015). Two kinds of experiments were conducted. First, the specular spot was closely investigated, at high angular resolution, at the wavelength of 1.5 µm, where ice behaves as a very absorbing medium. Second, the bidirectional reflectance was sampled at various geometries, including low phase angles, on 61 wavelengths ranging from 0.8 to 2.0 µm. In order to validate the model, we made qualitative tests to demonstrate the relative isotropization of the flux. We also conducted quantitative assessments by using a Bayesian inversion method in order to estimate the parameters (e.g., sample thickness, surface roughness) from the radiative measurements only. A simple comparison between the retrieved parameters and the direct independent measurements allowed us to validate the model. We developed an innovative Bayesian inversion approach to quantitatively estimate the uncertainties in the parameters, avoiding the usual slow Monte Carlo approach. First we built lookup tables, and then we searched the best fits and calculated a posteriori probability density functions. The results show that the model is able to reproduce the geometrical energy distribution in the specular spot, as well as the spectral behavior of water ice slabs. In addition, the different parameters of the model are compatible with independent measurements.
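
    The lookup-table Bayesian inversion described above can be sketched with a toy forward model: tabulate the model over a parameter grid, then evaluate the posterior on the grid instead of running a Monte Carlo sampler. The forward model, grids, and noise level below are illustrative assumptions, not the slab radiative transfer model of Andrieu et al.

        import numpy as np

        def forward(thickness, roughness, wavelengths):
            # toy stand-in for the slab radiative-transfer model
            return np.exp(-thickness * wavelengths) * (1.0 - 0.3 * roughness)

        wavelengths = np.linspace(0.8, 2.0, 61)
        thick_grid = np.linspace(0.5, 5.0, 100)
        rough_grid = np.linspace(0.0, 1.0, 50)

        # lookup table of modelled spectra over the parameter grid
        table = np.array([[forward(h, r, wavelengths) for r in rough_grid] for h in thick_grid])

        rng = np.random.default_rng(4)
        observed = forward(2.3, 0.4, wavelengths) + 0.01 * rng.standard_normal(wavelengths.size)
        sigma = 0.01
        log_post = -0.5 * np.sum((table - observed) ** 2, axis=-1) / sigma**2   # flat prior
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        i, j = np.unravel_index(post.argmax(), post.shape)
        print("MAP thickness:", thick_grid[i], "MAP roughness:", rough_grid[j])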

  10. Conformational Analysis of the DFG-Out Kinase Motif and Biochemical Profiling of Structurally Validated Type II Inhibitors

    PubMed Central

    2015-01-01

    Structural coverage of the human kinome has been steadily increasing over time. The structures provide valuable insights into the molecular basis of kinase function and also provide a foundation for understanding the mechanisms of kinase inhibitors. There are a large number of kinase structures in the PDB for which the Asp and Phe of the DFG motif on the activation loop swap positions, resulting in the formation of a new allosteric pocket. We refer to these structures as “classical DFG-out” conformations in order to distinguish them from conformations that have also been referred to as DFG-out in the literature but that do not have a fully formed allosteric pocket. We have completed a structural analysis of almost 200 small molecule inhibitors bound to classical DFG-out conformations; we find that they are recognized by both type I and type II inhibitors. In contrast, we find that nonclassical DFG-out conformations strongly select against type II inhibitors because these structures have not formed a large enough allosteric pocket to accommodate this type of binding mode. In the course of this study we discovered that the number of structurally validated type II inhibitors that can be found in the PDB and that are also represented in publicly available biochemical profiling studies of kinase inhibitors is very small. We have obtained new profiling results for several additional structurally validated type II inhibitors identified through our conformational analysis. Although the available profiling data for type II inhibitors is still much smaller than for type I inhibitors, a comparison of the two data sets supports the conclusion that type II inhibitors are more selective than type I. We comment on the possible contribution of the DFG-in to DFG-out conformational reorganization to the selectivity. PMID:25478866

  11. Organic acid modeling and model validation: Workshop summary. Final report

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9-10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled "Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources." The workshop was attended by a team of internationally-recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically-selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  12. Organic acid modeling and model validation: Workshop summary

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9-10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled "Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources." The workshop was attended by a team of internationally-recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically-selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  13. Coarse Grained Model for Biological Simulations: Recent Refinements and Validation

    PubMed Central

    Vicatos, Spyridon; Rychkova, Anna; Mukherjee, Shayantani; Warshel, Arieh

    2014-01-01

    Exploring the free energy landscape of proteins and modeling the corresponding functional aspects presents a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of various simplified coarse grained (CG) models offers an effective way of sampling the landscape, but most current models are not expected to give a reliable description of protein stability and functional aspects. The main problem is associated with insufficient focus on the electrostatic features of the model. In this respect our recent CG model offers significant advantage as it has been refined while focusing on its electrostatic free energy. Here we review the current state of our model, describing recent refinement, extensions and validation studies while focusing on demonstrating key applications. These include studies of protein stability, extending the model to include membranes, electrolytes and electrodes, as well as studies of voltage activated proteins, protein insertion through the translocon, the action of molecular motors and even the coupling of the stalled ribosome and the translocon. These examples illustrate the general potential of our approach in overcoming major challenges in studies of structure function correlation in proteins and large macromolecular complexes. PMID:25050439

  14. Sound Transmission Validation and Sensitivity Studies in Numerical Models.

    PubMed

    Oberrecht, Steve P; Krysl, Petr; Cranford, Ted W

    2016-01-01

    In 1974, Norris and Harvey published an experimental study of sound transmission into the head of the bottlenose dolphin. We used this rare source of data to validate our Vibroacoustic Toolkit, an array of numerical modeling simulation tools. Norris and Harvey provided measurements of received sound pressure in various locations within the dolphin's head from a sound source that was moved around the outside of the head. Our toolkit was used to predict the curves of pressure with the best-guess input data (material properties, transducer and hydrophone locations, and geometry of the animal's head). In addition, we performed a series of sensitivity analyses (SAs). SA is concerned with understanding how input changes to the model influence the outputs. SA can enhance understanding of a complex model by finding and analyzing unexpected model behavior, discriminating which inputs have a dominant effect on particular outputs, exploring how inputs combine to affect outputs, and gaining insight as to what additional information improves the model's ability to predict. Even when a computational model does not adequately reproduce the behavior of a physical system, its sensitivities may be useful for developing inferences about key features of the physical system. Our findings may become a valuable source of information for modeling the interactions between sound and anatomy. PMID:26611033

  15. Experimental validation of a numerical model for subway induced vibrations

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Degrande, G.; Lombaert, G.

    2009-04-01

    This paper presents the experimental validation of a coupled periodic finite element-boundary element model for the prediction of subway induced vibrations. The model fully accounts for the dynamic interaction between the train, the track, the tunnel and the soil. The periodicity or invariance of the tunnel and the soil in the longitudinal direction is exploited using the Floquet transformation, which allows for an efficient formulation in the frequency-wavenumber domain. A general analytical formulation is used to compute the response of three-dimensional invariant or periodic media that are excited by moving loads. The numerical model is validated by means of several experiments that have been performed at a site in Regent's Park on the Bakerloo line of London Underground. Vibration measurements have been performed on the axle boxes of the train, on the rail, the tunnel invert and the tunnel wall, and in the free field, both at the surface and at a depth of 15 m. Prior to these vibration measurements, the dynamic soil characteristics and the track characteristics have been determined. The Bakerloo line tunnel of London Underground has been modelled using the coupled periodic finite element-boundary element approach and free field vibrations due to the passage of a train at different speeds have been predicted and compared to the measurements. The correspondence between the predicted and measured response in the tunnel is reasonably good, although some differences are observed in the free field. The discrepancies are explained on the basis of various uncertainties involved in the problem. The variation in the response with train speed is similar for the measurements as well as the predictions. This study demonstrates the applicability of the coupled periodic finite element-boundary element model to make realistic predictions of the vibrations from underground railways.

  16. Validating a spatially distributed hydrological model with soil morphology data

    NASA Astrophysics Data System (ADS)

    Doppler, T.; Honti, M.; Zihlmann, U.; Weisskopf, P.; Stamm, C.

    2014-09-01

    Spatially distributed models are popular tools in hydrology claimed to be useful to support management decisions. Despite the high spatial resolution of the computed variables, calibration and validation is often carried out only on discharge time series at specific locations due to the lack of spatially distributed reference data. Because of this restriction, the predictive power of these models, with regard to predicted spatial patterns, can usually not be judged. An example of spatial predictions in hydrology is the prediction of saturated areas in agricultural catchments. These areas can be important source areas for inputs of agrochemicals to the stream. We set up a spatially distributed model to predict saturated areas in a 1.2 km2 catchment in Switzerland with moderate topography and artificial drainage. We translated soil morphological data available from soil maps into an estimate of the duration of soil saturation in the soil horizons. This resulted in a data set with high spatial coverage on which the model predictions were validated. In general, these saturation estimates corresponded well to the measured groundwater levels. We worked with a model that would be applicable for management decisions because of its fast calculation speed and rather low data requirements. We simultaneously calibrated the model to observed groundwater levels and discharge. The model was able to reproduce the general hydrological behavior of the catchment in terms of discharge and absolute groundwater levels. However, the groundwater level predictions were not accurate enough to be used for the prediction of saturated areas. Groundwater level dynamics were not adequately reproduced and the predicted spatial saturation patterns did not correspond to those estimated from the soil map. Our results indicate that an accurate prediction of the groundwater level dynamics of the shallow groundwater in our catchment that is subject to artificial drainage would require a model that better represents processes at the boundary between the unsaturated and the saturated zone. However, data needed for such a more detailed model are not generally available. This severely hampers the practical use of such models despite their usefulness for scientific purposes.

  17. Validation of hydrogen gas stratification and mixing models

    SciTech Connect

    Wu, Hsingtzu; Zhao, Haihua

    2015-11-01

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling based one-dimensional method to achieve large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreements are observed. The entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10?? to 3.29 × 10?? m³/s. Computing time for each BMIX++ model with a normal desktop computer is less than 5 min.
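
    A crude relative of the scaling-based one-dimensional treatment described above is a top-hat buoyant jet whose volume flux grows with height through an entrainment coefficient; the sketch below assumes a leak diameter and a constant top-hat velocity purely for illustration, and is not the BMIX++ formulation.

        import numpy as np

        alpha = 0.08          # entrainment coefficient (value reported above for Fr = 268)
        Q0 = 3.29e-4          # source volumetric flow rate, m^3/s (upper end of the benchmark range)
        d0 = 5.0e-3           # assumed leak diameter, m
        u = Q0 / (np.pi * d0**2 / 4.0)   # top-hat velocity, held constant (crude assumption)

        b, Qz, dz = d0 / 2.0, Q0, 1.0e-3
        for z in np.arange(0.0, 0.5, dz):            # integrate 0.5 m above the leak
            Qz += 2.0 * np.pi * b * alpha * u * dz   # entrained ambient flow per step
            b = np.sqrt(Qz / (np.pi * u))            # top-hat radius consistent with Qz
        print("dilution factor 0.5 m above the leak:", Qz / Q0)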

  18. Modeling Topaz-II system performance

    SciTech Connect

    Lee, H.H.; Klein, A.C. )

    1993-01-01

    The US acquisition of the Topaz-II in-core thermionic space reactor test system from Russia provides a good opportunity to perform a comparison of the Russian-reported data and the results from computer codes such as MCNP (Ref. 3) and TFEHX (Ref. 4). The comparison study includes both neutronic and thermionic performance analyses. The Topaz II thermionic reactor is modeled with MCNP using actual Russian dimensions and parameters. The computation of the neutronic performance considers several important aspects such as the fuel enrichment and location of the thermionic fuel elements (TFEs) in the reactor core. The neutronic analysis included the calculation of both radial and axial power distribution, which are then used in the TFEHX code for electrical performance. The reactor modeled consists of 37 single-cell TFEs distributed in a 13-cm-radius zirconium hydride block surrounded by 8 cm of beryllium metal reflector. The TFEs use 90% enriched ²³⁵U and molybdenum coated with a thin layer of ¹⁸⁴W for the emitter surface. Electrons emitted are captured by a collector surface with a gap filled with cesium vapor between the collector and emitter surfaces. The collector surface is electrically insulated with alumina. Liquid NaK provides the cooling system for the TFEs. The axial thermal power distribution is obtained by dividing the TFE into 40 axial nodes. Comparison of the true axial power distribution with that produced by electrical heaters was also performed.
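
    The axial discretization mentioned above (40 nodes per TFE) can be illustrated with an assumed chopped-cosine axial power shape; the geometry and extrapolation length below are placeholders, not the MCNP-computed distribution.

        import numpy as np

        n_nodes = 40
        core_height = 0.375     # m, assumed active fuel length
        extrap = 0.05           # m, assumed extrapolation length

        z = (np.arange(n_nodes) + 0.5) / n_nodes * core_height
        shape = np.cos(np.pi * (z - core_height / 2.0) / (core_height + 2.0 * extrap))
        power_fraction = shape / shape.sum()          # normalized axial power per node
        print("peak-to-average axial power:", power_fraction.max() * n_nodes)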

  19. Nonlinear ultrasound modelling and validation of fatigue damage

    NASA Astrophysics Data System (ADS)

    Fierro, G. P. Malfense; Ciampa, F.; Ginzburg, D.; Onder, E.; Meo, M.

    2015-05-01

    Nonlinear ultrasound techniques have shown greater sensitivity to microcracks and they can be used to detect structural damages at their early stages. However, there is still a lack of numerical models available in commercial finite element analysis (FEA) tools that are able to simulate the interaction of elastic waves with the material's nonlinear behaviour. In this study, a nonlinear constitutive material model was developed to predict the structural response under continuous harmonic excitation of a fatigued isotropic sample that showed anharmonic effects. Particularly, by means of Landau's theory and Kelvin tensorial representation, this model provided an understanding of the elastic nonlinear phenomena such as the second harmonic generation in three-dimensional solid media. The numerical scheme was implemented and evaluated using the commercially available FEA software LS-DYNA, and it showed a good numerical characterisation of the second harmonic amplitude generated by the damaged region known as the nonlinear response area (NRA). Since this process requires only the experimental second-order nonlinear parameter and rough damage size estimation as an input, it does not need any baseline testing with the undamaged structure or any dynamic modelling of the fatigue crack growth. To validate this numerical model, the second-order nonlinear parameter was experimentally evaluated at various points over the fatigue life of an aluminium (AA6082-T6) coupon and the crack propagation was measured using an optical microscope. A good correlation was achieved between the experimental set-up and the nonlinear constitutive model.

  20. Investigating the validity of the networked imaging sensor model

    NASA Astrophysics Data System (ADS)

    Friedman, Melvin

    2015-05-01

    The Networked Imaging Sensor (NIS) model takes as input target acquisition probability as a function of time for individuals or individual imaging sensors, and outputs target acquisition probability for a collection of imaging sensors and individuals. System target acquisition takes place the moment the first sensor or individual acquires the target. The derivation of the NIS model implies it is applicable to multiple moving sensors and targets. The principal assumption of the NIS model is independence of the events that give rise to the input target acquisition probabilities. For investigating the validity of the NIS model, we consider a collection of single images where neither the sensor nor the target is moving. This paper investigates the ability of the NIS model to predict system target acquisition performance when multiple observers view first- and second-generation thermal field-of-view imagery containing either zero or one stationary target in a laboratory environment, with a maximum of 12, 17, or unlimited seconds to acquire the target. Modeled and measured target acquisition performance are in good agreement.
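
    Under the independence assumption stated above, the system-level curve follows directly from the individual curves: the system has acquired the target by time t unless every sensor has failed to, so P_sys(t) = 1 - prod_i (1 - P_i(t)). The sketch below applies this to two hypothetical single-observer curves.

        import numpy as np

        def system_acquisition(prob_by_sensor):
            """prob_by_sensor: array of shape (n_sensors, n_times) of P_i(t)."""
            return 1.0 - np.prod(1.0 - np.asarray(prob_by_sensor), axis=0)

        t = np.linspace(0.0, 12.0, 13)          # seconds
        p1 = 1.0 - np.exp(-t / 6.0)             # hypothetical observer/sensor 1
        p2 = 1.0 - np.exp(-t / 9.0)             # hypothetical observer/sensor 2
        print(system_acquisition([p1, p2]))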

  1. Modeling and Validation of a Propellant Mixer for Controller Design

    NASA Technical Reports Server (NTRS)

    Richter, Hanz; Barbieri, Enrique; Figueroa, Fernando

    2003-01-01

    A mixing chamber used in rocket engine testing at the NASA Stennis Space Center is modelled by a system of two nonlinear ordinary differential equations. The mixer is used to condition the thermodynamic properties of cryogenic liquid propellant by controlled injection of the same substance in the gaseous phase. The three inputs of the mixer are the positions of the valves regulating the liquid and gas flows at the inlets, and the position of the exit valve regulating the flow of conditioned propellant. Mixer operation during a test requires the regulation of its internal pressure, exit mass flow, and exit temperature. A mathematical model is developed to facilitate subsequent controller designs. The model must be simple enough to lend itself to subsequent feedback controller design, yet its accuracy must be tested against real data. For this reason, the model includes function calls to thermodynamic property data. Some structural properties of the resulting model that pertain to controller design, such as uniqueness of the equilibrium point, feedback linearizability and local stability are shown to hold under conditions having direct physical interpretation. The existence of fixed valve positions that attain a desired operating condition is also shown. Validation of the model against real data is likewise provided.

  2. Modelling and validation of multiple reflections for enhanced laser welding

    NASA Astrophysics Data System (ADS)

    Milewski, J.; Sklar, E.

    1996-05-01

    The effects of multiple internal reflections within a laser weld joint as functions of joint geometry and processing conditions have been characterized. A computer-based ray tracing model is used to predict the reflective propagation of laser beam energy focused into the narrow gap of a metal joint for the purpose of predicting the location of melting and coalescence to form a weld. Quantitative comparisons are made between simulation cases. Experimental results are provided for qualitative model validation. This method is proposed as a way to enhance process efficiency and design laser welds which display deep penetration and high depth-to-width aspect ratios without high powered systems or keyhole mode melting.
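
    A toy version of the multiple-reflection bookkeeping is sketched below: a ray entering a narrow V-groove of apex angle alpha undergoes roughly pi/alpha specular reflections before escaping, and each bounce deposits a fraction of the remaining beam energy. The apex angle and per-bounce absorptivity are assumptions for illustration; this is not the authors' ray-tracing code.

        import numpy as np

        apex_angle = np.radians(10.0)          # assumed included angle of the joint gap
        absorptivity = 0.35                    # assumed absorption per specular bounce
        n_bounces = int(np.pi / apex_angle)    # classic wedge estimate of the reflection count

        energy, absorbed = 1.0, 0.0
        for _ in range(n_bounces):
            absorbed += energy * absorptivity
            energy *= 1.0 - absorptivity
        print("bounces:", n_bounces)
        print("fraction of beam energy absorbed in the joint:", absorbed)
        print("fraction escaping back out of the joint:", energy)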

  3. Ultrasonic transducers for cure monitoring: design, modelling and validation

    NASA Astrophysics Data System (ADS)

    Lionetto, Francesca; Montagna, Francesco; Maffezzoli, Alfonso

    2011-12-01

    The finite element method (FEM) has been applied to simulate the ultrasonic wave propagation in a multilayered transducer, expressly designed for high-frequency dynamic mechanical analysis of polymers. The FEM model includes an electro-acoustic (active element) and some acoustic (passive elements) transmission lines. The simulation of the acoustic propagation accounts for the interaction between the piezoceramic and the materials in the buffer rod and backing, and the coupling between the electric and mechanical properties of the piezoelectric material. As a result of the simulations, the geometry and size of the modelled ultrasonic transducer have been optimized and used for the realization of a prototype transducer for cure monitoring. The transducer performance has been validated by measuring the velocity changes during the polymerization of a thermosetting matrix of composite materials.

  4. Multiscale analysis and validation of the MODIS LAI product II. Sampling strategy

    E-print Network

    Myneni, Ranga B.

    a method for validation of the Moderate Resolution Imaging Spectroradiometer Leaf Area Index (MODIS LAI and first Earth views from the Moderate Resolution Imaging Spectroradiometer (MODIS) were taken in late for validation of biophysical products derived from moderate resolution sensors such as MODIS. For a homogeneous

  5. Predictive validity of behavioural animal models for chronic pain

    PubMed Central

    Berge, Odd-Geir

    2011-01-01

    Rodent models of chronic pain may elucidate pathophysiological mechanisms and identify potential drug targets, but whether they predict clinical efficacy of novel compounds is controversial. Several potential analgesics have failed in clinical trials, in spite of strong animal modelling support for efficacy, but there are also examples of successful modelling. Significant differences in how methods are implemented and results are reported mean that a literature-based comparison between preclinical data and clinical trials will not reveal whether a particular model is generally predictive. Limited reporting of negative outcomes prevents a reliable estimate of the specificity of any model. Animal models tend to be validated with standard analgesics and may be biased towards tractable pain mechanisms. Moreover, preclinical publications rarely contain drug exposure data, and drugs are usually given in high doses and as a single administration, which may lead to drug distribution and exposure deviating significantly from clinical conditions. The greatest challenge for predictive modelling is, however, the heterogeneity of the target patient populations, in terms of both symptoms and pharmacology, probably reflecting differences in pathophysiology. In well-controlled clinical trials, a majority of patients shows less than 50% reduction in pain. A model that responds well to current analgesics should therefore predict efficacy only in a subset of patients within a diagnostic group. It follows that successful translation requires several models for each indication, reflecting critical pathophysiological processes, combined with data linking exposure levels with effect on target. LINKED ARTICLES This article is part of a themed issue on Translational Neuropharmacology. To view the other articles in this issue visit http://dx.doi.org/10.1111/bph.2011.164.issue-4 PMID:21371010

  6. Literature-derived bioaccumulation models for earthworms: Development and validation

    SciTech Connect

    Sample, B.E.; Suter, G.W. II; Beauchamp, J.J.; Efroymson, R.A.

    1999-09-01

    Estimation of contaminant concentrations in earthworms is a critical component in many ecological risk assessments. Without site-specific data, literature-derived uptake factors or models are frequently used. Although considerable research has been conducted on contaminant transfer from soil to earthworms, most studies focus on only a single location. External validation of transfer models has not been performed. The authors developed a database of soil and tissue concentrations for nine inorganic and two organic chemicals. Only studies that presented total concentrations in depurated earthworms were included. Uptake factors and simple and multiple regression models of natural-log-transformed concentrations of each analyte in soil and earthworms were developed using data from 26 studies. These models were then applied to data from six additional studies. Estimated and observed earthworm concentrations were compared using nonparametric Wilcoxon signed-rank tests. Relative accuracy and quality of different estimation methods were evaluated by calculating the proportional deviation of the estimate from the measured value. With the exception of Cr, significant, single-variable (e.g., soil concentration) regression models were fit for each analyte. Inclusion of soil Ca improved model fits for Cd and Pb. Soil pH only marginally improved model fits. The best general estimates of chemical concentrations in earthworms were generated by simple ln-ln regression models for As, Cd, Cu, Hg, Mn, Pb, Zn, and polychlorinated biphenyls. No method accurately estimated Cr or Ni in earthworms. Although multiple regression models including pH generated better estimates for a few analytes, in general, the predictive utility gained by incorporating environmental variables was marginal.
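
    The simple regression models described have the form ln(C_earthworm) = b0 + b1 ln(C_soil). A hedged sketch of fitting and applying such a model (the paired concentrations below are invented for illustration, not data from the study):

        import numpy as np

        # Hypothetical paired soil / earthworm concentrations (mg/kg dry weight)
        soil = np.array([1.2, 5.0, 12.0, 40.0, 85.0])
        worm = np.array([0.8, 2.9, 5.5, 14.0, 22.0])

        # Fit the ln-ln regression: ln(worm) = b0 + b1 * ln(soil)
        b1, b0 = np.polyfit(np.log(soil), np.log(worm), deg=1)

        def predict_worm(c_soil):
            # Back-transform the ln-ln model to estimate tissue concentration
            return np.exp(b0 + b1 * np.log(c_soil))

        print(b0, b1, predict_worm(20.0))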

  7. USING GRAPHICAL MODELS AND GENOMIC EXPRESSION DATA TO STATISTICALLY VALIDATE MODELS OF

    E-print Network

    Hartemink, Alexander

    USING GRAPHICAL MODELS AND GENOMIC EXPRESSION DATA TO STATISTICALLY VALIDATE MODELS OF GENETIC Technology Square, Cambridge, MA 02139 RICHARD A. YOUNG Whitehead Institute for Biomedical Research Nine Cambridge Center, Cambridge, MA 02142 We propose a model-driven approach for analyzing genomic expression

  8. Systematic approach to verification and validation: High explosive burn models

    SciTech Connect

    Menikoff, Ralph; Scovel, Christina A.

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data are scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web-based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code, run a simulation, and generate a comparison plot showing simulated and experimental velocity gauge data. These scripts are then applied to several series of experiments and to several HE burn models. The same systematic approach is applicable to other types of material models; for example, equations of state models and material strength models.
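
    The automation described hinges on experiment meta-data carried in a standard data-file header. A sketch of the kind of driver script this enables (the file format, header keys, and solver command below are hypothetical, not the actual HED conventions):

        import json
        import subprocess

        def run_case(datafile):
            # Read a one-line JSON header of experiment meta-data (hypothetical format),
            # write a hydro-code input deck, and launch the simulation.
            with open(datafile) as f:
                header = json.loads(f.readline())
            deck = (f"he_material = {header['HE']}\n"
                    f"impact_pressure = {header['impact_pressure_GPa']}\n")
            deck_path = datafile.replace(".dat", ".in")
            with open(deck_path, "w") as f:
                f.write(deck)
            subprocess.run(["hydro_code", deck_path], check=True)  # hypothetical solver binary
            return deck_path

        # for path in sorted(glob.glob("hed_shots/*.dat")): run_case(path)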

  9. Validation of a Global Hydrodynamic Flood Inundation Model

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Smith, A.; Sampson, C. C.; Alfieri, L.; Neal, J. C.

    2014-12-01

    In this work we present first validation results for a hyper-resolution global flood inundation model. We use a true hydrodynamic model (LISFLOOD-FP) to simulate flood inundation at 1 km resolution globally and then use downscaling algorithms to determine flood extent and depth at 90 m spatial resolution. Terrain data are taken from a custom version of the SRTM data set that has been processed specifically for hydrodynamic modelling. Return periods of flood flows along the entire global river network are determined using: (1) empirical relationships between catchment characteristics and index flood magnitude in different hydroclimatic zones derived from global runoff data; and (2) an index flood growth curve, also empirically derived. Bankfull return period flow is then used to set channel width and depth, and flood defence impacts are modelled using empirical relationships between GDP, urbanization and defence standard of protection. The results of these simulations are global flood hazard maps for a number of different return period events from 1 in 5 to 1 in 1000 years. We compare these predictions to flood hazard maps developed by national government agencies in the UK and Germany using similar methods but employing detailed local data, and to observed flood extent at a number of sites including St. Louis, USA, and Bangkok, Thailand. Results show that global flood hazard models can have considerable skill given careful treatment to overcome errors in the publicly available data that are used as their input.
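
    As a hedged illustration of the regionalized approach (an index flood scaled by an empirical growth curve), with a Gumbel-type growth factor standing in for the paper's empirically derived curve and all numbers invented:

        import numpy as np

        def return_period_flow(index_flood, T, gumbel_alpha=0.35):
            # Growth-curve scaling: Q_T = Q_index * g(T), with a Gumbel-type growth
            # factor normalized so that g(2 years) is roughly 1.  The functional
            # form and alpha are assumptions, not the study's fitted curve.
            y = -np.log(-np.log(1.0 - 1.0 / np.asarray(T, dtype=float)))
            y2 = -np.log(-np.log(0.5))   # reduced variate at T = 2 years
            return index_flood * (1.0 + gumbel_alpha * (y - y2))

        print(return_period_flow(850.0, [5, 100, 1000]))  # m^3/s, illustrative index flood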

  10. Validation of two-equation turbulence models for propulsion flowfields

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Venkateswaran, S.; Merkle, Charles L.

    1994-01-01

    The objective of the study is to assess the capability of two-equation turbulence models for simulating propulsion-related flowfields. The standard kappa-epsilon model with Chien's low Reynolds number formulation for near-wall effects is used as the baseline turbulence model. Several experimental test cases, representative of rocket combustor internal flowfields, are used to catalog the performance of the baseline model. Specific flowfields considered here include recirculating flow behind a backstep, mixing between coaxial jets and planar shear layers. Since turbulence solutions are notoriously dependent on grid and numerical methodology, the effects of grid refinement and artificial dissipation on numerical accuracy are studied. In the latter instance, computational results obtained with several central-differenced and upwind-based formulations are compared. Based on these results, improved turbulence models such as enhanced kappa-epsilon models as well as other two-equation formulations (e.g., kappa-omega) are being studied. In addition, validation for swirling and reacting flowfields is also currently underway.
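
    For reference, the baseline two-equation closure obtains the turbulent eddy viscosity from k and epsilon; a minimal sketch of the standard relation (low-Reynolds-number corrections such as Chien's near-wall damping function are omitted here):

        def eddy_viscosity(rho, k, epsilon, c_mu=0.09):
            # Standard kappa-epsilon eddy viscosity: mu_t = rho * C_mu * k^2 / epsilon.
            # Low-Reynolds-number formulations multiply this by a damping function
            # f_mu near walls, which is not included in this sketch.
            return rho * c_mu * k**2 / epsilon

        print(eddy_viscosity(rho=1.2, k=0.5, epsilon=10.0))  # illustrative values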

  11. Traveling with Cognitive Tests: Testing the Validity of a KABC-II Adaptation in India

    ERIC Educational Resources Information Center

    Malda, Maike; van de Vijver, Fons J. R.; Srinivasan, Krishnamachari; Transler, Catherine; Sukumar, Prathima

    2010-01-01

    The authors evaluated the adequacy of an extensive adaptation of the American Kaufman Assessment Battery for Children, second edition (KABC-II), for 6- to 10-year-old Kannada-speaking children of low socioeconomic status in Bangalore, South India. The adapted KABC-II was administered to 598 children. Subtests showed high reliabilities, the…

  12. Global sea-salt modeling: Results and validation against multicampaign shipboard measurements

    E-print Network

    Global sea-salt modeling: Results and validation against multicampaign shipboard measurements of sea-salt concentrations from five different campaigns are used to validate the sea-salt). The validity of the sea-salt parameterizations is tested by employing a global forecasting model and transport

  13. NAIRAS aircraft radiation model development, dose climatology, and initial validation

    NASA Astrophysics Data System (ADS)

    Mertens, Christopher J.; Meier, Matthias M.; Brown, Steven; Norman, Ryan B.; Xu, Xiaojing

    2013-10-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis suggests that these single-point differences will be within 30% when a new deterministic pion-initiated electromagnetic cascade code is integrated into NAIRAS, an effort which is currently underway.

  14. Image quality assessment in digital mammography: part II. NPWE as a validated alternative for contrast detail analysis.

    PubMed

    Monnin, P; Marshall, N W; Bosmans, H; Bochud, F O; Verdun, F R

    2011-07-21

    Assessment of image quality for digital x-ray mammography systems used in European screening programs relies mainly on contrast-detail CDMAM phantom scoring and requires the acquisition and analysis of many images in order to reduce variability in threshold detectability. Part II of this study proposes an alternative method based on the detectability index (d') calculated for a non-prewhitened model observer with an eye filter (NPWE). The detectability index was calculated from the normalized noise power spectrum and image contrast, both measured from an image of a 5 cm poly(methyl methacrylate) phantom containing a 0.2 mm thick aluminium square, and the pre-sampling modulation transfer function. This was performed as a function of air kerma at the detector for 11 different digital mammography systems. These calculated d' values were compared against threshold gold thickness (T) results measured with the CDMAM test object and against derived theoretical relationships. A simple relationship was found between T and d', as a function of detector air kerma; a linear relationship was found between d' and contrast-to-noise ratio. The values of threshold thickness used to specify acceptable performance in the European Guidelines for 0.10 and 0.25 mm diameter discs were equivalent to threshold calculated detectability indices of 1.05 and 6.30, respectively. The NPWE method is a validated alternative to CDMAM scoring for use in the image quality specification, quality control and optimization of digital x-ray systems for screening mammography. PMID:21701050
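
    A commonly used radially symmetric form of the NPWE detectability index combines the task contrast spectrum, the presampling MTF, the eye filter, and the normalized NPS; the sketch below assumes this standard form rather than reproducing the paper's exact expression:

        import numpy as np

        def npwe_dprime(f, task, mtf, nnps, eye):
            # Assumed radially symmetric NPWE form:
            #   d'^2 = [2*pi * int S^2 MTF^2 E^2 f df]^2 /
            #          [2*pi * int S^2 MTF^2 E^4 NNPS f df]
            # f: spatial frequencies (1/mm), task: object contrast spectrum S(f),
            # mtf: presampling MTF, nnps: normalized NPS, eye: eye filter E(f).
            num = (2.0 * np.pi * np.trapz(task**2 * mtf**2 * eye**2 * f, f)) ** 2
            den = 2.0 * np.pi * np.trapz(task**2 * mtf**2 * eye**4 * nnps * f, f)
            return np.sqrt(num / den)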

  15. PIV validation of blood-heart valve leaflet interaction modelling.

    PubMed

    Kaminsky, R; Dumont, K; Weber, H; Schroll, M; Verdonck, P

    2007-07-01

    The aim of this study was to validate the 2D computational fluid dynamics (CFD) results of a moving heart valve based on a fluid-structure interaction (FSI) algorithm with experimental measurements. Firstly, a pulsatile laminar flow through a monoleaflet valve model with a stiff leaflet was visualized by means of Particle Image Velocimetry (PIV). The inflow data sets were applied to a CFD simulation including blood-leaflet interaction. The measurement section with a fixed leaflet was enclosed into a standard mock loop in series with a Harvard Apparatus Pulsatile Blood Pump, a compliance chamber and a reservoir. Standard 2D PIV measurements were made at a frequency of 60 bpm. Average velocity magnitude results of 36 phase-locked measurements were evaluated at every 10 degrees of the pump cycle. For the CFD flow simulation, a commercially available package from Fluent Inc. was used in combination with in-house developed FSI code based on the Arbitrary Lagrangian-Eulerian (ALE) method. Then the CFD code was applied to the leaflet to quantify the shear stress on it. Generally, the CFD results are in agreement with the PIV evaluated data in major flow regions, thereby validating the FSI simulation of a monoleaflet valve with a flexible leaflet. The applicability of the new CFD code for quantifying the shear stress on a flexible leaflet is thus demonstrated. PMID:17674341

  16. Development and validation of a realistic head model for EEG

    NASA Astrophysics Data System (ADS)

    Bangera, Nitin Bhalchandra

    The utility of extracranial electrical or magnetic field recordings (EEG or MEG) is greatly enhanced if the generators of the bioelectromagnetic fields can be determined accurately from the measured fields. This procedure, known as the 'inverse method,' depends critically on calculations of the projection from generators in the brain to the EEG and MEG sensors. Improving and validating this calculation, known as the 'forward solution,' is the focus of this dissertation. The improvements involve more accurate modeling of the structures of the brain and thus understanding how current flows within the brain as a result of addition of structures in the forward model. Validation compares calculations using different forward models to the experimental results obtained by stimulating with implanted dipole electrodes. The human brain tissue displays inhomogeneity in electrical conductivity and also displays anisotropy, notably in the skull and brain white matter. In this dissertation, a realistic head model has been implemented using the finite element method to calculate the effects of inhomogeneity and anisotropy in the human brain. Accurate segmentation of the brain tissue type is implemented using a semi-automatic method to segment multimodal imaging data from multi-spectral MRI scans (different flip angles) in conjunction with the regular T1-weighted scans and computed x-ray tomography images. The electrical conductivity in the anisotropic white matter tissue is quantified from diffusion tensor MRI. The finite element model is constructed using AMIRA, a commercial segmentation and visualization tool and solved using ABAQUS, a commercial finite element solver. The model is validated using experimental data collected from intracranial stimulation in medically intractable epileptic patients. Depth electrodes are implanted in medically intractable epileptic patients in order to direct surgical therapy when the foci cannot be localized with the scalp EEG. These patients present the unique opportunity to generate sources at known positions in the human brain using the depth electrodes. Known dipolar sources were created inside the human brain at known locations by injecting a weak biphasic current (sub-threshold) between alternate contacts on the depth electrode. The corresponding bioelectric fields (intracranial and scalp EEG) were recorded in patients during the injection of biphasic pulses. The in vivo depth stimulation data provides a direct test of the performance of the forward model. The factors affecting the accuracy of the intracranial measurements are quantified in a precise manner by studying the effects of including different tissue types and anisotropy. The results show that white matter anisotropy is crucial for predicting the electric fields in a precise manner for intracranial locations, thereby affecting the source reconstructions. Accurate modeling of the skull is necessary for predicting accurately the scalp measurements. In sum, with the aid of high-resolution finite element realistic head models it is possible to accurately predict electric fields generated by current sources in the brain and thus in a precise way, understand the relationship between electromagnetic measure and neuronal activity at the voxel-scale.

  17. Eddy current probe characterization for model input and validation

    NASA Astrophysics Data System (ADS)

    Nakagawa, N.; Khan, T. A.; Gray, J.

    2000-05-01

    The purpose of this paper is to address issues of eddy current (EC) probe characterization from the perspective of model input and validation. In most eddy current modeling approaches, it is the usual practice to assume that the internal structure and dimensions of probes are given when attempting to predict induced EC distributions and probe impedance. The prototype is the Dodd-Deeds probe model which is applicable to a circularly wound coil without any core. In the Dodd-Deeds theory, the probe is described as a current being uniformly distributed within the known cross section of the coil. One needs to know the dimensions and number of windings of the coil in order to be able to perform calculations. Recent advanced numerical EC models can handle complex probe constructions, thus complicating the probe modeling issue even further. Most of these codes require prior knowledge of the probe internal information, which is frequently unavailable for a given probe. One needs, therefore, either to establish a procedure to reconstruct the detailed probe information explicitly, or to find out an alternative probe modeling approach based on characterization measurements such as induced field measurements. The main result of this paper is the explicit probe characterization and modeling by the use of x-ray CT scan data. Specifically, this paper will start with the introduction of the above probe modeling issue, and then proceed to demonstrate the explicit probe characterization and modeling procedure by the x-ray CT data. A split-D differential probe has been chosen as a test sample, and its CAD model will be given, as constructed from the CT data. Also shown will be the degree of accuracy of the impedance predictions obtained by the combination of the probe CAD model and the BEM-based EC model, compared with measurement data. The presentation will end with a brief discussion on the alternative probe modeling approach based on characterization measurements, and current issues associated with the approach.—The work was supported in part by the NSF Industry/University Cooperative Research Program.

  18. Cosmological parameter uncertainties from SALT-II type Ia supernova light curve models

    SciTech Connect

    Mosher, J.; Sako, M.; Guy, J.; Astier, P.; Betoule, M.; El-Hage, P.; Pain, R.; Regnault, N.; Marriner, J.; Biswas, R.; Kuhlmann, S.; Schneider, D. P.

    2014-09-20

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ~120 low-redshift (z < 0.1) SNe Ia, ~255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ~290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input - w_recovered) ranging from -0.005 ± 0.012 to -0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is -0.014 ± 0.007.

  19. ADOPT: A Historically Validated Light Duty Vehicle Consumer Choice Model

    SciTech Connect

    Brooker, A.; Gonder, J.; Lopp, S.; Ward, J.

    2015-05-04

    The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy’s Vehicle Technologies Office. It estimates technology improvement impacts on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit method and the mixed logit method to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes including vehicle price, fuel cost, acceleration, range, and usable volume. The average importance of several attributes changes nonlinearly across its range and with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model. It captures key aspects for summing petroleum use and greenhouse gas emissions. This includes capturing the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, new vehicle option introduction rate limits, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data. It matches in key dimensions including sales by fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use. It manages the inputs, simulation, and results.
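
    The multinomial logit step can be illustrated with a softmax over attribute-weighted utilities (the attributes, weights, and vehicles below are invented; ADOPT's actual importance functions are income-dependent and nonlinear):

        import numpy as np

        def sales_shares(utilities):
            # Multinomial logit choice probabilities: share_i = exp(V_i) / sum_j exp(V_j)
            v = np.asarray(utilities, dtype=float)
            e = np.exp(v - v.max())    # subtract max for numerical stability
            return e / e.sum()

        # Hypothetical utilities built from weighted attributes (price, fuel cost, acceleration)
        price = np.array([25e3, 32e3, 28e3])       # $
        fuel  = np.array([1200., 600., 900.])      # $/year
        accel = np.array([9.0, 7.5, 8.2])          # 0-60 mph seconds
        V = -1.0e-4 * price - 5.0e-4 * fuel - 0.3 * accel   # illustrative weights only
        print(sales_shares(V))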

  20. First principles Candu fuel model and validation experimentation

    SciTech Connect

    Corcoran, E.C.; Kaye, M.H.; Lewis, B.J.; Thompson, W.T.; Akbari, F.; Higgs, J.D.; Verrall, R.A.; He, Z.; Mouris, J.F.

    2007-07-01

    Many modeling projects on nuclear fuel rest on a quantitative understanding of the co-existing phases at various stages of burnup. Since the various fission products have considerably different abilities to chemically associate with oxygen, and the O/M ratio is slowly changing as well, the chemical potential (generally expressed as an equivalent oxygen partial pressure) is a function of burnup. Concurrently, well-recognized small fractions of new phases such as inert gas, noble metals, zirconates, etc. also develop. To further complicate matters, the dominant UO2 fuel phase may be non-stoichiometric and most of the minor phases have a variable composition dependent on temperature and possible contact with the coolant in the event of a sheathing defect. A Thermodynamic Fuel Model to predict the phases in partially burned Candu nuclear fuel containing many major fission products has been under development. This model is capable of handling non-stoichiometry in the UO2 fluorite phase, dilute solution behaviour of significant solute oxides, noble metal inclusions, a second metal solid solution U(Pd-Rh-Ru)3, zirconate and uranate solutions as well as other minor solid phases, and volatile gaseous species. The treatment is a melding of several thermodynamic modeling projects dealing with isolated aspects of this important multi-component system. To simplify the computations, the number of elements has been limited to twenty major representative fission products known to appear in spent fuel. The proportion of elements must first be generated using SCALES-5. Oxygen is inferred from the concentration of the other elements. Provision to study the disposition of very minor fission products is included within the general treatment but these are introduced only on an as-needed basis for a particular purpose. The building blocks of the model are the standard Gibbs energies of formation of the many possible compounds expressed as a function of temperature. To these data are added mixing terms associated with the appearance of the component species in particular phases. To validate the model, coulometric titration experiments relating the chemical potential of oxygen to the moles of oxygen introduced to SIMFUEL are underway. A description of the apparatus to oxidize and reduce samples in a controlled way in a H2/H2O mixture is presented. Existing measurements for irradiated fuel, both defected and non-defected, are also being incorporated into the validation process. (authors)

  1. Development, validation and application of numerical space environment models

    NASA Astrophysics Data System (ADS)

    Honkonen, Ilja

    2013-10-01

    Currently the majority of space-based assets are located inside the Earth's magnetosphere where they must endure the effects of the near-Earth space environment, i.e. space weather, which is driven by the supersonic flow of plasma from the Sun. Space weather refers to the day-to-day changes in the temperature, magnetic field and other parameters of the near-Earth space, similarly to ordinary weather which refers to changes in the atmosphere above ground level. Space weather can also cause adverse effects on the ground, for example, by inducing large direct currents in power transmission systems. The performance of computers has been growing exponentially for many decades and as a result the importance of numerical modeling in science has also increased rapidly. Numerical modeling is especially important in space plasma physics because there are no in-situ observations of space plasmas outside of the heliosphere and it is not feasible to study all aspects of space plasmas in a terrestrial laboratory. With the increasing number of computational cores in supercomputers, the parallel performance of numerical models on distributed memory hardware is also becoming crucial. This thesis consists of an introduction, four peer reviewed articles and describes the process of developing numerical space environment/weather models and the use of such models to study the near-Earth space. A complete model development chain is presented starting from initial planning and design to distributed memory parallelization and optimization, and finally testing, verification and validation of numerical models. A grid library that provides good parallel scalability on distributed memory hardware and several novel features, the distributed cartesian cell-refinable grid (DCCRG), is designed and developed. DCCRG is presently used in two numerical space weather models being developed at the Finnish Meteorological Institute. The first global magnetospheric test particle simulation based on the Vlasov description of plasma is carried out using the Vlasiator model. The test shows that the Vlasov equation for plasma in six-dimensional phase space is solved correctly by Vlasiator, that results are obtained beyond those of the magnetohydrodynamic (MHD) description of plasma and that global magnetospheric simulations using a hybrid-Vlasov model are feasible on current hardware. For the first time four global magnetospheric models using the MHD description of plasma (BATS-R-US, GUMICS, OpenGGCM, LFM) are run with identical solar wind input and the results compared to observations in the ionosphere and outer magnetosphere. Based on the results of the global magnetospheric MHD model GUMICS a hypothesis is formulated for a new mechanism of plasmoid formation in the Earth's magnetotail.

  2. Utilizing Chamber Data for Developing and Validating Climate Change Models

    NASA Technical Reports Server (NTRS)

    Monje, Oscar

    2012-01-01

    Controlled environment chambers (e.g. growth chambers, SPAR chambers, or open-top chambers) are useful for measuring plant ecosystem responses to climatic variables and CO2 that affect plant water relations. However, data from chambers was found to overestimate responses of C fluxes to CO2 enrichment. Chamber data may be confounded by numerous artifacts (e.g. sidelighting, edge effects, increased temperature and VPD, etc) and this limits what can be measured accurately. Chambers can be used to measure canopy level energy balance under controlled conditions and plant transpiration responses to CO2 concentration can be elucidated. However, these measurements cannot be used directly in model development or validation. The response of stomatal conductance to CO2 will be the same as in the field, but the measured response must be recalculated in such a manner to account for differences in aerodynamic conductance, temperature and VPD between the chamber and the field.

  3. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.

  4. A magnetospheric specification model validation study: Geosynchronous electrons

    NASA Astrophysics Data System (ADS)

    Hilmer, R. V.; Ginet, G. P.

    2000-09-01

    The Rice University Magnetospheric Specification Model (MSM) is an operational space environment model of the inner and middle magnetosphere designed to specify charged particle fluxes up to 100 keV. Validation test data taken between January 1996 and June 1998 consist of electron fluxes measured by a charge control system (CCS) on a defense satellite communications system (DSCS) spacecraft. The CCS includes both electrostatic analyzers to measure the particle environment and surface potential monitors to track differential charging between various materials and vehicle ground. While typical RMS error analysis methods provide a sense of the model's overall abilities, they do not specifically address physical situations critical to operations, i.e., how well does the model specify when a high differential charging state is probable. In this validation study, differential charging states observed by DSCS are used to determine several threshold fluxes for the associated 20-50 keV electrons and joint probability distributions are constructed to determine Hit, Miss, and False Alarm rates for the model. An MSM run covering the two and one-half year interval is performed using the minimum required input parameter set, consisting of only the magnetic activity index Kp, in order to statistically examine the model's seasonal and yearly performance. In addition, the relative merits of the input parameters, i.e., Kp, Dst, the equatorward boundary of diffuse aurora at midnight, cross-polar cap potential, solar wind density and velocity, and interplanetary magnetic field values, are evaluated as drivers of shorter model runs of 100 d each. In an effort to develop operational tools that can address spacecraft charging issues, we also identify temporal features in the model output that can be directly linked to input parameter variations and model boundary conditions. All model output is interpreted using the full three-dimensional, dipole tilt-dependent algorithms currently in operational use at the Air Force 55th Space Weather Squadron (55 SWXS). Results indicate that both diurnal and seasonal activity-related variations in geosynchronous electrons are reproduced in a regular and consistent manner regardless of the input parameters used as drivers. The ability of the MSM to specify DSCS electrons in relation to thresholds indicative of spacecraft charging varies with the combination of input parameters used. The input parameter of greatest benefit to the MSM, after the required Kp index, is the polar cap potential drop as determined by DMSP spacecraft. Regarding the highest electron flux threshold, the model typically achieves high Hit rates paired with both high False Alarm rates and higher RMS error. Suggestions are made regarding the utilization of proxy values for the polar cap potential parameter and Kp-dependent model boundary conditions. The importance of generating accurate real-time proxy input data for operational use is stressed.
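
    The threshold-based evaluation reduces to a two-by-two contingency count of modeled versus observed exceedances; a minimal sketch (flux values invented, and the False Alarm rate here is defined as the probability of false detection, which may differ from the study's exact convention):

        import numpy as np

        def skill_scores(observed_flux, modeled_flux, threshold):
            # Dichotomous verification against a flux threshold:
            #   Hit rate         = hits / (hits + misses)
            #   False Alarm rate = false_alarms / (false_alarms + correct_negatives)
            obs = np.asarray(observed_flux) >= threshold
            mod = np.asarray(modeled_flux) >= threshold
            hits         = np.sum(mod & obs)
            misses       = np.sum(~mod & obs)
            false_alarms = np.sum(mod & ~obs)
            correct_neg  = np.sum(~mod & ~obs)
            hit_rate = hits / max(hits + misses, 1)
            far = false_alarms / max(false_alarms + correct_neg, 1)
            return hit_rate, far

        # Illustrative time series of 20-50 keV electron flux (arbitrary units)
        obs = np.array([1.0, 3.2, 0.4, 5.1, 2.8])
        mod = np.array([0.8, 2.9, 1.1, 4.7, 1.9])
        print(skill_scores(obs, mod, threshold=2.5))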

  5. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
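
    One way to implement such a bootstrap convergence check is to resample the model evaluations, recompute the sensitivity indices, and track how stable the parameter ranking is; in the sketch below a squared correlation coefficient stands in for the EET/RSA/variance-based indices used in the study:

        import numpy as np

        rng = np.random.default_rng(0)

        def bootstrap_rankings(x, y, n_boot=500):
            # Bootstrap a crude sensitivity index (squared correlation of each input
            # factor with the output) and return the parameter ranking per resample.
            n, k = x.shape
            ranks = np.empty((n_boot, k), dtype=int)
            for b in range(n_boot):
                idx = rng.integers(0, n, n)
                s = np.array([np.corrcoef(x[idx, j], y[idx])[0, 1] ** 2 for j in range(k)])
                ranks[b] = np.argsort(np.argsort(-s))   # 0 = most sensitive
            return ranks

        # Synthetic example: factor 0 dominates, factor 2 is insensitive
        x = rng.uniform(size=(200, 3))
        y = 3 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.1, size=200)
        ranks = bootstrap_rankings(x, y)
        print((ranks == ranks.mean(axis=0).round()).mean(axis=0))  # rank stability per factor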

  6. Circulation Control Model Experimental Database for CFD Validation

    NASA Technical Reports Server (NTRS)

    Paschal, Keith B.; Neuhart, Danny H.; Beeler, George B.; Allan, Brian G.

    2012-01-01

    A 2D circulation control wing was tested in the Basic Aerodynamic Research Tunnel at the NASA Langley Research Center. A traditional circulation control wing employs tangential blowing along the span over a trailing-edge Coanda surface for the purpose of lift augmentation. This model has been tested extensively at the Georgia Tech Research Institute for the purpose of performance documentation at various blowing rates. The current study seeks to expand on the previous work by documenting additional flow-field data needed for validation of computational fluid dynamics. Two jet momentum coefficients were tested during this entry: 0.047 and 0.114. Boundary-layer transition was investigated and turbulent boundary layers were established on both the upper and lower surfaces of the model. Chordwise and spanwise pressure measurements were made, and tunnel sidewall pressure footprints were documented. Laser Doppler Velocimetry measurements were made on both the upper and lower surface of the model at two chordwise locations (x/c = 0.8 and 0.9) to document the state of the boundary layers near the spanwise blowing slot.

  7. Vibroacoustic Model Validation for a Curved Honeycomb Composite Panel

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Robinson, Jay H.; Grosveld, Ferdinand W.

    2001-01-01

    Finite element and boundary element models are developed to investigate the vibroacoustic response of a curved honeycomb composite sidewall panel. Results from vibroacoustic tests conducted in the NASA Langley Structural Acoustic Loads and Transmission facility are used to validate the numerical predictions. The sidewall panel is constructed from a flexible honeycomb core sandwiched between carbon fiber reinforced composite laminate face sheets. This type of construction is being used in the development of an all-composite aircraft fuselage. In contrast to conventional rib-stiffened aircraft fuselage structures, the composite panel has nominally uniform thickness resulting in a uniform distribution of mass and stiffness. Due to differences in the mass and stiffness distribution, the noise transmission mechanisms for the composite panel are expected to be substantially different from those of a conventional rib-stiffened structure. The development of accurate vibroacoustic models will aid in the understanding of the dominant noise transmission mechanisms and enable optimization studies to be performed that will determine the most beneficial noise control treatments. Finite element and boundary element models of the sidewall panel are described. Vibroacoustic response predictions are presented for forced vibration input and the results are compared with experimental data.

  8. Validation of an Acoustic Impedance Prediction Model for Skewed Resonators

    NASA Technical Reports Server (NTRS)

    Howerton, Brian M.; Parrott, Tony L.

    2009-01-01

    An impedance prediction model was validated experimentally to determine the composite impedance of a series of high-aspect ratio slot resonators incorporating channel skew and sharp bends. Such structures are useful for packaging acoustic liners into constrained spaces for turbofan noise control applications. A formulation of the Zwikker-Kosten Transmission Line (ZKTL) model, incorporating the Richards correction for rectangular channels, is used to calculate the composite normalized impedance of a series of six multi-slot resonator arrays with constant channel length. Experimentally, acoustic data was acquired in the NASA Langley Normal Incidence Tube over the frequency range of 500 to 3500 Hz at 120 and 140 dB OASPL. Normalized impedance was reduced using the Two-Microphone Method for the various combinations of channel skew and sharp 90° and 180° bends. Results show that the presence of skew and/or sharp bends does not significantly alter the impedance of a slot resonator as compared to a straight resonator of the same total channel length. ZKTL predicts the impedance of such resonators very well over the frequency range of interest. The model can be used to design arrays of slot resonators that can be packaged into complex geometries heretofore unsuitable for effective acoustic treatment.

  9. VALIDATION OF A SUB-MODEL OF FORAGE GROWTH OF THE INTEGRATED FARM SYSTEM MODEL - IFSM

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A sub-model of forage production developed for temperate climate is being adapted to tropical conditions in Brazil. Sub-model predictive performance has been evaluated using data of Cynodon spp. Results from sensitivity and validation tests were consistent, but values of DM production for the wet se...

  10. Modeling of a Foamed Emulsion Bioreactor: I. Model Development and Experimental Validation

    E-print Network

    ARTICLE Modeling of a Foamed Emulsion Bioreactor: I. Model Development and Experimental Validation, a new type of bioreactor for air pollution control referred to as the foamed emulsion bior- eactor (FEBR) has been developed. The process relies on the emulsion of an organic phase with a suspension

  11. Validation of population-based disease simulation models: a review of concepts and methods

    PubMed Central

    2010-01-01

    Background Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results Evidence of model credibility derives from examining: 1) the process of model development, 2) the performance of a model, and 3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility. PMID:21087466

  12. An approach to model validation and model-based prediction -- polyurethane foam case study.

    SciTech Connect

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the ''philosophy'' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating modifying and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a ''model supplement term'' when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapons response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapons response. 
In the validation analysis it is indicated that the model tends to ''exaggerate'' the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model-based predictions. Several hypothetical prediction problems are created and addressed. Hypothetical problems are used because no guidance was provided concern...
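
    A hedged sketch of constructing such a model-supplement (discrepancy) term from validation residuals and folding it into predictions; the linear-in-temperature form and all numbers are assumptions for illustration only, not the report's actual analysis:

        import numpy as np

        # Hypothetical validation pairs: (temperature in C, measured minus simulated response)
        temps     = np.array([300., 400., 500., 600.])
        residuals = np.array([-0.02, -0.06, -0.11, -0.18])  # model "exaggerates" with temperature

        # Model-supplement term delta(T) fit as a straight line in temperature
        a, b = np.polyfit(temps, residuals, deg=1)

        def corrected_prediction(model_output, temperature):
            # Add the fitted discrepancy term to the raw model prediction
            return model_output + (a * temperature + b)

        print(corrected_prediction(model_output=1.30, temperature=450.0))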

  13. Criterion Validity, Severity Cut Scores, and Test-Retest Reliability of the Beck Depression Inventory-II in a University Counseling Center Sample

    ERIC Educational Resources Information Center

    Sprinkle, Stephen D.; Lurie, Daphne; Insko, Stephanie L.; Atkinson, George; Jones, George L.; Logan, Arthur R.; Bissada, Nancy N.

    2002-01-01

    The criterion validity of the Beck Depression Inventory-II (BDI-II; A. T. Beck, R. A. Steer, & G. K. Brown, 1996) was investigated by pairing blind BDI-II administrations with the major depressive episode portion of the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I; M. B. First, R. L. Spitzer, M. Gibbon, & J. B. W. Williams,…

  14. Validation model for Raman based skin carotenoid detection.

    PubMed

    Ermakov, Igor V; Gellermann, Werner

    2010-12-01

    Raman spectroscopy holds promise as a rapid objective non-invasive optical method for the detection of carotenoid compounds in human tissue in vivo. Carotenoids are of interest due to their functions as antioxidants and/or optical absorbers of phototoxic light at deep blue and near UV wavelengths. In the macular region of the human retina, carotenoids may prevent or delay the onset of age-related tissue degeneration. In human skin, they may help prevent premature skin aging, and are possibly involved in the prevention of certain skin cancers. Furthermore, since carotenoids exist in high concentrations in a wide variety of fruits and vegetables, and are routinely taken up by the human body through the diet, skin carotenoid levels may serve as an objective biomarker for fruit and vegetable intake. Before the Raman method can be accepted as a widespread optical alternative for carotenoid measurements, direct validation studies are needed to compare it with the gold standard of high performance liquid chromatography. This is because the tissue Raman response is in general accompanied by a host of other optical processes which have to be taken into account. In skin, the most prominent is strongly diffusive, non-Raman scattering, leading to relatively shallow light penetration of the blue/green excitation light required for resonant Raman detection of carotenoids. Also, sizable light attenuation exists due to the combined absorption from collagen, porphyrin, hemoglobin, and melanin chromophores, and additional fluorescence is generated by collagen and porphyrins. In this study, we investigate for the first time the direct correlation of in vivo skin tissue carotenoid Raman measurements with subsequent chromatography derived carotenoid concentrations. As tissue site we use heel skin, in which the stratum corneum layer thickness exceeds the light penetration depth, which is free of optically confounding chromophores, which can be easily optically accessed for in vivo RRS measurement, and which can be easily removed for subsequent biochemical measurements. Excellent correlation (coefficient R=0.95) is obtained for this tissue site which could serve as a model site for scaled up future validation studies of large populations. The obtained results provide proof that resonance Raman spectroscopy is a valid non-invasive objective methodology for the quantitative assessment of carotenoid antioxidants in human skin in vivo. PMID:20678465

  15. NAIRAS aircraft radiation model development, dose climatology, and initial validation

    PubMed Central

    Mertens, Christopher J; Meier, Matthias M; Brown, Steven; Norman, Ryan B; Xu, Xiaojing

    2013-01-01

    [1] The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis suggests that these single-point differences will be within 30% when a new deterministic pion-initiated electromagnetic cascade code is integrated into NAIRAS, an effort which is currently underway. PMID:26213513

  16. Stratospheric Heterogeneous Chemistry and Microphysics: Model Development, Validation and Applications

    NASA Technical Reports Server (NTRS)

    Turco, Richard P.

    1996-01-01

    The objectives of this project are to: define the chemical and physical processes leading to stratospheric ozone change that involve polar stratospheric clouds (PSCS) and the reactions occurring on the surfaces of PSC particles; study the formation processes, and the physical and chemical properties of PSCS, that are relevant to atmospheric chemistry and to the interpretation of field measurements taken during polar stratosphere missions; develop quantitative models describing PSC microphysics and heterogeneous chemical processes; assimilate laboratory and field data into these models; and calculate the extent of chemical processing on PSCs and the impact of specific microphysical processes on polar composition and ozone depletion. During the course of the project, a new coupled microphysics/physical-chemistry/ photochemistry model for stratospheric sulfate aerosols and nitric acid and ice PSCs was developed and applied to analyze data collected during NASA's Arctic Airborne Stratospheric Expedition-II (AASE-II) and other missions. In this model, detailed treatments of multicomponent sulfate aerosol physical chemistry, sulfate aerosol microphysics, polar stratospheric cloud microphysics, PSC ice surface chemistry, as well as homogeneous gas-phase chemistry were included for the first time. In recent studies focusing on AASE measurements, the PSC model was used to analyze specific measurements from an aircraft deployment of an aerosol impactor, FSSP, and NO(y) detector. The calculated results are in excellent agreement with observations for particle volumes as well as NO(y) concentrations, thus confirming the importance of supercooled sulfate/nitrate droplets in PSC formation. The same model has been applied to perform a statistical study of PSC properties in the Northern Hemisphere using several hundred high-latitude air parcel trajectories obtained from Goddard. The rates of ozone depletion along trajectories with different meteorological histories are presently being systematically evaluated to identify the principal relationships between ozone loss and aerosol state. Under this project, we formulated a detailed quantitative model that predicts the multicomponent composition of sulfate aerosols under stratospheric conditions, including sulfuric, nitric, hydrochloric, hydrofluoric and hydrobromic acids. This work defined for the first time the behavior of liquid ternary-system type-1b PSCS. The model also allows the compositions and reactivities of sulfate aerosols to be calculated over the entire range of environmental conditions encountered in the stratosphere (and has been incorporated into a trajectory/microphysics model-see above). Important conclusions that derived from this work over the last few years include the following: the HNO3 content of liquid-state aerosols dominate PSCs below about 195 K; the freezing of nitric acid ice from sulfate aerosol solutions is likely to occur within a few degrees K of the water vapor frost point; the uptake and reactions of HCl in liquid aerosols is a critical component of PSC heterogeneous chemistry. In a related application of this work, the inefficiency of chlorine injection into the stratosphere during major volcanic eruptions was explained on the basis of nucleation of sulfuric acid aerosols in rising volcanic plumes leading to the formation of supercooled water droplets on these aerosols, which efficiently scavenges HCl via precipitation.

  17. A numerical model on transient, two-dimensional flow and heat transfer in He II

    NASA Astrophysics Data System (ADS)

    Kitamura, T.; Shiramizu, K.; Fujimoto, N.; Rao, Y. F.; Fukuda, K.

A new numerical model is developed to study the unique features of flow and heat transfer in superfluid helium, or He II. The model, called the simplified model, is derived from the original two-fluid model. It consists of a conventional continuity equation, a momentum equation for the total fluid in the form of a modified Navier-Stokes equation, and an energy equation in the form of the conventional temperature-based energy equation, in which the heat flux due to Gorter-Mellink internal convection is properly incorporated. To verify the validity of the simplified model, its results are compared with those of the original two-fluid model in the analysis of one-dimensional heat transfer in a vertical He II duct heated at the bottom boundary. To demonstrate the capability of the present model for multi-dimensional problems, a two-dimensional analysis is performed for internal-convection heat transfer in an He II pool with one of the walls partially heated. The two-dimensional results obtained by the present model are also compared with those obtained by the modified two-dimensional model of Ramadan and Witt.
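
    To make the Gorter-Mellink internal-convection term concrete, the sketch below evaluates a steady, one-dimensional reduction of the temperature-based energy equation for a He II duct carrying a constant heat flux. The duct length, heat flux, conduction function f(T) (taken constant), and bath temperature are placeholder values for illustration only; the simplified model in the paper also carries continuity and momentum equations and treats f(T) as temperature dependent.

    ```python
    import numpy as np

    # --- Illustrative constants (not values from the paper) ---
    L      = 0.1       # duct length (m)
    nx     = 200       # grid points
    x      = np.linspace(0.0, L, nx)
    q_duct = 1.0e4     # steady heat flux carried by the duct (W/m^2)
    f_inv  = 1.0e13    # Gorter-Mellink conduction function f(T)^-1, held constant here (W^3 m^-5 K^-1)
    T_bath = 1.9       # temperature of the cold (top) end (K)

    # Gorter-Mellink counterflow law: q = (f(T)^-1 * |dT/dx|)^(1/3)  =>  |dT/dx| = q^3 * f(T)
    grad = q_duct**3 / f_inv       # K/m, uniform because f(T)^-1 is held constant

    # Integrate the gradient from the cold end (x = L) back to the heated end (x = 0)
    T = T_bath + grad * (L - x)

    print(f"temperature at the heated end: {T[0]:.4f} K "
          f"(rise of {T[0] - T_bath:.4f} K over {L} m)")
    ```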

  18. Mitigation of turbidity currents in reservoirs with passive retention systems: validation of CFD modeling

    NASA Astrophysics Data System (ADS)

    Ferreira, E.; Alves, E.; Ferreira, R. M. L.

    2012-04-01

Sediment deposition by continuous turbidity currents may affect eco-environmental river dynamics in natural reservoirs and hinder the maneuverability of bottom discharge gates in dam reservoirs. In recent years, innovative techniques have been proposed to force the deposition of turbidity currents further upstream in the reservoir (and away from the dam), namely the use of solid and permeable obstacles such as water jet screens, geotextile screens, etc. The main objective of this study is to validate a computational fluid dynamics (CFD) code applied to the simulation of the interaction between a turbidity current and a passive retention system designed to induce sediment deposition. To accomplish the proposed objective, laboratory tests were conducted in which a simple obstacle configuration was subjected to the passage of currents with different initial sediment concentrations. The experimental data were used to build benchmark cases to validate the 3D CFD software ANSYS-CFX. Sensitivity tests of mesh design, turbulence models and discretization requirements were performed. The validation consisted of comparing experimental and numerical results, involving instantaneous and time-averaged sediment concentrations and velocities. In general, good agreement between the numerical and the experimental values is achieved when: i) realistic outlet conditions are specified, ii) channel roughness is properly calibrated, iii) two-equation k-ε turbulence models are employed, and iv) a fine mesh is used near the bottom boundary. Acknowledgements: This study was funded by the Portuguese Foundation for Science and Technology through the project PTDC/ECM/099485/2008. The first author thanks Professor Moitinho de Almeida from ICIST and all members of the project and of the Fluvial Hydraulics group of CEHIDRO for their assistance.
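
    Since the validation described above reduces to quantifying the mismatch between measured and simulated concentration and velocity profiles, a minimal comparison routine is sketched below. The profile values and the acceptance threshold are hypothetical; the actual study compared ANSYS-CFX output against the laboratory data with its own criteria.

    ```python
    import numpy as np

    def validation_metrics(measured, simulated):
        """Point-wise agreement metrics between an experimental profile and a CFD profile."""
        measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
        error = simulated - measured
        rmse  = np.sqrt(np.mean(error**2))
        bias  = np.mean(error)
        nrmse = rmse / (measured.max() - measured.min())   # normalised RMSE
        return {"rmse": rmse, "bias": bias, "nrmse": nrmse}

    # Hypothetical time-averaged sediment concentrations (g/L) at matching heights above the bed
    c_exp = [12.1, 10.4, 8.0, 5.2, 2.9, 1.1]
    c_cfd = [11.6, 10.9, 7.5, 5.6, 3.3, 0.9]

    metrics = validation_metrics(c_exp, c_cfd)
    print(metrics)
    print("acceptable" if metrics["nrmse"] < 0.10 else "needs recalibration")
    ```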

  19. PARALLEL MEASUREMENT AND MODELING OF TRANSPORT IN THE DARHT II BEAMLINE ON ETA II

    SciTech Connect

    Chambers, F W; Raymond, B A; Falabella, S; Lee, B S; Richardson, R A; Weir, J T; Davis, H A; Schultze, M E

    2005-05-31

    To successfully tune the DARHT II transport beamline requires the close coupling of a model of the beam transport and the measurement of the beam observables as the beam conditions and magnet settings are varied. For the ETA II experiment using the DARHT II beamline components this was achieved using the SUICIDE (Simple User Interface Connecting to an Integrated Data Environment) data analysis environment and the FITS (Fully Integrated Transport Simulation) model. The SUICIDE environment has direct access to the experimental beam transport data at acquisition and the FITS predictions of the transport for immediate comparison. The FITS model is coupled into the control system where it can read magnet current settings for real time modeling. We find this integrated coupling is essential for model verification and the successful development of a tuning aid for the efficient convergence on a useable tune. We show the real time comparisons of simulation and experiment and explore the successes and limitations of this close coupled approach.

  20. Comparing Validity and Reliability in Special Education Title II and IDEA Data

    ERIC Educational Resources Information Center

    Steinbrecher, Trisha D.; McKeown, Debra; Walther-Thomas, Chriss

    2013-01-01

    Previous researchers have found that special education teacher shortages are pervasive and exacerbated by federal policies regarding "highly qualified" teacher requirements. The authors examined special education teacher personnel data from 2 federal data sources to determine if these sources offer a reliable and valid means of…

  1. Validity of Social, Moral and Emotional Facets of Self-Description Questionnaire II

    ERIC Educational Resources Information Center

    Leung, Kim Chau; Marsh, Herbert W.; Yeung, Alexander Seeshing; Abduljabbar, Adel S.

    2015-01-01

    Studies adopting a construct validity approach can be categorized into within- and between-network studies. Few studies have applied between-network approach and tested the correlations of the social (same-sex relations, opposite-sex relations, parent relations), moral (honesty-trustworthiness), and emotional (emotional stability) facets of the…

  2. The African American Acculturation Scale II: Cross-Validation and Short Form.

    ERIC Educational Resources Information Center

    Landrine, Hope; Klonoff, Elizabeth A.

    1995-01-01

Studied African American culture, using a new, shortened, 33-item African American Acculturation Scale (AAAS-33) to assess the scale's validity and reliability. Comparisons between the original form and the AAAS-33 reveal high correlations; however, the longer form may be sensitive to some beliefs, practices, and attitudes not assessed by the short…

  3. Higgs potential in the type II seesaw model

    SciTech Connect

    Arhrib, A.; Benbrik, R.; Chabab, M.; Rahili, L.; Ramadan, J.; Moultaka, G.; Peyranere, M. C.

    2011-11-01

The standard model Higgs sector, extended by one weak gauge triplet of scalar fields with a very small vacuum expectation value, is a very promising setting to account for neutrino masses through the so-called type II seesaw mechanism. In this paper we consider the general renormalizable doublet/triplet Higgs potential of this model. We perform a detailed study of its main dynamical features that depend on five dimensionless couplings and two mass parameters after spontaneous symmetry breaking, and highlight the implications for the Higgs phenomenology. In particular, we determine (i) the complete set of tree-level unitarity constraints on the couplings of the potential and (ii) the exact tree-level boundedness from below constraints on these couplings, valid for all directions. When combined, these constraints delineate precisely the theoretically allowed parameter space domain within our perturbative approximation. Among the seven physical Higgs states of this model, the mass of the lighter (heavier) CP-even state h^0 (H^0) will always satisfy a theoretical upper (lower) bound that is reached for a critical value μ_c of μ (the mass parameter controlling triple couplings among the doublet/triplet Higgses). Saturating the unitarity bounds, we find an upper bound on the h^0 mass and two regimes of the parameter space, μ ≳ μ_c and μ ≲ μ_c. In the first regime the Higgs sector is typically very heavy, and only h^0, which becomes SM-like, could be accessible to the LHC. In contrast, in the second regime, somewhat overlooked in the literature, most of the Higgs sector is light. In particular, the heaviest state H^0 becomes SM-like, the lighter states being the CP-odd Higgs, the (doubly) charged Higgses, and a decoupled h^0, possibly leading to a distinctive phenomenology at the colliders.

  4. Validation of community models: 2. Development of a baseline using the Wang-Sheeley-Arge model

    NASA Astrophysics Data System (ADS)

    MacNeice, Peter

    2009-12-01

    This paper is the second in a series providing independent validation of community models of the outer corona and inner heliosphere. Here I present a comprehensive validation of the Wang-Sheeley-Arge (WSA) model. These results will serve as a baseline against which to compare the next generation of comparable forecasting models. The WSA model is used by a number of agencies to predict Solar wind conditions at Earth up to 4 days into the future. Given its importance to both the research and forecasting communities, it is essential that its performance be measured systematically and independently. I offer just such an independent and systematic validation. I report skill scores for the model's predictions of wind speed and interplanetary magnetic field (IMF) polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests synoptic line of sight magnetograms. For this study I generated model results for monthly magnetograms from multiple observatories, spanning the Carrington rotation range from 1650 to 2074. I compare the influence of the different magnetogram sources and performance at quiet and active times. I also consider the ability of the WSA model to forecast both sharp transitions in wind speed from slow to fast wind and reversals in the polarity of the radial component of the IMF. These results will serve as a baseline against which to compare future versions of the model as well as the current and future generation of magnetohydrodynamic models under development for forecasting use.
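
    As one concrete example of the kind of skill score reported for the wind-speed predictions, the sketch below computes a mean-square-error skill score against a persistence baseline. The observed and predicted series are invented, and persistence is only an assumed reference; the study defines its own baselines and scoring details.

    ```python
    import numpy as np

    def mse_skill_score(observed, predicted, reference):
        """Skill score SS = 1 - MSE(model)/MSE(reference); 1 is perfect, 0 matches the reference."""
        observed, predicted, reference = map(np.asarray, (observed, predicted, reference))
        mse_model = np.mean((predicted - observed) ** 2)
        mse_ref   = np.mean((reference - observed) ** 2)
        return 1.0 - mse_model / mse_ref

    # Hypothetical daily solar wind speeds (km/s) over part of a Carrington rotation
    obs  = np.array([420, 450, 600, 650, 500, 430, 410, 480, 620, 580], dtype=float)
    wsa  = np.array([400, 470, 560, 640, 530, 450, 400, 500, 590, 560], dtype=float)
    pers = np.roll(obs, 1)     # persistence baseline: yesterday's observed value

    print(f"skill score vs persistence: {mse_skill_score(obs, wsa, pers):.2f}")
    ```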

  5. Validation of Community Models: 2. Development of a Baseline, Using the Wang-Sheeley-Arge Model

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    This paper is the second in a series providing independent validation of community models of the outer corona and inner heliosphere. Here I present a comprehensive validation of the Wang-Sheeley-Arge (WSA) model. These results will serve as a baseline against which to compare the next generation of comparable forecasting models. The WSA model is used by a number of agencies to predict Solar wind conditions at Earth up to 4 days into the future. Given its importance to both the research and forecasting communities, it is essential that its performance be measured systematically and independently. I offer just such an independent and systematic validation. I report skill scores for the model's predictions of wind speed and interplanetary magnetic field (IMF) polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests synoptic line of sight magnetograms. For this study I generated model results for monthly magnetograms from multiple observatories, spanning the Carrington rotation range from 1650 to 2074. I compare the influence of the different magnetogram sources and performance at quiet and active times. I also consider the ability of the WSA model to forecast both sharp transitions in wind speed from slow to fast wind and reversals in the polarity of the radial component of the IMF. These results will serve as a baseline against which to compare future versions of the model as well as the current and future generation of magnetohydrodynamic models under development for forecasting use.

  6. Alaska North Slope Tundra Travel Model and Validation Study

    SciTech Connect

    Harry R. Bader; Jacynthe Guimond

    2006-03-01

    The Alaska Department of Natural Resources (DNR), Division of Mining, Land, and Water manages cross-country travel, typically associated with hydrocarbon exploration and development, on Alaska's arctic North Slope. This project is intended to provide natural resource managers with objective, quantitative data to assist decision making regarding opening of the tundra to cross-country travel. DNR designed standardized, controlled field trials, with baseline data, to investigate the relationships present between winter exploration vehicle treatments and the independent variables of ground hardness, snow depth, and snow slab thickness, as they relate to the dependent variables of active layer depth, soil moisture, and photosynthetically active radiation (a proxy for plant disturbance). Changes in the dependent variables were used as indicators of tundra disturbance. Two main tundra community types were studied: Coastal Plain (wet graminoid/moist sedge shrub) and Foothills (tussock). DNR constructed four models to address physical soil properties: two models for each main community type, one predicting change in depth of active layer and a second predicting change in soil moisture. DNR also investigated the limited potential management utility in using soil temperature, the amount of photosynthetically active radiation (PAR) absorbed by plants, and changes in microphotography as tools for the identification of disturbance in the field. DNR operated under the assumption that changes in the abiotic factors of active layer depth and soil moisture drive alteration in tundra vegetation structure and composition. Statistically significant differences in depth of active layer, soil moisture at a 15 cm depth, soil temperature at a 15 cm depth, and the absorption of photosynthetically active radiation were found among treatment cells and among treatment types. The models were unable to thoroughly investigate the interacting role between snow depth and disturbance due to a lack of variability in snow depth cover throughout the period of field experimentation. The amount of change in disturbance indicators was greater in the tundra communities of the Foothills than in those of the Coastal Plain. However the overall level of change in both community types was less than expected. In Coastal Plain communities, ground hardness and snow slab thickness were found to play an important role in change in active layer depth and soil moisture as a result of treatment. In the Foothills communities, snow cover had the most influence on active layer depth and soil moisture as a result of treatment. Once certain minimum thresholds for ground hardness, snow slab thickness, and snow depth were attained, it appeared that little or no additive effect was realized regarding increased resistance to disturbance in the tundra communities studied. DNR used the results of this modeling project to set a standard for maximum permissible disturbance of cross-country tundra travel, with the threshold set below the widely accepted standard of Low Disturbance levels (as determined by the U.S. Fish and Wildlife Service). DNR followed the modeling project with a validation study, which seemed to support the field trial conclusions and indicated that the standard set for maximum permissible disturbance exhibits a conservative bias in favor of environmental protection. Finally DNR established a quick and efficient tool for visual estimations of disturbance to determine when investment in field measurements is warranted. 
This Visual Assessment System (VAS) seemed to support the plot disturbance measurements taken during the modeling and validation phases of this project.
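
    The physical-soil-property models described above are, in essence, regressions of a disturbance indicator on ground hardness, snow depth, and snow slab thickness. The sketch below fits such a model with ordinary least squares on invented data; the predictors, response values, and resulting coefficients are purely illustrative and are not DNR's fitted models.

    ```python
    import numpy as np

    # Hypothetical treatment-cell data: ground hardness (kPa), snow depth (cm), slab thickness (cm)
    X = np.array([
        [350, 20,  5],
        [500, 35,  9],
        [650, 15,  4],
        [420, 40, 12],
        [700, 25,  8],
        [560, 30, 10],
    ], dtype=float)
    # Response: observed change in active layer depth (cm) after vehicle treatment
    y = np.array([6.1, 3.0, 4.8, 2.2, 2.5, 2.9])

    # Ordinary least squares with an intercept column
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    for name, c in zip(["intercept", "hardness", "snow depth", "slab thickness"], coef):
        print(f"{name:>15s}: {c:+.4f}")

    # Predicted disturbance for a new cell (illustrative only)
    new_cell = np.array([1.0, 600.0, 28.0, 7.0])
    print(f"predicted change in active layer depth: {new_cell @ coef:.2f} cm")
    ```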

  7. Science Motivation Questionnaire II: Validation with Science Majors and Nonscience Majors

    ERIC Educational Resources Information Center

    Glynn, Shawn M.; Brickman, Peggy; Armstrong, Norris; Taasoobshirazi, Gita

    2011-01-01

    From the perspective of social cognitive theory, the motivation of students to learn science in college courses was examined. The students--367 science majors and 313 nonscience majors--responded to the Science Motivation Questionnaire II, which assessed five motivation components: intrinsic motivation, self-determination, self-efficacy, career…

  8. The Internal Validation of Level II and Level III Respiratory Therapy Examinations. Final Report.

    ERIC Educational Resources Information Center

    Jouett, Michael L.

    This project began with the delineation of the roles and functions of respiratory therapy personnel by the American Association for Respiratory Therapy. In Phase II, The Psychological Corporation used this delineation to develop six proficiency examinations, three at each of two levels. One exam at each level was designated for the purpose of the…

  9. Results of site validation experiments. Volume II. Supporting documents 5 through 14

    SciTech Connect

    Not Available

    1983-01-01

Volume II contains the following supporting documents: Summary of Geologic Mapping of Underground Investigations; Logging of Vertical Coreholes - ''Double Box'' Area and Exploratory Drift; WIPP High Precision Gravity Survey; Basic Data Reports for Drillholes, Brine Content of Facility Internal Strata; Mineralogical Content of Facility Interval Strata; Location and Characterization of Interbedded Materials; Characterization of Aquifers at Shaft Locations; and Permeability of Facility Interval Strata.

  10. MODEL DRIVEN DEVELOPMENT OF DIGITAL LIBRARIES Validation, Analysis and Code Generation

    E-print Network

    de Lara, Juan

    MODEL DRIVEN DEVELOPMENT OF DIGITAL LIBRARIES Validation, Analysis and Code Generation Esther Science, University La Sapienza, Rome (Italy) malizia@di.uniroma1.it Keywords: Digital Library, Model model-driven approach for the formal construction and validation of Digital Libraries (DLs). We have

  11. Internship proposal "Modeling and validation of vortex production and jet formation"

    E-print Network

    Condat, Laurent

    Internship proposal "Modeling and validation of vortex production and jet formation" Introduction.e. moderate Reynolds and low Mach number flow. Objectives The aim of this internship is to model vortex. In addition, an experimental model validation is intended. Place of internship Gipsa-lab, Campus Grenoble

  12. Experiments for calibration and validation of plasticity and failure material modeling: 304L stainless steel.

    SciTech Connect

    Lee, Kenneth L.; Korellis, John S.; McFadden, Sam X.

    2006-01-01

    Experimental data for material plasticity and failure model calibration and validation were obtained from 304L stainless steel. Model calibration data were taken from smooth tension, notched tension, and compression tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path dependent combinations of internal pressure, extension, and torsion.

  13. Model of the expansion of H II region RCW 82

    SciTech Connect

    Krasnobaev, K. V.; Kotova, G. Yu.; Tagirova, R. R. E-mail: gviana2005@gmail.com

    2014-05-10

This paper aims to resolve the problem of the formation of young objects observed in the RCW 82 H II region. In the framework of a classical trigger model, the estimated time of fragmentation is larger than the estimated age of the H II region, so the young objects could not have formed during the dynamical evolution of the H II region. We propose a new model that helps resolve this problem. This model suggests that the H II region RCW 82 is embedded in a cloud of limited size that is denser than the surrounding interstellar medium. According to this model, when the ionization-shock front leaves the cloud it causes the formation of an accelerating dense gas shell. In the accelerated shell, the effects of the Rayleigh-Taylor (R-T) instability dominate, and the characteristic time for the growth of perturbations with the observed magnitude of about 3 pc is 0.14 Myr, which is less than the estimated age of the H II region. The total time, which is the sum of the expansion time of the H II region to the edge of the cloud, the time of the R-T instability growth, and the free-fall time, is estimated to lie between 0.44 and 0.78 Myr. We conclude that the young objects in the H II region RCW 82 could have formed as a result of the R-T instability with subsequent fragmentation into large-scale condensations.

  14. An efficient atmospheric boundary layer model for GCMs: Its design, validation, and implementation into the GISS-GCM

    SciTech Connect

    Zinn, H.P.

    1993-12-31

    Climate prediction models need realistic descriptions of physical subgrid scale processes in order to provide reliable long-range global forecasts. This dissertation concerns the design, validation, and efficient implementation of a model for the Planetary Boundary Layer (PBL) for Global Circulation Models (GCMs). The project was motivated by the need for a better representation of the turbulent surface fluxes for heat and momentum. Special emphasis is placed in the PBL model on the realistic representation of the effects of thermal stratification and latitudinal variation. The new PBL model consists of a surface layer and a mixed layer with variable depth. The two domains are matched together with the conditions of constant momentum and heat flux. An algebraic solution to the mean momentum equations formulated as a symmetric rotating channel flow describes the mixed-layer velocity profile. Turbulent diffusion is modeled in the surface layer by a drag law and by a first-order closure model in the mixed layer and depends on thermal stratification. The coupled system is solved by an iterative method. In order to preserve the computational efficiency of the GISS-GCM, the new PBL model is implemented by means of look-up tables with the external parameters given by the bulk PBL Richardson number, PBL depth, neutral drag coefficient, and latitude. The diurnal cycle of PBL variables at some selected locations and their global distribution are compared between the GISS-GCM with the new PBL model (Model IIA) and Model II. The new PBL model responds more realistically to the external forcings than Model II. A multi-year simulation of Model IIA shows improved agreement with observed mean climatic quantities over Model II as seen in the mean January surface wind field and precipitation distribution. The new PBL model also affects the transient flow, as seen in the higher kinetic energy of the large-scale nonstationary eddies.

  15. A technique for global monitoring of net solar irradiance at the ocean surface. II - Validation

    NASA Technical Reports Server (NTRS)

    Chertock, Beth; Frouin, Robert; Gautier, Catherine

    1992-01-01

The generation and validation of the first satellite-based long-term record of surface solar irradiance over the global oceans are addressed. The record is generated using Nimbus-7 earth radiation budget (ERB) wide-field-of-view planetary-albedo data as input to a numerical algorithm designed and implemented based on radiative transfer theory. The mean monthly values of net surface solar irradiance are computed on a 9-deg latitude-longitude spatial grid for November 1978-October 1985. The new data set is validated in comparisons with short-term, regional, high-resolution, satellite-based records. The ERB-based values of net surface solar irradiance are compared with corresponding values based on radiance measurements taken by the Visible-Infrared Spin Scan Radiometer aboard GOES series satellites. Errors in the new data set are estimated to lie between 10 and 20 W/sq m on monthly time scales.

  16. Validation of Community Models: Identifying Events in Space Weather Model Timelines

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    I develop and document a set of procedures which test the quality of predictions of solar wind speed and polarity of the interplanetary magnetic field (IMF) made by coupled models of the ambient solar corona and heliosphere. The Wang-Sheeley-Arge (WSA) model is used to illustrate the application of these validation procedures. I present an algorithm which detects transitions of the solar wind from slow to high speed. I also present an algorithm which processes the measured polarity of the outward directed component of the IMF. This removes high-frequency variations to expose the longer-scale changes that reflect IMF sector changes. I apply these algorithms to WSA model predictions made using a small set of photospheric synoptic magnetograms obtained by the Global Oscillation Network Group as input to the model. The results of this preliminary validation of the WSA model (version 1.6) are summarized.
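
    A minimal version of the two procedures described here, detecting slow-to-fast wind transitions and smoothing the IMF polarity to expose sector boundaries, is sketched below. The window lengths, speed-jump threshold, and synthetic series are placeholders; the paper's event-identification algorithms are more elaborate.

    ```python
    import numpy as np

    def detect_speed_transitions(speed, jump=150.0, window=8):
        """Indices where wind speed rises by more than `jump` km/s within `window` samples."""
        speed, events = np.asarray(speed, float), []
        for i in range(len(speed) - window):
            if speed[i + window] - speed[i] > jump:
                if not events or i - events[-1] > window:   # suppress duplicate detections
                    events.append(i)
        return events

    def smoothed_polarity(bx, window=12):
        """Sign of a running mean of the radial IMF component, exposing sector-scale changes."""
        kernel = np.ones(window) / window
        return np.sign(np.convolve(np.asarray(bx, float), kernel, mode="same"))

    # Hypothetical hourly series: one high-speed stream and one sector boundary
    rng = np.random.default_rng(0)
    speed = 400 + 20 * rng.standard_normal(200)
    speed[80:140] += 250.0                                  # idealised high-speed stream
    bx = np.where(np.arange(200) < 100, 3.0, -3.0) + rng.standard_normal(200)

    print("transition start indices:", detect_speed_transitions(speed))
    print("sector change near index:", int(np.argmin(np.diff(smoothed_polarity(bx)))))
    ```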

  17. Validation of community models: Identifying events in space weather model timelines

    NASA Astrophysics Data System (ADS)

    MacNeice, Peter

    2009-06-01

    I develop and document a set of procedures which test the quality of predictions of solar wind speed and polarity of the interplanetary magnetic field (IMF) made by coupled models of the ambient solar corona and heliosphere. The Wang-Sheeley-Arge (WSA) model is used to illustrate the application of these validation procedures. I present an algorithm which detects transitions of the solar wind from slow to high speed. I also present an algorithm which processes the measured polarity of the outward directed component of the IMF. This removes high-frequency variations to expose the longer-scale changes that reflect IMF sector changes. I apply these algorithms to WSA model predictions made using a small set of photospheric synoptic magnetograms obtained by the Global Oscillation Network Group as input to the model. The results of this preliminary validation of the WSA model (version 1.6) are summarized.

  18. Validation study of a cell culture model of colorectal cancer.

    PubMed

    Luca, Tonia; Privitera, Giovanna; Lo Monaco, Maria; Prezzavento, Consolazione; Castorina, Sergio

    2007-01-01

Colorectal cancer is a significant cause of morbidity and mortality in Western populations. Because colonic epithelial cells play an important role in the pathophysiology of this cancer, we set up a mechanical method combined with an enzymatic digestion of surgical resections derived from our Clinical Centre to obtain tumoral colon epithelium cell cultures. The cells proliferated under the chosen culture conditions and were maintained for several weeks, including subcultivation steps. We characterised the cell morphology by light and phase-contrast microscopy and by immunohistochemistry analysis. Moreover, we also demonstrated the preservation of the secretory function of the cultured cells over time. This validated model of primary epithelial cells from colon cancer will be used to understand the biological and pathological features of human tumoral colonic cells. This will be done by studying the expression of specific proteins in the tumor and analysing mutations of specific genes in each patient to relate each genetic signature to a precise pharmacological response. PMID:17687873

  19. Geoid model computation and validation over Alaska/Yukon

    NASA Astrophysics Data System (ADS)

    Li, X.; Huang, J.; Roman, D. R.; Wang, Y.; Veronneau, M.

    2012-12-01

The Alaska and Yukon area consists of very complex and dynamic geology. It contains the two highest mountains in North America, Mount McKinley (20,320 ft) in Alaska, USA and Mount Logan (19,541 ft) in Yukon, Canada, along with the Alaska trench along the plate boundaries. On the one hand, this complex geology gives rise to large horizontal geoid gradients across this area. On the other hand, geoid time variation is much stronger than in most other areas of the world due to tectonic movement, post-glacial rebound and ice-melting effects in this region. This type of geology poses great challenges for the determination of the North American geoid over this area, which demands proper gravity data coverage in both space and time on both the Alaska and Yukon sides. However, the coverage of the local gravity data is inhomogeneous in this area. The terrestrial gravity is sparse in Alaska and spans a century in time. In contrast, the terrestrial gravity is relatively well distributed in Yukon but with data gaps. In this paper, various new satellite models along with the newly acquired airborne data will be incorporated to augment the middle-to-long wavelength geoid components. Initial tests show clear geoid improvements at the local GPS benchmarks in the Yukon area after crustal motion is accounted for. Similar approaches will be employed on the Alaska side for a better validation to determine a continuous vertical datum across the US and Canada.

  20. Gravitropic responses of the Avena coleoptile in space and on clinostats. II. Is reciprocity valid?

    NASA Technical Reports Server (NTRS)

    Johnsson, A.; Brown, A. H.; Chapman, D. K.; Heathcote, D.; Karlsson, C.

    1995-01-01

Experiments were undertaken to determine if the reciprocity rule is valid for gravitropic responses of oat coleoptiles in the acceleration region below 1 g. The rule predicts that the gravitropic response should be proportional to the product of the applied acceleration and the stimulation time. Seedlings were cultivated on 1 g centrifuges and transferred to test centrifuges to apply a transverse g-stimulation. Since the responses occurred in microgravity, the uncertainties about the validity of clinostat simulation of weightlessness were avoided. Plants at two stages of coleoptile development were tested. Plant responses were obtained using time-lapse video recordings that were analyzed after the flight. Stimulus intensities and durations were varied and ranged from 0.1 to 1.0 g and from 2 to 130 min, respectively. For threshold g-doses the reciprocity rule was obeyed. The threshold dose was of the order of 55 g s and 120 g s, respectively, for the two groups of plants investigated. Reciprocity was also studied for bending responses ranging from just above the detectable level to about 10 degrees. The validity of the rule could not be confirmed for these higher g-doses, chiefly because the data were more variable. It was investigated whether the uniformity of the overall response data increased when the gravitropic dose was defined as (g^m x t) with m-values different from unity. This was not the case, and the reciprocity concept is therefore valid also in the hypogravity region. The concept of gravitropic dose, the product of the transverse acceleration and the stimulation time, is also well defined in the acceleration region studied. With the same hardware, tests were done on earth where responses occurred on clinostats. The results did not contradict the reciprocity rule, but the scatter in the data was large.

  1. EEG-neurofeedback for optimising performance. II: creativity, the performing arts and ecological validity.

    PubMed

    Gruzelier, John H

    2014-07-01

    As a continuation of a review of evidence of the validity of cognitive/affective gains following neurofeedback in healthy participants, including correlations in support of the gains being mediated by feedback learning (Gruzelier, 2014a), the focus here is on the impact on creativity, especially in the performing arts including music, dance and acting. The majority of research involves alpha/theta (A/T), sensory-motor rhythm (SMR) and heart rate variability (HRV) protocols. There is evidence of reliable benefits from A/T training with advanced musicians especially for creative performance, and reliable benefits from both A/T and SMR training for novice music performance in adults and in a school study with children with impact on creativity, communication/presentation and technique. Making the SMR ratio training context ecologically relevant for actors enhanced creativity in stage performance, with added benefits from the more immersive training context. A/T and HRV training have benefitted dancers. The neurofeedback evidence adds to the rapidly accumulating validation of neurofeedback, while performing arts studies offer an opportunity for ecological validity in creativity research for both creative process and product. PMID:24239853

  2. An independent verification and validation of the Future Theater Level Model conceptual model

    SciTech Connect

    Hartley, D.S. III; Kruse, K.L.; Martellaro, A.J.; Packard, S.L.; Thomas, B. Jr.; Turley, V.K.

    1994-08-01

This report describes the methodology and results of independent verification and validation performed on a combat model in its design stage. The combat model is the Future Theater Level Model (FTLM), under development by The Joint Staff/J-8. J-8 has undertaken its development to provide an analysis tool that addresses the uncertainties of combat more directly than previous models and yields more rapid study results. The methodology adopted for this verification and validation consisted of document analyses. These included detailed examination of the FTLM design documents (at all stages of development), the FTLM Mission Needs Statement, and selected documentation for other theater-level combat models. These documents were compared to assess the FTLM as to its design stage, its purpose as an analytical combat model, and its capabilities as specified in the Mission Needs Statement. The conceptual design passed those tests. The recommendations included specific modifications as well as a recommendation for continued development. The methodology is significant because independent verification and validation have not previously been reported as being performed on a combat model in its design stage. The results are significant because The Joint Staff/J-8 will use the recommendations from this study in determining whether to proceed with development of the model.

  3. Modeling and experimental validation of unsteady impinging flames

    SciTech Connect

    Fernandes, E.C.; Leandro, R.E.

    2006-09-15

This study reports on a joint experimental and analytical study of premixed laminar flames impinging onto a plate at controlled temperature, with special emphasis on the study of periodically oscillating flames. Six types of flame structures were found, based on parametric variations of nozzle-to-plate distance (H), jet velocity (U), and equivalence ratio ($\phi$). They were classified as conical, envelope, disc, cool central core, ring, and side-lifted flames. Of these, the disc, cool central core, and envelope flames were found to oscillate periodically, with frequency and sound pressure levels increasing with Re and decreasing with nozzle-to-plate distance. The unsteady behavior of these flames was modeled using the formulation derived by Durox et al. [D. Durox, T. Schuller, S. Candel, Proc. Combust. Inst. 29 (2002) 69-75] for the cool central core flames, where the convergent burner acts as a Helmholtz resonator driven by an external pressure fluctuation dependent on a velocity fluctuation at the burner mouth after a convective time delay $\tau$. Based on this model, the present work shows that $\tau = [\mathrm{Re}[2j\tanh^{-1}((2\delta\omega+(1+N)j\omega^2-j\omega_0^2)/(2\delta\omega+(1-N)j\omega^2-j\omega_0^2))]+2\pi K]/\omega$, i.e., there is a relation between the oscillation frequency ($\omega$), the burner acoustic characteristics ($\omega_0$, $\delta$), and the time delay $\tau$, not explicitly dependent on N, the normalized flame-flow interaction coefficient [D. Durox, T. Schuller, S. Candel, Proc. Combust. Inst. 29 (2002) 69-75], because $\partial\tau/\partial N = 0$. Based on flame motion and noise analysis, K was found to physically represent the integer number of perturbations on the flame surface, or the number of coherent structures on the impinging jet. Additionally, assuming that $\tau=\beta H/U$, where H is the nozzle-to-plate distance and U is the mean jet velocity, it is shown that $\beta_{\mathrm{Disc}}=1.8$, $\beta_{\mathrm{CCC}}=1.03$, and $\beta_{\mathrm{Env}}=1.0$. A physical analysis of the proportionality constant $\beta$ showed that for the disc flames, $\tau$ corresponds to the ratio between H and the velocity of the coherent structures. In the case of the envelope and cool central core flames, $\tau$ corresponds to the ratio between H and the mean jet velocity. The predicted frequency fits the experimental data, supporting the validity of the mathematical modeling, empirical formulation, and assumptions made. (author)
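
    The relation quoted above ties the acoustic time delay to the burner parameters ($\omega_0$, $\delta$), the interaction coefficient N, and the number of coherent structures K; equating it to the convective delay $\tau = \beta H/U$ gives the predicted oscillation frequency. The sketch below does this numerically with invented burner and flow parameters, purely to show how the relation would be used; none of the numbers are taken from the paper.

    ```python
    import numpy as np

    # Hypothetical burner/flow parameters (illustrative only)
    omega0 = 2 * np.pi * 250.0   # burner Helmholtz resonance (rad/s)
    delta  = 40.0                # acoustic damping (1/s)
    N      = 0.8                 # flame-flow interaction coefficient
    K      = 1                   # number of coherent structures on the jet
    beta   = 1.0                 # envelope-flame value quoted in the abstract
    H, U   = 0.02, 2.0           # nozzle-to-plate distance (m), mean jet velocity (m/s)

    def tau_acoustic(omega):
        """Time delay implied by the burner acoustics (the relation quoted in the abstract)."""
        num = 2 * delta * omega + (1 + N) * 1j * omega**2 - 1j * omega0**2
        den = 2 * delta * omega + (1 - N) * 1j * omega**2 - 1j * omega0**2
        return (np.real(2j * np.arctanh(num / den)) + 2 * np.pi * K) / omega

    # Predicted frequency: where the acoustic delay matches the convective delay beta*H/U
    omega    = 2 * np.pi * np.linspace(50.0, 1000.0, 5000)
    mismatch = np.abs(tau_acoustic(omega) - beta * H / U)
    print(f"predicted oscillation frequency ~ {omega[np.argmin(mismatch)] / (2 * np.pi):.0f} Hz")
    ```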

  4. Predicting Pilot Error in Nextgen: Pilot Performance Modeling and Validation Efforts

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky

    2012-01-01

    We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and the relevance of the modeling and validation efforts to Next Gen technology and procedures.

  5. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    NASA Technical Reports Server (NTRS)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  6. Validation of the REBUS-3/RCT methodologies for EBR-II core-follow analysis

    SciTech Connect

    McKnight, R.D.

    1992-01-01

One of the many tasks to be completed at EBR-II/FCF (Fuel Cycle Facility) regarding fuel cycle closure for the Integral Fast Reactor (IFR) is to develop and install the systems to be used for fissile material accountancy and control. The IFR fuel cycle and pyrometallurgical process scheme determine the degree of actinide buildup in the reload fuel assemblies. Inventories of curium, americium and neptunium in the fuel will affect the radiation and thermal environmental conditions at the fuel fabrication stations, the chemistry of reprocessing, and the neutronic performance of the core. Thus, it is important that validated calculational tools be put in place for accurately determining isotopic mass and neutronic inputs to FCF for both operational and material control and accountancy purposes. The primary goal of this work is to validate the REBUS-3/RCT codes as tools which can adequately compute the burnup and isotopic distribution in binary- and ternary-fueled Mark-3, Mark-4, and Mark-5 subassemblies. 6 refs.

  7. MATSIM -The Development and Validation of a Numerical Voxel Model based on the MATROSHKA Phantom

    NASA Astrophysics Data System (ADS)

    Beck, Peter; Rollet, Sofia; Berger, Thomas; Bergmann, Robert; Hajek, Michael; Latocha, Marcin; Vana, Norbert; Zechner, Andrea; Reitz, Guenther

The AIT Austrian Institute of Technology coordinates the project MATSIM (MATROSHKA Simulation) in collaboration with the Vienna University of Technology and the German Aerospace Center. The aim of the project is to develop a voxel-based model of the MATROSHKA anthropomorphic torso used at the International Space Station (ISS) as a foundation to perform Monte Carlo high-energy particle transport simulations for different irradiation conditions. Funded by the Austrian Space Applications Programme (ASAP), MATSIM is a co-investigation with the European Space Agency (ESA) ELIPS project MATROSHKA, an international collaboration of more than 18 research institutes and space agencies from all over the world, under the science and project lead of the German Aerospace Center. The MATROSHKA facility is designed to determine the radiation exposure of an astronaut onboard the ISS and especially during an extravehicular activity. The numerical model developed in the frame of MATSIM is validated by reference measurements. In this report we give an overview of the model development and compare photon and neutron irradiations of the detector-equipped phantom torso with Monte Carlo simulations using FLUKA. Exposure to Co-60 photons was realized in the standard irradiation laboratory at Seibersdorf, while investigations with neutrons were performed at the thermal column of the Vienna TRIGA Mark-II reactor. The phantom was loaded with passive thermoluminescence dosimeters. In addition, first results of the calculated dose distribution within the torso are presented for a simulated exposure in low-Earth orbit.

  8. Modelling, Identification and Experimental Validation of a Hydraulic Manipulator Joint for Control

    E-print Network

    Papadopoulos, Evangelos

    Modelling, Identification and Experimental Validation of a Hydraulic Manipulator Joint for Control manipulator. Specialized hardware was designed and constructed for this purpose. The model was validated are manipulators with hydraulic actuators, due to their high force output to weight ratio, their inertance to fire

  9. The Validity of the Job Characteristics Model: A Review and Meta-Analysis.

    ERIC Educational Resources Information Center

    Fried, Yitzhak; Ferris, Gerald R.

    1987-01-01

    Assessed the validity of Hackman and Oldham's Job Characteristics Model by conducting a comprehensive review of nearly 200 relevant studies on the model as well as by applying meta-analytic procedures to much of the data. Available correlational results were reasonably valid and support the multidimensionality of job characteristics and their…

  10. Adaptation and Validation of an Agent Model of Functional State and Performance for Individuals

    E-print Network

    Treur, Jan

    functional state model to the individual and validation of the resulting model. First, human experiments have mostly qualitative theories from Psychology, but was not validated yet using human experiments been performed by taking a number of steps. First of all, an experiment with 31 human subjects has been

  11. Adaptive Cruise Control: Experimental Validation of Advanced Controllers on Scale-Model Cars

    E-print Network

    Ames, Aaron

    Adaptive Cruise Control: Experimental Validation of Advanced Controllers on Scale-Model Cars Aakar of correctness. In particular, safety constraints--maintaining a valid following distance from a lead car objectives in an optimal fashion. This methodology is demonstrated on scale-model cars, for which the CBF

  12. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.

  13. Knowledge Provenance: An Approach to Modeling and Maintaining The Evolution and Validity of Knowledge

    E-print Network

    Fox, Mark S.

    Knowledge Provenance: An Approach to Modeling and Maintaining The Evolution and Validity may be true or false, uncertain or dated, but no tool exists to discern the differences. In this paper to model and maintain the evolution and validity of web information/knowledge. The major questions

  14. Validation of the integration of CFD and SAS4A/SASSYS-1: Analysis of EBR-II shutdown heat removal test 17

    SciTech Connect

    Thomas, J. W.; Fanning, T. H.; Vilim, R.; Briggs, L. L.

    2012-07-01

    Recent analyses have demonstrated the need to model multidimensional phenomena, particularly thermal stratification in outlet plena, during safety analyses of loss-of-flow transients of certain liquid-metal cooled reactor designs. Therefore, Argonne's reactor systems safety code SAS4A/SASSYS-1 is being enhanced by integrating 3D computational fluid dynamics models of the plena. A validation exercise of the new tool is being performed by analyzing the protected loss-of-flow event demonstrated by the EBR-II Shutdown Heat Removal Test 17. In this analysis, the behavior of the coolant in the cold pool is modeled using the CFD code STAR-CCM+, while the remainder of the cooling system and the reactor core are modeled with SAS4A/SASSYS-1. This paper summarizes the code integration strategy and provides the predicted 3D temperature and velocity distributions inside the cold pool during SHRT-17. The results of the coupled analysis should be considered preliminary at this stage, as the exercise pointed to the need to improve the CFD model of the cold pool tank. (authors)
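
    The integration strategy described here amounts to an explicit co-simulation: at each coupling step the systems code passes inlet conditions to the CFD domain and receives updated pool-outlet conditions in return. The sketch below shows the shape of such a driver loop with stand-in solver functions; advance_systems_code and advance_cfd_plenum are hypothetical placeholders, not the SAS4A/SASSYS-1 or STAR-CCM+ interfaces, and the numbers are invented.

    ```python
    from dataclasses import dataclass

    @dataclass
    class CouplingState:
        inlet_temp: float    # coolant temperature entering the cold pool (K)
        inlet_flow: float    # mass flow into the cold pool (kg/s)
        outlet_temp: float   # pool-outlet temperature returned to the primary loop (K)

    def advance_systems_code(s: CouplingState, dt: float) -> CouplingState:
        """Stand-in 1D plant model: core/loop response to the pool-outlet temperature."""
        inlet_temp = s.inlet_temp + 0.02 * dt * (s.outlet_temp + 30.0 - s.inlet_temp)
        inlet_flow = s.inlet_flow * (1.0 - 0.03 * dt)          # pump coastdown
        return CouplingState(inlet_temp, inlet_flow, s.outlet_temp)

    def advance_cfd_plenum(s: CouplingState, dt: float) -> float:
        """Stand-in 3D pool model: relaxed outlet temperature mimicking stratification lag."""
        return s.outlet_temp + 0.1 * dt * (s.inlet_temp - s.outlet_temp)

    state, dt = CouplingState(inlet_temp=620.0, inlet_flow=450.0, outlet_temp=600.0), 1.0
    for _ in range(100):                                       # 100 s of the transient
        state = advance_systems_code(state, dt)                # loop/core side advances first
        state.outlet_temp = advance_cfd_plenum(state, dt)      # pool side returns boundary data

    print(f"after 100 s: outlet {state.outlet_temp:.1f} K, flow {state.inlet_flow:.1f} kg/s")
    ```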

  15. Assessment of the Validity of the Double Superhelix Model for Reconstituted High Density Lipoproteins

    PubMed Central

    Jones, Martin K.; Zhang, Lei; Catte, Andrea; Li, Ling; Oda, Michael N.; Ren, Gang; Segrest, Jere P.

    2010-01-01

For several decades, the standard model for high density lipoprotein (HDL) particles reconstituted from apolipoprotein A-I (apoA-I) and phospholipid (apoA-I/HDL) has been a discoidal particle ~100 Å in diameter and the thickness of a phospholipid bilayer. Recently, Wu et al. (Wu, Z., Gogonea, V., Lee, X., Wagner, M. A., Li, X. M., Huang, Y., Undurti, A., May, R. P., Haertlein, M., Moulin, M., Gutsche, I., Zaccai, G., Didonato, J. A., and Hazen, S. L. (2009) J. Biol. Chem. 284, 36605–36619) used small angle neutron scattering to develop a new model they termed double superhelix (DSH) apoA-I that is dramatically different from the standard model. Their model possesses an open helical shape that wraps around a prolate ellipsoidal type I hexagonal lyotropic liquid crystalline phase. Here, we used three independent approaches, molecular dynamics, EM tomography, and fluorescence resonance energy transfer spectroscopy (FRET), to assess the validity of the DSH model. (i) By using molecular dynamics, two different approaches, all-atom simulated annealing and coarse-grained simulation, show that initial ellipsoidal DSH particles rapidly collapse to discoidal bilayer structures. These results suggest that, compatible with current knowledge of lipid phase diagrams, apoA-I cannot stabilize hexagonal I phase particles of phospholipid. (ii) By using EM, two different approaches, negative stain and cryo-EM tomography, show that reconstituted apoA-I/HDL particles are discoidal in shape. (iii) By using FRET, reconstituted apoA-I/HDL particles show a 28–34 Å intermolecular separation between terminal domain residues 40 and 240, a distance that is incompatible with the dimensions of the DSH model. Therefore, we suggest that, although novel, the DSH model is energetically unfavorable and not likely to be correct. Rather, we conclude that all evidence supports the likelihood that reconstituted apoA-I/HDL particles, in general, are discoidal in shape. PMID:20974855

  16. Importance of Sea Ice for Validating Global Climate Models

    NASA Technical Reports Server (NTRS)

    Geiger, Cathleen A.

    1997-01-01

    Reproduction of current day large-scale physical features and processes is a critical test of global climate model performance. Without this benchmark, prognoses of future climate conditions are at best speculation. A fundamental question relevant to this issue is, which processes and observations are both robust and sensitive enough to be used for model validation and furthermore are they also indicators of the problem at hand? In the case of global climate, one of the problems at hand is to distinguish between anthropogenic and naturally occuring climate responses. The polar regions provide an excellent testing ground to examine this problem because few humans make their livelihood there, such that anthropogenic influences in the polar regions usually spawn from global redistribution of a source originating elsewhere. Concomitantly, polar regions are one of the few places where responses to climate are non-anthropogenic. Thus, if an anthropogenic effect has reached the polar regions (e.g. the case of upper atmospheric ozone sensitivity to CFCs), it has most likely had an impact globally but is more difficult to sort out from local effects in areas where anthropogenic activity is high. Within this context, sea ice has served as both a monitoring platform and sensitivity parameter of polar climate response since the time of Fridtjof Nansen. Sea ice resides in the polar regions at the air-sea interface such that changes in either the global atmospheric or oceanic circulation set up complex non-linear responses in sea ice which are uniquely determined. Sea ice currently covers a maximum of about 7% of the earth's surface but was completely absent during the Jurassic Period and far more extensive during the various ice ages. It is also geophysically very thin (typically <10 m in Arctic, <3 m in Antarctic) compared to the troposphere (roughly 10 km) and deep ocean (roughly 3 to 4 km). Because of these unique conditions, polar researchers regard sea ice as one of the more important features to monitor in terms of heat, mass, and momentum transfer between the air and sea and furthermore, the impact of such responses to global climate.

  17. Type-II seesaw mass models and baryon asymmetry

    E-print Network

    Amal Kr. Sarma; H. Zeen Devi; N. Nimai Singh

    2006-12-12

We compute and also compare the contributions of canonical and noncanonical mass terms towards baryon asymmetry by considering type-II seesaw mass models of neutrinos: Degenerate (3 varieties), Normal hierarchical and Inverted hierarchical (2 varieties). We have shown that for particular choices of the parameter $\gamma$ (the so-called discriminator) for different neutrino mass models, the baryon asymmetry is largely dominated by the canonical term. Within such a type-II seesaw scenario, we find the normal hierarchical mass model to be the most favourable choice of nature.

  18. Development of a Hydraulic Manipulator Servoactuator Model: Simulation and Experimental Validation

    E-print Network

    Papadopoulos, Evangelos

    Development of a Hydraulic Manipulator Servoactuator Model: Simulation and Experimental Validation Abstract In this paper, modelling and identification of a hydraulic servoactuator system is presented, leakage, and load dynamics. System parameters are identified based on a high-performance hydraulic

  19. Development and Validation of a Computer Model for Energy-Efficient Shaded Fenestration Design 

    E-print Network

    Oh, Kie Whan

    2000-01-01

The goal of this study is to develop and validate a computerized model for an energy-efficient fenestration system that can easily be incorporated into the architectural design process. This model is for the thermal analysis of a shaded...

  20. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of the country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquakes in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli events. The calculated return periods of the losses for the client market portfolio align with the repeatability of such catastrophe losses in the country. The validation process also included collaboration between Aon Benfield and its client in order to consider the insurance market penetration in Algeria, estimated at approximately 5%. Thus, we believe that the applied approach led to the production of an earthquake model for Algeria that is scientifically sound and reliable on the one side, and market- and client-oriented on the other.
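
    The damage-grade cost factors quoted above translate directly into a mean damage ratio once the probability of each damage grade is estimated for a building class. The sketch below performs that aggregation; the damage-grade probabilities and exposure values are invented, and only the 10/20/35/75/100% cost factors come from the abstract.

    ```python
    # Rebuilding cost factors per EMS-98 damage grade, as quoted in the abstract
    COST_FACTOR = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

    def mean_damage_ratio(grade_probabilities):
        """Expected repair cost as a fraction of replacement value for one building class."""
        return sum(COST_FACTOR[g] * p for g, p in grade_probabilities.items())

    # Hypothetical damage-grade probabilities for one scenario intensity and building class
    probs = {
        "residential": {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.10, 5: 0.05},   # remaining 10% undamaged
        "commercial":  {1: 0.35, 2: 0.25, 3: 0.15, 4: 0.05, 5: 0.02},
    }
    exposure = {"residential": 800e6, "commercial": 250e6}              # replacement values (illustrative)

    total_loss = sum(exposure[k] * mean_damage_ratio(probs[k]) for k in exposure)
    print(f"ground-up scenario loss: {total_loss / 1e6:.1f} M (currency units)")
    ```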

  1. TILTING SATURN. II. NUMERICAL MODEL Douglas P. Hamilton

    E-print Network

    Hamilton, Douglas P.

TILTING SATURN. II. NUMERICAL MODEL Douglas P. Hamilton Department of Astronomy, University@boulder.swri.edu Received 2003 December 30; accepted 2004 July 15 ABSTRACT We argue that the gas giants Jupiter and Saturn of Saturn's rotation to that of Neptune's orbit. Strong support for this model comes from (1) a near-match

  2. Tilting Saturn II. Numerical Model Douglas P. Hamilton

    E-print Network

    Hamilton, Douglas P.

    Tilting Saturn II. Numerical Model Douglas P. Hamilton Astronomy Department, University of Maryland and Saturn were both formed with their rotation axes nearly perpendicular to their orbital planes of Saturn's rotation to that of Neptune's orbit. Strong support for this model comes from i) a near match

  3. Aqueous Solution Vessel Thermal Model Development II

    SciTech Connect

    Buechler, Cynthia Eileen

    2015-10-28

The work presented in this report is a continuation of the work described in the May 2015 report, “Aqueous Solution Vessel Thermal Model Development”. This computational fluid dynamics (CFD) model aims to predict the temperature and bubble volume fraction in an aqueous solution of uranium. These values affect the reactivity of the fissile solution, so it is important to be able to calculate them and determine their effects on the reaction. Part A of this report describes some of the parameter comparisons performed on the CFD model using Fluent. Part B describes the coupling of the Fluent model with a Monte-Carlo N-Particle (MCNP) neutron transport model. The fuel tank geometry is the same as it was in the May 2015 report, annular with a thickness-to-height ratio of 0.16. An accelerator-driven neutron source provides the excitation for the reaction, and internal and external water cooling channels remove the heat. The model used in this work incorporates the Eulerian multiphase model with lift, wall lubrication, turbulent dispersion and turbulence interaction. The buoyancy-driven flow is modeled using the Boussinesq approximation, and the flow turbulence is determined using the k-ω Shear-Stress-Transport (SST) model. The dispersed turbulence multiphase model is employed to capture the multiphase turbulence effects.

  4. Transient PVT measurements and model predictions for vessel heat transfer. Part II.

    SciTech Connect

    Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.; Evans, Gregory Herbert; Rice, Steven F.

    2010-07-01

    Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.
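
    To make the control-volume idea concrete, the sketch below marches a single source vessel discharging into a receiver through an orifice, with a lumped wall heat-transfer term. It is a hedged illustration only: NETFLOW's actual correlations, tube friction and multi-branch networking are not reproduced, and all geometry, heat-transfer and discharge parameters are hypothetical.

```python
# Hedged sketch of the network-flow idea: each vessel is a single control volume of
# ideal gas and the connection is reduced to a compressible orifice, with a lumped
# wall heat-transfer term. NETFLOW's actual correlations, tube friction and branching
# are not reproduced; all parameter values here are hypothetical.

import numpy as np

R_S, GAMMA, CV, CP = 287.0, 1.4, 717.5, 1004.5   # dry air, J/(kg K)

def orifice_mdot(p_up, t_up, p_down, cd_a):
    """Mass flow through an orifice from upstream stagnation conditions (kg/s)."""
    r = p_down / p_up
    r_crit = (2.0 / (GAMMA + 1.0)) ** (GAMMA / (GAMMA - 1.0))
    if r <= r_crit:  # choked flow
        return (cd_a * p_up * np.sqrt(GAMMA / (R_S * t_up))
                * (2.0 / (GAMMA + 1.0)) ** ((GAMMA + 1.0) / (2.0 * (GAMMA - 1.0))))
    return cd_a * p_up * np.sqrt(2.0 * GAMMA / ((GAMMA - 1.0) * R_S * t_up)
                                 * (r ** (2.0 / GAMMA) - r ** ((GAMMA + 1.0) / GAMMA)))

def blowdown(p1, t1, v1, p2, t2, v2, cd_a, ha_wall, t_wall, dt=1e-4, t_end=2.0):
    """Explicit time march of source vessel 1 discharging into receiver vessel 2."""
    m1, m2 = p1 * v1 / (R_S * t1), p2 * v2 / (R_S * t2)
    for _ in range(int(t_end / dt)):
        p1, p2 = m1 * R_S * t1 / v1, m2 * R_S * t2 / v2
        if p1 <= p2:
            break
        mdot = orifice_mdot(p1, t1, p2, cd_a)
        # lumped energy balances: outflow removes enthalpy cp*T1, inflow adds it;
        # walls exchange heat through the coefficient-area product ha_wall
        dt1 = (-mdot * R_S * t1 + ha_wall * (t_wall - t1)) / (m1 * CV)
        dt2 = (mdot * CP * t1 - mdot * CV * t2 + ha_wall * (t_wall - t2)) / (m2 * CV)
        t1, t2 = t1 + dt * dt1, t2 + dt * dt2
        m1, m2 = m1 - dt * mdot, m2 + dt * mdot
    return {"P1_MPa": p1 / 1e6, "T1_K": t1, "P2_MPa": p2 / 1e6, "T2_K": t2}

if __name__ == "__main__":
    print(blowdown(p1=10e6, t1=300.0, v1=0.01, p2=0.1e6, t2=300.0, v2=0.05,
                   cd_a=2e-6, ha_wall=5.0, t_wall=300.0))
```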

  5. A Formal Algorithm for Verifying the Validity of Clustering Results Based on Model Checking

    PubMed Central

    Huang, Shaobin; Cheng, Yuan; Lang, Dapeng; Chi, Ronghua; Liu, Guofeng

    2014-01-01

    The limitations in general methods to evaluate clustering will remain difficult to overcome if verifying the clustering validity continues to be based on clustering results and evaluation index values. This study focuses on a clustering process to analyze crisp clustering validity. First, we define the properties that must be satisfied by valid clustering processes and model clustering processes based on program graphs and transition systems. We then recast the analysis of clustering validity as the problem of verifying whether the model of clustering processes satisfies the specified properties with model checking. That is, we try to build a bridge between clustering and model checking. Experiments on several datasets indicate the effectiveness and suitability of our algorithms. Compared with traditional evaluation indices, our formal method can not only indicate whether the clustering results are valid but, in the case the results are invalid, can also detect the objects that have led to the invalidity. PMID:24608823

  6. Some guidance on preparing validation plans for the DART Full System Models.

    SciTech Connect

    Gray, Genetha Anne; Hough, Patricia Diane; Hills, Richard Guy

    2009-03-01

    Planning is an important part of computational model verification and validation (V&V) and the requisite planning document is vital for effectively executing the plan. The document provides a means of communicating intent to the typically large group of people, from program management to analysts to test engineers, who must work together to complete the validation activities. This report provides guidelines for writing a validation plan. It describes the components of such a plan and includes important references and resources. While the initial target audience is the DART Full System Model teams in the nuclear weapons program, the guidelines are generally applicable to other modeling efforts. Our goal in writing this document is to provide a framework for consistency in validation plans across weapon systems, different types of models, and different scenarios. Specific details contained in any given validation plan will vary according to application requirements and available resources.

  7. Examining the Reliability and Validity of Clinician Ratings on the Five-Factor Model Score Sheet

    PubMed Central

    Few, Lauren R.; Miller, Joshua D.; Morse, Jennifer Q.; Yaggi, Kirsten E.; Reynolds, Sarah K.; Pilkonis, Paul A.

    2013-01-01

    Despite substantial research use, measures of the five-factor model (FFM) are infrequently used in clinical settings due, in part, to issues related to administration time and a reluctance to use self-report instruments. The current study examines the reliability and validity of the Five-Factor Model Score Sheet (FFMSS), which is a 30-item clinician rating form designed to assess the five domains and 30 facets of one conceptualization of the FFM. Studied in a sample of 130 outpatients, clinical raters demonstrated reasonably good interrater reliability across personality profiles and the domains manifested good internal consistency with the exception of Neuroticism. The FFMSS ratings also evinced expected relations with self-reported personality traits (e.g., FFMSS Extraversion and Schedule for Nonadaptive and Adaptive Personality Positive Temperament) and consensus-rated personality disorder symptoms (e.g., FFMSS Agreeableness and Narcissistic Personality Disorder). Finally, on average, the FFMSS domains were able to account for approximately 50% of the variance in domains of functioning (e.g., occupational, parental) and were even able to account for variance after controlling for Axis I and Axis II pathology. Given these findings, it is believed that the FFMSS holds promise for clinical use. PMID:20519735

  8. Crazy like a fox. Validity and ethics of animal models of human psychiatric disease.

    PubMed

    Rollin, Michael D H; Rollin, Bernard E

    2014-04-01

    Animal models of human disease play a central role in modern biomedical science. Developing animal models for human mental illness presents unique practical and philosophical challenges. In this article we argue that (1) existing animal models of psychiatric disease are not valid, (2) attempts to model syndromes are undermined by current nosology, (3) models of symptoms are rife with circular logic and anthropomorphism, (4) any model must make unjustified assumptions about subjective experience, and (5) any model deemed valid would be inherently unethical, for if an animal adequately models human subjective experience, then there is no morally relevant difference between that animal and a human. PMID:24534739

  9. The Texas Model for Content and Curricular Validity.

    ERIC Educational Resources Information Center

    Smisko, Ann; Twing, Jon S.; Denny, Patricia

    2000-01-01

    Describes the Texas test development process in detail, showing how each test development step is linked to the "Standards for Educational and Psychological Testing." The routine use of this process provides evidence of the content and curricular validity of the Texas Assessment of Academic Skills. (SLD)

  10. Predictive Bayesian neural network models of MHC class II peptide binding.

    PubMed

    Burden, Frank R; Winkler, David A

    2005-06-01

    We used Bayesian regularized neural networks to model data on the MHC class II-binding affinity of peptides. Training data consisted of sequences and binding data for nonamer (nine amino acid) peptides. Independent test data consisted of sequences and binding data for peptides of other lengths. We assumed that the MHC class II-binding activity of a peptide depends only on its highest ranked embedded nonamer and that reverse sequences of active nonamers are inactive. We also internally validated the models by using 30% of the training data in an internal test set. We obtained robust models, with near identical statistics for multiple training runs. We determined how predictive our models were using statistical tests and area under the Receiver Operating Characteristic (ROC) graphs (A(ROC)). Most models gave training A(ROC) values close to 1.0 and test set A(ROC) values >0.8. We also used both amino acid indicator variables (bin20) and property-based descriptors to generate models for MHC class II-binding of peptides. The property-based descriptors were more parsimonious than the indicator variable descriptors, making them applicable to larger peptides, and their design makes them able to generalize to unknown peptides outside of the training space. None of the external test data sets contained any of the nonamer sequences in the training sets. Consequently, the models attempted to predict the activity of truly unknown peptides not encountered in the training sets. Our models were well able to tackle the difficult problem of correctly predicting the MHC class II-binding activities of a majority of the test set peptides. Exceptions to the assumption that nonamer motif activities were invariant to the peptide in which they were embedded, together with the limited coverage of the test data, and the fuzziness of the classification procedure, are likely explanations for some misclassifications. PMID:15878832
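
    The sketch below illustrates the embedded-nonamer assumption and the ROC-area evaluation described above. The scoring function is a toy stand-in for the trained Bayesian-regularised network, and the peptides and labels are hypothetical, so the numbers are purely illustrative.

```python
# Illustrative sketch of the stated assumption: the class II-binding activity of a
# longer peptide is taken as the best score among its embedded nonamers.
# score_nonamer() is a hypothetical stand-in for the trained neural network.

def nonamers(peptide):
    """All embedded 9-mers of a peptide (peptide must be >= 9 residues)."""
    return [peptide[i:i + 9] for i in range(len(peptide) - 8)]

def score_nonamer(nonamer):
    """Toy stand-in for the trained model's predicted binding score."""
    favourable = set("FWYLIV")            # hypothetical anchor-residue preference
    return sum(aa in favourable for aa in nonamer) / 9.0

def score_peptide(peptide):
    """Peptide activity = highest ranked embedded nonamer (assumption in the abstract)."""
    return max(score_nonamer(n) for n in nonamers(peptide))

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

if __name__ == "__main__":
    peptides = ["GILGFVFTLTVAAA", "KKKKKKKKKKKK", "AYWLIVFGHKLMNP"]   # hypothetical
    labels = [1, 0, 1]                                               # binder / non-binder
    scores = [score_peptide(p) for p in peptides]
    print(scores, "AUC =", roc_auc(scores, labels))
```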

  11. Toward a model of drug relapse: An assessment of the validity of the reinstatement procedure

    PubMed Central

    Epstein, David H.; Preston, Kenzie L.; Stewart, Jane; Shaham, Yavin

    2006-01-01

    Background and Rationale The reinstatement model is a widely used animal model of relapse to drug addiction. However, the model’s validity is open to question. Objective We assess the reinstatement model in terms of criterion and construct validity. Research highlights and Conclusions We find that the reinstatement model has adequate criterion validity in the broad sense of the term, as evidenced by the fact that reinstatement in laboratory animals is induced by conditions reported to provoke relapse in humans. The model’s criterion validity in the narrower sense, as a medication screen, seems promising for relapse to heroin, nicotine, and alcohol. For relapse to cocaine, criterion validity has not yet been established, primarily because clinical studies have examined medications’ effects on reductions in cocaine intake rather than on relapse during abstinence. The model’s construct validity faces more substantial challenges and is yet to be established, but we argue that some of the criticisms of the model in this regard may have been overstated. PMID:17019567

  12. Rorschach score validation as a model for 21st-century personality assessment.

    PubMed

    Bornstein, Robert F

    2012-01-01

    Recent conceptual and methodological innovations have led to new strategies for documenting the construct validity of test scores, including performance-based test scores. These strategies have the potential to generate more definitive evidence regarding the validity of scores derived from the Rorschach Inkblot Method (RIM) and help resolve some long-standing controversies regarding the clinical utility of the Rorschach. After discussing the unique challenges in studying the Rorschach and why research in this area is important given current trends in scientific and applied psychology, I offer 3 overarching principles to maximize the construct validity of RIM scores, arguing that (a) the method that provides RIM validation measures plays a key role in generating outcome predictions; (b) RIM variables should be linked with findings from neighboring subfields; and (c) rigorous RIM score validation includes both process-focused and outcome-focused assessments. I describe a 4-step strategy for optimal RIM score derivation (formulating hypotheses, delineating process links, generating outcome predictions, and establishing limiting conditions); and a 4-component template for RIM score validation (establishing basic psychometrics, documenting outcome-focused validity, assessing process-focused validity, and integrating outcome- and process-focused validity data). The proposed framework not only has the potential to enhance the validity and utility of the RIM, but might ultimately enable the RIM to become a model of test score validation for 21st-century personality assessment. PMID:22176264

  13. Hysteresis modeling and experimental validation of a magnetorheological damper

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Xu; Chen, Peng; Qian, Li-Jun; Zhu, An-Ding

    2015-04-01

    In this paper, for modeling MR dampers, a normalized phenomenological model is derived from the phenomenological model by incorporating a "normalization" concept, and a restructured model, also incorporating the "normalization" concept, is proposed and realized. To demonstrate the approach, a multi-island genetic algorithm (GA) is employed to identify the parameters of the restructured model, the normalized phenomenological model, and the phenomenological model. The research results indicate that, compared with the phenomenological model and the normalized phenomenological model, (1) the restructured model not only effectively decreases the number of model parameters and reduces the complexity of the model, but also describes the nonlinear hysteretic behavior of MR dampers more accurately, and (2) the normalized phenomenological model improves model efficiency relative to the phenomenological model, although not as much as the restructured model.

  14. Validity of jitter measures in non-quasi-periodic voices. Part II: the effect of noise.

    PubMed

    Manfredi, Claudia; Giordano, Andrea; Schoentgen, Jean; Fraj, Samia; Bocchi, Leonardo; Dejonckere, Philippe

    2011-07-01

    In this paper the effect of noise on both perceptual and automatic evaluation of the glottal cycle length in irregular voice signals (sustained vowels) is studied. The reliability of four tools for voice analysis (MDVP, Praat, AMPEX, and BioVoice) is compared to visual inspection made by trained clinicians using two measures of voice signal irregularity: the jitter (J) and the coefficient of variation of the fundamental frequency (F0CV). The purpose is also to test up to what level of irregularity trained raters are capable of visually determining the glottal cycle length, as compared to dedicated software tools. For perfect control of the amount of jitter and noise introduced, the data consist of synthesized sustained vowels corrupted by increasing jitter and noise. Both jitter and noise can be varied to the desired extent according to built-in functions. All the tools give nearly reliable measurements up to 15% jitter for low or moderate noise, while only a few of them are reliable for higher jitter and noise levels and would thus be suited for perturbation measures in strongly irregular voice signals. As shown in Part I of this work, for low noise levels the results obtained by visual inspection from expert raters are comparable or better than those obtained with the tools presented here, at the expense of a larger amount of time devoted to searching visually for the glottal cycle lengths in the signal waveform. In this paper it is shown that results rapidly deteriorate with increasing noise. Hence, the use of a robust tool for voice analysis can give valid support to clinicians in terms of reliability, reproducibility of results, and time savings. PMID:21609247
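
    For reference, the sketch below computes the two irregularity measures named above from a sequence of glottal cycle lengths. The exact definitions vary between MDVP, Praat, AMPEX and BioVoice; the classical local jitter and the F0 coefficient of variation are used here as a hedged illustration on synthetic periods.

```python
# Minimal sketch of the two irregularity measures compared in the paper, computed
# from a sequence of glottal cycle lengths. Definitions differ slightly between the
# cited tools; the classical "local jitter" and F0 coefficient of variation are used.

import numpy as np

def local_jitter_percent(periods_s):
    """Mean absolute difference of consecutive periods over the mean period, in %."""
    periods = np.asarray(periods_s, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def f0_cv_percent(periods_s):
    """Coefficient of variation of the fundamental frequency, in %."""
    f0 = 1.0 / np.asarray(periods_s, dtype=float)
    return 100.0 * np.std(f0) / np.mean(f0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t0 = 1.0 / 120.0                                         # 120 Hz fundamental period
    periods = t0 * (1.0 + 0.05 * rng.standard_normal(200))   # ~5% synthetic perturbation
    print(f"jitter = {local_jitter_percent(periods):.2f} %")
    print(f"F0CV   = {f0_cv_percent(periods):.2f} %")
```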

  15. On the validity of 3D polymer gel dosimetry: II. Physico-chemical effects

    NASA Astrophysics Data System (ADS)

    Vandecasteele, Jan; De Deene, Yves

    2013-01-01

    This study quantifies some major physico-chemical factors that influence the validity of MRI (PAGAT) polymer gel dosimetry: temperature history (pre-, during and post-irradiation), oxygen exposure (post-irradiation) and volumetric effects (experiment with phantom in which a small test tube is inserted). Present results confirm the effects of thermal history prior to irradiation. By exposing a polymer gel sample to a linear temperature gradient of ˜2.8 °C cm-1 and following the dose deviation as a function of post-irradiation time new insights into temporal variations were added. A clear influence of the temperature treatment on the measured dose distribution is seen during the first hours post-irradiation (resulting in dose deviations up to 12%). This effect diminishes to 5% after 54 h post-irradiation. Imposing a temperature offset (maximum 6 °C for 3 h) during and following irradiation on a series of calibration phantoms results in only a small dose deviation of maximum 4%. Surprisingly, oxygen diffusing in a gel dosimeter up to 48 h post-irradiation was shown to have no effect. Volumetric effects were studied by comparing the dose distribution in a homogeneous phantom compared to the dose distribution in a phantom in which a small test tube was inserted. This study showed that the dose measured inside the test tube was closer to the ion chamber measurement in comparison to the reference phantom without test tube by almost 7%. It is demonstrated that physico-chemical effects are not the major causes for the dose discrepancies encountered in the reproducibility study discussed in the concurrent paper (Vandecasteele and De Deene 2013a Phys. Med. Biol. 58 19-42). However, it is concluded that these physico-chemical effects are important factors that should be addressed to further improve the dosimetric accuracy of 3D MRI polymer gel dosimetry. Both authors contributed equally to this study.

  16. Nyala and Bushbuck II: A Harvesting Model.

    ERIC Educational Resources Information Center

    Fay, Temple H.; Greeff, Johanna C.

    1999-01-01

    Adds a cropping or harvesting term to the animal overpopulation model developed in Part I of this article. Investigates various harvesting strategies that might suggest a solution to the overpopulation problem without actually culling any animals. (ASK)
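
    As a generic illustration of adding a harvesting term to a population model, the sketch below integrates logistic growth with a proportional cropping rate. The equation form and all parameter values are hypothetical and are not taken from the article's nyala and bushbuck model.

```python
# Generic sketch of a harvested logistic model, dN/dt = r*N*(1 - N/K) - h*N, where h
# is a proportional cropping rate. This illustrates the idea of a harvesting term; it
# is not the authors' model, and all parameter values are hypothetical.

import numpy as np

def simulate(r, K, h, n0, years, dt=0.01):
    """Euler integration of logistic growth with proportional harvesting."""
    steps = int(years / dt)
    n = np.empty(steps + 1)
    n[0] = n0
    for i in range(steps):
        n[i + 1] = n[i] + dt * (r * n[i] * (1.0 - n[i] / K) - h * n[i])
    return n

if __name__ == "__main__":
    # Hypothetical values: growth rate 0.3/yr, carrying capacity 1000, 15%/yr cropping
    trajectory = simulate(r=0.3, K=1000.0, h=0.15, n0=200.0, years=40)
    print(f"equilibrium ~ {trajectory[-1]:.0f} animals "
          f"(analytic K*(1 - h/r) = {1000.0 * (1 - 0.15 / 0.3):.0f})")
```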

  17. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  18. The Sandia MEMS Passive Shock Sensor : FY08 testing for functionality, model validation, and technology readiness.

    SciTech Connect

    Walraven, Jeremy Allen; Blecke, Jill; Baker, Michael Sean; Clemens, Rebecca C.; Mitchell, John Anthony; Brake, Matthew Robert; Epp, David S.; Wittwer, Jonathan W.

    2008-10-01

    This report summarizes the functional, model validation, and technology readiness testing of the Sandia MEMS Passive Shock Sensor in FY08. Functional testing of a large number of revision 4 parts showed robust and consistent performance. Model validation testing helped tune the models to match data well and identified several areas for future investigation related to high frequency sensitivity and thermal effects. Finally, technology readiness testing demonstrated the integrated elements of the sensor under realistic environments.

  19. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force--7.

    PubMed

    Eddy, David M; Hollingworth, William; Caro, J Jaime; Tsevat, Joel; McDonald, Kathryn M; Wong, John B

    2012-01-01

    Trust and confidence are critical to the success of health care models. There are two main methods for achieving this: transparency (people can see how the model is built) and validation (how well the model reproduces reality). This report describes recommendations for achieving transparency and validation developed by a taskforce appointed by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making. Recommendations were developed iteratively by the authors. A nontechnical description--including model type, intended applications, funding sources, structure, intended uses, inputs, outputs, other components that determine function, and their relationships, data sources, validation methods, results, and limitations--should be made available to anyone. Technical documentation, written in sufficient detail to enable a reader with necessary expertise to evaluate the model and potentially reproduce it, should be made available openly or under agreements that protect intellectual property, at the discretion of the modelers. Validation involves face validity (wherein experts evaluate model structure, data sources, assumptions, and results), verification or internal validity (check accuracy of coding), cross validity (comparison of results with other models analyzing the same problem), external validity (comparing model results with real-world results), and predictive validity (comparing model results with prospectively observed events). The last two are the strongest form of validation. Each section of this article contains a number of recommendations that were iterated among the authors, as well as among the wider modeling taskforce, jointly set up by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making. PMID:22999134

  20. Distinguishing 'new' from 'old' organic carbon in reclaimed coal mine sites using thermogravimetry: II. Field validation

    SciTech Connect

    Maharaj, S.; Barton, C.D.; Karathanasis, T.A.D.; Rowe, H.D.; Rimmer, S.M.

    2007-04-15

    Thermogravimetry was used under laboratory conditions to differentiate 'new' and 'old' organic carbon (C) by using grass litter, coal, and limestone to represent the different C fractions. Thermogravimetric and derivative thermogravimetry curves showed pyrolysis peaks at distinctively different temperatures, with the peak for litter occurring at 270 to 395 °C, for coal at 415 to 520 °C, and for limestone at 700 to 785 °C. To validate this method in a field setting, we studied four reforested coal mine sites in Kentucky representing a chronosequence since reclamation: 0 and 2 years located at Bent Mountain and 3 and 8 years located at the Starfire mine. A nonmined mature (approximately 80 years old) stand at Robinson Forest, Kentucky, was selected as a reference location. Results indicated a general peak increase in the 270 to 395 °C region with increased time, signifying an increase in the 'new' organic matter (OM) fraction. For the Bent Mountain site, the OM fraction increased from 0.03 to 0.095% between years 0 and 2, whereas the Starfire site showed an increase from 0.095 to 1.47% between years 3 and 8. This equates to a C sequestration rate of 2.92 Mg ha⁻¹ yr⁻¹ for 'new' OM in the upper 10-cm layer during the 8 years of reclamation on eastern Kentucky reclaimed coal mine sites. Results suggest that stable isotopes and elemental data can be used as proxy tools for qualifying soil organic C (SOC) changes over time on the reclaimed coal mine sites but cannot be used to determine the exact SOC accumulation rate. However, results suggested that the thermogravimetric and derivative thermogravimetry methods can be used to quantify SOC accumulation and have the potential to be a more reliable, cost-effective, and rapid means to determine the new organic C fraction in mixed geological material, especially in areas dominated by coal and carbonate materials.
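
    The sketch below illustrates the partitioning idea: integrate a derivative thermogravimetry (DTG) signal inside the temperature windows reported for litter, coal and limestone to apportion total mass loss among 'new' organic matter, coal and carbonate. The DTG curve used here is synthetic and only the window bounds come from the abstract; this is not the authors' processing code.

```python
# Hedged sketch of window-based DTG partitioning. The temperature windows are those
# quoted in the abstract; the DTG curve below is synthetic and for illustration only.

import numpy as np

WINDOWS = {"new OM": (270.0, 395.0), "coal": (415.0, 520.0), "carbonate": (700.0, 785.0)}

def fraction_in_windows(temps_c, dtg):
    """Mass-loss fraction attributed to each window (DTG in mass loss per deg C)."""
    total = np.trapz(dtg, temps_c)
    out = {}
    for name, (lo, hi) in WINDOWS.items():
        mask = (temps_c >= lo) & (temps_c <= hi)
        out[name] = np.trapz(dtg[mask], temps_c[mask]) / total
    return out

if __name__ == "__main__":
    t = np.linspace(100.0, 900.0, 1601)
    gauss = lambda mu, sig, amp: amp * np.exp(-0.5 * ((t - mu) / sig) ** 2)
    dtg = gauss(330, 25, 1.0) + gauss(470, 25, 0.6) + gauss(740, 20, 0.8)  # synthetic peaks
    for name, frac in fraction_in_windows(t, dtg).items():
        print(f"{name}: {100 * frac:.1f} % of total mass loss")
```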

  1. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  2. Validating a model of colon colouration using an Evolution Strategy with adaptive approximations

    E-print Network

    Rowe, Jon

    ... light interaction with the tissue, is aimed at correlating the histology of the colon and its colours ... is characterised by the rearrangement of underlying histology. Once developed, the model has to be validated for correctness. The validation has been implemented as an optimisation problem, and evolutionary techniques have ...

  3. Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2009-01-01

    In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…

  4. Understanding Student Teachers' Behavioural Intention to Use Technology: Technology Acceptance Model (TAM) Validation and Testing

    ERIC Educational Resources Information Center

    Wong, Kung-Teck; Osman, Rosma bt; Goh, Pauline Swee Choo; Rahmat, Mohd Khairezan

    2013-01-01

    This study sets out to validate and test the Technology Acceptance Model (TAM) in the context of Malaysian student teachers' integration of their technology in teaching and learning. To establish factorial validity, data collected from 302 respondents were tested against the TAM using confirmatory factor analysis (CFA), and structural equation…

  5. An update to experimental models for validating computer technology Marvin V. Zelkowitz

    E-print Network

    Zelkowitz, Marvin V.

    Keywords: Experimentation; Technology evaluation. Abstract: In 1998 a survey was published on the extent to which software engineering papers validate the claims made in those papers. The survey looked at publications ...

  6. Validation of mechanical models for reinforced concrete structures: Presentation of the French project ``Benchmark des Poutres de la Rance''

    NASA Astrophysics Data System (ADS)

    L'Hostis, V.; Brunet, C.; Poupard, O.; Petre-Lazar, I.

    2006-11-01

    Several ageing models are available for the prediction of the mechanical consequences of rebar corrosion. They are used for service life prediction of reinforced concrete structures. Concerning corrosion diagnosis of reinforced concrete, some Non Destructive Testing (NDT) tools have been developed, and have been in use for some years. However, these developments require validation on existing concrete structures. The French project “Benchmark des Poutres de la Rance” contributes to this aspect. It has two main objectives: (i) validation of mechanical models to estimate the influence of rebar corrosion on the load bearing capacity of a structure, (ii) qualification of the use of the NDT results to collect information on steel corrosion within reinforced-concrete structures. Ten French and European institutions from both academic research laboratories and industrial companies contributed during the years 2004 and 2005. This paper presents the project, which was divided into several work packages: (i) the reinforced concrete beams were characterized with non-destructive testing tools, (ii) the mechanical behaviour of the beams was experimentally tested, (iii) complementary laboratory analyses were performed and (iv) finally numerical simulation results were compared to the experimental results obtained with the mechanical tests.

  7. A free wake vortex lattice model for vertical axis wind turbines: Modeling, verification and validation

    NASA Astrophysics Data System (ADS)

    Meng, Fanzhong; Schwarze, Holger; Vorpahl, Fabian; Strobel, Michael

    2014-12-01

    Since the 1970s, several research activities have been carried out on developing aerodynamic models for Vertical Axis Wind Turbines (VAWTs). In order to design large VAWTs of MW scale, more accurate aerodynamic calculation is required to predict their aero-elastic behaviour. In this paper, a 3D free wake vortex lattice model for VAWTs is developed, verified and validated. Comparisons to the experimental results show that the 3D free wake vortex lattice model developed is capable of accurately predicting the general performance and the instantaneous aerodynamic forces on the blades. The comparison between the momentum method and the vortex lattice model shows that free wake vortex models are needed for detailed load calculations and for calculating highly loaded rotors.

  8. A multi-source satellite data approach for modelling Lake Turkana water level: calibration and validation using satellite altimetry data

    NASA Astrophysics Data System (ADS)

    Velpuri, N. M.; Senay, G. B.; Asante, K. O.

    2012-01-01

    Lake Turkana is one of the largest desert lakes in the world and is characterized by high degrees of inter- and intra-annual fluctuations. The hydrology and water balance of this lake have not been well understood due to its remote location and unavailability of reliable ground truth datasets. Managing surface water resources is a great challenge in areas where in-situ data are either limited or unavailable. In this study, multi-source satellite-driven data such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, and a digital elevation dataset were used to model Lake Turkana water levels from 1998 to 2009. Due to the unavailability of reliable lake level data, an approach is presented to calibrate and validate the water balance model of Lake Turkana using a composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data. Model validation results showed that the satellite-driven water balance model can satisfactorily capture the patterns and seasonal variations of the Lake Turkana water level fluctuations with a Pearson's correlation coefficient of 0.90 and a Nash-Sutcliffe Coefficient of Efficiency (NSCE) of 0.80 during the validation period (2004-2009). Model error estimates were within 10% of the natural variability of the lake. Our analysis indicated that fluctuations in Lake Turkana water levels are mainly driven by lake inflows and over-the-lake evaporation. Over-the-lake rainfall contributes only up to 30% of lake evaporative demand. During the modelling time period, Lake Turkana showed seasonal variations of 1-2 m. The lake level fluctuated in the range up to 4 m between the years 1998-2009. This study demonstrated the usefulness of satellite altimetry data to calibrate and validate the satellite-driven hydrological model for Lake Turkana without using any in-situ data. Furthermore, for Lake Turkana, we identified and outlined opportunities and challenges of using a calibrated satellite-driven water balance model for (i) quantitative assessment of the impact of basin developmental activities on lake levels and for (ii) forecasting lake level changes and their impact on fisheries. From this study, we suggest that globally available satellite altimetry data provide a unique opportunity for calibration and validation of hydrologic models in ungauged basins.
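
    A much-simplified sketch of the water balance bookkeeping and of the Nash-Sutcliffe efficiency quoted above follows. It assumes a constant lake surface area purely for brevity (the actual model derives level-area-volume relations from the digital elevation dataset), and all input series are synthetic placeholders rather than the satellite-driven forcings.

```python
# Simplified lake water balance and Nash-Sutcliffe Coefficient of Efficiency (NSCE).
# Constant lake area and synthetic monthly forcings are illustrative assumptions only.

import numpy as np

def simulate_levels(level0_m, inflow_m3, rain_m, evap_m, area_m2):
    """Monthly level update: dL = inflow/area + rainfall depth - evaporation depth."""
    levels = [level0_m]
    for q, p, e in zip(inflow_m3, rain_m, evap_m):
        levels.append(levels[-1] + q / area_m2 + p - e)
    return np.array(levels[1:])

def nsce(observed, simulated):
    """Nash-Sutcliffe Coefficient of Efficiency."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    months = 72
    area = 6.75e9                                     # hypothetical constant area (m^2)
    inflow = 1.5e9 * (1.0 + 0.6 * np.sin(2 * np.pi * np.arange(months) / 12))  # m^3/month
    rain = np.full(months, 0.02)                      # m/month
    evap = np.full(months, 0.19)                      # m/month
    sim = simulate_levels(362.0, inflow, rain, evap, area)
    obs = sim + 0.1 * rng.standard_normal(months)     # stand-in for altimetry observations
    print(f"NSCE = {nsce(obs, sim):.2f}, correlation = {np.corrcoef(obs, sim)[0, 1]:.2f}")
```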

  9. A multi-source satellite data approach for modelling Lake Turkana water level: Calibration and validation using satellite altimetry data

    USGS Publications Warehouse

    Velpuri, N.M.; Senay, G.B.; Asante, K.O.

    2012-01-01

    Lake Turkana is one of the largest desert lakes in the world and is characterized by high degrees of inter- and intra-annual fluctuations. The hydrology and water balance of this lake have not been well understood due to its remote location and unavailability of reliable ground truth datasets. Managing surface water resources is a great challenge in areas where in-situ data are either limited or unavailable. In this study, multi-source satellite-driven data such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, and a digital elevation dataset were used to model Lake Turkana water levels from 1998 to 2009. Due to the unavailability of reliable lake level data, an approach is presented to calibrate and validate the water balance model of Lake Turkana using a composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data. Model validation results showed that the satellite-driven water balance model can satisfactorily capture the patterns and seasonal variations of the Lake Turkana water level fluctuations with a Pearson's correlation coefficient of 0.90 and a Nash-Sutcliffe Coefficient of Efficiency (NSCE) of 0.80 during the validation period (2004-2009). Model error estimates were within 10% of the natural variability of the lake. Our analysis indicated that fluctuations in Lake Turkana water levels are mainly driven by lake inflows and over-the-lake evaporation. Over-the-lake rainfall contributes only up to 30% of lake evaporative demand. During the modelling time period, Lake Turkana showed seasonal variations of 1-2 m. The lake level fluctuated in the range up to 4 m between the years 1998-2009. This study demonstrated the usefulness of satellite altimetry data to calibrate and validate the satellite-driven hydrological model for Lake Turkana without using any in-situ data. Furthermore, for Lake Turkana, we identified and outlined opportunities and challenges of using a calibrated satellite-driven water balance model for (i) quantitative assessment of the impact of basin developmental activities on lake levels and for (ii) forecasting lake level changes and their impact on fisheries. From this study, we suggest that globally available satellite altimetry data provide a unique opportunity for calibration and validation of hydrologic models in ungauged basins. © Author(s) 2012.

  10. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    NASA Technical Reports Server (NTRS)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15-18 and NOAA-19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: (1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds; (2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfall measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This is verified in the discrepancy distribution diagrams, where the 183-WSL performs better during the warm months, while in wintertime the discrepancies with radar measurements tend toward maximum values. A stable behavior of the 183-WSL algorithm is demonstrated over the whole study period, with an overall overestimation for rain rate intensities lower than 1 millimeter per hour. This threshold is crucial especially in wintertime, where the low precipitation regime is difficult to classify.
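
    The dichotomous scores quoted above come from a 2x2 rain/no-rain contingency table between retrieval and radar. The sketch below shows how POD, FAR, the Hanssen-Kuipers (HK) discriminant and an RMSE would be computed; the counts and rain-rate pairs are hypothetical, not the NIMROD comparison data.

```python
# Categorical verification scores from a 2x2 rain / no-rain contingency table, plus a
# simple RMSE for rain-rate accuracy. All numbers below are hypothetical placeholders.

def categorical_scores(hits, false_alarms, misses, correct_negatives):
    pod = hits / (hits + misses)                              # probability of detection
    far = false_alarms / (hits + false_alarms)                # false alarm ratio
    pofd = false_alarms / (false_alarms + correct_negatives)  # probability of false detection
    hk = pod - pofd                                           # Hanssen-Kuipers discriminant
    return pod, far, hk

def rmse(retrieved_mm_h, radar_mm_h):
    n = len(radar_mm_h)
    return (sum((r - o) ** 2 for r, o in zip(retrieved_mm_h, radar_mm_h)) / n) ** 0.5

if __name__ == "__main__":
    pod, far, hk = categorical_scores(hits=8200, false_alarms=4400, misses=1900,
                                      correct_negatives=85000)
    print(f"POD={pod:.2f}  FAR={far:.2f}  HK={hk:.2f}")
    print(f"RMSE={rmse([1.2, 0.4, 3.1], [0.8, 0.5, 2.6]):.2f} mm/h")
```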

  11. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-7.

    PubMed

    Eddy, David M; Hollingworth, William; Caro, J Jaime; Tsevat, Joel; McDonald, Kathryn M; Wong, John B

    2012-01-01

    Trust and confidence are critical to the success of health care models. There are two main methods for achieving this: transparency (people can see how the model is built) and validation (how well it reproduces reality). This report describes recommendations for achieving transparency and validation, developed by a task force appointed by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the Society for Medical Decision Making (SMDM). Recommendations were developed iteratively by the authors. A nontechnical description should be made available to anyone-including model type and intended applications; funding sources; structure; inputs, outputs, other components that determine function, and their relationships; data sources; validation methods and results; and limitations. Technical documentation, written in sufficient detail to enable a reader with necessary expertise to evaluate the model and potentially reproduce it, should be made available openly or under agreements that protect intellectual property, at the discretion of the modelers. Validation involves face validity (wherein experts evaluate model structure, data sources, assumptions, and results), verification or internal validity (check accuracy of coding), cross validity (comparison of results with other models analyzing the same problem), external validity (comparing model results to real-world results), and predictive validity (comparing model results with prospectively observed events). The last two are the strongest form of validation. Each section of this paper contains a number of recommendations that were iterated among the authors, as well as the wider modeling task force jointly set up by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making. PMID:22990088

  12. Implementing the Ecosystem Model: Phase II.

    ERIC Educational Resources Information Center

    Schuh, John H.

    1978-01-01

    The ecosystem model was used to assess student perceptions of certain aspects of residential life at a large university. Over 70 percent of questionnaires were returned. From the data, aspects of the environment were changed according to student recommendations. A great need for better communication of information was found. (RPG)

  13. Instrumental Conditioning II: Modeling Action Selection

    E-print Network

    Niv, Yael

    [Lecture-slide extraction residue; recoverable topics: the credit assignment problem, reinforcement learning (RL) algorithms, prediction errors, and modeling instrumental conditioning and action selection.]

  14. Validation of winter chill models using historic records of walnut phenology Eike Luedeling a,

    E-print Network

    Zhang, Minghua

    Validation of winter chill models using historic records of walnut phenology. Eike Luedeling ... that evolved in temperate or cool subtropical climates, such as peaches, cherries, apples and walnuts, because ... (Chilling Hours, Utah Model, Positive Utah Model and Dynamic Model) to explain walnut phenology in California ...

  15. Oxygen and seizure dynamics: II. Computational modeling.

    PubMed

    Wei, Yina; Ullah, Ghanim; Ingram, Justin; Schiff, Steven J

    2014-07-15

    Electrophysiological recordings show intense neuronal firing during epileptic seizures leading to enhanced energy consumption. However, the relationship between oxygen metabolism and seizure patterns has not been well studied. Recent studies have developed fast and quantitative techniques to measure oxygen microdomain concentration during seizure events. In this article, we develop a biophysical model that accounts for these experimental observations. The model is an extension of the Hodgkin-Huxley formalism and includes the neuronal microenvironment dynamics of sodium, potassium, and oxygen concentrations. Our model accounts for metabolic energy consumption during and following seizure events. We can further account for the experimental observation that hypoxia can induce seizures, with seizures occurring only within a narrow range of tissue oxygen pressure. We also reproduce the interplay between excitatory and inhibitory neurons seen in experiments, accounting for the different oxygen levels observed during seizures in excitatory vs. inhibitory cell layers. Our findings offer a more comprehensive understanding of the complex interrelationship among seizures, ion dynamics, and energy metabolism. PMID:24671540

  16. Description of a Website Resource for Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Smith, Brian R.; Huang, George P.

    2010-01-01

    The activities of the Turbulence Model Benchmarking Working Group - which is a subcommittee of the American Institute of Aeronautics and Astronautics (AIAA) Fluid Dynamics Technical Committee - are described. The group's main purpose is to establish a web-based repository for Reynolds-averaged Navier-Stokes turbulence model documentation, including verification and validation cases. This turbulence modeling resource has been established based on feedback from a survey on what is needed to achieve consistency and repeatability in turbulence model implementation and usage, and to document and disseminate information on new turbulence models or improvements to existing models. The various components of the website are described in detail: description of turbulence models, turbulence model readiness rating system, verification cases, validation cases, validation databases, and turbulence manufactured solutions. An outline of future plans of the working group is also provided.

  17. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, Mohammed Omair

    2012-01-01

    Our simulation was able to mimic the results of 30 tests on the actual hardware. This shows that simulations have the potential to enable early design validation - well before actual hardware exists. Although simulations focused around data processing procedures at subsystem and device level, they can also be applied to system level analysis to simulate mission scenarios and consumable tracking (e.g. power, propellant, etc.). Simulation engine plug-in developments are continually improving the product, but handling time for time-sensitive operations (like those of the remote engineering unit and bus controller) can be cumbersome.

  18. Principle and validation of modified hysteretic models for magnetorheological dampers

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Xu; Chen, Peng; Qian, Li-Jun

    2015-08-01

    Magnetorheological (MR) dampers, semi-active actuators for vibration and shock control systems, have attracted increasing attention during the past two decades. However, it is difficult to establish a precise mathematical model for the MR dampers and their control systems due to their intrinsic strong nonlinear hysteretic behavior. A phenomenological model based on the Bouc-Wen model can be used to effectively describe the nonlinear hysteretic behavior of the MR dampers, but the structure of the phenomenological model is complex and the Bouc-Wen model is functionally redundant. In this paper, based on the phenomenological model, (1) a normalized phenomenological model is derived through incorporating a ‘normalization’ concept, and (2) a restructured model, also incorporating the ‘normalization’ concept, is proposed and realized. In order to demonstrate this, a multi-island genetic algorithm (GA) is employed to identify the parameters of the restructured model, the normalized phenomenological model, and the phenomenological model. The performance of the three models in describing and predicting the damping force characteristics of the MR dampers is compared and analyzed using the identified parameters. The research results indicate that, as compared with the phenomenological model and the normalized phenomenological model, (1) the restructured model can not only effectively decrease the number of model parameters and reduce the complexity of the model, but can also describe the nonlinear hysteretic behavior of MR dampers more accurately, and (2) the meanings of several model parameters of the restructured model are clearer and the initial ranges of the model parameters are more explicit, which is of significance for parameter identification.
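
    For orientation, the sketch below integrates a simplified Bouc-Wen damper, the hysteretic element underlying the phenomenological models discussed above. It omits the internal degree of freedom of the full phenomenological model, does not reproduce the paper's normalized or restructured forms or the GA identification, and uses hypothetical parameter values.

```python
# Simplified Bouc-Wen MR damper sketch: evolutionary variable z plus viscous and
# stiffness terms, F = alpha*z + c0*xdot + k0*x. Parameter values are hypothetical.

import numpy as np

def bouc_wen_force(x, dt, alpha=900.0, c0=50.0, k0=25.0,
                   A=120.0, beta=300.0, gamma=300.0, n=2.0):
    """Damper force for a displacement history x (m) sampled every dt seconds."""
    xdot = np.gradient(x, dt)
    z = np.zeros_like(x)                 # evolutionary (hysteretic) variable
    for i in range(len(x) - 1):
        zdot = (A * xdot[i]
                - beta * abs(xdot[i]) * z[i] * abs(z[i]) ** (n - 1)
                - gamma * xdot[i] * abs(z[i]) ** n)
        z[i + 1] = z[i] + dt * zdot      # explicit Euler update of dz/dt
    return alpha * z + c0 * xdot + k0 * x

if __name__ == "__main__":
    dt, f = 1e-3, 1.5                                   # 1 kHz sampling, 1.5 Hz excitation
    t = np.arange(0.0, 4.0, dt)
    x = 0.01 * np.sin(2 * np.pi * f * t)                # 10 mm sinusoidal displacement
    force = bouc_wen_force(x, dt)
    print(f"peak force ~ {np.max(np.abs(force)):.1f} N")
```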

  19. Some Hamiltonian models of friction II

    SciTech Connect

    Egli, Daniel; Gang Zhou

    2012-10-15

    In the present paper we consider the motion of a very heavy tracer particle in a medium of a very dense, non-interacting Bose gas. We prove that, in a certain mean-field limit, the tracer particle will be decelerated and come to rest somewhere in the medium. Friction is caused by emission of Cerenkov radiation of gapless modes into the gas. Mathematically, a system of semilinear integro-differential equations, introduced in Froehlich et al. ['Some hamiltonian models of friction,' J. Math. Phys. 52(8), 083508 (2011)], describing a tracer particle in a dispersive medium is investigated, and decay properties of the solution are proven. This work is an extension of Froehlich et al. ['Friction in a model of hamiltonian dynamics,' Commun. Math. Phys. 315(2), 401-444 (2012)]; it is an extension because no weak coupling limit for the interaction between tracer particle and medium is assumed. The technical methods used are dispersive estimates and a contraction principle.

  20. Validation of and enhancements to an operating-speed-based geometric design consistency evaluation model 

    E-print Network

    Collins, Kent Michael

    1995-01-01

    This thesis documents efforts to validate two elements related to an operating-speed-based geometric design consistency evaluation procedure: (1) the speed reduction estimation ability of the model, and (2) assumptions about acceleration...

  1. BRE large compartment fire tests – characterising post-flashover fires for model validation 

    E-print Network

    Welch, Stephen; Jowsey, Allan; Deeny, Susan; Morgan, Richard; Torero, Jose L

    2007-01-01

    Reliable and comprehensive measurement data from large-scale fire tests is needed for validation of computer fire models, but is subject to various uncertainties, including radiation errors in temperature measurement. Here, ...

  2. Climatically Diverse Data Set for Flat-Plate PV Module Model Validations (Presentation)

    SciTech Connect

    Marion, B.

    2013-05-01

    Photovoltaic (PV) module I-V curves were measured at Florida, Colorado, and Oregon locations to provide data for the validation and development of models used for predicting the performance of PV modules.

  3. A Composite Model for Indoor GNSS Signals: Characterization, Experimental Validation and Simulation

    E-print Network

    Calgary, University of

    A Composite Model for Indoor GNSS Signals: Characterization, Experimental Validation and Simulation. ... the problem of characterizing and simulating indoor Global Navigation Satellite Systems (GNSS) signals ... the modernization of the Global Positioning System (GPS) has further improved the accuracy and availability ...

  4. Biomarker Discovery and Validation for Proteomics and Genomics: Modeling And Systematic Analysis 

    E-print Network

    Atashpazgargari, Esmaeil

    2014-08-27

    Discovery and validation of protein biomarkers with high specificity is the main challenge of current proteomics studies. Different mass spectrometry models are used as shotgun tools for discovery of biomarkers which is usually done on a small...

  5. Community-Based Participatory Research Conceptual Model: Community Partner Consultation and Face Validity.

    PubMed

    Belone, Lorenda; Lucero, Julie E; Duran, Bonnie; Tafoya, Greg; Baker, Elizabeth A; Chan, Domin; Chang, Charlotte; Greene-Moton, Ella; Kelley, Michele A; Wallerstein, Nina

    2016-01-01

    A national community-based participatory research (CBPR) team developed a conceptual model of CBPR partnerships to understand the contribution of partnership processes to improved community capacity and health outcomes. With the model primarily developed through academic literature and expert consensus building, we sought community input to assess face validity and acceptability. Our research team conducted semi-structured focus groups with six partnerships nationwide. Participants validated and expanded on existing model constructs and identified new constructs based on "real-world" praxis, resulting in a revised model. Four cross-cutting constructs were identified: trust development, capacity, mutual learning, and power dynamics. By empirically testing the model, we found community face validity and capacity to adapt the model to diverse contexts. We recommend partnerships use and adapt the CBPR model and its constructs, for collective reflection and evaluation, to enhance their partnering practices and achieve their health and research goals. PMID:25361792

  6. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, since a 65% permeability reduction around the wellbore was estimated. The design for this minifracture was from 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. Also, an induced fracture half-length of 100 feet was determined to have occurred, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  7. The hydrodynamical models of the cometary compact H II region

    E-print Network

    Zhu, Feng-Yao; Li, Juan; Zhang, Jiang-Shui; Wang, Jun-Zhi

    2015-01-01

    We have developed a full numerical method to study the gas dynamics of cometary ultra-compact (UC) H II regions, and associated photodissociation regions (PDRs). The bow-shock and champagne-flow models with a $40.9/21.9 M_\odot$ star are simulated. In the bow-shock models, the massive star is assumed to move through dense ($n=8000~cm^{-3}$) molecular material with a stellar velocity of $15~km~s^{-1}$. In the champagne-flow models, an exponential distribution of density with a scale height of 0.2 pc is assumed. The profiles of the [Ne II] 12.81 $\mu$m and $H_2~S(2)$ lines from the ionized regions and PDRs are compared for two sets of models. In champagne-flow models, emission lines from the ionized gas clearly show the effect of acceleration along the direction toward the tail due to the density gradient. The kinematics of the molecular gas inside the dense shell is mainly due to the expansion of the H II region. However, in bow-shock models the ionized gas mainly moves in the same direction as the stellar motion...

  8. Predictive data modeling of human type II diabetes related statistics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Kristina L.; Jaenisch, Holger M.; Handley, James W.; Albritton, Nathaniel G.

    2009-04-01

    During the course of routine Type II diabetes treatment of one of the authors, it was decided to derive predictive analytical Data Models of the daily sampled vital statistics, namely weight, blood pressure, and blood sugar, to determine if the covariance among the observed variables could yield a descriptive equation-based model, or better still, a predictive analytical model that could forecast the expected future trend of the variables and possibly reduce the number of finger sticks required to monitor blood sugar levels. The personal history and analysis with resulting models are presented.

  9. Prevalence of depression and validation of the Beck Depression Inventory-II and the Children's Depression Inventory-Short amongst HIV-positive adolescents in Malawi

    PubMed Central

    Kim, Maria H; Mazenga, Alick C; Devandra, Akash; Ahmed, Saeed; Kazembe, Peter N; Yu, Xiaoying; Nguyen, Chi; Sharp, Carla

    2014-01-01

    Introduction There is a remarkable dearth of evidence on mental illness in adolescents living with HIV/AIDS, particularly in the African setting. Furthermore, there are few studies in sub-Saharan Africa validating the psychometric properties of diagnostic and screening tools for depression amongst adolescents. The primary aim of this cross-sectional study was to estimate the prevalence of depression amongst a sample of HIV-positive adolescents in Malawi. The secondary aim was to develop culturally adapted Chichewa versions of the Beck Depression Inventory-II (BDI-II) and Children's Depression Inventory-II-Short (CDI-II-S) and conduct a psychometric evaluation of these measures by evaluating their performance against a structured depression assessment using the Children's Rating Scale, Revised (CDRS-R). Study design Cross-sectional study. Methods We enrolled 562 adolescents, 12–18 years of age from two large public HIV clinics in central and southern Malawi. Participants completed two self-reports, the BDI-II and CDI-II-S, followed by administration of the CDRS-R by trained clinicians. Sensitivity, specificity and positive and negative predictive values for various BDI-II and CDI-II-S cut-off scores were calculated with receiver operating characteristics analysis. The area under the curve (AUC) was also calculated. Internal consistency was measured by standardized Cronbach's alpha coefficient, and correlation between self-reports and CDRS-R by Spearman's correlation. Results Prevalence of depression as measured by the CDRS-R was 18.9%. Suicidal ideation was expressed by 7.1% (40) using the BDI-II. The AUC for the BDI-II was 0.82 (95% CI 0.78–0.89) and for the CDI-II-S was 0.75 (95% CI 0.70–0.80). A score of ≥13 in BDI-II achieved sensitivity of >80%, and a score of ≥17 had a specificity of >80%. The Cronbach's alpha was 0.80 (BDI-II) and 0.66 (CDI-II-S). The correlation between the BDI-II and CDRS-R was 0.42 (p<0.001) and between the CDI-II-S and CDRS-R was 0.37 (p<0.001). Conclusions This study demonstrates that the BDI-II has sound psychometric properties in an outpatient setting among HIV-positive adolescents in Malawi. The high prevalence of depression amongst HIV-positive Malawian adolescents noted in this study underscores the need for the development of comprehensive services for HIV-positive adolescents. PMID:25085002
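
    The cut-off analysis reported above (sensitivity at ≥13, specificity at ≥17) can be reproduced on any scored sample with a reference diagnosis. The sketch below does this on synthetic scores and labels; the study's individual-level data and the actual BDI-II score distributions are not used.

```python
# Sensitivity and specificity of a screening score at candidate cut-offs against a
# reference diagnosis. The scores and labels below are synthetic placeholders.

import numpy as np

def cutoff_table(scores, depressed, thresholds):
    """Return (threshold, sensitivity, specificity) treating score >= threshold as positive."""
    scores, depressed = np.asarray(scores), np.asarray(depressed, dtype=bool)
    rows = []
    for t in thresholds:
        flagged = scores >= t
        sens = np.mean(flagged[depressed])          # true positives / all depressed
        spec = np.mean(~flagged[~depressed])        # true negatives / all non-depressed
        rows.append((t, sens, spec))
    return rows

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    depressed = rng.random(500) < 0.19                               # ~19% prevalence
    scores = np.where(depressed,
                      rng.normal(22, 7, 500),                        # synthetic screening scores
                      rng.normal(9, 6, 500)).clip(0, 63).round()
    for t, sens, spec in cutoff_table(scores, depressed, thresholds=range(10, 21, 2)):
        print(f"cut-off >= {t:2d}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```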

  10. GCR Environmental Models III: GCR Model Validation and Propagated Uncertainties in Effective Dose

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Xu, Xiaojing; Blattnig, Steve R.; Norman, Ryan B.

    2014-01-01

    This is the last of three papers focused on quantifying the uncertainty associated with galactic cosmic rays (GCR) models used for space radiation shielding applications. In the first paper, it was found that GCR ions with Z>2 and boundary energy below 500 MeV/nucleon induce less than 5% of the total effective dose behind shielding. This is an important finding since GCR model development and validation have been heavily biased toward Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer measurements below 500 MeV/nucleon. Weights were also developed that quantify the relative contribution of defined GCR energy and charge groups to effective dose behind shielding. In the second paper, it was shown that these weights could be used to efficiently propagate GCR model uncertainties into effective dose behind shielding. In this work, uncertainties are quantified for a few commonly used GCR models. A validation metric is developed that accounts for measurements uncertainty, and the metric is coupled to the fast uncertainty propagation method. For this work, the Badhwar-O'Neill (BON) 2010 and 2011 and the Matthia GCR models are compared to an extensive measurement database. It is shown that BON2011 systematically overestimates heavy ion fluxes in the range 0.5-4 GeV/nucleon. The BON2010 and BON2011 also show moderate and large errors in reproducing past solar activity near the 2000 solar maximum and 2010 solar minimum. It is found that all three models induce relative errors in effective dose in the interval [-20%, 20%] at a 68% confidence level. The BON2010 and Matthia models are found to have similar overall uncertainty estimates and are preferred for space radiation shielding applications.

  11. Particulate dispersion apparatus for the validation of plume models 

    E-print Network

    Bala, William D

    2001-01-01

    higher than used in previous studies attempting to test the hypothesis that the Fritz-Zwicke-Meister dispersion model is an improvement over the ISC3-ST model now commonly used to predict downwind concentrations from industrial sources....

  12. Validation and assessment of integer programming sensor placement models.

    SciTech Connect

    Uber, James G.; Hart, William Eugene; Watson, Jean-Paul; Phillips, Cynthia Ann; Berry, Jonathan W.

    2005-02-01

    We consider the accuracy of predictions made by integer programming (IP) models of sensor placement for water security applications. We have recently shown that IP models can be used to find optimal sensor placements for a variety of different performance criteria (e.g. minimize health impacts and minimize time to detection). However, these models make a variety of simplifying assumptions that might bias the final solution. We show that our IP modeling assumptions are similar to models developed for other sensor placement methodologies, and thus IP models should give similar predictions. However, this discussion highlights that there are significant differences in how temporal effects are modeled for sensor placement. We describe how these modeling assumptions can impact sensor placements.
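
    As a minimal sketch of the kind of integer program described above, the snippet below states a generic first-detection formulation with the open-source PuLP modeler: place at most p sensors so that the expected impact of the earliest detection over a set of contamination scenarios is minimized. The network, impact matrix and budget are invented placeholders, and this is not claimed to be the authors' exact model.

```python
import pulp

# Invented placeholder data: impact[s][j] = damage if scenario s is first detected
# by a sensor at node j (a large "dummy" column models the non-detection case).
nodes = ["n1", "n2", "n3", "n4", "dummy"]
scenarios = ["s1", "s2", "s3"]
impact = {
    "s1": {"n1": 10, "n2": 50, "n3": 80, "n4": 60, "dummy": 200},
    "s2": {"n1": 70, "n2": 20, "n3": 40, "n4": 90, "dummy": 200},
    "s3": {"n1": 90, "n2": 60, "n3": 15, "n4": 30, "dummy": 200},
}
budget = 2  # maximum number of sensors

prob = pulp.LpProblem("sensor_placement", pulp.LpMinimize)
place = pulp.LpVariable.dicts("place", nodes, cat="Binary")              # sensor at node j?
assign = pulp.LpVariable.dicts("assign", (scenarios, nodes), cat="Binary")  # s first detected at j?

# Objective: total (here, uniformly weighted) impact of first detection.
prob += pulp.lpSum(impact[s][j] * assign[s][j] for s in scenarios for j in nodes)

for s in scenarios:
    prob += pulp.lpSum(assign[s][j] for j in nodes) == 1                 # each scenario detected somewhere
    for j in nodes:
        prob += assign[s][j] <= place[j]                                 # only at placed sensors
prob += pulp.lpSum(place[j] for j in nodes if j != "dummy") <= budget    # sensor budget
prob += place["dummy"] == 1                                              # non-detection always available

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [j for j in nodes if j != "dummy" and place[j].value() == 1]
print("placed sensors:", chosen, "objective:", pulp.value(prob.objective))
```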

  13. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

    SciTech Connect

    Dojcsak, L.; Marriner, J.; /Fermilab

    2010-08-01

    In this study we look at the SALT-II model of Type Ia supernova analysis, which determines the distance moduli based on the known absolute standard candle magnitude of the Type Ia supernovae. We examine the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error that is determined from the data. Using the SNANA software package provided for the analysis of Type Ia supernovae, we use a standard Monte Carlo simulation to generate data with known parameters to use as a tool for analyzing the trends in the model based on certain assumptions about the intrinsic error. In order to find the best standard candle model, we try to minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We can estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1. We can use the simulation to estimate the amount of color smearing as indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that we are able to use the RMS distribution in the variables as one method of estimating the correct intrinsic errors needed by the data to obtain the correct results for α and β. We then apply the resultant intrinsic error matrix to the real data and show our results.
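
    For orientation, the role of the shape and color coefficients appears in the standard SALT-II (Tripp) distance-modulus relation; this is the usual textbook form, quoted here as background rather than text taken from the record above:

    \[
    \mu = m_B^{*} - M + \alpha\, x_1 - \beta\, c,
    \]

    where \(m_B^{*}\) is the fitted peak rest-frame B-band magnitude, \(x_1\) the light-curve shape (stretch) parameter, \(c\) the color parameter, \(M\) the absolute standard-candle magnitude, and \(\alpha\), \(\beta\) the coefficients whose determination, together with the intrinsic error, is the subject of the study.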

  14. Discrete-Time Dataflow Models for Visual Simulation in Ptolemy II

    E-print Network

    California at Berkeley, University of

    Discrete-Time Dataflow Models for Visual Simulation in Ptolemy II, by Chamberlain Fong. Abstract: The Discrete Time (DT) domain in Ptolemy II is a timed extension of the Synchronous Dataflow (SDF) domain. Although not completely backward …

  15. The Validity of Value-Added Models: An Allegory

    ERIC Educational Resources Information Center

    Martineau, Joseph A.

    2010-01-01

    Value-added models have become popular fixes for various accountability schemes aimed at measuring teacher effectiveness. Value-added models may resolve some of the issues in accountability models, but they bring their own set of challenges to the table. Unfortunately, political and emotional considerations sometimes keep one from examining…

  16. Validation of Auroral Oval Models Using DMSP SSUSI

    NASA Astrophysics Data System (ADS)

    Jones, J. C.

    2013-12-01

    A study was performed to determine the accuracy of current operationally-ready auroral oval boundary models to assist in the determination of the quality of various approaches to auroral oval boundary specification. Four models were evaluated in this study: OVATION Prime, OVATION, Hardy and an ensemble based on the combination of these models. These auroral oval boundary models were compared to the Defense Meteorological Satellite Program (DMSP) Special Sensor Ultraviolet Spectrographic Imager (SSUSI) Auroral Boundary Environmental Data Record (EDR). Results of the study showed the solar wind-driven OVATION Prime model produced the most consistent results compared to the SSUSI data. The magnetometer-driven Hardy model produced the next most consistent results. The in-situ measurement/climatology/look-up table model, OVATION, produced the least consistent results. In terms of magnitude of difference, the Hardy model had the lowest mean error but the variability of the error was quite large (1 to 5 degrees). The OVATION Prime model had the next lowest mean error with a much more consistent peak in error near 5 degrees in geomagnetic latitude. The OVATION model had a wide spread of error (1 to 8 degrees). The ensemble model developed for this study performed very well. Most of the correlation coefficients for the ensemble model equatorward boundary were near 0.8 with mean absolute errors reduced to 1 degree in geomagnetic latitude.

  17. Estimation and Q-Matrix Validation for Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Feng, Yuling

    2013-01-01

    Diagnostic classification models (DCMs) are structured latent class models widely discussed in the field of psychometrics. They model subjects' underlying attribute patterns and classify subjects into unobservable groups based on their mastery of attributes required to answer the items correctly. The effective implementation of DCMs depends…

  18. Differential validation of the US-TEC model

    NASA Astrophysics Data System (ADS)

    Araujo-Pradere, E. A.; Fuller-Rowell, T. J.; Spencer, P. S. J.; Minter, C. F.

    2007-06-01

    This paper presents a validation and accuracy assessment of the total electron content (TEC) from US-TEC, a new product presented by the Space Environment Center over the contiguous United States (CONUS). US-TEC is a real-time operational implementation of the MAGIC code and provides TEC maps every 15 min and the line-of-sight electron content between any point within the CONUS and all GPS satellites in view. Validation of TEC is difficult since there are no absolute or true values of TEC. All methods of obtaining TEC, for instance, from GPS, ocean surface monitors (TOPEX), and lightning detectors (FORTE), have challenges that limit their accuracy. GPS data have interfrequency biases; TOPEX also has biases, and data are collected only over the oceans; and FORTE can eliminate biases, but because of the lower operating frequency, the signals suffer greater bending on the rays. Because of the difficulty in obtaining an absolute unbiased TEC measurement, a "differential" accuracy estimate has been performed. The method relies on the fact that uninterrupted GPS data along a particular receiver-satellite link with no cycle slips are very precise. The phase difference (scaled to TEC units) from one epoch to the next can be determined with an accuracy of less than 0.01 TEC units. This fact can be utilized to estimate the uncertainty in the US-TEC vertical and slant path maps. By integrating through US-TEC inversion maps at two different times, the difference in the slant TEC can be compared with the direct phase difference in the original RINEX data file for nine receivers not used in the US-TEC calculations. The results of this study, for the period of April-September 2004, showed an average root mean square error of 2.4 TEC units, which is equivalent to less than 40 cm of signal delay at the GPS L1 frequency. The accuracy estimates from this "differential" method are similar to the results from a companion paper utilizing an "absolute" validation method by comparing with FORTE data.
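
    A hedged numerical sketch of the "differential" comparison described above: epoch-to-epoch differences of modeled slant TEC are compared with the (very precise) carrier-phase differences along one receiver-satellite link and summarized by an RMS error. All arrays are invented placeholders, not RINEX or US-TEC data.

```python
import numpy as np

def differential_rms(slant_tec_model, phase_tec):
    """RMS of (model epoch-difference minus phase epoch-difference), in TEC units.

    slant_tec_model : modeled slant TEC along one receiver-satellite link, per epoch
    phase_tec       : carrier-phase derived TEC for the same link (arbitrary constant
                      bias, but nearly exact epoch-to-epoch differences)
    """
    d_model = np.diff(slant_tec_model)   # model TEC change between epochs
    d_phase = np.diff(phase_tec)         # phase-derived TEC change between epochs
    return np.sqrt(np.mean((d_model - d_phase) ** 2))

# Invented example: a smooth "true" TEC series, a biased phase series, and a
# model series with roughly 2 TECU of error.
rng = np.random.default_rng(1)
epochs = np.arange(0, 3600, 30)                           # 30 s epochs over one hour
true_tec = 20 + 5 * np.sin(2 * np.pi * epochs / 3600)
phase_tec = true_tec + 12.3                               # constant (unknown) bias
model_tec = true_tec + rng.normal(0.0, 2.0, size=epochs.size)

print(f"differential RMS error: {differential_rms(model_tec, phase_tec):.2f} TECU")
```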

  19. An immune system-tumour interactions model with discrete time delay: Model analysis and validation

    NASA Astrophysics Data System (ADS)

    Piotrowska, Monika Joanna

    2016-05-01

    In this article a generalised mathematical model describing the interactions between a malignant tumour and the immune system, with a discrete time delay incorporated into the system, is considered. The time delay represents the time required to generate an immune response following activation of the immune system by cancer cells. The basic mathematical properties of the considered model, including global existence, uniqueness and non-negativity of the solutions, the stability of steady states and the possibility of stability switches, are investigated when the time delay is treated as a bifurcation parameter. The model is validated against sets of experimental data, and additional numerical simulations are performed to illustrate, extend, interpret and discuss the analytical results in the context of tumour progression.
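
    As a hedged illustration of the class of models discussed above (a generic tumour-immune system with a discrete delay, not the article's specific equations; all symbols are introduced here for illustration):

    \[
    \begin{aligned}
    \frac{dT}{dt} &= r\,T\left(1 - \frac{T}{K}\right) - a\,E\,T,\\
    \frac{dE}{dt} &= s + \frac{p\,E(t-\tau)\,T(t-\tau)}{g + T(t-\tau)} - m\,E - n\,E\,T,
    \end{aligned}
    \]

    where \(T\) is the tumour population, \(E\) the effector-cell population, and \(\tau\) the time needed to mount an immune response; treating \(\tau\) as a bifurcation parameter is what gives rise to the stability switches mentioned above.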

  20. Modeling the Object-Oriented Space Through Validated Measures

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.

    1996-01-01

    In order to truly understand software and the software development process, software measurement must be better understood. A beginning step toward a better understanding of software measurement is the categorization of the measurements by some meaningful taxonomy. The most meaningful taxonomy would capture the basic nature of the object-oriented (O-O) space. The interesting characteristics of object-oriented software offer a starting point for such a categorization of measures. A taxonomy has been developed based on fourteen characteristics of object-oriented software gathered from the literature. This taxonomy allows us to easily see gaps and redundancies in the O-O measures. The taxonomy also clearly differentiates among taxa so that there is no ambiguity as to the taxon to which a measure belongs. The taxonomy has been populated with thirty-two measures that have been validated in the narrow sense of Fenton, using measurement theory with Zuse's augmentation.

  1. Global and Regional Ecosystem Modeling: Databases of Model Drivers and Validation Measurements

    NASA Astrophysics Data System (ADS)

    Olson, R. J.; Hibbard, K.; Kittel, T. G.; Scurlock, J. M.

    2001-05-01

    Understanding regional-scale ecosystem responses to changing environmental conditions is important both as a scientific question and as the basis for making policy decisions. The confidence in projecting ecosystem responses using regional ecological models depends on how well the field data used to develop the model represent the region of interest, how well the environmental driving variables represent the region of interest, and how well regional model predictions agree with observed data for the region. Recently, confidence in model predictions has increased as a result of the availability of well-documented compilations of data to run and validate models. Results from the Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) illustrate the value of developing common driver data and the Ecosystem Model-Data Intercomparison (EMDI) exercise demonstrates the usefulness of an extensive collection of field measurements to compare to model predictions. VEMAP is a multi-institutional, international effort addressing responses of biogeography and biogeochemistry models to environmental variability to determine their sensitivity to changing climate and elevated atmospheric carbon dioxide concentrations. Climate, climate change scenarios, soil properties, and potential natural vegetation data sets were prepared as common boundary conditions and driving variables for the 3200 0.5° grid cells covering the United States. The highly structured nature of the intercomparison allowed rigorous analysis of results. Recent EMDI workshops have provided a venue for 17 modeling groups to compare their model outputs against net primary productivity (NPP) data for 2,523 sites and 5,164 0.5° grid cells distributed worldwide. Results showed general agreement between model predictions and field measurements but with obvious differences that indicated areas for potential data and model improvement. Data from both the VEMAP and the EMDI projects are available from the Oak Ridge National Laboratory Distributed Active Archive Center (http://www.daac.ornl.gov/).

  2. Modelling Ar II spectral emission from the ASTRAL helicon plasma

    NASA Astrophysics Data System (ADS)

    Munoz Burgos, Jorge; Boivin, Robert; Loch, Stuart; Kamar, Ola; Ballance, Connor; Pindzola, Mitch

    2008-11-01

    We describe our spectral modeling of Ar II emission from the ASTRAL helicon plasma at Auburn University. Collisional-radiative theory is used to model the emitted spectrum, with account being taken for the density and temperature variation along the line of sight. This study has two main aims: first, to test the atomic data used in the model, and second, to identify spectral line ratios in the 200 nm - 1000 nm range that could be used as temperature diagnostics. Using the temperature at which Ar II emission starts to be seen, we have been able to test recent ionization and recombination data. Using selected spectral lines we were then able to test the importance of the continuum-coupling effects included in the most recent Ar+ electron impact excitation data. Selected spectral line ratios have been identified that show a strong temperature variation and have potential as a temperature diagnostic.

  3. Motivating the Additional Use of External Validity: Examining Transportability in a Model of Glioblastoma Multiforme

    PubMed Central

    Singleton, Kyle W.; Speier, William; Bui, Alex AT; Hsu, William

    2014-01-01

    Despite the growing ubiquity of data in the medical domain, it remains difficult to apply results from experimental and observational studies to additional populations suffering from the same disease. Many methods are employed for testing internal validity; yet limited effort is made in testing generalizability, or external validity. The development of disease models often suffers from this lack of validity testing and trained models frequently have worse performance on different populations, rendering them ineffective. In this work, we discuss the use of transportability theory, a causal graphical model examination, as a mechanism for determining what elements of a data resource can be shared or moved between a source and target population. A simplified Bayesian model of glioblastoma multiforme serves as the example for discussion and preliminary analysis. Examination over data collection hospitals from the TCGA dataset demonstrated improvement of prediction in a transported model over a baseline model. PMID:25954466

  4. Data & model conditioning for multivariate systematic uncertainty in model calibration, validation, and extrapolation.

    SciTech Connect

    Romero, Vicente Jose

    2010-03-01

    This paper discusses implications and appropriate treatment of systematic uncertainty in experiments and modeling. Systematic uncertainty exists when experimental conditions, and/or measurement bias errors, and/or bias contributed by post-processing the data, are constant over the set of experiments but the particular values of the conditions and/or biases are unknown to within some specified uncertainty. Systematic uncertainties in experiments do not automatically show up in the output data, unlike random uncertainty which is revealed when multiple experiments are performed. Therefore, the output data must be properly 'conditioned' to reflect important sources of systematic uncertainty in the experiments. In industrial scale experiments the systematic uncertainty in experimental conditions (especially boundary conditions) is often large enough that the inference error on how the experimental system maps inputs to outputs is often quite substantial. Any such inference error and uncertainty thereof also has implications in model validation and calibration/conditioning; ignoring systematic uncertainty in experiments can lead to 'Type X' error in these procedures. Apart from any considerations of modeling and simulation, reporting of uncertainty associated with experimental results should include the effects of any significant systematic uncertainties in the experiments. This paper describes and illustrates the treatment of multivariate systematic uncertainties of interval and/or probabilistic natures, and combined cases. The paper also outlines a practical and versatile 'real-space' framework and methodology within which experimental and modeling uncertainties (correlated and uncorrelated, systematic and random, aleatory and epistemic) are treated to mitigate risk in model validation, calibration/conditioning, hierarchical modeling, and extrapolative prediction.

  5. Social Validity of the Critical Incident Stress Management Model for School-Based Crisis Intervention

    ERIC Educational Resources Information Center

    Morrison, Julie Q.

    2007-01-01

    The Critical Incident Stress Management (CISM) model for crisis intervention was developed for use with emergency service personnel. Research regarding the use of the CISM model has been conducted among civilians and high-risk occupation groups with mixed results. The purpose of this study is to examine the social validity of the CISM model for…

  6. Validation of a dynamic model for a dehumidifier wood drying kiln

    SciTech Connect

    Sun, Z.F.; Carrington, C.G.; Bannister, P.

    1999-04-01

    The validation of a dehumidifier wood drying kiln model established previously has been conducted by using the performance data for a commercial scale kiln. The good agreement between the modelled and measured performance results shows that the model can be used for the design and analysis of dehumidifier wood drying kilns.

  7. The Weird World, and Equally Weird Measurement Models: Reactive Indicators and the Validity Revolution

    ERIC Educational Resources Information Center

    Hayduk, Leslie A.; Robinson, Hannah Pazderka; Cummings, Greta G.; Boadu, Kwame; Verbeek, Eric L.; Perks, Thomas A.

    2007-01-01

    Researchers using structural equation modeling (SEM) aspire to learn about the world by seeking models with causal specifications that match the causal forces extant in the world. This quest for a model matching existing worldly causal forces constitutes an ontology that orients, or perhaps reorients, thinking about measurement validity. This…

  8. Spatial calibration and temporal validation of flow for regional scale hydrologic modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Physically based regional scale hydrologic modeling is gaining importance for planning and management of water resources. Calibration and validation of such regional scale model is necessary before applying it for scenario assessment. However, in most regional scale hydrologic modeling, flow validat...

  9. Tsunami Generation by Submarine Mass Failure. I: Modeling, Experimental Validation, and Sensitivity Analyses

    E-print Network

    Grilli, Stéphan T.

    Tsunami Generation by Submarine Mass Failure. I: Modeling, Experimental Validation, and Sensitivity Analyses. … a two-dimensional (2D) fully nonlinear potential flow (FNPF) model for tsunami generation by two idealized … a simple wavemaker formalism, and prescribed as a boundary condition in the FNPF model. Tsunami amplitudes …

  10. Shape memory polymer filled honeycomb model and experimental validation

    NASA Astrophysics Data System (ADS)

    Beblo, R. V.; Puttmann, J. P.; Joo, J. J.; Reich, G. W.

    2015-02-01

    An analytical model predicting the in-plane Young’s and shear moduli of a shape memory polymer filled honeycomb composite is presented. By modeling the composite as a series of rigidly attached beams, the mechanical advantage of the load distributed on each beam by the infill is accounted for. The model is compared to currently available analytical models as well as experimental data. The model correlates extremely well with experimental data for empty honeycomb and when the polymer is above its glass transition temperature. Below the glass transition temperature, rule of mixtures is shown to be more accurate as bending is no longer the dominant mode of deformation. The model is also derived for directions other than the typical x and y allowing interpolation of the stiffness of the composite in any direction.
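
    For the regime below the glass transition, where the abstract reports rule of mixtures to be the more accurate description, the relevant relation is simply a volume-weighted average of the constituent in-plane moduli (the standard rule-of-mixtures form, not an equation taken from the paper; symbols introduced here for illustration):

    \[
    E_{c} \approx V_{h}\,E_{h} + \left(1 - V_{h}\right) E_{p},
    \]

    where \(E_{h}\) and \(E_{p}\) are the in-plane moduli of the honeycomb and the polymer infill and \(V_{h}\) is the honeycomb volume fraction.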

  11. Atmospheric Dispersion Model Validation in Low Wind Conditions

    SciTech Connect

    Sawyer, Patrick

    2007-11-01

    Atmospheric plume dispersion models are used for a variety of purposes including emergency planning and response to hazardous material releases, determining force protection actions in the event of a Weapons of Mass Destruction (WMD) attack and for locating sources of pollution. This study provides a review of previous studies that examine the accuracy of atmospheric plume dispersion models for chemical releases. It considers the principles used to derive air dispersion plume models and looks at three specific models currently in use: Aerial Location of Hazardous Atmospheres (ALOHA), Emergency Prediction Information Code (EPIcode) and Second Order Closure Integrated Puff (SCIPUFF). Results from this study indicate over-prediction bias by the EPIcode and SCIPUFF models and under-prediction bias by the ALOHA model. The experiment parameters were for near field dispersion (less than 100 meters) in low wind speed conditions (less than 2 meters per second).
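
    The models compared above all descend from the Gaussian plume solution for a continuous point source; the sketch below evaluates the textbook ground-reflected form. The release and dispersion parameters are illustrative placeholders, and the formula is the generic textbook expression rather than the specific implementation of ALOHA, EPIcode or SCIPUFF.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Textbook Gaussian plume concentration (kg/m^3) with ground reflection.

    q       : emission rate (kg/s)
    u       : wind speed (m/s)
    y, z    : crosswind and vertical receptor coordinates (m)
    h       : effective release height (m)
    sigma_y, sigma_z : dispersion coefficients evaluated at the downwind distance of interest (m)
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2 * sigma_z**2)))   # image source models ground reflection
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative near-field, low-wind example (placeholder values only).
c = gaussian_plume(q=0.1, u=1.5, y=0.0, z=1.5, h=2.0, sigma_y=8.0, sigma_z=4.0)
print(f"centerline concentration: {c:.3e} kg/m^3")
```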

  12. Modeling and Simulation of Longitudinal Dynamics for LER-HER PEP II Rings

    SciTech Connect

    Rivetta, Claudio; Mastorides, T.; Fox, J.D.; Teytelman, D.; Van Winkle, D.; /SLAC

    2007-03-06

    A time domain modeling and simulation tool for beam-cavity interactions in the LER and HER rings at PEP II is presented. The motivation for this tool is to explore the stability margins and performance limits of the PEP II RF systems at higher currents and upgraded RF configurations. It also serves as a test bed for new control algorithms and can define the ultimate limits of the architecture. The time domain program captures the dynamical behavior of the beam-cavity interaction based on a reduced model. The ring current is represented by macro-bunches. Multiple RF stations in the ring are represented via one or two macro-cavities. Each macro-cavity captures the overall behavior of a complete two- or four-cavity RF station. Station models include nonlinear elements in the klystron and signal processing. This allows modeling of the principal longitudinal impedance control loops interacting with the longitudinal beam model. Validation of the simulation tool is in progress by comparing the measured growth rates for both the LER and HER rings with simulation results. The simulated behavior of both machines at high currents is presented, comparing different control strategies and the effect of nonlinear klystrons on the growth rates.

  13. Web-page on UrQMD Model Validation

    E-print Network

    A. Galoyan; J. Ritman; V. Uzhinsky

    2006-05-18

    A web page containing materials comparing experimental data with UrQMD model calculations has been designed. The page provides its users with the variety of tasks solved with the help of the model, the accuracy and quality of the experimental data description, and so on. The page can be useful for analysis of new experimental data or for planning new experimental research. The UrQMD model is cited in more than 272 publications, only 44 of which present original calculations. Their main results on the model are presented on the page.

  14. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for detection of genotoxic carcinogens: II. Summary of definitive validation study results.

    PubMed

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamitsu; Schechtman, Leonard M; Tice, Raymond R; Beevers, Carol; De Boeck, Marlies; Burlinson, Brian; Hobbs, Cheryl A; Kitamoto, Sachiko; Kraynak, Andrew R; McNamee, James; Nakagawa, Yuzuki; Pant, Kamala; Plappert-Helbig, Ulla; Priestley, Catherine; Takasawa, Hironao; Wada, Kunio; Wirnitzer, Uta; Asano, Norihide; Escobar, Patricia A; Lovell, David; Morita, Takeshi; Nakajima, Madoka; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this exercise was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The study protocol was optimized in the pre-validation studies, and then the definitive (4th phase) validation study was conducted in two steps. In the 1st step, assay reproducibility was confirmed among laboratories using four coded reference chemicals and the positive control ethyl methanesulfonate. In the 2nd step, the predictive capability was investigated using 40 coded chemicals with known genotoxic and carcinogenic activity (i.e., genotoxic carcinogens, genotoxic non-carcinogens, non-genotoxic carcinogens, and non-genotoxic non-carcinogens). Based on the results obtained, the in vivo comet assay is concluded to be highly capable of identifying genotoxic chemicals and therefore can serve as a reliable predictor of rodent carcinogenicity. PMID:26212295

  15. A comprehensive validation toolbox for regional ocean models - Outline, implementation and application to the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Jandt, Simon; Laagemaa, Priidik; Janssen, Frank

    2014-05-01

    The systematic and objective comparison between output from a numerical ocean model and a set of observations, called validation in the context of this presentation, is a beneficial activity at several stages, starting from early steps in model development and ending at the quality control of model-based products delivered to customers. Even though the importance of this kind of validation work is widely acknowledged, it is often not among the most popular tasks in ocean modelling. In order to ease the validation work, a comprehensive toolbox has been developed in the framework of the MyOcean-2 project. The objective of this toolbox is to carry out validation integrating different data sources, e.g. time series at stations, vertical profiles, surface fields or along-track satellite data, with one single program call. The validation toolbox, implemented in MATLAB, features all parts of the validation process, ranging from read-in procedures for datasets to the graphical and numerical output of statistical metrics of the comparison. The basic idea is to have only one well-defined validation schedule for all applications, in which all parts of the validation process are executed. Each part, e.g. read-in procedures, forms a module in which all available functions of this particular part are collected. The interface between the functions, the module and the validation schedule is highly standardized. Functions of a module are set up for certain validation tasks; new functions can be implemented into the appropriate module without affecting the functionality of the toolbox. The functions are assigned to each validation task in user-specific settings, which are stored externally in so-called namelists and gather all information on the datasets used as well as paths and metadata. In the framework of the MyOcean-2 project the toolbox is frequently used to validate the forecast products of the Baltic Sea Marine Forecasting Centre; here the performance of each new product version is compared with that of the previous version. Although the toolbox has so far been tested mainly for the Baltic Sea, it can easily be adapted to different datasets and parameters, regardless of the geographic region. In this presentation the usability of the toolbox is demonstrated along with several results of the validation process.
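
    The modular, namelist-driven design described above can be illustrated with a small configuration-driven pipeline. This is a generic Python sketch of the idea (reader modules, metric modules, one driver call), not the MyOcean-2 MATLAB toolbox itself; all function and configuration names are invented.

```python
import numpy as np

# "Modules": registries of read-in functions and metric functions.
READERS = {
    # invented synthetic reader so the sketch runs without external files
    "synthetic_timeseries": lambda cfg: np.sin(np.linspace(0.0, 10.0, 200)) + cfg.get("offset", 0.0),
}
METRICS = {
    "bias": lambda model, obs: float(np.mean(model - obs)),
    "rmse": lambda model, obs: float(np.sqrt(np.mean((model - obs) ** 2))),
}

def validate(namelist):
    """One well-defined validation schedule: read model and obs, apply all requested metrics."""
    model = READERS[namelist["model"]["reader"]](namelist["model"])
    obs = READERS[namelist["obs"]["reader"]](namelist["obs"])
    return {name: METRICS[name](model, obs) for name in namelist["metrics"]}

# Invented "namelist" standing in for the externally stored user settings.
namelist = {
    "model": {"reader": "synthetic_timeseries", "offset": 0.3},   # stand-in for a model run
    "obs":   {"reader": "synthetic_timeseries", "offset": 0.0},   # stand-in for observations
    "metrics": ["bias", "rmse"],
}

print(validate(namelist))   # e.g. {'bias': 0.3, 'rmse': 0.3}
```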

  16. Flashing liquid jets and two-phase droplet dispersion II. Comparison and validation of droplet size and rainout formulations.

    PubMed

    Witlox, Henk; Harper, Mike; Bowen, Phil; Cleary, Vincent

    2007-04-11

    Loss of containment often results in flashing releases of hazardous chemicals into the atmosphere. Rainout of these chemicals reduces airborne concentrations, but can also lead to extended cloud duration because of re-evaporation of the rained-out liquid. Therefore, for hazard assessment one must use models which accurately predict both the amount of rainout and its rate of re-evaporation. However, the findings of a literature survey reveal weaknesses in the state-of-the-art for modelling the sub-processes of droplet atomisation, atmospheric expansion, two-phase dispersion, rainout, pool formation and re-evaporation. A recent joint industry project has implemented recommendations from this survey, deriving from scaled water experiments droplet size correlations for conditions ranging from negative to high superheat. This experimental programme is reported in more detail in a separate companion paper. Taken together, these correlations describe a tri-linear relationship between droplet size (expressed as Sauter mean diameter) and superheat, covering the regimes of non-flashing, the transition between non-flashing and flashing, and fully flashing. The new correlations have been compared with previous correlations recommended by the Dutch Yellow Book and CCPS Books. The correlations are validated against published experiments including the STEP experiments (flashing propane jets), experiments by the Von Karman Institute (flashing R134-A jets), and water and butane experiments carried out by Ecole des Mines and INERIS. The rainout calculations by the dispersion model have been validated against a subset of the CCPS experiments (flashing jets of water, CFC-11, chlorine, cyclohexane, monomethylamine). PMID:16911856

  17. Chemical kinetics parameters and model validation for the gasification of PCEA nuclear graphite

    SciTech Connect

    El-Genk, Mohamed S; Tournier, Jean-Michel; Contescu, Cristian I

    2014-01-01

    A series of gasification experiments, using two right-cylinder specimens (~ 12.7 x 25.4 mm and 25.4 x 25.4 mm) of PCEA nuclear graphite in ambient airflow, measured the total gasification flux at weight losses up to 41.5% and temperatures (893-1015 K) characteristic of in-pores gasification Mode (a) and in-pores diffusion-limited Mode (b). The chemical kinetics parameters for the gasification of PCEA graphite are determined using a multi-parameter optimization algorithm from the measurements of the total gasification rate and transient weight loss in the experiments. These parameters are: (i) the pre-exponential rate coefficients and the Gaussian distributions and values of specific activation energies for adsorption of oxygen and desorption of CO gas; (ii) the specific activation energy and pre-exponential rate coefficient for the breakup of stable un-dissociated C(O2) oxygen radicals to form stable (CO) complexes; (iii) the specific activation energy and pre-exponential coefficient for desorption of CO2 gas; and (iv) the initial surface area of reactive free sites per unit mass. This area is consistently 13.5% higher than that for the nuclear graphite grades NBG-25 and IG-110 and decreases in inverse proportion to the square root of the initial mass of the graphite specimens in the experiments. Experimental measurements successfully validate the chemical-reactions kinetics model, which calculates continuous Arrhenius curves of the total gasification flux and the production rates of CO and CO2 gases. The model results at different total weight losses agree well with measurements and extend beyond the temperatures in the experiments to the diffusion-limited mode of gasification. Also calculated are the production rates of CO and CO2 gases and their relative contributions to the total gasification rate in the experiments as functions of temperature, for total weight losses of 5% and 10%.
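
    A hedged sketch of the Arrhenius-type temperature dependence that underlies kinetics models of this kind; the parameter values below are placeholders, not the fitted PCEA coefficients.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def arrhenius(temperature_K, pre_exponential, activation_energy_J_mol):
    """Arrhenius rate coefficient k(T) = A * exp(-E / (R T))."""
    return pre_exponential * np.exp(-activation_energy_J_mol / (R * temperature_K))

# Placeholder parameters (illustrative only, not the fitted PCEA values).
A = 1.0e7    # pre-exponential coefficient, arbitrary units
E = 180e3    # specific activation energy, J/mol

for T in (900.0, 950.0, 1000.0):   # temperatures spanning the experimental range
    print(f"T = {T:6.1f} K : k = {arrhenius(T, A, E):.3e}")
```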

  18. Summary of EASM Turbulence Models in CFL3D With Validation Test Cases

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.

    2003-01-01

    This paper summarizes the Explicit Algebraic Stress Model in k-omega form (EASM-ko) and in k-epsilon form (EASM-ke) in the Reynolds-averaged Navier-Stokes code CFL3D. These models have been actively used over the last several years in CFL3D, and have undergone some minor modifications during that time. Details of the equations and method for coding the latest versions of the models are given, and numerous validation cases are presented. This paper serves as a validation archive for these models.

  19. How much certainty is enough? Validation of a nutrient retention model for prioritizing watershed conservation in North Carolina

    NASA Astrophysics Data System (ADS)

    Hamel, P.; Chaplin-Kramer, R.; Benner, R.

    2013-12-01

    Context Quantifying ecosystem services, nature's benefits to people, is an area of active research in water resource management. Increasingly, water utilities and basin management authorities are interested in optimizing watershed scale conservation strategies to mitigate the economic and environmental impacts of land-use and hydrological changes. While many models are available to represent hydrological processes in a spatially explicit way, large uncertainties remain associated with i) the biophysical outputs of these models (e.g., nutrient concentration at a given location), and ii) the service valuation method to support specific decisions (e.g., targeting conservation areas based on their contribution to retaining nutrients). Better understanding these uncertainties and their impact on the decision process is critical for establishing credibility of such models in a planning context. Methods To address this issue in an emerging payments for watershed services program in the Cape Fear watershed, North Carolina, USA, we tested and validated the use of a nutrient retention model (InVEST) for targeting conservation activities. Specifically, we modeled water yield and nutrient transport throughout the watershed and valued the retention service provided by forested areas. Observed flow and water quality data at multiple locations allowed calibration of the model at the watershed level as well as the subwatershed level. By comparing the results from each model parameterization, we were able to assess the uncertainties related to both the model structure and parameter estimation. Finally, we assessed the use of the model for climate scenario simulation by characterizing its ability to represent inter-annual variability. Results and discussion The spatial analyses showed that the two calibration approaches could yield distinct parameter sets, both for the water yield and the nutrient model. These results imply a difference in the absolute nutrient concentration predicted by the models in the validation period. However, they did not significantly impact the identification of priority areas for conservation activities, which is the level of confidence necessary to support a decision in this particular context. In addition, the temporal analyses suggested that the model could adequately capture inter-annual changes, which increases confidence for the use of the model in a context of climate change. Our approach shows the importance of assessing uncertainties in the context of decision-making, with errors in the biophysical component being less of a concern when comparing among different regions in a watershed or in scenario simulations. These results have major implications in the field of ecosystem services, where the importance of communicating uncertainties is often unappreciated. While further work is needed to generalize the results of the Cape Fear study, the approach also has the potential to validate the use of the model in ungauged basins.

  20. A Macro Model of Training and Development: Validation.

    ERIC Educational Resources Information Center

    Al-Khayyat, Ridha M.; Elgamal, Mahmoud A.

    1997-01-01

    A macro model of training and development includes input (training and development climate), process, and output (individual/organizational change) indicators. A test of the model with 387 Kuwaiti bank employees supported these indicators. Managers' perceptions of training and development and the organization's return on investment were…

  1. Hydroforming Of Patchwork Blanks — Numerical Modeling And Experimental Validation

    NASA Astrophysics Data System (ADS)

    Lamprecht, Klaus; Merklein, Marion; Geiger, Manfred

    2005-08-01

    In comparison to the commonly applied technology of tailored blanks the concept of patchwork blanks offers a number of additional advantages. Potential application areas for patchwork blanks in automotive industry are e.g. local reinforcements of automotive closures, structural reinforcements of rails and pillars as well as shock towers. But even if there is a significant application potential for patchwork blanks in automobile production, industrial realization of this innovative technique is decelerated due to a lack of knowledge regarding the forming behavior and the numerical modeling of patchwork blanks. Especially for the numerical simulation of hydroforming processes, where one part of the forming tool is replaced by a fluid under pressure, advanced modeling techniques are required to ensure an accurate prediction of the blanks' forming behavior. The objective of this contribution is to provide an appropriate model for the numerical simulation of patchwork blanks' forming processes. Therefore, different finite element modeling techniques for patchwork blanks are presented. In addition to basic shell element models a combined finite element model consisting of shell and solid elements is defined. Special emphasis is placed on the modeling of the weld seam. For this purpose the local mechanical properties of the weld metal, which have been determined by means of Martens-hardness measurements and uniaxial tensile tests, are integrated in the finite element models. The results obtained from the numerical simulations are compared to experimental data from a hydraulic bulge test. In this context the focus is laid on laser- and spot-welded patchwork blanks.

  2. Kohlberg's Moral Development Model: Cohort Influences on Validity.

    ERIC Educational Resources Information Center

    Bechtel, Ashleah

    An overview of Kohlberg's theory of moral development is presented; three interviews regarding the theory are reported, and the author's own moral development is compared to the model; finally, a critique of the theory is addressed along with recommendations for future enhancement. Lawrence Kohlberg's model of moral development, also referred to…

  3. Development and Validation of an Animal Susceptibility Model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An individual animal’s stress level is the summation of stresses from three areas: the environment, animal, and management. A model was developed to predict the susceptibility of an individual animal to heat stress. The model utilizes a hierarchical knowledge-based fuzzy inference system with 11 anim...

  4. Validation and Verification of LADEE Models and Software

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen

    2013-01-01

    The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a Model-Based Software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight Software requirements are prototyped and refined using the simulated models. After the model is shown to work as desired in this simulation framework, C-code software is automatically generated from the models. The generated software is then tested in real time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational sciences models are used to characterize the spacecraft simulation. A lightweight set of formal methods analysis, static analysis, formal inspection and code coverage analyses are utilized to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.

  5. EPIC and APEX: Model use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Environmental Policy Integrated Climate (EPIC) and Agricultural Policy/Environmental eXtender (APEX) models have been developed to assess a wide variety of agricultural water resource, water quality, and other environmental problems. The EPIC model is designed to be applied at a field-scale leve...

  6. Singlet Dark Matter in Type II Two Higgs Doublet Model

    E-print Network

    Yi Cai; Tong Li

    2013-08-24

    Inspired by the dark matter searches in the low mass region, we study the Type II two Higgs doublet model with a light gauge singlet WIMP stabilized by a Z_2 symmetry. The real singlet is required to only couple to the non-Standard Model Higgs. We investigate singlet candidates with different spins as well as isospin-violating effects. The parameter space favored by LHC data in the two Higgs doublet model and hadronic uncertainties in WIMP-nucleon elastic scattering are also taken into account. We find only the scalar singlet in the isospin conserving case leads to a major overlap with the regions of interest of most direct detection experiments.

  7. Development and Validation of a Tokamak Skin Effect Transformer model

    E-print Network

    Romero, J A; Coda, S; Felici, F; Garrido, I

    2012-01-01

    A control oriented, lumped parameter model for the tokamak transformer including the slow flux penetration in the plasma (skin effect transformer model) is presented. The model does not require detailed or explicit information about plasma profiles or geometry. Instead, this information is lumped in system variables, parameters and inputs. The model has an exact mathematical structure built from energy and flux conservation theorems, predicting the evolution and non linear interaction of the plasma current and internal inductance as functions of the primary coil currents, plasma resistance, non-inductive current drive and the loop voltage at a specific location inside the plasma (equilibrium loop voltage). Loop voltage profile in the plasma is substituted by a three-point discretization, and ordinary differential equations are used to predict the equilibrium loop voltage as function of the boundary and resistive loop voltages. This provides a model for equilibrium loop voltage evolution, which is reminiscent ...

  8. Nonlinear convective pulsation models of type II Cepheids

    NASA Astrophysics Data System (ADS)

    Smolec, Radoslaw

    2015-08-01

    We present a grid of nonlinear convective pulsation models of type-II Cepheids: BL Her stars, W Vir stars and RV Tau stars. The models cover a wide range of masses, luminosities, effective temperatures and chemical compositions. The most interesting result is the detection of deterministic chaos in the models. Different routes to chaos are detected (period doubling, intermittent route) as well as a variety of phenomena intrinsic to chaotic dynamics (periodic islands within chaotic bands, crisis bifurcation, type-I and type-III intermittency). Some of these phenomena (period doubling in BL Her and in RV Tau stars, irregular pulsation of RV Tau stars) are well known in the pulsation of type-II Cepheids; prospects of discovering the others are briefly discussed. The transition from BL Her-type pulsation through W Vir-type to RV Tau-type pulsation is analysed. In the most luminous models a dynamical instability is detected, which indicates that pulsation-driven mass loss is an important process in type-II Cepheids.

  9. Spatial Statistical Procedures to Validate Input Data in Energy Models

    SciTech Connect

    Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.

    2006-01-01

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the abovementioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.
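
    One of the tasks listed above, predicting missing spatial data, can be illustrated with a simple inverse-distance-weighted interpolation. This is a generic technique offered as an example, not the specific statistical models referenced in the record; the sample points and values are invented.

```python
import numpy as np

def idw_interpolate(known_xy, known_values, query_xy, power=2.0):
    """Inverse-distance-weighted estimate at each query point from known samples."""
    known_xy = np.asarray(known_xy, dtype=float)
    known_values = np.asarray(known_values, dtype=float)
    estimates = []
    for q in np.asarray(query_xy, dtype=float):
        d = np.linalg.norm(known_xy - q, axis=1)
        if np.any(d == 0):                          # query coincides with a known sample
            estimates.append(known_values[d == 0][0])
            continue
        w = 1.0 / d**power
        estimates.append(np.sum(w * known_values) / np.sum(w))
    return np.array(estimates)

# Invented example: wind-resource samples on a coarse grid, estimated at two gaps.
samples = [(0, 0), (0, 10), (10, 0), (10, 10)]
values = [5.2, 6.1, 4.8, 5.9]                       # e.g. mean wind speed, m/s
print(idw_interpolate(samples, values, [(5, 5), (2, 3)]))
```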

  10. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems.

    PubMed

    Silva, Lenardo C; Almeida, Hyggo O; Perkusich, Angelo; Perkusich, Mirko

    2015-01-01

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage. PMID:26528982

  11. J-Integral modeling and validation for GTS reservoirs.

    SciTech Connect

    Martinez-Canales, Monica L.; Nibur, Kevin A.; Lindblad, Alex J.; Brown, Arthur A.; Ohashi, Yuki; Zimmerman, Jonathan A.; Huestis, Edwin; Hong, Soonsung; Connelly, Kevin; Margolis, Stephen B.; Somerday, Brian P.; Antoun, Bonnie R.

    2009-01-01

    Non-destructive detection methods can reliably certify that gas transfer system (GTS) reservoirs do not have cracks larger than 5%-10% of the wall thickness. To determine the acceptability of a reservoir design, analysis must show that short cracks will not adversely affect the reservoir behavior. This is commonly done via calculation of the J-Integral, which represents the energetic driving force acting to propagate an existing crack in a continuous medium. J is then compared against a material's fracture toughness (J_c) to determine whether crack propagation will occur. While the quantification of the J-Integral is well established for long cracks, its validity for short cracks is uncertain. This report presents the results from a Sandia National Laboratories project to evaluate a methodology for performing J-Integral evaluations in conjunction with its finite element analysis capabilities. Simulations were performed to verify the operation of a post-processing code (J3D) and to assess the accuracy of this code and our analysis tools against companion fracture experiments for 2- and 3-dimensional geometry specimens. Evaluation is done for specimens composed of 21-6-9 stainless steel, some of which were exposed to a hydrogen environment, for both long and short cracks.
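
    For reference, the contour-integral definition of J evaluated in such analyses (the standard Rice form, stated here as background rather than as the specific implementation in J3D) is

    \[
    J = \int_{\Gamma} \left( W\,dy - T_i\,\frac{\partial u_i}{\partial x}\,ds \right),
    \]

    where \(\Gamma\) is a contour surrounding the crack tip, \(W\) the strain-energy density, \(T_i\) the traction vector on the contour, \(u_i\) the displacement field and \(s\) the arc length along \(\Gamma\); crack propagation is predicted when \(J\) reaches the material's fracture toughness \(J_c\).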

  12. Natural Interaction Metaphors for Functional Validations of Virtual Car Models.

    PubMed

    Moehring, Mathias; Froehlich, Bernd

    2011-02-01

    Natural Interaction is a key requirement for the virtual validation of functional aspects in automotive product development processes. It is the metaphor people encounter in reality: the direct manipulation of objects by their hands. To enable this kind of interaction, we propose a pseudo-physical metaphor that is plausible enough to provide realistic interaction and robust enough to meet the needs of industrial applications. Our analysis of the most common object types in automotive scenarios guided the development of a set of grasping heuristics to support robust finger-based interaction of multiple hands and users. The objects' reaction to the users' finger motions is based on pseudo-physical simulations, also taking various types of constrained objects into account. In dealing with real-world scenarios, we had to introduce Normal Proxies, which extend objects with appropriate normals for improved grasp detection. An expert review revealed that our metaphors allow for an intuitive and reliable assessment of several functionalities of objects found in a car interior. Follow-up user studies showed that overall task performance and usability are similar for CAVE- and HMD-environments. For larger objects and more gross manipulation, using the CAVE without employing a virtual hand representation is preferred, but for more fine-grained manipulation and smaller objects the HMD turns out to be beneficial. PMID:21301024

  13. Nearshore Tsunami Inundation Model Validation: Toward Sediment Transport Applications

    USGS Publications Warehouse

    Apotsos, Alex; Buckley, Mark; Gelfenbaum, Guy; Jaffe, Bruce; Vatvani, Deepak

    2011-01-01

    Model predictions from a numerical model, Delft3D, based on the nonlinear shallow water equations are compared with analytical results and laboratory observations from seven tsunami-like benchmark experiments, and with field observations from the 26 December 2004 Indian Ocean tsunami. The model accurately predicts the magnitude and timing of the measured water levels and flow velocities, as well as the magnitude of the maximum inundation distance and run-up, for both breaking and non-breaking waves. The shock-capturing numerical scheme employed describes well the total decrease in wave height due to breaking, but does not reproduce the observed shoaling near the break point. The maximum water levels observed onshore near Kuala Meurisi, Sumatra, following the 26 December 2004 tsunami are well predicted given the uncertainty in the model setup. The good agreement between the model predictions and the analytical results and observations demonstrates that the numerical solution and wetting and drying methods employed are appropriate for modeling tsunami inundation for breaking and non-breaking long waves. Extension of the model to include sediment transport may be appropriate for long, non-breaking tsunami waves. Using available sediment transport formulations, the sediment deposit thickness at Kuala Meurisi is predicted generally within a factor of 2.

  14. A forward model-based validation of cardiovascular system identification

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Cohen, R. J.

    2001-01-01

    We present a theoretical evaluation of a cardiovascular system identification method that we previously developed for the analysis of beat-to-beat fluctuations in noninvasively measured heart rate, arterial blood pressure, and instantaneous lung volume. The method provides a dynamical characterization of the important autonomic and mechanical mechanisms responsible for coupling the fluctuations (inverse modeling). To carry out the evaluation, we developed a computational model of the cardiovascular system capable of generating realistic beat-to-beat variability (forward modeling). We applied the method to data generated from the forward model and compared the resulting estimated dynamics with the actual dynamics of the forward model, which were either precisely known or easily determined. We found that the estimated dynamics corresponded to the actual dynamics and that this correspondence was robust to forward model uncertainty. We also demonstrated the sensitivity of the method in detecting small changes in parameters characterizing autonomic function in the forward model. These results provide confidence in the performance of the cardiovascular system identification method when applied to experimental data.

  15. On verification and validation of spring fabric model

    NASA Astrophysics Data System (ADS)

    Gao, Zheng; Shi, Qiangqiang; Yang, Yiyang; Li, Xiaolin

    2015-04-01

    An enhanced spring-mass model has been developed to mimic the complex behavior of a parachute canopy in the air flow. Given the Young's modulus and Poisson's ratio, the model has the ability to duplicate the realistic strain and stress of the elastic membrane by including the angular deformation energy in the triangulated mesh. The numerical results verify the effectiveness of the proposed model and demonstrate its convergent property. In addition, GPU-based parallel computing techniques are applied to accelerate the computational speed and increase the resolution of numerical results.
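
    A minimal sketch of the linear (structural) spring force used in spring-mass membrane models of this kind; the angular-deformation energy mentioned in the abstract is omitted, and the stiffness and geometry values are invented.

```python
import numpy as np

def spring_force(p1, p2, rest_length, stiffness):
    """Hooke's-law force on point p1 from the spring connecting p1 and p2."""
    d = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    length = np.linalg.norm(d)
    if length == 0.0:
        return np.zeros(3)                       # degenerate edge, no defined direction
    return stiffness * (length - rest_length) * (d / length)

# Invented example: one edge of a triangulated membrane, stretched by 10%.
f = spring_force(p1=[0.0, 0.0, 0.0], p2=[0.11, 0.0, 0.0],
                 rest_length=0.10, stiffness=500.0)
print(f)   # pulls p1 toward p2 along +x
```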

  16. Animal models of post-traumatic stress disorder: face validity

    PubMed Central

    Goswami, Sonal; Rodríguez-Sierra, Olga; Cascardi, Michele; Paré, Denis

    2013-01-01

    Post-traumatic stress disorder (PTSD) is a debilitating condition that develops in a proportion of individuals following a traumatic event. Despite recent advances, ethical limitations associated with human research impede progress in understanding PTSD. Fortunately, much effort has focused on developing animal models to help study the pathophysiology of PTSD. Here, we provide an overview of animal PTSD models where a variety of stressors (physical, psychosocial, or psychogenic) are used to examine the long-term effects of severe trauma. We emphasize models involving predator threat because they reproduce human individual differences in susceptibility to, and in the long-term consequences of, psychological trauma. PMID:23754973

  17. Comparison of occlusal contact areas of class I and class II molar relationships at finishing using three-dimensional digital models

    PubMed Central

    Lee, Hyejoon; Kim, Minji

    2015-01-01

    Objective This study compared the occlusal contact areas of ideally planned set-up models and accomplished final models against the initial models in class I and class II molar relationships at finishing. Methods Evaluations were performed for 41 post-orthodontic treatment cases, of which 22 were clinically diagnosed as class I and the remainder were diagnosed as full-cusp class II. Class I cases had four first premolars extracted, while class II cases had maxillary first premolars extracted. Occlusal contact areas were measured using a three-dimensional scanner and RapidForm 2004. Independent t-tests were used to validate comparison values between class I and II finishings. Repeated measures analysis of variance was used to compare initial, set-up, and final models. Results Molars from cases in the class I finishing for the set-up model showed significantly greater contact areas than those from class II finishing (p < 0.05). The final model class I finishing showed significantly larger contact areas for the second molars (p < 0.05). The first molars of the class I finishing for the final model showed a tendency to have larger contact areas than those of class II finishing, although the difference was not statistically significant (p = 0.078). Conclusions In set-up models, posterior occlusal contact was better in class I than in class II finishing. In final models, class I finishing tended to have larger occlusal contact areas than class II finishing. PMID:26023539

  18. Assessing Motivation to Learn Chemistry: Adaptation and Validation of Science Motivation Questionnaire II with Greek Secondary School Students

    ERIC Educational Resources Information Center

    Salta, Katerina; Koulougliotis, Dionysios

    2015-01-01

    In educational research, the availability of a validated version of an original instrument in a different language offers the possibility of valid measurements within a specific educational context; in addition, it provides the opportunity for valid cross-cultural comparisons. The present study aimed to adapt the Science Motivation…

  19. Theoretical models for Type I and Type II supernova

    SciTech Connect

    Woosley, S.E.; Weaver, T.A.

    1985-01-01

    Recent theoretical progress in understanding the origin and nature of Type I and Type II supernovae is discussed. New Type II presupernova models characterized by a variety of iron core masses at the time of collapse are presented and the sensitivity to the reaction rate ¹²C(α,γ)¹⁶O explained. Stars heavier than about 20 M⊙ must explode by a ''delayed'' mechanism not directly related to the hydrodynamical core bounce and a subset is likely to leave black hole remnants. The isotopic nucleosynthesis expected from these massive stellar explosions is in striking agreement with the sun. Type I supernovae result when an accreting white dwarf undergoes a thermonuclear explosion. The critical role of the velocity of the deflagration front in determining the light curve, spectrum, and, especially, isotopic nucleosynthesis in these models is explored. 76 refs., 8 figs.

  20. THE FERNALD DOSIMETRY RECONSTRUCTION PROJECT Environmental Pathways -Models and Validation

    E-print Network

    Contents of the report include deposition measurements using gummed film, soil data for locations near the FMPC, modeling approaches to surface water and groundwater transport, and exposure pathways.

  1. The fear-avoidance model of chronic pain: validation and age analysis using structural equation modeling.

    PubMed

    Cook, Andrew J; Brawer, Peter A; Vowles, Kevin E

    2006-04-01

    The cognitive-behavioral, fear-avoidance (FA) model of chronic pain (Vlaeyen JWS, Kole-Snijders AMJ, Boeren RGB, van Eek H. Fear of movement/(re)injury in chronic low back pain and its relation to behavioral performance. Pain 1995a;62:363-72) has found broad empirical support, but its multivariate, predictive relationships have not been uniformly validated. Applicability of the model across age groups of chronic pain patients has also not been tested. Goals of this study were to validate the predictive relationships of the multivariate FA model using structural equation modeling and to evaluate the factor structure of the Tampa Scale of Kinesiophobia (TSK), levels of pain-related fear, and fit of the FA model across three age groups: young (≤40), middle-aged (41-54), and older (≥55) adults. A heterogeneous sample of 469 chronic pain patients provided ratings of catastrophizing, pain-related fear, depression, perceived disability, and pain severity. Using a confirmatory approach, a 2-factor, 13-item structure of the TSK provided the best fit and was invariant across age groups. Older participants were found to have lower TSK fear scores than middle-aged participants for both factors (FA, Harm). A modified version of the Vlaeyen JWS, Kole-Snijders AMJ, Boeren RGB, van Eek H (Fear of movement/(re)injury in chronic low back pain and its relation to behavioral performance. Pain 1995a;62:363-72.) FA model provided a close fit to the data (χ²(29) = 42.0, p > 0.05, GFI = 0.98, AGFI = 0.97, CFI = 0.99, RMSEA = 0.031 (90% CI 0.000-0.050), p(close fit) = 0.95). Multigroup analyses revealed significant differences in structural weights for older vs. middle-aged participants. For older chronic pain patients, a stronger mediating role for pain-related fear was supported. Results are consistent with an FA model of chronic pain, while indicating some important age group differences in this model and in levels of pain-related fear. Longitudinal testing of the multivariate model is recommended. PMID:16495008
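
    The RMSEA value quoted above can be reproduced approximately from the reported chi-square and degrees of freedom with the standard formula RMSEA = sqrt(max(χ² − df, 0) / (df (N − 1))); the short check below assumes the full sample of N = 469 was used.

      # Recompute RMSEA from the reported fit statistics (illustrative check only).
      from math import sqrt

      def rmsea(chi2, df, n):
          return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

      print(round(rmsea(42.0, 29, 469), 3))   # ~0.031, matching the reported value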

  2. Experimental validation of different modeling approaches for solid particle receivers.

    SciTech Connect

    Khalsa, Siri Sahib S.; Amsbeck, Lars , Spain and Stuttgart, Germany); Roger, Marc , Spain and Stuttgart, Germany); Siegel, Nathan Phillip; Kolb, Gregory J.; Buck, Reiner , Spain and Stuttgart, Germany); Ho, Clifford Kuofei

    2009-07-01

    Solid particle receivers have the potential to provide high-temperature heat for advanced power cycles, thermochemical processes, and thermal storage via direct particle absorption of concentrated solar energy. This paper presents two different models to evaluate the performance of these systems. One model is a detailed computational fluid dynamics model using FLUENT that includes irradiation from the concentrated solar flux, two-band re-radiation and emission within the cavity, discrete-phase particle transport and heat transfer, gas-phase convection, wall conduction, and radiative and convective heat losses. The second model is an easy-to-use and fast simulation code using Matlab that includes solar and thermal radiation exchange between the particle curtain, cavity walls, and aperture, but neglects convection. Both models were compared to unheated particle flow tests and to on-sun heating tests. Comparisons between measured and simulated particle velocities, opacity, particle volume fractions, particle temperatures, and thermal efficiencies were found to be in good agreement. Sensitivity studies were also performed with the models to identify parameters and modifications to improve the performance of the solid particle receiver.

  3. Material model validation for laser shock peening process simulation

    NASA Astrophysics Data System (ADS)

    Amarchinta, H. K.; Grandhi, R. V.; Langer, K.; Stargel, D. S.

    2009-01-01

    Advanced mechanical surface enhancement techniques have been used successfully to increase the fatigue life of metallic components. These techniques impart deep compressive residual stresses into the component to counter potentially damage-inducing tensile stresses generated under service loading. Laser shock peening (LSP) is an advanced mechanical surface enhancement technique used predominantly in the aircraft industry. To reduce costs and make the technique available on a large-scale basis for industrial applications, simulation of the LSP process is required. Accurate simulation of the LSP process is a challenging task, because the process has many parameters such as laser spot size, pressure profile and material model that must be precisely determined. This work focuses on investigating the appropriate material model that could be used in simulation and design. In the LSP process the material is subjected to strain rates of 10⁶ s⁻¹, which is very high compared with conventional strain rates. The importance of an accurate material model increases because the material behaves significantly differently at such high strain rates. This work investigates the effect of multiple nonlinear material models for representing the elastic-plastic behavior of materials. Elastic perfectly plastic, Johnson-Cook and Zerilli-Armstrong models are used, and the performance of each model is compared with available experimental results.
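
    As an example of one of the constitutive laws compared in the study, the sketch below evaluates the standard Johnson-Cook flow stress, sigma = (A + B eps^n)(1 + C ln(rate/rate0))(1 − T*^m); the parameter values are placeholders, not those used by the authors.

      # Johnson-Cook flow-stress relation (illustrative parameter values only).
      import numpy as np

      def johnson_cook(eps_p, rate, T, A, B, n, C, m,
                       rate0=1.0, T_room=293.0, T_melt=1800.0):
          T_star = (T - T_room) / (T_melt - T_room)      # homologous temperature
          return (A + B*eps_p**n) * (1.0 + C*np.log(rate/rate0)) * (1.0 - T_star**m)

      # Flow stress (Pa) at 5% plastic strain and a strain rate of 1e6 1/s:
      sigma = johnson_cook(0.05, 1.0e6, 350.0,
                           A=350e6, B=275e6, n=0.36, C=0.022, m=1.0)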

  4. Validation status of the VARGOW oil reservoir model

    SciTech Connect

    Mayer, D.W.; Arnold, E.M.; Bowen, W.M.; Gutknecht, P.J.

    1980-10-01

    VARGOW, a variable gas-oil-water reservoir model, provides recovery estimates suitable for assessing various reservoir production policies and regulations. Data were collected for a number of reservoirs. From this data base, three reservoirs approximating the model assumptions were selected for model testing purposes. For all three reservoirs, it has been possible to simulate the observed pressures in both interpolative and extrapolative modes. Simulating the gas/oil ratio (GOR) has not been as successful, however. The VARGOW model will predict physically unrealistic results if the reservoir being simulated is not initially at the bubble point pressure of the reservoir fluid. If the discovery pressure is slightly above the bubble point, adjustments to initial conditions can be made using a method that has been outlined in this report. If the discovery pressure is considerably above the bubble point, it is recommended that an undersaturated reservoir model be employed until the bubble point is reached. For simulating reservoirs whose discovery pressure is below the bubble point, the VARGOW model must be modified.

  5. Electromagnetic scattering from grassland Part II: Measurement and modeling results

    E-print Network

    Stiles, James Marion; Ulaby, F. T.; Sarabandi, K.

    2000-01-01

    Excerpt (IEEE Transactions on Geoscience and Remote Sensing, vol. 38, no. 1, January 2000): the measurement area included portions of about six rows of wheat plants. The backscattering estimate is poor for these data (i.e., the data are noisy), because the extreme dependence on both azimuth and elevation angle reduces the number of independent samples.

  6. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting

    PubMed Central

    2014-01-01

    Background Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models and predictive performance measures. Results 11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration, whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported, with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling and acknowledgement of missing data, and omission of calibration, one of the key performance measures of prediction models, from the publication. It may therefore not be surprising that an overwhelming majority of developed prediction models are not used in practice, when there is a dearth of well-conducted and clearly reported (external validation) studies describing their performance on independent participant data. PMID:24645774
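
    For readers unfamiliar with the two performance measures discussed above, the hedged sketch below computes discrimination (the c-statistic, i.e. ROC AUC) and a calibration slope for a set of predicted risks; the data are synthetic and the implementation is only one of several reasonable choices (Python, scikit-learn).

      # Discrimination and calibration of predicted risks on synthetic validation data.
      import numpy as np
      from sklearn.metrics import roc_auc_score
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      predicted_risk = rng.uniform(0.05, 0.95, size=500)
      y_true = rng.binomial(1, predicted_risk)          # outcomes generated to be well calibrated

      auc = roc_auc_score(y_true, predicted_risk)       # discrimination (c-statistic)

      # Calibration slope: regress outcomes on the linear predictor (logit of risk);
      # a slope near 1 indicates good calibration. Large C avoids shrinkage of the slope.
      logit = np.log(predicted_risk / (1.0 - predicted_risk)).reshape(-1, 1)
      slope = LogisticRegression(C=1e6).fit(logit, y_true).coef_[0][0]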

  7. Development and validation of a tokamak skin effect transformer model

    NASA Astrophysics Data System (ADS)

    Romero, J. A.; Moret, J.-M.; Coda, S.; Felici, F.; Garrido, I.

    2012-02-01

    A lumped parameter, state space model for a tokamak transformer including the slow flux penetration in the plasma (skin effect transformer model) is presented. The model does not require detailed or explicit information about plasma profiles or geometry. Instead, this information is lumped in system variables, parameters and inputs. The model has an exact mathematical structure built from energy and flux conservation theorems, predicting the evolution and non-linear interaction of plasma current and internal inductance as functions of the primary coil currents, plasma resistance, non-inductive current drive and the loop voltage at a specific location inside the plasma (equilibrium loop voltage). Loop voltage profile in the plasma is substituted by a three-point discretization, and ordinary differential equations are used to predict the equilibrium loop voltage as a function of the boundary and resistive loop voltages. This provides a model for equilibrium loop voltage evolution, which is reminiscent of the skin effect. The order and parameters of this differential equation are determined empirically using system identification techniques. Fast plasma current modulation experiments with random binary signals have been conducted in the TCV tokamak to generate the required data for the analysis. Plasma current was modulated under ohmic conditions between 200 and 300 kA with 30 ms rise time, several times faster than its time constant L/R ≈ 200 ms. A second-order linear differential equation for equilibrium loop voltage is sufficient to describe the plasma current and internal inductance modulation with 70% and 38% fit parameters, respectively. The model explains the most salient features of the plasma current transients, such as the inverse correlation between plasma current ramp rates and internal inductance changes, without requiring detailed or explicit information about resistivity profiles. This proves that a lumped parameter modelling approach can be used to predict the time evolution of bulk plasma properties such as plasma inductance or current with reasonable accuracy; at least under ohmic conditions without external heating and current drive sources.

  8. Validation of Travel-Time based Nonlinear Bioreactive Transport Models under Flow and Transport Dynamics

    NASA Astrophysics Data System (ADS)

    Sanz Prat, A.; Lu, C.; Cirpka, O. A.

    2014-12-01

    Travel-time based models are presented as an alternative to traditional spatially explicit models to solve nonlinear reactive-transport problems. The main advantage of the travel-time approach is that it does not require multi-dimensional characterization of physical and chemical parameters, and transport is one-dimensional. Spatial dimensions are replaced by groundwater travel time, defined as the time required by a water particle to reach an observation point or the outflow boundary, respectively. The fundamental hypothesis is that locations of the same groundwater age exhibit the same reactive-species concentrations. This is true in strictly advective-reactive transport in steady-state flows if the coefficients of reactions are uniform and the concentration is uniform over the inflow boundary. We hypothesize that the assumption still holds when adding some dispersion in coupled flow and transport dynamics. We compare a two-dimensional, spatially explicit, bioreactive, advective-dispersive transport model, considered as "virtual truth", with three 1-D travel-time based models which differ by the conceptualization of longitudinal dispersion: (i) neglecting dispersive mixing altogether, (ii) introducing a local-scale longitudinal dispersivity constant in time and space, and (iii) using an effective longitudinal dispersivity that increases linearly with distance. We consider biodegradation of organic matter catalyzed by non-competitive inhibitive microbial populations. The simulated inflow contains oxygen, nitrate, and DOC. The domain contains growing aerobic and denitrifying bacteria, the latter being inhibited by oxygen. This system is computed in 1-D, and in 2-D heterogeneous domains. We conclude that the conceptualization of nonlinear bioreactive transport in complex multi-dimensional domains by quasi 1-D travel-time models is valid for steady-state flow if the reactants are introduced over a wide cross-section, flow is at quasi-steady state, and dispersive mixing is adequately parameterized. First results considering diurnal fluctuations of the flow rate in a uniform velocity field indicate that the concentrations match those of the estimated "virtual truth" when compared with respect to the average travel time.

  9. On the validity of travel-time based nonlinear bioreactive transport models in steady-state flow

    NASA Astrophysics Data System (ADS)

    Sanz-Prat, Alicia; Lu, Chuanhe; Finkel, Michael; Cirpka, Olaf A.

    2015-04-01

    Travel-time based models simplify the description of reactive transport by replacing the spatial coordinates with the groundwater travel time, posing a quasi one-dimensional (1-D) problem and potentially rendering the determination of multidimensional parameter fields unnecessary. While the approach is exact for strictly advective transport in steady-state flow if the reactive properties of the porous medium are uniform, its validity is unclear when local-scale mixing affects the reactive behavior. We compare a two-dimensional (2-D), spatially explicit, bioreactive, advective-dispersive transport model, considered as "virtual truth", with three 1-D travel-time based models which differ in the conceptualization of longitudinal dispersion: (i) neglecting dispersive mixing altogether, (ii) introducing a local-scale longitudinal dispersivity constant in time and space, and (iii) using an effective longitudinal dispersivity that increases linearly with distance. The reactive system considers biodegradation of dissolved organic carbon, which is introduced into a hydraulically heterogeneous domain together with oxygen and nitrate. Aerobic and denitrifying bacteria use the energy of the microbial transformations for growth. We analyze six scenarios differing in the variance of log-hydraulic conductivity and in the inflow boundary conditions (constant versus time-varying concentration). The concentrations of the 1-D models are mapped to the 2-D domain by means of the kinematic age (for case i) and the mean groundwater age (for cases ii and iii), respectively. The comparison between concentrations of the "virtual truth" and the 1-D approaches indicates extremely good agreement when using an effective, linearly increasing longitudinal dispersivity in the majority of the scenarios, while the other two 1-D approaches reproduce at least the concentration tendencies well. At late times, all 1-D models give valid approximations of two-dimensional transport. We conclude that the conceptualization of nonlinear bioreactive transport in complex multidimensional domains by quasi 1-D travel-time models is valid for steady-state flow fields if the reactants are introduced over a wide cross-section, flow is at quasi steady state, and dispersive mixing is adequately parametrized.
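
    A minimal caricature of the travel-time idea (far simpler than the coupled multi-species system in the paper): for strictly advective transport with first-order decay, concentration depends only on the groundwater travel time, C(tau) = C0 exp(−k tau), and can be mapped back onto a multidimensional domain through a field of mean groundwater age. The grid and rate constant below are hypothetical.

      # Travel-time parameterization of a reactive solute (illustrative only).
      import numpy as np

      def concentration_vs_travel_time(tau, c0=1.0, k=0.05):
          """First-order decay expressed in the travel-time coordinate (tau in days)."""
          return c0 * np.exp(-k * tau)

      age_field = np.array([[0.0, 5.0, 12.0],          # hypothetical mean groundwater age (days)
                            [2.0, 8.0, 20.0]])         # on a coarse 2-D grid
      c_field = concentration_vs_travel_time(age_field) # concentrations mapped to the 2-D domain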

  10. Validation of road vehicle and traffic emission models - A review and meta-analysis

    NASA Astrophysics Data System (ADS)

    Smit, Robin; Ntziachristos, Leonidas; Boulter, Paul

    2010-08-01

    Road transport is often the main source of air pollution in urban areas, and there is an increasing need to estimate its contribution precisely so that pollution-reduction measures (e.g. emission standards, scrappage programs, traffic management, ITS) are designed and implemented appropriately. This paper presents a meta-analysis of 50 studies dealing with the validation of various types of traffic emission model, including 'average speed', 'traffic situation', 'traffic variable', 'cycle variable', and 'modal' models. The validation studies employ measurements in tunnels, ambient concentration measurements, remote sensing, laboratory tests, and mass-balance techniques. One major finding of the analysis is that several models are only partially validated or not validated at all. The mean prediction errors are generally within a factor of 1.3 of the observed values for CO2, within a factor of 2 for HC and NOx, and within a factor of 3 for CO and PM, although differences as high as a factor of 5 have been reported. A positive mean prediction error for NOx (i.e. overestimation) was established for all model types and practically all validation techniques. In the case of HC, model predictions have been moving from underestimation to overestimation since the 1980s. The large prediction error for PM may be associated with different PM definitions between models and observations (e.g. size, measurement principle, exhaust/non-exhaust contribution). Statistical analyses show that the mean prediction error is generally not significantly different (p < 0.05) when the data are categorised according to model type or validation technique. Thus, there is no conclusive evidence that demonstrates that more complex models systematically perform better in terms of prediction error than less complex models. In fact, less complex models appear to perform better for PM. Moreover, the choice of validation technique does not systematically affect the result, with the exception of a CO underprediction when the validation is based on ambient concentration measurements and inverse modelling. The analysis identified two vital elements currently lacking in traffic emissions modelling: 1) guidance on the allowable error margins for different applications/scales, and 2) estimates of prediction errors. It is recommended that current and future emission models incorporate the capability to quantify prediction errors, and that clear guidelines are developed internationally with respect to expected accuracy.
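
    The "within a factor of N" statements above can be read as a simple ratio criterion on mean predicted versus mean observed emissions, as in the short sketch below (hypothetical values; not the meta-analysis code).

      # "Within a factor of N" agreement check (illustrative only).
      import numpy as np

      def within_factor(predicted, observed, n):
          ratio = np.mean(predicted) / np.mean(observed)
          return ratio, (1.0 / n) <= ratio <= n

      ratio, ok = within_factor(np.array([2.1, 1.8, 2.5]),   # predicted emissions
                                np.array([1.2, 1.0, 1.4]),   # observed emissions
                                n=2)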

  11. Atomic Data and Spectral Model for Fe II

    NASA Astrophysics Data System (ADS)

    Bautista, Manuel A.; Fivet, Vanessa; Ballance, Connor; Quinet, Pascal; Ferland, Gary; Mendoza, Claudio; Kallman, Timothy R.

    2015-08-01

    We present extensive calculations of radiative transition rates and electron impact collision strengths for Fe ii. The data sets involve 52 levels from the 3d⁷, 3d⁶4s, and 3d⁵4s² configurations. Computations of A-values are carried out with a combination of state-of-the-art multiconfiguration approaches, namely the relativistic Hartree-Fock, Thomas-Fermi-Dirac potential, and Dirac-Fock methods, while the R-matrix plus intermediate coupling frame transformation, Breit-Pauli R-matrix, and Dirac R-matrix packages are used to obtain collision strengths. We examine the advantages and shortcomings of each of these methods, and estimate rate uncertainties from the resulting data dispersion. We proceed to construct excitation balance spectral models, and compare the predictions from each data set with observed spectra from various astronomical objects. We are thus able to establish benchmarks in the spectral modeling of [Fe ii] emission in the IR and optical regions as well as in the UV Fe ii absorption spectra. Finally, we provide diagnostic line ratios and line emissivities for emission spectroscopy as well as column densities for absorption spectroscopy. All atomic data and models are available online and through the AtomPy atomic data curation environment.

  12. Validation of a regional Indonesian Seas model based on a comparison between model and INSTANT transports

    NASA Astrophysics Data System (ADS)

    Rosenfield, D.; Kamenkovich, V.; O'Driscoll, K.; Sprintall, J.

    2010-08-01

    The International Nusantara Stratification and Transport (INSTANT) program measured currents through multiple Indonesian Seas passages simultaneously over a three-year period (from January 2004 to December 2006). The Indonesian Seas region has presented numerous challenges for numerical modelers — the Indonesian Throughflow (ITF) must pass over shallow sills, into deep basins, and through narrow constrictions on its way from the Pacific to the Indian Ocean. As an important region in the global climate puzzle, a number of models have been used to try and best simulate this throughflow. In an attempt to validate our model, we present a comparison between the transports calculated from our model and those calculated from the INSTANT in situ measurements at five passages within the Indonesian Seas (Labani Channel, Lifamatola Passage, Lombok Strait, Ombai Strait, and Timor Passage). Our Princeton Ocean Model (POM) based regional Indonesian Seas model was originally developed to analyze the influence of bottom topography on the temperature and salinity distributions in the Indonesian seas region, to disclose the path of the South Pacific Water from the continuation of the New Guinea Coastal Current entering the region of interest up to the Lifamatola Passage, and to assess the role of the pressure head in driving the ITF and in determining its total transport. Previous studies found that this model reasonably represents the general long-term flow (seasons) through this region. The INSTANT transports were compared to the results of this regional model over multiple timescales. Overall trends are somewhat represented but changes on timescales shorter than seasonal (three months) and longer than annual were not considered in our model. Normal velocities through each passage during every season are plotted. Daily volume transports and transport-weighted temperature and salinity are plotted and seasonal averages are tabulated.

  13. Root zone water quality model (RZWQM2): Model use, calibration and validation

    USGS Publications Warehouse

    Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.

    2012-01-01

    The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.

  14. Validating and Applying Numerical Models for Current Energy Capture Devices

    NASA Astrophysics Data System (ADS)

    Hirlinger, C. Y.; James, S. C.; Cardenas, M. P.

    2014-12-01

    With the growing focus on renewable energy, there is increased interest in modeling and optimizing current energy capture (CEC) devices. The interaction of multiple wakes from CEC devices can affect optimal placement strategy, and issues of environmental impacts on sediment transport and large-scale flow should be examined. Numerical models of four flume-scale experiments were built using Sandia National Laboratories' Environmental Fluid Dynamics Code (SNL-EFDC). Model predictions were calibrated against measured velocities to estimate flow and turbine parameters. The velocity deficit was most sensitive to the dimensionless Smagorinsky constant related to horizontal momentum diffusion, and to CPB, the dimensionless partial blockage coefficient accounting for the physical displacement of fluid due to turbine blockage. Calibration to four data sets showed the Smagorinsky constant ranged from 0.3 to 1.0 while CPB ranged from 40 to 300. Furthermore, results of parameter estimation indicated centerline velocity data were insufficient to uniquely identify the turbulence, flow, and device parameters; cross-channel velocity measurements at multiple locations downstream yielded important calibration information and it is likely that vertical velocity profiles would also be useful to the calibration effort. In addition to flume-scale models, a full-scale implementation of a CEC device at Roza Canal in Yakima, WA was developed. The model was analyzed to find an appropriate grid size and to understand the sensitivity of downstream velocity profiles to horizontal momentum diffusion and partial blockage coefficients. Preliminary results generally showed that as CPB increased the wake was enhanced vertically.

  15. A Hardware Model Validation Tool for Use in Complex Space Systems

    NASA Technical Reports Server (NTRS)

    Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.

    2010-01-01

    One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.

  16. Updated Delft Mass Transport model DMT-2: computation and validation

    NASA Astrophysics Data System (ADS)

    Hashemi Farahani, Hassan; Ditmar, Pavel; Inacio, Pedro; Klees, Roland; Guo, Jing; Guo, Xiang; Liu, Xianglin; Zhao, Qile; Didova, Olga; Ran, Jiangjun; Sun, Yu; Tangdamrongsub, Natthachet; Gunter, Brian; Riva, Ricardo; Steele-Dunne, Susan

    2014-05-01

    A number of research centers compute models of mass transport in the Earth's system using primarily K-Band Ranging (KBR) data from the Gravity Recovery And Climate Experiment (GRACE) satellite mission. These models typically consist of a time series of monthly solutions, each of which is defined in terms of a set of spherical harmonic coefficients up to degree 60-120. One such model, the Delft Mass Transport, release 2 (DMT-2), is computed at the Delft University of Technology (The Netherlands) in collaboration with Wuhan University. An updated variant of this model has been produced recently. A unique feature of the computational scheme designed to compute DMT-2 is the preparation of an accurate stochastic description of data noise in the frequency domain using an Auto-Regressive Moving-Average (ARMA) model, which is derived for each particular month. The benefits of such an approach are a proper frequency-dependent data weighting in the data inversion and an accurate variance-covariance matrix of noise in the estimated spherical harmonic coefficients. Furthermore, the data prior to the inversion are subject to an advanced high-pass filtering, which makes use of a spatially-dependent weighting scheme, so that noise is primarily estimated on the basis of data collected over areas with minor mass transport signals (e.g., oceans). On the one hand, this procedure efficiently suppresses noise caused by inaccuracies in satellite orbits and, on the other hand, preserves mass transport signals in the data. Finally, the unconstrained monthly solutions are filtered using a Wiener filter, which is based on estimates of the signal and noise variance-covariance matrices. In combination with a proper data weighting, this noticeably improves the spatial resolution of the monthly gravity models and the associated mass transport models. For instance, the computed solutions allow long-term negative trends to be clearly seen in sufficiently small regions notorious for rapid mass transport losses, such as the Kangerdlugssuaq and Jakobshavn glaciers in the Greenland ice sheet, as well as the Aral Sea in Central Asia. The updated variant of DMT-2 has been extensively tested and compared with alternative models. A number of regions/processes have been considered for that purpose. In particular, this model has been applied to estimate mass variations in Greenland and Antarctica (both total and for individual ice drainage systems), as well as to improve a hydrological model of the Rhine River basin. Furthermore, a time-series of degree-1 coefficients has been derived from the DMT-2 model using the method of Swenson et al. (2008). The obtained results are in good agreement both with alternative GRACE-based models and with independent data, which confirms the high quality of the updated variant of DMT-2.
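
    As an illustration of the ARMA-based noise description mentioned above (not the DMT-2 processing chain itself), the sketch below fits a low-order ARMA model to a synthetic correlated-noise series with statsmodels; the series and model order are placeholders.

      # Fit an ARMA(2,1) model to a synthetic residual-noise series.
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(2)
      white = rng.standard_normal(2000)
      colored = np.convolve(white, [1.0, 0.6, 0.3], mode="same")   # synthetic correlated noise

      result = ARIMA(colored, order=(2, 0, 1)).fit()   # ARMA(2,1): no differencing
      print(result.params)                             # AR, MA and variance estimates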

  17. Modelling and validation of magnetorheological brake responses using parametric approach

    NASA Astrophysics Data System (ADS)

    Zainordin, A. Z.; Abdullah, M. A.; Hudha, K.

    2013-12-01

    A magnetorheological brake (MR brake) is one of the x-by-wire systems that perform better than conventional brake systems. An MR brake consists of a rotating disc immersed in magnetorheological fluid (MR fluid) within an enclosure of an electromagnetic coil. The applied magnetic field increases the yield strength of the MR fluid, which is used to decrease the speed of the rotating shaft. The purpose of this paper is to develop a mathematical model to represent the MR brake with a test rig. The MR brake model is developed based on the actual torque characteristic and is coupled with the motion of a test rig. Next, experiments are performed using the MR brake test rig to obtain three output responses: the angular velocity response, the torque response, and the load displacement response. Furthermore, the MR brake was subjected to various currents. Finally, the simulation results of the MR brake model are verified against the experimental results.

  18. Shuttle Space Suit: Fabric/LCVG Model Validation. Chapter 8

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tweed, J.; Zeitlin, C.; Kim, M.-H. Y.; Anderson, B. M.; Cucinotta, F. A.; Ware, J.; Persans, A. E.

    2003-01-01

    A detailed space suit computational model is being developed at the Langley Research Center for radiation exposure evaluation studies. The details of the construction of the space suit are critical to estimating exposures and assessing the risk to the astronaut on EVA. Past evaluations of space suit shielding properties assumed the basic fabric layup (Thermal Micrometeoroid Garment, fabric restraints, and pressure envelope) and LCVG could be homogenized as a single layer, overestimating the protective properties over 60 percent of the fabric area. The present space suit model represents the inhomogeneous distributions of LCVG materials (mainly the water filled cooling tubes). An experimental test is performed using a 34-MeV proton beam and high-resolution detectors to compare with model-predicted transmission factors. Some suggestions are made on possible improved construction methods to improve the space suit's protection properties.

  19. Survey Instrument Validity Part II: Validation of a Survey Instrument Examining Athletic Trainers' Knowledge and Practice Beliefs Regarding Exertional Heat Stroke

    ERIC Educational Resources Information Center

    Burton, Laura J.; Mazerolle, Stephanie M.

    2011-01-01

    Objective: The purpose of this article is to discuss the process of developing and validating an instrument to investigate an athletic trainer's attitudes and behaviors regarding the recognition and treatment of exertional heat stroke. Background: Following up from our initial paper, which discussed the process of survey instrument design and…

  20. Leading for quality in healthcare: development and validation of a competency model.

    PubMed

    Garman, Andrew; Scribner, Linda

    2011-01-01

    Increased attention to healthcare quality and impending changes due to health reform are calling for healthcare leaders at all levels to strengthen their skills in leading quality improvement initiatives. To address this need, the National Association for Healthcare Quality spearheaded the development and validation of a competency model to support healthcare leaders in assessing their strengths and planning appropriate steps for development. Initial development took place over the course of several days of meetings by an advisory panel of quality professionals. The draft model was then validated via electronic survey of a national sample of 883 quality professionals. Follow-up analyses indicated that the model was content valid for each of the target samples and also distinguished differing levels of job scope and experience. The resulting model contains six domains spanning three organizational levels. PMID:22201200

  1. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued to correct some prior limitations and improve control of propagated errors, along with established code verification processes. Code validation processes will use new/improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures using measured data aboard ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  2. Experimental testing procedures and dynamic model validation for vanadium redox flow battery storage system

    NASA Astrophysics Data System (ADS)

    Baccino, Francesco; Marinelli, Mattia; Nørgård, Per; Silvestro, Federico

    2014-05-01

    The paper aims at characterizing the electrochemical and thermal parameters of a 15 kW/320 kWh vanadium redox flow battery (VRB) installed in the SYSLAB test facility of the DTU Risø Campus and experimentally validating the proposed dynamic model realized in Matlab-Simulink. The adopted testing procedure consists of analyzing the voltage and current values during a power reference step-response and evaluating the relevant electrochemical parameters such as the internal resistance. The results of different tests are presented and used to define the electrical characteristics and the overall efficiency of the battery system. The test procedure has general validity and could also be used for other storage technologies. The storage model proposed and described here is suitable for electrical studies and has general validity. Finally, the model simulation outputs are compared with experimental measurements during a discharge-charge sequence.
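
    The step-response characterization described above boils down, in its simplest form, to estimating the internal resistance from the instantaneous voltage change that accompanies a current step, R_int = dV/dI; the numbers in the sketch below are hypothetical, not measurements from the 15 kW/320 kWh unit.

      # Internal resistance from a current step response (illustrative values only).
      def internal_resistance(v_before, v_after, i_before, i_after):
          """R_int = dV / dI across a step change in battery current."""
          return (v_after - v_before) / (i_after - i_before)

      r_int = internal_resistance(v_before=52.0, v_after=50.8,
                                  i_before=0.0, i_after=30.0)
      # r_int = -0.04 ohm; the sign reflects the voltage drop under discharge current.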

  3. CFD Modeling Needs and What Makes a Good Supersonic Combustion Validation Experiment

    NASA Technical Reports Server (NTRS)

    Gaffney, Richard L., Jr.; Cutler, Andrew D.

    2005-01-01

    If a CFD code/model developer is asked what experimental data he wants to validate his code or numerical model, his answer will be: "Everything, everywhere, at all times." Since this is not possible, practical, or even reasonable, the developer must understand what can be measured within the limits imposed by the test article, the test location, the test environment and the available diagnostic equipment. At the same time, it is important for the experimentalist/diagnostician to understand what the CFD developer needs (as opposed to wants) in order to conduct a useful CFD validation experiment. If these needs are not known, it is possible to neglect easily measured quantities at locations needed by the developer, rendering the data set useless for validation purposes. It is also important for the experimentalist/diagnostician to understand what the developer is trying to validate so that the experiment can be designed to isolate (as much as possible) the effects of a particular physical phenomenon that is associated with the model to be validated. The probability of a successful validation experiment can be greatly increased if the two groups work together, each understanding the needs and limitations of the other.

  4. Modeling HCCI using CFD and Detailed Chemistry with Experimental Validation and a Focus on CO Emissions

    SciTech Connect

    Hessel, R; Foster, D; Aceves, S; Flowers, D; Pitz, B; Dec, J; Sjoberg, M; Babajimopoulos, A

    2007-04-23

    Multi-zone CFD simulations with detailed kinetics were used to model engine experiments performed on a diesel engine that was converted for single cylinder, HCCI operation, here using iso-octane as the fuel. The modeling goals were to validate the method (multi-zone combustion modeling) and the reaction mechanism (LLNL 857 species iso-octane), both of which performed very well. The purpose of this paper is to document the validation findings and to set the ground work for further analysis of the results by first looking at CO emissions characteristics with varying equivalence ratio.

  5. Finite State Machines and Modal Models in Ptolemy II Edward A. Lee

    E-print Network

    This report describes the usage and semantics of finite-state machines (FSMs) and modal models in Ptolemy II.

  6. The role of global cloud climatologies in validating numerical models

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN

    1993-01-01

    The purpose of this work is to estimate sampling errors of area-time averaged rain rate due to temporal samplings by satellites. In particular, the sampling errors of the proposed low inclination orbit satellite of the Tropical Rainfall Measuring Mission (TRMM) (35 deg inclination and 350 km altitude), one of the sun synchronous polar orbiting satellites of NOAA series (98.89 deg inclination and 833 km altitude), and two simultaneous sun synchronous polar orbiting satellites--assumed to carry a perfect passive microwave sensor for direct rainfall measurements--will be estimated. This estimate is done by performing a study of the satellite orbits and the autocovariance function of the area-averaged rain rate time series. A model based on an exponential fit of the autocovariance function is used for actual calculations. Varying visiting intervals and total coverage of averaging area on each visit by the satellites are taken into account in the model. The data are generated by a General Circulation Model (GCM). The model has a diurnal cycle and parameterized convective processes. A special run of the GCM was made at NASA/GSFC in which the rainfall and precipitable water fields were retained globally for every hour of the run for the whole year.
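
    The exponential fit of the autocovariance function mentioned above can be set up along the lines of the sketch below; the lag and autocovariance values are hypothetical, not the GCM output used in the study.

      # Fit an exponential model to an (illustrative) autocovariance of area-averaged rain rate.
      import numpy as np
      from scipy.optimize import curve_fit

      def exp_acov(lag, variance, tau):
          return variance * np.exp(-lag / tau)

      lags = np.array([0.0, 1.0, 2.0, 3.0, 6.0, 12.0])        # lag (hours)
      acov = np.array([1.00, 0.78, 0.61, 0.47, 0.22, 0.05])   # normalized autocovariance
      (variance_hat, tau_hat), _ = curve_fit(exp_acov, lags, acov, p0=[1.0, 3.0])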

  7. Toward Validation of the Genius Discipline-Specific Literacy Model

    ERIC Educational Resources Information Center

    Ellis, Edwin S.; Wills, Stephen; Deshler, Donald D.

    2011-01-01

    An analysis of the rationale and theoretical foundations of the Genius Discipline-specific Literacy Model and its use of SMARTvisuals to cue information-processing skills and strategies and focus attention on essential informational elements in high-frequency topics in history and the English language arts are presented. Quantitative data…

  8. DEVELOPMENT AND VALIDATION OF A MECHANISTIC GROUND SPRAYER MODEL

    EPA Science Inventory

    In the last ten years the Spray Drift Task Force (SDTF), U.S. Environmental Protection Agency (EPA), USDA Agricultural Research Service, and USDA Forest Service cooperated in the refinement and evaluation of a mechanistically-based aerial spray model (contained within AGDISP and ...

  9. Intellectual Competence and Academic Performance: Preliminary Validation of a Model

    ERIC Educational Resources Information Center

    Chamorro-Premuzic, Tomas; Arteche, Adriane

    2008-01-01

    The present study provides a preliminary empirical test of [Chamorro-Premuzic, T., & Furnham, A. (2004). A possible model to understand the personality-intelligence interface. "British Journal of Psychology," 95, 249-264], [Chamorro-Premuzic, T., & Furnham, A. (2006a). Intellectual competence and the intelligent personality: A third way in…

  10. FIELD VALIDATION OF EXPOSURE ASSESSMENT MODELS. VOLUME 1. DATA

    EPA Science Inventory

    This is the first of two volumes describing work done to evaluate the PAL-DS model, a Gaussian diffusion code modified to account for dry deposition and settling. This first volume describes the experimental techniques employed to dispense, collect, and measure depositing (zinc s...

  11. Validation of multivariate model of leaf ionome is fundamentally confounded.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The multivariable signature model reported by Baxter et al. (1) to predict Fe and P homeostasis in Arabidopsis is fundamentally flawed for two reasons: 1) The initial experiments identified a correlation between trace metal (Mn, Co, Zn, Mo, Cd) signature and “Fe-deficiency,” which was used to train ...

  12. FACES IV and the Circumplex Model: Validation Study

    ERIC Educational Resources Information Center

    Olson, David

    2011-01-01

    Family Adaptability and Cohesion Evaluation Scale (FACES) IV was developed to tap the full continuum of the cohesion and flexibility dimensions from the Circumplex Model of Marital and Family Systems. Six scales were developed, with two balanced scales and four unbalanced scales designed to tap low and high cohesion (disengaged and enmeshed) and…

  13. Elasto-dynamic analysis of a gear pump-Part III: Experimental validation procedure and model extension to helical gears

    NASA Astrophysics Data System (ADS)

    Mucchi, E.; Dalpiaz, G.

    2015-01-01

    This work concerns external gear pumps for automotive applications, which operate at high speed and low pressure. In previous works of the authors (Part I and II, [1,2]), a non-linear lumped-parameter kineto-elastodynamic model for the prediction of the dynamic behaviour of external gear pumps was presented. It takes into account the most important phenomena involved in the operation of this kind of machine. The two main sources of noise and vibration are considered: pressure pulsation and gear meshing. The model has been used in order to foresee the influence of working conditions and design modifications on vibration generation. The model's experimental validation is a difficult task. Thus, Part III proposes a novel methodology for the validation carried out by the comparison of simulations and experimental results concerning forces and moments: it deals with the external and inertial components acting on the gears, estimated by the model, and the reactions and inertial components on the pump casing and the test plate, obtained by measurements. The validation is carried out by comparing the level of the time synchronous average in the time domain and the waterfall maps in the frequency domain, with particular attention to identifying system resonances. The validation results are satisfactory globally, but discrepancies are still present. Moreover, the assessed model has been properly modified for the application to a new virtual pump prototype with helical gears in order to foresee gear accelerations and dynamic forces. Part IV is focused on improvements in the modelling and analysis of the phenomena bound to the pressure evolution around the gears in order to achieve results closer to the measured values. As a matter of fact, the simulation results have shown that a variable meshing stiffness has a notable contribution to the dynamic behaviour of the pump but this is not as important as the pressure phenomena. As a consequence, the original model was modified with the aim of improving the calculation of pressure forces and torques. The improved pressure formulation includes several phenomena not considered in the previous one, such as the variable pressure evolution at input and output ports, as well as an accurate description of the trapped volume and its connections with high and low pressure chambers. The importance of these improvements is highlighted by comparison with experimental results, showing satisfactory matching.

  14. Flexible Programmes in Higher Professional Education: Expert Validation of a Flexible Educational Model

    ERIC Educational Resources Information Center

    Schellekens, Ad; Paas, Fred; Verbraeck, Alexander; van Merrienboer, Jeroen J. G.

    2010-01-01

    In a preceding case study, a process-focused demand-driven approach for organising flexible educational programmes in higher professional education (HPE) was developed. Operations management and instructional design contributed to designing a flexible educational model by means of discrete-event simulation. Educational experts validated the model

  15. Validating the FAO AquaCrop model for irrigated and water deficient field maize

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Accurate crop development models are important tools in evaluating the effects of water deficits on crop yield or productivity. The FAO AquaCrop model, predicting crop productivity and water requirement under water-limiting conditions, was calibrated and validated for maize (Zea mays L.) using six ...

  16. A phenomenological model and validation of shortening-induced force depression during muscle contractions

    E-print Network

    McGowan, Craig P.; Neptune, Richard R.; Herzog, Walter (2009). Keywords: history-dependent properties, forward dynamics simulation, muscle model, power.

  17. Epithelial Gaps in a Rodent Model of Inflammatory Bowel Disease: A Quantitative Validation Study

    E-print Network

    Alberta, University of

    Excerpt: epithelial gaps in the small intestine of patients and rodents have been demonstrated, and rodent models have been used to study the permeability of the small intestine to proteins, smaller solutes, and water.

  18. Validity of First-Order Approximations to Describe Parameter Uncertainty in Soil Hydrologic Models

    E-print Network

    Vrugt, Jasper A.

    Excerpt: in this study, the posterior distribution of parameters in soil water retention models is examined; earlier applications of first-order approximations cited include Kool and Parker [1988] in unsaturated soil water flow and Kuczera and Parent [1988] in water hydrology.

  19. Examining the Reliability and Validity of Clinician Ratings on the Five-Factor Model Score Sheet

    ERIC Educational Resources Information Center

    Few, Lauren R.; Miller, Joshua D.; Morse, Jennifer Q.; Yaggi, Kirsten E.; Reynolds, Sarah K.; Pilkonis, Paul A.

    2010-01-01

    Despite substantial research use, measures of the five-factor model (FFM) are infrequently used in clinical settings due, in part, to issues related to administration time and a reluctance to use self-report instruments. The current study examines the reliability and validity of the Five-Factor Model Score Sheet (FFMSS), which is a 30-item…

  20. Development, parameterization, and validation of a visco-plastic material model for sand with different

    E-print Network

    Grujicic, Mica

    Excerpt: the work concerns the development, parameterization, and validation of a visco-plastic material model for sand, building on an elastic visco-plastic material model for sand recently proposed by Tong and Tuan.

  1. Preliminary Validation Using in vivo Measures of a Macroscopic Electrical Model of the Heart

    E-print Network

    Coudière, Yves

    Excerpt: in vivo measurements of cardiac electrical activity in a canine heart are coupled with simulations done using macroscopic models.

  2. Validation of mixed model-regression procedure for association genetics in rice

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Mixed models for association genetics of outcrossing plant species such as maize have been developed recently, but validation of selected markers associated with agronomic traits in different populations has not been extensively studied. Moreover, the mixed models developed for outcrossing may not b...

  3. Aeroservoelastic Model Validation and Test Data Analysis of the F/A-18 Active Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Prazenica, Richard J.

    2003-01-01

    Model validation and flight test data analysis require careful consideration of the effects of uncertainty, noise, and nonlinearity. Uncertainty prevails in the data analysis techniques and results in a composite model uncertainty from unmodeled dynamics, assumptions and mechanics of the estimation procedures, noise, and nonlinearity. A fundamental requirement for reliable and robust model development is an attempt to account for each of these sources of error, in particular, for model validation, robust stability prediction, and flight control system development. This paper is concerned with data processing procedures for uncertainty reduction in model validation for stability estimation and nonlinear identification. F/A-18 Active Aeroelastic Wing (AAW) aircraft data is used to demonstrate signal representation effects on uncertain model development, stability estimation, and nonlinear identification. Data is decomposed using adaptive orthonormal best-basis and wavelet-basis signal decompositions for signal denoising into linear and nonlinear identification algorithms. Nonlinear identification from a wavelet-based Volterra kernel procedure is used to extract nonlinear dynamics from aeroelastic responses, and to assist model development and uncertainty reduction for model validation and stability prediction by removing a class of nonlinearity from the uncertainty.
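
    As a rough illustration of the wavelet-based denoising step described above (not the AAW flight-data pipeline), the sketch below soft-thresholds the detail coefficients of a noisy signal with PyWavelets; the signal, wavelet choice, and threshold rule are all assumptions.

      # Wavelet denoising by soft-thresholding detail coefficients (illustrative only).
      import numpy as np
      import pywt

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 1.0, 1024)
      signal = np.sin(2*np.pi*5*t) + 0.3*rng.standard_normal(t.size)

      coeffs = pywt.wavedec(signal, "db4", level=5)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from finest scale
      thresh = sigma * np.sqrt(2.0 * np.log(signal.size))      # universal threshold
      denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
      denoised = pywt.waverec(denoised_coeffs, "db4")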

  4. The EvacSim Pedestrian Evacuation Agent Model: Development and Validation

    E-print Network

    Sreenan, Cormac J.

    Keywords: evacuation simulation, model evaluation, pedestrian, emergency, real time. Abstract: EvacSim is a multi-agent building evacuation simulation featuring pedestrian occupant agents…

  5. An Empirically Based Method of Q-Matrix Validation for the DINA Model: Development and Applications

    ERIC Educational Resources Information Center

    de la Torre, Jimmy

    2008-01-01

    Most model fit analyses in cognitive diagnosis assume that a Q matrix is correct after it has been constructed, without verifying its appropriateness. Consequently, any model misfit attributable to the Q matrix cannot be addressed and remedied. To address this concern, this paper proposes an empirically based method of validating a Q matrix used…
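
    For readers unfamiliar with the DINA model that the Q matrix feeds, the sketch below evaluates the standard DINA item-response function (a minimal illustration with invented guessing/slipping parameters and attribute profiles; it is not de la Torre's validation procedure itself):

```python
import numpy as np

def dina_prob(alpha, q, guess, slip):
    """P(correct) under DINA: eta = 1 iff the examinee has every attribute
    the item requires; then P = (1 - slip)^eta * guess^(1 - eta)."""
    eta = np.all(alpha >= q, axis=-1).astype(float)
    return (1.0 - slip) ** eta * guess ** (1.0 - eta)

# Hypothetical two-attribute example
q_item = np.array([1, 1])                     # item requires both attributes
alphas = np.array([[0, 0], [1, 0], [1, 1]])   # three attribute profiles
print(dina_prob(alphas, q_item, guess=0.2, slip=0.1))
# -> [0.2 0.2 0.9]: only the fully mastering profile escapes the guessing rate
```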

  6. Cross-Cultural Validation of the Preventive Health Model for Colorectal Cancer Screening: An Australian Study

    ERIC Educational Resources Information Center

    Flight, Ingrid H.; Wilson, Carlene J.; McGillivray, Jane; Myers, Ronald E.

    2010-01-01

    We investigated whether the five-factor structure of the Preventive Health Model for colorectal cancer screening, developed in the United States, has validity in Australia. We also tested extending the model with the addition of the factor Self-Efficacy to Screen using Fecal Occult Blood Test (SESFOBT). Randomly selected men and women aged between…

  7. Repeated holdout Cross-Validation of Model to Estimate Risk of Lyme Disease by Landscape Attributes

    EPA Science Inventory

    We previously modeled Lyme disease (LD) risk at the landscape scale; here we evaluate the model's overall goodness-of-fit using holdout validation. Landscapes were characterized within road-bounded analysis units (AU). Observed LD cases (obsLD) were ascertained per AU. Data were ...
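
    The general mechanics of repeated holdout validation can be sketched as follows (synthetic data and a placeholder Poisson regression stand in for the actual LD risk model; this is not the EPA analysis code):

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_poisson_deviance

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))                            # landscape attributes per analysis unit
y = rng.poisson(np.exp(0.3 * X[:, 0] + 0.2 * X[:, 1]))   # synthetic case counts

scores = []
for seed in range(100):                                  # repeated random holdouts
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
    model = PoissonRegressor().fit(X_tr, y_tr)
    scores.append(mean_poisson_deviance(y_te, model.predict(X_te)))

print("holdout deviance: %.3f +/- %.3f" % (np.mean(scores), np.std(scores)))
```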

  8. Calibration and validation of a non-point source pollution model

    E-print Network

    Grunwald, Sabine

    The objective of this study was to investigate the performance of the Agricultural Non-Point Source pollution model in rainfall-runoff modeling of surface runoff and sediment yield. Keywords: non-point source pollution; rainfall-runoff modeling; surface runoff; sediment yield.
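
    Calibration and validation of rainfall-runoff models are commonly summarized with the Nash-Sutcliffe efficiency; a generic implementation is sketched below with invented runoff values (not tied to this study's data or code):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; values <= 0 mean the model is no better than the mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical event runoff volumes (mm) for a validation period
obs = [12.1, 3.4, 25.0, 8.2, 1.1]
sim = [10.8, 4.0, 22.5, 9.6, 0.7]
print(round(nash_sutcliffe(obs, sim), 3))
```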

  9. Use of Maple Seedling Canopy Reflectance Dataset for Validation of SART/LEAFMOD Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Bond, Barbara J.; Peterson, David L.

    1999-01-01

    This project was a collaborative effort by researchers at ARC, OSU and the University of Arizona. The goal was to use a dataset obtained from a previous study to "empirically validate a new canopy radiative-transfer model (SART) which incorporates a recently-developed leaf-level model (LEAFMOD)". The document includes a short research summary.

  10. Proximal watershed validation of a remote sensing-based streamflow estimation model

    E-print Network

    University of Texas at San Antonio

    This study validates a remote sensing-based streamflow estimation model in four regionally proximate south-central Texas watersheds, including Sandies Creek watershed (1420 km2) and additional proximal watersheds of varying spatial dimension (860 km2 - 2940 km2), soils, and land cover.

  11. FIELD VALIDATION OF THE DNDC MODEL FOR GREENHOUSE GAS EMISSIONS IN EAST ASIA CROPPING SYSTEMS.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Validations of the DeNitrification-DeComposition (DNDC) model against field data sets of trace gases (CH4, N2O and NO) emitted from cropping systems in Japan, China and Thailand were conducted. The model simulated results were in agreement with seasonal N2O emissions from a lowland soil in Japan fro...

  12. An integrated model of the TOPAZ-II electromagnetic pump

    SciTech Connect

    El-Genk, M.S.; Paramonov, D.V. (Inst. of Space Nuclear Power Studies)

    1994-11-01

    A detailed model of the electromagnetic pump of the TOPAZ-II space nuclear reactor power system is developed and compared with experimental data. The magnetic field strength in the pump depends not only on the current supplied by the pump thermionic fuel elements in the reactor core but also on the temperature of the coolant, the magnetic coil, and the pump structure. All electric and thermal properties of the coolant, wall material of the pump ducts, and electric leads are taken to be temperature dependent. The model predictions are in good agreement with experimental data.
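
    As a back-of-the-envelope sketch of why coolant and structure temperatures matter (this is not the El-Genk and Paramonov model): the ideal pressure developed by a DC conduction electromagnetic pump scales as B*I/h, while the current that can be driven depends on temperature-dependent electrical resistivity. All property values and geometry below are placeholders, not TOPAZ-II design values.

```python
def developed_pressure(b_field_t, current_a, duct_height_m):
    """Ideal pressure rise of a DC conduction EM pump (no losses): dP = B * I / h."""
    return b_field_t * current_a / duct_height_m

def resistivity(rho_ref, alpha, temp_c, temp_ref_c=20.0):
    """Linear temperature dependence of electrical resistivity (placeholder coefficients)."""
    return rho_ref * (1.0 + alpha * (temp_c - temp_ref_c))

# Placeholder circuit: a fixed EMF drives current through a path whose
# resistance rises with temperature (made-up liquid-metal-like numbers).
for temp_c in (300.0, 500.0, 700.0):
    rho = resistivity(3.0e-7, 4.0e-4, temp_c)    # ohm*m
    resistance = rho * 1.0 / 1.0e-4              # 1 m path, 1 cm^2 cross-section
    current = 0.5 / resistance                   # amps from a 0.5 V source
    dp_kpa = developed_pressure(0.3, current, 0.01) / 1.0e3
    print("%d degC -> %.1f kPa" % (temp_c, dp_kpa))
```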

  13. The Fitness Landscape of HIV-1 Gag: Advanced Modeling Approaches and Validation of Model Predictions by In

    E-print Network

    Ferguson, Andrew

    This work presents a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes.

  14. A MODEL STUDY OF TRANSVERSE MODE COUPLING INSTABILITY AT NATIONAL SYNCHROTRON LIGHT SOURCE-II (NSLS-II).

    SciTech Connect

    BLEDNYKH, A.; WANG, J.M.

    2005-05-15

    The vertical impedances of the preliminary designs of the National Synchrotron Light Source II (NSLS-II) Mini Gap Undulators (MGU) are calculated by means of the GdfidL code. The Transverse Mode Coupling Instability (TMCI) thresholds corresponding to these impedances are estimated using an analytically solvable model.

  15. Competitive Adsorption of Cd(II), Cr(VI), and Pb(II) onto Nanomaghemite: A Spectroscopic and Modeling Approach.

    PubMed

    Komárek, Michael; Koretsky, Carla M; Stephen, Krishna J; Alessi, Daniel S; Chrastný, Vladislav

    2015-11-01

    A combined modeling and spectroscopic approach is used to describe Cd(II), Cr(VI), and Pb(II) adsorption onto nanomaghemite and nanomaghemite-coated quartz. A pseudo-second order kinetic model fitted the adsorption data well. The sorption capacity of nanomaghemite was evaluated using a Langmuir isotherm model, and a diffuse double layer surface complexation model (DLM) was developed to describe metal adsorption. Adsorption mechanisms were assessed using X-ray photoelectron spectroscopy and X-ray absorption spectroscopy. Pb(II) adsorption occurs mainly via formation of inner-sphere complexes, whereas Cr(VI) likely adsorbs mainly as outer-sphere complexes and Cd(II) as a mixture of inner- and outer-sphere complexes. The simple DLM describes well the pH-dependence of single adsorption edges. However, it fails to adequately capture metal adsorption behavior over broad ranges of ionic strength or metal loading on the sorbents. For systems with equimolar concentrations of Pb(II), Cd(II), and Cr(VI), Pb(II) adsorption was reasonably well predicted by the DLM, but predictions were poorer for Cr(VI) and Cd(II). This study demonstrates that a simple DLM can describe well the adsorption of the studied metals in mixed sorbate-sorbent systems, but only under narrow ranges of ionic strength or metal loading. The results also highlight the sorption potential of nanomaghemite for metals in complex systems. PMID:26457556
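
    A generic sketch of fitting a Langmuir isotherm, q = q_max*K_L*C/(1 + K_L*C), to equilibrium sorption data is shown below (invented data points; not the authors' fitting code):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: sorbed amount as a function of equilibrium concentration."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Invented equilibrium data: concentration (mg/L) and sorbed amount (mg/g)
c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
q_obs = np.array([2.1, 3.6, 5.5, 8.0, 9.4, 10.2])

(q_max, k_l), _ = curve_fit(langmuir, c_eq, q_obs, p0=(10.0, 0.5))
print("q_max = %.2f mg/g, K_L = %.2f L/mg" % (q_max, k_l))
```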

  16. Microbial dormancy improves development and experimental validation of ecosystem model

    PubMed Central

    Wang, Gangsheng; Jagadamma, Sindhu; Mayes, Melanie A; Schadt, Christopher W; Megan Steinweg, J; Gu, Lianhong; Post, Wilfred M

    2015-01-01

    Climate feedbacks from soils can result from environmental change followed by response of plant and microbial communities, and/or associated changes in nutrient cycling. Explicit consideration of microbial life-history traits and functions may be necessary to predict climate feedbacks owing to changes in the physiology and community composition of microbes and their associated effect on carbon cycling. Here we developed the microbial enzyme-mediated decomposition (MEND) model by incorporating microbial dormancy and the ability to track multiple isotopes of carbon. We tested two versions of MEND, that is, MEND with dormancy (MEND) and MEND without dormancy (MEND_wod), against long-term (270 days) carbon decomposition data from laboratory incubations of four soils with isotopically labeled substrates. MEND_wod adequately fitted multiple observations (total C–CO2 and 14C–CO2 respiration, and dissolved organic carbon), but at the cost of significantly underestimating the total microbial biomass. MEND improved estimates of microbial biomass by 20–71% over MEND_wod. We also quantified uncertainties in parameters and model simulations using the Critical Objective Function Index method, which is based on a global stochastic optimization algorithm, as well as model complexity and observational data availability. Together our model extrapolations of the incubation study show that long-term soil incubations with experimental data for multiple carbon pools are conducive to estimate both decomposition and microbial parameters. These efforts should provide essential support to future field- and global-scale simulations, and enable more confident predictions of feedbacks between environmental change and carbon cycling. PMID:25012899
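
    A highly simplified sketch of the dormancy idea, not the MEND model itself: active microbes take up substrate and switch toward a low-maintenance dormant pool as substrate runs out. The pool structure, rate forms, and parameter values below are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def toy_dormancy(t, y, vmax=0.5, km=5.0, k_d=0.1, k_r=0.05, m_a=0.05, m_d=0.001):
    """y = [substrate, active biomass, dormant biomass]; arbitrary units."""
    s, b_a, b_d = y
    uptake = vmax * s / (km + s) * b_a           # Michaelis-Menten uptake by active cells
    to_dormant = k_d * b_a * km / (km + s)       # dormancy increases as substrate runs low
    to_active = k_r * b_d * s / (km + s)         # reactivation when substrate is available
    ds = -uptake
    dba = 0.4 * uptake - m_a * b_a - to_dormant + to_active
    dbd = -m_d * b_d + to_dormant - to_active
    return [ds, dba, dbd]

sol = solve_ivp(toy_dormancy, (0.0, 270.0), [100.0, 2.0, 0.5])   # a 270-day incubation
print("final pools (S, B_active, B_dormant):", np.round(sol.y[:, -1], 2))
```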

  17. Microbial dormancy improves development and experimental validation of ecosystem model.

    PubMed

    Wang, Gangsheng; Jagadamma, Sindhu; Mayes, Melanie A; Schadt, Christopher W; Steinweg, J Megan; Gu, Lianhong; Post, Wilfred M

    2015-01-01

    Climate feedbacks from soils can result from environmental change followed by response of plant and microbial communities, and/or associated changes in nutrient cycling. Explicit consideration of microbial life-history traits and functions may be necessary to predict climate feedbacks owing to changes in the physiology and community composition of microbes and their associated effect on carbon cycling. Here we developed the microbial enzyme-mediated decomposition (MEND) model by incorporating microbial dormancy and the ability to track multiple isotopes of carbon. We tested two versions of MEND, that is, MEND with dormancy (MEND) and MEND without dormancy (MEND_wod), against long-term (270 days) carbon decomposition data from laboratory incubations of four soils with isotopically labeled substrates. MEND_wod adequately fitted multiple observations (total C-CO2 and (14)C-CO2 respiration, and dissolved organic carbon), but at the cost of significantly underestimating the total microbial biomass. MEND improved estimates of microbial biomass by 20-71% over MEND_wod. We also quantified uncertainties in parameters and model simulations using the Critical Objective Function Index method, which is based on a global stochastic optimization algorithm, as well as model complexity and observational data availability. Together our model extrapolations of the incubation study show that long-term soil incubations with experimental data for multiple carbon pools are conducive to estimate both decomposition and microbial parameters. These efforts should provide essential support to future field- and global-scale simulations, and enable more confident predictions of feedbacks between environmental change and carbon cycling. PMID:25012899

  18. Modeling HIV Immune Response and Validation with Clinical Data

    PubMed Central

    Banks, H. T.; Davidian, M.; Hu, Shuhua; Kepler, Grace M.; Rosenberg, E.S.

    2009-01-01

    A system of ordinary differential equations is formulated to describe the pathogenesis of HIV infection, wherein certain features that have been shown to be important by recent experimental research are incorporated in the model. These include the role of CD4+ memory cells that serve as a major reservoir of latently infected cells, a critical role for T-helper cells in the generation of CD8 memory cells capable of efficient recall response, and stimulation by antigens other than HIV. A stability analysis illustrates the capability of this model in admitting multiple locally asymptotically stable (locally a.s.) off-treatment equilibria. We show that this more biologically-detailed model can exhibit the phenomenon of transient viremia experienced by some patients on therapy with viral load levels suppressed below the detection limit. We also show that the loss of CD4+ T-cell help in the generation of CD8+ memory cells leads to larger peak values for the viral load during transient viremia. Censored clinical data are used to obtain parameter estimates. We demonstrate that using a reduced set of 16 free parameters, obtained by fixing some parameters at their population averages, the model provides reasonable fits to the patient data and, moreover, that it exhibits good predictive capability. We further show that parameter values obtained for most clinical patients do not admit multiple locally a.s. off-treatment equilibria. This suggests that treatment to move from a high viral load equilibrium state to an equilibrium state with a lower (or zero) viral load is not possible for these patients. PMID:19495424
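
    The flavor of such ODE models can be conveyed with the classic three-compartment target-cell/infected-cell/virus system (a minimal sketch, not the authors' 16-parameter model; parameter values are illustrative only):

```python
import numpy as np
from scipy.integrate import solve_ivp

def basic_hiv(t, y, lam=10.0, d=0.01, beta=8e-7, delta=0.7, p=500.0, c=13.0):
    """y = [T, I, V]: uninfected CD4+ T cells, productively infected cells, free virus."""
    T, I, V = y
    dT = lam - d * T - beta * T * V     # production, natural death, infection
    dI = beta * T * V - delta * I       # new infections, death of infected cells
    dV = p * I - c * V                  # virion production and clearance
    return [dT, dI, dV]

sol = solve_ivp(basic_hiv, (0.0, 200.0), [1000.0, 0.0, 1e-3],
                t_eval=np.linspace(0.0, 200.0, 201))
print("peak viral load ~", round(sol.y[2].max(), 1))
```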

  19. Vibration Model Validation for Linear Collider Detector Platforms

    SciTech Connect

    Bertsche, Kirk; Amann, J.W.; Markiewicz, T.W.; Oriunno, M.; Weidemann, A.; White, G. (SLAC)

    2012-05-16

    The ILC and CLIC reference designs incorporate reinforced-concrete platforms underneath the detectors so that the two detectors can each be moved onto and off of the beamline in a Push-Pull configuration. These platforms could potentially amplify ground vibrations, which would reduce luminosity. In this paper we compare vibration models to experimental data on reinforced concrete structures, estimate the impact on luminosity, and summarize implications for the design of a reinforced concrete platform for the ILC or CLIC detectors.
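
    The amplification concern can be illustrated with the textbook base-excitation transmissibility of a damped single-degree-of-freedom oscillator (a minimal sketch, not the SLAC vibration model; the natural frequency and damping ratio are placeholders):

```python
import numpy as np

def transmissibility(freq_hz, f_n_hz, zeta):
    """|X_platform / X_ground| for base excitation of a damped 1-DOF oscillator."""
    r = np.asarray(freq_hz, dtype=float) / f_n_hz
    return np.sqrt((1.0 + (2.0 * zeta * r) ** 2) /
                   ((1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2))

freqs = np.array([1.0, 5.0, 10.0, 20.0, 50.0])   # Hz
print(np.round(transmissibility(freqs, f_n_hz=10.0, zeta=0.02), 2))
# ground motion is amplified near the assumed 10 Hz platform resonance and rolls off above it
```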

  20. The role of global cloud climatologies in validating numerical models

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN

    1992-01-01

    Global maps of the monthly mean net upward longwave radiation flux at the ocean surface were obtained for April, July, and October 1985 and January 1986. These maps were produced by blending information obtained from a combination of general circulation model cloud radiative forcing fields, the top-of-the-atmosphere cloud radiative forcing from ERBE, and TOVS profiles and sea surface temperature on ISCCP C1 tapes. The fields are compatible with known meteorological regimes of atmospheric water vapor content and cloudiness. There is a vast area of high net upward longwave radiation flux (greater than 80 W/sq m) in the eastern Pacific Ocean throughout most of the year. Areas of low net upward longwave radiation flux (less than 40 W/sq m) are the tropical convective regions and extratropical regions that tend to have persistent low cloud cover. The technique used relies on general circulation model simulations and so is subject to some of the uncertainties associated with the model. However, all input information regarding temperature, moisture, and cloud cover is from satellite data having near-global coverage. This feature of the procedure alone warrants its consideration for further use in compiling global maps of longwave radiation.
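
    As a crude sketch of the mapped quantity (not the blending procedure used in the study): the net upward longwave flux at the surface is the gray-body emission minus the absorbed downwelling longwave, so warm, moist, cloudy regions show small net losses and dry, clear regions show larger ones. The emissivity and downwelling values below are placeholders.

```python
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def net_upward_longwave(sst_k, lw_down, emissivity=0.98):
    """Net upward LW flux (W/sq m) = gray-body emission - absorbed downwelling LW."""
    return emissivity * SIGMA * sst_k ** 4 - emissivity * lw_down

# Placeholder conditions: moist/cloudy tropics vs. a drier, clearer regime
print(round(net_upward_longwave(300.0, 420.0), 1))   # small net loss (~40 W/sq m)
print(round(net_upward_longwave(293.0, 330.0), 1))   # larger net loss (~85 W/sq m)
```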