Science.gov

Sample records for ii model validation

  1. Filament winding cylinders. II - Validation of the process model

    NASA Technical Reports Server (NTRS)

    Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.

    1990-01-01

    Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured temperatures and strains, indicating that the model is a useful representation of the winding and curing processes.

  2. Modeling extracellular electrical stimulation: II. Computational validation and numerical results

    NASA Astrophysics Data System (ADS)

    Tahayori, Bahman; Meffin, Hamish; Dokos, Socrates; Burkitt, Anthony N.; Grayden, David B.

    2012-12-01

    The validity of approximate equations describing the membrane potential under extracellular electrical stimulation (Meffin et al 2012 J. Neural Eng. 9 065005) is investigated through finite element analysis in this paper. To this end, the finite element method is used to simulate a cylindrical neurite under extracellular stimulation. Laplace’s equations with appropriate boundary conditions are solved numerically in three dimensions and the results are compared to the approximate analytic solutions. Simulation results are in agreement with the approximate analytic expressions for longitudinal and transverse modes of stimulation. The range of validity of the equations describing the membrane potential for different values of stimulation and neurite parameters is presented as well. The results indicate that the analytic approach can be used to model extracellular electrical stimulation for realistic physiological parameters with a high level of accuracy.
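
    The study itself solves the boundary-value problem with the finite element method; purely as an illustration of the same idea, the sketch below relaxes Laplace's equation on a small 2-D grid with fixed (Dirichlet) boundary potentials using Jacobi iteration. The grid size, electrode potential, and tolerance are invented values, not parameters from the paper.

      # Minimal sketch: relax Laplace's equation on a 2-D grid (Jacobi iteration).
      # Boundary potentials and grid size are illustrative only.
      import numpy as np

      def solve_laplace(phi, tol=1e-5, max_iter=20000):
          """Iterate until the interior satisfies the discrete Laplace equation."""
          for _ in range(max_iter):
              new = phi.copy()
              # Average of the four neighbours enforces the discrete Laplacian = 0.
              new[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                        phi[1:-1, :-2] + phi[1:-1, 2:])
              if np.max(np.abs(new - phi)) < tol:
                  return new
              phi = new
          return phi

      grid = np.zeros((50, 50))
      grid[0, :] = 1.0    # hypothetical stimulating boundary held at 1 V
      grid[-1, :] = 0.0   # grounded far boundary
      potential = solve_laplace(grid)
      print(potential[25, 25])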

  3. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    SciTech Connect

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more closely than the chosen reference data. In aggregate, the simulations of land-surface latent and sensible heat fluxes appear to occupy intermediate positions between these extremes, but the existing large observational uncertainties in these processes make this a provisional assessment. In all selected processes as well, the error statistics are found to be sensitive to season and latitude sector, confirming the need for finer-scale analyses, which are also in progress.
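
    As a schematic of the error metric described in this abstract, the snippet below computes an area-weighted root-mean-square difference between a simulated field and a reference data set, and applies the same measure to two alternative validation data sets as a rough proxy for observational uncertainty. The arrays, field names, and weights are placeholders, not the actual AMIP II products.

      # Sketch of the RMS-difference metric used to compare a simulated land-surface
      # field against a reference data set (placeholder arrays, not AMIP II data).
      import numpy as np

      def rms_difference(field_a, field_b, weights):
          """Area-weighted RMS difference aggregated over space and time."""
          diff2 = (field_a - field_b) ** 2
          return np.sqrt(np.average(diff2, weights=weights))

      # Hypothetical (time, lat, lon) fields and cos(latitude) area weights.
      nt, nlat, nlon = 12, 18, 36
      lats = np.linspace(-85, 85, nlat)
      w = np.broadcast_to(np.cos(np.deg2rad(lats))[None, :, None], (nt, nlat, nlon))

      model_tas = np.random.normal(288, 5, (nt, nlat, nlon))   # simulated temperature
      ref_tas   = np.random.normal(288, 5, (nt, nlat, nlon))   # chosen reference
      alt_tas   = np.random.normal(288, 5, (nt, nlat, nlon))   # alternative validation set

      print("model vs reference RMS:", rms_difference(model_tas, ref_tas, w))
      print("observational uncertainty (ref vs alt):", rms_difference(ref_tas, alt_tas, w))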

  4. A scattering model for perfectly conducting random surfaces. I - Model development. II - Range of validity

    NASA Technical Reports Server (NTRS)

    Fung, A. K.; Pan, G. W.

    1987-01-01

    The surface current on a perfectly conducting randomly rough surface is estimated by solving iteratively a standard integral equation, and the estimate is then used to compute the far-zone scattered fields and the backscattering coefficients for vertical, horizontal and cross polarizations. The model developed here yields a simple backscattering coefficient expression in terms of the surface parameters. The expression reduces analytically to the Kirchhoff and the first-order small-perturbation model in the high- and low-frequency regions, respectively. The range of validity of the model is determined.

  5. Validated Competing Event Model for the Stage I-II Endometrial Cancer Population

    SciTech Connect

    Carmona, Ruben; Gulaya, Sachin; Murphy, James D.; Rose, Brent S.; Wu, John; Noticewala, Sonal; McHale, Michael T.; Yashar, Catheryn M.; Vaida, Florin; Mell, Loren K.

    2014-07-15

    Purpose/Objective(s): Early-stage endometrial cancer patients are at higher risk of noncancer mortality than of cancer mortality. Competing event models incorporating comorbidity could help identify women most likely to benefit from treatment intensification. Methods and Materials: 67,397 women with stage I-II endometrioid adenocarcinoma after total hysterectomy diagnosed from 1988 to 2009 were identified in the Surveillance, Epidemiology, and End Results (SEER) and linked SEER-Medicare databases. Using demographic and clinical information, including comorbidity, we sought to develop and validate a risk score to predict the incidence of competing mortality. Results: In the validation cohort, increasing competing mortality risk score was associated with increased risk of noncancer mortality (subdistribution hazard ratio [SDHR], 1.92; 95% confidence interval [CI], 1.60-2.30) and decreased risk of endometrial cancer mortality (SDHR, 0.61; 95% CI, 0.55-0.78). Controlling for other variables, Charlson Comorbidity Index (CCI) = 1 (SDHR, 1.62; 95% CI, 1.45-1.82) and CCI >1 (SDHR, 3.31; 95% CI, 2.74-4.01) were associated with increased risk of noncancer mortality. The 10-year cumulative incidences of competing mortality within low-, medium-, and high-risk strata were 27.3% (95% CI, 25.2%-29.4%), 34.6% (95% CI, 32.5%-36.7%), and 50.3% (95% CI, 48.2%-52.6%), respectively. With increasing competing mortality risk score, we observed a significant decline in omega (ω), indicating a diminishing likelihood of benefit from treatment intensification. Conclusion: Comorbidity and other factors influence the risk of competing mortality among patients with early-stage endometrial cancer. Competing event models could improve our ability to identify patients likely to benefit from treatment intensification.
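
    The cumulative incidences quoted above come from a competing-risks analysis; the following sketch implements a bare-bones nonparametric (Aalen-Johansen-type) cumulative incidence estimator for one cause in the presence of a competing cause. The follow-up times and event codes are invented for illustration and are not SEER-Medicare data.

      # Minimal nonparametric cumulative incidence function (CIF) with competing risks.
      # event: 0 = censored, 1 = cause of interest (e.g. noncancer death), 2 = competing cause.
      import numpy as np

      def cumulative_incidence(times, events, cause=1):
          times, events = np.asarray(times, float), np.asarray(events, int)
          order = np.argsort(times)
          times, events = times[order], events[order]
          at_risk = len(times)
          surv = 1.0          # all-cause Kaplan-Meier survival just before t
          cif = 0.0
          out = []
          for t in np.unique(times):
              mask = times == t
              d_cause = np.sum(events[mask] == cause)
              d_any = np.sum(events[mask] > 0)
              cif += surv * d_cause / at_risk          # increment for this cause
              surv *= 1.0 - d_any / at_risk            # update overall survival
              at_risk -= np.sum(mask)                  # drop events and censorings at t
              out.append((t, cif))
          return out

      # Toy data: follow-up time in years and event codes (illustrative only).
      t = [1.2, 2.5, 3.1, 4.0, 4.0, 5.5, 6.2, 7.0, 8.3, 10.0]
      e = [1,   0,   2,   1,   0,   1,   2,   0,   1,   0]
      for time, ci in cumulative_incidence(t, e, cause=1):
          print(f"t={time:4.1f}  CIF={ci:.3f}")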

  6. Evaluation of Reliability and Validity of the Hendrich II Fall Risk Model in a Chinese Hospital Population

    PubMed Central

    Zhang, Congcong; Wu, Xinjuan; Lin, Songbai; Jia, Zhaoxia; Cao, Jing

    2015-01-01

    To translate, validate and examine the reliability and validity of a Chinese version of the Hendrich II Fall Risk Model (HFRM) in predicting falls in elderly inpatients. A sample of 989 Chinese elderly inpatients was recruited upon admission at the Peking Union Medical College Hospital. The inpatients were assessed for fall risk using the Chinese version of the HFRM at admission. The reliability of the Chinese version of the HFRM was determined using the internal consistency and test-retest methods. Validity was determined using construct validity and convergent validity. Receiver operating characteristic (ROC) curves were created to determine the sensitivity and specificity. The Chinese version of the HFRM showed excellent repeatability with an intra-class correlation coefficient (ICC) of 0.9950 (95% confidence interval (CI): 0.9923-0.9984). The inter-rater reliability was high with an ICC of 0.9950 (95% CI: 0.9923-0.9984). Cronbach's alpha coefficient was 0.366. Content validity was excellent, with a content validity ratio of 0.9333. The Chinese version of the HFRM had a sensitivity of 72% and a specificity of 69% when using a cut-off of 5 points on the scale. The area under the curve (AUC) was 0.815 (P<0.001). The Chinese version of the HFRM showed good reliability and validity in assessing the risk of falls in Chinese elderly inpatients. PMID:26544961
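
    To make the reported cut-off analysis concrete, this small sketch computes sensitivity and specificity at a chosen HFRM cut-off together with a rank-based AUC. The scores and fall outcomes below are fabricated examples, not the hospital data used in the study.

      # Sketch: sensitivity/specificity at a risk-score cut-off and rank-based AUC.
      # Scores and outcomes below are invented for illustration.
      import numpy as np

      def sens_spec(scores, fell, cutoff=5):
          scores, fell = np.asarray(scores), np.asarray(fell).astype(bool)
          pred = scores >= cutoff
          sens = np.sum(pred & fell) / np.sum(fell)
          spec = np.sum(~pred & ~fell) / np.sum(~fell)
          return sens, spec

      def auc(scores, fell):
          """Mann-Whitney formulation: probability a faller outranks a non-faller."""
          scores, fell = np.asarray(scores, float), np.asarray(fell).astype(bool)
          pos, neg = scores[fell], scores[~fell]
          wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
          return wins / (len(pos) * len(neg))

      scores = [2, 7, 5, 1, 9, 4, 6, 3, 8, 5]
      fell   = [0, 1, 1, 0, 1, 0, 1, 0, 1, 0]
      print(sens_spec(scores, fell, cutoff=5))
      print("AUC:", auc(scores, fell))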

  7. A wheat grazing model for simulating grain and beef production: Part II - model validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Model evaluation is a prerequisite to its adoption and successful application. The objective of this paper is to evaluate the ability of a newly developed wheat grazing model to predict fall-winter forage and grain yields of winter wheat (Triticum aestivum L.) as well as daily weight gains per steer...

  8. Assessing the wildlife habitat value of New England salt marshes: II. Model testing and validation.

    PubMed

    McKinney, Richard A; Charpentier, Michael A; Wigand, Cathleen

    2009-07-01

    We tested a previously described model to assess the wildlife habitat value of New England salt marshes by comparing modeled habitat values and scores with bird abundance and species richness at sixteen salt marshes in Narragansett Bay, Rhode Island, USA. As a group, wildlife habitat value assessment scores for the marshes ranged from 307-509, or 31-67% of the maximum attainable score. We recorded 6 species of wading birds (Ardeidae; herons, egrets, and bitterns) at the sites during biweekly surveys. Species richness (r² = 0.24, F = 4.53, p = 0.05) and abundance (r² = 0.26, F = 5.00, p = 0.04) of wading birds significantly increased with increasing assessment score. We optimized our assessment model for wading birds by using the Akaike information criterion (AIC) to compare a series of models composed of specific components and categories of our model that best reflect their habitat use. The model incorporating pre-classification, wading bird habitat categories, and natural land surrounding the sites was substantially supported by AIC analysis as the best model. The abundance of wading birds significantly increased with increasing assessment scores generated with the optimized model (r² = 0.48, F = 12.5, p = 0.003), demonstrating that optimizing models can be helpful in improving the accuracy of the assessment for a given species or species assemblage. In addition to validating the assessment model, our results show that in spite of their urban setting, our study marshes provide substantial wildlife habitat value. This suggests that even small wetlands in highly urbanized coastal settings can provide important wildlife habitat value if key habitat attributes (e.g., natural buffers, habitat heterogeneity) are present. PMID:18597178
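
    The model optimization described above rests on AIC comparison; the fragment below sketches how candidate least-squares models of bird abundance against assessment scores might be ranked by AIC and Akaike weights. The predictors and simulated data are hypothetical, and the Gaussian least-squares form of AIC is assumed.

      # Sketch: rank candidate regression models by AIC and Akaike weights.
      # Assumes Gaussian errors, so AIC = n*ln(RSS/n) + 2k; data are simulated.
      import numpy as np

      def fit_ols(X, y):
          beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
          if rss.size == 0:                       # lstsq omits RSS for rank-deficient fits
              rss = np.array([np.sum((y - X @ beta) ** 2)])
          return beta, float(rss[0])

      def aic(rss, n, k):
          return n * np.log(rss / n) + 2 * k

      rng = np.random.default_rng(0)
      score = rng.uniform(300, 510, 16)           # assessment scores for 16 marshes
      habitat = rng.uniform(0, 1, 16)             # hypothetical extra predictor
      abundance = 0.02 * score + rng.normal(0, 1, 16)

      candidates = {
          "score only":      np.column_stack([np.ones(16), score]),
          "score + habitat": np.column_stack([np.ones(16), score, habitat]),
          "intercept only":  np.ones((16, 1)),
      }
      aics = {}
      for name, X in candidates.items():
          _, rss = fit_ols(X, abundance)
          aics[name] = aic(rss, n=16, k=X.shape[1])

      delta = {m: a - min(aics.values()) for m, a in aics.items()}
      wsum = sum(np.exp(-d / 2) for d in delta.values())
      for m in aics:
          print(f"{m:16s}  AIC={aics[m]:7.2f}  weight={np.exp(-delta[m]/2)/wsum:.2f}")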

  9. Development and validation of an evaporation duct model. Part II: Evaluation and improvement of stability functions

    NASA Astrophysics Data System (ADS)

    Ding, Juli; Fei, Jianfang; Huang, Xiaogang; Cheng, Xiaoping; Hu, Xiaohua; Ji, Liang

    2015-06-01

    This study aims to validate and improve the universal evaporation duct (UED) model through a further analysis of the stability function (ψ). A large number of hydrometeorological observations obtained from a tower platform near Xisha Island of the South China Sea are employed, together with the latest variations of the ψ function. Applicability of different ψ functions for specific sea areas and stratification conditions is investigated based on three objective criteria. The results show that, under unstable conditions, the ψ function of Fairall et al. (1996) (i.e., Fairall96, similar for abbreviations of other function names) in general offers the best performance. However, strictly speaking, this holds true only for the stability (represented by bulk Richardson number RiB) range -2.6 ⩽ RiB < -0.1; when conditions become weakly unstable (-0.1 ⩽ RiB < -0.01), Fairall96 offers the second best performance after Hu and Zhang (1992) (HYQ92). Conversely, for near-neutral but slightly unstable conditions (-0.01 ⩽ RiB < 0.0), the effects of Edson04, Fairall03, Grachev00, and Fairall96 are similar, with Edson04 being the best function but offering only a weak advantage. Under stable conditions, HYQ92 is optimal and offers a pronounced advantage, followed by the newly introduced SHEBA07 (by Grachev et al., 2007) function. Accordingly, the most favorable functions, i.e., Fairall96 and HYQ92, are incorporated into the UED model to obtain an improved version of the model. With the new functions, the mean root-mean-square (rms) errors of the modified refractivity (M), 0-5-m M slope, 5-40-m M slope, and the rms errors of evaporation duct height (EDH) are reduced by 21.65%, 9.12%, 38.79%, and 59.06%, respectively, compared to the classical Naval Postgraduate School model.

  10. Predictive modeling of infrared radiative heating in tomato dry-peeling process: Part II. Model validation and sensitivity analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...

  11. Person Heterogeneity of the BDI-II-C and Its Effects on Dimensionality and Construct Validity: Using Mixture Item Response Models

    ERIC Educational Resources Information Center

    Wu, Pei-Chen; Huang, Tsai-Wei

    2010-01-01

    This study applied the mixed Rasch model to investigate person heterogeneity of the Beck Depression Inventory-II-Chinese version (BDI-II-C) and its effects on dimensionality and construct validity. Person heterogeneity was reflected by two latent classes that differ qualitatively. Additionally, person heterogeneity adversely affected the...

  12. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 2 (Appendices I, section 5 and II, section 1)

    SciTech Connect

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquor submodels, and fundamental knowledge to the US kraft pulp industry. Volume 2 contains the last section of Appendix I, Radiative heat transfer in kraft recovery boilers, and the first section of Appendix II, The effect of temperature and residence time on the distribution of carbon, sulfur, and nitrogen between gaseous and condensed phase products from low temperature pyrolysis of kraft black liquor.

  13. Validating the Serpent Model of FiR 1 Triga Mk-II Reactor by Means of Reactor Dosimetry

    NASA Astrophysics Data System (ADS)

    Viitanen, Tuomas; Leppänen, Jaakko

    2016-02-01

    A model of the FiR 1 Triga Mk-II reactor has been previously generated for the Serpent Monte Carlo reactor physics and burnup calculation code. In the current article, this model is validated by comparing the predicted reaction rates of nickel and manganese at 9 different positions in the reactor to measurements. In addition, track-length estimators are implemented in Serpent 2.1.18 to increase its performance in dosimetry calculations. The usage of the track-length estimators is found to decrease the reaction rate calculation times by a factor of 7-8 compared to the standard estimator type in Serpent, the collision estimators. The differences in the reaction rates between the calculation and the measurement are below 20%.

  14. The model SIRANE for atmospheric urban pollutant dispersion; PART II, validation of the model on a real case study

    NASA Astrophysics Data System (ADS)

    Soulhac, L.; Salizzoni, P.; Mejean, P.; Didier, D.; Rios, I.

    2012-03-01

    We analyse the performance of the model SIRANE by comparing its outputs to field data measured within an urban district. SIRANE is the first urban dispersion model based on the concept of a street network, and it contains specific parametric laws to explicitly simulate the main transfer mechanisms within the urban canopy. The model validation is performed by means of field data collected during a 15-day measurement campaign in an urban district in Lyon, France. The campaign provided information on traffic fluxes and car emissions, meteorological conditions, background pollution levels and pollutant concentrations at different locations within the district. This data set, together with complementary modelling tools needed to estimate the spatial distribution of traffic fluxes, allowed us to estimate the input data required by the model. The data set also provides the information essential to evaluate the accuracy of the model outputs. Comparison between model predictions and field measurements was performed in two ways: by evaluating the reliability of the model in simulating the spatial distribution of the pollutants and their time variability. The study includes a sensitivity analysis to identify the key input parameters influencing the performance of the model, namely the emission rates and the wind velocity. The analysis focuses only on the influence of varying input parameters in the modelling chain on the model predictions, and complements the analyses provided by wind tunnel studies focussing on the parameterisations implemented in the model. The study also elucidates the critical role of background concentrations, which represent a significant contribution to local pollution levels. The overall model performance, measured using the Chang and Hanna (2004) criteria, can be considered 'good' except for NO and some of the BTX species. The results suggest that improving the performance for NO requires testing new photochemical models, whereas improvement for BTX could be achieved by correcting their vehicular emission factors.
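
    For readers unfamiliar with the Chang and Hanna (2004) criteria cited above, the sketch below computes the usual paired metrics (fractional bias, normalized mean square error, correlation, and the fraction of predictions within a factor of two of the observations). The concentrations are made up, and the thresholds in the comments are only commonly quoted guideline values, not those applied in the paper.

      # Sketch of Chang & Hanna (2004)-style performance metrics for paired
      # observed/predicted concentrations (illustrative numbers, not SIRANE output).
      import numpy as np

      def performance_metrics(obs, pred):
          obs, pred = np.asarray(obs, float), np.asarray(pred, float)
          fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())
          nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())
          r = np.corrcoef(obs, pred)[0, 1]
          fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))
          return {"FB": fb, "NMSE": nmse, "R": r, "FAC2": fac2}

      obs  = [12.0, 30.0, 18.0, 55.0, 40.0, 22.0]   # e.g. measured NO2, ug/m3
      pred = [10.0, 35.0, 15.0, 60.0, 30.0, 25.0]   # corresponding model values

      m = performance_metrics(obs, pred)
      # Commonly quoted "good model" guidelines: |FB| below ~0.3, NMSE below ~1.5, FAC2 above ~0.5.
      for k, v in m.items():
          print(f"{k}: {v:.2f}")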

  15. Assessing Wildlife Habitat Value of New England Salt Marshes: II. Model Testing and Validation

    EPA Science Inventory

    We test a previously described model to assess the wildlife habitat value of New England salt marshes by comparing modeled habitat values and scores with bird abundance and species richness at sixteen salt marshes in Narragansett Bay, Rhode Island USA. Assessment scores ranged f...

  16. MEDSLIK-II, a Lagrangian marine oil spill model for short-term forecasting - Part 2: Numerical simulations and validations

    NASA Astrophysics Data System (ADS)

    De Dominicis, M.; Pinardi, N.; Zodiatis, G.; Archetti, R.

    2013-03-01

    In this paper we use MEDSLIK-II, a Lagrangian marine oil spill model described in Part 1 of this paper (De Dominicis et al., 2013), to simulate oil slick transport and transformation processes for realistic oceanic cases where satellite or drifting buoy data are available for verification. The model is coupled with operational oceanographic currents, atmospheric analysis winds and remote-sensing data for initialization. The sensitivity of the oil spill simulations to several model parameterizations is analyzed and the results are validated using surface drifters and SAR (Synthetic Aperture Radar) images in different regions of the Mediterranean Sea. It is found that the forecast skill of Lagrangian trajectories largely depends on the accuracy of the Eulerian ocean currents: the operational models give useful estimates of currents, but high-frequency (hourly) and high spatial resolution are required, and the Stokes drift velocity often has to be added, especially in coastal areas. From a numerical point of view, it is found that a realistic oil concentration reconstruction is obtained using an oil tracer grid resolution of about 100 m, with at least 100 000 Lagrangian particles. Moreover, sensitivity experiments to uncertain model parameters show that oil type and slick thickness are, among all the others, key model parameters affecting the simulation results. If a maximum spatial error of the order of three times the horizontal resolution of the Eulerian ocean currents is considered acceptable for the simulated trajectories, the predictability skill for particle trajectories is from 1 to 2.5 days, depending on the specific current regime. This suggests that re-initialization of the simulations is required every day.
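
    The predictability criterion described above (trajectory separation below roughly three grid cells) can be written in a few lines; this sketch computes the great-circle separation between a simulated particle track and a drifter track and reports when the threshold is first exceeded. Positions, time step and grid spacing are invented values, not MEDSLIK-II output.

      # Sketch: separation between simulated and observed (drifter) trajectories and
      # the time at which it first exceeds 3x the ocean-model grid spacing.
      import numpy as np

      R_EARTH_KM = 6371.0

      def haversine_km(lat1, lon1, lat2, lon2):
          p1, p2 = np.radians(lat1), np.radians(lat2)
          dlat, dlon = p2 - p1, np.radians(lon2 - lon1)
          a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
          return 2 * R_EARTH_KM * np.arcsin(np.sqrt(a))

      def predictability_hours(sim, obs, dt_hours, grid_km):
          """Hours until the simulated track drifts more than 3 grid cells from the drifter."""
          for i, ((la1, lo1), (la2, lo2)) in enumerate(zip(sim, obs)):
              if haversine_km(la1, lo1, la2, lo2) > 3.0 * grid_km:
                  return i * dt_hours
          return len(sim) * dt_hours

      # Hypothetical hourly positions (lat, lon) over 6 hours.
      sim = [(35.00, 18.00), (35.02, 18.05), (35.05, 18.11), (35.09, 18.19),
             (35.15, 18.29), (35.22, 18.40)]
      obs = [(35.00, 18.00), (35.01, 18.04), (35.03, 18.09), (35.05, 18.13),
             (35.07, 18.17), (35.09, 18.21)]
      print(predictability_hours(sim, obs, dt_hours=1.0, grid_km=6.5), "hours")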

  17. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models

    EPA Science Inventory

    The second phase of the MicroArray Quality Control (MAQC-II) project evaluated common practices for developing and validating microarray-based models aimed at predicting toxicological and clinical endpoints. Thirty-six teams developed classifiers for 13 endpoints - some easy, som...

  18. Model Validation Status Review

    SciTech Connect

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and engineered barriers, plus the TSPA model itself. Description of the model areas is provided in Section 3, and the documents reviewed are described in Section 4. The responsible manager for the Model Validation Status Review was the Chief Science Officer (CSO) for Bechtel-SAIC Co. (BSC). The team lead was assigned by the CSO. A total of 32 technical specialists were engaged to evaluate model validation status in the 21 model areas. The technical specialists were generally independent of the work reviewed, meeting technical qualifications as discussed in Section 5.

  19. Validation of EuroSCORE II risk model for coronary artery bypass surgery in high-risk patients

    PubMed Central

    Adademir, Taylan; Tasar, Mehmet; Ecevit, Ata Niyazi; Karaca, Okay Guven; Salihi, Salih; Buyukbayrak, Fuat; Ozkokeli, Mehmet

    2014-01-01

    Introduction: Determining operative mortality risk is mandatory for adult cardiac surgery. Patients should be informed about the operative risk before surgery. There are some risk scoring systems that compare and standardize the results of the operations. These scoring systems needed to be updated recently, which resulted in the development of EuroSCORE II. In this study, we aimed to validate EuroSCORE II by comparing it with the original EuroSCORE risk scoring system in a group of high-risk octogenarian patients who underwent coronary artery bypass grafting (CABG). Material and methods: The present study included only high-risk octogenarian patients who underwent isolated coronary artery bypass grafting in our center between January 2000 and January 2010. Redo procedures and concomitant procedures were excluded. We compared observed mortality with expected mortality predicted by EuroSCORE (logistic) and EuroSCORE II scoring systems. Results: We considered 105 CABG operations performed in octogenarian patients between January 2000 and January 2010. The mean age of the patients was 81.43 ± 2.21 years (80-89 years). Thirty-nine (37.1%) of them were female. The two scales showed good discriminative capacity in the global patient sample, with the AUC (area under the curve) being higher for EuroSCORE II (AUC 0.772, 95% CI: 0.673-0.872). The goodness of fit was good for both scales. Conclusions: We conclude that EuroSCORE II has better AUC (area under the ROC curve) compared to the original EuroSCORE, but both scales showed good discriminative capacity and goodness of fit in octogenarian patients undergoing isolated coronary artery bypass grafting. PMID:26336431

  20. INACTIVATION OF CRYPTOSPORIDIUM OOCYSTS IN A PILOT-SCALE OZONE BUBBLE-DIFFUSER CONTACTOR - II: MODEL VALIDATION AND APPLICATION

    EPA Science Inventory

    The ADR model developed in Part I of this study was successfully validated with experimental data obtained for the inactivation of C. parvum and C. muris oocysts with a pilot-scale ozone-bubble diffuser contactor operated with treated Ohio River water. Kinetic parameters, required...

  1. TAMDAR Sensor Validation in 2003 AIRS II

    NASA Technical Reports Server (NTRS)

    Daniels, Taumi S.; Murray, John J.; Anderson, Mark V.; Mulally, Daniel J.; Jensen, Kristopher R.; Grainger, Cedric A.; Delene, David J.

    2005-01-01

    This study entails an assessment of TAMDAR in situ temperature, relative humidity and winds sensor data from seven flights of the UND Citation II. These data are undergoing rigorous assessment to determine their viability to significantly augment domestic Meteorological Data Communications Reporting System (MDCRS) and the international Aircraft Meteorological Data Reporting (AMDAR) system observational databases to improve the performance of regional and global numerical weather prediction models. NASA Langley Research Center participated in the Second Alliance Icing Research Study from November 17 to December 17, 2003. TAMDAR data taken during this period is compared with validation data from the UND Citation. The data indicate acceptable performance of the TAMDAR sensor when compared to measurements from the UND Citation research instruments.

  2. Black liquor combustion validated recovery boiler modeling: Final year report. Volume 3 (Appendices II, sections 2--3 and III)

    SciTech Connect

    Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.

    1998-08-01

    This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquor submodels, and fundamental knowledge to the US kraft pulp industry. Volume 3 contains the following appendix sections: Formation and destruction of nitrogen oxides in recovery boilers; Sintering and densification of recovery boiler deposits: laboratory data and a rate model; and Experimental data on rates of particulate formation during char bed burning.

  3. Numerical investigation of dynamic microorgan devices as drug screening platforms. Part II: Microscale modeling approach and validation.

    PubMed

    Tourlomousis, Filippos; Chang, Robert C

    2016-03-01

    The authors have previously reported a rigorous macroscale modeling approach for an in vitro 3D dynamic microorgan device (DMD). This paper represents the second of a two-part model-based investigation where the effect of microscale (single liver cell-level) shear-mediated mechanotransduction on drug biotransformation is deconstructed. Herein, each cell is explicitly incorporated into the geometric model as single compartmentalized metabolic structures. Each cell's metabolic activity is coupled with the microscale hydrodynamic Wall Shear Stress (WSS) simulated around the cell boundary through a semi-empirical polynomial function as an additional reaction term in the mass transfer equations. Guided by the macroscale model-based hydrodynamics, only 9 cells in 3 representative DMD domains are explicitly modeled. Dynamic and reaction similarity rules based on non-dimensionalization are invoked to correlate the numerical and empirical models, accounting for the substrate time scales. The proposed modeling approach addresses the key challenge of computational cost towards modeling complex large-scale DMD-type system with prohibitively high cell densities. Transient simulations are implemented to extract the drug metabolite profile with the microscale modeling approach validated with an experimental drug flow study. The results from the author's study demonstrate the preferred implementation of the microscale modeling approach over that of its macroscale counterpart. Biotechnol. Bioeng. 2016;113: 623-634. © 2015 Wiley Periodicals, Inc. PMID:26333066

  4. Groundwater Model Validation

    SciTech Connect

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov Chain Monte Carlo. The approach shows a great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.
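
    One step in the stochastic validation workflow summarized above is deciding how many individual realizations agree acceptably with the validation data. The fragment below illustrates that step with a simple RMSE-based acceptance test over a synthetic ensemble; the tolerance, ensemble size, and data are hypothetical and do not reproduce the report's five metrics or decision tree.

      # Sketch: count stochastic model realizations that match validation data within
      # an RMSE tolerance (synthetic ensemble; threshold and sizes are illustrative).
      import numpy as np

      def acceptable_fraction(realizations, validation, rmse_tol):
          """Fraction of realizations whose RMSE against the validation data is within tol."""
          errors = np.sqrt(np.mean((realizations - validation) ** 2, axis=1))
          return np.mean(errors <= rmse_tol), errors

      rng = np.random.default_rng(1)
      validation = rng.normal(10.0, 1.0, size=25)                   # e.g. observed heads at 25 wells
      ensemble = validation + rng.normal(0.0, 1.5, size=(200, 25))  # 200 stochastic realizations

      frac, errors = acceptable_fraction(ensemble, validation, rmse_tol=1.5)
      print(f"{frac:.0%} of realizations within tolerance (median RMSE {np.median(errors):.2f})")
      # A decision rule might require a minimum fraction before declaring the stochastic
      # model adequate, or trigger parameter refinement (e.g. via MCMC) otherwise.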

  5. A thermomechanical constitutive model for cemented granular materials with quantifiable internal variables. Part II - Validation and localization analysis

    NASA Astrophysics Data System (ADS)

    Das, Arghya; Tengattini, Alessandro; Nguyen, Giang D.; Viggiani, Gioacchino; Hall, Stephen A.; Einav, Itai

    2014-10-01

    We study the mechanical failure of cemented granular materials (e.g., sandstones) using a constitutive model based on breakage mechanics for grain crushing and damage mechanics for cement fracture. The theoretical aspects of this model are presented in Part I: Tengattini et al. (2014), A thermomechanical constitutive model for cemented granular materials with quantifiable internal variables, Part I - Theory (Journal of the Mechanics and Physics of Solids, 10.1016/j.jmps.2014.05.021). In this Part II we investigate the constitutive and structural responses of cemented granular materials through analyses of Boundary Value Problems (BVPs). The multiple failure mechanisms captured by the proposed model enable the behavior of cemented granular rocks to be well reproduced for a wide range of confining pressures. Furthermore, through comparison of the model predictions and experimental data, the micromechanical basis of the model provides improved understanding of failure mechanisms of cemented granular materials. In particular, we show that grain crushing is the predominant inelastic deformation mechanism under high pressures while cement failure is the relevant mechanism at low pressures. Over an intermediate pressure regime a mixed mode of failure mechanisms is observed. Furthermore, the micromechanical roots of the model allow the effects on localized deformation modes of various initial microstructures to be studied. The results obtained from both the constitutive responses and BVP solutions indicate that the proposed approach and model provide a promising basis for future theoretical studies on cemented granular materials.

  6. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data were first used for validation, followed by more complex reacting base flow validation.

  7. Lidar measurements during a haze episode in Penang, Malaysia and validation of the ECMWF MACC-II model

    NASA Astrophysics Data System (ADS)

    Khor, Wei Ying; Lolli, Simone; Hee, Wan Shen; Lim, Hwee San; Jafri, M. Z. Mat; Benedetti, Angela; Jones, Luke

    2015-04-01

    Haze is a phenomenon that occurs when a large amount of fine particulate matter is suspended in the atmosphere. During March 2014, a prolonged haze event occurred in Penang, Malaysia. The haze conditions were measured and monitored using a ground-based Lidar system. Using these measurements, we evaluated the performance of the ECMWF MACC-II model. Lidar measurements showed a thick aerosol layer confined within the planetary boundary layer (PBL), with extinction coefficients exceeding 0.3 km-1. The model, however, underestimated the aerosol conditions over Penang. Backward trajectory analysis was performed to identify aerosol sources and transport. It is speculated that the aerosols came from the north-east, influenced by the north-east monsoon wind, and that some originated from the central eastern coast of Sumatra along the Straits of Malacca.

  8. Fast, efficient generation of high-quality atomic charges. AM1-BCC model: II. Parameterization and validation.

    PubMed

    Jakalian, Araz; Jack, David B; Bayly, Christopher I

    2002-12-01

    We present the first global parameterization and validation of a novel charge model, called AM1-BCC, which quickly and efficiently generates high-quality atomic charges for computer simulations of organic molecules in polar media. The goal of the charge model is to produce atomic charges that emulate the HF/6-31G* electrostatic potential (ESP) of a molecule. Underlying electronic structure features, including formal charge and electron delocalization, are first captured by AM1 population charges; simple additive bond charge corrections (BCCs) are then applied to these AM1 atomic charges to produce the AM1-BCC charges. The parameterization of BCCs was carried out by fitting to the HF/6-31G* ESP of a training set of >2700 molecules. Most organic functional groups and their combinations were sampled, as well as an extensive variety of cyclic and fused bicyclic heteroaryl systems. The resulting BCC parameters allow the AM1-BCC charging scheme to handle virtually all types of organic compounds listed in The Merck Index and the NCI Database. Validation of the model was done through comparisons of hydrogen-bonded dimer energies and relative free energies of solvation using AM1-BCC charges in conjunction with the 1994 Cornell et al. force field for AMBER. Homo- and hetero-dimer hydrogen-bond energies of a diverse set of organic molecules were reproduced to within 0.95 kcal/mol RMS deviation from the ab initio values, and for DNA dimers the energies were within 0.9 kcal/mol RMS deviation from ab initio values. The calculated relative free energies of solvation for a diverse set of monofunctional isosteres were reproduced to within 0.69 kcal/mol of experiment. In all these validation tests, AMBER with the AM1-BCC charge model maintained a correlation coefficient above 0.96. Thus, the parameters presented here for use with the AM1-BCC method present a fast, accurate, and robust alternative to HF/6-31G* ESP-fit charges for general use with the AMBER force field in computer simulations involving organic small molecules. PMID:12395429

  9. Validation of SAGE II NO2 measurements

    NASA Technical Reports Server (NTRS)

    Cunnold, D. M.; Zawodny, J. M.; Chu, W. P.; Mccormick, M. P.; Pommereau, J. P.; Goutail, F.

    1991-01-01

    The validity of NO2 measurements from the stratospheric aerosol and gas experiment (SAGE) II is examined by comparing the data with climatological distributions of NO2 and by examining the consistency of the observations themselves. The precision at high altitudes is found to be 5 percent, which is also the case at specific low altitudes for certain latitudes where the mixing ratio is 4 ppbv, and the precision is 0.2 ppbv at low altitudes. The autocorrelation distance of the smoothed profile measurement noise is 3-5 km and 10 km for 1-km and 5-km smoothing, respectively. The SAGE II measurements agree with spectroscopic measurements to within 10 percent, and the SAGE measurements are about 20 percent smaller than average limb monitor measurements at the mixing ratio peak. SAGE I and SAGE II measurements are slightly different, but the difference is not attributed to changes in atmospheric NO2.

  10. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part II: Benchmark comparisons of PUMA core parameters with MCNP5 and improvements due to a simple cell heterogeneity correction

    SciTech Connect

    Grant, C.; Mollerach, R.; Leszczynski, F.; Serra, O.; Marconi, J.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which has been progressing slowly during the last ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design located in Argentina. It has a pressure vessel design with 451 vertical coolant channels and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons of core parameters of a slightly idealized model of the Atucha-I core obtained with the PUMA reactor code with MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, more symmetric than Atucha-II, and has some experimental data available. To validate the new models, benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which improves significantly the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)

  11. Resolving the mass-anisotropy degeneracy of the spherically symmetric Jeans equation - II. Optimum smoothing and model validation

    NASA Astrophysics Data System (ADS)

    Diakogiannis, Foivos I.; Lewis, Geraint F.; Ibata, Rodrigo A.

    2014-09-01

    The spherical Jeans equation is widely used to estimate the mass content of stellar systems with apparent spherical symmetry. However, this method suffers from a degeneracy between the assumed mass density and the kinematic anisotropy profile, β(r). In a previous work, we laid the theoretical foundations for an algorithm that combines smoothing B splines with equations from dynamics to remove this degeneracy. Specifically, our method reconstructs a unique kinematic profile of σ²_rr and σ²_tt for an assumed free functional form of the potential and mass density (Φ, ρ) and given a set of observed line-of-sight velocity dispersion measurements, σ²_los. In Paper I, we demonstrated the efficiency of our algorithm with a very simple example and we commented on the need for optimum smoothing of the B-spline representation; this is in order to avoid unphysical variational behaviour when we have large uncertainty in our data. In the current contribution, we present a process of finding the optimum smoothing for a given data set by using information of the behaviour from known ideal theoretical models. Markov Chain Monte Carlo methods are used to explore the degeneracy in the dynamical modelling process. We validate our model through applications to synthetic data for systems with constant or variable mass-to-light ratio Υ. In all cases, we recover excellent fits of theoretical functions to observables and unique solutions. Our algorithm is a robust method for the removal of the mass-anisotropy degeneracy of the spherically symmetric Jeans equation for an assumed functional form of the mass density.
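
    For reference, the textbook form of the spherically symmetric Jeans equation underlying this analysis is given below (standard notation, not reproduced from the paper itself): ν is the tracer density, σ_r and σ_t the radial and tangential velocity dispersions, Φ the gravitational potential, and β(r) the anisotropy profile whose degeneracy with the mass is at issue.

      % Spherical Jeans equation in standard textbook notation (illustrative only).
      \frac{\mathrm{d}\left(\nu \sigma_r^2\right)}{\mathrm{d}r}
        + \frac{2\,\beta(r)}{r}\,\nu \sigma_r^2
        = -\,\nu \frac{\mathrm{d}\Phi}{\mathrm{d}r},
      \qquad
      \beta(r) = 1 - \frac{\sigma_t^2}{\sigma_r^2}.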

  12. Data Assimilation of Photosynthetic Light-use Efficiency using Multi-angular Satellite Data: II Model Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Hilker, Thomas; Hall, Forest G.; Tucker, J.; Coops, Nicholas C.; Black, T. Andrew; Nichol, Caroline J.; Sellers, Piers J.; Barr, Alan; Hollinger, David Y.; Munger, J. W.

    2012-01-01

    Spatially explicit and temporally continuous estimates of photosynthesis will be of great importance for increasing our understanding of and ultimately closing the terrestrial carbon cycle. Current capabilities to model photosynthesis, however, are limited by accurate enough representations of the complexity of the underlying biochemical processes and the numerous environmental constraints imposed upon plant primary production. A potentially powerful alternative to model photosynthesis through these indirect observations is the use of multi-angular satellite data to infer light-use efficiency (e) directly from spectral reflectance properties in connection with canopy shadow fractions. Hall et al. (this issue) introduced a new approach for predicting gross ecosystem production that would allow the use of such observations in a data assimilation mode to obtain spatially explicit variations in e from infrequent polar-orbiting satellite observations, while meteorological data are used to account for the more dynamic responses of e to variations in environmental conditions caused by changes in weather and illumination. In this second part of the study we implement and validate the approach of Hall et al. (this issue) across an ecologically diverse array of eight flux-tower sites in North America using data acquired from the Compact High Resolution Imaging Spectroradiometer (CHRIS) and eddy-flux observations. Our results show significantly enhanced estimates of e and therefore cumulative gross ecosystem production (GEP) over the course of one year at all examined sites. We also demonstrate that e is greatly heterogeneous even across small study areas. Data assimilation and direct inference of GEP from space using a new, proposed sensor could therefore be a significant step towards closing the terrestrial carbon cycle.

  13. Ab initio structural modeling of and experimental validation for Chlamydia trachomatis protein CT296 reveal structural similarity to Fe(II) 2-oxoglutarate-dependent enzymes

    SciTech Connect

    Kemege, Kyle E.; Hickey, John M.; Lovell, Scott; Battaile, Kevin P.; Zhang, Yang; Hefty, P. Scott

    2012-02-13

    Chlamydia trachomatis is a medically important pathogen that encodes a relatively high percentage of proteins with unknown function. The three-dimensional structure of a protein can be very informative regarding the protein's functional characteristics; however, determining protein structures experimentally can be very challenging. Computational methods that model protein structures with sufficient accuracy to facilitate functional studies have had notable successes. To evaluate the accuracy and potential impact of computational protein structure modeling of hypothetical proteins encoded by Chlamydia, a successful computational method termed I-TASSER was utilized to model the three-dimensional structure of a hypothetical protein encoded by open reading frame (ORF) CT296. CT296 has been reported to exhibit functional properties of a divalent cation transcription repressor (DcrA), with similarity to the Escherichia coli iron-responsive transcriptional repressor, Fur. Unexpectedly, the I-TASSER model of CT296 exhibited no structural similarity to any DNA-interacting proteins or motifs. To validate the I-TASSER-generated model, the structure of CT296 was solved experimentally using X-ray crystallography. Impressively, the ab initio I-TASSER-generated model closely matched (2.72-Å Cα root mean square deviation [RMSD]) the high-resolution (1.8-Å) crystal structure of CT296. Modeled and experimentally determined structures of CT296 share structural characteristics of non-heme Fe(II) 2-oxoglutarate-dependent enzymes, although key enzymatic residues are not conserved, suggesting a unique biochemical process is likely associated with CT296 function. Additionally, functional analyses did not support prior reports that CT296 has properties shared with divalent cation repressors such as Fur.

  14. Validation Studies for the Diet History Questionnaire II

    Cancer.gov

    Data show that the DHQ I instrument provides reasonable nutrient estimates, and three studies were conducted to assess its validity/calibration. There have been no such validation studies with the DHQ II.

  15. MEDSLIK-II, a Lagrangian marine surface oil spill model for short-term forecasting - Part 2: Numerical simulations and validations

    NASA Astrophysics Data System (ADS)

    De Dominicis, M.; Pinardi, N.; Zodiatis, G.; Archetti, R.

    2013-11-01

    In this paper we use MEDSLIK-II, a Lagrangian marine surface oil spill model described in Part 1 (De Dominicis et al., 2013), to simulate oil slick transport and transformation processes for realistic oceanic cases, where satellite or drifting buoy data are available for verification. The model is coupled with operational oceanographic currents, atmospheric analysis winds and remote sensing data for initialization. The sensitivity of the oil spill simulations to several model parameterizations is analyzed and the results are validated using surface drifters, SAR (synthetic aperture radar) and optical satellite images in different regions of the Mediterranean Sea. It is found that the forecast skill of Lagrangian trajectories largely depends on the accuracy of the Eulerian ocean currents: the operational models give useful estimates of currents, but high-frequency (hourly) and high spatial resolution are required, and the Stokes drift velocity has to be added, especially in coastal areas. From a numerical point of view, it is found that a realistic oil concentration reconstruction is obtained using an oil tracer grid resolution of about 100 m, with at least 100 000 Lagrangian particles. Moreover, sensitivity experiments to uncertain model parameters show that oil type and slick thickness are, among all the others, key model parameters affecting the simulation results. If a maximum spatial error of the order of three times the horizontal resolution of the Eulerian ocean currents is considered acceptable for the simulated trajectories, the predictability skill for particle trajectories is from 1 to 2.5 days, depending on the specific current regime. This suggests that re-initialization of the simulations is required every day.

  16. Model Fe-Al Steel with Exceptional Resistance to High Temperature Coarsening. Part II: Experimental Validation and Applications

    NASA Astrophysics Data System (ADS)

    Zhou, Tihe; Zhang, Peng; O'Malley, Ronald J.; Zurob, Hatem S.; Subramanian, Mani

    2015-01-01

    In order to achieve a fine uniform grain-size distribution using the process of thin slab casting and direct rolling (TSCDR), it is necessary to control the grain-size prior to the onset of thermomechanical processing. In the companion paper, Model Fe-Al Steel with Exceptional Resistance to High Temperature Coarsening. Part I: Coarsening Mechanism and Particle Pinning Effects, a new steel composition which uses a small volume fraction of austenite particles to pin the growth of delta-ferrite grains at high temperature was proposed and grain growth was studied in reheated samples. This paper will focus on the development of a simple laboratory-scale setup to simulate thin-slab casting of the newly developed steel and demonstrate the potential for grain size control under industrial conditions. Steel bars with different diameters are briefly dipped into the molten steel to create a shell of solidified material. These are then cooled down to room temperature at different cooling rates. During cooling, the austenite particles nucleate along the delta-ferrite grain boundaries and greatly retard grain growth. With decreasing temperature, more austenite particles precipitate, and grain growth can be completely arrested in the holding furnace. Additional applications of the model alloy are discussed, including grain-size control in the heat affected zone in welds and grain-growth resistance at high temperature.

  17. Ecological reality and model validation

    SciTech Connect

    Cale, Jr, W. G.; Shugart, H. H.

    1980-01-01

    Definitions of model realism and model validation are developed. Ecological and mathematical arguments are then presented to show that model equations which explicitly treat ecosystem processes can be systematically improved such that greater realism is attained and the condition of validity is approached. Several examples are presented.

  18. A musculoskeletal model of the equine forelimb for determining surface stresses and strains in the humerus-part II. Experimental testing and model validation.

    PubMed

    Pollock, Sarah; Stover, Susan M; Hull, M L; Galuppo, Larry D

    2008-08-01

    The first objective of this study was to experimentally determine surface bone strain magnitudes and directions at the donor site for bone grafts, the site predisposed to stress fracture, the medial and cranial aspects of the transverse cross section corresponding to the stress fracture site, and the middle of the diaphysis of the humerus of a simplified in vitro laboratory preparation. The second objective was to determine whether computing strains solely in the direction of the longitudinal axis of the humerus in the mathematical model was inherently limited by comparing the strains measured along the longitudinal axis of the bone to the principal strain magnitudes and directions. The final objective was to determine whether the mathematical model formulated in Part I [Pollock et al., 2008, ASME J. Biomech. Eng., 130, p. 041006] is valid for determining the bone surface strains at the various locations on the humerus where experimentally measured longitudinal strains are comparable to principal strains. Triple rosette strain gauges were applied at four locations circumferentially on each of two cross sections of interest using a simplified in vitro laboratory preparation. The muscles included the biceps brachii muscle in addition to loaded shoulder muscles that were predicted active by the mathematical model. Strains from the middle grid of each rosette, aligned along the longitudinal axis of the humerus, were compared with calculated principal strain magnitudes and directions. The results indicated that calculating strains solely in the direction of the longitudinal axis is appropriate at six of eight locations. At the cranial and medial aspects of the middle of the diaphysis, the average minimum principal strain was not comparable to the average experimental longitudinal strain. Further analysis at the remaining six locations indicated that the mathematical model formulated in Part I predicts strains within +/-2 standard deviations of experimental strains at four of these locations and predicts negligible strains at the remaining two locations, which is consistent with experimental strains. Experimentally determined longitudinal strains at the middle of the diaphysis of the humerus indicate that tensile strains occur at the cranial aspect and compressive strains occur at the caudal aspect while the horse is standing, which is useful for fracture fixation. PMID:18601449

  19. ON PREDICTION AND MODEL VALIDATION

    SciTech Connect

    M. MCKAY; R. BECKMAN; K. CAMPBELL

    2001-02-01

    Quantification of prediction uncertainty is an important consideration when using mathematical models of physical systems. This paper proposes a way to incorporate ''validation data'' in a methodology for quantifying uncertainty of the mathematical predictions. The report outlines a theoretical framework.

  20. TUTORIAL: Validating biorobotic models

    NASA Astrophysics Data System (ADS)

    Webb, Barbara

    2006-09-01

    Some issues in neuroscience can be addressed by building robot models of biological sensorimotor systems. What we can conclude from building models or simulations, however, is determined by a number of factors in addition to the central hypothesis we intend to test. These include the way in which the hypothesis is represented and implemented in simulation, how the simulation output is interpreted, how it is compared to the behaviour of the biological system, and the conditions under which it is tested. These issues will be illustrated by discussing a series of robot models of cricket phonotaxis behaviour.

  1. MODEL VALIDATION AND UNCERTAINTY QUANTIFICATION.

    SciTech Connect

    Hemez, F.M.; Doebling, S.W.

    2000-10-01

    This session offers an open forum to discuss issues and directions of research in the areas of model updating, predictive quality of computer simulations, model validation and uncertainty quantification. Technical presentations review the state-of-the-art in nonlinear dynamics and model validation for structural dynamics. A panel discussion introduces the discussion on technology needs, future trends and challenges ahead with an emphasis placed on soliciting participation of the audience. One of the goals is to show, through invited contributions, how other scientific communities are approaching and solving difficulties similar to those encountered in structural dynamics. The session also serves the purpose of presenting the ongoing organization of technical meetings sponsored by the U.S. Department of Energy and dedicated to health monitoring, damage prognosis, model validation and uncertainty quantification in engineering applications. The session is part of the SD-2000 Forum, a forum to identify research trends, funding opportunities and to discuss the future of structural dynamics.

  2. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models

    PubMed Central

    2012-01-01

    Gene expression data from microarrays are being applied to predict preclinical and clinical endpoints, but the reliability of these predictions has not been established. In the MAQC-II project, 36 independent teams analyzed six microarray data sets to generate predictive models for classifying a sample with respect to one of 13 endpoints indicative of lung or liver toxicity in rodents, or of breast cancer, multiple myeloma or neuroblastoma in humans. In total, >30,000 models were built using many combinations of analytical methods. The teams generated predictive models without knowing the biological meaning of some of the endpoints and, to mimic clinical reality, tested the models on data that had not been used for training. We found that model performance depended largely on the endpoint and team proficiency and that different approaches generated models of similar performance. The conclusions and recommendations from MAQC-II should be useful for regulatory agencies, study committees and independent investigators that evaluate methods for global gene expression analysis. PMID:20676074
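
    The core practice described here can be illustrated with a small sketch: fit a classifier on a training set and score it only on an external set never used during training. This is not the MAQC-II pipeline; the classifier, the synthetic data, and the use of the Matthews correlation coefficient via scikit-learn are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in expression data: 100 training samples, 60 external samples, 500 probes each.
X_train, y_train = rng.normal(size=(100, 500)), rng.integers(0, 2, 100)
X_external, y_external = rng.normal(size=(60, 500)), rng.integers(0, 2, 60)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)                # training data only
pred = model.predict(X_external)           # blind external test set, as in the study design
print("External MCC:", matthews_corrcoef(y_external, pred))
```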

  3. A dynamic model of some malaria-transmitting anopheline mosquitoes of the Afrotropical region. II. Validation of species distribution and seasonal variations

    PubMed Central

    2013-01-01

    Background The first part of this study aimed to develop a model for Anopheles gambiae s.l. with separate parametrization schemes for Anopheles gambiae s.s. and Anopheles arabiensis. The characterizations were constructed based on literature from the past decades. This part of the study focuses on the model’s ability to separate the mean state of the two species of the An. gambiae complex in Africa. The model is also evaluated with respect to capturing the temporal variability of An. arabiensis in Ethiopia. Before conclusions and guidance based on models can be made, models need to be validated. Methods The model used in this paper is described in part one (Malaria Journal 2013, 12:28). For the validation of the model, a database of 5,935 points on the presence of An. gambiae s.s. and An. arabiensis was constructed. An additional 992 points were collected on the presence of An. gambiae s.l. These data were used to assess if the model could recreate the spatial distribution of the two species. The dataset is made available in the public domain. This is followed by a case study from Madagascar where the model’s ability to recreate the relative fraction of each species is investigated. In the last section the model’s ability to reproduce the temporal variability of An. arabiensis in Ethiopia is tested. The model was compared with data from four papers, and one field survey covering two years. Results Overall, the model has a realistic representation of seasonal and year-to-year variability in mosquito densities in Ethiopia. The model is also able to describe the distribution of An. gambiae s.s. and An. arabiensis in sub-Saharan Africa. This implies the model can be used for seasonal and long-term predictions of changes in the burden of malaria. Before models can be used to improve human health, or guide which interventions are to be applied where, there is a need to understand the system of interest. Validation is an important part of this process. It is also found that one of the main mechanisms separating An. gambiae s.s. and An. arabiensis is the availability of hosts: humans and cattle. Climate plays a secondary, but still important, role.

  4. Partial Validation of Multibody Program to Optimize Simulated Trajectories II (POST II) Parachute Simulation With Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Ben; Queen, Eric M.

    2002-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed. This capability uses the Program to Optimize Simulated Trajectories II (POST II). Previously, POST II had the ability to simulate multiple bodies without interacting forces. The current implementation is used for the simulation of parachute trajectories, in which the parachute and suspended bodies can be treated as rigid bodies. An arbitrary set of connecting lines can be included in the model and are treated as massless spring-dampers. This paper discusses details of the connection line modeling and results of several test cases used to validate the capability.
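
    The record describes the connecting lines as massless spring-dampers. A minimal sketch of such a tension-only line force is given below; the stiffness and damping values are purely illustrative and are not taken from the POST II implementation.

```python
import numpy as np

def line_force(r_a, r_b, v_a, v_b, L0, k, c):
    """Tension-only force of a massless spring-damper line on attachment point A.

    r_a, r_b : position vectors of the two attachment points
    v_a, v_b : velocities of those points
    L0       : unstretched line length; k, c : stiffness and damping (illustrative)
    Returns the force vector on point A; point B receives the opposite force.
    """
    d = r_b - r_a
    L = np.linalg.norm(d)
    u = d / L                                   # unit vector from A toward B
    stretch = L - L0
    if stretch <= 0.0:                          # a slack line carries no load
        return np.zeros(3)
    rate = np.dot(v_b - v_a, u)                 # elongation rate along the line
    tension = k * stretch + c * rate
    return max(tension, 0.0) * u                # the line can only pull, never push

# Hypothetical case: confluence point 1 m beyond the unstretched length, at rest.
print(line_force(np.zeros(3), np.array([0.0, 0.0, 11.0]),
                 np.zeros(3), np.zeros(3), L0=10.0, k=5.0e4, c=2.0e2))
```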

  5. Code validation with EBR-II test data

    SciTech Connect

    Herzog, J.P.; Chang, L.K.; Dean, E.M.; Feldman, E.E.; Hill, D.J.; Mohr, D.; Planchon, H.P.

    1992-01-01

    An extensive system of computer codes is used at Argonne National Laboratory to analyze whole-plant transient behavior of the Experimental Breeder Reactor 2. Three of these codes, NATDEMO/HOTCHAN, SASSYS, and DSNP have been validated with data from reactor transient tests. The validated codes are the foundation of safety analyses and pretest predictions for the continuing design improvements and experimental programs in EBR-II, and are also valuable tools for the analysis of innovative reactor designs.

  6. Code validation with EBR-II test data

    SciTech Connect

    Herzog, J.P.; Chang, L.K.; Dean, E.M.; Feldman, E.E.; Hill, D.J.; Mohr, D.; Planchon, H.P.

    1992-07-01

    An extensive system of computer codes is used at Argonne National Laboratory to analyze whole-plant transient behavior of the Experimental Breeder Reactor 2. Three of these codes, NATDEMO/HOTCHAN, SASSYS, and DSNP have been validated with data from reactor transient tests. The validated codes are the foundation of safety analyses and pretest predictions for the continuing design improvements and experimental programs in EBR-II, and are also valuable tools for the analysis of innovative reactor designs.

  7. MODEL VALIDATION REPORT FOR THE HOUSATONIC RIVER

    EPA Science Inventory

    The Model Validation Report will present a comparison of model validation runs to existing data for the model validation period. The validation period spans twenty years to test the predictive capability of the model over a longer time period, similar to that which wil...

  8. Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2014-01-01

    Computational fluid dynamics (CFD) software that solves the Reynolds-averaged Navier-Stokes (RANS) equations has been in routine use for more than a quarter of a century. It is currently employed not only for basic research in fluid dynamics, but also for the analysis and design processes in many industries worldwide, including aerospace, automotive, power generation, chemical manufacturing, polymer processing, and petroleum exploration. A key feature of RANS CFD is the turbulence model. Because the RANS equations are unclosed, a model is necessary to describe the effects of the turbulence on the mean flow, through the Reynolds stress terms. The turbulence model is one of the largest sources of uncertainty in RANS CFD, and most models are known to be flawed in one way or another. Alternative methods such as direct numerical simulations (DNS) and large eddy simulations (LES) rely less on modeling and hence include more physics than RANS. In DNS all turbulent scales are resolved, and in LES the large scales are resolved and the effects of the smallest turbulence scales are modeled. However, both DNS and LES are too expensive for most routine industrial usage on today's computers. Hybrid RANS-LES, which blends RANS near walls with LES away from walls, helps to moderate the cost while still retaining some of the scale-resolving capability of LES, but for some applications it can still be too expensive. Even considering its associated uncertainties, RANS turbulence modeling has proved to be very useful for a wide variety of applications. For example, in the aerospace field, many RANS models are considered to be reliable for computing attached flows. However, existing turbulence models are known to be inaccurate for many flows involving separation. Research has been ongoing for decades in an attempt to improve turbulence models for separated and other nonequilibrium flows. When developing or improving turbulence models, both verification and validation are important steps in the process. Verification insures that the CFD code is solving the equations as intended (no errors in the implementation). This is typically done either through the method of manufactured solutions (MMS) or through careful step-by-step comparisons with other verified codes. After the verification step is concluded, validation is performed to document the ability of the turbulence model to represent different types of flow physics. Validation can involve a large number of test case comparisons with experiments, theory, or DNS. Organized workshops have proved to be valuable resources for the turbulence modeling community in its pursuit of turbulence modeling verification and validation. Workshop contributors using different CFD codes run the same cases, often according to strict guidelines, and compare results. Through these comparisons, it is often possible to (1) identify codes that have likely implementation errors, and (2) gain insight into the capabilities and shortcomings of different turbulence models to predict the flow physics associated with particular types of flows. These are valuable lessons because they help bring consistency to CFD codes by encouraging the correction of faulty programming and facilitating the adoption of better models. They also sometimes point to specific areas needed for improvement in the models. In this paper, several recent workshops are summarized primarily from the point of view of turbulence modeling verification and validation. 
Furthermore, the NASA Langley Turbulence Modeling Resource website is described. The purpose of this site is to provide a central location where RANS turbulence models are documented, and test cases, grids, and data are provided. The goal of this paper is to provide an abbreviated survey of turbulence modeling verification and validation efforts, summarize some of the outcomes, and give some ideas for future endeavors in this area.
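
    One verification practice named in this record, the method of manufactured solutions, is usually summarized by an observed order of accuracy computed from error norms on successively refined grids. The sketch below shows only that bookkeeping step, with illustrative numbers; it is not tied to any particular CFD code.

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio):
    """Observed order of accuracy from discretization errors on two grids.

    err_coarse, err_fine : error norms against an exact (e.g. manufactured) solution
    refinement_ratio     : coarse spacing / fine spacing (e.g. 2 for grid doubling)
    """
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# Illustrative numbers only: halving the grid spacing cuts the error by roughly 4x,
# so the observed order is close to the formal second order of the scheme.
print(observed_order(err_coarse=3.9e-3, err_fine=1.0e-3, refinement_ratio=2.0))
```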

  9. Simulating long-term dynamics of the coupled North Sea and Baltic Sea ecosystem with ECOSMO II: Model description and validation

    NASA Astrophysics Data System (ADS)

    Daewel, Ute; Schrum, Corinna

    2013-06-01

    The North Sea and the Baltic Sea ecosystems differ substantially in both hydrology and biogeochemical processes. Nonetheless, both systems are closely linked to each other and a coupled modeling approach is indispensable when aiming to simulate and understand long-term ecosystem dynamics in both seas. In this study, we first present an updated version of the fully coupled bio-physical model ECOSMO, a 3D hydrodynamic and a N(utrient)P(hytoplankton)Z(ooplankton)D(etritus) model, which is now adapted to the coupled North Sea-Baltic Sea system. To make the model applicable to both ecosystems, processes relevant for the Baltic Sea (e.g. sedimentation, cyanobacteria) were incorporated into the model formulation. Secondly, we assess the validity of the model to describe seasonal, inter-annual and decadal variations in both seas. Our analyses show that the model sufficiently represents the spatial and temporal dynamics in both ecosystems but with some uncertainties in the coastal areas of the North Sea, likely related to the missing representation of tidal flats in the model, and in the deep-water nutrient pool of the Baltic Sea. Finally, we present results from a 61-year (1948-2008) hindcast of the coupled North Sea and Baltic Sea ecosystem and identify long-term changes in primary and secondary production. The simulated long-term dynamics of primary and secondary production could be corroborated by observations from available literature and show a general increase in the last three decades of the simulation when compared to the first 30 years. Regime shifts could be identified for both ecosystems, but with differences in both timing and magnitude of the related change.
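
    ECOSMO II itself is a full three-dimensional coupled model; as a point of reference only, the generic NPZD state variables named in this record can be illustrated with a zero-dimensional box-model sketch. All rate constants below are hypothetical and are not ECOSMO II parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

def npzd(t, y, mu=1.0, k_n=0.5, g=0.4, k_p=0.6, m_p=0.05, m_z=0.05, r=0.1):
    """Minimal NPZD box model: nutrient, phytoplankton, zooplankton, detritus."""
    N, P, Z, D = y
    uptake = mu * N / (k_n + N) * P          # nutrient-limited primary production
    grazing = g * P / (k_p + P) * Z          # zooplankton grazing on phytoplankton
    dN = -uptake + r * D                     # remineralization closes the nutrient loop
    dP = uptake - grazing - m_p * P
    dZ = 0.7 * grazing - m_z * Z             # 70% assimilation efficiency (assumed)
    dD = 0.3 * grazing + m_p * P + m_z * Z - r * D
    return [dN, dP, dZ, dD]

sol = solve_ivp(npzd, (0.0, 120.0), [5.0, 0.1, 0.05, 0.0], dense_output=True)
print(sol.y[:, -1])   # pool sizes after 120 days
```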

  10. ADAPT model: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents an overview of the Agricultural Drainage and Pesticide Transport (ADAPT) model and a case study to illustrate the calibration and validation steps for predicting subsurface tile drainage and nitrate-N losses from an agricultural system. The ADAPT model is a daily time step field ...
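
    Calibration and validation of drainage and nitrate-N predictions are commonly summarized with a goodness-of-fit statistic; the record does not say which one the ADAPT case study used, so the Nash-Sutcliffe efficiency below is offered only as a representative example with hypothetical data.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical daily tile-drainage values (mm) for a validation period
obs = [1.2, 0.8, 0.0, 2.5, 3.1, 0.4, 0.0]
sim = [1.0, 0.9, 0.1, 2.2, 3.4, 0.5, 0.1]
print(round(nash_sutcliffe(obs, sim), 3))
```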

  11. (Validity of environmental transfer models)

    SciTech Connect

    Blaylock, B.G.; Hoffman, F.O.; Gardner, R.H.

    1990-11-07

    BIOMOVS (BIOspheric MOdel Validation Study) is an international cooperative study initiated in 1985 by the Swedish National Institute of Radiation Protection to test models designed to calculate the environmental transfer and bioaccumulation of radionuclides and other trace substances. The objective of the symposium and workshop was to synthesize results obtained during Phase 1 of BIOMOVS (the first five years of the study) and to suggest new directions that might be pursued during Phase 2 of BIOMOVS. The travelers were an instrumental part of the development of BIOMOVS. This symposium allowed the travelers to present a review of past efforts at model validation and a synthesis of current activities and to refine ideas concerning future development of models and data for assessing the fate, effect, and human risks of environmental contaminants. R. H. Gardner also visited the Free University, Amsterdam, and the National Institute of Public Health and Environmental Protection (RIVM) in Bilthoven to confer with scientists about current research in theoretical ecology and the use of models for estimating the transport and effect of environmental contaminants and to learn about the European efforts to map critical loads of acid deposition.

  12. Validation for a recirculation model.

    PubMed

    LaPuma, P T

    2001-04-01

    Recent Clean Air Act regulations designed to reduce volatile organic compound (VOC) emissions have placed new restrictions on painting operations. Treating large volumes of air which contain dilute quantities of VOCs can be expensive. Recirculating some fraction of the air allows an operator to comply with environmental regulations at reduced cost. However, there is a potential impact on employee safety because indoor pollutants will inevitably increase when air is recirculated. A computer model was developed, written in Microsoft Excel 97, to predict compliance costs and indoor air concentration changes with respect to changes in the level of recirculation for a given facility. The model predicts indoor air concentrations based on product usage and mass balance equations. This article validates the recirculation model using data collected from a C-130 aircraft painting facility at Hill Air Force Base, Utah. Air sampling data and air control cost quotes from vendors were collected for the Hill AFB painting facility and compared to the model's predictions. The model's predictions for strontium chromate and isocyanate air concentrations were generally between the maximum and minimum air sampling points with a tendency to predict near the maximum sampling points. The model's capital cost predictions for a thermal VOC control device ranged from a 14 percent underestimate to a 50 percent overestimate of the average cost quotes. A sensitivity analysis of the variables is also included. The model is demonstrated to be a good evaluation tool in understanding the impact of recirculation. PMID:11318387
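
    The record states that the model predicts indoor concentrations from product usage and mass balance equations. A heavily simplified, well-mixed steady-state sketch of such a balance is given below; the variable names, numbers, and single-zone assumption are illustrative and do not reproduce the published spreadsheet model.

```python
def steady_state_concentration(emission_rate, fresh_flow, recirc_flow, removal_eff):
    """Well-mixed, steady-state booth concentration with partial recirculation.

    emission_rate : contaminant generation from product usage (mg/min)
    fresh_flow    : once-through fresh air flow (m^3/min)
    recirc_flow   : recirculated air flow (m^3/min)
    removal_eff   : fractional removal of the contaminant in the recirculation loop
    Returns concentration in mg/m^3.
    """
    return emission_rate / (fresh_flow + removal_eff * recirc_flow)

# Hypothetical numbers: replacing half the fresh air with untreated recirculated air
# doubles the indoor concentration, illustrating the safety trade-off discussed above.
print(steady_state_concentration(500.0, fresh_flow=2000.0, recirc_flow=0.0, removal_eff=0.0))
print(steady_state_concentration(500.0, fresh_flow=1000.0, recirc_flow=1000.0, removal_eff=0.0))
```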

  13. Validation of Magnetospheric Magnetohydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Curtis, Brian

    Magnetospheric magnetohydrodynamic (MHD) models are commonly used for both prediction and modeling of Earth's magnetosphere. To date, very little validation has been performed to determine their limits, uncertainties, and differences. In this work, we performed a comprehensive analysis, applying several validation techniques commonly used in the atmospheric sciences to MHD-based models of Earth's magnetosphere for the first time. The validation techniques of parameter variability/sensitivity analysis and comparison to other models were used on the OpenGGCM, BATS-R-US, and SWMF magnetospheric MHD models to answer several questions about how these models compare. The questions include: (1) the difference between the model's predictions prior to and following a reversal of Bz in the upstream interplanetary magnetic field (IMF) from positive to negative, (2) the influence of the preconditioning duration, and (3) the differences between models under extreme solar wind conditions. A differencing visualization tool was developed and used to address these three questions. We find: (1) For a reversal in IMF Bz from positive to negative, the OpenGGCM magnetopause is closest to Earth as it has the weakest magnetic pressure near-Earth. The differences in magnetopause positions between BATS-R-US and SWMF are explained by the influence of the ring current, which is included in SWMF. Densities are highest for SWMF and lowest for OpenGGCM. The OpenGGCM tail currents differ significantly from BATS-R-US and SWMF; (2) A longer preconditioning time allowed the magnetosphere to relax more, giving different positions for the magnetopause with all three models before the IMF Bz reversal. There were differences greater than 100% for all three models before the IMF Bz reversal. The differences in the current sheet region for the OpenGGCM were small after the IMF Bz reversal. The BATS-R-US and SWMF differences decreased after the IMF Bz reversal to near zero; (3) For extreme conditions in the solar wind, the OpenGGCM has a large region of Earthward flow velocity (Ux) in the current sheet region that grows as time progresses in a compressed environment. BATS-R-US Bz, rho, and Ux stabilize to a near constant value approximately one hour into the run under high compression conditions. Under high compression, the SWMF parameters begin to oscillate approximately 100 minutes into the run. All three models have similar magnetopause positions under low pressure conditions. The OpenGGCM current sheet velocities along the Sun-Earth line are largest under low pressure conditions. The results of this analysis indicate the need for accounting for model uncertainties and differences when comparing model predictions with data, provide error bars on model prediction in various magnetospheric regions, and show that the magnetotail is sensitive to the preconditioning time.

  14. Developing better and more valid animal models of brain disorders.

    PubMed

    Stewart, Adam Michael; Kalueff, Allan V

    2015-01-01

    Valid, sensitive animal models are crucial for understanding the pathobiology of complex human disorders, such as anxiety, autism, depression and schizophrenia, all of which have a 'spectrum' nature. Discussing new important strategic directions of research in this field, here we focus on (i) cross-species validation of animal models, (ii) ensuring their population (external) validity, and (iii) the need to target the interplay between multiple disordered domains. We note that optimal animal models of brain disorders should target evolutionarily conserved 'core' traits/domains and specifically mimic the clinically relevant inter-relationships between these domains. PMID:24384129

  15. Assimilation of Geosat Altimetric Data in a Nonlinear Shallow-Water Model of the Indian Ocean by Adjoint Approach. Part II: Some Validation and Interpretation of the Assimilated Results

    NASA Technical Reports Server (NTRS)

    Greiner, Eric; Perigaud, Claire

    1996-01-01

    This paper examines the results of assimilating Geosat sea level variations relative to the November 1986-November 1988 mean reference, in a nonlinear reduced-gravity model of the Indian Ocean. Data have been assimilated during one year starting in November 1986 with the objective of optimizing the initial conditions and the yearly averaged reference surface. The thermocline slope simulated by the model with or without assimilation is validated by comparison with the signal, which can be derived from expendable bathythermograph measurements performed in the Indian Ocean at that time. The topography simulated with assimilation on November 1986 is in very good agreement with the hydrographic data. The slopes corresponding to the South Equatorial Current and to the South Equatorial Countercurrent are better reproduced with assimilation than without during the first nine months. The whole circulation of the cyclonic gyre south of the equator is then strongly intensified by assimilation. Another assimilation experiment is run over the following year starting in November 1987. The difference between the two yearly mean surfaces simulated with assimilation is in excellent agreement with Geosat. In the southeastern Indian Ocean, the correction to the yearly mean dynamic topography due to assimilation over the second year is negatively correlated to the one the year before. This correction is also in agreement with hydrographic data. It is likely that the signal corrected by assimilation is not only due to wind error, because simulations driven by various wind forcings present the same features over the two years. Model simulations run with a prescribed throughflow transport anomaly indicate that assimilation is instead correcting, in the interior of the model domain, for inadequate boundary conditions with the Pacific.

  16. Obstructive lung disease models: what is valid?

    PubMed

    Ferdinands, Jill M; Mannino, David M

    2008-12-01

    Use of disease simulation models has led to scrutiny of model methods and demand for evidence that models credibly simulate health outcomes. We sought to describe recent obstructive lung disease simulation models and their validation. Medline and EMBASE were used to identify obstructive lung disease simulation models published from January 2000 to June 2006. Publications were reviewed to assess model attributes and four types of validation: first-order (verification/debugging), second-order (comparison with studies used in model development), third-order (comparison with studies not used in model development), and predictive validity. Six asthma and seven chronic obstructive pulmonary disease models were identified. Seven (54%) models included second-order validation, typically by comparing observed outcomes to simulations of source study cohorts. Seven (54%) models included third-order validation, in which modeled outcomes were usually compared qualitatively for agreement with studies independent of the model. Validation endpoints included disease prevalence, exacerbation, and all-cause mortality. Validation was typically described as acceptable, despite near-universal absence of criteria for judging adequacy of validation. Although over half of recent obstructive lung disease simulation models report validation, inconsistencies in validation methods and lack of detailed reporting make assessing adequacy of validation difficult. For simulation modeling to be accepted as a tool for evaluating clinical and public health programs, models must be validated to credibly simulate health outcomes of interest. Defining the required level of validation and providing guidance for quantitative assessment and reporting of validation are important future steps in promoting simulation models as practical decision tools. PMID:19353353

  17. A statistical-dynamical scheme for reconstructing ocean forcing in the Atlantic. Part II: methodology, validation and application to high-resolution ocean models

    NASA Astrophysics Data System (ADS)

    Minvielle, Marie; Cassou, Christophe; Bourdallé-Badie, Romain; Terray, Laurent; Najac, Julien

    2011-02-01

    A novel statistical-dynamical scheme has been developed to reconstruct the sea surface atmospheric variables necessary to force an ocean model. Multiple linear regressions are first built over a so-called learning period and over the entire Atlantic basin from the observed relationship between the surface wind conditions, or predictands, and the anomalous large scale atmospheric circulations, or predictors. The latter are estimated in the extratropics by 500 hPa geopotential height weather regimes and in the tropics by low-level wind classes. The transfer function further combined to an analog step is then used to reconstruct all the surface variables fields over 1958-2002. We show that the proposed hybrid scheme is very skillful in reproducing the mean state, the seasonal cycle and the temporal evolution of all the surface ocean variables at interannual timescale. Deficiencies are found in the level of variance especially in the tropics. It is underestimated for 2-m temperature and humidity as well as for surface radiative fluxes in the interannual frequency band while it is slightly overestimated at higher frequency. Decomposition in empirical orthogonal function (EOF) shows that the spatial and temporal coherence of the forcing fields is however very well captured by the reconstruction method. For dynamical downscaling purposes, reconstructed fields are then interpolated and used to carry out a high-resolution oceanic simulation using the NATL4 (1/4) model integrated over 1979-2001. This simulation is compared to a reference experiment where the original observed forcing fields are prescribed instead. Mean states between the two experiments are virtually undistinguishable both in terms of surface fluxes and ocean dynamics estimated by the barotropic and the meridional overturning streamfunctions. The 3-dimensional variance of the simulated ocean is well preserved at interannual timescale both for temperature and salinity except in the tropics where it is underestimated. The main modes of interannual variability assessed through EOF are correctly reproduced for sea surface temperature, barotropic streamfunction and mixed layer depth both in terms of spatial structure and temporal evolution. Collectively, our results provide evidence that the statistical-dynamical scheme presented in this two-part study is an efficient and promising tool to infer oceanic changes (in particular those related to the wind-driven circulation) due to modifications in the large-scale atmospheric circulation. As a prerequisite, we have here validated the method for present-day climate; we encourage its use for climate change studies with some adaptations though.
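
    The statistical step described here is a multiple linear regression from large-scale circulation predictors (regime and wind-class indices) to local surface predictands over a learning period. The sketch below illustrates only that regression step on synthetic data; the analog step and the actual predictor definitions of the published scheme are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n_learn, n_rec, n_pred = 200, 50, 4
X_learn = rng.normal(size=(n_learn, n_pred))               # predictor indices, learning period
true_coefs = np.array([1.5, -0.7, 0.3, 0.0])               # synthetic "truth" for the demo
y_learn = X_learn @ true_coefs + 0.2 * rng.normal(size=n_learn)   # e.g. 10 m wind anomaly

A = np.column_stack([X_learn, np.ones(n_learn)])            # add an intercept column
coefs, *_ = np.linalg.lstsq(A, y_learn, rcond=None)          # fit over the learning period

X_new = rng.normal(size=(n_rec, n_pred))                     # predictors outside the learning period
y_reconstructed = np.column_stack([X_new, np.ones(n_rec)]) @ coefs
print(coefs.round(2), y_reconstructed[:3].round(2))
```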

  18. Preparation of Power Distribution System for High Penetration of Renewable Energy Part I. Dynamic Voltage Restorer for Voltage Regulation Pat II. Distribution Circuit Modeling and Validation

    NASA Astrophysics Data System (ADS)

    Khoshkbar Sadigh, Arash

    Part I: Dynamic Voltage Restorer In the present power grids, voltage sags are recognized as a serious threat and a frequently occurring power-quality problem and have costly consequence such as sensitive loads tripping and production loss. Consequently, the demand for high power quality and voltage stability becomes a pressing issue. Dynamic voltage restorer (DVR), as a custom power device, is more effective and direct solutions for "restoring" the quality of voltage at its load-side terminals when the quality of voltage at its source-side terminals is disturbed. In the first part of this thesis, a DVR configuration with no need of bulky dc link capacitor or energy storage is proposed. This fact causes to reduce the size of the DVR and increase the reliability of the circuit. In addition, the proposed DVR topology is based on high-frequency isolation transformer resulting in the size reduction of transformer. The proposed DVR circuit, which is suitable for both low- and medium-voltage applications, is based on dc-ac converters connected in series to split the main dc link between the inputs of dc-ac converters. This feature makes it possible to use modular dc-ac converters and utilize low-voltage components in these converters whenever it is required to use DVR in medium-voltage application. The proposed configuration is tested under different conditions of load power factor and grid voltage harmonic. It has been shown that proposed DVR can compensate the voltage sag effectively and protect the sensitive loads. Following the proposition of the DVR topology, a fundamental voltage amplitude detection method which is applicable in both single/three-phase systems for DVR applications is proposed. The advantages of proposed method include application in distorted power grid with no need of any low-pass filter, precise and reliable detection, simple computation and implementation without using a phased locked loop and lookup table. The proposed method has been verified by simulation and experimental tests under various conditions considering all possible cases such as different amounts of voltage sag depth (VSD), different amounts of point-on-wave (POW) at which voltage sag occurs, harmonic distortion, line frequency variation, and phase jump (PJ). Furthermore, the ripple amount of fundamental voltage amplitude calculated by the proposed method and its error is analyzed considering the line frequency variation together with harmonic distortion. The best and worst detection time of proposed method were measured 1ms and 8.8ms, respectively. Finally, the proposed method has been compared with other voltage sag detection methods available in literature. Part 2: Power System Modeling for Renewable Energy Integration: As power distribution systems are evolving into more complex networks, electrical engineers have to rely on software tools to perform circuit analysis. There are dozens of powerful software tools available in the market to perform the power system studies. Although their main functions are similar, there are differences in features and formatting structures to suit specific applications. This creates challenges for transferring power system circuit models data (PSCMD) between different software and rebuilding the same circuit in the second software environment. The objective of this part of thesis is to develop a Unified Platform (UP) to facilitate transferring PSCMD among different software packages and relieve the challenges of the circuit model conversion process. 
UP uses a commonly available spreadsheet file with a defined format, for any home software to write data to and for any destination software to read data from, via a script-based application called PSCMD transfer application. The main considerations in developing the UP are to minimize manual intervention and import a one-line diagram into the destination software or export it from the source software, with all details to allow load flow, short circuit and other analyses. In this study, ETAP, OpenDSS, and GridLab-D are considered, and PSCMD transfer applications written in MATLAB have been developed for each of these to read the circuit model data provided in the UP spreadsheet. In order to test the developed PSCMD transfer applications, circuit model data of a test circuit and a power distribution circuit from Southern California Edison (SCE) - a utility company - both built in CYME, were exported into the spreadsheet file according to the UP format. Thereafter, circuit model data were imported successfully from the spreadsheet files into above mentioned software using the PSCMD transfer applications developed for each software. After the SCE studied circuit is transferred into OpenDSS software using the proposed UP scheme and developed application, it has been studied to investigate the impacts of large-scale solar energy penetration. The main challenge of solar energy integration into power grid is its intermittency (i.e., discontinuity of output power) nature due to cloud shading of photovoltaic panels which depends on weather conditions. In order to conduct this study, OpenDSS time-series simulation feature, which is required due to intermittency of solar energy, is utilized. In this study, the impacts of intermittency of solar energy penetration, especially high-variability points, on voltage fluctuation and operation of capacitor bank and voltage regulator is provided. In addition, the necessity to interpolate and resample unequally spaced time-series measurement data and convert them to equally spaced time-series data as well as the effect of resampling time-interval on the amount of error is discussed. Two applications are developed in Matlab to do interpolation and resampling as well as to calculate the amount of error for different resampling time-intervals to figure out the suitable resampling time-interval. Furthermore, an approach based on cumulative distribution, regarding the length for lines/cables types and the power rating for loads, is presented to prioritize which loads, lines and cables the meters should be installed at to have the most effect on model validation.
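
    One step described above is interpolating and resampling unequally spaced time-series measurements onto an equally spaced time base and examining how the resampling interval affects the error. The sketch below illustrates that idea with synthetic data and linear interpolation; it does not reproduce the MATLAB applications developed in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
t_irregular = np.sort(rng.uniform(0.0, 600.0, size=400))        # seconds, unevenly spaced
signal = np.sin(2 * np.pi * t_irregular / 300.0)                # stand-in feeder measurement

for dt in (1.0, 10.0, 60.0):                                    # candidate resampling intervals
    t_regular = np.arange(0.0, 600.0, dt)
    resampled = np.interp(t_regular, t_irregular, signal)       # linear interpolation
    truth = np.sin(2 * np.pi * t_regular / 300.0)               # known signal for error estimate
    rmse = np.sqrt(np.mean((resampled - truth) ** 2))
    print(f"dt = {dt:5.1f} s  RMSE = {rmse:.4f}")
```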

  19. Geochemistry Model Validation Report: External Accumulation Model

    SciTech Connect

    K. Zarrabi

    2001-09-27

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations; where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC, is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water; thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC, are processed to produce mass of accumulation, density of accumulation, and the geometry of the accumulation zone. The density of accumulation and the geometry of the accumulation zone are calculated using a characterization of the fracture system based on field measurements made in the proposed repository (BSC 2001k). The model predicts that accumulation would spread out in a conical accumulation volume. The accumulation volume is represented with layers as shown in Figure 1. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance.

  20. Inert doublet model and LEP II limits

    SciTech Connect

    Lundstroem, Erik; Gustafsson, Michael; Edsjoe, Joakim

    2009-02-01

    The inert doublet model is a minimal extension of the standard model introducing an additional SU(2) doublet with new scalar particles that could be produced at accelerators. While there exists no LEP II analysis dedicated to these inert scalars, the absence of a signal within searches for supersymmetric neutralinos can be used to constrain the inert doublet model. This translation, however, requires some care because of the different properties of the inert scalars and the neutralinos. We investigate what restrictions an existing DELPHI Collaboration study of neutralino pair production can put on the inert scalars and discuss the result in connection with dark matter. We find that although an important part of the inert doublet model parameter space can be excluded by the LEP II data, the lightest inert particle still constitutes a valid dark matter candidate.

  1. The range of validity of the two-body approximation in models of terrestrial planet accumulation. II - Gravitational cross sections and runaway accretion

    NASA Technical Reports Server (NTRS)

    Wetherill, G. W.; Cox, L. P.

    1985-01-01

    The validity of the two-body approximation in calculating encounters between planetesimals has been evaluated as a function of the ratio of unperturbed planetesimal velocity (with respect to a circular orbit) to mutual escape velocity when their surfaces are in contact (V/V-sub-e). Impact rates as a function of this ratio are calculated to within about 20 percent by numerical integration of the equations of motion. It is found that when the ratio is greater than 0.4 the two-body approximation is a good one. Consequences of reducing the ratio to less than 0.02 are examined. Factors leading to an optimal size for growth of planetesimals from a swarm of given eccentricity and placing a limit on the extent of runaway accretion are derived.
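
    The two-body result behind the ratio V/V_e is the gravitationally focused collision cross section, sigma = pi R^2 (1 + (V_e/V)^2). The short sketch below evaluates the focusing enhancement for the velocity ratios mentioned in the record; the radii and speeds used are arbitrary placeholders.

```python
import math

def collision_cross_section(radius_sum, v_rel, v_escape):
    """Two-body collision cross section with gravitational focusing.

    radius_sum : sum of the two planetesimal radii (m)
    v_rel      : unperturbed relative velocity (m/s)
    v_escape   : mutual escape velocity when the surfaces are in contact (m/s)
    """
    return math.pi * radius_sum ** 2 * (1.0 + (v_escape / v_rel) ** 2)

geometric = math.pi * 1.0e6 ** 2
for ratio in (1.0, 0.4, 0.02):          # V/V_e values discussed in the abstract
    sigma = collision_cross_section(1.0e6, v_rel=ratio * 100.0, v_escape=100.0)
    print(f"V/V_e = {ratio:5.2f}  enhancement = {sigma / geometric:8.1f}x")
```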

  2. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.

  3. Model validation software -- Theory manual

    SciTech Connect

    Dolin, R.M.

    1997-11-04

    Work began in May of 1991 on the initial Independent Spline (IS) technology. The IS technology was based on research by Dolin showing that numerical topology and geometry could be validated through their topography. A unique contribution to this research is that the IS technology has provided a capability to modify one spline`s topology to match another spline`s topography. Work began in May of 1996 to extend the original IS capability to allow solid model topologies to be compared with corresponding two-dimensional topologies. Work began in July, 1996 to extend the IS capability to allow for tool path and inspection data analyses. Tool path analysis involves spline-spline comparisons. Inspection data analysis involves fitting inspection data with some type of analytical curve and then comparing that curve with the original (i.e., nominal) curve topology. There are three types of curves that the inspection data can be fit with. Using all three types of curve fits help engineers understand the As-Built state of whatever it is that is being interrogated. The ability to compute axi-symmetric volumes of revolution for a data set fit with either of the three curves fitting methods described above will be added later. This involves integrating the area under each curve and then revolving the area through 2{pi} radians to get a volume of revolution. The algorithms for doing this will be taken from the IGVIEW software system. The main IS program module parses out the desired activities into four different logical paths: (1) original IS spline modification; (2) two- or three-dimensional topography evaluated against 2D spline; (3) tool path analysis with tool path modifications; and (4) tool path and inspection data comparisons with nominal topography. Users have the option of running the traditional IS application software, comparing 3D ASCII data to a Wilson-Fowler spline interpolation of 2D data, comparing a Wilson-Fowler spline interpolation to analytical topology, or comparing two Wilson-Fowler spline interpolations with each other. Even though there are four independent logic paths that can be taken, eventually they share subroutines to accomplish their tasks. When and how each logical path does something is however, unique. In another mode the author has to determine what contour/spline/topology is going to be the baseline/nominal topology. He reads that in and generates an APT WF spline with it. He then reads an external topology data file and fits the data with linear, quadratic, and cubic curves. Then each of these curves is analyzed against the baseline topology and an analysis summary is generated.

  4. NASA GSFC CCMC Recent Model Validation Activities

    NASA Technical Reports Server (NTRS)

    Rastaetter, L.; Pulkkinen, A.; Taktakishvill, A.; Macneice, P.; Shim, J. S.; Zheng, Yihua; Kuznetsova, M. M.; Hesse, M.

    2012-01-01

    The Community Coordinated Modeling Center (CCMC) holds the largest assembly of state-of-the-art physics-based space weather models developed by the international space physics community. In addition to providing the community easy access to these modern space research models to support science research, another primary goal is to test and validate models for transition from research to operations. In this presentation, we provide an overview of the space science models available at CCMC. Then we will focus on the community-wide model validation efforts led by CCMC in all domains of the Sun-Earth system and the internal validation efforts at CCMC to support space weather services/operations provided by its sibling organization - NASA GSFC Space Weather Center (http://swc.gsfc.nasa.gov). We will also discuss our efforts in operational model validation in collaboration with NOAA/SWPC.

  5. Algorithm for model validation: theory and applications.

    PubMed

    Sornette, D; Davis, A B; Ide, K; Vixie, K R; Pisarenko, V; Kamm, J R

    2007-04-17

    Validation is often defined as the process of determining the degree to which a model is an accurate representation of the real world from the perspective of its intended uses. Validation is crucial as industries and governments depend increasingly on predictions by computer models to justify their decisions. We propose to formulate the validation of a given model as an iterative construction process that mimics the often implicit process occurring in the minds of scientists. We offer a formal representation of the progressive build-up of trust in the model. Thus, we replace static claims on the impossibility of validating a given model by a dynamic process of constructive approximation. This approach is better adapted to the fuzzy, coarse-grained nature of validation. Our procedure factors in the degree of redundancy versus novelty of the experiments used for validation as well as the degree to which the model predicts the observations. We illustrate the methodology first with the maturation of quantum mechanics as the arguably best established physics theory and then with several concrete examples drawn from some of our primary scientific interests: a cellular automaton model for earthquakes, a multifractal random walk model for financial time series, an anomalous diffusion model for solar radiation transport in the cloudy atmosphere, and a computational fluid dynamics code for the Richtmyer-Meshkov instability. PMID:17420476

  6. Algorithm for model validation: Theory and applications

    PubMed Central

    Sornette, D.; Davis, A. B.; Ide, K.; Vixie, K. R.; Pisarenko, V.; Kamm, J. R.

    2007-01-01

    Validation is often defined as the process of determining the degree to which a model is an accurate representation of the real world from the perspective of its intended uses. Validation is crucial as industries and governments depend increasingly on predictions by computer models to justify their decisions. We propose to formulate the validation of a given model as an iterative construction process that mimics the often implicit process occurring in the minds of scientists. We offer a formal representation of the progressive build-up of trust in the model. Thus, we replace static claims on the impossibility of validating a given model by a dynamic process of constructive approximation. This approach is better adapted to the fuzzy, coarse-grained nature of validation. Our procedure factors in the degree of redundancy versus novelty of the experiments used for validation as well as the degree to which the model predicts the observations. We illustrate the methodology first with the maturation of quantum mechanics as the arguably best established physics theory and then with several concrete examples drawn from some of our primary scientific interests: a cellular automaton model for earthquakes, a multifractal random walk model for financial time series, an anomalous diffusion model for solar radiation transport in the cloudy atmosphere, and a computational fluid dynamics code for the Richtmyer-Meshkov instability. PMID:17420476

  7. A broad view of model validation

    SciTech Connect

    Tsang, C.F.

    1989-10-01

    The safety assessment of a nuclear waste repository requires the use of models. Such models need to be validated to ensure, as much as possible, that they are a good representation of the actual processes occurring in the real system. In this paper we attempt to take a broad view by reviewing step by step the modeling process and bringing out the need to validating every step of this process. This model validation includes not only comparison of modeling results with data from selected experiments, but also evaluation of procedures for the construction of conceptual models and calculational models as well as methodologies for studying data and parameter correlation. The need for advancing basic scientific knowledge in related fields, for multiple assessment groups, and for presenting our modeling efforts in open literature to public scrutiny is also emphasized. 16 refs.

  8. Description and validation of realistic and structured endourology training model

    PubMed Central

    Soria, Federico; Morcillo, Esther; Sanz, Juan Luis; Budia, Alberto; Serrano, Alvaro; Sanchez-Margallo, Francisco M

    2014-01-01

    Purpose: The aim of the present study was to validate a model of training, which combines the use of non-biological and ex vivo biological bench models, as well as the modelling of urological injuries for endourological treatment in a porcine animal model. Material and Methods: A total of 40 participants took part in this study. The duration of the activity was 16 hours. The model of training was divided into 3 levels: level I, concerning the acquisition of basic theoretical knowledge; level II, involving practice with the bench models; and level III, concerning practice in the porcine animal model. First, trainees practiced with animals without using an injury model (ureteroscopy, management of guide wires and catheters under fluoroscopic control) and later practiced in the lithiasic animal model. During the activity, an evaluation of the face and content validity was conducted, as well as constructive validation provided by the trainees versus experts. Evolution of the variables during the course within each group was analysed using the Student’s t test for paired samples, while comparisons between groups were performed using the Student’s t test for unpaired samples. Results: The assessments of face and content validity were satisfactory. The constructive validation, “within one trainee”, shows that there were statistically significant differences between the first time the trainees performed the tasks in the animal model and the last time, mainly in the knowledge of procedure and Holmium laser lithotripsy categories. At the beginning of level III, there are also statistically significant differences between trainee’s scores and the expert’s scores. Conclusions: This realistic Endourology training model allows the acquisition of knowledge and technical and non-technical skills as evidenced by the face, content and constructive validity. Structured use of bench models (biological and non-biological) and animal model simulators increases basic endourological skills. PMID:25374928

  9. Systematic Independent Validation of Inner Heliospheric Models

    NASA Technical Reports Server (NTRS)

    MacNeice, P. J.; Takakishvili, Alexandre

    2008-01-01

    This presentation is the first in a series which will provide independent validation of community models of the outer corona and inner heliosphere. In this work we establish a set of measures to be used in validating this group of models. We use these procedures to generate a comprehensive set of results from the Wang-Sheeley-Arge (WSA) model which will be used as a baseline, or reference, against which to compare all other models. We also run a test of the validation procedures by applying them to a small set of results produced by the ENLIL Magnetohydrodynamic (MHD) model. In future presentations we will validate other models currently hosted by the Community Coordinated Modeling Center (CCMC), including a comprehensive validation of the ENLIL model. The Wang-Sheeley-Arge (WSA) model is widely used to model the Solar wind, and is used by a number of agencies to predict Solar wind conditions at Earth as much as four days into the future. Because it is so important to both the research and space weather forecasting communities, it is essential that its performance be measured systematically, and independently. In this paper we offer just such an independent and systematic validation. We report skill scores for the model's predictions of wind speed and IMF polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests line-of-sight magnetograms. For this study we generated model results for monthly magnetograms from the National Solar Observatory (SOLIS), Mount Wilson Observatory and the GONG network, spanning the Carrington rotation range from 1650 to 2068. We compare the influence of the different magnetogram sources, performance at quiet and active times, and estimate the effect of different empirical wind speed tunings. We also consider the ability of the WSA model to identify sharp transitions in wind speed from slow to fast wind. These results will serve as a baseline against which to compare future versions of the model.
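
    The record reports skill scores without stating their exact definition; a common choice in this context is a mean-square-error skill score against a simple reference prediction such as persistence or climatology. The sketch below uses that definition with hypothetical numbers and is offered only as an illustration.

```python
import numpy as np

def mse_skill_score(observed, predicted, reference):
    """Skill score SS = 1 - MSE(model)/MSE(reference).

    SS = 1 is a perfect forecast, SS = 0 matches the reference prediction,
    and negative values are worse than the reference.
    """
    observed, predicted, reference = map(np.asarray, (observed, predicted, reference))
    mse_model = np.mean((predicted - observed) ** 2)
    mse_ref = np.mean((reference - observed) ** 2)
    return 1.0 - mse_model / mse_ref

# Hypothetical daily solar wind speeds (km/s): model vs. a constant 400 km/s reference
obs = [420.0, 510.0, 640.0, 580.0, 430.0]
model = [440.0, 480.0, 600.0, 560.0, 450.0]
ref = [400.0] * 5
print(round(mse_skill_score(obs, model, ref), 2))
```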

  10. An approach to validation of thermomechanical models

    SciTech Connect

    Costin, L.S.; Hardy, M.P.; Brechtel, C.E.

    1993-08-01

    Thermomechanical models are being developed to support the design of an Exploratory Studies Facility (ESF) and a potential high-level nuclear waste repository at Yucca Mountain, Nevada. These models are used for preclosure design of underground openings, such as access drifts, emplacement drifts, and waste emplacement boreholes; and in support of postclosure issue resolution relating to waste canister performance, disturbance of the hydrological properties of the host rock, and overall system performance assessment. For both design and performance assessment, the purpose of using models in analyses is to better understand and quantify some phenomenon or process. Therefore, validation is an important process that must be pursued in conjunction with the development and application of models. The Site Characterization Plan (SCP) addressed some general aspects of model validation, but no specific approach has, as yet, been developed for either design or performance assessment models. This paper will discuss a proposed process for thermomechanical model validation and will focus on the use of laboratory and in situ experiments as part of the validation process. The process may be generic enough in nature that it could be applied to the validation of other types of models, for example, models of unsaturated hydrologic flow.

  11. Local thermal seeing modeling validation through observatory measurements

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos; Otarola, Angel; Skidmore, Warren; Travouillon, Tony; Angeli, George

    2012-09-01

    Dome and mirror seeing are critical effects influencing the optical performance of ground-based telescopes. Computational Fluid Dynamics (CFD) can be used to obtain the refractive index field along a given optical path and calculate the corresponding image quality utilizing optical modeling tools. This procedure is validated using measurements from the Keck II and CFHT telescopes. CFD models of Keck II and CFHT observatories on the Mauna Kea summit have been developed. The detailed models resolve all components that can influence the flow pattern through turbulence generation or heat release. Unsteady simulations generate time records of velocity and temperature fields from which the refractive index field at a given wavelength and turbulence parameters are obtained. At Keck II the Cn2 and l0 (inner scale of turbulence) were monitored along a 63m path sensitive primarily to turbulence around the top ring of the telescope tube. For validation, these parameters were derived from temperature and velocity fluctuations obtained from CFD simulations. At CFHT dome seeing has been inferred from their database that includes telescope delivered Image Quality (IQ). For this case CFD simulations were run for specific orientations of the telescope respect to incoming wind, wind speeds and outside air temperature. For validation, temperature fluctuations along the optical beam from the CFD are turned to refractive index variations and corresponding Optical Path Differences (OPD) then to Point Spread Functions (PSF) that are ultimately compared to the record of IQ.

  12. Validity of the Sleep Subscale of the Diagnostic Assessment for the Severely Handicapped-II (DASH-II)

    ERIC Educational Resources Information Center

    Matson, Johnny L.; Malone, Carrie J.

    2006-01-01

    Currently there are no available sleep disorder measures for individuals with severe and profound intellectual disability. We, therefore, attempted to establish the external validity of the "Diagnostic Assessment for the Severely Handicapped-II" (DASH-II) sleep subscale by comparing daily observational sleep data with the responses of direct care

  13. Validation of the Hot Strip Mill Model

    SciTech Connect

    Richard Shulkosky; David Rosberg; Jerrud Chapman

    2005-03-30

    The Hot Strip Mill Model (HSMM) is an off-line, PC based software originally developed by the University of British Columbia (UBC) and the National Institute of Standards and Technology (NIST) under the AISI/DOE Advanced Process Control Program. The HSMM was developed to predict the temperatures, deformations, microstructure evolution and mechanical properties of steel strip or plate rolled in a hot mill. INTEG process group inc. undertook the current task of enhancing and validating the technology. With the support of 5 North American steel producers, INTEG process group tested and validated the model using actual operating data from the steel plants and enhanced the model to improve prediction results.

  14. Structural system identification: Structural dynamics model validation

    SciTech Connect

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  15. Feature extraction for structural dynamics model validation

    SciTech Connect

    Hemez, Francois; Farrar, Charles; Park, Gyuhae; Nishio, Mayuko; Worden, Keith; Takeda, Nobuo

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing those response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered include sensitivity, dimensionality, type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a method for comparing multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must not only consider the sensitivity of the features being used, but also the correlation of the parameters being compared.
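
    The following is an illustrative sketch (not the authors' code) of the outlier-detection idea named above: score candidate model feature vectors by their Mahalanobis distance to the cloud of experimentally derived feature vectors, and flag parameter sets whose features exceed a chi-square consistency threshold. Feature dimensions, sample sizes, and the 95% threshold are assumptions.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def mahalanobis_scores(experimental_features, candidate_features):
        """Distance of each candidate feature vector from the experimental feature distribution."""
        mu = experimental_features.mean(axis=0)
        cov = np.cov(experimental_features, rowvar=False)
        cov_inv = np.linalg.inv(cov)
        diff = candidate_features - mu
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

    rng = np.random.default_rng(2)
    exp_feats = rng.multivariate_normal([0.0, 1.0, 2.0], np.eye(3) * 0.1, size=50)   # measured features
    cand_feats = rng.multivariate_normal([0.0, 1.0, 2.0], np.eye(3) * 0.1, size=10)  # simulated features
    cand_feats[0] += 2.0                                    # one deliberately inconsistent model run

    d = mahalanobis_scores(exp_feats, cand_feats)
    threshold = np.sqrt(chi2.ppf(0.95, df=3))               # 95% consistency threshold
    print("consistent with experiment:", d <= threshold)
    ```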

  16. Validation of an Experimentally Derived Uncertainty Model

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Cox, D. E.; Balas, G. J.; Juang, J.-N.

    1996-01-01

    The results show that uncertainty models can be obtained directly from system identification data by using a minimum norm model validation approach. The error between the test data and an analytical nominal model is modeled as a combination of unstructured additive and structured input multiplicative uncertainty. Robust controllers which use the experimentally derived uncertainty model show significant stability and performance improvements over controllers designed with assumed ad hoc uncertainty levels. Use of the identified uncertainty model also allowed a strong correlation between design predictions and experimental results.

  17. Evaluation (not validation) of quantitative models.

    PubMed Central

    Oreskes, N

    1998-01-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid. Modelers and policymakers must continue to work toward finding effective ways to evaluate and judge the quality of their models, and to develop appropriate terminology to communicate these judgments to the public whose health and safety may be at stake. PMID:9860904

  18. Validation of Space Weather Models at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, M. M.; Pulkkinen, A.; Rastaetter, L.; Hesse, M.; Chulaki, A.; Maddox, M.

    2011-01-01

    The Community Coordinated Modeling Center (CCMC) is a multiagency partnership which aims at the creation of next-generation space weather models. The CCMC's goal is to support the research and development work necessary to substantially increase space weather modeling capabilities and to facilitate the deployment of advanced models in forecasting operations. The CCMC conducts unbiased model testing and validation and evaluates model readiness for the operational environment. The presentation will demonstrate recent progress in CCMC metrics and validation activities.

  19. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    SciTech Connect

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  20. Real-time remote scientific model validation

    NASA Technical Reports Server (NTRS)

    Frainier, Richard; Groleau, Nicolas

    1994-01-01

    This paper describes flight results from the use of a CLIPS-based validation facility to compare analyzed data from a space life sciences (SLS) experiment to an investigator's preflight model. The comparison, performed in real time, either confirms or refutes the model and its predictions. This result then becomes the basis for continuing or modifying the investigator's experiment protocol. Typically, neither the astronaut crew in Spacelab nor the ground-based investigator team is able to react to their experiment data in real time. This facility, part of a larger science advisor system called Principal Investigator in a Box, was flown on the space shuttle in October 1993. The software system aided the conduct of a human vestibular physiology experiment and was able to outperform humans in the tasks of data integrity assurance, data analysis, and scientific model validation. Of twelve preflight hypotheses associated with the investigator's model, seven were confirmed and five were rejected or compromised.

  1. Validation of Hadronic Models in GEANT4

    SciTech Connect

    Koi, Tatsumi; Wright, Dennis H.; Folger, Gunter; Ivanchenko, Vladimir; Kossov, Mikhail; Starkov, Nikolai; Heikkinen, Aatos; Truscott, Peter; Lei, Fan; Wellisch, Hans-Peter

    2007-09-26

    Geant4 is a software toolkit for the simulation of the passage of particles through matter. It has abundant hadronic models, covering interactions from thermal neutrons to ultra-relativistic hadrons. An overview of validations in Geant4 hadronic physics is presented, based on thin-target measurements. In most cases, good agreement is found between Monte Carlo predictions and experimental data; however, several problems have been detected which require some improvement in the models.

  2. Calibration and validation of rockfall models

    NASA Astrophysics Data System (ADS)

    Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.

    2013-04-01

    Calibrating and validating landslide models is extremely difficult due to the particular characteristics of landslides: limited recurrence in time, relatively low frequency of the events, short durability of post-event traces, and poor availability of continuous monitoring data, especially for small landslides and rockfalls. For this reason, most of the rockfall models presented in the literature completely lack calibration and validation of the results. In this contribution, we explore different strategies for rockfall model calibration and validation starting from both an historical event and a full-scale field test. The event occurred in 2012 in Courmayeur (Western Alps, Italy), and caused serious damage to quarrying facilities. This event was studied soon after its occurrence through a field campaign aimed at mapping the blocks arrested along the slope, the shape and location of the detachment area, and the traces of scars associated with impacts of blocks on the slope. The full-scale field test was performed by Geovert Ltd in the Christchurch area (New Zealand) after the 2011 earthquake. During the test, a number of large blocks were mobilized from the upper part of the slope and filmed with high-velocity cameras from different viewpoints. The movies of each released block were analysed to identify the block shape, the propagation path, the location of impacts, the height of the trajectory and the velocity of the block along the path. Both calibration and validation of rockfall models should be based on the optimization of the agreement between the actual trajectories or locations of arrested blocks and the simulated ones. A measure that describes this agreement is therefore needed. For calibration purposes, this measure should be simple enough to allow trial-and-error repetitions of the model for parameter optimization. In this contribution we explore different calibration/validation measures: (1) the percentage of simulated blocks arresting within a buffer of the actual blocks, (2) the percentage of trajectories passing through the buffer of the actual rockfall path, (3) the mean distance between the location of arrest of each simulated block and the location of the nearest actual block; (4) the mean distance between the location of detachment of each simulated block and the location of detachment of the actual block located closest to the arrest position. By applying the four measures to the case studies, we observed that all measures are able to represent the model performance for validation purposes. However, the third measure is simpler and more reliable than the others, and seems to be optimal for model calibration, especially when using parameter estimation and optimization modelling software for automated calibration.
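
    A minimal sketch of calibration measure (3) described above, under invented coordinates: for each simulated arrest position, find the nearest mapped (actual) block and average the distances. A real application would use surveyed block positions from the field campaign.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def mean_nearest_arrest_distance(simulated_xy, observed_xy):
        """Mean distance from each simulated arrest point to the closest mapped block."""
        tree = cKDTree(observed_xy)
        distances, _ = tree.query(simulated_xy, k=1)
        return distances.mean()

    rng = np.random.default_rng(3)
    observed = rng.uniform(0, 100, size=(40, 2))            # mapped arrested blocks (m)
    simulated = observed + rng.normal(0, 5, size=(40, 2))   # one candidate parameter set
    print(f"mean nearest-block distance: {mean_nearest_arrest_distance(simulated, observed):.1f} m")
    ```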

  3. A Hierarchical Systems Approach to Model Validation

    NASA Astrophysics Data System (ADS)

    Easterbrook, S. M.

    2011-12-01

    Existing approaches to the question of how climate models should be evaluated tend to rely on either philosophical arguments about the status of models as scientific tools, or on empirical arguments about how well runs from a given model match observational data. These have led to quantitative approaches expressed in terms of model bias or forecast skill, and ensemble approaches where models are assessed according to the extent to which the ensemble brackets the observational data. Unfortunately, such approaches focus the evaluation on models per se (or more specifically, on the simulation runs they produce) as though the models can be isolated from their context. Such an approach may overlook a number of important aspects of the use of climate models: - the process by which models are selected and configured for a given scientific question. - the process by which model outputs are selected, aggregated and interpreted by a community of expertise in climatology. - the software fidelity of the models (i.e. whether the running code is actually doing what the modellers think it's doing). - the (often convoluted) history that begat a given model, along with the modelling choices long embedded in the code. - variability in the scientific maturity of different model components within a coupled system. These omissions mean that quantitative approaches cannot assess whether a model produces the right results for the wrong reasons, or conversely, the wrong results for the right reasons (where, say the observational data is problematic, or the model is configured to be unlike the earth system for a specific reason). Hence, we argue that it is a mistake to think that validation is a post-hoc process to be applied to an individual "finished" model, to ensure it meets some criteria for fidelity to the real world. We are therefore developing a framework for model validation that extends current approaches down into the detailed codebase and the processes by which the code is built and tested; and up into the broader scientific context in which models are selected and used to explore theories and test hypotheses. By taking software testing into account, we can build up a picture of the day-to-day practices by which modellers make small changes to the model and test the effect of such changes, both in isolated sections of code, and on the climatology of a full model. By taking the broader scientific context into account, we examine how features of the entire scientific enterprise improve (or impede) model validity, from the collection of observational data, creation of theories, use of these theories to develop models, choices for which model and which model configuration to use, choices for how to set up the runs, and interpretation of the results. Our approach cannot quantify model validity, but it can provide a systematic account of how the detailed practices involved in the development and use of climate models contribute to the quality of modelling systems and the scientific enterprise that they support. By making the relationships between these practices and model quality more explicit, we expect to identify specific strengths and weaknesses of the modelling systems, particularly with respect to structural uncertainty in the models, and better characterize the "unknown unknowns".

  4. Solar Sail Model Validation from Echo Trajectories

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Brickerhoff, Adam T.

    2007-01-01

    The NASA In-Space Propulsion program has been engaged in a project to increase the technology readiness of solar sails. Recently, these efforts came to fruition in the form of several software tools to model solar sail guidance, navigation and control. Furthermore, solar sails are one of five technologies competing for the New Millennium Program Space Technology 9 flight demonstration mission. The historic Echo 1 and Echo 2 balloons were comprised of aluminized Mylar, which is the near-term material of choice for solar sails. Both spacecraft, but particularly Echo 2, were in low Earth orbits with characteristics similar to the proposed Space Technology 9 orbit. Therefore, the Echo balloons are excellent test cases for solar sail model validation. We present the results of studies of Echo trajectories that validate solar sail models of optics, solar radiation pressure, shape and low-thrust orbital dynamics.

  5. Using Model Checking to Validate AI Planner Domain Models

    NASA Technical Reports Server (NTRS)

    Penix, John; Pecheur, Charles; Havelund, Klaus

    1999-01-01

    This report describes an investigation into using model checking to assist validation of domain models for the HSTS planner. The planner models are specified using a qualitative temporal interval logic with quantitative duration constraints. We conducted several experiments to translate the domain modeling language into the SMV, Spin and Murphi model checkers. This allowed a direct comparison of how the different systems would support specific types of validation tasks. The preliminary results indicate that model checking is useful for finding faults in models that may not be easily identified by generating test plans.

  6. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    USGS Publications Warehouse

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, conveynot bury or "eliminate"uncertainties, and, thus, afford the active building of consensus decisions, instead of promoting passive or self-righteous decisions.

  7. Concepts of Model Verification and Validation

    SciTech Connect

    B. H. Thacker; S. W. Doebling; F. M. Hemez; M. C. Anderson; J. E. Pepin; E. A. Rodriguez

    2004-10-30

    Model verification and validation (V&V) is an enabling methodology for the development of computational models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost, and risk associated with full-scale testing of products, materials, and weapon systems. Quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions. The development of guidelines and procedures for conducting a model V&V program are currently being defined by a broad spectrum of researchers. This report reviews the concepts involved in such a program. Model V&V is a current topic of great interest to both government and industry. In response to a ban on the production of new strategic weapons and nuclear testing, the Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship Program (SSP). An objective of the SSP is to maintain a high level of confidence in the safety, reliability, and performance of the existing nuclear weapons stockpile in the absence of nuclear testing. This objective has challenged the national laboratories to develop high-confidence tools and methods that can be used to provide credible models needed for stockpile certification via numerical simulation. There has been a significant increase in activity recently to define V&V methods and procedures. The U.S. Department of Defense (DoD) Modeling and Simulation Office (DMSO) is working to develop fundamental concepts and terminology for V&V applied to high-level systems such as ballistic missile defense and battle management simulations. The American Society of Mechanical Engineers (ASME) has recently formed a Standards Committee for the development of V&V procedures for computational solid mechanics models. The Defense Nuclear Facilities Safety Board (DNFSB) has been a proponent of model V&V for all safety-related nuclear facility design, analyses, and operations. In fact, DNFSB 2002-1 recommends to the DOE and National Nuclear Security Administration (NNSA) that a V&V process be performed for all safety related software and analysis. Model verification and validation are the primary processes for quantifying and building credibility in numerical models. Verification is the process of determining that a model implementation accurately represents the developer's conceptual description of the model and its solution. Validation is the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model. Both verification and validation are processes that accumulate evidence of a model's correctness or accuracy for a specific scenario; thus, V&V cannot prove that a model is correct and accurate for all possible scenarios, but, rather, it can provide evidence that the model is sufficiently accurate for its intended use. Model V&V is fundamentally different from software V&V. Code developers developing computer programs perform software V&V to ensure code correctness, reliability, and robustness. In model V&V, the end product is a predictive model based on fundamental physics of the problem being solved. In all applications of practical interest, the calculations involved in obtaining solutions with the model require a computer code, e.g., finite element or finite difference analysis. 
Therefore, engineers seeking to develop credible predictive models critically need model V&V guidelines and procedures. The expected outcome of the model V&V process is the quantified level of agreement between experimental data and model prediction, as well as the predictive accuracy of the model. This report attempts to describe the general philosophy, definitions, concepts, and processes for conducting a successful V&V program. This objective is motivated by the need for highly accurate numerical models for making predictions to s

  8. DEVELOPMENT OF THE MESOPUFF II DISPERSION MODEL

    EPA Science Inventory

    The development of the MESOPUFF II regional-scale air quality model is described. MESOPUFF II is a Lagrangian variable-trajectory puff superposition model suitable for modeling the transport, diffusion and removal of air pollutants from multiple point and area sources at transpor...

  9. Validation of the TES Forward Model for Passive Remote Sensing

    NASA Astrophysics Data System (ADS)

    Shephard, M. W.; Clough, S. A.; Cady-Pereira, K. E.

    2005-12-01

    The new capabilities and improved signal-to-noise ratio of present and future high-resolution passive sensors are placing a greater demand on the accuracy of the atmospheric Forward Model (FM) calculations used in the retrievals of atmospheric constituents. The Tropospheric Emission Spectrometer (TES) forward model utilizes absorption coefficient look-up tables generated by the Line-By-Line Radiative Transfer Model (LBLRTM). Validation of the line-by-line model using upwelling and downwelling infrared radiance measurements from recent S-HIS, AERI, and AIRS campaigns shows the need for more consistent spectroscopy. For example, AIRS/LBLRTM case studies show an inconsistency between the 2150-2450 cm-1 (ν3 N2O; ν3 CO2) spectral region and the 680-800 cm-1 (ν2 CO2) spectral region, which is routinely used to retrieve atmospheric temperatures. TES is presently performing a simultaneous retrieval of temperature, water vapor, and ozone. In order to improve the residuals (TES - FM) and attain high retrieval accuracy, we need to have spectroscopic consistency: (i) between all bands of a given species; (ii) between all species for a given observation; and (iii) between observations from multiple instruments that use different spectral regions and techniques (e.g., microwave, thermal infrared, solar occultation, etc.). Presented are recent model validation results using coincident and co-located S-HIS, TES, and AIRS observations over the Gulf of Mexico during the Aura Validation Experiment (AVE).

  10. Validating the Beck Depression Inventory-II for Hong Kong Community Adolescents

    ERIC Educational Resources Information Center

    Byrne, Barbara M.; Stewart, Sunita M.; Lee, Peter W. H.

    2004-01-01

    The primary purpose of this study was to test for the validity of a Chinese version of the Beck Depression Inventory-II (C-BDI-II) for use with Hong Kong community (i.e., nonclinical) adolescents. Based on a randomized triadic split of the data (N = 1460), we conducted exploratory factor analysis on Group1 (n = 486) and confirmatory factor

  11. SPR Hydrostatic Column Model Verification and Validation.

    SciTech Connect

    Bettin, Giorgia; Lord, David; Rudeen, David Keith

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by the steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for extended periods of time. This report describes the HCM, its functional requirements, the model structure, and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.

  12. Using airborne laser scanning profiles to validate marine geoid models

    NASA Astrophysics Data System (ADS)

    Julge, Kalev; Gruno, Anti; Ellmann, Artu; Liibusk, Aive; Oja, Tõnis

    2014-05-01

    Airborne laser scanning (ALS) is a remote sensing method which utilizes LiDAR (Light Detection And Ranging) technology. The datasets collected are important sources for a large range of scientific and engineering applications. Mostly, ALS is used to measure terrain surfaces for the compilation of Digital Elevation Models, but it can also be used in other applications. This contribution focuses on the use of an ALS system for measuring sea surface heights and validating gravimetric geoid models over marine areas. This is based on the ability of ALS to register echoes of the LiDAR pulse from the water surface. A case study was carried out to analyse the possibilities for validating marine geoid models by using ALS profiles. A test area at the southern shores of the Gulf of Finland was selected for regional geoid validation. ALS measurements were carried out by the Estonian Land Board in spring 2013 at different altitudes and using different scan rates. The single-wavelength Leica ALS50-II laser scanner on board a small aircraft was used to determine the sea level (with respect to the GRS80 reference ellipsoid), which follows roughly the equipotential surface of the Earth's gravity field. For the validation, a high-resolution (1'x2') regional gravimetric GRAV-GEOID2011 model was used. This geoid model covers the entire area of Estonia and the surrounding waters of the Baltic Sea. The fit between the geoid model and GNSS/levelling data within the Estonian dry land revealed an RMS of residuals of ±1 to ±2 cm. Note that such fitting validation cannot proceed over marine areas. Therefore, an ALS observation-based methodology was developed to evaluate the GRAV-GEOID2011 quality over marine areas. The accuracy of the acquired ALS dataset was analysed, and an optimal width of the nadir corridor containing good-quality ALS data was determined. The impact of the ALS scan angle range and flight altitude on the obtainable vertical accuracy was investigated as well. The quality of the point cloud was analysed by cross-validation between overlapping flight lines and comparison with tide gauge readings. The comparisons revealed that the ALS-based profiles of sea-level heights agree reasonably well with the regional geoid model (within the accuracy of the ALS data and after applying corrections for sea-level variations). Thus ALS measurements are suitable for measuring sea surface heights and validating marine geoid models.
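
    A hedged sketch of the comparison described above, with invented numbers: ALS-derived ellipsoidal sea-surface heights are reduced by an instantaneous sea-level correction (e.g., from tide gauges) and differenced against geoid heights interpolated from a regular grid, and the RMS of the residuals is reported. The grid values, profile coordinates and correction are synthetic placeholders for GRAV-GEOID2011 and the real flight data.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # illustrative geoid grid (degrees, metres); a real case would load GRAV-GEOID2011 values
    lat = np.linspace(59.0, 59.5, 31)
    lon = np.linspace(23.0, 25.0, 61)
    geoid_grid = 18.0 + 0.5 * np.add.outer(lat - 59.0, lon - 23.0)
    geoid = RegularGridInterpolator((lat, lon), geoid_grid)

    # synthetic ALS profile: ellipsoidal sea-surface heights along one flight line
    rng = np.random.default_rng(4)
    pts = np.column_stack([np.linspace(59.05, 59.45, 500), np.linspace(23.2, 24.8, 500)])
    sea_level = 0.20                      # instantaneous sea level above the geoid (tide gauges)
    ssh = geoid(pts) + sea_level + 0.03 * rng.standard_normal(len(pts))

    residuals = ssh - sea_level - geoid(pts)   # ALS-derived geoid height minus model geoid height
    print(f"RMS of residuals: {np.sqrt(np.mean(residuals ** 2)):.3f} m")
    ```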

  13. Mouse models of type II diabetes mellitus in drug discovery.

    PubMed

    Baribault, Helene

    2010-01-01

    Type II diabetes is a fast-growing epidemic in industrialized countries. Many recent advances have led to the discovery and marketing of efficient novel therapeutic medications. Yet, because of side effects of these medications and the variability in individual patient responsiveness, unmet needs subsist for the discovery of new drugs. The mouse has proven to be a reliable model for discovering and validating new treatments for type II diabetes mellitus. We review here the most common mouse models used for drug discovery for the treatment of type II diabetes. The methods presented focus on measuring the equivalent end points in mice to the clinical values of glucose metabolism used for the diagnostic of type II diabetes in humans: i.e., baseline fasting glucose and insulin, glucose tolerance test, and insulin sensitivity index. Improvements on these clinical values are essential for the progression of a novel potential therapeutic molecule through a preclinical and clinical pipeline. PMID:20012397

  14. Predicting Backdrafting and Spillage for Natural-Draft Gas Combustion Appliances: Validating VENT-II

    SciTech Connect

    Rapp, Vi H.; Pastor-Perez, Albert; Singer, Brett C.; Wray, Craig P.

    2013-04-01

    VENT-II is a computer program designed to provide detailed analysis of natural draft and induced draft combustion appliance vent systems (e.g., for a furnace or water heater). This program is capable of predicting house depressurization thresholds that lead to backdrafting and spillage of combustion appliances; however, validation reports of the program being applied for this purpose are not readily available. The purpose of this report is to assess VENT-II's ability to predict combustion gas spillage events due to house depressurization by comparing VENT-II simulated results with experimental data for four appliance configurations. The results show that VENT-II correctly predicts depressurizations resulting in spillage for natural draft appliances operating in cold and mild outdoor conditions, but not for hot conditions. In the latter case, the predicted depressurizations depend on whether the vent section is defined as part of the vent connector or the common vent when setting up the model. Overall, the VENT-II solver requires further investigation before it can be used reliably to predict spillage caused by depressurization over a full year of weather conditions, especially where hot conditions occur.

  15. Developing Connectionist Models with MIRRORS/II

    PubMed Central

    D'Autrechy, C. Lynne; Reggia, James A.; Sutton, Granger G.; Goodall, Sharon M.; Tagamets, Malle A.

    1988-01-01

    Developing and evaluating connectionist models (neural models) is currently a difficult and time-consuming task. To address this issue, we implemented a software system called MIRRORS/II for developing connectionist models in biomedicine and other fields. MIRRORS/II is distinguished from related systems by its support of a very high-level non-procedural language, a general-purpose event-handling mechanism, and an indexed library for the accumulation of system resources. These features make MIRRORS/II a convenient software tool for individuals in biomedicine. This paper summarizes MIRRORS/II and gives a small example of its use.

  16. Initialization and validation of a simulation of cirrus using FIRE-II data

    SciTech Connect

    Westphal, D.L.; Kinne, S.

    1996-12-01

    Observations from a wide variety of instruments and platforms are used to validate many different aspects of a three-dimensional mesoscale simulation of the dynamics, cloud microphysics, and radiative transfer of a cirrus cloud system observed on 26 November 1991 during the second cirrus field program of the First International Satellite Cloud Climatology Program (ISCCP) Regional Experiment (FIRE-II) located in southeastern Kansas. The simulation was made with a mesoscale dynamical model utilizing a simplified bulk water cloud scheme and a spectral model of radiative transfer. Expressions for cirrus optical properties for solar and infrared wavelength intervals as functions of ice water content and effective particle radius are modified for the midlatitude cirrus observed during FIRE-II and are shown to compare favorably with explicit size-resolving calculations of the optical properties. Rawinsonde, Raman lidar, and satellite data are evaluated and combined to produce a time-height cross section of humidity at the central FIRE-II site for model verification. Due to the wide spacing of rawinsondes and their infrequent release, important moisture features go undetected and are absent in the conventional analyses. The upper-tropospheric humidities used for the initial conditions were generally less than 50% of those inferred from satellite data, yet over the course of a 24-h simulation the model produced a distribution that closely resembles the large-scale features of the satellite analysis. The simulated distribution and concentration of ice compares favorably with data from radar, lidar, satellite, and aircraft. Direct comparison is made between the radiative transfer simulation and data from broadband and spectral sensors and inferred quantities such as cloud albedo, optical depth, and top-of-the-atmosphere 11-μm brightness temperature, and the 6.7-μm brightness temperature. 49 refs., 26 figs., 1 tab.

  17. Validation of Space Weather Models at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, M. M.; Hesse, M.; Pulkkinen, A.; Maddox, M.; Rastaetter, L.; Berrios, D.; Zheng, Y.; MacNeice, P. J.; Shim, J.; Taktakishvili, A.; Chulaki, A.

    2011-01-01

    The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership to support the research and developmental work necessary to substantially increase space weather modeling capabilities and to facilitate the deployment of advanced models in forecasting operations. Space weather models and coupled model chains hosted at the CCMC range from the solar corona to the Earth's upper atmosphere. CCMC has developed a number of real-time modeling systems, as well as a large number of modeling and data products tailored to address the space weather needs of NASA's robotic missions. The CCMC conducts unbiased model testing and validation and evaluates model readiness for the operational environment. CCMC has been leading recent comprehensive modeling challenges under the GEM, CEDAR and SHINE programs. The presentation will focus on experience in carrying out comprehensive and systematic validation of large sets of space weather models.

  18. Turbulence Modeling Validation, Testing, and Development

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.; Huang, P. G.; Coakley, T. J.

    1997-01-01

    The primary objective of this work is to provide accurate numerical solutions for selected flow fields and to compare and evaluate the performance of selected turbulence models with experimental results. Four popular turbulence models have been tested and validated against experimental data of ten turbulent flows. The models are: (1) the two-equation k-omega model of Wilcox, (2) the two-equation k-epsilon model of Launder and Sharma, (3) the two-equation k-omega/k-epsilon SST model of Menter, and (4) the one-equation model of Spalart and Allmaras. The flows investigated are five free shear flows consisting of a mixing layer, a round jet, a plane jet, a plane wake, and a compressible mixing layer; and five boundary layer flows consisting of an incompressible flat plate, a Mach 5 adiabatic flat plate, a separated boundary layer, an axisymmetric shock-wave/boundary layer interaction, and an RAE 2822 transonic airfoil. The experimental data for these flows are well established and have been extensively used in model developments. The results are shown in the following four sections: Part A describes the equations of motion and boundary conditions; Part B describes the model equations, constants, parameters, boundary conditions, and numerical implementation; and Parts C and D describe the experimental data and the performance of the models in the free-shear flows and the boundary layer flows, respectively.

  19. Validation of Computational Models in Biomechanics

    PubMed Central

    Henninger, Heath B.; Reese, Shawn P.; Anderson, Andrew E.; Weiss, Jeffrey A.

    2010-01-01

    The topics of verification and validation (V&V) have increasingly been discussed in the field of computational biomechanics, and many recent articles have applied these concepts in an attempt to build credibility for models of complex biological systems. V&V are evolving techniques that, if used improperly, can lead to false conclusions about a system under study. In basic science these erroneous conclusions may lead to failure of a subsequent hypothesis, but they can have more profound effects if the model is designed to predict patient outcomes. While several authors have reviewed V&V as they pertain to traditional solid and fluid mechanics, it is the intent of this manuscript to present them in the context of computational biomechanics. Specifically, the task of model validation will be discussed with a focus on current techniques. It is hoped that this review will encourage investigators to engage and adopt the V&V process in an effort to increase peer acceptance of computational biomechanics models. PMID:20839648

  20. A decision support system (GesCoN) for managing fertigation in vegetable crops. Part II: model calibration and validation under different environmental growing conditions on field-grown tomato

    PubMed Central

    Conversa, Giulia; Bonasia, Anna; Di Gioia, Francesco; Elia, Antonio

    2015-01-01

    The GesCoN model was evaluated for its capability to simulate growth, nitrogen uptake, and productivity of open-field tomato grown under different environmental and cultural conditions. Five datasets collected from experimental trials carried out in Foggia (IT) were used for calibration, and 13 datasets collected from trials conducted in Foggia, Perugia (IT), and Florida (USA) were used for validation. Goodness of fit was assessed by comparing the observed and simulated shoot dry weight (SDW) and crop N uptake during the crop seasons, as well as total dry weight (TDW), N uptake, and fresh yield (TFY). In the SDW model calibration, the relative RMSE values fell within the good 10-15% range and the percent bias (PBIAS) ranged between -11.5 and 7.4%. The Nash-Sutcliffe efficiency (NSE) was very close to the optimal value of 1. In the N uptake calibration, RRMSE and PBIAS were very low (7% and -1.78, respectively) and NSE was close to 1. The validation of SDW (RRMSE = 16.7%; NSE = 0.96) and N uptake (RRMSE = 16.8%; NSE = 0.96) showed the good accuracy of GesCoN. Under- or overestimation of SDW and N uptake occurred when higher or lower N rates and/or a more or less efficient system were used compared to the calibration trial. The in-season adjustment, using the SDWcheck procedure, greatly improved model simulations in both the calibration and the validation phases. The TFY prediction was quite good except in Florida, where a large overestimation (+16%) was linked to a different harvest index (0.53) compared to the cultivars used for model calibration and validation in the Italian areas. The soil water content at the 10-30 cm depth appears to be well simulated by the software, and GesCoN proved able to adaptively control potential yield and DW accumulation under limited soil N availability scenarios and consequently to modify fertilizer application. The DSS proved able to simulate SDW accumulation and N uptake of different tomato genotypes grown under Mediterranean and subtropical conditions. PMID:26217351
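
    For reference, a hedged sketch of the three goodness-of-fit statistics quoted above (RRMSE, PBIAS, NSE) computed for observed versus simulated shoot dry weight. The illustrative series and the PBIAS sign convention (negative indicating overestimation) are assumptions, since conventions vary between papers.

    ```python
    import numpy as np

    def fit_statistics(observed, simulated):
        obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
        rmse = np.sqrt(np.mean((obs - sim) ** 2))
        rrmse = 100.0 * rmse / obs.mean()                               # relative RMSE, %
        pbias = 100.0 * np.sum(obs - sim) / np.sum(obs)                 # percent bias, %
        nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe efficiency
        return {"RRMSE_%": rrmse, "PBIAS_%": pbias, "NSE": nse}

    # illustrative shoot-dry-weight series (t/ha) over a growing season
    observed = [0.2, 0.8, 2.1, 4.0, 6.2, 7.5]
    simulated = [0.25, 0.9, 2.0, 3.7, 6.0, 7.8]
    print(fit_statistics(observed, simulated))
    ```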

  1. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so that it becomes easier to assess whether any of the underlying assumptions are violated.
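
    A minimal sketch of the kind of plots the abstract refers to for a linear normal model: residuals against fitted values and a normal Q-Q plot. The data are simulated here; in the teaching approach described, the real-data plot would be compared against plots from datasets simulated under the fitted model to build the "instant experience".

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    rng = np.random.default_rng(5)
    x = rng.uniform(0, 10, 100)
    y = 2.0 + 0.5 * x + rng.normal(0, 1, 100)

    slope, intercept = np.polyfit(x, y, 1)            # ordinary least squares fit
    fitted = intercept + slope * x
    residuals = y - fitted

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
    ax1.scatter(fitted, residuals, s=12)
    ax1.axhline(0.0, color="grey", linewidth=1)
    ax1.set(xlabel="fitted values", ylabel="residuals", title="Residuals vs fitted")
    stats.probplot(residuals, dist="norm", plot=ax2)  # normal Q-Q plot
    plt.tight_layout()
    plt.show()
    ```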

  2. Bayes factor of model selection validates FLMP.

    PubMed

    Massaro, D W; Cohen, M M; Campbell, C S; Rodriguez, T

    2001-03-01

    The fuzzy logical model of perception (FLMP; Massaro, 1998) has been extremely successful at describing performance across a wide range of ecological domains as well as for a broad spectrum of individuals. An important issue is whether this descriptive ability is theoretically informative or whether it simply reflects the model's ability to describe a wider range of possible outcomes. Previous tests and contrasts of this model with others have been adjudicated on the basis of both a root mean square deviation (RMSD) for goodness-of-fit and an observed RMSD relative to a benchmark RMSD if the model was indeed correct. We extend the model evaluation by another technique called Bayes factor (Kass & Raftery, 1995; Myung & Pitt, 1997). The FLMP maintains its significant descriptive advantage with this new criterion. In a series of simulations, the RMSD also accurately recovers the correct model under actual experimental conditions. When additional variability was added to the results, the models continued to be recoverable. In addition to its descriptive accuracy, RMSD should not be ignored in model testing because it can be justified theoretically and provides a direct and meaningful index of goodness-of-fit. We also make the case for the necessity of free parameters in model testing. Finally, using Newton's law of universal gravitation as an analogy, we argue that it might not be valid to expect a model's fit to be invariant across the whole range of possible parameter values for the model. We advocate that model selection should be analogous to perceptual judgment, which is characterized by the optimal use of multiple sources of information (e.g., the FLMP). Conclusions about models should be based on several selection criteria. PMID:11340853

  3. Plasma Reactor Modeling and Validation Experiments

    NASA Technical Reports Server (NTRS)

    Meyyappan, M.; Bose, D.; Hash, D.; Hwang, H.; Cruden, B.; Sharma, S. P.; Rao, M. V. V. S.; Arnold, Jim (Technical Monitor)

    2001-01-01

    Plasma processing is a key processing step in integrated circuit manufacturing. Low-pressure, high-density plasma reactors are widely used for etching and deposition. The inductively coupled plasma (ICP) source has recently become popular in many processing applications. In order to accelerate equipment and process design, an understanding of the physics and chemistry, particularly plasma power coupling, plasma and processing uniformity, and mechanisms, is important. This understanding is facilitated by comprehensive modeling and simulation, as well as plasma diagnostics to provide the necessary data for model validation, which are addressed in this presentation. We have developed a complete code for simulating an ICP reactor; the model consists of transport of electrons, ions, and neutrals, Poisson's equation, and Maxwell's equations, along with gas flow and energy equations. Results will be presented for chlorine and fluorocarbon plasmas and compared with data from Langmuir probe, mass spectrometry, and FTIR measurements.

  4. Validation of Kp Estimation and Prediction Models

    NASA Astrophysics Data System (ADS)

    McCollough, J. P., II; Young, S. L.; Frey, W.

    2014-12-01

    Specification and forecast of geomagnetic indices is an important capability for space weather operations. The University Partnering for Operational Support (UPOS) effort at the Applied Physics Laboratory of Johns Hopkins University (JHU/APL) produced many space weather models, including the Kp Predictor and Kp Estimator. We perform a validation of index forecast products against definitive indices computed by the Deutsches GeoForschungsZentrum Potsdam (GFZ). We compute continuous predictand skill scores, as well as 2x2 contingency tables and associated scalar quantities for different index thresholds. We also compute a skill score against a nowcast persistence model. We discuss various sources of error for the models and how they may potentially be improved.
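
    A hedged sketch of the contingency-table quantities named above for a Kp-threshold event: hits, misses, false alarms and correct negatives, plus probability of detection, false alarm ratio and the Heidke skill score. The Kp arrays and the threshold of 5 are illustrative, not GFZ data or the UPOS models' output.

    ```python
    import numpy as np

    def contingency_scores(observed_kp, predicted_kp, threshold=5.0):
        obs_event = observed_kp >= threshold
        pred_event = predicted_kp >= threshold
        hits = np.sum(pred_event & obs_event)
        false_alarms = np.sum(pred_event & ~obs_event)
        misses = np.sum(~pred_event & obs_event)
        correct_negatives = np.sum(~pred_event & ~obs_event)
        n = hits + false_alarms + misses + correct_negatives
        pod = hits / (hits + misses)                       # probability of detection
        far = false_alarms / (hits + false_alarms)         # false alarm ratio
        # Heidke skill score: improvement in correct classifications over random chance
        expected = ((hits + misses) * (hits + false_alarms)
                    + (correct_negatives + misses) * (correct_negatives + false_alarms)) / n
        hss = (hits + correct_negatives - expected) / (n - expected)
        return {"hits": hits, "misses": misses, "false_alarms": false_alarms,
                "correct_negatives": correct_negatives, "POD": pod, "FAR": far, "HSS": hss}

    rng = np.random.default_rng(6)
    obs = np.clip(rng.normal(3, 2, 1000), 0, 9)            # synthetic "definitive" Kp
    pred = np.clip(obs + rng.normal(0, 1, 1000), 0, 9)     # synthetic forecast Kp
    print(contingency_scores(obs, pred))
    ```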

  5. ExodusII Finite Element Data Model

    Energy Science and Technology Software Center (ESTSC)

    2005-05-14

    EXODUS II is a data model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), and code-to-code data transfer. An EXODUS II data file is a random-access, machine-independent, binary file that is written and read via the C, C++, or Fortran library routines which comprise the Application Programming Interface. (EXODUS II is based on netCDF.)
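
    Because an EXODUS II file is a netCDF file underneath, one quick way to inspect one without the EXODUS C/C++/Fortran API is a generic netCDF reader. This is only an illustrative sketch: "mesh.exo" is a placeholder path, and the entry names printed depend on the code that wrote the file.

    ```python
    from netCDF4 import Dataset

    with Dataset("mesh.exo", "r") as exo:
        print("dimensions:", {name: len(dim) for name, dim in exo.dimensions.items()})
        print("variables: ", sorted(exo.variables.keys()))
        # typical EXODUS II entries include nodal coordinate arrays and element-block
        # connectivity (e.g. names like "coord" or "connect1"), but exact names vary
        # by writer and file version
    ```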

  6. Validated predictive modelling of the environmental resistome

    PubMed Central

    Amos, Gregory CA; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-01-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome. PMID:25679532

  7. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).
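
    The sketch below illustrates the general ARR idea in miniature; it is an invented toy example, not the method developed in the cited work. With sensors linked by known model relations, each ARR yields a residual that should stay near zero when the sensors it involves are healthy, and the pattern of violated ARRs points at the faulty sensor (assuming a single fault and that every ARR involving that sensor actually fires). Sensor names, relations and the threshold are assumptions.

    ```python
    import numpy as np

    # toy model relations for a tank: level y1, inflow y2, outflow y3, with dy1/dt = y2 - y3
    def residuals(y1, y2, y3, dt):
        r1 = np.abs(np.gradient(y1, dt) - (y2 - y3)).mean()   # ARR-1: mass balance (uses y1, y2, y3)
        r2 = np.abs(y2 - y3).mean()                            # ARR-2: steady-flow relation (uses y2, y3)
        return r1, r2

    # fault signatures: which ARRs involve each sensor (single-fault, exact-match isolation)
    signatures = {"y1": (True, False), "y2": (True, True), "y3": (True, True)}

    dt, n = 1.0, 200
    y2 = np.full(n, 2.0)                        # inflow sensor
    y3 = np.full(n, 2.0)                        # outflow sensor
    y1_ok = np.full(n, 5.0)                     # healthy level sensor (steady state)
    y1_bad = y1_ok + np.linspace(0.0, 20.0, n)  # drifting level sensor

    for label, y1 in [("healthy", y1_ok), ("drifting level sensor", y1_bad)]:
        r1, r2 = residuals(y1, y2, y3, dt)
        fired = (bool(r1 > 0.05), bool(r2 > 0.05))
        suspects = [s for s, sig in signatures.items() if any(fired) and fired == sig]
        print(f"{label}: residuals=({r1:.3f}, {r2:.3f}) fired={fired} suspects={suspects}")
    ```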

  8. Integrated Medical Model Verification, Validation, and Credibility

    NASA Technical Reports Server (NTRS)

    Walton, Marlei; Kerstman, Eric; Foy, Millennia; Shah, Ronak; Saile, Lynn; Boley, Lynn; Butler, Doug; Myers, Jerry

    2014-01-01

    The Integrated Medical Model (IMM) was designed to forecast relative changes for a specified set of crew health and mission success risk metrics by using a probabilistic (stochastic process) model based on historical data, cohort data, and subject matter expert opinion. A probabilistic approach is taken since exact (deterministic) results would not appropriately reflect the uncertainty in the IMM inputs. Once the IMM was conceptualized, a plan was needed to rigorously assess input information, framework and code, and output results of the IMM, and ensure that end user requests and requirements were considered during all stages of model development and implementation. METHODS: In 2008, the IMM team developed a comprehensive verification and validation (V&V) plan, which specified internal and external review criteria encompassing 1) verification of data and IMM structure to ensure proper implementation of the IMM, 2) several validation techniques to confirm that the simulation capability of the IMM appropriately represents occurrences and consequences of medical conditions during space missions, and 3) credibility processes to develop user confidence in the information derived from the IMM. When the NASA-STD-7009 (7009) was published, the IMM team updated their verification, validation, and credibility (VVC) project plan to meet 7009 requirements and include 7009 tools in reporting VVC status of the IMM. RESULTS: IMM VVC updates are compiled recurrently and include 7009 Compliance and Credibility matrices, IMM V&V Plan status, and a synopsis of any changes or updates to the IMM during the reporting period. Reporting tools have evolved over the lifetime of the IMM project to better communicate VVC status. This has included refining original 7009 methodology with augmentation from the NASA-STD-7009 Guidance Document. End user requests and requirements are being satisfied as evidenced by ISS Program acceptance of IMM risk forecasts, transition to an operational model and simulation tool, and completion of service requests from a broad end user consortium including Operations, Science and Technology Planning, and Exploration Planning. CONCLUSIONS: The VVC approach established by the IMM project of combining the IMM V&V Plan with 7009 requirements is comprehensive and includes the involvement of end users at every stage in IMM evolution. Methods and techniques used to quantify the VVC status of the IMM have not only received approval from the local NASA community but have also garnered recognition by other federal agencies seeking to develop similar guidelines in the medical modeling community.

  9. Boron-10 Lined Proportional Counter Model Validation

    SciTech Connect

    Lintereur, Azaree T.; Siciliano, Edward R.; Kouzes, Richard T.

    2012-06-30

    The Department of Energy Office of Nuclear Safeguards (NA-241) is supporting the project “Coincidence Counting With Boron-Based Alternative Neutron Detection Technology” at Pacific Northwest National Laboratory (PNNL) for the development of an alternative neutron coincidence counter. The goal of this project is to design, build and demonstrate a boron-lined proportional tube-based alternative system in the configuration of a coincidence counter. This report discusses the validation studies performed to establish the degree of accuracy of the computer modeling methods currently used to simulate the response of boron-lined tubes. This is the precursor to developing models for the uranium neutron coincidence collar under Task 2 of this project.

  10. Model Validation of Power System Components Using Hybrid Dynamic Simulation

    SciTech Connect

    Huang, Zhenyu; Nguyen, Tony B.; Kosterev, Dmitry; Guttromson, Ross T.

    2008-05-31

    Hybrid dynamic simulation, with its capability of injecting external signals into dynamic simulation, opens the traditional dynamic simulation loop for interaction with actual field measurements. This simulation technique enables rigorous comparison between simulation results and actual measurements and model validation of individual power system components within a small subsystem. This paper uses a real example of generator model validation to illustrate the procedure and validity of the component model validation methodology using hybrid dynamic simulation. Initial model calibration has also been carried out to show how model validation results would be used to improve component models.
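
    A minimal Python sketch, under simplifying assumptions, of the play-back-and-compare step that hybrid dynamic simulation enables: a recorded boundary signal drives a candidate first-order component model, and the simulated response is scored against the measured response. The model form, time constant, and data below are illustrative, not taken from the paper.

        # Hedged sketch of the play-back/comparison step in hybrid dynamic simulation.
        import numpy as np

        def simulate_component(u, dt, tau):
            """First-order response y' = (u - y)/tau driven by the measured input u."""
            y = np.zeros_like(u)
            for k in range(1, len(u)):
                y[k] = y[k-1] + dt * (u[k-1] - y[k-1]) / tau
            return y

        t = np.arange(0.0, 10.0, 0.01)
        u_measured = np.where(t < 2.0, 0.0, 1.0)                       # injected field signal (step)
        y_measured = 1.0 - np.exp(-np.clip(t - 2.0, 0.0, None) / 0.8)  # "measured" component response

        y_simulated = simulate_component(u_measured, dt=0.01, tau=1.0)  # candidate model
        rmse = np.sqrt(np.mean((y_simulated - y_measured) ** 2))
        print(f"RMS mismatch before calibration: {rmse:.3f}")
        # Model calibration would adjust tau (here toward 0.8) to reduce this mismatch.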

  11. Model Validation of Power System Components Using Hybrid Dynamic Simulation

    SciTech Connect

    Huang, Zhenyu; Nguyen, Tony B.; Kosterev, Dmitry; Guttromson, Ross T.

    2006-05-21

    Hybrid dynamic simulation, with its capability of injecting external signals into dynamic simulation, opens the traditional dynamic simulation loop for interaction with actual field measurements. This simulation technique enables rigorous comparison between simulation results and actual measurements and model validation of individual power system components within a small subsystem. This paper uses a real example of generator model validation to illustrate the procedure and validity of the component model validation methodology using hybrid dynamic simulation. Initial model calibration has also been carried out to show how model validation results would be used to improve component models.

  12. [Catalonia's primary healthcare accreditation model: a valid model].

    PubMed

    Davins, Josep; Gens, Montserrat; Pareja, Clara; Guzmán, Ramón; Marquet, Roser; Valls, Roser

    2014-07-01

    There are few accreditation models that have been validated by primary care teams (EAP). The aim of this study was to detail the process of design, development, and subsequent validation of the consensus EAP accreditation model of Catalonia. An Operating Committee of the Health Department of Catalonia revised models proposed by the European Foundation for Quality Management, the Joint Commission International and the Institut Català de la Salut, and proposed 628 essential standards to the technical group (25 experts in primary care and quality of care) in order to establish consensus standards. The consensus document was piloted in 30 EAPs for the purpose of validating the contents, testing the standards and identifying evidence. Finally, we conducted a survey to assess acceptance and validation of the document. The Technical Group agreed on a total of 414 essential standards. The pilot reduced these to a total of 379. Mean compliance with the standards of the final document in the 30 EAPs was 70.4%. The standards relating to results had the lowest fulfilment percentage. The survey showed that 83% of the EAPs found the model useful and 78% found the content of the accreditation manual suitable as a tool to assess the quality of the EAP and to identify opportunities for improvement. On the downside, they highlighted its complexity and laboriousness. We have a model that fits the reality of the EAP and covers all relevant issues for the functioning of an excellent EAP. The model developed in Catalonia is easy to understand. PMID:25128364

  13. Validation and application of the SCALP model

    NASA Astrophysics Data System (ADS)

    Smith, D. A. J.; Martin, C. E.; Saunders, C. J.; Smith, D. A.; Stokes, P. H.

    The Satellite Collision Assessment for the UK Licensing Process (SCALP) model was first introduced in a paper presented at IAC 2003. As a follow-on, this paper details the steps taken to validate the model and describes some of its applications. SCALP was developed for the British National Space Centre (BNSC) to support liability assessments as part of the UK's satellite license application process. Specifically, the model determines the collision risk that a satellite will pose to other orbiting objects during both its operational and post-mission phases. To date, SCALP has been used to assess several LEO and GEO satellites for BNSC, and subsequently to provide the necessary technical basis for licenses to be issued. SCALP utilises the current population of operational satellites residing in LEO and GEO (extracted from ESA's DISCOS database) as a starting point. Realistic orbital dynamics, including the approximate simulation of generic GEO station-keeping strategies, are used to propagate the objects over time. The method takes into account all of the appropriate orbit perturbations for LEO and GEO altitudes and allows rapid run times for multiple objects over time periods of many years. The orbit of a target satellite is also propagated in a similar fashion. During these orbital evolutions, a collision prediction and close approach algorithm assesses the collision risk posed to the satellite population. To validate SCALP, specific cases were set up to enable the comparison of collision risk results with other established models, such as the ESA MASTER model. Additionally, the propagation of operational GEO satellites within SCALP was compared with the expected behaviour of controlled GEO objects. The sensitivity of the model to changing the initial conditions of the target satellite, such as semi-major axis and inclination, has also been demonstrated. A further study shows the effect of including extra objects from the GTO population (which can pass through the LEO and GEO regimes) in the determination of collision risk. Lastly, the effect of altering the simulation environment, by varying parameters such as the extent of the uncertainty volume used in the GEO collision assessment method, has been investigated.
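
    For orientation only, the Python sketch below shows a flux-based collision-probability estimate of the kind such risk tools accumulate along a propagated orbit; it is not the SCALP algorithm, and the flux, cross-section, and exposure values are assumed for illustration.

        # Hedged sketch: Poisson-model collision probability from an object flux F
        # [objects per m^2 per year], a combined collision cross-section A [m^2],
        # and an exposure time T [years].  Numbers are illustrative only.
        import math

        def collision_probability(flux_per_m2_yr, cross_section_m2, years):
            expected_hits = flux_per_m2_yr * cross_section_m2 * years
            return 1.0 - math.exp(-expected_hits)

        print(collision_probability(flux_per_m2_yr=1e-5, cross_section_m2=10.0, years=7.0))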

  14. Full-Scale Cookoff Model Validation Experiments

    SciTech Connect

    McClelland, M A; Rattanapote, M K; Heimdahl, E R; Erikson, W E; Curran, P O; Atwood, A I

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of the unreacted explosive was recovered in the end-heated experiment and less than 30 percent was recovered in the side-heated test.

  15. Constructing and Validating a Decadal Prediction Model

    NASA Astrophysics Data System (ADS)

    Foss, I.; Woolf, D. K.; Gagnon, A. S.; Merchant, C. J.

    2010-05-01

    For the purpose of identifying potential sources of predictability of Scottish mean air temperature (SMAT), a redundancy analysis (RA) was performed to quantitatively assess the predictability of SMAT from North Atlantic SSTs as well as the temporal consistency of this predictability. The RA was performed between the main principal components of North Atlantic SST anomalies and SMAT anomalies for two time periods: 1890-1960 and 1960-2006. The RA models developed using data from the 1890-1960 period were validated using the 1960-2006 period; in a similar way, the model developed based on the 1960-2006 period was validated using data from the 1890-1960 period. The results indicate the potential to forecast decadal trends in SMAT for all seasons in the 1960-2006 time period, and for all seasons with the exception of winter in the period 1890-1960, with the best predictability achieved in summer. The statistical models show the best performance when SST anomalies in the European shelf seas (45°N-65°N, 20°W-20°E), rather than those for the SSTs over the entire North Atlantic (30°N-75°N, 80°W-30°E), were used as a predictor. The results of the RA demonstrated that similar SST modes were responsible for predictions in the first and second half of the 20th century, establishing temporal consistency, though with stronger influence in the more recent half. The SST pattern responsible for explaining the largest amount of variance in SMAT was stronger in the second half of the 20th century and showed increasing influence from the area of the North Sea, possibly due to faster sea-surface warming in that region in comparison with the open North Atlantic. The Wavelet Transform (WT), Cross Wavelet Transform (XWT) and Wavelet Coherence (WTC) techniques were used to analyse RA-model-based forecasts of SMAT in the time-frequency domain. Wavelet-based techniques applied to the predicted and observed time series revealed a good performance of the RA models in predicting the frequency variability in the SMAT time series. A better performance was obtained for predicting the SMAT during the period 1960-2006 based on 1890-1960 than vice versa, with the exception of winter 1890-1960. In the same frequency bands and in the same time interval there was high coherence between observed and predicted time series. In particular, winter, spring and summer wavelets at 81.5 year band were highly correlated in both time periods, with higher correlation in 1960-2006 and in summer.
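
    A simplified Python sketch of the train-on-one-period, validate-on-the-other workflow described above, using a reduced-rank regression from predictor principal components to predictand anomalies. The synthetic arrays stand in for the SST and SMAT fields; the actual redundancy analysis is more involved.

        # Hedged sketch: fit on one period, validate on the other, with leading PCs
        # of the predictor field as regressors.  Data here are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        n_train, n_valid, n_sst, n_temp, k = 70, 46, 200, 12, 3

        sst_train = rng.standard_normal((n_train, n_sst))
        temp_train = sst_train[:, :n_temp] * 0.5 + rng.standard_normal((n_train, n_temp))
        sst_valid = rng.standard_normal((n_valid, n_sst))
        temp_valid = sst_valid[:, :n_temp] * 0.5 + rng.standard_normal((n_valid, n_temp))

        def leading_pcs(X, k):
            """Return scores on the k leading EOFs/PCs and the EOF basis."""
            Xc = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            return Xc @ Vt[:k].T, Vt[:k]

        pc_train, eofs = leading_pcs(sst_train, k)
        beta, *_ = np.linalg.lstsq(pc_train, temp_train - temp_train.mean(axis=0), rcond=None)

        pc_valid = (sst_valid - sst_train.mean(axis=0)) @ eofs.T
        pred = pc_valid @ beta + temp_train.mean(axis=0)
        skill = 1.0 - np.sum((temp_valid - pred) ** 2) / np.sum(
            (temp_valid - temp_valid.mean(axis=0)) ** 2)
        print(f"Validation skill (variance explained): {skill:.2f}")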

  16. Outward Bound Outcome Model Validation and Multilevel Modeling

    ERIC Educational Resources Information Center

    Luo, Yuan-Chun

    2011-01-01

    This study was intended to measure construct validity for the Outward Bound Outcomes Instrument (OBOI) and to predict outcome achievement from individual characteristics and course attributes using multilevel modeling. A sample of 2,340 participants was collected by Outward Bound USA between May and September 2009 using the OBOI. Two phases of…

  17. Outward Bound Outcome Model Validation and Multilevel Modeling

    ERIC Educational Resources Information Center

    Luo, Yuan-Chun

    2011-01-01

    This study was intended to measure construct validity for the Outward Bound Outcomes Instrument (OBOI) and to predict outcome achievement from individual characteristics and course attributes using multilevel modeling. A sample of 2,340 participants was collected by Outward Bound USA between May and September 2009 using the OBOI. Two phases of

  18. Simultaneous heat and water model: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A discussion of calibration and validation procedures used for the Simultaneous Heat and Water model is presented. Three calibration approaches are presented and compared for simulating soil water content. Approaches included a stepwise local search methodology, trial-and-error calibration, and an...

  19. Diurnal ocean surface layer model validation

    NASA Technical Reports Server (NTRS)

    Hawkins, Jeffrey D.; May, Douglas A.; Abell, Fred, Jr.

    1990-01-01

    The diurnal ocean surface layer (DOSL) model at the Fleet Numerical Oceanography Center forecasts the 24-hour change in global sea surface temperature (SST). Validating the DOSL model is a difficult task due to the huge areas involved and the lack of in situ measurements. Therefore, this report details the use of satellite infrared multichannel SST imagery to provide day and night SSTs that can be directly compared to DOSL products. This water-vapor-corrected imagery has the advantages of high thermal sensitivity (0.12 °C), large synoptic coverage (nearly 3000 km across), and high spatial resolution that enables diurnal heating events to be readily located and mapped. Several case studies in the subtropical North Atlantic readily show that DOSL results during extreme heating periods agree very well with satellite-imagery-derived values in terms of the pattern of diurnal warming. The low wind and cloud-free conditions necessary for these events to occur lend themselves well to observation via infrared imagery. Thus, the normally cloud-limited aspects of satellite imagery do not come into play for these particular environmental conditions. The fact that the DOSL model does well in extreme events is beneficial from the standpoint that these cases can be associated with the destruction of the surface acoustic duct. This so-called afternoon effect happens as the afternoon warming of the mixed layer disrupts the sound channel and the propagation of acoustic energy.

  20. Boron-10 Lined Proportional Counter Model Validation

    SciTech Connect

    Lintereur, Azaree T.; Ely, James H.; Kouzes, Richard T.; Rogers, Jeremy L.; Siciliano, Edward R.

    2012-11-18

    The decreasing supply of 3He is stimulating a search for alternative neutron detectors; one potential 3He replacement is 10B-lined proportional counters. Simulations are being performed to predict the performance of systems designed with 10B-lined tubes. Boron-10-lined tubes are challenging to model accurately because the neutron capture material is not the same as the signal generating material. Thus, to simulate the efficiency, the neutron capture reaction products that escape the lining and enter the signal generating fill gas must be tracked. The tube lining thickness and composition are typically proprietary vendor information, and therefore add additional variables to the system simulation. The modeling methodologies used to predict the neutron detection efficiency of 10B-lined proportional counters were validated by comparing simulated to measured results. The measurements were made with a 252Cf source positioned at several distances from a moderated 2.54-cm diameter 10B-lined tube. Models were constructed of the experimental configurations using the Monte Carlo transport code MCNPX, which is capable of tracking the reaction products from the 10B(n,α) reaction. Several different lining thicknesses and compositions were simulated for comparison with the measured data. This paper presents the results of the evaluation of the experimental and simulated data, and a summary of how the different linings affect the performance of a coincidence counter configuration designed with 10B-lined proportional counters.

  1. Concurrent and Predictive Validity of the Phelps Kindergarten Readiness Scale-II

    ERIC Educational Resources Information Center

    Duncan, Jennifer; Rafter, Erin M.

    2005-01-01

    The purpose of this research was to establish the concurrent and predictive validity of the Phelps Kindergarten Readiness Scale, Second Edition (PKRS-II; L. Phelps, 2003). Seventy-four kindergarten students of diverse ethnic backgrounds enrolled in a northeastern suburban school participated in the study. The concurrent administration of the

  2. Concurrent Validity of the Online Version of the Keirsey Temperament Sorter II.

    ERIC Educational Resources Information Center

    Kelly, Kevin R.; Jugovic, Heidi

    2001-01-01

    Data from the Keirsey Temperament Sorter II online instrument and Myers Briggs Type Indicator (MBTI) for 203 college freshmen were analyzed. Positive correlations appeared between the concurrent MBTI and Keirsey measures of psychological type, giving preliminary support to the validity of the online version of Keirsey. (Contains 28 references.)

  3. Validation of human skin models in the MHz region.

    PubMed

    Huclova, Sonja; Fröhlich, Jürg; Falco, Lisa; Dewarrat, François; Talary, Mark S; Vahldieck, Rüdiger

    2009-01-01

    The human skin consists of several layers with distinct dielectric properties. Resolving the impact of changes in dielectric parameters of skin layers and predicting them allows for non-invasive sensing in medical diagnosis. So far no complete skin and underlying tissue model is available for this purpose in the MHz range. Focusing on this dispersion-dominated frequency region, multilayer skin models are investigated: first, containing homogeneous non-dispersive sublayers, and second, with sublayers obtained from a three-phase Maxwell-Garnett mixture of shelled cell-like ellipsoids. Both models are numerically simulated using the Finite Element Method, a fringing field sensor on top of the multilayer system serving as a probe. Furthermore, measurements with the sensor probing skin in vivo are performed. In order to validate the models, the uppermost skin layer, the stratum corneum, was (i) included and (ii) removed in both models and measurements. It is found that only the Maxwell-Garnett mixture model can qualitatively reproduce the measured dispersion, which still occurs without the stratum corneum; consequently, structural features of tissue have to be part of the model. PMID:19964633
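
    For reference, the classical Maxwell-Garnett mixing rule for spherical inclusions is sketched below in Python as a simplified stand-in for the three-phase shelled-ellipsoid mixture used in the paper; the permittivity values are illustrative only.

        # Hedged sketch: Maxwell-Garnett effective permittivity for a dilute
        # suspension of spherical inclusions in a host medium.
        def maxwell_garnett(eps_matrix, eps_inclusion, volume_fraction):
            """Effective complex permittivity of spheres embedded in a matrix."""
            em, ei, f = eps_matrix, eps_inclusion, volume_fraction
            return em * (ei + 2 * em + 2 * f * (ei - em)) / (ei + 2 * em - f * (ei - em))

        # e.g. cell-like inclusions (conductivity folded into the imaginary part)
        # suspended in an extracellular medium, at a single frequency.
        eps_eff = maxwell_garnett(eps_matrix=70 - 5j, eps_inclusion=50 - 60j, volume_fraction=0.4)
        print(eps_eff)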

  4. Validation of HEDR models. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid.

  5. Validation of Biomarker-based risk prediction models

    PubMed Central

    Taylor, Jeremy M. G.; Ankerst, Donna P.; Andridge, Rebecca R.

    2014-01-01

    The increasing availability and use of predictive models to facilitate informed decision making highlights the need for careful assessment of the validity of these models. In particular, models involving biomarkers require careful validation for two reasons: issues with overfitting when complex models involve a large number of biomarkers, and inter-laboratory variation in assays used to measure biomarkers. In this paper we distinguish between internal and external statistical validation. Internal validation, involving training-testing splits of the available data or cross-validation, is a necessary component of the model building process and can provide valid assessments of model performance. External validation consists of assessing model performance on one or more datasets collected by different investigators from different institutions. External validation is a more rigorous procedure necessary for evaluating whether the predictive model will generalize to populations other than the one on which it was developed. We stress the need for an external dataset to be truly external, that is, to play no role in model development and ideally be completely unavailable to the researchers building the model. In addition to reviewing different types of validation, we describe different types and features of predictive models and strategies for model building, as well as measures appropriate for assessing their performance in the context of validation. No single measure can characterize the different components of the prediction, and the use of multiple summary measures is recommended. PMID:18829476
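
    A short Python sketch (using scikit-learn and synthetic cohorts) of the distinction drawn above: internal validation estimates performance by cross-validation within the development data, while external validation scores the frozen model on data it never touched. The cohorts and model here are illustrative only.

        # Hedged sketch: internal (cross-validated) vs. external validation.
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.metrics import roc_auc_score

        X_dev, y_dev = make_classification(n_samples=400, n_features=20, random_state=1)
        X_ext, y_ext = make_classification(n_samples=200, n_features=20, random_state=2)

        model = LogisticRegression(max_iter=1000)

        # Internal validation: performance estimated within the development cohort.
        internal_auc = cross_val_score(model, X_dev, y_dev, cv=5, scoring="roc_auc").mean()

        # External validation: fit once on the development cohort, score on the
        # untouched external cohort.
        model.fit(X_dev, y_dev)
        external_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])

        print(f"internal (5-fold CV) AUC: {internal_auc:.2f}")
        print(f"external cohort AUC:      {external_auc:.2f}")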

  6. Kinetic modeling of light limitation and sulfur deprivation effects in the induction of hydrogen production with Chlamydomonas reinhardtii. Part II: Definition of model-based protocols and experimental validation.

    PubMed

    Degrenne, B; Pruvost, J; Titica, M; Takache, H; Legrand, J

    2011-10-01

    Photosynthetic hydrogen production under light by the green microalga Chlamydomonas reinhardtii was investigated in a torus-shaped PBR in sulfur-deprived conditions. Culture conditions, represented by the dry biomass concentration of the inoculum, sulfate concentration, and incident photon flux density (PFD), were optimized based on a previously published model (Fouchard et al., 2009. Biotechnol Bioeng 102:232-245). This allowed a strictly autotrophic production, whereas the sulfur-deprived protocol is usually applied in photoheterotrophic conditions. Experimental results combined with additional information from kinetic simulations emphasize effects of sulfur deprivation and light attenuation in the PBR in inducing anoxia and hydrogen production. A broad range of PFD was tested (up to 500 µmol photons m(-2) s(-1)). Maximum hydrogen productivities were 1.0 ± 0.2 mL H2/h/L (or 25 ± 5 mL H2/m(2) h) and 3.1 ± 0.4 mL H2/h/L (or 77.5 ± 10 mL H2/m(2) h), at 110 and 500 µmol photons m(-2) s(-1), respectively. These values approached a maximum specific productivity of approximately 1.9 ± 0.4 mL H2/h/g of biomass dry weight, clearly indicative of a limitation in cell capacity to produce hydrogen. The efficiency of the process and further optimizations are discussed. PMID:21520019

  7. Validating agent based models through virtual worlds.

    SciTech Connect

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOG), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social setting where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior. Results from our work indicate that virtual worlds have the potential for serving as a proxy in allocating and populating behaviors that would be used within further agent-based modeling studies.

  8. EXODUS II: A finite element data model

    SciTech Connect

    Schoof, L.A.; Yarberry, V.R.

    1994-09-01

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as for code-to-code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).

  9. Brief Report: Construct Validity of Two Identity Status Measures: The EIPQ and the EOM-EIS-II

    ERIC Educational Resources Information Center

    Schwartz, Seth J.

    2004-01-01

    The present study was designed to examine construct validity of two identity status measures, the Ego Identity Process Questionnaire (EIPQ; J. Adolescence 18 (1995) 179) and the Extended Objective Measure of Ego Identity Status II (EOM-EIS-II; J. Adolescent Res. 1 (1986) 183). Construct validity was operationalized in terms of how identity status

  10. Geochemistry Model Validation Report: Material Degradation and Release Model

    SciTech Connect

    H. Stockman

    2001-09-28

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to ''Technical Work Plan for: Waste Package Design Description for LA'' (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, ''Analyses and Models'' (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17).

  11. End-to-end modelling of He II flow systems

    NASA Technical Reports Server (NTRS)

    Mord, A. J.; Snyder, H. A.; Newell, D. A.

    1992-01-01

    A practical computer code has been developed which uses the accepted two-fluid model to simulate He II flow in complicated systems. The full set of equations is used, retaining the coupling between the pressure, temperature and velocity fields. This permits modeling He II flow over the full range of conditions, from strongly or weakly driven flow through large pipes, narrow channels and porous media. The system may include most of the components used in modern superfluid flow systems: non-ideal thermomechanical pumps, tapered sections, constrictions, lines with heated side walls and heat exchangers. The model is validated by comparison with published experimental data. It is applied to a complex system to show some of the non-intuitive feedback effects that can occur. This code is ready to be used as a design tool for practical applications of He II. It can also be used for the design of He II experiments and as a tool for comparison of experimental data with the standard two-fluid model.

  12. Micromachined accelerometer design, modeling and validation

    SciTech Connect

    Davies, B.R.; Bateman, V.I.; Brown, F.A.; Montague, S.; Murray, J.R.; Rey, D.; Smith, J.H.

    1998-04-01

    Micromachining technologies enable the development of low-cost devices capable of sensing motion in a reliable and accurate manner. The development of various surface micromachined accelerometers and gyroscopes to sense motion is an ongoing activity at Sandia National Laboratories. In addition, Sandia has developed a fabrication process for integrating both the micromechanical structures and microelectronics circuitry of Micro-Electro-Mechanical Systems (MEMS) on the same chip. This integrated surface micromachining process provides substantial performance and reliability advantages in the development of MEMS accelerometers and gyros. A Sandia MEMS team developed a single-axis, micromachined silicon accelerometer capable of surviving and measuring very high accelerations, up to 50,000 times the acceleration due to gravity or 50 k-G (actually measured to 46,000 G). The Sandia integrated surface micromachining process was selected for fabrication of the sensor due to the extreme measurement sensitivity potential associated with integrated microelectronics. Measurement electronics capable of resolving atto-Farad (10^-18 Farad) changes in capacitance were required due to the very small accelerometer proof mass (< 200 × 10^-9 gram) used in this surface micromachining process. The small proof mass corresponded to small sensor deflections, which in turn required very sensitive electronics to enable accurate acceleration measurement over a range of 1 to 50 k-G. A prototype sensor, based on a suspended plate mass configuration, was developed, and the details of the design, modeling, and validation of the device are presented in this paper. The device was analyzed using both conventional lumped parameter modeling techniques and finite element analysis tools. The device was tested and performed well over its design range.
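
    A hedged back-of-envelope Python sketch of why atto-Farad resolution is needed: a lumped spring-mass model converts acceleration into proof-mass deflection, and a parallel-plate model converts deflection into a capacitance change. The resonant frequency, electrode area, and gap below are assumed values, not the Sandia device parameters; only the ~200 nanogram proof mass is taken from the abstract.

        # Hedged sketch: lumped spring-mass deflection plus parallel-plate capacitance.
        import math

        EPS0 = 8.854e-12            # vacuum permittivity, F/m
        mass = 200e-12              # kg (~200 nanogram proof mass, per the abstract)
        f_res = 200e3               # Hz, assumed resonant frequency
        k = mass * (2 * math.pi * f_res) ** 2    # effective spring constant, N/m

        area = 100e-6 * 100e-6      # m^2, assumed sense-electrode area
        gap = 2e-6                  # m, assumed nominal electrode gap

        def delta_capacitance(accel_in_g):
            """Static capacitance change for a given acceleration (in G)."""
            x = mass * accel_in_g * 9.81 / k     # proof-mass deflection, m
            return EPS0 * area * (1.0 / (gap - x) - 1.0 / gap)

        for g in (1, 1_000, 50_000):
            print(f"{g:>6} G -> delta C = {delta_capacitance(g):.2e} F")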

  13. Design and Development Research: A Model Validation Case

    ERIC Educational Resources Information Center

    Tracey, Monica W.

    2009-01-01

    This is a report of one case of a design and development research study that aimed to validate an overlay instructional design model incorporating the theory of multiple intelligences into instructional systems design. After design and expert review model validation, The Multiple Intelligence (MI) Design Model, used with an Instructional Systems

  14. Techniques and Issues in Agent-Based Modeling Validation

    SciTech Connect

    Pullum, Laura L; Cui, Xiaohui

    2012-01-01

    Validation of simulation models is extremely important. It ensures that the right model has been built and lends confidence to the use of that model to inform critical decisions. Agent-based models (ABM) have been widely deployed in different fields for studying the collective behavior of large numbers of interacting agents. However, researchers have only recently started to consider the issues of validation. Compared to other simulation models, ABM has many differences in model development, usage and validation. An ABM is inherently easier to build than a classical simulation, but more difficult to describe formally since they are closer to human cognition. Using multi-agent models to study complex systems has attracted criticisms because of the challenges involved in their validation [1]. In this report, we describe the challenge of ABM validation and present a novel approach we recently developed for an ABM system.

  15. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    SciTech Connect

    Freeman, J.; Whitmore, J.; Kaffine, L.; Blair, N.; Dobos, A. P.

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.

  16. An innovative education program: the peer competency validator model.

    PubMed

    Ringerman, Eileen; Flint, Lenora L; Hughes, DiAnn E

    2006-01-01

    This article describes the development, implementation, and evaluation of a creative peer competency validation model leading to successful outcomes including a more proficient and motivated staff, the replacement of annual skill labs with ongoing competency validation, and significant cost savings. Trained staff assessed competencies of their coworkers directly in the practice setting. Registered nurses, licensed vocational nurses, and medical assistants recruited from patient care staff comprise the validator group. The model is applicable to any practice setting. PMID:16760770

  17. Statistical Validation of Normal Tissue Complication Probability Models

    SciTech Connect

    Xu Chengjian; Schaaf, Arjen van der; Veld, Aart A. van't; Langendijk, Johannes A.; Schilstra, Cornelis; Radiotherapy Institute Friesland, Leeuwarden

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
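
    A simplified Python sketch (single-loop rather than double cross-validation, with synthetic data) of the two ingredients named above: repeated cross-validation of an L1-penalized model and a permutation test on shuffled outcomes to judge whether the observed cross-validated AUC could have arisen by chance.

        # Hedged sketch: cross-validated AUC of an L1 model plus a label-permutation test.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=150, n_features=30, n_informative=5, random_state=0)
        model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

        observed_auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

        rng = np.random.default_rng(0)
        null_aucs = []
        for _ in range(200):                       # permutation test on shuffled labels
            y_perm = rng.permutation(y)
            null_aucs.append(cross_val_score(model, X, y_perm, cv=5, scoring="roc_auc").mean())

        p_value = (1 + sum(a >= observed_auc for a in null_aucs)) / (1 + len(null_aucs))
        print(f"cross-validated AUC = {observed_auc:.2f}, permutation p = {p_value:.3f}")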

  18. Antibody modeling assessment II. Structures and models.

    PubMed

    Teplyakov, Alexey; Luo, Jinquan; Obmolova, Galina; Malia, Thomas J; Sweet, Raymond; Stanfield, Robyn L; Kodangattil, Sreekumar; Almagro, Juan Carlos; Gilliland, Gary L

    2014-08-01

    To assess the state-of-the-art in antibody structure modeling, a blinded study was conducted. Eleven unpublished Fab crystal structures were used as a benchmark to compare Fv models generated by seven structure prediction methodologies. In the first round, each participant submitted three non-ranked complete Fv models for each target. In the second round, CDR-H3 modeling was performed in the context of the correct environment provided by the crystal structures with CDR-H3 removed. In this report we describe the reference structures and present our assessment of the models. Some of the essential sources of errors in the predictions were traced to the selection of the structure template, both in terms of the CDR canonical structures and VL/VH packing. On top of this, the errors present in the Protein Data Bank structures were sometimes propagated in the current models, which emphasized the need for the curated structural database devoid of errors. Modeling non-canonical structures, including CDR-H3, remains the biggest challenge for antibody structure prediction. PMID:24633955

  19. Three Dimensional Vapor Intrusion Modeling: Model Validation and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Akbariyeh, S.; Patterson, B.; Rakoczy, A.; Li, Y.

    2013-12-01

    Volatile organic chemicals (VOCs), such as chlorinated solvents and petroleum hydrocarbons, are prevalent groundwater contaminants due to their improper disposal and accidental spillage. In addition to contaminating groundwater, VOCs may partition into the overlying vadose zone and enter buildings through gaps and cracks in foundation slabs or basement walls, a process termed vapor intrusion. Vapor intrusion of VOCs has been recognized as a detrimental source of human exposure to potentially carcinogenic or toxic compounds. The simulation of vapor intrusion from a subsurface source has been the focus of many studies to better understand the process and guide field investigation. While multiple analytical and numerical models were developed to simulate the vapor intrusion process, detailed validation of these models against well controlled experiments is still lacking, due to the complexity and uncertainties associated with site characterization and soil gas flux and indoor air concentration measurement. In this work, we present an effort to validate a three-dimensional vapor intrusion model based on a well-controlled experimental quantification of the vapor intrusion pathways into a slab-on-ground building under varying environmental conditions. Finally, a probabilistic approach based on Monte Carlo simulations is implemented to determine the probability distribution of indoor air concentration based on the most uncertain input parameters.
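
    A minimal Python sketch of the Monte Carlo step described above, using the simple attenuation-factor relation C_indoor = alpha * C_source in place of the authors' three-dimensional model; the parameter distributions are assumed for illustration.

        # Hedged sketch: propagate uncertain inputs to a distribution of indoor
        # air concentration via Monte Carlo sampling.
        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        c_source = rng.lognormal(mean=np.log(100.0), sigma=0.5, size=n)   # ug/m^3 soil gas
        alpha = rng.lognormal(mean=np.log(1e-3), sigma=0.8, size=n)       # attenuation factor

        c_indoor = alpha * c_source
        print(f"median indoor concentration : {np.median(c_indoor):.3g} ug/m^3")
        print(f"95th percentile             : {np.percentile(c_indoor, 95):.3g} ug/m^3")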

  20. Cost model validation: a technical and cultural approach

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Rosenberg, L.; Roust, K.; Warfield, K.

    2001-01-01

    This paper summarizes how JPL's parametric mission cost model (PMCM) has been validated using both formal statistical methods and a variety of peer and management reviews in order to establish organizational acceptance of the cost model estimates.

  1. Validation subset selections for extrapolation oriented QSPAR models.

    PubMed

    Szántai-Kis, Csaba; Kövesdi, István; Kéri, György; Orfi, László

    2003-01-01

    One of the most important features of QSPAR models is their predictive ability. The predictive ability of QSPAR models should be checked by external validation. In this work we examined three different types of external validation set selection methods for their usefulness in in-silico screening. The usefulness of the selection methods was studied in such a way that: 1) We generated thousands of QSPR models and stored them in 'model banks'. 2) We selected a final top model from the model banks based on three different validation set selection methods. 3) We predicted large data sets, which we called 'chemical universe sets', and calculated the corresponding SEPs. The models were generated from small fractions of the available water solubility data during a GA Variable Subset Selection procedure. The external validation sets were constructed by random selections, uniformly distributed selections or by perimeter-oriented selections. We found that the best performing models on the perimeter-oriented external validation sets usually gave the best validation results when the remaining part of the available data was overwhelmingly large, i.e., when the model had to make a lot of extrapolations. We also compared the top final models obtained from external validation set selection methods in three independent and different sizes of 'chemical universe sets'. PMID:14768902
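
    A rough Python sketch of the three external-validation-set selection strategies compared above: random, spread roughly uniformly over the descriptor space, and perimeter-oriented (most extreme compounds). The descriptor matrix is synthetic, and the binning and distance rules are simplifications of the actual procedures.

        # Hedged sketch: three ways to pick an external validation subset.
        import numpy as np

        rng = np.random.default_rng(7)
        X = rng.standard_normal((500, 5))          # 500 compounds, 5 descriptors
        n_val = 50

        random_idx = rng.choice(len(X), n_val, replace=False)

        # "Uniform" selection: order along the first principal direction and sample evenly.
        order = np.argsort(X @ np.linalg.svd(X, full_matrices=False)[2][0])
        uniform_idx = order[np.linspace(0, len(X) - 1, n_val).astype(int)]

        # Perimeter-oriented selection: the compounds farthest from the data centroid.
        dist = np.linalg.norm(X - X.mean(axis=0), axis=1)
        perimeter_idx = np.argsort(dist)[-n_val:]

        print(len(set(random_idx) & set(perimeter_idx)), "compounds shared by chance")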

  2. Validation of the Functional Status II questionnaire in the assessment of extremely-low-birthweight infants

    PubMed Central

    COSTA, DAVID DA; BANN, CARLA M; HANSEN, NELLIE I; SHANKARAN, SEETHA; DELANEY-BLACK, VIRGINIA

    2011-01-01

    AIM The increased survival of infants born at extremely low birthweight (ELBW) has been associated with significant morbidity, including higher rates of neurodevelopmental disability. However, formalized testing to evaluate these problems is both time-consuming and costly. The revised Functional Status questionnaire (FS-II) was designed to assess caregivers' perceptions of the functional status of children with chronic diseases. METHOD We evaluated the reliability and validity of the FS-II for ELBW infants at 18 to 22 months corrected age using data from the US Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) Neonatal Research Network (NRN). Exploratory factor analyses were conducted using data from the network's first follow-up study of 1080 children born in 1993 to 1994 (508 males, 572 females [53%]), and results were confirmed using data from the next network follow-up of 4022 children born in 1995 to 2000 (1864 males, 2158 females [54%]). RESULTS Results suggest that a two-factor solution comprising measures of general health and independence is most appropriate for ELBW infants. These factors differed from those found among chronically ill children, and new, more appropriate scales are presented for screening ELBW survivors. Both scales demonstrated good internal consistency: Cronbach's α=0.87 for general health and α=0.75 for independence. Construct validity of the scales was assessed by comparing mean scores on the scales according to scores on the Bayley Scales of Infant Development, second edition (BSID-II), and medical conditions. INTERPRETATION As hypothesized, infants with greater functional impairments according to their BSID-II scores or medical conditions had lower scores on the general health and independence scales, supporting the validity of the scales. PMID:19459909

  3. SAGE III Aerosol Extinction Validation in the Arctic Winter: Comparisons with SAGE II and POAM III

    NASA Technical Reports Server (NTRS)

    Thomason, L. W.; Poole, L. R.; Randall, C. E.

    2007-01-01

    The use of SAGE III multiwavelength aerosol extinction coefficient measurements to infer PSC type is contingent on the robustness of both the extinction magnitude and its spectral variation. Past validation with SAGE II and other similar measurements has shown that the SAGE III extinction coefficient measurements are reliable, though the comparisons have been greatly weighted toward measurements made at mid-latitudes. Some aerosol comparisons made in the Arctic winter as a part of SOLVE II suggested that SAGE III values, particularly at longer wavelengths, are too small, with the implication that both the magnitude and the wavelength dependence are not reliable. Comparisons with POAM III have also suggested a similar discrepancy. Herein, we use SAGE II data as a common standard for comparison of SAGE III and POAM III measurements in the Arctic winters of 2002/2003 through 2004/2005. During the winter, SAGE II measurements are made infrequently at the same latitudes as these instruments. We have mitigated this problem through the use of potential vorticity as a spatial coordinate and thus greatly increased the number of coincident events. We find that SAGE II and III extinction coefficient measurements show a high degree of compatibility at both 1020 nm and 450 nm, except for a 10-20% bias at both wavelengths. In addition, the 452 to 1020-nm extinction ratio shows a consistent bias of approx. 30% throughout the lower stratosphere. We also find that SAGE II and POAM III are on average consistent, though the comparisons show a much higher variability and larger bias than the SAGE II/III comparisons. In addition, we find that the two data sets are not well correlated below 18 km. Overall, we find both the extinction values and the spectral dependence from SAGE III are robust, and we find no evidence of a significant defect within the Arctic vortex.

  4. Development and Validation of the Juvenile Sexual Offense Recidivism Risk Assessment Tool-II.

    PubMed

    Epperson, Douglas L; Ralston, Christopher A

    2015-12-01

    This article describes the development and initial validation of the Juvenile Sexual Offense Recidivism Risk Assessment Tool-II (JSORRAT-II). Potential predictor variables were extracted from case file information for an exhaustive sample of 636 juveniles in Utah who sexually offended between 1990 and 1992. Simultaneous and hierarchical logistic regression analyses were used to identify the group of variables that was most predictive of subsequent juvenile sexual recidivism. A simple categorical scoring system was applied to these variables without meaningful loss of accuracy in the development sample for any sexual (area under the curve [AUC] = .89) and sexually violent (AUC = .89) juvenile recidivism. The JSORRAT-II was cross-validated on an exhaustive sample of 566 juveniles who had sexually offended in Utah in 1996 and 1997. Reliability of scoring the tool across five coders was quite high (intraclass correlation coefficient [ICC] = .96). Relative to the development sample, however, there was considerable shrinkage in the indices of predictive accuracy for any sexual (AUC = .65) and sexually violent (AUC = .65) juvenile recidivism. The reduced level of accuracy was not explained by severity of the index sexual offense, time at risk, or missing data. Capitalization on chance and other explanations for the possible reduction in predictive accuracy are explored, and potential uses and limitations of the tool are discussed. PMID:24492618

  5. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  6. ECOLOGICAL MODEL TESTING: VERIFICATION, VALIDATION OR NEITHER?

    EPA Science Inventory

    Consider the need to make a management decision about a declining animal population. Two models are available to help. Before a decision is made based on model results, the astute manager or policy maker may ask, "Do the models work?" Or, "Have the models been verified or validat...

  7. Validation of Numerical Shallow Water Models for Tidal Lagoons

    SciTech Connect

    Eliason, D.; Bourgeois, A.

    1999-11-01

    An analytical solution is presented for the case of a stratified, tidally forced lagoon. This solution, especially its energetics, is useful for the validation of numerical shallow water models under stratified, tidally forced conditions. The utility of the analytical solution for validation is demonstrated for a simple finite difference numerical model. A comparison is presented of the energetics of the numerical and analytical solutions in terms of the convergence of model results to the analytical solution with increasing spatial and temporal resolution.
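
    A toy Python sketch of how an analytical solution is typically used for validation: run the numerical scheme at successively finer resolutions and estimate the observed order of convergence from the error against the exact solution. The forward-Euler example below is a stand-in for the shallow water model, not the authors' code.

        # Hedged sketch: convergence of a numerical scheme toward an exact solution.
        import numpy as np

        def exact(t):
            return np.exp(-t)

        def numerical(dt, t_end=1.0):
            n = int(round(t_end / dt))
            y = 1.0
            for _ in range(n):
                y += dt * (-y)            # forward Euler for y' = -y
            return y

        dts = [0.1, 0.05, 0.025, 0.0125]
        errors = [abs(numerical(dt) - exact(1.0)) for dt in dts]

        for (dt1, e1), (dt2, e2) in zip(zip(dts, errors), zip(dts[1:], errors[1:])):
            order = np.log(e1 / e2) / np.log(dt1 / dt2)
            print(f"dt {dt1:>7.4f} -> {dt2:>7.4f}: observed order = {order:.2f}")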

  8. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
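
    A minimal Python sketch (with synthetic cohorts) of the quantity at issue: the c-statistic of a fixed, previously developed model computed on the development set and again on an external set with a different case-mix. The benchmark and permutation procedures discussed in the paper build on this comparison; the data and parameters here are assumptions for illustration.

        # Hedged sketch: c-statistic of a frozen model on development vs. external data.
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        X_dev, y_dev = make_classification(n_samples=500, n_features=10, class_sep=1.5,
                                           random_state=0)
        X_val, y_val = make_classification(n_samples=500, n_features=10, class_sep=0.7,
                                           random_state=0)   # less heterogeneous case-mix

        model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
        c_dev = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
        c_val = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"c-statistic, development: {c_dev:.2f}; external validation: {c_val:.2f}")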

  9. Using virtual reality to validate system models

    SciTech Connect

    Winter, V.L.; Caudell, T.P.

    1999-12-09

    To date, most validation techniques are highly biased towards calculations involving symbolic representations of problems. These calculations are either formal (in the case of consistency and completeness checks) or informal (in the case of code inspections). The authors believe that an essential type of evidence of the correctness of the formalization process must be provided by (i.e., must originate from) human-based calculation. They further believe that human calculation can be significantly amplified by shifting from symbolic representations to graphical representations. This paper describes their preliminary efforts in realizing such a representational shift.

  10. Translation, adaptation and validation of a Portuguese version of the Moorehead-Ardelt Quality of Life Questionnaire II.

    PubMed

    Maciel, João; Infante, Paulo; Ribeiro, Susana; Ferreira, André; Silva, Artur C; Caravana, Jorge; Carvalho, Manuel G

    2014-11-01

    The prevalence of obesity has increased worldwide. An assessment of the impact of obesity on health-related quality of life (HRQoL) requires specific instruments. The Moorehead-Ardelt Quality of Life Questionnaire II (MA-II) is a widely used instrument to assess HRQoL in morbidly obese patients. The objective of this study was to translate and validate a Portuguese version of the MA-II. The study included forward and backward translations of the original MA-II. The reliability of the Portuguese MA-II was estimated using the internal consistency and test-retest methods. For validation purposes, the Spearman's rank correlation coefficient was used to evaluate the correlation between the Portuguese MA-II and the Portuguese versions of two other questionnaires, the 36-item Short Form Health Survey (SF-36) and the Impact of Weight on Quality of Life-Lite (IWQOL-Lite). One hundred and fifty morbidly obese patients were randomly assigned to test the reliability and validity of the Portuguese MA-II. Good internal consistency was demonstrated by a Cronbach's alpha coefficient of 0.80, and a very good agreement in terms of test-retest reliability was recorded, with an overall intraclass correlation coefficient (ICC) of 0.88. The total sums of MA-II scores and each item of MA-II were significantly correlated with all domains of SF-36 and IWQOL-Lite. A statistically significant negative correlation was found between the MA-II total score and BMI. Moreover, age, gender and surgical status were independent predictors of MA-II total score. A reliable and valid Portuguese version of the MA-II was produced, thus enabling the routine use of MA-II in the morbidly obese Portuguese population. PMID:24817428

  11. Teacher Change Beliefs: Validating a Scale with Structural Equation Modelling

    ERIC Educational Resources Information Center

    Kin, Tai Mei; Abdull Kareem, Omar; Nordin, Mohamad Sahari; Wai Bing, Khuan

    2015-01-01

    The objectives of the study were to validate a substantiated Teacher Change Beliefs Model (TCBM) and an instrument to identify critical components of teacher change beliefs (TCB) in Malaysian secondary schools. Five different pilot test approaches were applied to ensure the validity and reliability of the instrument. A total of 936 teachers from

  12. VALIDATION METHODS FOR CHEMICAL EXPOSURE AND HAZARD ASSESSMENT MODELS

    EPA Science Inventory

    Mathematical models and computer simulation codes designed to aid in hazard assessment for environmental protection must be verified and validated before they can be used with confidence in a decision-making or priority-setting context. Operational validation, or full-scale testi...

  13. External Validation of the Strategy Choice Model for Addition.

    ERIC Educational Resources Information Center

    Geary, David C.; Burlingham-Dubree, Maryann

    1989-01-01

    Suggested that strategy choices for solving addition problems were related to numerical and spatial ability domains, while the speed of executing the component process of fact retrieval was related to arithmetic ability only. Findings supported the convergent validity of the strategy choice model and its discriminant validity. (RH)

  14. Teacher Change Beliefs: Validating a Scale with Structural Equation Modelling

    ERIC Educational Resources Information Center

    Kin, Tai Mei; Abdull Kareem, Omar; Nordin, Mohamad Sahari; Wai Bing, Khuan

    2015-01-01

    The objectives of the study were to validate a substantiated Teacher Change Beliefs Model (TCBM) and an instrument to identify critical components of teacher change beliefs (TCB) in Malaysian secondary schools. Five different pilot test approaches were applied to ensure the validity and reliability of the instrument. A total of 936 teachers from…

  15. Validation of SAGE II aerosol measurements by comparison with correlative sensors

    NASA Technical Reports Server (NTRS)

    Swissler, T. J.

    1986-01-01

    The SAGE II limb-scanning radiometer carried on the Earth Radiation Budget Satellite functions at wavelengths of 0.385, 0.45, 0.525, and 1.02 microns to identify vertical profiles of aerosol density by atmospheric extinction measurements from cloud tops upward. The data are being validated by correlating the satellite data with data gathered with, e.g., lidar, sunphotometer, and dustsonde instruments. Work thus far has shown that the 1 micron measurements from the ground and satellite are highly correlated and are therefore accurate to within measurement uncertainty.

  16. A framework for biodynamic feedthrough analysis--part II: validation and application.

    PubMed

    Venrooij, Joost; van Paassen, Marinus M; Mulder, Mark; Abbink, David A; Mulder, Max; van der Helm, Frans C T; Bulthoff, Heinrich H

    2014-09-01

    Biodynamic feedthrough (BDFT) is a complex phenomenon, that has been studied for several decades. However, there is little consensus on how to approach the BDFT problem in terms of definitions, nomenclature, and mathematical descriptions. In this paper, the framework for BDFT analysis, as presented in Part I of this dual publication, is validated and applied. The goal of this framework is twofold. First of all, it provides some common ground between the seemingly large range of different approaches existing in BDFT literature. Secondly, the framework itself allows for gaining new insights into BDFT phenomena. Using recently obtained measurement data, parts of the framework that were not already addressed elsewhere, are validated. As an example of a practical application of the framework, it will be demonstrated how the effects of control device dynamics on BDFT can be understood and accurately predicted. Other ways of employing the framework are illustrated by interpreting the results of three selected studies from the literature using the BDFT framework. The presentation of the BDFT framework is divided into two parts. This paper, Part II, addresses the validation and application of the framework. Part I, which is also published in this journal issue, addresses the theoretical foundations of the framework. The work is presented in two separate papers to allow for a detailed discussion of both the framework's theoretical background and its validation. PMID:25137695

  17. Exploring the Validity of Valproic Acid Animal Model of Autism.

    PubMed

    Mabunga, Darine Froy N; Gonzales, Edson Luck T; Kim, Ji-Woon; Kim, Ki Chan; Shin, Chan Young

    2015-12-01

    The valproic acid (VPA) animal model of autism spectrum disorder (ASD) is one of the most widely used animal models in the field. Like any other disease model, it cannot capture the totality of the features seen in autism. Is it, then, valid to model autism? This model demonstrates many of the structural and behavioral features that can be observed in individuals with autism. These similarities enable the model to define relevant pathways of developmental dysregulation resulting from environmental manipulation. The uncovering of these complex pathways has resulted in a growing pool of potential therapeutic candidates addressing the core symptoms of ASD. Here, we summarize the validity points of VPA that may or may not qualify it as a valid animal model of ASD. PMID:26713077

  18. Economic analysis of model validation for a challenge problem

    DOE PAGESBeta

    Paez, Paul J.; Paez, Thomas L.; Hasselman, Timothy K.

    2016-02-19

    It is now commonplace for engineers to build mathematical models of the systems they are designing, building, or testing. And, it is nearly universally accepted that phenomenological models of physical systems must be validated prior to use for prediction in consequential scenarios. Yet, there are certain situations in which testing only, or no testing and no modeling, may be economically viable alternatives to modeling and its associated testing. This paper develops an economic framework within which benefit–cost can be evaluated for modeling and model validation relative to other options. The development is presented in terms of a challenge problem. As a result, we provide a numerical example that quantifies when modeling, calibration, and validation yield higher benefit–cost than a testing-only or no-modeling-and-no-testing option.
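
    As a loose illustration of the kind of benefit-cost comparison the paper formalizes (this is not the authors' actual framework), the sketch below ranks three hypothetical development options by expected net benefit; every cost, probability, and benefit figure is an invented placeholder.

        # Hypothetical benefit-cost comparison of three development options:
        # (1) model + calibrate + validate, (2) testing only, (3) no model, no test.
        # All numbers are illustrative placeholders, not figures from the paper.

        def expected_net_benefit(benefit_if_success, prob_success, cost):
            """Expected benefit of an option minus its cost."""
            return benefit_if_success * prob_success - cost

        options = {
            "model + calibration + validation": expected_net_benefit(10.0e6, 0.95, 2.0e6),
            "testing only":                     expected_net_benefit(10.0e6, 0.90, 3.5e6),
            "no modeling, no testing":          expected_net_benefit(10.0e6, 0.60, 0.0),
        }

        for name, net in sorted(options.items(), key=lambda kv: -kv[1]):
            print(f"{name:35s} expected net benefit = {net / 1e6:5.2f} M$")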

  19. Exploring the Validity of Valproic Acid Animal Model of Autism

    PubMed Central

    Mabunga, Darine Froy N.; Gonzales, Edson Luck T.; Kim, Ji-woon; Kim, Ki Chan

    2015-01-01

    The valproic acid (VPA) animal model of autism spectrum disorder (ASD) is one of the most widely used animal models in the field. Like any other disease model, it cannot capture the totality of the features seen in autism. Is it, then, valid to model autism? This model demonstrates many of the structural and behavioral features that can be observed in individuals with autism. These similarities enable the model to define relevant pathways of developmental dysregulation resulting from environmental manipulation. The uncovering of these complex pathways has resulted in a growing pool of potential therapeutic candidates addressing the core symptoms of ASD. Here, we summarize the validity points of VPA that may or may not qualify it as a valid animal model of ASD. PMID:26713077

  20. Validating Computational Cognitive Process Models across Multiple Timescales

    NASA Astrophysics Data System (ADS)

    Myers, Christopher; Gluck, Kevin; Gunzelmann, Glenn; Krusmark, Michael

    2010-12-01

    Model comparison is vital to evaluating progress in the fields of artificial general intelligence (AGI) and cognitive architecture. As they mature, AGI and cognitive architectures will become increasingly capable of providing a single model that completes a multitude of tasks, some of which the model was not specifically engineered to perform. These models will be expected to operate for extended periods of time and serve functional roles in real-world contexts. Questions arise regarding how to evaluate such models appropriately, including issues pertaining to model comparison and validation. In this paper, we specifically address model validation across multiple levels of abstraction, using an existing computational process model of unmanned aerial vehicle basic maneuvering to illustrate the relationship between validity and timescales of analysis.

  1. ESEEM Analysis of Multi-Histidine Cu(II)-Coordination in Model Complexes, Peptides, and Amyloid-β

    PubMed Central

    2015-01-01

    We validate the use of ESEEM to predict the number of 14N nuclei coupled to a Cu(II) ion by the use of model complexes and two small peptides with well-known Cu(II) coordination. We apply this method to gain new insight into less explored aspects of Cu(II) coordination in amyloid-β (Aβ). Aβ has two coordination modes of Cu(II) at physiological pH. A controversy has existed regarding the number of histidine residues coordinated to the Cu(II) ion in component II, which is dominant at high pH (~8.7) values. Importantly, with an excess amount of Zn(II) ions, as is the case in brain tissues affected by Alzheimer's disease, component II becomes the dominant coordination mode, as Zn(II) selectively substitutes component I bound to Cu(II). We confirm that component II only contains single histidine coordination, using ESEEM and a set of model complexes. The ESEEM experiments carried out on systematically 15N-labeled peptides reveal that, in component II, His 13 and His 14 are more favored as equatorial ligands compared to His 6. Revealing molecular level details of subcomponents in metal ion coordination is critical in understanding the role of metal ions in Alzheimer's disease etiology. PMID:25014537

  2. Gear Windage Modeling Progress - Experimental Validation Status

    NASA Technical Reports Server (NTRS)

    Kunz, Rob; Handschuh, Robert F.

    2008-01-01

    In the Subsonics Rotary Wing (SRW) Project being funded for propulsion work at NASA Glenn Research Center, performance of the propulsion system is of high importance. In current rotorcraft drive systems many gearing components operate at high rotational speed (pitch line velocity > 24,000 ft/min). In our testing of high-speed helical gear trains at NASA Glenn we have found that the work done on the air-oil mist within the gearbox can become a significant part of the power loss of the system. This loss mechanism is referred to as windage. The effort described in this presentation is to understand the variables that affect windage and to develop a good experimental database to validate the analytical project being conducted at Penn State University by Dr. Rob Kunz under a NASA SRW NRA. The presentation provides an update on the status of these efforts.

  3. MPC model validation using reverse analysis method

    NASA Astrophysics Data System (ADS)

    Lee, Sukho; Shin, So-Eun; Shon, Jungwook; Park, Jisoong; Shin, Inkyun; Jeon, Chan-Uk

    2015-10-01

    It has become more challenging to guarantee overall mask critical dimension (CD) quality with the increase in hot spots and assist features at leading-edge devices. Therefore, mask CD correction methodology has been changing from rule-based (and/or selective) correction to model-based MPC (Mask Process Correction) to compensate for through-pitch linearity and hot spot CD errors. In order to improve mask quality, an accurate MPC model is required that properly describes the current mask fabrication process. There are limits on building and defining an accurate MPC model because it is hard to know the actual CD trend, such as CD linearity and through-pitch behavior, owing to process dispersion and measurement error. To mitigate such noise, we normally measure several sites of each pattern type and then use the mean value of these measurements for MPC modeling. Through those procedures the noise level of the mask data is reduced, but this does not always guarantee an improvement in model accuracy, even though the measurement overhead increases. Root mean square (RMS) values, which are usually used as an accuracy indicator after modeling, actually do not give any information on the accuracy of the MPC model, since they are related only to the data noise dispersion. In this paper, we took a reverse approach to identify the model accuracy. We create data regarded as the actual CD trend and then create scattered data by adding controlled dispersion, representing the process and measurement error, to those data. We then build an MPC model based on the scattered data to examine how much the model deviates from the actual CD trend, from which the model accuracy can be investigated. We believe this provides an appropriate method to define the reliability of an MPC model developed for optimized process corrections.
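
    A minimal sketch of the reverse-analysis idea described above, using invented numbers and a simple polynomial as a stand-in for the real MPC model: a known "true" CD trend is scattered with controlled noise, a model is fitted to the noisy data, and the fit is then judged against the true trend rather than against the data.

        import numpy as np

        rng = np.random.default_rng(0)

        # "True" CD-through-pitch trend (illustrative stand-in for the actual mask CD behaviour).
        pitch = np.linspace(100.0, 1000.0, 50)            # nm
        true_cd = 60.0 + 8.0 * np.exp(-pitch / 300.0)     # nm

        # Scattered data: true trend plus controlled dispersion emulating process/measurement error.
        sigma = 0.5                                        # nm, controlled noise level
        noisy_cd = true_cd + rng.normal(0.0, sigma, size=pitch.shape)

        # Surrogate "MPC model": here simply a low-order polynomial fitted to the noisy data.
        coeffs = np.polyfit(pitch, noisy_cd, deg=3)
        model_cd = np.polyval(coeffs, pitch)

        # RMS against the noisy data mostly reflects the injected dispersion ...
        rms_vs_data = np.sqrt(np.mean((model_cd - noisy_cd) ** 2))
        # ... whereas the deviation from the known true trend measures model accuracy.
        rms_vs_truth = np.sqrt(np.mean((model_cd - true_cd) ** 2))

        print(f"RMS vs noisy data : {rms_vs_data:.3f} nm")
        print(f"RMS vs true trend : {rms_vs_truth:.3f} nm")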

  4. International Space Station Power System Model Validated

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Delleur, Ann M.

    2002-01-01

    System Power Analysis for Capability Evaluation (SPACE) is a computer model of the International Space Station's (ISS) Electric Power System (EPS) developed at the NASA Glenn Research Center. This uniquely integrated, detailed model can predict EPS capability, assess EPS performance during a given mission with a specified load demand, conduct what-if studies, and support on-orbit anomaly resolution.

  5. Validating the Mexican American Intergenerational Caregiving Model

    ERIC Educational Resources Information Center

    Escandon, Socorro

    2011-01-01

    The purpose of this study was to substantiate and further develop a previously formulated conceptual model of Role Acceptance in Mexican American family caregivers by exploring the theoretical strengths of the model. The sample consisted of women older than 21 years of age who self-identified as Hispanic, were related through consanguinal or…

  6. SWAT: Model use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...

  7. HEDR model validation plan. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  8. Highlights of Transient Plume Impingement Model Validation and Applications

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael

    2011-01-01

    This paper describes highlights of an ongoing validation effort conducted to assess the viability of applying a set of analytic point source transient free molecule equations to model behavior ranging from molecular effusion to rocket plumes. The validation effort includes encouraging comparisons to both steady and transient studies involving experimental data and direct simulation Monte Carlo results. Finally, this model is applied to describe features of two exotic transient scenarios involving NASA Goddard Space Flight Center satellite programs.

  9. VERIFICATION AND VALIDATION OF THE SPARC MODEL

    EPA Science Inventory

    Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values--that is, the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships that allow estimation of some ...

  10. Validating Predictions from Climate Envelope Models

    PubMed Central

    Watling, James I.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Mazzotti, Frank J.; Romañach, Stephanie S.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967-1971 (t1) and evaluated using occurrence data from 1998-2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species. PMID:23717452
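
    For readers unfamiliar with the evaluation metrics used above, the following sketch computes sensitivity and specificity from hypothetical presence/absence data; the arrays are invented and do not come from the study.

        import numpy as np

        # Hypothetical presence/absence observations at time t2 (1 = present, 0 = absent)
        observed  = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
        # Binary predictions from a (hypothetical) climate envelope model calibrated at t1
        predicted = np.array([1, 0, 1, 0, 1, 1, 0, 0, 1, 0])

        tp = np.sum((observed == 1) & (predicted == 1))   # presences correctly predicted
        fn = np.sum((observed == 1) & (predicted == 0))   # presences missed
        tn = np.sum((observed == 0) & (predicted == 0))   # absences correctly predicted
        fp = np.sum((observed == 0) & (predicted == 1))   # absences wrongly predicted present

        sensitivity = tp / (tp + fn)   # ability to classify presences correctly
        specificity = tn / (tn + fp)   # ability to classify absences correctly
        print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")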

  11. Development and validation of model for sand

    NASA Astrophysics Data System (ADS)

    Church, P.; Ingamells, V.; Wood, A.; Gould, P.; Perry, J.; Jardine, A.; Tyas, A.

    2015-09-01

    There is a growing requirement within QinetiQ to develop models for assessments when there is very little experimental data. A theoretical approach to developing equations of state for geological materials has been developed using Quantitative Structure Property Modelling based on the Porter-Gould model approach. This has been applied to well-controlled sand with different moisture contents and particle shapes. The Porter-Gould model describes an elastic response and gives good agreement at high impact pressures with experiment indicating that the response under these conditions is dominated by the molecular response. However at lower pressures the compaction behaviour is dominated by a micro-mechanical response which drives the need for additional theoretical tools and experiments to separate the volumetric and shear compaction behaviour. The constitutive response is fitted to existing triaxial cell data and Quasi-Static (QS) compaction data. This data is then used to construct a model in the hydrocode. The model shows great promise in predicting plate impact, Hopkinson bar, fragment penetration and residual velocity of fragments through a finite thickness of sand.

  12. Validation of Model Forecasts of the Ambient Solar Wind

    NASA Technical Reports Server (NTRS)

    Macneice, P. J.; Hesse, M.; Kuznetsova, M. M.; Rastaetter, L.; Taktakishvili, A.

    2009-01-01

    Independent and automated validation is a vital step in the progression of models from the research community into operational forecasting use. In this paper we describe a program in development at the CCMC to provide just such a comprehensive validation for models of the ambient solar wind in the inner heliosphere. We have built upon previous efforts published in the community, sharpened their definitions, and completed a baseline study. We also provide first results from this program of the comparative performance of the MHD models available at the CCMC against that of the Wang-Sheeley-Arge (WSA) model. An important goal of this effort is to provide a consistent validation to all available models. Clearly exposing the relative strengths and weaknesses of the different models will enable forecasters to craft more reliable ensemble forecasting strategies. Models of the ambient solar wind are developing rapidly as a result of improvements in data supply, numerical techniques, and computing resources. It is anticipated that in the next five to ten years, the MHD based models will supplant semi-empirical potential based models such as the WSA model, as the best available forecast models. We anticipate that this validation effort will track this evolution and so assist policy makers in gauging the value of past and future investment in modeling support.

  13. Validation of geometric models for fisheye lenses

    NASA Astrophysics Data System (ADS)

    Schneider, D.; Schwalbe, E.; Maas, H.-G.

    The paper focuses on the photogrammetric investigation of geometric models for different types of optical fisheye constructions (equidistant, equisolid-angle, stereographic and orthographic projection). These models were implemented and thoroughly tested in a spatial resection and a self-calibrating bundle adjustment. For this purpose, fisheye images were taken with a Nikkor 8 mm fisheye lens on a Kodak DSC 14n Pro digital camera in a hemispherical calibration room. Both the spatial resection and the bundle adjustment resulted in a standard deviation of unit weight of 1/10 pixel with a suitable set of simultaneous calibration parameters introduced into the camera model. The camera-lens combination was treated with all of the four basic models mentioned above. Using the same set of additional lens distortion parameters, the differences between the models can largely be compensated, delivering almost the same precision parameters. The relative object space precision obtained from the bundle adjustment was ca. 1:10 000 of the object dimensions. This value can be considered a very satisfying result, as fisheye images generally have a lower geometric resolution as a consequence of their large field of view and also have an inferior imaging quality in comparison to most central perspective lenses.
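
    The four basic projection models named above map the incidence angle to a radial image distance in standard closed forms. The sketch below encodes these textbook formulas; the focal length and angle used in the example are illustrative only.

        import numpy as np

        def fisheye_radius(theta, f, model):
            """Radial image distance r for incidence angle theta (rad) and focal length f.

            Standard geometric fisheye projections, as named in the paper:
              equidistant      r = f * theta
              equisolid        r = 2 f * sin(theta / 2)
              stereographic    r = 2 f * tan(theta / 2)
              orthographic     r = f * sin(theta)
            """
            if model == "equidistant":
                return f * theta
            if model == "equisolid":
                return 2.0 * f * np.sin(theta / 2.0)
            if model == "stereographic":
                return 2.0 * f * np.tan(theta / 2.0)
            if model == "orthographic":
                return f * np.sin(theta)
            raise ValueError(f"unknown fisheye model: {model}")

        # Example: compare the four models at 60 degrees off-axis for an 8 mm lens.
        theta = np.radians(60.0)
        for m in ("equidistant", "equisolid", "stereographic", "orthographic"):
            print(f"{m:13s} r = {fisheye_radius(theta, 8.0, m):.3f} mm")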

  14. Circumplex Structure and Personality Disorder Correlates of the Interpersonal Problems Model (IIP-C): Construct Validity and Clinical Implications

    ERIC Educational Resources Information Center

    Monsen, Jon T.; Hagtvet, Knut A.; Havik, Odd E.; Eilertsen, Dag E.

    2006-01-01

    This study assessed the construct validity of the circumplex model of the Inventory of Interpersonal Problems (IIP-C) in Norwegian clinical and nonclinical samples. Structure was examined by evaluating the fit of the circumplex model to data obtained by the IIP-C. Observer-rated personality disorder criteria (DSM-IV, Axis II) were used as external

  15. Psychometric validation of the BDI-II among HIV-positive CHARTER study participants.

    PubMed

    Hobkirk, Andra L; Starosta, Amy J; De Leo, Joseph A; Marra, Christina M; Heaton, Robert K; Earleywine, Mitch

    2015-06-01

    Rates of depression are high among individuals living with HIV. Accurate assessment of depressive symptoms among this population is important for ensuring proper diagnosis and treatment. The Beck Depression Inventory-II (BDI-II) is a widely used measure for assessing depression; however, its psychometric properties have not yet been investigated for use with HIV-positive populations in the United States. The current study was the first to assess the psychometric properties of the BDI-II among a large cohort of HIV-positive participants sampled at multiple sites across the United States as part of the CNS HIV Antiretroviral Therapy Effects Research (CHARTER) study. The BDI-II test scores showed good internal consistency (α = .93) and adequate test-retest reliability (internal consistency coefficient = 0.83) over a 6-mo period. Using a "gold standard" of major depressive disorder determined by the Composite International Diagnostic Interview, sensitivity and specificity were maximized at a total cut-off score of 17, and a receiver operating characteristic analysis confirmed that the BDI-II is an adequate diagnostic measure for the sample (area under the curve = 0.83). The sensitivity and specificity of each score are provided graphically. Confirmatory factor analyses confirmed the best fit for a three-factor model over one-factor and two-factor models and models with a higher-order factor included. The results suggest that the BDI-II is an adequate measure for assessing depressive symptoms among U.S. HIV-positive patients. Cut-off scores should be adjusted to enhance sensitivity or specificity as needed, and the measure can be differentiated into cognitive, affective, and somatic depressive symptoms. PMID:25419643
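
    The cut-off analysis described above can be illustrated with a short ROC sketch on invented scores and diagnoses (not the CHARTER data); the cut-off here is picked by Youden's index purely for illustration, whereas the study reports score-by-score sensitivity and specificity.

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(1)

        # Hypothetical data: BDI-II total scores and gold-standard MDD diagnoses (1 = MDD).
        n = 200
        diagnosis = rng.integers(0, 2, size=n)
        scores = np.where(diagnosis == 1,
                          rng.normal(24, 8, size=n),    # depressed group scores higher
                          rng.normal(10, 6, size=n)).clip(0, 63)

        auc = roc_auc_score(diagnosis, scores)
        fpr, tpr, thresholds = roc_curve(diagnosis, scores)

        # Pick the cut-off that maximizes Youden's J = sensitivity + specificity - 1.
        j = tpr - fpr
        best = np.argmax(j)
        print(f"AUC = {auc:.2f}")
        print(f"best cut-off ~ {thresholds[best]:.1f} "
              f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")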

  16. Shuttle Spacesuit: Fabric/LCVG Model Validation

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tweed, J.; Zeitlin, C.; Kim, M.-H. Y.; Anderson, B. M.; Cucinotta, F. A.; Ware, J.; Persans, A. E.

    2001-01-01

    A detailed spacesuit computational model is being developed at the Langley Research Center for radiation exposure evaluation studies. The details of the construction of the spacesuit are critical to estimation of exposures and assessing the risk to the astronaut on EVA. Past evaluations of spacesuit shielding properties assumed the basic fabric lay-up (Thermal Micrometeoroid Garment, fabric restraints, and pressure envelope) and Liquid Cooling and Ventilation Garment (LCVG) could be homogenized as a single layer, overestimating the protective properties over 60 percent of the fabric area. The present spacesuit model represents the inhomogeneous distributions of LCVG materials (mainly the water filled cooling tubes). An experimental test is performed using a 34-MeV proton beam and high-resolution detectors to compare with model-predicted transmission factors. Some suggestions are made on possible construction methods to improve the spacesuit's protective properties.

  17. Shuttle Spacesuit: Fabric/LCVG Model Validation

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tweed, J.; Zeitlin, C.; Kim, M.-H.; Anderson, B. M.; Cucinotta, F. A.; Ware, J.; Persans, A. E.

    2001-01-01

    A detailed spacesuit computational model is being developed at the Langley Research Center for radiation exposure evaluation studies. The details of the construction of the spacesuit are critical to estimation of exposures and assessing the risk to the astronaut on EVA. Past evaluations of spacesuit shielding properties assumed the basic fabric lay-up (Thermal Micrometeoroid Garment, fabric restraints, and pressure envelope) and Liquid Cooling and Ventilation Garment (LCVG) could be homogenized as a single layer, overestimating the protective properties over 60 percent of the fabric area. The present spacesuit model represents the inhomogeneous distributions of LCVG materials (mainly the water filled cooling tubes). An experimental test is performed using a 34-MeV proton beam and high-resolution detectors to compare with model-predicted transmission factors. Some suggestions are made on possible construction methods to improve the spacesuit's protective properties.

  18. Measurements of Humidity in the Atmosphere and Validation Experiments (Mohave, Mohave II): Results Overview

    NASA Technical Reports Server (NTRS)

    Leblanc, Thierry; McDermid, Iain S.; McGee, Thomas G.; Twigg, Laurence W.; Sumnicht, Grant K.; Whiteman, David N.; Rush, Kurt D.; Cadirola, Martin P.; Venable, Demetrius D.; Connell, R.; Demoz, Belay B.; Vomel, Holger; Miloshevich, L.

    2008-01-01

    The Measurements of Humidity in the Atmosphere and Validation Experiments (MOHAVE, MOHAVE-II) inter-comparison campaigns took place at the Jet Propulsion Laboratory (JPL) Table Mountain Facility (TMF, 34.5°N) in October 2006 and 2007, respectively. Both campaigns aimed at evaluating the capability of three Raman lidars for the measurement of water vapor in the upper troposphere and lower stratosphere (UT/LS). During each campaign, more than 200 hours of lidar measurements were compared to balloon-borne measurements obtained from 10 Cryogenic Frost-point Hygrometer (CFH) flights and over 50 Vaisala RS92 radiosonde flights. During MOHAVE, fluorescence in all three lidar receivers was identified, causing a significant wet bias above 10-12 km in the lidar profiles as compared to the CFH. All three lidars were reconfigured after MOHAVE, and no such bias was observed during the MOHAVE-II campaign. The lidar profiles agreed very well with the CFH up to 13-17 km altitude, where the lidar measurements become noise limited. The results from MOHAVE-II have shown that the water vapor Raman lidar will be an appropriate technique for the long-term monitoring of water vapor in the UT/LS given a slight increase in its power-aperture, as well as careful calibration.

  19. Detailed validation of the bidirectional effect in various Case I and Case II waters.

    PubMed

    Gleason, Arthur C R; Voss, Kenneth J; Gordon, Howard R; Twardowski, Michael; Sullivan, James; Trees, Charles; Weidemann, Alan; Berthon, Jean-François; Clark, Dennis; Lee, Zhong-Ping

    2012-03-26

    Simulated bidirectional reflectance distribution functions (BRDF) were compared with measurements made just beneath the water's surface. In Case I water, the set of simulations that varied the particle scattering phase function depending on chlorophyll concentration agreed more closely with the data than other models. In Case II water, however, the simulations using fixed phase functions agreed well with the data and were nearly indistinguishable from each other, on average. The results suggest that BRDF corrections in Case II water are feasible using single, average, particle scattering phase functions, but that the existing approach using variable particle scattering phase functions is still warranted in Case I water. PMID:22453442

  20. Reliability and validity of the modified Conconi test on concept II rowing ergometers.

    PubMed

    Celik, Ozgür; Koşar, Sükran Nazan; Korkusuz, Feza; Bozkurt, Murat

    2005-11-01

    The purpose of this study was to assess the reliability and validity of the modified Conconi test on Concept II rowing ergometers. Twenty-eight oarsmen conducted 3 performance tests on separate days. Reliability was assessed using the break point in heart rate (HR) linearity, called the Conconi test (CT) and Conconi retest (CRT), for the noninvasive measurement of the anaerobic threshold (AT). Blood lactate measurement was considered the gold standard for the assessment of the AT, and the validity of the CT was assessed by blood samples taken during an incremental load test (ILT) on ergometers. According to the results, the mean power output (PO) scores for the CT, CRT, and ILT were 234.2 +/- 40.3 W, 232.5 +/- 39.7 W, and 229.7 +/- 39.6 W, respectively. The mean HR values at the AT for the CT, CRT, and ILT were 165.4 +/- 11.2 b.min(-1), 160.4 +/- 10.8 b.min(-1), and 158.3 +/- 8.8 b.min(-1), respectively. Intraclass correlation coefficient (ICC) analysis indicated a significant correlation among the 3 tests. Also, Bland and Altman plots showed that there was an association between the noninvasive tests and the ILT PO scores and HRs (95% confidence interval [CI]). In conclusion, this study showed that the modified CT is a reliable and valid method for determining the AT of elite men rowers. PMID:16287355
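
    A brief sketch of the style of agreement analysis mentioned above (Bland-Altman limits of agreement and a simple correlation), using invented test-retest power outputs rather than the study's measurements.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical power output at the anaerobic threshold from two test administrations (W).
        ct  = rng.normal(234.0, 40.0, size=28)            # Conconi test
        crt = ct + rng.normal(-1.7, 8.0, size=28)         # Conconi retest

        # In a Bland-Altman plot, diff is plotted against the pairwise mean of the two tests.
        diff = crt - ct
        bias = diff.mean()
        loa  = 1.96 * diff.std(ddof=1)                    # 95% limits of agreement

        r = np.corrcoef(ct, crt)[0, 1]                    # simple correlation between tests
        print(f"bias = {bias:.1f} W, limits of agreement = [{bias - loa:.1f}, {bias + loa:.1f}] W")
        print(f"correlation between CT and CRT: r = {r:.2f}")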

  1. Effects of Mg II and Ca II ionization on ab-initio solar chromosphere models

    NASA Technical Reports Server (NTRS)

    Rammacher, W.; Cuntz, M.

    1991-01-01

    Acoustically heated solar chromosphere models are computed considering radiation damping by (non-LTE) emission from H(-) and by Mg II and Ca II emission lines. The radiative transfer equations for the Mg II k and Ca II K emission lines are solved using the core-saturation method with complete redistribution. The Mg II k and Ca II K cooling rates are compared with the VAL model C. Several substantial improvements over the work of Ulmschneider et al. (1987) are included. It is found that the rapid temperature rises caused by the ionization of Mg II are not formed in the middle chromosphere, but occur at larger atmospheric heights. These models represent the temperature structure of the 'real' solar chromosphere much better. This result is a major precondition for the study of ab-initio models for solar flux tubes based on MHD wave propagation and also for ab-initio models for the solar transition layer.

  2. Validation of the Poisson Stochastic Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Zhuravleva, Tatiana; Marshak, Alexander

    2004-01-01

    A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - the cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it was shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

  3. WEPP: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Water Erosion Prediction Project (WEPP) model is a process-based, continuous simulation, distributed parameter, hydrologic and soil erosion prediction system. It has been developed over the past 25 years to allow for easy application to a large number of land management scenarios. Most general o...

  4. WEPP: Model use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Water Erosion Prediction Project (WEPP) model is a process-based, continuous simulation, distributed parameter, hydrologic and soil erosion prediction system. It has been developed over the past 25 years to allow for easy application to a large number of land management scenarios. Most general o...

  5. Validation of the MULCH-II code for thermal-hydraulic safety analysis of the MIT research reactor conversion to LEU

    SciTech Connect

    Ko, Y.-C.; Hu, L.-W.; Olson, Arne P.; Dunn, Floyd E.

    2008-07-15

    An in-house thermal hydraulics code was developed for the steady-state and loss of primary flow analysis of the MIT Research Reactor (MITR). This code is designated as MULti-CHannel-II or MULCH-II. The MULCH-II code is being used for the MITR LEU conversion design study. Features of the MULCH-II code include a multi-channel analysis, the capability to model the transition from forced to natural convection during a loss of primary flow transient, and the ability to calculate safety limits and limiting safety system settings for licensing applications. This paper describes the validation of the code against PLTEMP/ANL 3.0 for steady-state analysis, and against RELAP5-3D for loss of primary coolant transient analysis. Coolant temperature measurements obtained from loss of primary flow transients as part of the MITR-II startup testing were also used for validating this code. The agreement between MULCH-II and the other computer codes is satisfactory. (author)

  6. Experiments for foam model development and validation.

    SciTech Connect

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F.; Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  7. Validating Requirements for Fault Tolerant Systems Using Model Checking

    NASA Technical Reports Server (NTRS)

    Schneider, Francis; Easterbrook, Steve M.; Callahan, John R.; Holzmann, Gerard J.

    1997-01-01

    Model checking is shown to be an effective tool in validating the behavior of a fault-tolerant embedded spacecraft controller. The case study presented here shows that by judiciously abstracting away extraneous complexity, the state space of the model could be exhaustively searched, allowing critical functional requirements to be validated down to the design level. Abstracting away detail not germane to the problem of interest leaves by definition a partial specification behind. The success of this procedure shows that it is feasible to effectively validate a partial specification with this technique. Three anomalies were found in the system, one of which is an error in the detailed requirements; the other two are missing or ambiguous requirements. Because the method allows validation of partial specifications, it is also an effective methodology for maintaining fidelity between a co-evolving specification and an implementation.

  8. Long-range transport model validation studies

    SciTech Connect

    Machta, L.

    1987-01-01

    Policy decisions about the possible regulation of emissions leading to acid rain require a source-receptor relationship. This may involve emission reductions in selective geographical areas which will be more beneficial to a receptor area than other regions, or a way of deciding how much emission reduction is needed to achieve a given receptor benefit even if a general roll-back is mandated. A number of approaches were examined and rejected before a model simulation of nature's transport and deposition was chosen to formulate a source-receptor relationship. But it is recognized that any mathematical simulation of nature, however plausible, must have its predictions compared with observations. This is planned in two ways. First, by comparing predictions of deposition and air concentration of acidic materials with observations. Second, by comparing features of the internal workings of the model with reality. The writer expresses some skepticism about the ability of the latter diagnostic phase, especially, to succeed within a two- or three-year period.

  9. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Sievers, Michael; Standley, Shaun

    2012-01-01

    Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system level V&V using modeling and simulation, and to use scarce hardware testing time to validate models; the norm for thermal and structural V&V for some time. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated enabling a more complete set of test cases than possible on flight hardware. SysML simulations provide access and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated such an approach is possible.

  10. Solution Verification Linked to Model Validation, Reliability, and Confidence

    SciTech Connect

    Logan, R W; Nitta, C K

    2004-06-16

    The concepts of Verification and Validation (V&V) can be oversimplified in a succinct manner by saying that 'verification is doing things right' and 'validation is doing the right thing'. In the world of the Finite Element Method (FEM) and computational analysis, it is sometimes said that 'verification means solving the equations right' and 'validation means solving the right equations'. In other words, if one intends to give an answer to the equation '2+2=', then one must run the resulting code to assure that the answer '4' results. However, if the nature of the physics or engineering problem being addressed with this code is multiplicative rather than additive, then even though Verification may succeed (2+2=4, etc.), Validation may fail because the equations coded are not those needed to address the real world (multiplicative) problem. We have previously provided a 4-step 'ABCD' implementation for a quantitative V&V process: (A) Plan the analyses and validation testing that may be needed along the way. Assure that the code[s] chosen have sufficient documentation of software quality and Code Verification (i.e., does 2+2=4?). Perform some calibration analyses and calibration-based sensitivity studies (these are not validated sensitivities but are useful for planning purposes). Outline the data and validation analyses that will be needed to turn the calibrated model (and calibrated sensitivities) into validated quantities. (B) Solution Verification: For the system or component being modeled, quantify the uncertainty and error estimates due to spatial, temporal, and iterative discretization during solution. (C) Validation over the data domain: Perform a quantitative validation to provide confidence-bounded uncertainties on the quantity of interest over the domain of available data. (D) Predictive Adequacy: Extend the model validation process of 'C' out to the application domain of interest, which may be outside the domain of available data in one or more planes of multi-dimensional space. Part 'D' should provide the numerical information about the model and its predictive capability such that, given a requirement, an adequacy assessment can be made to determine if more validation analyses or data are needed.

  11. Validation of Orthorectified Interferometric Radar Imagery and Digital Elevation Models

    NASA Technical Reports Server (NTRS)

    Smith, Charles M.

    2004-01-01

    This work was performed under NASA's Verification and Validation (V&V) Program as an independent check of data supplied by EarthWatch, Incorporated, through the Earth Science Enterprise Scientific Data Purchase (SDP) Program. This document serves as the basis of reporting results associated with validation of orthorectified interferometric radar imagery and digital elevation models (DEM). This validation covers all datasets provided under the first campaign (Central America & Virginia Beach) plus three earlier missions (Indonesia, Red River, and Denver) for a total of 13 missions.

  12. Spatial statistical modeling of shallow landslides—Validating predictions for different landslide inventories and rainfall events

    NASA Astrophysics Data System (ADS)

    von Ruette, Jonas; Papritz, Andreas; Lehmann, Peter; Rickli, Christian; Or, Dani

    2011-10-01

    Statistical models that exploit the correlation between landslide occurrence and geomorphic properties are often used to map the spatial occurrence of shallow landslides triggered by heavy rainfalls. In many landslide susceptibility studies, the true predictive power of the statistical model remains unknown because the predictions are not validated with independent data from other events or areas. This study validates statistical susceptibility predictions with independent test data. The spatial incidence of landslides, triggered by an extreme rainfall in a study area, was modeled by logistic regression. The fitted model was then used to generate susceptibility maps for another three study areas, for which event-based landslide inventories were also available. All the study areas lie in the northern foothills of the Swiss Alps. The landslides had been triggered by heavy rainfall either in 2002 or 2005. The validation was designed such that the first validation study area shared the geomorphology and the second the triggering rainfall event with the calibration study area. For the third validation study area, both geomorphology and rainfall were different. All explanatory variables were extracted for the logistic regression analysis from high-resolution digital elevation and surface models (2.5 m grid). The model fitted to the calibration data comprised four explanatory variables: (i) slope angle (effect of gravitational driving forces), (ii) vegetation type (grassland and forest; root reinforcement), (iii) planform curvature (convergent water flow paths), and (iv) contributing area (potential supply of water). The area under the Receiver Operating Characteristic (ROC) curve (AUC) was used to quantify the predictive performance of the logistic regression model. The AUC values were computed for the susceptibility maps of the three validation study areas (validation AUC), the fitted susceptibility map of the calibration study area (apparent AUC: 0.80) and another susceptibility map obtained for the calibration study area by 20-fold cross-validation (cross-validation AUC: 0.74). The AUC values of the first and second validation study areas (0.72 and 0.69, respectively) and the cross-validation AUC matched fairly well, and all AUC values were distinctly smaller than the apparent AUC. Based on the apparent AUC, one would have clearly overrated the predictive performance for the first two validation areas. Rather surprisingly, the AUC value of the third validation study area (0.82) was larger than the apparent AUC. A large part of the third validation study area consists of gentle slopes, and the regression model correctly predicted that no landslides occur in the flat parts. This increased the predictive performance of the model considerably. The predicted susceptibility maps were further validated by summing the predicted susceptibilities for the entire validation areas and by comparing the sums with the observed number of landslides. The sums exceeded the observed counts for all the validation areas. Hence, the logistic regression model generally overestimated the risk of landslide occurrence. Obviously, a predictive model that is based on static geomorphic properties alone cannot take full account of the complex and time-dependent processes in the subsurface. However, such a model is still capable of distinguishing zones highly or less prone to shallow landslides.
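
    The calibrate-then-validate workflow described above can be sketched with synthetic grid-cell data; the predictors mirror the four explanatory variables named in the abstract, but the data, coefficients, and resulting AUC values are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)

        def synthetic_inventory(n):
            """Hypothetical grid cells: slope angle (deg), forest flag, plan curvature, log area."""
            X = np.column_stack([
                rng.uniform(0, 45, n),          # slope angle
                rng.integers(0, 2, n),          # vegetation: 1 = forest, 0 = grassland
                rng.normal(0, 1, n),            # planform curvature
                rng.normal(2, 1, n),            # log contributing area
            ])
            # Landslide probability rises with slope and contributing area, falls under forest (illustrative).
            logit = -6.0 + 0.15 * X[:, 0] - 1.0 * X[:, 1] - 0.8 * X[:, 2] + 0.5 * X[:, 3]
            y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))
            return X, y.astype(int)

        X_cal, y_cal = synthetic_inventory(2000)          # calibration event/area
        X_val, y_val = synthetic_inventory(2000)          # independent validation event/area

        model = LogisticRegression(max_iter=1000).fit(X_cal, y_cal)

        apparent_auc   = roc_auc_score(y_cal, model.predict_proba(X_cal)[:, 1])
        validation_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"apparent AUC = {apparent_auc:.2f}, validation AUC = {validation_auc:.2f}")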

  13. Angular Distribution Models for Top-of-Atmosphere Radiative Flux Estimation from the Clouds and the Earth's Radiant Energy System Instrument on the Tropical Rainfall Measuring Mission Satellite. Part II: Validation

    NASA Technical Reports Server (NTRS)

    Loeb, N. G.; Loukachine, K.; Wielicki, B. A.; Young, D. F.

    2003-01-01

    Top-of-atmosphere (TOA) radiative fluxes from the Clouds and the Earth's Radiant Energy System (CERES) are estimated from empirical angular distribution models (ADMs) that convert instantaneous radiance measurements to TOA fluxes. This paper evaluates the accuracy of CERES TOA fluxes obtained from a new set of ADMs developed for the CERES instrument onboard the Tropical Rainfall Measuring Mission (TRMM). The uncertainty in regional monthly mean reflected shortwave (SW) and emitted longwave (LW) TOA fluxes is less than 0.5 W/sq m, based on comparisons with TOA fluxes evaluated by direct integration of the measured radiances. When stratified by viewing geometry, TOA fluxes from different angles are consistent to within 2% in the SW and 0.7% (or 2 W/sq m) in the LW. In contrast, TOA fluxes based on ADMs from the Earth Radiation Budget Experiment (ERBE) applied to the same CERES radiance measurements show a 10% relative increase with viewing zenith angle in the SW and a 3.5% (9 W/sq m) decrease with viewing zenith angle in the LW. Based on multiangle CERES radiance measurements, 1° regional instantaneous TOA flux errors from the new CERES ADMs are estimated to be 10 W/sq m in the SW and 3.5 W/sq m in the LW. The errors show little or no dependence on cloud phase, cloud optical depth, and cloud infrared emissivity. An analysis of cloud radiative forcing (CRF) sensitivity to differences between ERBE and CERES TRMM ADMs, scene identification, and directional models of albedo as a function of solar zenith angle shows that ADM and clear-sky scene identification differences can lead to an 8 W/sq m root-mean-square (rms) difference in 1° daily mean SW CRF and a 4 W/sq m rms difference in LW CRF. In contrast, monthly mean SW and LW CRF differences reach 3 W/sq m. CRF is found to be relatively insensitive to differences between the ERBE and CERES TRMM directional models.

  14. Functional state modelling approach validation for yeast and bacteria cultivations

    PubMed Central

    Roeva, Olympia; Pencheva, Tania

    2014-01-01

    In this paper, the functional state modelling approach is validated for modelling the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely the primary product synthesis state, the mixed oxidative state, and the secondary product synthesis state. Parameter identification procedures for the different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This provides strong structural verification of the functional state modelling theory, not only for a set of yeast cultivations but also for a bacterial cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778
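
    A minimal sketch of parameter identification for one local model, using scipy's differential evolution as an evolutionary stand-in for the genetic algorithms used in the paper; the exponential growth model and the data below are invented placeholders, not the paper's local models.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Hypothetical biomass measurements from one functional state of a fed-batch cultivation.
        t_data = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])          # h
        x_data = np.array([0.5, 0.83, 1.35, 2.21, 3.64, 5.97])     # g/L

        def biomass(params, t):
            """Simple exponential local model x(t) = x0 * exp(mu * t) (illustrative only)."""
            x0, mu = params
            return x0 * np.exp(mu * t)

        def objective(params):
            """Sum of squared deviations between model and data."""
            return np.sum((biomass(params, t_data) - x_data) ** 2)

        # Evolutionary optimizer used here as a stand-in for a genetic algorithm.
        result = differential_evolution(objective, bounds=[(0.01, 2.0), (0.01, 1.0)], seed=4)
        x0_hat, mu_hat = result.x
        print(f"identified x0 = {x0_hat:.2f} g/L, mu = {mu_hat:.2f} 1/h, SSE = {result.fun:.4f}")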

  15. Validation of nuclear models used in space radiation shielding applications

    SciTech Connect

    Norman, Ryan B.; Blattnig, Steve R.

    2013-01-15

    A program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as these models are developed over time. In this work, simple validation metrics applicable to testing both model accuracy and consistency with experimental data are developed. The developed metrics treat experimental measurement uncertainty as an interval and are therefore applicable to cases in which epistemic uncertainty dominates the experimental data. To demonstrate the applicability of the metrics, nuclear physics models used by NASA for space radiation shielding applications are compared to an experimental database consisting of over 3600 experimental cross sections. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by examining subsets of the model parameter space.
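
    A small sketch of the interval-style metrics described above, under the assumption that a prediction lying inside the measurement interval contributes zero discrepancy and one outside contributes its distance to the nearest bound; all numbers are invented and the summaries are only loose analogues of the paper's cumulative and median metrics.

        import numpy as np

        # Hypothetical cross sections: model predictions and measurements with uncertainty intervals (mb).
        prediction  = np.array([102.0,  96.0, 110.0, 130.0,  88.0])
        measurement = np.array([100.0, 100.0, 100.0, 120.0,  95.0])
        uncertainty = np.array([  5.0,   5.0,   4.0,   6.0,   3.0])   # half-width of the interval

        lower, upper = measurement - uncertainty, measurement + uncertainty

        # Zero discrepancy inside the measurement interval, otherwise distance to the nearest bound.
        discrepancy = np.maximum(0.0, np.maximum(lower - prediction, prediction - upper))

        cumulative_metric = discrepancy.sum()        # overall accuracy view
        median_metric     = np.median(discrepancy)   # robust, per-subset view for model development
        print(f"cumulative = {cumulative_metric:.1f} mb, median = {median_metric:.1f} mb")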

  16. Validation of a rock bed thermal energy storage model

    NASA Astrophysics Data System (ADS)

    Vonfuchs, F. G.

    The development of a one-dimensional rock bed model utilizing historical models and those used by current researchers is discussed. Heat transfer coefficients and assumptions, pressure drop, rock size and void fraction, axial conductivity, and other parameters are considered. Since the solution of the model differential equations can introduce instabilities and inaccuracies in the results, a discussion of the mathematical techniques used is included. The validation process consisted of formulating a test plan, instrumenting the rock bed, performing the tests, and then comparing the results with a simulation using the same parameters as in the test. Validation was quite good and is illustrated by several figures comparing observed and predicted axial temperature distributions.

  17. Theory and Implementation of Nuclear Safety System Codes - Part II: System Code Closure Relations, Validation, and Limitations

    SciTech Connect

    Glenn A Roth; Fatih Aydogan

    2014-09-01

    This is Part II of two articles describing the details of thermal-hydraulic system codes. In this second part of the article series, the system code closure relationships (used to model thermal and mechanical non-equilibrium and the coupling of the phases) for the governing equations are discussed and evaluated. These include several thermal and hydraulic models, such as heat transfer coefficients for various flow regimes, two-phase pressure correlations, two-phase friction correlations, drag coefficients, and interfacial models between the fields. These models are often developed from experimental data. The experiment conditions should be understood to evaluate the efficacy of the closure models. Code verification and validation, including Separate Effects Tests (SETs) and Integral Effects Tests (IETs), are also assessed. It can be shown from the assessments that the test cases cover a significant section of the system code capabilities, but some of the more advanced reactor designs will push the limits of validation for the codes. Lastly, the limitations of the codes are discussed by considering next generation power plants, such as Small Modular Reactors (SMRs), analyzing not only existing nuclear power plants, but also next generation nuclear power plants. The nuclear industry is developing new, innovative reactor designs, such as Small Modular Reactors (SMRs), High-Temperature Gas-cooled Reactors (HTGRs) and others. Sub-types of these reactor designs utilize pebbles, prismatic graphite moderators, helical steam generators, innovative fuel types, and many other design features that may not be fully analyzed by current system codes. This second part completes the series on the comparison and evaluation of the selected reactor system codes by discussing the closure relations, validation, and limitations. These two articles indicate areas where the models can be improved to adequately address issues with new reactor design and development.

  18. Validating regional-scale surface energy balance models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the major challenges in developing reliable regional surface flux models is the relative paucity of scale-appropriate validation data. Direct comparisons between coarse-resolution model flux estimates and flux tower data can often be dominated by sub-pixel heterogeneity effects, making it di...

  19. Validation of 1-D transport and sawtooth models for ITER

    SciTech Connect

    Connor, J.W.; Turner, M.F.; Attenberger, S.E.; Houlberg, W.A.

    1996-12-31

    In this paper the authors describe progress on validating a number of local transport models by comparing their predictions with relevant experimental data from a range of tokamaks in the ITER profile database. This database, the testing procedure and results are discussed. In addition a model for sawtooth oscillations is used to investigate their effect in an ITER plasma with alpha-particles.

  20. Validating a Technology Enhanced Student-Centered Learning Model

    ERIC Educational Resources Information Center

    Kang, Myunghee; Hahn, Jungsun; Chung, Warren

    2015-01-01

    The Technology Enhanced Student Centered Learning (TESCL) Model in this study presents the core factors that ensure the quality of learning in a technology-supported environment. Although the model was conceptually constructed using a student-centered learning framework and drawing upon previous studies, it should be validated through real-world

  1. Validating a Technology Enhanced Student-Centered Learning Model

    ERIC Educational Resources Information Center

    Kang, Myunghee; Hahn, Jungsun; Chung, Warren

    2015-01-01

    The Technology Enhanced Student Centered Learning (TESCL) Model in this study presents the core factors that ensure the quality of learning in a technology-supported environment. Although the model was conceptually constructed using a student-centered learning framework and drawing upon previous studies, it should be validated through real-world…

  2. Making Validated Educational Models Central in Preschool Standards.

    ERIC Educational Resources Information Center

    Schweinhart, Lawrence J.

    This paper presents some ideas to preschool educators and policy makers about how to make validated educational models central in standards for preschool education and care programs that are available to all 3- and 4-year-olds. Defining an educational model as a coherent body of program practices, curriculum content, program and child, and teacher

  3. Testing the Testing: Validity of a State Growth Model

    ERIC Educational Resources Information Center

    Brown, Kim Trask

    2008-01-01

    Possible threats to the validity of North Carolina's accountability model used to predict academic growth were investigated in two ways: the state's regression equations were replicated but updated to utilize current testing data and not that from years past as in the state's current model; and the updated equations were expanded to include…

  4. Institutional Effectiveness: A Model for Planning, Assessment & Validation.

    ERIC Educational Resources Information Center

    Truckee Meadows Community Coll., Sparks, NV.

    The report presents Truckee Meadows Community College's (Nevada) model for assessing institutional effectiveness and validating the College's mission and vision, and the strategic plan for carrying out the institutional effectiveness model. It also outlines strategic goals for the years 1999-2001. From the system-wide directive that education

  5. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
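
    The estimation principle described above, making the smallest requirement-compliance margin as large as possible, can be sketched on a toy model with two invented requirements; none of this reflects the F-16 application in the paper.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical validation data set: step-response samples and admissible error limits.
        t = np.linspace(0.0, 5.0, 50)
        measured = 1.0 - np.exp(-1.3 * t)          # stand-in for test data
        error_limit = 0.05                          # admissible time-domain prediction error

        def margins(params):
            """Requirement-compliance margins; positive means the requirement is met."""
            gain, tau = params
            tau = max(tau, 1e-6)                    # keep the time constant physical
            predicted = gain * (1.0 - np.exp(-t / tau))
            time_domain_margin = error_limit - np.max(np.abs(predicted - measured))
            steady_state_margin = 0.02 - abs(predicted[-1] - measured[-1])
            return np.array([time_domain_margin, steady_state_margin])

        # Estimate parameters by making the smallest margin as large as possible.
        result = minimize(lambda p: -np.min(margins(p)), x0=[0.8, 1.0], method="Nelder-Mead")
        gain_hat, tau_hat = result.x
        print(f"gain = {gain_hat:.3f}, tau = {tau_hat:.3f}, "
              f"smallest margin = {np.min(margins(result.x)):.4f}")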

  6. Validating Finite Element Models of Assembled Shell Structures

    NASA Technical Reports Server (NTRS)

    Hoff, Claus

    2006-01-01

    The validation of finite element models of assembled shell elements is presented. The topics include: 1) Problems with membrane rotations in assembled shell models; 2) Penalty stiffness for membrane rotations; 3) Physical stiffness for membrane rotations using shell elements with 6 dof per node; and 4) Connections avoiding rotations.

  7. Validation of a terrestrial food chain model

    SciTech Connect

    Travis, C.C.; Blaylock, B.P.

    1992-04-01

    An increasingly important topic in risk assessment is the estimation of human exposure to environmental pollutants through pathways other than inhalation. The Environmental Protection Agency (EPA) has recently developed a computerized methodology (EPA, 1990) to estimate indirect exposure to toxic pollutants from Municipal Waste Combustor emissions. This methodology estimates health risks from exposure to toxic pollutants from the terrestrial food chain (TFC), soil ingestion, drinking water ingestion, fish ingestion, and dermal absorption via soil and water. Of these, one of the most difficult to estimate is exposure through the food chain. This paper estimates the accuracy of the EPA methodology for estimating food chain contamination. To our knowledge, no data exist on measured concentrations of pollutants in food grown around Municipal Waste Incinerators, and few field-scale studies have been performed on the uptake of pollutants in the food chain. Therefore, to evaluate the EPA methodology, we compare actual measurements of background contaminant levels in food with estimates made using EPA's computerized methodology. Background levels of contaminants in air, water, and soil were used as input to the EPA food chain model to predict background levels of contaminants in food. These predicted values were then compared with the measured background contaminant levels. Comparisons were performed for dioxin, pentachlorophenol, polychlorinated biphenyls, benzene, benzo(a)pyrene, mercury, and lead.

  8. Predicting the ungauged basin: Model validation and realism assessment

    NASA Astrophysics Data System (ADS)

    van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert

    2015-10-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this paper we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. This paper does not present a generic approach that can be transferred to other ungauged catchments, but it aims to show how clever model design and alternative data acquisition can result in a valuable hydrological model for an ungauged catchment.

  9. Spectral modeling of Type II SNe

    NASA Astrophysics Data System (ADS)

    Dessart, Luc

    2015-08-01

    The red supergiant phase represents the final stage of evolution in the life of moderate-mass (8-25 Msun) massive stars. Hidden from view, the core considerably changes its structure, progressing through the advanced stages of nuclear burning, and eventually becomes degenerate. Upon reaching the Chandrasekhar mass, this Fe or ONeMg core collapses, leading to the formation of a proto-neutron star. A type II supernova results if the shock that forms at core bounce eventually wins over the envelope accretion and reaches the progenitor surface. The electromagnetic display of such core-collapse SNe starts with this shock breakout and persists for months as the ejecta releases the energy deposited initially by the shock or continuously through radioactive decay. Over a timescale of weeks to months, the originally optically-thick ejecta thins out and turns nebular. SN radiation contains a wealth of information about the explosion physics (energy, explosive nucleosynthesis) and the progenitor properties (structure and composition). Polarised radiation also offers signatures that can help constrain the morphology of the ejecta. In this talk, I will review the current status of type II SN spectral modelling and emphasise that a proper solution requires a time-dependent treatment of the radiative transfer problem. I will discuss the wealth of information that can be gleaned from spectra as well as light curves, from both early times (photospheric phase) and late times (nebular phase). I will discuss the diversity of Type II SNe properties and how they are related to the diversity of the red supergiant stars from which they originate. SN radiation offers an alternate means of constraining the properties of red supergiant stars. To wrap up, I will illustrate how SNe II-P can also be used as probes, for example to constrain the metallicity of their environment.

  10. On the development and validation of QSAR models.

    PubMed

    Gramatica, Paola

    2013-01-01

    The fundamental and more critical steps that are necessary for the development and validation of QSAR models are presented in this chapter as best practices in the field. These procedures are discussed in the context of predictive QSAR modelling that is focused on achieving models of the highest statistical quality and with external predictive power. The most important and most used statistical parameters needed to verify the real performances of QSAR models (of both linear regression and classification) are presented. Special emphasis is placed on the validation of models, both internally and externally, as well as on the need to define model applicability domains, which should be done when models are employed for the prediction of new external compounds. PMID:23086855
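
    As an illustration of the external-validation statistics discussed above, the sketch below computes a root-mean-square error and an external Q2 for a hypothetical test set; the numbers, and the choice of the Q2_F1 form of the external criterion, are illustrative assumptions rather than values taken from the chapter.

```python
import numpy as np

# Hypothetical observed/predicted activities for an external test set,
# plus the training-set mean needed for the external Q^2 (Q^2_F1) statistic.
y_train_mean = 5.2
y_ext_obs = np.array([4.8, 5.6, 6.1, 5.0, 5.9, 4.4])
y_ext_pred = np.array([4.9, 5.4, 6.3, 5.2, 5.7, 4.6])

rmse_ext = np.sqrt(np.mean((y_ext_obs - y_ext_pred) ** 2))
press = np.sum((y_ext_obs - y_ext_pred) ** 2)          # predictive residual sum of squares
ss_tot = np.sum((y_ext_obs - y_train_mean) ** 2)       # deviation from the training-set mean
q2_f1 = 1.0 - press / ss_tot

print(f"RMSE_ext = {rmse_ext:.3f}, Q2_F1 = {q2_f1:.3f}")
```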

  11. Control Oriented Modeling and Validation of Aeroservoelastic Systems

    NASA Technical Reports Server (NTRS)

    Crowder, Marianne; deCallafon, Raymond (Principal Investigator)

    2002-01-01

    Lightweight aircraft design emphasizes the reduction of structural weight to maximize aircraft efficiency and agility at the cost of increasing the likelihood of structural dynamic instabilities. To ensure flight safety, extensive flight testing and active structural servo control strategies are required to explore and expand the boundary of the flight envelope. Aeroservoelastic (ASE) models can provide online flight monitoring of dynamic instabilities to reduce flight time testing and increase flight safety. The success of ASE models is determined by the ability to take into account varying flight conditions and the possibility to perform flight monitoring under the presence of active structural servo control strategies. In this continued study, these aspects are addressed by developing specific methodologies and algorithms for control relevant robust identification and model validation of aeroservoelastic structures. The closed-loop model robust identification and model validation are based on a fractional model approach where the model uncertainties are characterized in a closed-loop relevant way.

  12. A prediction model for ocular damage - Experimental validation.

    PubMed

    Heussner, Nico; Vagos, Márcia; Spitzer, Martin S.; Stork, Wilhelm

    2015-08-01

    With the increasing number of laser applications in medicine and technology, accidental as well as intentional exposure of the human eye to laser sources has become a major concern. Therefore, a prediction model for ocular damage (PMOD) is presented within this work and validated for long-term exposure. This model is a combination of a raytracing model with a thermodynamical model of the human eye and an application which determines the thermal damage by the implementation of the Arrhenius integral. The model is based on our earlier work and is here validated against temperature measurements taken with porcine eye samples. For this validation, three different powers were used: 50 mW, 100 mW and 200 mW with a spot size of 1.9 mm. Also, the measurements were taken with two different sensing systems, an infrared camera and a fibre optic probe placed within the tissue. The temperatures were measured up to 60 s and then compared against simulations. The measured temperatures were found to be in good agreement with the values predicted by the PMOD model. To our best knowledge, this is the first model which is validated for both short-term and long-term irradiations in terms of temperature and thus demonstrates that temperatures can be accurately predicted within the thermal damage regime. PMID:26267496
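
    The thermal-damage step described above rests on the Arrhenius integral, Omega = int A*exp(-Ea/(R*T(t))) dt, with damage conventionally assumed once Omega reaches 1. The sketch below evaluates that integral for a placeholder temperature history; the coefficients are illustrative values of the Henriques type, not the ones used in the PMOD.

```python
import numpy as np

# Illustrative Arrhenius coefficients (Henriques-type values, NOT the PMOD parameters)
A = 3.1e98            # frequency factor [1/s]
Ea = 6.28e5           # activation energy [J/mol]
R = 8.314             # gas constant [J/(mol K)]

# Placeholder temperature history T(t) in Kelvin over a 60 s exposure
t = np.linspace(0.0, 60.0, 6001)
T = 310.0 + 15.0 * (1.0 - np.exp(-t / 5.0))

# Arrhenius damage integral: Omega = int A * exp(-Ea / (R T(t))) dt
omega = np.trapz(A * np.exp(-Ea / (R * T)), t)
print("Omega =", omega,
      "-> damage threshold reached" if omega >= 1.0 else "-> below threshold")
```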

  13. Validation of the Serpent 2 code on TRIGA Mark II benchmark experiments.

    PubMed

    Ćalić, Dušan; Žerovnik, Gašper; Trkov, Andrej; Snoj, Luka

    2016-01-01

    The main aim of this paper is the development and validation of a 3D computational model of the TRIGA research reactor using the Serpent 2 code. The calculated parameters were compared to the experimental results and to calculations performed with the MCNP code. The results show that the calculated normalized reaction rates and flux distribution within the core are in good agreement with MCNP and experiment, while in the reflector the flux distribution differs by up to 3% from the measurements. PMID:26516989
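
    Comparisons of this kind are usually summarized as calculated-to-experimental (C/E) ratios for each measured position. The sketch below shows that bookkeeping for made-up reaction-rate values; it does not use the actual benchmark data.

```python
import numpy as np

# Hypothetical normalized reaction rates at a few core/reflector positions:
# experimental values and values calculated with two codes (all made-up numbers).
experiment = np.array([1.00, 0.92, 0.81, 0.40, 0.22])
serpent2   = np.array([1.00, 0.93, 0.80, 0.41, 0.225])
mcnp       = np.array([1.00, 0.92, 0.82, 0.40, 0.221])

for name, calc in [("Serpent 2", serpent2), ("MCNP", mcnp)]:
    ce = calc / experiment                         # C/E ratio per position
    print(f"{name}: C/E = {np.round(ce, 3)}, "
          f"max deviation = {100 * np.max(np.abs(ce - 1)):.1f}%")
```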

  14. The Validation of Climate Models: The Development of Essential Practice

    NASA Astrophysics Data System (ADS)

    Rood, R. B.

    2011-12-01

    It is possible from both a scientific and philosophical perspective to state that climate models cannot be validated. However, with the realization that the scientific investigation of climate change is as much a subject of politics as of science, maintaining this formal notion of "validation" has significant consequences. For example, it relegates the bulk of work of many climate scientists to an exercise of model evaluation that can be construed as ill-posed. Even within the science community this motivates criticism of climate modeling as an exercise of weak scientific practice. Stepping outside of the science community, statements that validation is impossible are used in political arguments to discredit the scientific investigation of climate, to maintain doubt about projections of climate change, and hence, to prohibit the development of public policy to regulate the emissions of greenhouse gases. With the acceptance of the impossibility of validation, scientists often state that the credibility of models can be established through an evaluation process. A robust evaluation process leads to the quantitative description of the modeling system against a standard set of measures. If this process is standardized as institutional practice, then this provides a measure of model performance from one modeling release to the next. It is argued, here, that such a robust and standardized evaluation of climate models can be structured and quantified as "validation." Arguments about the nuanced meaning of validation and evaluation are a subject about which the climate modeling community needs to develop a standard. It does injustice to a body of science-based knowledge to maintain that validation is "impossible." Rather than following such a premise, which immediately devalues the knowledge base, it is more useful to develop a systematic, standardized approach to robust, appropriate validation. This stands to represent the complexity of the Earth's climate and its investigation. This serves not only the scientific method, but the communication of the results of that scientific investigation to other scientists and to those with a stake in those scientific results. It sets a standard, which is essential practice for simulation science with societal ramifications.

  15. Analysis of Experimental Data for High Burnup PWR Spent Fuel Isotopic Validation - Vandellos II Reactor

    SciTech Connect

    Ilas, Germina; Gauld, Ian C

    2011-01-01

    This report is one of several recent NUREG/CR reports documenting benchmark-quality radiochemical assay data and the use of the data to validate computer code predictions of isotopic composition for spent nuclear fuel and to establish the uncertainty and bias associated with code predictions. The experimental data analyzed in the current report were acquired from a high-burnup fuel program coordinated by Spanish organizations. The measurements included extensive actinide and fission product data of importance to spent fuel safety applications, including burnup credit, decay heat, and radiation source terms. Six unique spent fuel samples from three uranium oxide fuel rods were analyzed. The fuel rods had a 4.5 wt % ²³⁵U initial enrichment and were irradiated in the Vandellos II pressurized water reactor operated in Spain. The burnups of the fuel samples range from 42 to 78 GWd/MTU. The measurements were used to validate the two-dimensional depletion sequence TRITON in the SCALE computer code system.

  16. The GEMMA Crustal Model: First Validation and Data Distribution

    NASA Astrophysics Data System (ADS)

    Sampietro, D.; Reguzzoni, M.; Negretti, M.

    2013-12-01

    In the GEMMA project, funded by ESA-STSE and ASI, a new crustal model constrained by GOCE gravity field observations has been developed. This model has a resolution of 0.5° × 0.5° and is composed of seven layers describing the geometry and density of oceans, ice sheets, upper, medium and lower sediments, crystalline crust and upper mantle. In the present work the GEMMA model is validated against other global and regional models, showing a good consistency where validation data are reliable. Apart from that, the development of a WPS (Web Processing Service) for the distribution of the GEMMA model is also presented. The service gives the possibility to download, interpolate and display the whole crustal model, providing for each layer the depth of its upper and lower boundary, its density as well as its gravitational effect in terms of the second radial derivative of the gravitational potential at GOCE altitude.

  17. An Examination of the Validity of the Family Affluence Scale II (FAS II) in a General Adolescent Population of Canada

    ERIC Educational Resources Information Center

    Boudreau, Brock; Poulin, Christiane

    2009-01-01

    This study examined the performance of the FAS II in a general population of 17,545 students in grades 7, 9, 10 and 12 in the Atlantic provinces of Canada. The FAS II was assessed against two other measures of socioeconomic status: mother's highest level of education and family structure. Our study found that the FAS II reduces the likelihood of

  18. Human surrogate models of neuropathic pain: validity and limitations.

    PubMed

    Binder, Andreas

    2016-02-01

    Human surrogate models of neuropathic pain in healthy subjects are used to study symptoms, signs, and the hypothesized underlying mechanisms. Although different models are available and different spontaneous and evoked symptoms and signs are inducible, two key questions need to be answered: are human surrogate models conceptually valid, ie, do they share the sensory phenotype of neuropathic pain states, and are they sufficiently reliable to allow consistent translational research? PMID:26785155

  19. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation study is to describe real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge for ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  20. A model for the separation of cloud and aerosol in SAGE II occultation data

    NASA Technical Reports Server (NTRS)

    Kent, G. S.; Winker, D. M.; Osborn, M. T.; Skeens, K. M.

    1993-01-01

    The Stratospheric Aerosol and Gas Experiment (SAGE) II satellite experiment measures the extinction due to aerosols and thin cloud, at wavelengths of 0.525 and 1.02 micrometers, down to an altitude of 6 km. The wavelength dependence of the extinction due to aerosols differs from that of the extinction due to cloud and is used as the basis of a model for separating these two components. The model is presented and its validation using airborne lidar data, obtained coincident with SAGE II observations, is described. This comparison shows that smaller SAGE II cloud extinction values correspond to the presence of subvisible cirrus cloud in the lidar record. Examples of aerosol and cloud data products obtained using this model to interpret SAGE II upper tropospheric and lower stratospheric data are also shown.
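
    The separation rests on the different wavelength dependence of aerosol and cloud extinction between 0.525 and 1.02 micrometers: cloud particles are large and extinguish both wavelengths almost equally, while aerosols extinguish the shorter wavelength more strongly. A minimal sketch of such a ratio-based classification is given below; the threshold and extinction values are placeholders, not those of the Kent et al. model.

```python
import numpy as np

def classify(ext_525, ext_1020, ratio_threshold=1.4):
    """Flag each sample as 'cloud' or 'aerosol' from the 0.525/1.02 um extinction ratio.

    Large (cloud) particles extinguish both wavelengths almost equally (ratio near 1),
    while small aerosol particles extinguish 0.525 um more strongly (ratio well above 1).
    The threshold here is a placeholder for illustration only.
    """
    ratio = np.asarray(ext_525) / np.asarray(ext_1020)
    return np.where(ratio < ratio_threshold, "cloud", "aerosol")

# Hypothetical extinction values (1/km) at a few altitudes
ext_525 = [2.1e-3, 1.5e-3, 8.0e-4, 9.5e-4]
ext_1020 = [1.9e-3, 1.4e-3, 3.0e-4, 3.2e-4]
print(classify(ext_525, ext_1020))
```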

  1. Using the split Hopkinson pressure bar to validate material models

    PubMed Central

    Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian

    2014-01-01

    This paper gives a discussion of the use of the split-Hopkinson bar with particular reference to the requirements of materials modelling at QinetiQ. This is to deploy validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The split Hopkinson pressure bar (SHPB) is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock wave loading. This means understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals, comparison with an output stress versus strain curve is not sufficient, as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from the deployed instrumentation, including the raw strain gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer-Chree oscillations and their effect on the specimen and recognize that this is itself also a valuable validation test of the material model. PMID:25071238
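
    For reference, the classical SHPB analysis that the authors argue is insufficient for non-metals reduces the bar strain-gauge signals to specimen stress, strain rate, and strain under a stress-equilibrium assumption. The sketch below shows that one-wave reduction with placeholder bar properties and synthetic gauge signals; it is not the comparison method proposed in the paper.

```python
import numpy as np

# Placeholder bar/specimen properties and synthetic gauge signals
E_bar = 200e9                     # bar Young's modulus [Pa]
c0 = 5000.0                       # bar wave speed [m/s]
A_bar, A_spec = 2.0e-4, 1.0e-4    # cross-sectional areas [m^2]
L_spec = 5.0e-3                   # specimen length [m]

t = np.linspace(0.0, 2.0e-4, 2001)
eps_reflected = -1.0e-3 * np.sin(np.pi * t / t[-1])      # reflected pulse, input bar gauge
eps_transmitted = 0.8e-3 * np.sin(np.pi * t / t[-1])     # transmitted pulse, output bar gauge

# Classical (one-wave) SHPB relations, which assume stress equilibrium in the specimen
stress = E_bar * (A_bar / A_spec) * eps_transmitted            # specimen stress
strain_rate = -2.0 * c0 / L_spec * eps_reflected               # specimen strain rate
strain = np.concatenate(
    ([0.0], np.cumsum(0.5 * (strain_rate[1:] + strain_rate[:-1]) * np.diff(t))))

print("peak stress [MPa]:", stress.max() / 1e6, "final strain:", strain[-1])
```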

  2. Modeling and Validation of Microwave Ablations with Internal Vaporization

    PubMed Central

    Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L.

    2014-01-01

    Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this work, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10 and 20 mm away from the heating zone of the microwave antenna in homogenized ex vivo bovine liver setup. Cross-sectional area of water vapor transport was validated through intra-procedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard Index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with a Jaccard Index of 0.27, 0.49, 0.61, 0.67 and 0.69 at 1, 2, 3, 4, and 5 minutes. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481
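
    The Jaccard index used above is simply the intersection-over-union of two binary regions, here the CT iso-density contour and the simulated vapor-concentration contour. A minimal sketch with synthetic circular ablation zones (the geometry is a placeholder, not the study's data):

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Intersection-over-union of two boolean masks (1.0 = perfect overlap)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Synthetic circular "ablation zones" on a 100 x 100 grid, standing in for the
# CT iso-density contour and the simulated vapor-concentration contour
y, x = np.mgrid[0:100, 0:100]
ct_zone = (x - 50) ** 2 + (y - 50) ** 2 < 20 ** 2
model_zone = (x - 53) ** 2 + (y - 50) ** 2 < 22 ** 2
print(f"Jaccard index = {jaccard_index(ct_zone, model_zone):.2f}")
```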

  3. Modeling and validation of microwave ablations with internal vaporization.

    PubMed

    Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L

    2015-02-01

    Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this paper, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10, and 20 mm away from the heating zone of the microwave antenna in homogenized ex vivo bovine liver setup. Cross-sectional area of water vapor transport was validated through intraprocedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with a Jaccard index of 0.27, 0.49, 0.61, 0.67, and 0.69 at 1, 2, 3, 4, and 5 min, respectively. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481

  4. Experimental Validation of a Thermoelastic Model for SMA Hybrid Composites

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    2001-01-01

    This study presents results from experimental validation of a recently developed model for predicting the thermomechanical behavior of shape memory alloy hybrid composite (SMAHC) structures, composite structures with an embedded SMA constituent. The model captures the material nonlinearity of the material system with temperature and is capable of modeling constrained, restrained, or free recovery behavior from experimental measurement of fundamental engineering properties. A brief description of the model and analysis procedures is given, followed by an overview of a parallel effort to fabricate and characterize the material system of SMAHC specimens. Static and dynamic experimental configurations for the SMAHC specimens are described and experimental results for thermal post-buckling and random response are presented. Excellent agreement is achieved between the measured and predicted results, fully validating the theoretical model for constrained recovery behavior of SMAHC structures.

  5. Progress in Geant4 Electromagnetic Physics Modelling and Validation

    NASA Astrophysics Data System (ADS)

    Apostolakis, J.; Asai, M.; Bagulya, A.; Brown, J. M. C.; Burkhardt, H.; Chikuma, N.; Cortes-Giraldo, M. A.; Elles, S.; Grichine, V.; Guatelli, S.; Incerti, S.; Ivanchenko, V. N.; Jacquemier, J.; Kadri, O.; Maire, M.; Pandola, L.; Sawkey, D.; Toshito, T.; Urban, L.; Yamashita, T.

    2015-12-01

    In this work we report on recent improvements in the electromagnetic (EM) physics models of Geant4 and new validations of EM physics. Improvements have been made in models of the photoelectric effect, Compton scattering, gamma conversion to electron and muon pairs, fluctuations of energy loss, multiple scattering, synchrotron radiation, and high energy positron annihilation. The results of these developments are included in the new Geant4 version 10.1 and in patches to previous versions 9.6 and 10.0 that are planned to be used for production for run-2 at LHC. The Geant4 validation suite for EM physics has been extended and new validation results are shown in this work. In particular, the effect of gamma-nuclear interactions on EM shower shape at LHC energies is discussed.

  6. Validation of Hydrological Models Using Stable Isotope Tracers.

    NASA Astrophysics Data System (ADS)

    Stadnyk, T. A.; Kouwen, N.; Edwards, T.

    2004-05-01

    The delineation of source areas for groundwater recharge is the first step in protecting groundwater resources as a source of water for human consumption and ecological preservation. To accomplish this task, a thorough understanding of water pathways from precipitation to streamflow is required. The rainfall-runoff process can be modelled using hydrological models, in which conservative tracers can be incorporated and used to disaggregate streamflow into its various origins and pathways. The measurement of naturally occurring isotopes in streamflow can then provide a relatively simple and inexpensive validation tool by verifying that flow paths and residence times are being correctly modelled. The objective of this research is to validate flowpaths in hydrological models by comparing modelled conservative tracers to measured isotopic data, where available. A tracer module has been integrated with the WATFLOOD model, a fully distributed, physically based, meso-scale hydrologic model for watersheds having response times larger than one hour. Conservative tracers are used to track water through the model by quantifying and segregating the various contributions to the total streamflow. Groundwater flow separation is accomplished using simplified storage routing of groundwater through the subsurface and into the stream. A specified concentration of tracer is added to the groundwater at its origin and, upon reaching the stream, a mass balance is performed to determine the concentration of tracer in the stream, allowing for a separation of groundwater from streamflow. Other flow tracers have also been modelled, including ones for surface water, interflow, flows from different landcovers, and flows from different sub-basins. Validation of the WATFLOOD model's flowpaths will be made using the flow separation tracers and measured isotope data from the lower Liard River Basin near Fort Simpson, Northwest Territories. Examples of flow separations using additional tracers will be presented for the Grand River watershed, where isotope data are not yet available for validation purposes, but other baseflow separation techniques have been applied and can be used for comparison.
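
    The tracer mass balance described above reduces, for two flow components, to the standard hydrograph-separation formula. The sketch below applies it with hypothetical isotope concentrations and streamflows; the values are illustrative and are not WATFLOOD output.

```python
import numpy as np

def two_component_separation(q_stream, c_stream, c_groundwater, c_event):
    """Groundwater contribution to streamflow from a conservative tracer.

    Mass balance: Q * C_stream = Q_gw * C_gw + (Q - Q_gw) * C_event,
    solved for the groundwater fraction and clipped to [0, 1].
    """
    f_gw = (c_stream - c_event) / (c_groundwater - c_event)
    return np.clip(f_gw, 0.0, 1.0) * q_stream

# Hypothetical delta-18O values (permil) and a daily streamflow series (m^3/s)
q = np.array([12.0, 18.0, 35.0, 22.0, 15.0])
c_stream = np.array([-11.2, -10.1, -8.9, -9.8, -10.8])
q_gw = two_component_separation(q, c_stream, c_groundwater=-11.5, c_event=-7.0)
print("groundwater contribution [m^3/s]:", np.round(q_gw, 1))
```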

  7. Pharmacophore modeling studies of type I and type II kinase inhibitors of Tie2.

    PubMed

    Xie, Qing-Qing; Xie, Huan-Zhang; Ren, Ji-Xia; Li, Lin-Li; Yang, Sheng-Yong

    2009-02-01

    In this study, chemical feature based pharmacophore models of type I and type II kinase inhibitors of Tie2 have been developed with the aid of the HipHop and HypoRefine modules within the Catalyst program package. The best HipHop pharmacophore model Hypo1_I for type I kinase inhibitors contains one hydrogen-bond acceptor, one hydrogen-bond donor, one general hydrophobic, one hydrophobic aromatic, and one ring aromatic feature. The best HypoRefine model Hypo1_II for type II kinase inhibitors, which was characterized by the best correlation coefficient (0.976032) and the lowest RMSD (0.74204), consists of two hydrogen-bond donors, one hydrophobic aromatic, and two general hydrophobic features, as well as two excluded volumes. These pharmacophore models have been validated by using either or both test set and cross-validation methods, which shows that both Hypo1_I and Hypo1_II have good predictive ability. The space arrangements of the pharmacophore features in Hypo1_II are consistent with the locations of the three portions making up a typical type II kinase inhibitor, namely, the portion occupying the ATP binding region (ATP-binding-region portion, AP), that occupying the hydrophobic region (hydrophobic-region portion, HP), and that linking AP and HP (bridge portion, BP). Our study also reveals that the ATP-binding-region portion of type II kinase inhibitors plays an important role in their bioactivity. Structural modifications of this portion should be helpful to further improve the inhibitory potency of type II kinase inhibitors. PMID:19138543

  8. Radiative transfer model validations during the First ISLSCP Field Experiment

    NASA Technical Reports Server (NTRS)

    Frouin, Robert; Breon, Francois-Marie; Gautier, Catherine

    1990-01-01

    Two simple radiative transfer models, the 5S model based on Tanre et al. (1985, 1986) and the wide-band model of Morcrette (1984), are validated by comparing their outputs with concomitant radiosonde, aerosol turbidity, and radiation measurements and sky photographs obtained during the First ISLSCP Field Experiment. Results show that the 5S model overestimates the short-wave irradiance by 13.2 W/sq m, whereas the Morcrette model underestimates the long-wave irradiance by 7.4 W/sq m.

  9. Criteria for Validating Mouse Models of Psychiatric Diseases

    PubMed Central

    Chadman, Kathryn K.; Yang, Mu; Crawley, Jacqueline N.

    2010-01-01

    Animal models of human diseases are in widespread use for biomedical research. Mouse models with a mutation in a single gene or multiple genes are excellent research tools for understanding the role of a specific gene in the etiology of a human genetic disease. Ideally, the mouse phenotypes will recapitulate the human phenotypes exactly. However, exact matches are rare, particularly in mouse models of neuropsychiatric disorders. This article summarizes the current strategies for optimizing the validity of a mouse model of a human brain dysfunction. We address the common question raised by molecular geneticists and clinical researchers in psychiatry, what is a good enough mouse model? PMID:18484083

  10. Climate Model Validation Using Spectrally Resolved Shortwave Radiation Measurements

    NASA Astrophysics Data System (ADS)

    Roberts, Y.; Taylor, P. C.; Lukashin, C.; Feldman, D.; Pilewskie, P.; Collins, W.

    2013-12-01

    The climate science community has made significant strides in the development and improvement of Global Climate Models (GCMs) to predict how the Earth's climate system will change over the next several decades. It is crucial to evaluate how well these models reproduce observed climate variability using strict validation techniques to assist the climate modeling community with improving GCM prediction. The ability of climate models to simulate Earth's present-day climate is an initial evaluation of their ability to predict future changes in climate. Models are evaluated in several ways, including model intercomparison projects and comparing model simulations of physical variables with observations of those variables. We are developing new methods for rigorous climate model validation and physical attribution of the cause of model errors using existing, direct measurements of hyperspectral shortwave reflectance. We have also developed a SCIAMACHY-based (Scanning Imaging Absorption Spectrometer for Atmospheric Cartography) hyperspectral shortwave climate validation product to demonstrate using the product to validate GCMs. We are also investigating the information content added by using multispectral and hyperspectral data to study climate variability and validate climate models. The goal is to determine if it is necessary to use data with continuous spectral sampling across the shortwave spectral range, or if it is sufficient to use a subset of carefully selected spectral bands (e.g. MODIS-like data) to study climate trends and evaluate climate model performance. We are carrying out this activity by comparing the information content within broadband, multispectral (discrete-band sampling), and hyperspectral (high spectral resolution with continuous spectral sampling) data sets. Changes in climate-relevant atmospheric and surface variables impact the spectral, spatial, and temporal variability of Earth-reflected solar radiation (0.3-2.5 μm) through spectrally dependent scattering and absorption processes. Previous studies have demonstrated that highly accurate, hyperspectral (spectrally contiguous and overlapping) measurements of shortwave reflectance are important for monitoring climate variability from space. We are continuing to work to demonstrate that highly accurate, high information content hyperspectral shortwave measurements can be used to detect changes in climate, identify climate variance drivers, and validate GCMs.

  11. Validation of a 3-D hemispheric nested air pollution model

    NASA Astrophysics Data System (ADS)

    Frohn, L. M.; Christensen, J. H.; Brandt, J.; Geels, C.; Hansen, K. M.

    2003-07-01

    Several air pollution transport models have been developed at the National Environmental Research Institute in Denmark over the last decade (DREAM, DEHM, ACDEP and DEOM). A new 3-D nested Eulerian transport-chemistry model, the REGIonal high resolutioN Air pollution model (REGINA), is based on modules and parameterisations from these models as well as new methods. The model covers the majority of the Northern Hemisphere with currently one nest implemented. The horizontal resolution in the mother domain is 150 km × 150 km, and the nesting factor is three. A chemical scheme (originally 51 species) has been extended with a detailed description of the ammonia chemistry and implemented in the model. The mesoscale numerical weather prediction model MM5v2 is used as the meteorological driver for the model. The concentrations of air pollutants, such as sulphur and nitrogen in various forms, have been calculated, applying zero nesting and one nest. The model setup is currently being validated by comparing calculated concentrations to measurements from approximately 100 stations included in the European Monitoring and Evaluation Programme (EMEP). The present paper describes the physical processes and parameterisations of the model together with the modifications of the chemical scheme. Validation of the model calculations by comparison to EMEP measurements for a summer and a winter month is shown and discussed. Furthermore, results from a sensitivity study of the model performance with respect to resolution of the emission and meteorology input data are presented. Finally, the future prospects of the model are discussed. The overall validation shows that the model performs well with respect to correlation for both monthly and daily mean values.
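
    The station-by-station comparison summarized above typically reduces to a few scalar scores per station, such as bias, root-mean-square error, and the correlation of daily or monthly means. A minimal sketch with made-up concentrations at one station:

```python
import numpy as np

# Hypothetical daily-mean concentrations (e.g. ug S/m^3) at one monitoring station
observed = np.array([1.2, 0.8, 1.5, 2.1, 1.9, 0.7, 1.1, 1.6])
modelled = np.array([1.0, 0.9, 1.4, 1.8, 2.2, 0.8, 1.3, 1.5])

bias = np.mean(modelled - observed)                    # mean model-minus-observation error
rmse = np.sqrt(np.mean((modelled - observed) ** 2))    # root-mean-square error
corr = np.corrcoef(observed, modelled)[0, 1]           # correlation of daily means
print(f"bias = {bias:+.2f}, RMSE = {rmse:.2f}, r = {corr:.2f}")
```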

  12. A Model for Investigating Predictive Validity at Highly Selective Institutions.

    ERIC Educational Resources Information Center

    Gross, Alan L.; And Others

    A statistical model for investigating predictive validity at highly selective institutions is described. When the selection ratio is small, one must typically deal with a data set containing relatively large amounts of missing data on both criterion and predictor variables. Standard statistical approaches are based on the strong assumption that

  13. Hydrologic and water quality models: Key calibration and validation topics

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As a continuation of efforts to provide a common background and platform for accordant development of calibration and validation (C/V) engineering practices, ASABE members worked to determine critical topics related to model C/V, perform a synthesis of the Moriasi et al. (2012) special collection of...

  14. Hydrologic and water quality models: Use, calibration, and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper introduces a special collection of 22 research articles that present and discuss calibration and validation concepts in detail for hydrologic and water quality models by their developers and presents a broad framework for developing the American Society of Agricultural and Biological Engi...

  15. Validating soil phosphorus routines in the SWAT model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phosphorus transfer from agricultural soils to surface waters is an important environmental issue. Commonly used models like SWAT have not always been updated to reflect improved understanding of soil P transformations and transfer to runoff. Our objective was to validate the ability of the P routin...

  16. Validation of a tuber blight (Phytophthora infestans) prediction model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  17. MODELS FOR SUBMARINE OUTFALL - VALIDATION AND PREDICTION UNCERTAINTIES

    EPA Science Inventory

    This address reports on some efforts to verify and validate dilution models, including those found in Visual Plumes. This is done in the context of problem experience: a range of problems, including different pollutants such as bacteria; scales, including near-field and far-field...

  18. ID Model Construction and Validation: A Multiple Intelligences Case

    ERIC Educational Resources Information Center

    Tracey, Monica W.; Richey, Rita C.

    2007-01-01

    This is a report of a developmental research study that aimed to construct and validate an instructional design (ID) model that incorporates the theory and practice of multiple intelligences (MI). The study consisted of three phases. In phase one, the theoretical foundations of multiple intelligences and ID were examined to guide the development

  19. Linear Model to Assess the Scale's Validity of a Test

    ERIC Educational Resources Information Center

    Tristan, Agustin; Vidal, Rafael

    2007-01-01

    Wright and Stone proposed three features to assess the quality of the distribution of the items' difficulties in a test, on the so-called "most probable response map": line, stack and gap. Once a line is accepted as a design model for a test, gaps and stacks are practically eliminated, producing evidence of the "scale validity" of the test.

  20. Validating Work Discrimination and Coping Strategy Models for Sexual Minorities

    ERIC Educational Resources Information Center

    Chung, Y. Barry; Williams, Wendi; Dispenza, Franco

    2009-01-01

    The purpose of this study was to validate and expand on Y. B. Chung's (2001) models of work discrimination and coping strategies among lesbian, gay, and bisexual persons. In semistructured individual interviews, 17 lesbians and gay men reported 35 discrimination incidents and their related coping strategies. Responses were coded based on Chung's

  1. Modeling Topaz-II system performance

    SciTech Connect

    Lee, H.H.; Klein, A.C.

    1993-01-01

    The US acquisition of the Topaz-II in-core thermionic space reactor test system from Russia provides a good opportunity to perform a comparison of the Russian reported data and the results from computer codes such as MCNP (Ref. 3) and TFEHX (Ref. 4). The comparison study includes both neutronic and thermionic performance analyses. The Topaz-II thermionic reactor is modeled with MCNP using actual Russian dimensions and parameters. The computation of the neutronic performance considers several important aspects such as the fuel enrichment and location of the thermionic fuel elements (TFEs) in the reactor core. The neutronic analysis included the calculation of both radial and axial power distributions, which are then used in the TFEHX code for electrical performance. The reactor modeled consists of 37 single-cell TFEs distributed in a 13-cm-radius zirconium hydride block surrounded by 8 cm of beryllium metal reflector. The TFEs use 90% enriched ²³⁵U and molybdenum coated with a thin layer of ¹⁸⁴W for the emitter surface. Electrons emitted are captured by a collector surface, with a gap filled with cesium vapor between the collector and emitter surfaces. The collector surface is electrically insulated with alumina. Liquid NaK provides the cooling system for the TFEs. The axial thermal power distribution is obtained by dividing the TFE into 40 axial nodes. Comparison of the true axial power distribution with that produced by electrical heaters was also performed.

  2. Low-order dynamic modeling of the Experimental Breeder Reactor II

    SciTech Connect

    Berkan, R.C.; Upadhyaya, B.R.; Kisner, R.A.

    1990-07-01

    This report describes the development of a low-order, linear model of the Experimental Breeder Reactor II (EBR-II), including the primary system, intermediate heat exchanger, and steam generator subsystems. The linear model is developed to represent full-power steady-state dynamics for low-level perturbations. Transient simulations are performed using the model building and simulation capabilities of the computer software MATRIXx. The inherently safe characteristics of the EBR-II are verified through the simulation studies. The results presented in this report also indicate agreement between the linear model and the actual dynamics of the plant for several transients. Such models play a major role in the learning and improvement of nuclear reactor dynamics for control and signal validation studies. This research and development is sponsored by the Advanced Controls Program in the Instrumentation and Controls Division of the Oak Ridge National Laboratory. 17 refs., 67 figs., 15 tabs.
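
    A low-order linear model of this kind is typically written in state-space form, dx/dt = Ax + Bu, and simulated for small perturbations about full power. The sketch below simulates a made-up two-state system; the matrices and the perturbation are placeholders, not the EBR-II model.

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# Made-up 2-state linear perturbation model (NOT the actual EBR-II matrices):
# states = [power deviation, coolant temperature deviation]
A = np.array([[-0.5, -0.2],
              [ 0.3, -0.1]])
B = np.array([[1.0],
              [0.0]])
C = np.eye(2)
D = np.zeros((2, 1))

sys = StateSpace(A, B, C, D)
t = np.linspace(0.0, 60.0, 601)
u = np.full_like(t, 0.01)            # small step perturbation on the single input
_, y, _ = lsim(sys, u, t)            # simulate the linear response
print("state deviations at t = 60 s:", y[-1])
```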

  3. Solar swimming pool heating: Description of a validated model

    SciTech Connect

    Haaf, W.; Luboschik, U.; Tesche, B.

    1994-07-01

    In the framework of a European Demonstration Programme, co-financed by the CEC and national bodies, a model was elaborated and validated for open-air swimming pools having a minimal surface of 100 m² and a minimal depth of 0.5 m. The model consists of two parts, the energy balance of the pool and the solar plant. The theoretical background of the energy balance of an open-air swimming pool was found to be poor. Special monitoring campaigns were used to validate the dynamic model using mathematical parameter identification methods. The final model was simplified in order to shorten calculation time and to improve the user-friendliness by reducing the input values to the most important ones. The programme is commercially available. However, it requires the hourly meteorological data of a test reference year (TRY) as an input. The users are mainly design engineers.

  4. Inter-comparison and validation of ozone measurements by SAGE II and SBUV/2 instruments

    NASA Astrophysics Data System (ADS)

    Khatun, Sufia

    Ozone is an important trace gas in the Earth's atmosphere. Its distribution and temporal trends are monitored by a number of instruments. The measurement of atmospheric ozone is complicated, and inter-comparison of measurements using different techniques is important in validating and developing confidence when using these different ozone datasets. In this work, measurements of ozone by two classes of space-based instruments, SAGE II and SBUV/2, are compared. Twenty-one original layers of SBUV/2 ozone data are merged into five thick layers. The agreement in most regions for all four seasons between the SBUV/2 instruments is within 5-10 DU. The behavior of the partial ozone column in layers 2 and 3 has a more dynamic nature. This may be due to the Brewer-Dobson circulation carrying ozone-rich air from equatorial higher altitudes to mid-latitude lower altitudes. Monthly averaged data for the four SBUV/2 instruments show annual cycles and in some cases semi-annual cycles. There is about a six-month phase difference between the peaks and valleys in the northern mid-latitude region relative to the equatorial region. This suggests the dynamic nature of ozone migration from the equatorial region towards the pole. Pearson correlation coefficients for the total column ozone for the equatorial zone between NOAA 17 and the other three SBUV/2 instruments are 0.93, 1.00 and 0.99, respectively. The vast majority of the data falls within +/-1%. The maximum ozone is observed in the equatorial region at altitudes corresponding to pressure levels between 20-35 mbar. Moving away from the equatorial region, the maximum ozone is observed at lower altitudes. The very highest values of ozone are concentrated in three or four patches with locations corresponding to the centers of the Hadley and Ferrel cells. The agreement between the SAGE II and SBUV/2 instruments is within 1-3 DU above the 50 mbar pressure level, where the ozone content varies from a few to 65 DU. Time-dependent comparisons between SAGE II and NOAA 09 and NOAA 11 ozone data for the merged lower layer show an annual cycle in all regions. All three instruments agree well, with some exceptions for a period following the Mount Pinatubo eruption, when SAGE II departed from NOAA 09 and NOAA 11 in the equatorial region, especially in lower layer 2. There is no apparent effect of Pinatubo aerosols in regions or layers other than layer 2. In layers 3 and 4 there is a cyclic behavior, but not as clear as that observed in layer 2. All three instruments agree within 10-15% in layers 3 and 4. In the northern mid-latitudes, SAGE II reports consistently higher values of ozone in layer 2. None of the time histories presented shows a significant increase or decrease of ozone content over a long duration, although there are short periods of increasing or decreasing ozone. Ozone content from the longest running instrument, i.e., SAGE II, shows a slight decrease of "global" ozone over a period of more than 20 years. The best-fit linear trend suggests an ozone depletion rate of about 0.15% per year.
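
    The instrument-to-instrument statistics quoted above (a Pearson correlation between total-column time series and a best-fit linear trend expressed in percent per year) can be computed as in the sketch below; the monthly series are synthetic placeholders, not SAGE II or SBUV/2 data.

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(240)                                  # 20 years of monthly means
seasonal = 10.0 * np.sin(2.0 * np.pi * months / 12.0)
ozone_a = 280.0 + seasonal - 0.03 * months + rng.normal(0, 2, months.size)   # DU
ozone_b = 281.0 + seasonal - 0.03 * months + rng.normal(0, 2, months.size)   # DU

# Pearson correlation between the two instruments' total-column series
r = np.corrcoef(ozone_a, ozone_b)[0, 1]

# Best-fit linear trend of one series, expressed in percent per year
slope_per_month = np.polyfit(months, ozone_a, 1)[0]
trend_pct_per_year = 100.0 * slope_per_month * 12.0 / ozone_a.mean()
print(f"r = {r:.3f}, trend = {trend_pct_per_year:.2f} %/yr")
```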

  5. Formulation and validation of a mathematical model of phytoplankton growth

    SciTech Connect

    Collins, C.D.

    1980-06-01

    A submodel for internal nutrients, critical to prediction of phytoplankton growth dynamics, is described. Process equations were developed to represent storage of nutrients with separate mechanisms for nutrient uptake and assimilation, multiple-nutrient limitation by the threshold hypothesis, and light-dependent photorespiration. These have been incorporated into the ecosystem model MS.CLEANER and calibrated for Øvre Heimdalsvatn, Norway, an ultra-oligotrophic, subarctic lake. The model has been validated for Vorderer Finstertaler See, Austria.

  6. Concurrent Validity of the WISC-IV and DAS-II in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Kuriakose, Sarah

    2014-01-01

    Cognitive assessments are used for a variety of research and clinical purposes in children with autism spectrum disorder (ASD). This study establishes concurrent validity of the Wechsler Intelligence Scales for Children-fourth edition (WISC-IV) and Differential Ability Scales-second edition (DAS-II) in a sample of children with ASD with a broad

  7. Validating the Thinking Styles Inventory-Revised II among Chinese University Students with Hearing Impairment through Test Accommodations

    ERIC Educational Resources Information Center

    Cheng, Sanyin; Zhang, Li-Fang

    2014-01-01

    The present study pioneered in adopting test accommodations to validate the Thinking Styles Inventory-Revised II (TSI-R2; Sternberg, Wagner, & Zhang, 2007) among Chinese university students with hearing impairment. A series of three studies were conducted that drew their samples from the same two universities, in which accommodating test…

  8. Concurrent Validity of the WISC-IV and DAS-II in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Kuriakose, Sarah

    2014-01-01

    Cognitive assessments are used for a variety of research and clinical purposes in children with autism spectrum disorder (ASD). This study establishes concurrent validity of the Wechsler Intelligence Scales for Children-fourth edition (WISC-IV) and Differential Ability Scales-second edition (DAS-II) in a sample of children with ASD with a broad…

  9. Validity of NBME Parts I and II for the Selection of Residents: The Case of Orthopaedic Surgery.

    ERIC Educational Resources Information Center

    Case, Susan M.

    The predictive validity of scores on the National Board of Medical Examiners (NBME) Part I and Part II examinations for the selection of residents in orthopaedic surgery was investigated. Use of NBME scores has been criticized because of the time lag between taking Part I and entering residency and because Part I content is not directly linked to

  10. Prodiag : a process-independent transient diagnostic system - II : validation tests.

    SciTech Connect

    Reifman, J.; Wei, T. Y. C.

    1999-01-01

    The unique capabilities of the first-principles-based PRODIAG diagnostic system to identify unanticipated process component faults and to be ported across different processes/plants through modification of only input data files are demonstrated in two validation tests. The Braidwood Nuclear Power Plant full-scope operator training simulator is used to generate transient data for two plant systems used in the validation tests. The first test consists of a blind test performed with 39 simulated transients of 20 distinct types in the Braidwood chemical and volume control system. Of the 39 transients, 37 are correctly identified with varying precision within the first 40 s into the transient while the remaining two transients are not identified. The second validation test consists of a double-blind test performed with 14 simulated transients in the Braidwood component coolant water system. In addition to having no prior knowledge of the identity of the transients, in the double-blind test we also had no prior information regarding the identity of the component faults that the simulator was capable of modeling. All 14 transient events are correctly identified with varying precision within the first 30 s into the transient. The test results provide enough evidence to successfully confirm the unique capabilities of the plant-level PRODIAG diagnostic system.

  11. Open-Source MFIX-DEM Software for Gas-Solids Flows: Part II - Validation Studies

    SciTech Connect

    Li, Tingwen

    2012-04-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas-solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas-solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  12. Open-source MFIX-DEM software for gas-solids flows: Part II - Validation studies

    SciTech Connect

    Li, Tingwen; Garg, Rahul; Galvin, Janine; Pannala, Sreekanth

    2012-01-01

    With rapid advancements in computer hardware and numerical algorithms, computational fluid dynamics (CFD) has been increasingly employed as a useful tool for investigating the complex hydrodynamics inherent in multiphase flows. An important step during the development of a CFD model and prior to its application is conducting careful and comprehensive verification and validation studies. Accordingly, efforts to verify and validate the open-source MFIX-DEM software, which can be used for simulating the gas-solids flow using an Eulerian reference frame for the continuum fluid and a Lagrangian discrete framework (Discrete Element Method) for the particles, have been made at the National Energy Technology Laboratory (NETL). In part I of this paper, extensive verification studies were presented and in this part, detailed validation studies of MFIX-DEM are presented. A series of test cases covering a range of gas-solids flow applications were conducted. In particular the numerical results for the random packing of a binary particle mixture, the repose angle of a sandpile formed during a side charge process, velocity, granular temperature, and voidage profiles from a bounded granular shear flow, lateral voidage and velocity profiles from a monodisperse bubbling fluidized bed, lateral velocity profiles from a spouted bed, and the dynamics of segregation of a binary mixture in a bubbling bed were compared with available experimental data, and in some instances with empirical correlations. In addition, sensitivity studies were conducted for various parameters to quantify the error in the numerical simulation.

  13. Finite Element Model Development and Validation for Aircraft Fuselage Structures

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.

    2000-01-01

    The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results. The increased frequency range results in a corresponding increase in the number of modes, modal density and spatial resolution requirements. In this study, conventional modal tests using accelerometers are complemented with Scanning Laser Doppler Velocimetry and Electro-Optic Holography measurements to further resolve the spatial response characteristics. Whenever possible, component and subassembly modal tests are used to validate the finite element models at lower levels of assembly. Normal mode predictions for different finite element representations of components and assemblies are compared with experimental results to assess the most accurate techniques for modeling aircraft fuselage type structures.

  14. Experimental Validation of Modified Barton's Model for Rock Fractures

    NASA Astrophysics Data System (ADS)

    Asadollahi, Pooyan; Invernizzi, Marco C. A.; Addotto, Simone; Tonon, Fulvio

    2010-09-01

    Among the constitutive models for rock fractures developed over the years, Barton's empirical model has been widely used. Although Barton's failure criterion predicts peak shear strength of rock fractures with acceptable precision, it has some limitations in estimating the peak shear displacement, post-peak shear strength, dilation, and surface degradation. The first author modified Barton's original model in order to address these limitations. In this study, the modified Barton's model (the peak shear displacement, the shear stress-displacement curve, and the dilation displacement) is validated by conducting a series of direct shear tests.

  15. Validating the BHR RANS model for variable density turbulence

    SciTech Connect

    Israel, Daniel M; Gore, Robert A; Stalsberg-Zarling, Krista L

    2009-01-01

    The BHR RANS model is a turbulence model for multi-fluid flows in which density variation plays a strong role in the turbulence processes. In this paper the authors demonstrate the usefulness of BHR over a wide range of flows which include the effects of shear, buoyancy, and shocks. The results are in good agreement with experimental and DNS data across the entire set of validation cases, with no need to retune model coefficients between cases. The model has potential application to a number of aerospace related flow problems.

  16. Validation and upgrading of physically based mathematical models

    NASA Technical Reports Server (NTRS)

    Duval, Ronald

    1992-01-01

    The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.

  17. Experimentally validated finite element model of electrocaloric multilayer ceramic structures

    SciTech Connect

    Smith, N. A. S.; Correia, T. M.; Rokosz, M. K. E-mail: maciej.rokosz@npl.co.uk

    2014-07-28

    A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data, suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.

  18. Validation of a Model for Teaching Canine Fundoscopy.

    PubMed

    Nibblett, Belle Marie D; Pereira, Mary Mauldin; Williamson, Julie A; Sithole, Fortune

    2015-01-01

    A validated teaching model for canine fundoscopic examination was developed to improve Day One fundoscopy skills while at the same time reducing use of teaching dogs. This novel eye model was created from a hollow plastic ball with a cutout for the pupil, a suspended 20-diopter lens, and paint and paper simulation of relevant eye structures. This eye model was mounted on a wooden stand with canine head landmarks useful in performing fundoscopy. Veterinary educators performed fundoscopy using this model and completed a survey to establish face and content validity. Subsequently, veterinary students were randomly assigned to pre-laboratory training with or without the use of this teaching model. After completion of an ophthalmology laboratory on teaching dogs, student outcome was assessed by measuring students' ability to see a symbol inserted on the simulated retina in the model. Students also completed a survey regarding their experience with the model and the laboratory. Overall, veterinary educators agreed that this eye model was well constructed and useful in teaching good fundoscopic technique. Student performance of fundoscopy was not negatively impacted by the use of the model. This novel canine model shows promise as a teaching and assessment tool for fundoscopy. PMID:25769909

  19. The validity and reliability of the Knowledge of Women's Issues and Epilepsy (KOWIE) Questionnaires I and II.

    PubMed

    Long, Lucretia; McAuley, James W; Shneker, Bassel; Moore, J Layne

    2005-04-01

    The Knowledge of Women's Issues in Epilepsy (KOWIE) Questionnaires I and II were developed to assess what women with epilepsy (WWE) and practitioners know about relevant topics and concerns. Prior to disseminating any tool, an instrument should be both valid and reliable. The purpose of this study was to report the validity and reliability of the KOWIE Questionnaires I and II. To establish validity, the original KOWIE was sent to five experts who critiqued the relevance of each item. A content validity inventory (CVI) was developed later and sent to 20 additional epilepsy experts across the country. Tool stability was evaluated by test-retest procedures. Patients and practitioners completed corresponding tools on day one, and 24 hours later, on day two. Participants were asked to not review information on the topic of interest until after study procedures were completed. Sixteen of 20 expert responses were included in data analysis; 4 were excluded due to incomplete data. The CVI correlation coefficient was 0.92. Test-retest results from all 9 patients and 18 of 20 healthcare professionals were included in data analysis. Correlation coefficients were 0.88 and 0.83 for the KOWIE I and II, respectively, confirming these questionnaires are valid and reliable. While future knowledge may require altering both tools, the current instrument may be used as an assessment tool and guide intervention as it pertains to outcomes in WWE. PMID:15902950

  20. Propeller aircraft interior noise model utilization study and validation

    NASA Technical Reports Server (NTRS)

    Pope, L. D.

    1984-01-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  1. Interpretation of KABC-II scores: An evaluation of the incremental validity of Cattell-Horn-Carroll (CHC) factor scores in predicting achievement.

    PubMed

    McGill, Ryan J

    2015-12-01

    This study is an examination of the incremental validity of Cattell-Horn-Carroll (CHC) factor scores from the Kaufman Assessment Battery for Children-second edition (KABC-II) for predicting scores on the Kaufman Test of Educational Achievement-second edition (KTEA-II). The participants were children and adolescents, ages 7-18, (N = 2,025) drawn from the KABC-II standardization sample. The sample was nationally stratified and proportional to U.S. census estimates for sex, ethnicity, geographic region, and parent education level. Hierarchical multiple regression analyses were used to assess for factor-level effects after controlling for the variance accounted for by the full scale Fluid-Crystallized Index (FCI) score. The results were interpreted using the R2/ΔR2 statistic as effect size indices. Consistent with similar incremental validity studies, the FCI accounted for statistically and clinically significant portions of KTEA-II score variance, with R2 values ranging from .30 to .65. KABC-II CHC factor scores collectively provided statistically significant incremental variance beyond the FCI in all of the regression models, although the effect size estimates were consistently negligible to small (average ΔR2CHC = .03). Individually, the KABC-II factor scores accounted for mostly small portions of achievement variance across the prediction models, with none of the individual CHC factors accounting for clinically significant incremental prediction beyond the FCI. Additionally, most of the unique first-order predictive variance was captured by the Crystallized Ability factor alone. The potential clinical and theoretical implications of these results are discussed. PMID:25894708
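
    The hierarchical-regression logic described above (enter the full-scale composite first, then add the factor scores, and read off the change in R2) can be illustrated in a few lines. The sketch below uses synthetic data and hypothetical variable names; it is not the KABC-II/KTEA-II analysis itself, only a minimal example of how an incremental ΔR2 is obtained.

      import numpy as np

      def r_squared(X, y):
          """R^2 from an ordinary least-squares fit with an intercept column."""
          X1 = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          resid = y - X1 @ beta
          return 1.0 - resid.var() / y.var()

      # Synthetic data: a full-scale composite and four factor scores predicting
      # one achievement score (all names and effect sizes are illustrative only).
      rng = np.random.default_rng(0)
      n = 500
      factors = rng.normal(size=(n, 4))                       # stand-in factor scores
      fci = factors @ np.array([0.4, 0.3, 0.2, 0.3]) + rng.normal(scale=0.5, size=n)
      achievement = 0.7 * fci + 0.1 * factors[:, 0] + rng.normal(scale=0.6, size=n)

      # Step 1: composite alone.  Step 2: composite plus the factor scores.
      r2_step1 = r_squared(fci[:, None], achievement)
      r2_step2 = r_squared(np.column_stack([fci, factors]), achievement)
      print(f"R2 (composite only)      = {r2_step1:.3f}")
      print(f"R2 (composite + factors) = {r2_step2:.3f}")
      print(f"incremental delta R2     = {r2_step2 - r2_step1:.3f}")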

  2. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and Space Station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  3. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  4. Validation of impaired renal function chick model with uranyl nitrate

    SciTech Connect

    Harvey, R.B.; Kubena, L.F.; Phillips, T.D.; Heidelbaugh, N.D.

    1986-01-01

    Uranium is a highly toxic element when soluble salts are administered parenterally, whereas the index of toxicity is very low when ingested. In the salt form, uranium is one of the oldest substances used experimentally to induce mammalian renal failure. Renal damage occurs when uranium reacts chemically with the protein of columnar cells lining the tubular epithelium, leading to cellular injury and necrosis. Uranyl nitrate (UN) is the most common uranium salt utilized for nephrotoxic modeling. The development of an impaired renal function (IRF) chick model required a suitable nephrotoxic compound, such as UN, for validation, yet toxicity data for chickens were notably absent in the literature. The objective of the present study was to validate the IRF model with UN, based upon preliminary nephrotoxic dosages developed in this laboratory.

  5. Validation of Knowledge Acquisition for Surgical Process Models

    PubMed Central

    Neumuth, Thomas; Jannin, Pierre; Strauss, Gero; Meixensberger, Juergen; Burgert, Oliver

    2009-01-01

    Objective Surgical Process Models (SPMs) are models of surgical interventions. The objectives of this study are to validate acquisition methods for Surgical Process Models and to assess the performance of different observer populations. Design The study examined 180 SPM of simulated Functional Endoscopic Sinus Surgeries (FESS), recorded with observation software. About 150,000 single measurements in total were analyzed. Measurements Validation metrics were used for assessing the granularity, content accuracy, and temporal accuracy of structures of SPMs. Results Differences between live observations and video observations are not statistically significant. Observations performed by subjects with medical backgrounds gave better results than observations performed by subjects with technical backgrounds. Granularity was reconstructed correctly by 90%, content by 91%, and the mean temporal accuracy was 1.8 s. Conclusion The study shows the validity of video as well as live observations for modeling Surgical Process Models. For routine use, the authors recommend live observations due to their flexibility and effectiveness. If high precision is needed or the SPM parameters are altered during the study, video observations are the preferable approach. PMID:18952942

  6. Model validation and selection based on inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, Thomas; Carvajal González, Sergio; Hanss, Michael

    2012-10-01

    In this work, a method for the validation of models in general, and the selection of the most appropriate model in particular, is presented. As an industrially relevant example, a Finite Element (FE) model of a brake pad is investigated and identified with particular respect to uncertainties. The identification is based on inverse fuzzy arithmetic and consists of two stages. In the first stage, the eigenfrequencies of the brake pad are considered, and for three different material models, a set of fuzzy-valued parameters is identified on the basis of measurement values. Based on these identified parameters and a resimulation of the system with these parameters, a model validation is performed which takes into account both the model uncertainties and the output uncertainties. In the second stage, the most appropriate material model is used in the FE model for the computation of frequency response functions between excitation point and three measurement points. Again, the parameters of the model are identified on the basis of three corresponding measurement signals and a resimulation is conducted.

  7. Prediction of driving ability: Are we building valid models?

    PubMed

    Hoggarth, Petra A; Innes, Carrie R H; Dalrymple-Alford, John C; Jones, Richard D

    2015-04-01

    The prediction of on-road driving ability using off-road measures is a key aim in driving research. The primary goal in most classification models is to determine a small number of off-road variables that predict driving ability with high accuracy. Unfortunately, classification models are often over-fitted to the study sample, leading to inflation of predictive accuracy, poor generalization to the relevant population and, thus, poor validity. Many driving studies do not report sufficient details to determine the risk of model over-fitting and few report any validation technique, which is critical to test the generalizability of a model. After reviewing the literature, we generated a model using a moderately large sample size (n=279) employing best practice techniques in the context of regression modelling. By then randomly selecting progressively smaller sample sizes we show that a low ratio of participants to independent variables can result in over-fitted models and spurious conclusions regarding model accuracy. We conclude that more stable models can be constructed by following a few guidelines. PMID:25667204
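
    The over-fitting effect described above (apparent accuracy inflating as the ratio of participants to predictors shrinks) is easy to reproduce on synthetic data. The sketch below is only an illustration under assumed sample sizes and a purely random outcome, not the study's data or models; it contrasts accuracy measured on the training sample with cross-validated accuracy.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n_predictors = 20                        # hypothetical off-road measures, no real signal

      for n in (279, 60, 30):                  # progressively smaller sample sizes
          X = rng.normal(size=(n, n_predictors))
          y = rng.integers(0, 2, size=n)       # pass/fail outcome, random by construction
          model = LogisticRegression(max_iter=1000).fit(X, y)
          apparent = model.score(X, y)         # accuracy on the very sample used for fitting
          cv = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
          print(f"n={n:4d}  apparent accuracy={apparent:.2f}  cross-validated={cv:.2f}")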

  8. Numerical modeling, calibration, and validation of an ultrasonic separator.

    PubMed

    Cappon, Hans; Keesman, Karel J

    2013-03-01

    Our overall goal is to apply acoustic separation technology for the recovery of valuable particulate matter from wastewater in industry. Such large-scale separator systems require detailed design and evaluation to optimize the system performance at the earliest stage possible. Numerical models can facilitate and accelerate the design of this application; therefore, a finite element (FE) model of an ultrasonic particle separator is a prerequisite. In our application, the particle separator consists of a glass resonator chamber with a piezoelectric transducer attached to the glass by means of epoxy adhesive. Separation occurs most efficiently when the system is operated at its main eigenfrequency. The goal of the paper is to calibrate and validate a model of a demonstrator ultrasonic separator, preserving known physical parameters and estimating the remaining unknown or less-certain parameters to allow extrapolation of the model beyond the measured system. A two-step approach was applied to obtain a validated model of the separator. The first step involved the calibration of the piezoelectric transducer. The second step, the subject of this paper, involves the calibration and validation of the entire separator using nonlinear optimization techniques. The results show that the approach led to a fully calibrated 2-D model of the empty separator, which was validated with experiments on a filled separator chamber. The large sensitivity of the separator to small variations indicated that such a system should either be made and operated within tight specifications to obtain the required performance or the operation of the system should be adaptable to cope with a slightly off-spec system, requiring a feedback controller. PMID:23475927

  9. A verification and validation process for model-driven engineering

    NASA Astrophysics Data System (ADS)

    Delmas, R.; Pires, A. F.; Polacsek, T.

    2013-12-01

    Model Driven Engineering practitioners already benefit from many well established verification tools, for Object Constraint Language (OCL), for instance. Recently, constraint satisfaction techniques have been brought to Model-Driven Engineering (MDE) and have shown promising results on model verification tasks. With all these tools, it becomes possible to provide users with formal support from early model design phases to model instantiation phases. In this paper, a selection of such tools and methods is presented, and an attempt is made to define a verification and validation process for model design and instance creation centered on UML (Unified Modeling Language) class diagrams and declarative constraints, and involving the selected tools. The suggested process is illustrated with a simple example.

  10. Aggregating validity indicators embedded in Conners' CPT-II outperforms individual cutoffs at separating valid from invalid performance in adults with traumatic brain injury.

    PubMed

    Erdodi, Laszlo A; Roth, Robert M; Kirsch, Ned L; Lajiness-O'neill, Renee; Medoff, Brent

    2014-08-01

    Continuous performance tests (CPT) provide a useful paradigm to assess vigilance and sustained attention. However, few established methods exist to assess the validity of a given response set. The present study examined embedded validity indicators (EVIs) previously found effective at dissociating valid from invalid performance in relation to well-established performance validity tests in 104 adults with TBI referred for neuropsychological testing. Findings suggest that aggregating EVIs increases their signal detection performance. While individual EVIs performed well at their optimal cutoffs, two specific combinations of these five indicators generally produced the best classification accuracy. A CVI-5A ≥3 had a specificity of .92-.95 and a sensitivity of .45-.54. At ≥4 the CVI-5B had a specificity of .94-.97 and sensitivity of .40-.50. The CVI-5s provide a single numerical summary of the cumulative evidence of invalid performance within the CPT-II. Results support the use of a flexible, multivariate approach to performance validity assessment. PMID:24957927
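
    A minimal sketch of the aggregation idea above: count how many embedded indicators fall beyond their individual cutoffs, apply a composite cutoff (such as three or more out of five), and score the flag against an external validity criterion. All scores, cutoffs, and the base rate below are hypothetical, not the CPT-II values reported in the study.

      import numpy as np

      def composite_validity_index(scores, cutoffs):
          """Count how many embedded indicators meet or exceed their cutoffs."""
          return (scores >= cutoffs).sum(axis=1)

      def sens_spec(flagged, invalid):
          """Sensitivity and specificity of a binary flag against a criterion."""
          return flagged[invalid].mean(), (~flagged[~invalid]).mean()

      # Hypothetical data: five indicators, higher score = worse performance.
      rng = np.random.default_rng(2)
      n = 104
      invalid = rng.random(n) < 0.3                     # criterion (e.g. failed stand-alone PVTs)
      scores = rng.normal(size=(n, 5)) + 1.2 * invalid[:, None]
      cutoffs = np.full(5, 1.0)                         # illustrative per-indicator cutoffs

      cvi = composite_validity_index(scores, cutoffs)
      sens, spec = sens_spec(cvi >= 3, invalid)         # composite cutoff of >= 3 of 5
      print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")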

  11. Robust cross-validation of linear regression QSAR models.

    PubMed

    Konovalov, Dmitry A; Llewellyn, Lyndon E; Vander Heyden, Yvan; Coomans, Danny

    2008-10-01

    A quantitative structure-activity relationship (QSAR) model is typically developed to predict the biochemical activity of untested compounds from the compounds' molecular structures. "The gold standard" of model validation is the blindfold prediction when the model's predictive power is assessed from how well the model predicts the activity values of compounds that were not considered in any way during the model development/calibration. However, during the development of a QSAR model, it is necessary to obtain some indication of the model's predictive power. This is often done by some form of cross-validation (CV). In this study, the concepts of the predictive power and fitting ability of a multiple linear regression (MLR) QSAR model were examined in the CV context allowing for the presence of outliers. Commonly used predictive power and fitting ability statistics were assessed via Monte Carlo cross-validation when applied to percent human intestinal absorption, blood-brain partition coefficient, and toxicity values of saxitoxin QSAR data sets, as well as three known benchmark data sets with known outlier contamination. It was found that (1) a robust version of MLR should always be preferred over the ordinary-least-squares MLR, regardless of the degree of outlier contamination and that (2) the model's predictive power should only be assessed via robust statistics. The Matlab and java source code used in this study is freely available from the QSAR-BENCH section of www.dmitrykonovalov.org for academic use. The Web site also contains the java-based QSAR-BENCH program, which could be run online via java's Web Start technology (supporting Windows, Mac OSX, Linux/Unix) to reproduce most of the reported results or apply the reported procedures to other data sets. PMID:18826208
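
    The procedure argued for above (a robust fit assessed by robust statistics under Monte Carlo cross-validation) can be sketched as follows. sklearn's HuberRegressor stands in for a robust MLR and the median absolute error for a robust accuracy statistic; the data are synthetic with injected outliers, so this illustrates the procedure rather than reproducing the paper's results.

      import numpy as np
      from sklearn.linear_model import LinearRegression, HuberRegressor
      from sklearn.metrics import median_absolute_error
      from sklearn.model_selection import ShuffleSplit

      rng = np.random.default_rng(3)
      n, p = 120, 5
      X = rng.normal(size=(n, p))
      y = X @ rng.normal(size=p) + rng.normal(scale=0.3, size=n)
      y[rng.choice(n, size=12, replace=False)] += rng.normal(scale=8.0, size=12)  # outliers

      # Monte Carlo cross-validation: many random train/test splits.
      splitter = ShuffleSplit(n_splits=200, test_size=0.25, random_state=0)
      errors = {"OLS": [], "Huber": []}
      for train, test in splitter.split(X):
          for name, model in (("OLS", LinearRegression()), ("Huber", HuberRegressor())):
              model.fit(X[train], y[train])
              errors[name].append(median_absolute_error(y[test], model.predict(X[test])))

      for name, e in errors.items():
          print(f"{name:5s} median absolute CV error: {np.median(e):.3f}")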

  12. Implementation and validation of RSTT's model for the Euromed region

    NASA Astrophysics Data System (ADS)

    Guilbert, J.; Merrer, S.; Godey, S.; Cano, Y.; Bossu, R.

    2013-05-01

    The RSTT model and software package is currently the only model available to the scientific community for performing quick location calculations in a 3D model. This package was implemented at the French NDC for validation in 2012. The first validation tests were made using GT5 events recorded by the French seismic network (the LDG seismic network). For this first test we computed and compared the locations of GT5 events using only the LDG seismic stations. The results obtained show that, within the GT5 precision (<5 km), it is not possible to decide between the AK135 model and RSTT (i.e., 50% of events). For the other 50% of GT5 events, for which the geometry of the LDG network is not well adapted, the RSTT model performs a good travel-time correction for regional seismic waves (Pg, Pn, Sg, Sn) and improves the final location quality compared with the AK135 model. To extend the statistics on the quality of the RSTT model, we apply the same procedure over the EuroMed region in collaboration with the EMSC team. We perform multiple tests to assess the ability of the RSTT model to improve the location of GT5 events for different seismic station geometries. The results of the RSTT stress tests will be presented and discussed.

  13. KINEROS2-AGWA: Model Use, Calibration, and Validation

    NASA Technical Reports Server (NTRS)

    Goodrich, D. C.; Burns, I. S.; Unkrich, C. L.; Semmens, D. J.; Guertin, D. P.; Hernandez, M.; Yatheendradas, S.; Kennedy, J. R.; Levick, L. R.

    2013-01-01

    KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.
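
    As a rough illustration of the kinematic overland-flow component mentioned above, the sketch below integrates the one-dimensional kinematic wave equation dh/dt + d(alpha*h^m)/dx = r with an explicit upwind finite-difference scheme for a single plane under constant rainfall excess. It is not the KINEROS2 implementation, and all parameter values (length, slope, roughness, rainfall) are illustrative assumptions.

      import numpy as np

      # 1-D kinematic wave: dh/dt + dq/dx = r, with q = alpha * h**m (Manning-type law).
      L, nx = 50.0, 100                  # plane length [m] and number of nodes
      dx = L / nx
      slope, n_manning = 0.05, 0.1
      alpha = np.sqrt(slope) / n_manning
      m = 5.0 / 3.0
      rain = 50.0 / 3.6e6                # 50 mm/h rainfall excess expressed in m/s

      h = np.zeros(nx)                   # flow depth [m]
      t, t_end, dt = 0.0, 1800.0, 0.05   # time step chosen well below the CFL limit
      while t < t_end:
          q = alpha * h ** m
          dqdx = np.empty_like(q)
          dqdx[0] = q[0] / dx            # upstream boundary: no inflow
          dqdx[1:] = (q[1:] - q[:-1]) / dx
          h = np.maximum(h + dt * (rain - dqdx), 0.0)
          t += dt

      print(f"outlet depth after {t_end / 60:.0f} min: {h[-1] * 1000:.1f} mm")
      print(f"outlet unit discharge: {alpha * h[-1] ** m * 1000:.2f} l/s per m width")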

  14. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns

    PubMed Central

    Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain

    2015-01-01

    Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917
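
    A minimal sketch of the novelty-search idea behind the Pattern Space Exploration method: score each candidate parameter set by the distance of its simulated behaviour pattern to the nearest patterns already archived, and keep only sufficiently novel ones. The toy model, behaviour descriptor, and novelty threshold below are stand-ins, and the evolutionary operators of the actual method are omitted.

      import numpy as np

      def novelty(pattern, archive, k=5):
          """Mean distance to the k nearest archived behaviour patterns."""
          if not archive:
              return np.inf
          d = np.sort(np.linalg.norm(np.asarray(archive) - pattern, axis=1))
          return d[:k].mean()

      def toy_model(params):
          """Stand-in simulation mapping parameters to a 2-D behaviour pattern."""
          x, y = params
          return np.array([np.sin(3 * x) * y, np.cos(2 * y) * x])

      rng = np.random.default_rng(4)
      archive, threshold = [], 0.15
      for _ in range(2000):
          params = rng.uniform(-2, 2, size=2)          # candidate parameter values
          pattern = toy_model(params)
          if novelty(pattern, archive) > threshold:    # keep only sufficiently novel patterns
              archive.append(pattern)

      print(f"{len(archive)} qualitatively distinct behaviour patterns archived")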

  15. Development of a Validated Model of Ground Coupling

    SciTech Connect

    Metz, P.D.

    1980-01-01

    A research program at Brookhaven National Laboratory (BNL) studies ground coupling, the use of the earth as a heat source/sink or storage element for solar heat pump space conditioning systems. This paper outlines the analytical and experimental research to date toward the development of an experimentally validated model of ground coupling and, based on experimental results from December 1978 to September 1979, explores the sensitivity of present model predictions to variations in thermal conductivity and other factors. Ways in which the model can be further refined are discussed.

  16. Verifying and Validating Proposed Models for FSW Process Optimization

    NASA Technical Reports Server (NTRS)

    Schneider, Judith

    2008-01-01

    This slide presentation reviews Friction Stir Welding (FSW) and the attempts to model the process in order to optimize and improve the process. The studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) Microstructure features, (2) Flow Streamlines, (3) Steady-state Nature, and (4) Grain Refinement Mechanisms

  17. Validation of the SUNY Satellite Model in a Meteosat Environment

    SciTech Connect

    Perez, R.; Schlemmer, J.; Renne, D.; Cowlin, S.; George, R.; Bandyopadhyay, B.

    2009-01-01

    The paper presents a validation of the SUNY satellite-to-irradiance model against four ground-truth stations from the Indian solar radiation network located in and around the province of Rajasthan, India. The SUNY model had initially been developed and tested to process US weather satellite data from the GOES series and has been used as part of the production of the US National Solar Resource Data Base (NSRDB). Here the model is applied to process data from the European weather satellites Meteosat 5 and 7.

  18. Modeling and experimental validation of buckling dielectric elastomer actuators

    NASA Astrophysics Data System (ADS)

    Vertechy, Rocco; Frisoli, Antonio; Bergamasco, Massimo; Carpi, Federico; Frediani, Gabriele; De Rossi, Danilo

    2012-09-01

    Buckling dielectric elastomer actuators are a special type of electromechanical transducers that exploit electro-elastic instability phenomena to generate large out-of-plane axial-symmetric deformations of circular membranes made of non-conductive rubbery material. In this paper a simplified explicit analytical model and a general monolithic finite element model are described for the coupled electromechanical analysis and simulation of buckling dielectric elastomer membranes which undergo large electrically induced displacements. Experimental data are also reported which validate the developed models.

  19. Validation results of wind diesel simulation model TKKMOD

    NASA Astrophysics Data System (ADS)

    Manninen, L. M.

    The document summarizes the results of the TKKMOD validation procedure. TKKMOD is a simulation model developed at Helsinki University of Technology for a specific wind-diesel system layout. The model has been included in the European wind-diesel modeling software package WDLTOOLS under the CEC JOULE project Engineering Design Tools for Wind-Diesel Systems (JOUR-0078). The simulation model is utilized for calculating the long-term performance of the reference system configuration for given wind and load conditions. The main results are energy flows, energy losses in the system components, diesel fuel consumption, and the number of diesel engine starts. The work has been funded through the Finnish Advanced Energy System R&D Programme (NEMO). The validation has been performed using data from EFI (Norwegian Electric Power Institute), since data from the Finnish reference system are not yet available. The EFI system has a slightly different configuration with similar overall operating principles and approximately the same battery capacity. The validation data set, 394 hours of measured data, is from the first prototype wind-diesel system on the island FROYA off the Norwegian coast.

  20. Climate Model Datasets on Earth System Grid II (ESG II)

    DOE Data Explorer

    Earth System Grid (ESG) is a project that combines the power and capacity of supercomputers, sophisticated analysis servers, and datasets on the scale of petabytes. The goal is to provide a seamless distributed environment that allows scientists in many locations to work with large-scale data, perform climate change modeling and simulation, and share results in innovative ways. Though ESG is more about the computing environment than the data, there are still several catalogs of data available at the web site that can be browsed or searched. Most of the datasets are restricted to registered users, but several are open to any access.

  1. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval valued function of minimal spread containing all observations, those for RPMs seek to maximize the model's ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it casts the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.
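
    One simple instance of the optimization described above for IPMs (minimize the spread of an interval-valued prediction subject to containing every observation) is a constant-width linear interval model, which reduces to a small linear program. The sketch below poses it with scipy's linprog on synthetic data; it is an assumed simplification for illustration, not the formulation used in the paper.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(5)
      n = 80
      x = rng.uniform(0, 10, size=n)
      y = 1.5 * x + 2.0 + rng.normal(scale=1.0, size=n)   # synthetic "experimental" data

      # Decision variables: [intercept a, slope b, half-width w]; minimize w subject to
      # every observation lying inside [a + b*x - w, a + b*x + w].
      Xd = np.column_stack([np.ones(n), x])
      ones = np.ones((n, 1))
      A_ub = np.vstack([np.hstack([Xd, -ones]),            #  a + b*x - w <= y
                        np.hstack([-Xd, -ones])])          # -a - b*x - w <= -y
      b_ub = np.concatenate([y, -y])
      c = np.array([0.0, 0.0, 1.0])
      res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(None, None), (None, None), (0, None)])
      a, b, w = res.x
      print(f"interval prediction: y(x) in [{a:.2f} + {b:.2f}x - {w:.2f}, {a:.2f} + {b:.2f}x + {w:.2f}]")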

  2. Rationality Validation of a Layered Decision Model for Network Defense

    SciTech Connect

    Wei, Huaqiang; Alves-Foss, James; Zhang, Du; Frincke, Deb

    2007-08-31

    We propose a cost-effective network defense strategy built on three key decision layers: security policies, defense strategies, and real-time defense tactics for countering immediate threats. A layered decision model (LDM) can be used to capture this decision process. The LDM helps decision-makers gain insight into the hierarchical relationships among inter-connected entities and decision types, and supports the selection of cost-effective defense mechanisms to safeguard computer networks. To be effective as a business tool, it is first necessary to validate the rationality of the model before applying it to real-world business cases. This paper describes our efforts in validating the LDM rationality through simulation.

  3. Approaches to Validation of Models for Low Gravity Fluid Behavior

    NASA Technical Reports Server (NTRS)

    Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad

    2005-01-01

    This paper details the authors' experiences with the validation of computer models to predict low gravity fluid behavior. It reviews the literature of low gravity fluid behavior as a starting point for developing a baseline set of test cases. It examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues seem to be the following: most of the data is described by empirical correlation rather than fundamental relation; detailed measurements of the flow field have not been made; free surface shapes are observed but through thick plastic cylinders, and therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days but the zero-gravity time available has been only seconds.

  4. Microelectronics package design using experimentally-validated modeling and simulation.

    SciTech Connect

    Johnson, Jay Dean; Young, Nathan Paul; Ewsuk, Kevin Gregory

    2010-11-01

    Packaging high power radio frequency integrated circuits (RFICs) in low temperature cofired ceramic (LTCC) presents many challenges. Within the constraints of LTCC fabrication, the design must provide the usual electrical isolation and interconnections required to package the IC, with additional consideration given to RF isolation and thermal management. While iterative design and prototyping is an option for developing RFIC packaging, it would be expensive and most likely unsuccessful due to the complexity of the problem. To facilitate and optimize package design, thermal and mechanical simulations were used to understand and control the critical parameters in LTCC package design. The models were validated through comparisons to experimental results. This paper summarizes an experimentally-validated modeling approach to RFIC package design, and presents some results and key findings.

  5. A benchmark for the validation of solidification modelling algorithms

    NASA Astrophysics Data System (ADS)

    Kaschnitz, E.; Heugenhauser, S.; Schumacher, P.

    2015-06-01

    This work presents two three-dimensional solidification models, which were solved by several commercial solvers (MAGMASOFT, FLOW-3D, ProCAST, WinCast, ANSYS, and OpenFOAM). Surprisingly, the results show noticeable differences. The results are analyzed in a manner similar to a round-robin test procedure to obtain reference values for temperatures and their uncertainties at selected positions in the model. The first model is similar to an adiabatic calorimeter with an aluminum alloy solidifying in a copper block. For this model, an analytical solution for the overall temperature at steady state can be calculated. The second model implements additional heat transfer boundary conditions at outer faces. The geometry of the models, the initial and boundary conditions as well as the material properties are kept as simple as possible but, nevertheless, close to a realistic solidification situation. The gained temperature results can be used to validate self-written solidification solvers and check the accuracy of commercial solidification programs.

  6. Optimization and validation of a micellar electrokinetic chromatographic method for the analysis of several angiotensin-II-receptor antagonists.

    PubMed

    Hillaert, S; De Beer, T R M; De Beer, J O; Van den Bossche, W

    2003-01-10

    We have optimized a micellar electrokinetic capillary chromatographic method for the separation of six angiotensin-II-receptor antagonists (ARA-IIs): candesartan, eprosartan mesylate, irbesartan, losartan potassium, telmisartan, and valsartan. A face-centred central composite design was applied to study the effect of the pH, the molarity of the running buffer, and the concentration of the micelle-forming agent on the separation properties. A combination of the studied parameters permitted the separation of the six ARA-IIs, which was best carried out using a 55-mM sodium phosphate buffer solution (pH 6.5) containing 15 mM of sodium dodecyl sulfate. The same system can also be applied for the quantitative determination of these compounds, but only for the more stable ARA-IIs (candesartan, eprosartan mesylate, losartan potassium, and valsartan). Some system parameters (linearity, precision, and accuracy) were validated. PMID:12564683
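
    The face-centred central composite design mentioned above can be generated programmatically: the 2^3 factorial corner points, six face-centred axial points (alpha = 1), and replicated centre points, mapped from coded levels to real factor settings. The factor ranges below (pH, buffer molarity, SDS concentration) are made-up illustrations, not the values optimized in the paper.

      import itertools
      import numpy as np

      # Face-centred central composite design (alpha = 1) for three factors.
      factorial = list(itertools.product([-1, 1], repeat=3))          # 8 corner runs
      axial = [tuple(s if i == ax else 0 for i in range(3))
               for ax in range(3) for s in (-1, 1)]                   # 6 face-centre runs
      center = [(0, 0, 0)] * 3                                        # replicated centre runs
      design = np.array(factorial + axial + center, dtype=float)

      # Map coded levels (-1, 0, +1) to hypothetical factor ranges:
      # pH, buffer molarity (mM), SDS concentration (mM).
      lo = np.array([5.5, 25.0, 5.0])
      hi = np.array([7.5, 85.0, 25.0])
      runs = lo + (design + 1.0) / 2.0 * (hi - lo)

      for coded, real in zip(design, runs):
          print(coded, "->", np.round(real, 2))
      print(f"total runs: {len(runs)}")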

  7. Seine estuary modelling and AirSWOT measurements validation

    NASA Astrophysics Data System (ADS)

    Chevalier, Laetitia; Lyard, Florent; Laignel, Benoit

    2013-04-01

    In the context of global climate change, knowing water fluxes and storage, from the global scale to the local scale, is a crucial issue. The future satellite SWOT (Surface Water and Ocean Topography) mission, dedicated to surface water observation, is proposed to meet this challenge. SWOT's main payload will be a Ka-band Radar Interferometer (KaRIn). To validate this new kind of measurement, preparatory airborne campaigns (called AirSWOT) are currently being designed. AirSWOT will carry an interferometer similar to KaRIn: Kaspar, the Ka-band SWOT Phenomenology Airborne Radar. Some campaigns are planned in France in 2014. During these campaigns, the plane will fly over the Seine River basin, especially to observe its estuary, the upstream river main channel (to quantify river-aquifer exchange) and some wetlands. The objective of the present work is to validate the ability of AirSWOT and SWOT using Seine estuary hydrodynamic modelling. In this context, field measurements will be collected by different teams such as GIP (Public Interest Group) Seine Aval, the GPMR (Rouen Seaport), SHOM (Hydrographic and Oceanographic Service of the Navy), the IFREMER (French Research Institute for Sea Exploitation), Mercator-Ocean, LEGOS (Laboratory of Space Study in Geophysics and Oceanography), ADES (Data Access Groundwater) ... These datasets will be used first to locally validate AirSWOT measurements, and then to improve hydrodynamic simulations (using tidal boundary conditions, river and groundwater inflows ...) for 2D validation of the AirSWOT data. This modelling will also be used to estimate the benefit of the future SWOT mission for mid-latitude river hydrology. To do this modelling, the TUGOm barotropic model (Toulouse Unstructured Grid Ocean model 2D) is used. Preliminary simulations have been performed by first modelling and then combining two different regions: first the Seine River and its estuarine area, and secondly the English Channel. These two simulations are currently being improved by testing different roughness coefficients and adding tributary inflows. Groundwater contributions will also be introduced (TUGOm development in progress). The model outputs will be validated using GPMR tide gauge data and measurements from the Topex/Poseidon and Jason-1/-2 altimeters for the year 2007.

  8. Closed coronal structures. II. generalized hydrostatic model

    SciTech Connect

    Serio, S.; Peres, G.; Vaiana, G.S.; Golub, L.; Rosner, R.

    1981-01-01

    We use numerical computations of stationary solar coronal loop atmospheres to extend previous analytical work by Rosner, Tucker, and Vaiana. The two classes of loops examined include: (i) symmetric loops with a temperature maximum at the top, but now having length L greater than the pressure scale height s_p, and (ii) loops which have a local temperature minimum at the top. For the first class we find new scaling laws, similar to those found by Rosner, Tucker, and Vaiana, which relate the base pressure p_0 and loop length to the base heating E_0, the heating deposition scale height s_H, and the pressure scale height: $T \approx 1.4 \times 10^{3} (p_0 L)^{0.33} \exp[-0.04 L (2/s_H + 1/s_p)]$ and $E_0 \approx 10^{5} p_0^{1.17} L^{-0.83} \exp[0.5 L (1/s_H - 1/s_p)]$. Loops for which L is greater than approximately 2 to 3 times the pressure scale height do not have stable solutions unless they have a temperature minimum at the top. Computed models with a temperature inversion at the top are allowed in a wider range of s_H values than are loops with T_max at the top. We discuss these results in relation to observations showing a dependence of prominence formation and stability on the state of evolution of magnetic structures, and we suggest a general scenario for the understanding of loop evolution from emergence in active regions through the large-scale structure phase to opening in coronal holes.
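
    The reconstructed scaling laws above can be evaluated numerically for representative loop parameters. The sketch below simply codes the two expressions; the input values (cgs units, a 10^10 cm loop with 1 dyn/cm^2 base pressure and assumed scale heights) are illustrative, not values taken from the paper.

      import numpy as np

      def loop_scaling(p0, L, s_H, s_p):
          """Evaluate the generalized loop scaling laws quoted in the abstract (cgs units)."""
          T_max = 1.4e3 * (p0 * L) ** 0.33 * np.exp(-0.04 * L * (2.0 / s_H + 1.0 / s_p))
          E_0 = 1e5 * p0 ** 1.17 * L ** -0.83 * np.exp(0.5 * L * (1.0 / s_H - 1.0 / s_p))
          return T_max, E_0

      # Illustrative values: a 10^10 cm loop, base pressure 1 dyn/cm^2,
      # heating scale height 10^10 cm, pressure scale height 5x10^9 cm.
      T_max, E_0 = loop_scaling(p0=1.0, L=1e10, s_H=1e10, s_p=5e9)
      print(f"T_max ~ {T_max:.2e} K,  E_0 ~ {E_0:.2e} erg cm^-3 s^-1")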

  9. Laboratory validation of a sparse aperture image quality model

    NASA Astrophysics Data System (ADS)

    Salvaggio, Philip S.; Schott, John R.; McKeown, Donald M.

    2015-09-01

    The majority of image quality studies in the field of remote sensing have been performed on systems with conventional aperture functions. These systems have well-understood image quality tradeoffs, characterized by the General Image Quality Equation (GIQE). Advanced, next-generation imaging systems present challenges to both post-processing and image quality prediction. Examples include sparse apertures, synthetic apertures, coded apertures and phase elements. As a result of the non-conventional point spread functions of these systems, post-processing becomes a critical step in the imaging process and artifacts arise that are more complicated than simple edge overshoot. Previous research at the Rochester Institute of Technology's Digital Imaging and Remote Sensing Laboratory has resulted in a modeling methodology for sparse and segmented aperture systems, the validation of which will be the focus of this work. This methodology has predicted some unique post-processing artifacts that arise when a sparse aperture system with wavefront error is used over a large (panchromatic) spectral bandpass. Since these artifacts are unique to sparse aperture systems, they have not yet been observed in any real-world data. In this work, a laboratory setup and initial results for a model validation study will be described. Initial results will focus on the validation of spatial frequency response predictions and verification of post-processing artifacts. The goal of this study is to validate the artifact and spatial frequency response predictions of this model. This will allow model predictions to be used in image quality studies, such as aperture design optimization, and the signal-to-noise vs. post-processing artifact tradeoff resulting from choosing a panchromatic vs. multispectral system.
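
    A common first step in modeling the spatial frequency response of a sparse aperture is to compute the incoherent MTF as the normalized autocorrelation of the pupil function. The sketch below does this via FFTs for a hypothetical three-sub-aperture layout; the geometry is invented, and this is only the standard pupil-autocorrelation calculation such a model builds on, not the model described above.

      import numpy as np

      # Binary pupil mask for a hypothetical three-sub-aperture layout.
      N = 512
      y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
      centers, r = [(-0.25, 0.0), (0.25, 0.0), (0.0, 0.30)], 0.18
      pupil = np.zeros((N, N))
      for cx, cy in centers:
          pupil[(x - cx) ** 2 + (y - cy) ** 2 <= r ** 2] = 1.0

      # Incoherent MTF = normalized autocorrelation of the pupil
      # (computed here through the Wiener-Khinchin relation).
      otf = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(pupil)) ** 2)).real
      mtf = otf / otf.max()

      # Sparse apertures show a characteristic mid-frequency dip along an axis
      # joining two sub-apertures (frequency-bin indices below are illustrative).
      row = mtf[N // 2, N // 2:]
      print(f"low-frequency MTF ~ {row[10]:.3f}, mid-band minimum ~ {row[20:200].min():.3f}")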

  10. In-Drift Microbial Communities Model Validation Calculation

    SciTech Connect

    D. M. Jolley

    2001-10-31

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  11. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    SciTech Connect

    D.M. Jolley

    2001-12-18

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  12. In-Drift Microbial Communities Model Validation Calculations

    SciTech Connect

    D. M. Jolley

    2001-09-24

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000) which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 using its replacement DTN MO0106SPAIDM01.034 so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001) which includes controls for the management of electronic data.

  13. Methods for Geometric Data Validation of 3D City Models

    NASA Astrophysics Data System (ADS)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence their quality, plus the idea of quality itself is application dependent. Thus, concepts for definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point would be to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon level checks to validate the correctness of each polygon, i.e. closeness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges are detected, among additional criteria. Self-intersection might lead to different results, e.g. intersection points, lines or areas. Depending on the geometric constellation, they might represent gaps between bounding polygons of the solids, overlaps, or violations of the 2-manifoldness. Not least due to the floating point problem in digital numbers, tolerances must be considered in some algorithms, e.g. planarity and solid self-intersection. Effects of different tolerance values and their handling are discussed; recommendations for suitable values are given. The goal of the paper is to give a clear understanding of geometric validation in the context of 3D city models. This should also enable the data holder to get a better comprehension of the validation results and their consequences on the deployment fields of the validated data set.
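
    One of the polygon-level checks described above, planarity within a tolerance, can be sketched as a best-fit-plane test: fit a plane to the ring's vertices and flag the polygon if any vertex deviates from it by more than the tolerance. The sketch below uses an SVD plane fit; the tolerance value and example polygon are illustrative, not CityDoctor's settings.

      import numpy as np

      def is_planar(vertices, tol=0.01):
          """Check that all polygon vertices lie within `tol` of a best-fit plane."""
          pts = np.asarray(vertices, dtype=float)
          centroid = pts.mean(axis=0)
          # The plane normal is the singular vector of the smallest singular value.
          _, _, vt = np.linalg.svd(pts - centroid)
          distances = np.abs((pts - centroid) @ vt[-1])
          return distances.max() <= tol, distances.max()

      # A slightly warped quadrilateral (one vertex lifted out of plane).
      quad = [(0, 0, 0), (4, 0, 0), (4, 3, 0.05), (0, 3, 0)]
      ok, worst = is_planar(quad, tol=0.01)
      print(f"planar within tolerance: {ok} (max deviation {worst:.3f})")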

  14. Estimating the predictive validity of diabetic animal models in rosiglitazone studies.

    PubMed

    Varga, O E; Zsíros, N; Olsson, I A S

    2015-06-01

    For therapeutic studies, predictive validity of animal models - arguably the most important feature of animal models in terms of human relevance - can be calculated retrospectively by obtaining data on treatment efficacy from human and animal trials. Using rosiglitazone as a case study, we aim to determine the predictive validity of animal models of diabetes, by analysing which models perform most similarly to humans during rosiglitazone treatment in terms of changes in standard diabetes diagnosis parameters (glycosylated haemoglobin [HbA1c] and fasting glucose levels). A further objective of this paper was to explore the impact of four covariates on the predictive capacity: (i) diabetes induction method; (ii) drug administration route; (iii) sex of animals and (iv) diet during the experiments. Despite the variable consistency of animal species-based models with the human reference for glucose and HbA1c treatment effects, our results show that glucose and HbA1c treatment effects in rats agreed better with the expected values based on human data than in other species. Induction method was also found to be a substantial factor affecting animal model performance. The study concluded that regular reassessment of animal models can help to identify human relevance of each model and adapt research design for actual research goals. PMID:25786332

  15. A validated approach for modeling collapse of steel structures

    NASA Astrophysics Data System (ADS)

    Saykin, Vitaliy Victorovich

    A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during their lifetime, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state-of-the-art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models expressed as a function of triaxiality is employed. The models to determine fracture initiation, softening, and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are shown. The calibration is performed using a particle swarm optimization algorithm to establish accurate parameters when calibrated to circumferentially notched tensile coupons. It is shown that consistent, accurate predictions are attained using the chosen models. The variation of triaxiality in steel material during plastic hardening and softening is reported. The range of triaxiality in steel structures undergoing collapse is investigated in detail and the accuracy of the chosen finite element deletion approaches is discussed. This is done through validation of different structural components and structural frames undergoing severe fracture and collapse.
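    As a hedged illustration of the triaxiality-based reasoning described above, the sketch below computes stress triaxiality from a Cauchy stress tensor and accumulates damage toward element deletion. It is not the calibrated model from this work; the Johnson-Cook-style fracture-strain form and all coefficient values are placeholder assumptions.

    ```python
    import numpy as np

    def triaxiality(stress):
        """Stress triaxiality: mean (hydrostatic) stress over von Mises equivalent stress."""
        s = np.asarray(stress, dtype=float)      # 3x3 Cauchy stress tensor
        mean = np.trace(s) / 3.0
        dev = s - mean * np.eye(3)
        mises = np.sqrt(1.5 * np.sum(dev * dev))
        return mean / mises

    def fracture_strain(eta, d1=0.05, d2=3.44, d3=-2.12):
        """Johnson-Cook-style fracture strain vs. triaxiality; coefficients are illustrative."""
        return d1 + d2 * np.exp(d3 * eta)

    def update_damage(damage, d_plastic_strain, eta):
        """Accumulate damage; an element would be deleted once damage reaches 1."""
        return damage + d_plastic_strain / fracture_strain(eta)

    stress = np.array([[400.0, 0, 0], [0, 100.0, 0], [0, 0, 100.0]])  # MPa, hypothetical state
    eta = triaxiality(stress)
    damage = 0.0
    for _ in range(50):                          # 50 increments of 1% equivalent plastic strain
        damage = update_damage(damage, 0.01, eta)
    print(eta, damage, damage >= 1.0)            # flag for element deletion
    ```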

  16. Construct Validity of an Inanimate Training Model for Laparoscopic Appendectomy

    PubMed Central

    Sanchez-Ismayel, Alexis; Sanchez, Renata; Pena, Romina; Salamo, Oriana

    2013-01-01

    Background and Objective: The use of training models in laparoscopic surgery allows the surgical team to practice procedures in a safe environment. The aim of this study was to determine the capability of an inanimate laparoscopic appendectomy model to discriminate between different levels of surgical experience (construct validity). Methods: The performance of 3 groups with different levels of expertise in laparoscopic surgery (experts [Group A], intermediates [Group B], and novices [Group C]) was evaluated. The groups were instructed on the task to perform in the model using a video tutorial. Procedures were recorded in a digital format for later analysis using the Global Operative Assessment of Laparoscopic Skills (GOALS) score; procedure time was registered. The data were analyzed using the analysis of variance test. Results: Twelve subjects were evaluated, 4 in each group, using the GOALS score and time required to finish the task. Higher scores were observed in the expert group, followed by the intermediate and novice groups, with a statistically significant difference. Regarding procedure time, a significant difference was also found between the groups, with the experts having the shortest time. The proposed model is able to discriminate among individuals with different levels of expertise, indicating that the abilities that the model evaluates are relevant to the surgeon's performance. Conclusions: Construct validity for the inanimate full-task laparoscopic appendectomy training model was demonstrated. Therefore, it is a useful tool in the development and evaluation of the resident in training. PMID:24018084
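    The group comparison described above uses a one-way analysis of variance across the three experience levels. A minimal sketch of that test is shown below; the GOALS scores are hypothetical, not the study data.

    ```python
    from scipy import stats

    # Hypothetical GOALS scores for three groups of four subjects each (illustrative only).
    experts       = [22, 23, 24, 23]
    intermediates = [18, 19, 17, 18]
    novices       = [12, 13, 11, 12]

    f_stat, p_value = stats.f_oneway(experts, intermediates, novices)
    print(f"F = {f_stat:.1f}, p = {p_value:.4f}")  # a small p suggests the group means differ
    ```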

  17. Full-scale validation of a model of algal productivity.

    PubMed

    Béchet, Quentin; Shilton, Andy; Guieysse, Benoit

    2014-12-01

    While modeling algal productivity outdoors is crucial to assess the economic and environmental performance of full-scale cultivation, most of the models hitherto developed for this purpose have not been validated under fully relevant conditions, especially with regard to temperature variations. The objective of this study was to independently validate a model of algal biomass productivity accounting for both light and temperature and constructed using parameters experimentally derived from short-term indoor experiments. To do this, the accuracy of a model developed for Chlorella vulgaris was assessed against data collected from photobioreactors operated outdoors (New Zealand) over different seasons, years, and operating conditions (temperature-control/no temperature-control, batch, and fed-batch regimes). The model accurately predicted experimental productivities under all conditions tested, yielding an overall accuracy of 8.4% over 148 days of cultivation. For the purpose of assessing the feasibility of full-scale algal cultivation, the use of the productivity model was therefore shown to markedly reduce uncertainty in the cost of biofuel production while also eliminating uncertainties in water demand, a critical element of environmental impact assessments. Simulations at five climatic locations demonstrated that temperature control in outdoor photobioreactors would require tremendous amounts of energy without a considerable increase in algal biomass. Prior assessments neglecting the impact of temperature variations on algal productivity in photobioreactors may therefore be erroneous. PMID:25369326

  18. "INTERMED": a method to assess health service needs. II. Results on its validity and clinical use.

    PubMed

    Stiefel, F C; de Jonge, P; Huyse, F J; Guex, P; Slaets, J P; Lyons, J S; Spagnoli, J; Vannotti, M

    1999-01-01

    The validity and clinical use of a recently developed instrument to assess the health care needs of patients with a physical illness, called INTERMED, are investigated. The INTERMED combines data reflecting patients' biological, psychological, and social characteristics with information on health care utilization characteristics. An example of a patient population in which such an integral assessment can contribute to the appropriateness of care is patients with low back pain of degenerative or unknown origin. The validity and clinical usefulness of the INTERMED are supported if clinically relevant subgroups in this heterogeneous population can be identified and described based on their INTERMED scores. The INTERMED was utilized in a group of patients (N = 108) with low back pain who varied in the chronicity of complaints, functional status, and associated disability. All patients underwent a medical examination and responded to a battery of validated questionnaires assessing biological, psychological, and social aspects of their life. In addition, the patients were assessed with the INTERMED. It was studied whether clinically meaningful groups of patients could be formed based on their INTERMED scores; for this, a hierarchical cluster analysis was performed. In order to describe them clinically, the groups of patients were compared with the data from the questionnaires. The cluster analysis on the INTERMED scores revealed three distinguishable groups of patients. Comparison with the questionnaires assessing biological, psychological, and social aspects of disease showed that one group can be characterized as complex patients with chronic complaints and reduced capacity to work who apply for disability compensation. The other groups differed clearly with regard to chronicity, but also on other variables. By means of the INTERMED, clinically relevant groups of patients can be identified, which supports its use in clinical practice and as a method to describe case mix for scientific or health care policy purposes. In addition, the INTERMED is easy to implement in daily clinical practice and can help operationalize the biopsychosocial model of disease. More information on its validity in different patient populations is necessary. PMID:10068920
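    The subgrouping step described above relies on an agglomerative hierarchical cluster analysis of patient score vectors. The sketch below shows that step in generic form; the score matrix is synthetic and the choice of Ward linkage is an assumption, not taken from the paper.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Synthetic INTERMED-style score vectors, one row per patient (108 = 3 x 36).
    rng = np.random.default_rng(0)
    scores = np.vstack([rng.normal(loc, 1.0, size=(36, 4)) for loc in (2.0, 4.0, 6.0)])

    Z = linkage(scores, method="ward")                 # agglomerative hierarchical clustering
    labels = fcluster(Z, t=3, criterion="maxclust")    # cut the dendrogram into three clusters
    print(np.bincount(labels)[1:])                     # cluster sizes
    ```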

  19. Experimental Validation and Applications of a Fluid Infiltration Model

    PubMed Central

    Kao, Cindy S.; Hunt, James R.

    2010-01-01

    Horizontal infiltration experiments were performed to validate a plug flow model that minimizes the number of parameters that must be measured. Water and silicone oil at three different viscosities were infiltrated into glass beads, desert alluvium, and silica powder. Experiments were also performed with negative inlet heads on air-dried silica powder, and with water and oil infiltrating into initially water moist silica powder. Comparisons between the data and model were favorable in most cases, with predictions usually within 40% of the measured data. The model is extended to a line source and small areal source at the ground surface to analytically predict the shape of two-dimensional wetting fronts. Furthermore, a plug flow model for constant flux infiltration agrees well with field data and suggests that the proposed model for a constant-head boundary condition can be effectively used to predict wetting front movement at heterogeneous field sites if averaged parameter values are used. PMID:20428480

  20. Statistical validation of high-dimensional models of growing networks

    NASA Astrophysics Data System (ADS)

    Medo, Matúš

    2014-03-01

    The abundance of models of complex networks and the current insufficient validation standards make it difficult to judge which models are strongly supported by data and which are not. We focus here on likelihood maximization methods for models of growing networks with many parameters and compare their performance on artificial and real datasets. While high dimensionality of the parameter space harms the performance of direct likelihood maximization on artificial data, this can be improved by introducing a suitable penalization term. Likelihood maximization on real data shows that the presented approach is able to discriminate among available network models. To make large-scale datasets accessible to this kind of analysis, we propose a subset sampling technique and show that it yields substantial model evidence in a fraction of the time necessary for the analysis of the complete data.
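    To make the likelihood-maximization idea concrete, the sketch below estimates a single growth parameter (a preferential-attachment exponent) by maximizing the log-likelihood of observed attachment choices. This is a deliberately simplified one-parameter illustration; the paper's high-dimensional models and penalization term are not reproduced here, though a penalty would simply be added to the same log-likelihood.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def grow_network(n, alpha):
        """Each new node attaches to an existing node with probability ~ degree**alpha."""
        degrees = np.array([1.0, 1.0])               # start from a single edge
        choices = []
        for _ in range(n - 2):
            p = degrees**alpha / np.sum(degrees**alpha)
            t = rng.choice(len(degrees), p=p)
            choices.append((t, degrees.copy()))      # record the choice and the state
            degrees[t] += 1
            degrees = np.append(degrees, 1.0)
        return choices

    def log_likelihood(alpha, choices):
        """Log-likelihood of the observed attachment choices under exponent alpha."""
        ll = 0.0
        for t, degrees in choices:
            w = degrees**alpha
            ll += np.log(w[t] / w.sum())
        return ll

    choices = grow_network(2000, alpha=1.0)
    grid = np.linspace(0.5, 1.5, 41)
    best = max(grid, key=lambda a: log_likelihood(a, choices))
    print(best)   # should land near the true exponent of 1.0
    ```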

  1. Experimental Validation and Applications of a Fluid Infiltration Model.

    PubMed

    Kao, Cindy S; Hunt, James R

    2001-02-01

    Horizontal infiltration experiments were performed to validate a plug flow model that minimizes the number of parameters that must be measured. Water and silicone oil at three different viscosities were infiltrated into glass beads, desert alluvium, and silica powder. Experiments were also performed with negative inlet heads on air-dried silica powder, and with water and oil infiltrating into initially water moist silica powder. Comparisons between the data and model were favorable in most cases, with predictions usually within 40% of the measured data. The model is extended to a line source and small areal source at the ground surface to analytically predict the shape of two-dimensional wetting fronts. Furthermore, a plug flow model for constant flux infiltration agrees well with field data and suggests that the proposed model for a constant-head boundary condition can be effectively used to predict wetting front movement at heterogeneous field sites if averaged parameter values are used. PMID:20428480

  3. Cross-Cultural Validation of the Beck Depression Inventory-II across U.S. and Turkish Samples

    ERIC Educational Resources Information Center

    Canel-Cinarbas, Deniz; Cui, Ying; Lauridsen, Erica

    2011-01-01

    The purpose of this study was to test the Beck Depression Inventory-II (BDI-II) for factorial invariance across Turkish and U.S. college student samples. The results indicated that (a) a two-factor model has an adequate fit for both samples, thus providing evidence of configural invariance, and (b) there is a metric invariance but "no" sufficient…

  4. Calibrating and validating bacterial water quality models: a Bayesian approach.

    PubMed

    Gronewold, Andrew D; Qian, Song S; Wolpert, Robert L; Reckhow, Kenneth H

    2009-06-01

    Water resource management decisions often depend on mechanistic or empirical models to predict water quality conditions under future pollutant loading scenarios. These decisions, such as whether or not to restrict public access to a water resource area, may therefore vary depending on how models reflect process, observation, and analytical uncertainty and variability. Nonetheless, few probabilistic modeling tools have been developed which explicitly propagate fecal indicator bacteria (FIB) analysis uncertainty into predictive bacterial water quality model parameters and response variables. Here, we compare three approaches to modeling variability in two different FIB water quality models. We first calibrate a well-known first-order bacterial decay model using approaches ranging from ordinary least squares (OLS) linear regression to Bayesian Markov chain Monte Carlo (MCMC) procedures. We then calibrate a less frequently used empirical bacterial die-off model using the same range of procedures (and the same data). Finally, we propose an innovative approach to evaluating the predictive performance of each calibrated model using a leave-one-out cross-validation procedure and assessing the probability distributions of the resulting Bayesian posterior predictive p-values. Our results suggest that different approaches to acknowledging uncertainty can lead to discrepancies between parameter mean and variance estimates and predictive performance for the same FIB water quality model. Our results also suggest that models without a bacterial kinetics parameter related to the rate of decay may more appropriately reflect FIB fate and transport processes, regardless of how variability and uncertainty are acknowledged. PMID:19395060
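    To illustrate the kind of calibration being compared above, the sketch below fits a first-order decay model to synthetic fecal indicator bacteria data, first by ordinary least squares on log-concentration and then with a minimal Metropolis sampler. The data, the lognormal error assumption, and the flat priors are illustrative assumptions, not the paper's model or dataset.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic concentrations following first-order decay C(t) = C0 * exp(-k t).
    t = np.linspace(0, 5, 20)
    true_C0, true_k = 1.0e4, 0.8
    C_obs = true_C0 * np.exp(-true_k * t) * rng.lognormal(0.0, 0.2, size=t.size)

    # 1) Ordinary least squares on log C = log C0 - k t.
    A = np.column_stack([np.ones_like(t), -t])
    coef, *_ = np.linalg.lstsq(A, np.log(C_obs), rcond=None)
    print("OLS estimate:", np.exp(coef[0]), coef[1])

    # 2) Minimal Metropolis sampler for (log C0, k), Gaussian errors on log C, flat priors.
    def log_post(theta):
        logC0, k = theta
        if k <= 0:
            return -np.inf
        resid = np.log(C_obs) - (logC0 - k * t)
        return -0.5 * np.sum(resid**2) / 0.2**2

    theta, samples = np.array([np.log(1e4), 1.0]), []
    for _ in range(20000):
        prop = theta + rng.normal(0, 0.05, size=2)
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        samples.append(theta.copy())
    samples = np.array(samples[5000:])
    print("MCMC estimate:", np.exp(samples[:, 0].mean()), samples[:, 1].mean())
    ```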

  5. PASTIS: Bayesian extrasolar planet validation - II. Constraining exoplanet blend scenarios using spectroscopic diagnoses

    NASA Astrophysics Data System (ADS)

    Santerne, A.; Díaz, R. F.; Almenara, J.-M.; Bouchy, F.; Deleuil, M.; Figueira, P.; Hébrard, G.; Moutou, C.; Rodionov, S.; Santos, N. C.

    2015-08-01

    The statistical validation of transiting exoplanets proved to be an efficient technique to secure the nature of small exoplanet signals which cannot be established by purely spectroscopic means. However, the spectroscopic diagnoses are providing us with useful constraints on the presence of blended stellar contaminants. In this paper, we present how a contaminating star affects the measurements of the various spectroscopic diagnoses as a function of the parameters of the target and contaminating stars using the model implemented into the PASTIS planet-validation software. We find particular cases for which a blend might produce a large radial velocity signal but no bisector variation. It might also produce a bisector variation anticorrelated with the radial velocity one, as in the case of stellar spots. In those cases, the full width at half-maximum variation provides complementary constraints. These results can be used to constrain blend scenarios for transiting planet candidates or radial velocity planets. We review all the spectroscopic diagnoses reported in the literature so far, especially the ones to monitor the line asymmetry. We estimate their uncertainty and compare their sensitivity to blends. Based on that, we recommend the use of BiGauss which is the most sensitive diagnosis to monitor line-profile asymmetry. In this paper, we also investigate the sensitivity of the radial velocities to constrain blend scenarios and develop a formalism to estimate the level of dilution of a blended signal. Finally, we apply our blend model to re-analyse the spectroscopic diagnoses of HD 16702, an unresolved face-on binary which exhibits bisector variations.

  6. Infrared ship signature prediction, model validation, and sky radiance

    NASA Astrophysics Data System (ADS)

    Neele, Filip

    2005-05-01

    The increased interest during the last decade in the infrared signature of (new) ships results in a clear need for validated infrared signature prediction codes. This paper presents the results of comparing an in-house developed signature prediction code with measurements made in the 3-5 µm band in both clear-sky and overcast conditions. During the measurements, sensors measured the short-wave and long-wave irradiation from sun and sky, which forms a significant part of the heat flux exchange between ship and environment, but is linked only weakly to the standard meteorological data measured routinely (e.g., air temperature, relative humidity, wind speed, pressure, cloud cover). The aim of the signature model validation is to check the heat flux balance algorithm in the model and the representation of the target. Any uncertainties in the prediction of the radiative properties of the environment (which are usually computed with a code like MODTRAN) must be minimised. It is shown that for the validation of signature prediction models the standard meteorological data are insufficient for the computation of sky radiance and solar irradiation with atmospheric radiation models (MODTRAN). Comparisons between model predictions and data are shown for predictions computed with and without global irradiation data. The results underline the necessity of measuring the irradiation (from sun, sky, sea or land environment) on the target during a signature measurement trial. Only then does the trial produce the data needed as a reference for the computation of the infrared signature of the ship in conditions other than those during the trial.

  7. Robust design and model validation of nonlinear compliant micromechanisms.

    SciTech Connect

    Howell, Larry L.; Baker, Michael Sean; Wittwer, Jonathan W.

    2005-02-01

    Although the use of compliance or elastic flexibility in microelectromechanical systems (MEMS) helps eliminate friction, wear, and backlash, compliant MEMS are known to be sensitive to variations in material properties and feature geometry, resulting in large uncertainties in performance. This paper proposes an approach for design stage uncertainty analysis, model validation, and robust optimization of nonlinear MEMS to account for critical process uncertainties including residual stress, layer thicknesses, edge bias, and material stiffness. A fully compliant bistable micromechanism (FCBM) is used as an example, demonstrating that the approach can be used to handle complex devices involving nonlinear finite element models. The general shape of the force-displacement curve is validated by comparing the uncertainty predictions to measurements obtained from in situ force gauges. A robust design is presented, where simulations show that the estimated force variation at the point of interest may be reduced from ±47 µN to ±3 µN. The reduced sensitivity to process variations is experimentally validated by measuring the second stable position at multiple locations on a wafer.

  8. Calibration and validation of DRAINMOD to model bioretention hydrology

    NASA Astrophysics Data System (ADS)

    Brown, R. A.; Skaggs, R. W.; Hunt, W. F.

    2013-04-01

    Previous field studies have shown that the hydrologic performance of bioretention cells varies greatly because of factors such as underlying soil type, physiographic region, drainage configuration, surface storage volume, drainage area to bioretention surface area ratio, and media depth. To more accurately describe bioretention hydrologic response, a long-term hydrologic model that generates a water balance is needed. Some current bioretention models lack the ability to perform long-term simulations and others have never been calibrated from field monitored bioretention cells with underdrains. All peer-reviewed models lack the ability to simultaneously perform both of the following functions: (1) model an internal water storage (IWS) zone drainage configuration and (2) account for soil-water content using the soil-water characteristic curve. DRAINMOD, a widely accepted agricultural drainage model, was used to simulate the hydrologic response of runoff entering a bioretention cell. The concepts of water movement in bioretention cells are very similar to those of agricultural fields with drainage pipes, so many bioretention design specifications corresponded directly to DRAINMOD inputs. Detailed hydrologic measurements were collected from two bioretention field sites in Nashville and Rocky Mount, North Carolina, to calibrate and test the model. Each field site had two sets of bioretention cells with varying media depths, media types, drainage configurations, underlying soil types, and surface storage volumes. After 12 months, one of these characteristics was altered - surface storage volume at Nashville and IWS zone depth at Rocky Mount. At Nashville, during the second year (post-repair period), the Nash-Sutcliffe coefficients for drainage and exfiltration/evapotranspiration (ET) both exceeded 0.8 during the calibration and validation periods. During the first year (pre-repair period), the Nash-Sutcliffe coefficients for drainage, overflow, and exfiltration/ET ranged from 0.6 to 0.9 during both the calibration and validation periods. The bioretention cells at Rocky Mount included an IWS zone. For both the calibration and validation periods, the modeled volume of exfiltration/ET was within 1% and 5% of the estimated volume for the cells with sand (Sand cell) and sandy clay loam (SCL cell) underlying soils, respectively. Nash-Sutcliffe coefficients for the SCL cell during both the calibration and validation periods were 0.92.
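    The goodness-of-fit statistic reported above is the Nash-Sutcliffe efficiency, computed as one minus the ratio of the residual variance to the variance of the observations. A minimal sketch follows; the drainage volumes are hypothetical, not the Nashville or Rocky Mount data.

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 is a perfect fit."""
        obs = np.asarray(observed, dtype=float)
        sim = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    obs = np.array([40.0, 55.0, 32.0, 80.0, 61.0, 20.0])   # hypothetical monthly drainage, mm
    sim = np.array([42.0, 50.0, 35.0, 75.0, 65.0, 22.0])
    print(round(nash_sutcliffe(obs, sim), 2))
    ```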

  9. VALIDATION OF COMPUTER MODELS FOR RADIOACTIVE MATERIAL SHIPPING PACKAGES

    SciTech Connect

    Gupta, N; Shine, G; Tuckfield, C

    2007-05-07

    Computer models are abstractions of physical reality and are routinely used for solving practical engineering problems. These models are prepared using large complex computer codes that are widely used in the industry. Patran/Thermal is one such finite element computer code that is used for solving complex heat transfer problems in the industry. Finite element models of complex problems involve making assumptions and simplifications that depend upon the complexity of the problem and upon the judgment of the analysts. The assumptions involve mesh size, solution methods, convergence criteria, material properties, boundary conditions, etc., and could vary from analyst to analyst. All of these assumptions are, in fact, candidates for a purposeful and intended effort to systematically vary each in connection with the others to determine their relative importance or expected overall effect on the modeled outcome. These kinds of studies derive from the methods of statistical science and are based on the principles of experimental design. These, as all computer models, must be validated to make sure that the output from such an abstraction represents reality [1,2]. A new nuclear material packaging design, called 9977, which is undergoing a certification design review, is used to assess the capability of the Patran/Thermal computer model to simulate the 9977 thermal response. The computer model for the 9977 package is validated by comparing its output with the test data collected from an actual thermal test performed on a full-size 9977 package. Inferences are drawn by performing statistical analyses on the residuals (test data minus model predictions).
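    The validation step above amounts to examining the residuals between measured and predicted temperatures. A minimal sketch of such a residual analysis is shown below; the temperature values are illustrative, not the 9977 test data.

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative thermocouple readings and model predictions (deg C).
    measured  = np.array([121.0, 134.5, 140.2, 152.8, 160.1, 171.6])
    predicted = np.array([119.4, 133.0, 142.1, 150.5, 162.3, 170.0])

    residuals = measured - predicted
    t_stat, p_value = stats.ttest_1samp(residuals, popmean=0.0)   # test for zero-mean residuals
    print(residuals.mean(), residuals.std(ddof=1), p_value)
    ```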

  10. Organic acid modeling and model validation: Workshop summary. Final report

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9-10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled "Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources." The workshop was attended by a team of internationally-recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically-selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  11. Organic acid modeling and model validation: Workshop summary

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9-10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled "Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources." The workshop was attended by a team of internationally-recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically-selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  12. Validation and Verification with Applications to a Kinetic Global Model

    NASA Astrophysics Data System (ADS)

    Verboncoeur, J. P.

    2014-10-01

    As scientific software matures, verification, validation, benchmarking, and error estimation are becoming increasingly important to ensure predictable operation. Having well-described and consistent data is critical for consistent results. This presentation briefly addresses the motivation for V&V, the history and goals of the workshop series. A roadmap of the current workshop is presented. Finally, examples of V&V are applied to a novel kinetic global model for a series of low temperature plasma problems ranging from verification of specific rate equations to benchmarks and validation with other codes and experimental data for Penning breakdown and hydrocarbon plasmas. The results are included in the code release to ensure repeatability following code modifications. In collaboration with G. Parsey, J. Kempf, and A. Christlieb, Michigan State University. This work is supported in part by a U.S. Air Force Office of Scientific Research Basic Research Initiative and a Michigan State University Strategic Partnership grant.

  13. A turbulence model for iced airfoils and its validation

    NASA Technical Reports Server (NTRS)

    Shin, Jaiwon; Chen, Hsun H.; Cebeci, Tuncer

    1992-01-01

    A turbulence model based on the extension of the algebraic eddy viscosity formulation of Cebeci and Smith developed for two-dimensional flows over smooth and rough surfaces is described for iced airfoils and validated for computed ice shapes obtained for a range of total temperatures varying from 28 to -15 F. The validation is made with an interactive boundary layer method which uses a panel method to compute the inviscid flow and an inverse finite difference boundary layer method to compute the viscous flow. The interaction between inviscid and viscous flows is established by the use of the Hilbert integral. The calculated drag coefficients compare well with recent experimental data taken at the NASA-Lewis Icing Research Tunnel (IRT) and show that, in general, the drag increase due to ice accretion can be predicted well and efficiently.

  15. Validation of chemistry models employed in a particle simulation method

    NASA Technical Reports Server (NTRS)

    Haas, Brian L.; Mcdonald, Jeffrey D.

    1991-01-01

    The chemistry models employed in a statistical particle simulation method, as implemented in the Intel iPSC/860 multiprocessor computer, are validated and applied. Chemical relaxation of five-species air in these reservoirs involves 34 simultaneous dissociation, recombination, and atomic-exchange reactions. The reaction rates employed in the analytic solutions are obtained from Arrhenius experimental correlations as functions of temperature for adiabatic gas reservoirs in thermal equilibrium. Favorable agreement with the analytic solutions validates the simulation when applied to relaxation of O2 toward equilibrium in reservoirs dominated by dissociation and recombination, respectively, and when applied to relaxation of air in the temperature range 5000 to 30,000 K. A flow of O2 over a circular cylinder at high Mach number is simulated to demonstrate application of the method to multidimensional reactive flows.
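    The reaction rates referenced above follow Arrhenius-type temperature correlations. The sketch below evaluates a modified Arrhenius rate coefficient over the quoted temperature range; the parameter values are illustrative placeholders, not the correlations used in the study.

    ```python
    import numpy as np

    def arrhenius(T, A, n, Ea):
        """Modified Arrhenius rate coefficient k(T) = A * T**n * exp(-Ea / (R * T))."""
        R = 8.314  # J/(mol K)
        return A * T**n * np.exp(-Ea / (R * T))

    T = np.array([5000.0, 10000.0, 20000.0, 30000.0])   # K, spanning the range cited above
    print(arrhenius(T, A=2.0e15, n=-1.0, Ea=4.1e5))     # placeholder O2-dissociation-like values
    ```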

  16. Validating and Verifying Biomathematical Models of Human Fatigue

    NASA Technical Reports Server (NTRS)

    Martinez, Siera Brooke; Quintero, Luis Ortiz; Flynn-Evans, Erin

    2015-01-01

    Airline pilots experience acute and chronic sleep deprivation, sleep inertia, and circadian desynchrony due to the need to schedule flight operations around the clock. This sleep loss and circadian desynchrony gives rise to cognitive impairments, reduced vigilance and inconsistent performance. Several biomathematical models, based principally on patterns observed in circadian rhythms and homeostatic drive, have been developed to predict a pilot's levels of fatigue or alertness. These models allow the Federal Aviation Administration (FAA) and commercial airlines to make decisions about pilot capabilities and flight schedules. Although these models have been validated in a laboratory setting, they have not been thoroughly tested in operational environments where uncontrolled factors, such as environmental sleep disrupters, caffeine use and napping, may impact actual pilot alertness and performance. We will compare the predictions of three prominent biomathematical fatigue models (McCauley Model, Harvard Model, and the privately-sold SAFTE-FAST Model) to actual measures of alertness and performance. We collected sleep logs, movement and light recordings, psychomotor vigilance task (PVT) data, and urinary melatonin (a marker of circadian phase) from 44 pilots in a short-haul commercial airline over one month. We will statistically compare the model predictions to lapses on the PVT and circadian phase. We will calculate the sensitivity and specificity of each model prediction under different scheduling conditions. Our findings will aid operational decision-makers in determining the reliability of each model under real-world scheduling situations.
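    The sensitivity and specificity mentioned above compare a binary model prediction of impairment against an observed outcome such as a PVT lapse. A minimal sketch follows; the example arrays are hypothetical, not the study data.

    ```python
    import numpy as np

    def sensitivity_specificity(predicted_impaired, observed_lapse):
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) for boolean arrays."""
        pred = np.asarray(predicted_impaired, dtype=bool)
        obs = np.asarray(observed_lapse, dtype=bool)
        tp = np.sum(pred & obs)
        tn = np.sum(~pred & ~obs)
        fp = np.sum(pred & ~obs)
        fn = np.sum(~pred & obs)
        return tp / (tp + fn), tn / (tn + fp)

    pred = [True, True, False, False, True, False, True, False]   # model flags impairment
    obs  = [True, False, False, False, True, True, True, False]   # observed PVT lapse
    print(sensitivity_specificity(pred, obs))
    ```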

  17. Validation of coupled atmosphere-fire behavior models

    SciTech Connect

    Bossert, J.E.; Reisner, J.M.; Linn, R.R.; Winterkamp, J.L.; Schaub, R.; Riggan, P.J.

    1998-12-31

    Recent advances in numerical modeling and computer power have made it feasible to simulate the dynamical interaction and feedback between the heat and turbulence induced by wildfires and the local atmospheric wind and temperature fields. At Los Alamos National Laboratory, the authors have developed a modeling system that includes this interaction by coupling a high resolution atmospheric dynamics model, HIGRAD, with a fire behavior model, BEHAVE, to predict the spread of wildfires. The HIGRAD/BEHAVE model is run at very high resolution to properly resolve the fire/atmosphere interaction. At present, these coupled wildfire model simulations are computationally intensive. The additional complexity of these models requires sophisticated methods for assuring their reliability in real world applications. With this in mind, a substantial part of the research effort is directed at model validation. Several instrumented prescribed fires have been conducted with multi-agency support and participation from chaparral, marsh, and scrub environments in coastal areas of Florida and inland California. In this paper, the authors first describe the data required to initialize the components of the wildfire modeling system. Then they present results from one of the Florida fires, and discuss a strategy for further testing and improvement of coupled weather/wildfire models.

  18. Leading compounds for the validation of animal models of psychopathology.

    PubMed

    Micale, Vincenzo; Kucerova, Jana; Sulcova, Alexandra

    2013-10-01

    Modelling of complex psychiatric disorders, e.g., depression and schizophrenia, in animals is a major challenge, since they are characterized by certain disturbances in functions that are absolutely unique to humans. Furthermore, we still have not identified the genetic and neurobiological mechanisms, nor do we know precisely the circuits in the brain that function abnormally in mood and psychotic disorders. Consequently, the pharmacological treatments used are mostly variations on a theme that was started more than 50 years ago. Thus, progress in novel drug development with improved therapeutic efficacy would benefit greatly from improved animal models. Here, we review the available animal models of depression and schizophrenia and focus on the way that they respond to various types of potential candidate molecules, such as novel antidepressant or antipsychotic drugs, as an index of predictive validity. We conclude that the generation of convincing and useful animal models of mental illnesses could be a bridge to success in drug discovery. PMID:23942897

  19. Atmospheric forcing validation for modeling the central Arctic

    NASA Astrophysics Data System (ADS)

    Makshtas, A.; Atkinson, D.; Kulakov, M.; Shutilin, S.; Krishfield, R.; Proshutinsky, A.

    2007-10-01

    We compare daily data from the National Center for Atmospheric Research and National Centers for Environmental Prediction "Reanalysis 1" project with observational data obtained from the North Pole drifting stations in order to validate the atmospheric forcing data used in coupled ice-ocean models. This analysis is conducted to assess the role of errors associated with model forcing before performing model verifications against observed ocean variables. Our analysis shows an excellent agreement between observed and reanalysis sea level pressures and a relatively good correlation between observed and reanalysis surface winds. The observed temperature is in good agreement with reanalysis data only in winter. Specific air humidity and cloudiness are not reproduced well by the reanalysis and are not recommended for model forcing. An example sensitivity study demonstrates that the equilibrium ice thickness obtained using NP forcing is twice that obtained using reanalysis forcing.
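    The station-versus-reanalysis comparison described above reduces to simple agreement statistics such as bias, root-mean-square error, and correlation. A minimal sketch follows; the temperature values are invented for illustration, not NP or reanalysis data.

    ```python
    import numpy as np

    obs  = np.array([-32.1, -30.5, -28.9, -31.2, -27.4, -25.8, -29.3])   # station 2 m T, deg C
    rean = np.array([-30.0, -29.1, -27.5, -29.8, -26.0, -24.1, -28.2])   # reanalysis at same point

    bias = np.mean(rean - obs)                      # mean difference (reanalysis - observed)
    rmse = np.sqrt(np.mean((rean - obs) ** 2))      # root-mean-square error
    r = np.corrcoef(obs, rean)[0, 1]                # linear correlation coefficient
    print(f"bias={bias:.2f}, rmse={rmse:.2f}, r={r:.2f}")
    ```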

  20. Characterization and Validation of a Canine Pruritic Model.

    PubMed

    Aberg, Gunnar A K; Arulnesan, Nada; Bolger, Gordon T; Ciofalo, Vincent B; Pucaj, Kresimir

    2015-08-01

    The mechanisms mediating canine pruritus are poorly understood, with few models available due to limited methods for inducing pruritus in dogs. Chloroquine (CQ) is a widely used antimalarial drug that causes pruritus in humans and mice. We have developed a canine model of pruritus in which CQ reliably induced pruritus in all dogs tested following intravenous administration. This model is presently being used to test the antipruritic activity of drug candidate molecules. The model has been validated in a blinded cross-over study in eight beagle dogs using the reference standards oclacitinib and prednisolone, and has been used to test a new compound, norketotifen. All compounds reduced CQ-induced pruritus in the dog. The sensitivity of the model was demonstrated using norketotifen, which, at three dose levels, dose-dependently inhibited scratching events compared with placebo. PMID:26220424

  1. Validation of High Displacement Piezoelectric Actuator Finite Element Models

    NASA Technical Reports Server (NTRS)

    Taleghani, B. K.

    2000-01-01

    The paper presents the results obtained by using NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) finite element codes to predict doming of the THUNDER piezoelectric actuators during the manufacturing process and subsequent straining due to an applied input voltage. To effectively use such devices in engineering applications, modeling and characterization are essential. Length, width, dome height, and thickness are important parameters for users of such devices. Therefore, finite element models were used to assess the effects of these parameters. NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) used different methods for modeling piezoelectric effects. In NASTRAN(Registered Trademark), a thermal analogy was used to represent voltage at nodes as equivalent temperatures, while ANSYS(Registered Trademark) processed the voltage directly using piezoelectric finite elements. The results of the finite element models were validated by using the experimental results.

  2. MODEL VALIDATION FOR A NONINVASIVE ARTERIAL STENOSIS DETECTION PROBLEM

    PubMed Central

    BANKS, H. THOMAS; HU, SHUHUA; KENZ, ZACKARY R.; KRUSE, CAROLA; SHAW, SIMON; WHITEMAN, JOHN; BREWIN, MARK P.; GREENWALD, STEPHEN E.; BIRCH, MALCOLM J.

    2014-01-01

    A current thrust in medical research is the development of a non-invasive method for detection, localization, and characterization of an arterial stenosis (a blockage or partial blockage in an artery). A method has been proposed to detect shear waves in the chest cavity which have been generated by disturbances in the blood flow resulting from a stenosis. In order to develop this methodology further, we use one-dimensional shear wave experimental data from novel acoustic phantoms to validate a corresponding viscoelastic mathematical model. We estimate model parameters which give a good fit (in a sense to be precisely defined) to the experimental data, and use asymptotic error theory to provide confidence intervals for parameter estimates. Finally, since a robust error model is necessary for accurate parameter estimates and confidence analysis, we include a comparison of absolute and relative models for measurement error. PMID:24506547
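    The parameter-estimation and confidence-interval workflow described above can be sketched with a generic nonlinear least-squares fit whose asymptotic covariance supplies the intervals. The model below is a surrogate damped oscillation, not the paper's viscoelastic shear-wave model, and the data are synthetic.

    ```python
    import numpy as np
    from scipy import optimize, stats

    def model(t, a, tau, f):
        """Surrogate signal: a decaying cosine."""
        return a * np.exp(-t / tau) * np.cos(2 * np.pi * f * t)

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 200)
    y = model(t, 1.0, 0.3, 5.0) + rng.normal(0, 0.05, t.size)

    popt, pcov = optimize.curve_fit(model, t, y, p0=[0.8, 0.2, 4.0])
    se = np.sqrt(np.diag(pcov))                        # asymptotic standard errors
    tval = stats.t.ppf(0.975, df=t.size - len(popt))   # 95% two-sided critical value
    for name, p, s in zip(("a", "tau", "f"), popt, se):
        print(f"{name} = {p:.3f} +/- {tval * s:.3f}")
    ```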

  3. Quantitative impedance measurements for eddy current model validation

    NASA Astrophysics Data System (ADS)

    Khan, T. A.; Nakagawa, N.

    2000-05-01

    This paper reports on a series of laboratory-based impedance measurement data, collected by the use of a quantitatively accurate, mechanically controlled measurement station. The purpose of the measurement is to validate a BEM-based eddy current model against experiment. We have therefore selected two "validation probes," which are both split-D differential probes. Their internal structures and dimensions are extracted from x-ray CT scan data, and thus known within the measurement tolerance. A series of measurements was carried out, using the validation probes and two Ti-6Al-4V block specimens, one containing two 1-mm long fatigue cracks, and the other containing six EDM notches of a range of sizes. A motor-controlled XY scanner performed raster scans over the cracks, with the probe riding on the surface with a spring-loaded mechanism to maintain the lift off. Both an impedance analyzer and a commercial EC instrument were used in the measurement. The probes were driven in both differential and single-coil modes for the specific purpose of model validation. The differential measurements were done exclusively by the eddyscope, while the single-coil data were taken with both the impedance analyzer and the eddyscope. From the single-coil measurements, we obtained the transfer function to translate the voltage output of the eddyscope into impedance values, and then used it to translate the differential measurement data into impedance results. The presentation will highlight the schematics of the measurement procedure, representative raw data, an explanation of the data post-processing procedure, and a series of resulting 2D flaw impedance results. A noise estimate will also be given, in order to quantify the accuracy of these measurements and to be used in probability-of-detection estimation. This work was supported by the NSF Industry/University Cooperative Research Program.

  4. Coupled Thermal-Chemical-Mechanical Modeling of Validation Cookoff Experiments

    SciTech Connect

    ERIKSON,WILLIAM W.; SCHMITT,ROBERT G.; ATWOOD,A.I.; CURRAN,P.D.

    2000-11-27

    The cookoff of energetic materials involves the combined effects of several physical and chemical processes. These processes include heat transfer, chemical decomposition, and mechanical response. The interaction and coupling between these processes influence both the time-to-event and the violence of reaction. The prediction of the behavior of explosives during cookoff, particularly with respect to reaction violence, is a challenging task. To this end, a joint DoD/DOE program has been initiated to develop models for cookoff, and to perform experiments to validate those models. In this paper, a series of cookoff analyses are presented and compared with data from a number of experiments for the aluminized, RDX-based, Navy explosive PBXN-109. The traditional thermal-chemical analysis is used to calculate time-to-event and characterize the heat transfer and boundary conditions. A reaction mechanism based on Tarver and McGuire's work on RDX [2] was adjusted to match the spherical one-dimensional time-to-explosion data. The predicted time-to-event using this reaction mechanism compares favorably with the validation tests. Coupled thermal-chemical-mechanical analysis is used to calculate the mechanical response of the confinement and the energetic material state prior to ignition. The predicted state of the material includes the temperature, stress-field, porosity, and extent of reaction. There is little experimental data for comparison to these calculations. The hoop strain in the confining steel tube gives an estimation of the radial stress in the explosive. The inferred pressure from the measured hoop strain and calculated radial stress agree qualitatively. However, validation of the mechanical response model and the chemical reaction mechanism requires more data. A post-ignition burn dynamics model was applied to calculate the confinement dynamics. The burn dynamics calculations suffer from a lack of characterization of the confinement for the flaw-dominated failure mode experienced in the tests. High-pressure burning rates are needed for more detailed post-ignition studies. Sub-models for chemistry, mechanical response and burn dynamics need to be validated against data from less complex experiments. The sub-models can then be used in integrated analysis for comparison with experimental data taken during integrated tests.

  5. Development and Validation of a 3-Dimensional CFB Furnace Model

    NASA Astrophysics Data System (ADS)

    Vepsäläinen, Ari; Myöhänen, Kari; Hyppänen, Timo; Leino, Timo; Tourunen, Antti

    At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process regarding solid mixing, combustion, emission formation and heat transfer. Results of laboratory and pilot scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results in industrial-scale CFB boilers, including furnace profile measurements, are carried out simultaneously with the development of 3-dimensional process modeling, which provides a chain of knowledge that is utilized as feedback for phenomenon research. Knowledge gathered by model validation studies and up-to-date parameter databases are utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to modeling of combustion and formation of char and volatiles of various fuel types in CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and bio-fuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window into fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace, and are used together with lateral temperature profiles at the bed and in the upper parts of the furnace to determine solid mixing and combustion model parameters. Modeling of char- and volatile-based formation of NO profiles is followed by analysis of the oxidizing and reducing regions formed by the lower furnace design and the mixing characteristics of fuel and combustion air, which affect the NO furnace profile through reduction and volatile-nitrogen reactions. This paper presents a CFB process analysis focused on combustion and NO profiles in pilot and industrial scale bituminous coal combustion.

  6. Validation of thermal models for a prototypical MEMS thermal actuator.

    SciTech Connect

    Gallis, Michail A.; Torczynski, John Robert; Piekos, Edward Stanley; Serrano, Justin Raymond; Gorby, Allen D.; Phinney, Leslie Mary

    2008-09-01

    This report documents technical work performed to complete the ASC Level 2 Milestone 2841: validation of thermal models for a prototypical MEMS thermal actuator. This effort requires completion of the following task: the comparison between calculated and measured temperature profiles of a heated stationary microbeam in air. Such heated microbeams are prototypical structures in virtually all electrically driven microscale thermal actuators. This task is divided into four major subtasks. (1) Perform validation experiments on prototypical heated stationary microbeams in which material properties such as thermal conductivity and electrical resistivity are measured if not known and temperature profiles along the beams are measured as a function of electrical power and gas pressure. (2) Develop a noncontinuum gas-phase heat-transfer model for typical MEMS situations including effects such as temperature discontinuities at gas-solid interfaces across which heat is flowing, and incorporate this model into the ASC FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (3) Develop a noncontinuum solid-phase heat transfer model for typical MEMS situations including an effective thermal conductivity that depends on device geometry and grain size, and incorporate this model into the FEM heat-conduction code Calore to enable it to simulate these effects with good accuracy. (4) Perform combined gas-solid heat-transfer simulations using Calore with these models for the experimentally investigated devices, and compare simulation and experimental temperature profiles to assess model accuracy. These subtasks have been completed successfully, thereby completing the milestone task. Model and experimental temperature profiles are found to be in reasonable agreement for all cases examined. Modest systematic differences appear to be related to uncertainties in the geometric dimensions of the test structures and in the thermal conductivity of the polycrystalline silicon test structures, as well as uncontrolled nonuniform changes in this quantity over time and during operation.

  7. Validated models for predicting skin penetration from different vehicles.

    PubMed

    Ghafourian, Taravat; Samaras, Eleftherios G; Brooks, James D; Riviere, Jim E

    2010-12-23

    The permeability of a penetrant through skin is controlled by the properties of the penetrant and the mixture components, which in turn relate to the molecular structures. Despite the well-investigated models for compound permeation through skin, the effect of vehicles and mixture components has not received much attention. The aim of this Quantitative Structure Activity Relationship (QSAR) study was to develop a statistically validated model for the prediction of skin permeability coefficients of compounds dissolved in different vehicles. Furthermore, the model can help with the elucidation of the mechanisms involved in the permeation process. With this goal in mind, the skin permeability of four different penetrants, each blended in 24 different solvent mixtures, was determined from diffusion cell studies using porcine skin. The resulting 96 kp values were combined with a previous dataset of 288 kp data points for QSAR analysis. Stepwise regression analysis was used for the selection of the most significant molecular descriptors and the development of several regression models. The selected QSAR employed two penetrant descriptors (the Wiener topological index and the total lipole moment), the boiling point of the solvent, and the difference between the melting point of the penetrant and the melting point of the solvent. The QSAR was validated internally, using a leave-many-out procedure, giving a mean absolute error of 0.454 for the log kp values of the test set. PMID:20816954
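    The modeling workflow above is, at its core, a multiple linear regression of log kp on molecular and vehicle descriptors, validated by repeatedly holding out blocks of compounds. The sketch below illustrates that workflow on synthetic data; the descriptor matrix, coefficients, and fold count are assumptions, and the stepwise descriptor selection used in the paper is not reproduced.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import KFold

    # Synthetic stand-ins for four descriptors (e.g., a topological index, lipole,
    # solvent boiling point, melting-point difference) and the log kp response.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(96, 4))
    logkp = X @ np.array([0.6, -0.4, 0.3, -0.2]) + rng.normal(0, 0.3, 96)

    # Leave-many-out style validation: hold out one block of compounds at a time.
    errors = []
    for train, test in KFold(n_splits=6, shuffle=True, random_state=0).split(X):
        fit = LinearRegression().fit(X[train], logkp[train])
        errors.append(mean_absolute_error(logkp[test], fit.predict(X[test])))
    print(np.mean(errors))   # mean absolute error of held-out log kp predictions
    ```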

  8. Volumetric Intraoperative Brain Deformation Compensation: Model Development and Phantom Validation

    PubMed Central

    DeLorenzo, Christine; Papademetris, Xenophon; Staib, Lawrence H.; Vives, Kenneth P.; Spencer, Dennis D.; Duncan, James S.

    2012-01-01

    During neurosurgery, nonrigid brain deformation may affect the reliability of tissue localization based on preoperative images. To provide accurate surgical guidance in these cases, preoperative images must be updated to reflect the intraoperative brain. This can be accomplished by warping these preoperative images using a biomechanical model. Due to the possible complexity of this deformation, intraoperative information is often required to guide the model solution. In this paper, a linear elastic model of the brain is developed to infer volumetric brain deformation associated with measured intraoperative cortical surface displacement. The developed model relies on known material properties of brain tissue, and does not require further knowledge about intraoperative conditions. To provide an initial estimation of volumetric model accuracy, as well as determine the model's sensitivity to the specified material parameters and surface displacements, a realistic brain phantom was developed. Phantom results indicate that the linear elastic model significantly reduced localization error due to brain shift, from >16 mm to under 5 mm, on average. In addition, though in vivo quantitative validation is necessary, preliminary application of this approach to images acquired during neocortical epilepsy cases confirms the feasibility of applying the developed model to in vivo data. PMID:22562728

  9. Rapid target gene validation in complex cancer mouse models using re-derived embryonic stem cells

    PubMed Central

    Huijbers, Ivo J; Bin Ali, Rahmen; Pritchard, Colin; Cozijnsen, Miranda; Kwon, Min-Chul; Proost, Natalie; Song, Ji-Ying; Vries, Hilda; Badhai, Jitendra; Sutherland, Kate; Krimpenfort, Paul; Michalak, Ewa M; Jonkers, Jos; Berns, Anton

    2014-01-01

    Human cancers modeled in Genetically Engineered Mouse Models (GEMMs) can provide important mechanistic insights into the molecular basis of tumor development and enable testing of new intervention strategies. The inherent complexity of these models, with often multiple modified tumor suppressor genes and oncogenes, has hampered their use as preclinical models for validating cancer genes and drug targets. In our newly developed approach for the fast generation of tumor cohorts we have overcome this obstacle, as exemplified for three GEMMs; two lung cancer models and one mesothelioma model. Three elements are central for this system; (i) the efficient derivation of authentic Embryonic Stem Cells (ESCs) from established GEMMs, (ii) the routine introduction of transgenes of choice in these GEMM-ESCs by Flp recombinase-mediated integration and (iii) the direct use of the chimeric animals in tumor cohorts. By applying stringent quality controls, the GEMM-ESC approach proves to be a reliable and effective method to speed up cancer gene assessment and target validation. As proof-of-principle, we demonstrate that MycL1 is a key driver gene in Small Cell Lung Cancer. PMID:24401838

  10. Cosmological parameter uncertainties from SALT-II type Ia supernova light curve models

    SciTech Connect

    Mosher, J.; Sako, M.; Guy, J.; Astier, P.; Betoule, M.; El-Hage, P.; Pain, R.; Regnault, N.; Marriner, J.; Biswas, R.; Kuhlmann, S.; Schneider, D. P.

    2014-09-20

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ∼120 low-redshift (z < 0.1) SNe Ia, ∼255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ∼290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input - w_recovered) ranging from -0.005 ± 0.012 to -0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is -0.014 ± 0.007.

  11. Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models

    NASA Astrophysics Data System (ADS)

    Mosher, J.; Guy, J.; Kessler, R.; Astier, P.; Marriner, J.; Betoule, M.; Sako, M.; El-Hage, P.; Biswas, R.; Pain, R.; Kuhlmann, S.; Regnault, N.; Frieman, J. A.; Schneider, D. P.

    2014-09-01

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ~120 low-redshift (z < 0.1) SNe Ia, ~255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ~290 SNLS SNe Ia (z <= 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w input - w recovered) ranging from -0.005 ± 0.012 to -0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is -0.014 ± 0.007.

  12. PARALLEL MEASUREMENT AND MODELING OF TRANSPORT IN THE DARHT II BEAMLINE ON ETA II

    SciTech Connect

    Chambers, F W; Raymond, B A; Falabella, S; Lee, B S; Richardson, R A; Weir, J T; Davis, H A; Schultze, M E

    2005-05-31

    To successfully tune the DARHT II transport beamline requires the close coupling of a model of the beam transport and the measurement of the beam observables as the beam conditions and magnet settings are varied. For the ETA II experiment using the DARHT II beamline components this was achieved using the SUICIDE (Simple User Interface Connecting to an Integrated Data Environment) data analysis environment and the FITS (Fully Integrated Transport Simulation) model. The SUICIDE environment has direct access to the experimental beam transport data at acquisition and the FITS predictions of the transport for immediate comparison. The FITS model is coupled into the control system where it can read magnet current settings for real time modeling. We find this integrated coupling is essential for model verification and the successful development of a tuning aid for the efficient convergence on a useable tune. We show the real time comparisons of simulation and experiment and explore the successes and limitations of this close coupled approach.

  13. Optimization and validation of a capillary zone electrophoretic method for the analysis of several angiotensin-II-receptor antagonists.

    PubMed

    Hillaert, S; Van den Bossche, W

    2002-12-01

    We optimized a capillary zone electrophoretic method for separation of six angiotensin-II-receptor antagonists (ARA-IIs): candesartan, eprosartan, irbesartan, losartan potassium, telmisartan, and valsartan. A three-level, full-factorial design was applied to study the effect of the pH and molarity of the running buffer on separation. Combination of the studied parameters permitted the separation of the six ARA-IIs, which was best carried out using 60 mM sodium phosphate buffer (pH 2.5). The same system can also be applied for the quantitative determination of these compounds, but only for the more soluble ones. Some parameters (linearity, precision and accuracy) were validated. PMID:12498264
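
    For illustration only (not taken from the cited study), a minimal Python sketch of how a three-level, two-factor full-factorial design over buffer pH and molarity might be enumerated; the specific level values below are hypothetical placeholders:

        from itertools import product

        # Hypothetical factor levels for a 3-level, 2-factor full-factorial design:
        # three pH values and three buffer molarities (mM). The cited study's actual
        # levels are not reproduced here.
        ph_levels = [2.0, 2.5, 3.0]
        molarity_levels_mM = [40, 60, 80]

        # Enumerate all 3 x 3 = 9 runs of the design.
        design = list(product(ph_levels, molarity_levels_mM))

        for run, (ph, molarity) in enumerate(design, start=1):
            print(f"run {run}: pH = {ph}, buffer molarity = {molarity} mM")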

  14. Multiscale Modeling of Gastrointestinal Electrophysiology and Experimental Validation

    PubMed Central

    Du, Peng; O'Grady, Greg; Davidson, John B.; Cheng, Leo K.; Pullan, Andrew J.

    2011-01-01

    Normal gastrointestinal (GI) motility results from the coordinated interplay of multiple cooperating mechanisms, both intrinsic and extrinsic to the GI tract. A fundamental component of this activity is an omnipresent electrical activity termed slow waves, which is generated and propagated by the interstitial cells of Cajal (ICCs). The role of ICC loss and network degradation in GI motility disorders is a significant area of ongoing research. This review examines recent progress in the multiscale modeling framework for effectively integrating a vast range of experimental data in GI electrophysiology, and outlines the prospect of how modeling can provide new insights into GI function in health and disease. The review begins with an overview of the GI tract and its electrophysiology, and then focuses on recent work on modeling GI electrical activity, spanning from cell to body biophysical scales. Mathematical cell models of the ICCs and smooth muscle cell are presented. The continuum framework of monodomain and bidomain models for tissue and organ models are then considered, and the forward techniques used to model the resultant body surface potential and magnetic field are discussed. The review then outlines recent progress in experimental support and validation of modeling, and concludes with a discussion on potential future research directions in this field. PMID:21133835
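
    As background for the continuum framework mentioned above, a standard textbook form of the monodomain equation (not necessarily the exact notation used in the review) relates the transmembrane potential V_m to the effective conductivity tensor, the surface-to-volume ratio, the membrane capacitance, and the ionic current supplied by a cell model:

        \[
        \beta \Big( C_m \frac{\partial V_m}{\partial t} + I_{\text{ion}}(V_m,\mathbf{s}) \Big)
        = \nabla \cdot \big( \sigma \, \nabla V_m \big),
        \qquad
        \frac{d\mathbf{s}}{dt} = \mathbf{f}(V_m,\mathbf{s}),
        \]

    where s denotes the state variables of the ICC or smooth muscle cell model; the bidomain formulation replaces the single effective conductivity with coupled intracellular and extracellular potentials.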

  15. Bolted connection modeling and validation through laser-aided testing

    NASA Astrophysics Data System (ADS)

    Dai, Kaoshan; Gong, Changqing; Smith, Benjiamin

    2013-04-01

    Bolted connections are widely employed in facility structures, such as light masts, transmission poles, and wind turbine towers. The complex connection behavior plays a significant role in the overall dynamic characteristics of a structure. A finite element (FE) modeling study of a bolt-connected square tubular steel beam is presented in this paper. Modal testing was performed in a controlled laboratory condition to validate the FE model, developed for the bolted beam. Two laser Doppler vibrometers were used simultaneously to measure structural vibration. A simplified joint model was proposed to further save computation time for structures with bolted connections. This study is an on-going effort to marshal knowledge associated with detecting damage on facility structures with bolted connections.

  16. A biomass combustion-gasification model: Validation and sensitivity analysis

    SciTech Connect

    Bettagli, N.; Fiaschi, D.; Desideri, U.

    1995-12-01

    The aim of the present paper is to study the gasification and combustion of biomass and waste materials. A model for the analysis of the chemical kinetics of gasification and combustion processes was developed with the main objective of calculating the gas composition at different operating conditions. The model was validated with experimental data for sawdust gasification. After having set the main kinetic parameters, the model was tested with other types of biomass, whose syngas composition is known. A sensitivity analysis was also performed to evaluate the influence of the main parameters, such as temperature, pressure, and air-fuel ratio on the composition of the exit gas. Both oxygen and air (i.e., a mixture of oxygen and nitrogen) gasification processes were simulated.

  17. Validated numerical simulation model of a dielectric elastomer generator

    NASA Astrophysics Data System (ADS)

    Foerster, Florentine; Moessinger, Holger; Schlaak, Helmut F.

    2013-04-01

    Dielectric elastomer generators (DEG) produce electrical energy by converting mechanical into electrical energy. Efficient operation requires homogeneous deformation of each single layer. However, by different internal and external influences like supports or the shape of a DEG the deformation will be inhomogeneous and hence negatively affect the amount of the generated electrical energy. Optimization of the deformation behavior leads to improved efficiency of the DEG and consequently to higher energy gain. In this work a numerical simulation model of a multilayer dielectric elastomer generator is developed using the FEM software ANSYS. The analyzed multilayer DEG consists of 49 active dielectric layers with layer thicknesses of 50 µm. The elastomer is silicone (PDMS) while the compliant electrodes are made of graphite powder. In the simulation the real material parameters of the PDMS and the graphite electrodes need to be included. Therefore, the mechanical and electrical material parameters of the PDMS are determined by experimental investigations of test samples while the electrode parameters are determined by numerical simulations of test samples. The numerical simulation of the DEG is carried out as coupled electro-mechanical simulation for the constant voltage energy harvesting cycle. Finally, the derived numerical simulation model is validated by comparison with analytical calculations and further simulated DEG configurations. The comparison of the results shows good agreement with regard to the deformation of the DEG. Based on the validated model it is now possible to optimize the DEG layout for improved deformation behavior with further simulations.

  18. Validation of a new Mesoscale Model for Mars.

    NASA Astrophysics Data System (ADS)

    De Sanctis, K.; Ferretti, R.; Forget, F.; Fiorenza, C.; Visconti, G.

    The study of the planet Mars is important because of its several similarities with the Earth. To understand the dynamical processes that drive the Martian atmosphere, a new Martian Mesoscale Model (MARS-MM5) is presented. The new model is based on the Pennsylvania State University (PSU)/National Center for Atmospheric Research (NCAR) Mesoscale Model Version 5 (MM5). MARS-MM5 has been adapted to Mars using soil characteristics and topography obtained by the Mars Orbiter Laser Altimeter (MOLA). Different cases, depending on data availability and corresponding to the equatorial region of Mars, have been selected for multiple MARS-MM5 simulations. To validate the different developments, the Mars Climate Database (MCD) and TES observations have been employed: MCD version 4.0 was created on the basis of multi-annual integrations of Mars GCM output. Thermal Emission Spectrometer (TES) observations acquired during the Mars Global Surveyor (MGS) mission are used for temperature. The new, and most important, aspect of this work is the direct validation of the newly generated MARS-MM5 against three-dimensional observations. The comparison between MARS-MM5 and GCM horizontal and vertical temperature profiles shows good agreement; moreover, good agreement is also found between TES observations and MARS-MM5.

  19. Modeling and Validation of Damped Plexiglas Windows for Noise Control

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Gibbs, Gary P.; Klos, Jacob; Mazur, Marina

    2003-01-01

    Windows are a significant path for structure-borne and air-borne noise transmission in general aviation aircraft. In this paper, numerical and experimental results are used to evaluate damped plexiglas windows for the reduction of structure-borne and air-borne noise transmitted into the interior of an aircraft. In contrast to conventional homogeneous windows, the damped plexiglas windows were fabricated using two or three layers of plexiglas with transparent viscoelastic damping material sandwiched between the layers. Transmission loss and radiated sound power measurements were used to compare different layups of the damped plexiglas windows with uniform windows of the same nominal thickness. This vibro-acoustic test data was also used for the verification and validation of finite element and boundary element models of the damped plexiglas windows. Numerical models are presented for the prediction of radiated sound power for a point force excitation and transmission loss for diffuse acoustic excitation. Radiated sound power and transmission loss predictions are in good agreement with experimental data. Once validated, the numerical models were used to perform a parametric study to determine the optimum configuration of the damped plexiglas windows for reducing the radiated sound power for a point force excitation.

  20. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology

    PubMed Central

    Krishnamoorthi, Shankarjee; Perotti, Luigi E.; Borgstrom, Nils P.; Ajijola, Olujimi A.; Frid, Anna; Ponnaluri, Aditya V.; Weiss, James N.; Qu, Zhilin; Klug, William S.; Ennis, Daniel B.; Garfinkel, Alan

    2014-01-01

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations. PMID:25493967

  1. Low frequency eddy current benchmark study for model validation

    SciTech Connect

    Mooers, R. D.; Boehnlein, T. R.; Cherry, M. R.; Knopp, J. S.; Aldrin, J. C.; Sabbagh, H. A.

    2011-06-23

    This paper presents results of an eddy current model validation study. Precise measurements were made using an impedance analyzer to investigate changes in impedance due to Electrical Discharge Machining (EDM) notches in aluminum plates. Each plate contained one EDM notch at an angle of 0, 10, 20, or 30 degrees from the normal of the plate surface. Measurements were made with the eddy current probe both scanning parallel and perpendicular to the notch length. The experimental response from the vertical and oblique notches will be reported and compared to results from different numerical simulation codes.

  2. Higgs potential in the type II seesaw model

    SciTech Connect

    Arhrib, A.; Benbrik, R.; Chabab, M.; Rahili, L.; Ramadan, J.; Moultaka, G.; Peyranere, M. C.

    2011-11-01

    The standard model Higgs sector, extended by one weak gauge triplet of scalar fields with a very small vacuum expectation value, is a very promising setting to account for neutrino masses through the so-called type II seesaw mechanism. In this paper we consider the general renormalizable doublet/triplet Higgs potential of this model. We perform a detailed study of its main dynamical features that depend on five dimensionless couplings and two mass parameters after spontaneous symmetry breaking, and highlight the implications for the Higgs phenomenology. In particular, we determine (i) the complete set of tree-level unitarity constraints on the couplings of the potential and (ii) the exact tree-level boundedness from below constraints on these couplings, valid for all directions. When combined, these constraints delineate precisely the theoretically allowed parameter space domain within our perturbative approximation. Among the seven physical Higgs states of this model, the mass of the lighter (heavier) CP{sub even} state h{sup 0} (H{sup 0}) will always satisfy a theoretical upper (lower) bound that is reached for a critical value {mu}{sub c} of {mu} (the mass parameter controlling triple couplings among the doublet/triplet Higgses). Saturating the unitarity bounds, we find a theoretical upper bound on m{sub h}{sup 0}, and we distinguish two regimes of the model, {mu} > or approx. {mu}{sub c} and {mu} < or approx. {mu}{sub c}. In the first regime the Higgs sector is typically very heavy, and only h{sup 0} that becomes SM-like could be accessible to the LHC. In contrast, in the second regime, somewhat overlooked in the literature, most of the Higgs sector is light. In particular, the heaviest state H{sup 0} becomes SM-like, the lighter states being the CP{sub odd} Higgs, the (doubly) charged Higgses, and a decoupled h{sup 0}, possibly leading to a distinctive phenomenology at the colliders.
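
    For reference, a commonly used parameterization of the renormalizable doublet/triplet potential in type II seesaw studies is sketched below (with two mass parameters, the triple coupling mu, and five dimensionless couplings); the sign and normalization conventions are an assumption here and may differ from those of the paper:

        \[
        \begin{aligned}
        V(H,\Delta) ={}& -m_H^2\, H^\dagger H + \tfrac{\lambda}{4}\,(H^\dagger H)^2
          + M_\Delta^2\, \mathrm{Tr}(\Delta^\dagger \Delta)
          + \Big[\mu\, H^{T} i\sigma_2 \Delta^\dagger H + \mathrm{h.c.}\Big] \\
        &+ \lambda_1\, (H^\dagger H)\,\mathrm{Tr}(\Delta^\dagger \Delta)
          + \lambda_2\,\big[\mathrm{Tr}(\Delta^\dagger \Delta)\big]^2
          + \lambda_3\, \mathrm{Tr}\big[(\Delta^\dagger \Delta)^2\big]
          + \lambda_4\, H^\dagger \Delta \Delta^\dagger H .
        \end{aligned}
        \]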

  3. Conformational Analysis of the DFG-Out Kinase Motif and Biochemical Profiling of Structurally Validated Type II Inhibitors

    PubMed Central

    2015-01-01

    Structural coverage of the human kinome has been steadily increasing over time. The structures provide valuable insights into the molecular basis of kinase function and also provide a foundation for understanding the mechanisms of kinase inhibitors. There are a large number of kinase structures in the PDB for which the Asp and Phe of the DFG motif on the activation loop swap positions, resulting in the formation of a new allosteric pocket. We refer to these structures as classical DFG-out conformations in order to distinguish them from conformations that have also been referred to as DFG-out in the literature but that do not have a fully formed allosteric pocket. We have completed a structural analysis of almost 200 small molecule inhibitors bound to classical DFG-out conformations; we find that they are recognized by both type I and type II inhibitors. In contrast, we find that nonclassical DFG-out conformations strongly select against type II inhibitors because these structures have not formed a large enough allosteric pocket to accommodate this type of binding mode. In the course of this study we discovered that the number of structurally validated type II inhibitors that can be found in the PDB and that are also represented in publicly available biochemical profiling studies of kinase inhibitors is very small. We have obtained new profiling results for several additional structurally validated type II inhibitors identified through our conformational analysis. Although the available profiling data for type II inhibitors is still much smaller than for type I inhibitors, a comparison of the two data sets supports the conclusion that type II inhibitors are more selective than type I. We comment on the possible contribution of the DFG-in to DFG-out conformational reorganization to the selectivity. PMID:25478866

  4. Radiative transfer model for contaminated slabs: experimental validations

    NASA Astrophysics Data System (ADS)

    Andrieu, F.; Schmidt, F.; Schmitt, B.; Douté, S.; Brissaud, O.

    2015-09-01

    This article presents a set of spectro-goniometric measurements of different water ice samples and the comparison with an approximated radiative transfer model. The experiments were done using the spectro-radiogoniometer described in Brissaud et al. (2004). The radiative transfer model assumes an isotropization of the flux after the second interface and is fully described in Andrieu et al. (2015). Two kinds of experiments were conducted. First, the specular spot was closely investigated, at high angular resolution, at the wavelength of 1.5 µm, where ice behaves as a very absorbing medium. Second, the bidirectional reflectance was sampled at various geometries, including low phase angles, on 61 wavelengths ranging from 0.8 to 2.0 µm. In order to validate the model, we made qualitative tests to demonstrate the relative isotropization of the flux. We also conducted quantitative assessments by using a Bayesian inversion method in order to estimate the parameters (e.g., sample thickness, surface roughness) from the radiative measurements only. A simple comparison between the retrieved parameters and the direct independent measurements allowed us to validate the model. We developed an innovative Bayesian inversion approach to quantitatively estimate the uncertainties in the parameters avoiding the usual slow Monte Carlo approach. First we built lookup tables, and then we searched the best fits and calculated a posteriori density probability functions. The results show that the model is able to reproduce the geometrical energy distribution in the specular spot, as well as the spectral behavior of water ice slabs. In addition, the different parameters of the model are compatible with independent measurements.
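
    A minimal sketch (assumed structure, not the authors' code) of the lookup-table style of Bayesian inversion described above: a forward model is pre-evaluated on a parameter grid, a Gaussian likelihood is computed against the measured spectra, and a posterior density over the grid is obtained without Monte Carlo sampling. The forward model, grids, and noise level here are placeholders:

        import numpy as np

        def forward_model(thickness_mm, roughness, wavelengths_um):
            # Placeholder forward model standing in for the radiative transfer code;
            # it only illustrates the lookup-table mechanics.
            return np.exp(-thickness_mm * wavelengths_um) * (1.0 - 0.1 * roughness)

        wavelengths = np.linspace(0.8, 2.0, 61)      # sampled wavelengths (um)
        thick_grid = np.linspace(1.0, 30.0, 60)      # candidate slab thicknesses (mm)
        rough_grid = np.linspace(0.0, 1.0, 40)       # candidate surface roughness

        # Build the lookup table of synthetic spectra for every grid point.
        table = np.array([[forward_model(t, r, wavelengths) for r in rough_grid]
                          for t in thick_grid])      # shape (60, 40, 61)

        # Synthetic "measurement" with noise, standing in for the goniometer data.
        rng = np.random.default_rng(0)
        truth = forward_model(12.0, 0.3, wavelengths)
        measured = truth + rng.normal(0.0, 0.01, size=truth.shape)

        # Gaussian likelihood on the grid, then a normalized posterior (flat prior).
        sigma = 0.01
        chi2 = np.sum((table - measured) ** 2, axis=-1) / sigma**2
        log_post = -0.5 * chi2
        post = np.exp(log_post - log_post.max())
        post /= post.sum()

        i, j = np.unravel_index(np.argmax(post), post.shape)
        print(f"MAP estimate: thickness = {thick_grid[i]:.1f} mm, roughness = {rough_grid[j]:.2f}")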

  5. A validation study of a stochastic model of human interaction

    NASA Astrophysics Data System (ADS)

    Burchfield, Mitchel Talmadge

    The purpose of this dissertation is to validate a stochastic model of human interactions which is part of a developmentalism paradigm. Incorporating elements of ancient and contemporary philosophy and science, developmentalism defines human development as a progression of increasing competence and utilizes compatible theories of developmental psychology, cognitive psychology, educational psychology, social psychology, curriculum development, neurology, psychophysics, and physics. To validate a stochastic model of human interactions, the study addressed four research questions: (a) Does attitude vary over time? (b) What are the distributional assumptions underlying attitudes? (c) Does the stochastic model, $N\int_{-\infty}^{\infty} \varphi(\chi,\tau)\,\Psi(\tau)\,d\tau$, have utility for the study of attitudinal distributions and dynamics? (d) Are the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein theories applicable to human groups? Approximately 25,000 attitude observations were made using the Semantic Differential Scale. Positions of individuals varied over time and the logistic model predicted observed distributions with correlations between 0.98 and 1.0, with estimated standard errors significantly less than the magnitudes of the parameters. The results bring into question the applicability of Fisherian research designs (Fisher, 1922, 1928, 1938) for behavioral research based on the apparent failure of two fundamental assumptions: the noninteractive nature of the objects being studied and the normal distribution of attributes. The findings indicate that individual belief structures are representable in terms of a psychological space which has the same or similar properties as physical space. The psychological space not only has dimension, but individuals interact by force equations similar to those described in theoretical physics models. Nonlinear regression techniques were used to estimate Fermi-Dirac parameters from the data. The model explained a high degree of the variance in each probability distribution. The correlation between predicted and observed probabilities ranged from a low of 0.955 to a high value of 0.998, indicating that humans behave in psychological space as Fermions behave in momentum space.
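
    As an illustration of the kind of nonlinear regression described above, a sketch fitting a Fermi-Dirac-like occupancy function to binned observations; the data and parameter names are hypothetical and do not reproduce the dissertation's results:

        import numpy as np
        from scipy.optimize import curve_fit

        def fermi_dirac(x, mu, kT):
            # Fermi-Dirac-like occupancy: probability of a "state" x being occupied.
            return 1.0 / (np.exp((x - mu) / kT) + 1.0)

        # Hypothetical binned attitude positions and observed occupancy probabilities.
        x = np.linspace(-3, 3, 25)
        rng = np.random.default_rng(1)
        p_obs = fermi_dirac(x, 0.4, 0.8) + rng.normal(0, 0.02, x.size)

        # Nonlinear least-squares estimate of the Fermi-Dirac parameters.
        (mu_hat, kT_hat), cov = curve_fit(fermi_dirac, x, p_obs, p0=(0.0, 1.0))
        r = np.corrcoef(p_obs, fermi_dirac(x, mu_hat, kT_hat))[0, 1]
        print(f"mu = {mu_hat:.3f}, kT = {kT_hat:.3f}, correlation = {r:.3f}")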

  6. Model of the expansion of H II region RCW 82

    SciTech Connect

    Krasnobaev, K. V.; Kotova, G. Yu.; Tagirova, R. R. E-mail: gviana2005@gmail.com

    2014-05-10

    This paper aims to resolve the problem of formation of young objects observed in the RCW 82 H II region. In the framework of a classical trigger model the estimated time of fragmentation is larger than the estimated age of the H II region. Thus the young objects could not have formed during the dynamical evolution of the H II region. We propose a new model that helps resolve this problem. This model suggests that the H II region RCW 82 is embedded in a cloud of limited size that is denser than the surrounding interstellar medium. According to this model, when the ionization-shock front leaves the cloud it causes the formation of an accelerating dense gas shell. In the accelerated shell, the effects of the Rayleigh-Taylor (R-T) instability dominate and the characteristic time of the growth of perturbations with the observed magnitude of about 3 pc is 0.14 Myr, which is less than the estimated age of the H II region. The total time t {sub ?}, which is the sum of the expansion time of the H II region to the edge of the cloud, the time of the R-T instability growth, and the free fall time, is estimated as 0.44 < t {sub ?} < 0.78 Myr. We conclude that the young objects in the H II region RCW 82 could be formed as a result of the R-T instability with subsequent fragmentation into large-scale condensations.

  7. Sound Transmission Validation and Sensitivity Studies in Numerical Models.

    PubMed

    Oberrecht, Steve P; Krysl, Petr; Cranford, Ted W

    2016-01-01

    In 1974, Norris and Harvey published an experimental study of sound transmission into the head of the bottlenose dolphin. We used this rare source of data to validate our Vibroacoustic Toolkit, an array of numerical modeling simulation tools. Norris and Harvey provided measurements of received sound pressure in various locations within the dolphin's head from a sound source that was moved around the outside of the head. Our toolkit was used to predict the curves of pressure with the best-guess input data (material properties, transducer and hydrophone locations, and geometry of the animal's head). In addition, we performed a series of sensitivity analyses (SAs). SA is concerned with understanding how input changes to the model influence the outputs. SA can enhance understanding of a complex model by finding and analyzing unexpected model behavior, discriminating which inputs have a dominant effect on particular outputs, exploring how inputs combine to affect outputs, and gaining insight as to what additional information improves the model's ability to predict. Even when a computational model does not adequately reproduce the behavior of a physical system, its sensitivities may be useful for developing inferences about key features of the physical system. Our findings may become a valuable source of information for modeling the interactions between sound and anatomy. PMID:26611033

  8. Evaluation and cross-validation of Environmental Models

    NASA Astrophysics Data System (ADS)

    Lemaire, Joseph

    Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a Commission of professional experts appointed by an established International Union or Association (e.g. IAGA for Geomagnetism and Aeronomy, ...) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given, indicating that different values of the Earth radius have been employed in different data processing laboratories, institutes or agencies to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to another, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model which had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more to be uncovered by careful, independent examination and benchmarking. Consider the meter prototype, the standard unit of length that was adopted on 20 May 1875 during the Diplomatic Conference of the Meter and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent wild, uncontrolled dissemination of pseudo Environmental Models and Standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the marketplace have been built consistently with the same system of units, or that they are based on identical definitions of the coordinate systems, etc. Therefore, preliminary analyses should be carried out under the control and authority of an established international professional Organization or Association before any final political decision is made by ISO to select specific Environmental Models, such as IGRF and DGRF. Of course, Commissions responsible for checking the consistency of definitions, methods and algorithms for data processing might consider delegating specific tasks (e.g. benchmarking the technical tools, the calibration procedures, the methods of data analysis, and the software algorithms employed in building the different types of models, as well as their usage) to private, intergovernmental or international organizations/agencies (e.g. NASA, ESA, AGU, EGU, COSPAR, ...); eventually, the latter should report conclusions to the Commission members appointed by IAGA or any established authority such as IUGG.

  9. Experimental validation of a numerical model for subway induced vibrations

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Degrande, G.; Lombaert, G.

    2009-04-01

    This paper presents the experimental validation of a coupled periodic finite element-boundary element model for the prediction of subway induced vibrations. The model fully accounts for the dynamic interaction between the train, the track, the tunnel and the soil. The periodicity or invariance of the tunnel and the soil in the longitudinal direction is exploited using the Floquet transformation, which allows for an efficient formulation in the frequency-wavenumber domain. A general analytical formulation is used to compute the response of three-dimensional invariant or periodic media that are excited by moving loads. The numerical model is validated by means of several experiments that have been performed at a site in Regent's Park on the Bakerloo line of London Underground. Vibration measurements have been performed on the axle boxes of the train, on the rail, the tunnel invert and the tunnel wall, and in the free field, both at the surface and at a depth of 15 m. Prior to these vibration measurements, the dynamic soil characteristics and the track characteristics have been determined. The Bakerloo line tunnel of London Underground has been modelled using the coupled periodic finite element-boundary element approach and free field vibrations due to the passage of a train at different speeds have been predicted and compared to the measurements. The correspondence between the predicted and measured response in the tunnel is reasonably good, although some differences are observed in the free field. The discrepancies are explained on the basis of various uncertainties involved in the problem. The variation in the response with train speed is similar for the measurements as well as the predictions. This study demonstrates the applicability of the coupled periodic finite element-boundary element model to make realistic predictions of the vibrations from underground railways.
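
    For readers unfamiliar with the technique, the Floquet transformation referred to above can be written in a generic form (the paper's notation may differ) for a field f defined on a structure that is periodic with period L along the tunnel axis, with position x restricted to one reference cell:

        \[
        \tilde{f}(\tilde{x}, \kappa) = \sum_{n=-\infty}^{+\infty} f(\tilde{x} + n L \mathbf{e}_y)\, e^{\,i n \kappa L},
        \qquad
        f(\tilde{x} + n L \mathbf{e}_y) = \frac{L}{2\pi} \int_{0}^{2\pi/L} \tilde{f}(\tilde{x}, \kappa)\, e^{-i n \kappa L}\, d\kappa ,
        \]

    so the response of the periodic tunnel-soil system only needs to be computed on one reference cell for each wavenumber kappa, which is what makes the frequency-wavenumber formulation efficient.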

  10. Traveling with Cognitive Tests: Testing the Validity of a KABC-II Adaptation in India

    ERIC Educational Resources Information Center

    Malda, Maike; van de Vijver, Fons J. R.; Srinivasan, Krishnamachari; Transler, Catherine; Sukumar, Prathima

    2010-01-01

    The authors evaluated the adequacy of an extensive adaptation of the American Kaufman Assessment Battery for Children, second edition (KABC-II), for 6- to 10-year-old Kannada-speaking children of low socioeconomic status in Bangalore, South India. The adapted KABC-II was administered to 598 children. Subtests showed high reliabilities, the

  12. Non-Linear Slosh Damping Model Development and Validation

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; West, Jeff

    2015-01-01

    Propellant tank slosh dynamics are typically represented by a mechanical model of spring mass damper. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially-filled smooth wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is a potential source of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only under the linear regime where the slosh amplitude is small. With the increase of slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and to quantify the dependence of slosh damping on slosh amplitude. Accurately predicting the extremely low damping value of a smooth wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool. One must resolve thin boundary layers near the wall and limit numerical damping to a minimum. This computational study demonstrates that with proper grid resolution, CFD can indeed accurately predict the low damping physics from smooth walls under the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreement. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of the slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship. This discovery can lead to significant savings by reducing the number and size of slosh baffles in liquid propellant tanks.
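
    A minimal sketch of the piecewise amplitude-dependent damping relationship described above: constant below a critical slosh amplitude and increasing linearly above it. The constants are placeholders for illustration, not the study's fitted values:

        def slosh_damping_ratio(amplitude, zeta_linear=0.0005, a_critical=0.01, slope=0.02):
            """Amplitude-dependent slosh damping ratio.

            Below the critical amplitude the damping ratio is the constant
            linear-regime value; above it, it grows linearly with amplitude.
            All numbers here are placeholders for illustration.
            """
            if amplitude <= a_critical:
                return zeta_linear
            return zeta_linear + slope * (amplitude - a_critical)

        # Example: damping ratio at a slosh amplitude of 0.05 (same units as a_critical).
        print(slosh_damping_ratio(0.05))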

  13. Instructional Support System--Occupational Education II. ISSOE Automotive Mechanics Content Validation.

    ERIC Educational Resources Information Center

    Abramson, Theodore

    A study was conducted to validate the Instructional Support System-Occupational Education (ISSOE) automotive mechanics curriculum. The following four steps were undertaken: (1) review of the ISSOE materials in terms of their "validity" as task statements; (2) a comparison of the ISSOE tasks to the tasks included in the V-TECS Automotive Mechanics

  14. A geomagnetically induced current warning system: model development and validation

    NASA Astrophysics Data System (ADS)

    McKay, A.; Clarke, E.; Reay, S.; Thomson, A.

    Geomagnetically Induced Currents (GIC), which can flow in technological systems at the Earth's surface, are a consequence of magnetic storms and Space Weather. A well-documented practical problem for the power transmission industry is that GIC can affect the lifetime and performance of transformers within the power grid. Operational mitigation is widely considered to be one of the best strategies to manage the Space Weather and GIC risk. Therefore in the UK a magnetic storm warning and GIC monitoring and analysis programme has been under development by the British Geological Survey and Scottish Power plc (the power grid operator for Central Scotland) since 1999. Under the auspices of the European Space Agency's service development activities BGS is developing the capability to meet two key user needs that have been identified. These needs are, firstly, the development of a near real-time solar wind shock/geomagnetic storm warning, based on L1 solar wind data and, secondly, the development of an integrated surface geo-electric field and power grid network model that should allow prediction of GIC throughout the power grid in near real time. While the final goal is a 'seamless package', the components of the package utilise diverse scientific techniques. We review progress to date with particular regard to the validation of the individual components of the package. The Scottish power grid response to the October 2003 magnetic storms is also discussed and model and validation data are presented.
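
    The grid-response part of such a package is often based on the Lehtinen-Pirjola network formulation, in which the earthing GIC vector is obtained from the geoelectric-field-driven source currents, the network admittance matrix and the earthing impedance matrix. A generic sketch is given below; it is not the BGS/Scottish Power implementation, and the numbers are made up:

        import numpy as np

        def earthing_gic(Y, Z, J):
            """Lehtinen-Pirjola formulation: GIC flowing to earth at each node.

            Y : network admittance matrix (n x n), built from line resistances
            Z : earthing impedance matrix (n x n), often diagonal
            J : 'perfect earthing' current vector driven by the geoelectric field
            Returns I = (1 + Y Z)^(-1) J.
            """
            n = len(J)
            return np.linalg.solve(np.eye(n) + Y @ Z, J)

        # Toy 2-node example with made-up values (siemens, ohms, amperes).
        Y = np.array([[ 0.5, -0.5],
                      [-0.5,  0.5]])
        Z = np.diag([0.3, 0.3])
        J = np.array([10.0, -10.0])
        print(earthing_gic(Y, Z, J))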

  15. Low Altitude Validation of Geomagnetic Cutoff Models Using SAMPEX Data

    NASA Astrophysics Data System (ADS)

    Young, S. L.; Kress, B. T.

    2011-12-01

    Single event upsets (SEUs) caused by MeV protons are a concern for satellite operators so AFRL is working to create a tool that can specify and/or forecast SEU probabilities. An important component of the tool's SEU probability calculation will be the local energetic ion spectrum. The portion of that spectrum due to trapped energetic ion population is relatively stable and predictable; however it is more difficult to account for the transient solar energetic particles (SEPs). These particles, which can be ejected from the solar atmosphere during a solar flare or filament eruption or can be energized by coronal mass ejection (CME) driven shocks, can penetrate the Earth's magnetosphere into regions not normally populated by energetic protons. The magnetosphere will provide energy dependent shielding that also depends on its magnetic configuration. During magnetic storms that configuration is modified and the SEP cutoff latitude for a given particle energy can be suppressed up to ~15 degrees equatorward exposing normally shielded regions. As a first step to creating the satellite SEU prediction tool, we are comparing the Smart et al. (Advances in Space Research, 2006) and CISM-Dartmouth (Kress et al., Space Weather, 2010) geomagnetic cutoff tools. While they have provided some of their own validations in the noted papers, our validation will be done consistently between models allowing us to better compare the models.

  16. Validation of hydrogen gas stratification and mixing models

    SciTech Connect

    Wu, Hsingtzu; Zhao, Haihua

    2015-11-01

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling based one-dimensional method to achieve large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreement is observed. The entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. Computing time for each BMIX++ model with a normal desktop computer is less than 5 min.
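
    For orientation, a sketch of how a densimetric Froude number for a buoyant jet, such as the values quoted above, is typically formed; this is one common convention and the benchmark's exact definition may differ, and the input values below are placeholders:

        import math

        def densimetric_froude(velocity, diameter, rho_jet, rho_ambient, g=9.81):
            # Reduced gravity based on the density deficit of the jet fluid.
            g_reduced = g * (rho_ambient - rho_jet) / rho_jet
            return velocity / math.sqrt(g_reduced * diameter)

        # Hypothetical hydrogen release into air: numbers are placeholders.
        print(densimetric_froude(velocity=20.0, diameter=0.005,
                                 rho_jet=0.084, rho_ambient=1.20))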

  17. Modeling and Validation of a Propellant Mixer for Controller Design

    NASA Technical Reports Server (NTRS)

    Richter, Hanz; Barbieri, Enrique; Figueroa, Fernando

    2003-01-01

    A mixing chamber used in rocket engine testing at the NASA Stennis Space Center is modelled by a system of two nonlinear ordinary differential equations. The mixer is used to condition the thermodynamic properties of cryogenic liquid propellant by controlled injection of the same substance in the gaseous phase. The three inputs of the mixer are the positions of the valves regulating the liquid and gas flows at the inlets, and the position of the exit valve regulating the flow of conditioned propellant. Mixer operation during a test requires the regulation of its internal pressure, exit mass flow, and exit temperature. A mathematical model is developed to facilitate subsequent controller designs. The model must be simple enough to lend itself to subsequent feedback controller design, yet its accuracy must be tested against real data. For this reason, the model includes function calls to thermodynamic property data. Some structural properties of the resulting model that pertain to controller design, such as uniqueness of the equilibrium point, feedback linearizability and local stability are shown to hold under conditions having direct physical interpretation. The existence of fixed valve positions that attain a desired operating condition is also shown. Validation of the model against real data is likewise provided.
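
    To make the structure of such a model concrete, a generic sketch of a two-state lumped mixer model (pressure and temperature states driven by three valve positions) integrated with SciPy; the right-hand-side terms and coefficients are entirely hypothetical and are not the NASA Stennis model:

        import numpy as np
        from scipy.integrate import solve_ivp

        def mixer_rhs(t, x, u_liquid, u_gas, u_exit):
            """Toy two-state mixer model: x = [pressure, exit temperature].

            The gains below are placeholders; a real model would call
            thermodynamic property routines for the cryogenic propellant.
            """
            p, T = x
            dp = 4.0 * u_gas + 1.0 * u_liquid - 3.0 * u_exit * p
            dT = -2.0 * u_liquid * (T - 90.0) + 1.5 * u_gas * (300.0 - T)
            return [dp, dT]

        # Constant valve positions (fractions of full open) over a 20 s horizon.
        sol = solve_ivp(mixer_rhs, (0.0, 20.0), y0=[1.0, 120.0],
                        args=(0.4, 0.2, 0.5), max_step=0.1)
        print(f"final pressure ~ {sol.y[0, -1]:.2f}, final temperature ~ {sol.y[1, -1]:.1f} K")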

  18. Investigating the validity of the networked imaging sensor model

    NASA Astrophysics Data System (ADS)

    Friedman, Melvin

    2015-05-01

    The Networked Imaging Sensor (NIS) model takes as input target acquisition probability as a function of time for individuals or individual imaging sensors, and outputs target acquisition probability for a collection of imaging sensors and individuals. System target acquisition takes place the moment the first sensor or individual acquires the target. The derivation of the NIS model implies it is applicable to multiple moving sensors and targets. The principal assumption of the NIS model is independence of the events that give rise to the input target acquisition probabilities. For investigating the validity of the NIS model, we consider a collection of single images where neither the sensor nor the target is moving. This paper investigates the ability of the NIS model to predict system target acquisition performance when multiple observers view first- and second-generation thermal field-of-view imagery containing either zero or one stationary target, collected in a laboratory environment, with a maximum of 12, 17, or unlimited seconds to acquire the target. Modeled and measured target acquisition performance are in good agreement.
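
    The NIS combination rule follows directly from the independence assumption stated above: the system fails to acquire only if every sensor or observer has failed to acquire. A minimal sketch with hypothetical input curves:

        import numpy as np

        def system_acquisition_probability(p_individual):
            """Combine per-sensor acquisition probabilities under independence.

            p_individual: array of shape (n_sensors, n_times) giving each sensor's
            probability of having acquired the target by each time step.
            Returns the system-level probability at each time step:
                P_sys(t) = 1 - prod_i (1 - P_i(t)).
            """
            p = np.asarray(p_individual)
            return 1.0 - np.prod(1.0 - p, axis=0)

        # Hypothetical acquisition curves for three observers over 12 seconds.
        t = np.linspace(0.0, 12.0, 13)
        p1 = 1.0 - np.exp(-t / 6.0)
        p2 = 1.0 - np.exp(-t / 9.0)
        p3 = 1.0 - np.exp(-t / 4.0)
        print(system_acquisition_probability([p1, p2, p3]))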

  19. Nonlinear ultrasound modelling and validation of fatigue damage

    NASA Astrophysics Data System (ADS)

    Fierro, G. P. Malfense; Ciampa, F.; Ginzburg, D.; Onder, E.; Meo, M.

    2015-05-01

    Nonlinear ultrasound techniques have shown greater sensitivity to microcracks and they can be used to detect structural damages at their early stages. However, there is still a lack of numerical models available in commercial finite element analysis (FEA) tools that are able to simulate the interaction of elastic waves with the material's nonlinear behaviour. In this study, a nonlinear constitutive material model was developed to predict the structural response under continuous harmonic excitation of a fatigued isotropic sample that showed anharmonic effects. Particularly, by means of Landau's theory and Kelvin tensorial representation, this model provided an understanding of the elastic nonlinear phenomena such as the second harmonic generation in three-dimensional solid media. The numerical scheme was implemented and evaluated using the commercially available FEA software LS-DYNA, and it showed a good numerical characterisation of the second harmonic amplitude generated by the damaged region known as the nonlinear response area (NRA). Since this process requires only the experimental second-order nonlinear parameter and a rough damage size estimation as input, it does not need any baseline testing with the undamaged structure or any dynamic modelling of the fatigue crack growth. To validate this numerical model, the second-order nonlinear parameter was experimentally evaluated at various points over the fatigue life of an aluminium (AA6082-T6) coupon and the crack propagation was measured using an optical microscope. A good correlation was achieved between the experimental set-up and the nonlinear constitutive model.
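
    For context, the second-order nonlinear parameter referred to above is commonly estimated from the fundamental and second-harmonic amplitudes. One textbook longitudinal-wave convention is assumed in the sketch below (beta = 8*A2 / (k^2 * x * A1^2)); it is not necessarily the convention used by the authors, and the input values are placeholders:

        import math

        def beta_nonlinear(a1, a2, frequency_hz, sound_speed, propagation_distance):
            """Second-order nonlinearity parameter from harmonic amplitudes.

            a1, a2 : fundamental and second-harmonic displacement amplitudes
            One common convention: beta = 8*A2 / (k^2 * x * A1^2),
            with k the wavenumber of the fundamental and x the propagation distance.
            """
            k = 2.0 * math.pi * frequency_hz / sound_speed
            return 8.0 * a2 / (k**2 * propagation_distance * a1**2)

        # Placeholder values loosely representative of an aluminium coupon measurement.
        print(beta_nonlinear(a1=1.0e-9, a2=2.0e-12, frequency_hz=5.0e6,
                             sound_speed=6320.0, propagation_distance=0.02))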

  20. Validation of the Coronal Thick Target Source Model

    NASA Astrophysics Data System (ADS)

    Fleishman, Gregory D.; Xu, Yan; Nita, Gelu N.; Gary, Dale E.

    2016-01-01

    We present detailed 3D modeling of a dense, coronal thick-target X-ray flare using the GX Simulator tool, photospheric magnetic measurements, and microwave imaging and spectroscopy data. The developed model offers a remarkable agreement between the synthesized and observed spectra and images in both X-ray and microwave domains, which validates the entire model. The flaring loop parameters are chosen to reproduce the emission measure, temperature, and the nonthermal electron distribution at low energies derived from the X-ray spectral fit, while the remaining parameters, unconstrained by the X-ray data, are selected such as to match the microwave images and total power spectra. The modeling suggests that the accelerated electrons are trapped in the coronal part of the flaring loop, but away from where the magnetic field is minimal, and, thus, demonstrates that the data are clearly inconsistent with electron magnetic trapping in the weak diffusion regime mediated by the Coulomb collisions. Thus, the modeling supports the interpretation of the coronal thick-target sources as sites of electron acceleration in flares and supplies us with a realistic 3D model with physical parameters of the acceleration region and flaring loop.

  1. Validation of document image defect models for optical character recognition

    SciTech Connect

    Li, Y.; Lopresti, D.; Tomkins, A.

    1994-12-31

    In this paper we consider the problem of evaluating models for physical defects affecting the optical character recognition (OCR) process. While a number of such models have been proposed, the contention that they produce the desired result is typically argued in an ad hoc and informal way. We introduce a rigorous and more pragmatic definition of when a model is accurate: we say a defect model is validated if the OCR errors induced by the model are effectively indistinguishable from the errors encountered when using real scanned documents. We present two measures to quantify this similarity: the Vector Space method and the Coin Bias method. The former adapts an approach used in information retrieval, the latter simulates an observer attempting to do better than a "random" guesser. We compare and contrast the two techniques based on experimental data; both seem to work well, suggesting this is an appropriate formalism for the development and evaluation of document image defect models.
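
    A sketch of one plausible reading of the Vector Space measure mentioned above (adapted from information retrieval): OCR error events from model-degraded and real scanned documents are counted into frequency vectors, and their similarity is the cosine of the angle between the vectors. The error categories and counts below are hypothetical:

        import math
        from collections import Counter

        def cosine_similarity(counts_a, counts_b):
            """Cosine similarity between two error-frequency vectors."""
            keys = set(counts_a) | set(counts_b)
            dot = sum(counts_a.get(k, 0) * counts_b.get(k, 0) for k in keys)
            norm_a = math.sqrt(sum(v * v for v in counts_a.values()))
            norm_b = math.sqrt(sum(v * v for v in counts_b.values()))
            return dot / (norm_a * norm_b)

        # Hypothetical OCR confusion counts ("e->c" means 'e' recognized as 'c').
        errors_synthetic = Counter({"e->c": 40, "rn->m": 25, "l->1": 10, "o->0": 5})
        errors_real      = Counter({"e->c": 35, "rn->m": 30, "l->1": 12, "h->b": 4})
        print(f"similarity = {cosine_similarity(errors_synthetic, errors_real):.3f}")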

  2. Experimental validation of a finite-element model updating procedure

    NASA Astrophysics Data System (ADS)

    Kanev, S.; Weber, F.; Verhaegen, M.

    2007-02-01

    This paper validates an approach to damage detection and localization based on finite-element model updating (FEMU). The approach has the advantage over other existing methods to FEMU that it simultaneously updates all three finite-element model matrices at the same time preserving their structure (connectivity), symmetry and positive-definiteness. The approach is tested in this paper on an experimental setup consisting of a steel cable, where local mass changes and global change in the tension of the cable are introduced. The new algorithm is applied to identify the size and location of different changes in the structural parameters (mass, stiffness and damping). The obtained results clearly indicate that even small structural changes can be detected and localized with the new method. Additionally, a comparison with many other FEMU-based methods has been performed to show the superiority of the considered method.

  3. Ultrasonic transducers for cure monitoring: design, modelling and validation

    NASA Astrophysics Data System (ADS)

    Lionetto, Francesca; Montagna, Francesco; Maffezzoli, Alfonso

    2011-12-01

    The finite element method (FEM) has been applied to simulate the ultrasonic wave propagation in a multilayered transducer, expressly designed for high-frequency dynamic mechanical analysis of polymers. The FEM model includes an electro-acoustic (active element) and some acoustic (passive elements) transmission lines. The simulation of the acoustic propagation accounts for the interaction between the piezoceramic and the materials in the buffer rod and backing, and the coupling between the electric and mechanical properties of the piezoelectric material. As a result of the simulations, the geometry and size of the modelled ultrasonic transducer has been optimized and used for the realization of a prototype transducer for cure monitoring. The transducer performance has been validated by measuring the velocity changes during the polymerization of a thermosetting matrix of composite materials.

  4. Defect distribution model validation and effective process control

    NASA Astrophysics Data System (ADS)

    Zhong, Lei

    2003-07-01

    Assumption of the underlying probability distribution is an essential part of effective process control. In this article, we demonstrate how to improve the effectiveness of equipment monitoring and process-induced defect control through properly selecting, validating and using hypothetical distribution models. The testing method is based on probability plotting, which is made possible through order statistics. Since each ordered sample data point has a cumulative probability associated with it, which is calculated as a function of sample size, the assumption validity is readily judged by the linearity of the ordered sample data versus the deviate predicted by the assumed statistical model from the cumulative probability. A comparison is made between normal and lognormal distributions to illustrate how dramatically the distribution model could affect the control limit setting. Examples presented include defect data collected on the SP1 dark-field inspection tool on a variety of deposited and polished metallic and dielectric films. We find that the defect count distribution is in most cases approximately lognormal. We show that the normal distribution is an inadequate assumption, as clearly indicated by the non-linearity of the probability plots. Misuse of the normal distribution leads to an overly optimistic process control limit, typically 50% tighter than suggested by the lognormal distribution. The inappropriate control limit setting consequently results in an excursion rate at a level too high to be manageable. The lognormal distribution is a valid assumption because it is positively skewed, which adequately accounts for the fact that defect count distributions typically have a long tail. In essence, use of the lognormal distribution is a suggestion that the long tail be treated as part of the process entitlement (capability) instead of process excursion. The adjustment of the expected process entitlement is reflected and quantified by the skewness of the lognormal distribution, yielding a more realistic estimate (defect count control limit).

    It is of particular importance to use a validated probability distribution when the sample size is small. A statistical process control (SPC) chart is generally constructed on the assumption of normality of the underlying population. Although this assumption is not true, as we discussed in the previous paragraph, the sample average will follow a normal distribution regardless of the underlying distribution according to the central limit theorem. However, this practice requires a large sample, which is sometimes impractical, especially in the stage of process development and yield ramp-up, when the process control limit is, and has to be, a moving target, enabling rapid and constant yield learning with a minimal amount of production interruption and/or resource reallocation.

    In this work, we demonstrate that a validated statistical model such as the lognormal distribution allows us to monitor the progress in a quantifiable and measurable way, and to tighten the control limits smoothly and systematically. To do so, we use the verified model to make a deduction about the expected defect count at a predetermined deviate, say 3σ. The estimate error or range is a function of sample variation, sample size, and the confidence level at which the estimation is being made. If we choose a fixed sample size and confidence level, the defectivity performance is explicitly defined and gauged by the estimate and the estimate error.
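
    As a sketch of the control-limit comparison described above (illustrative synthetic data, not the SP1 measurements), defect counts are fitted under both normal and lognormal assumptions and the resulting upper control limits at the 3-sigma-equivalent quantile (about 99.87%) are compared:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        # Illustrative defect counts with a long right tail (lognormal-like behavior).
        defects = rng.lognormal(mean=3.0, sigma=0.6, size=200)

        q = stats.norm.cdf(3.0)                  # 3-sigma-equivalent quantile, ~0.99865

        # Normal assumption: UCL = mean + 3 * standard deviation.
        ucl_normal = defects.mean() + 3.0 * defects.std(ddof=1)

        # Lognormal assumption: fit the log of the data, take the same quantile.
        mu, sigma = np.log(defects).mean(), np.log(defects).std(ddof=1)
        ucl_lognormal = np.exp(stats.norm.ppf(q, loc=mu, scale=sigma))

        print(f"UCL (normal assumption):    {ucl_normal:.1f}")
        print(f"UCL (lognormal assumption): {ucl_lognormal:.1f}")

    With long-tailed data of this kind the normal assumption yields a markedly tighter limit than the lognormal one, which is the behavior the article describes.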

  5. Long-term ELBARA-II Assistance to SMOS Land Product and Algorithm Validation at the Valencia Anchor Station (MELBEX Experiment 2010-2013)

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto; Wigneron, Jean-Pierre; Schwank, Mike; Miernecki, Maciej; Kerr, Yann; Casal, Tania; Delwart, Steven; Fernandez-Moran, Roberto; Mecklenburg, Susanne; Coll Pajaron, M. Amparo; Salgado Hernanz, Paula

    The main activity of the Valencia Anchor Station (VAS) is currently to support the validation of SMOS (Soil Moisture and Ocean Salinity) Level 2 and 3 land products (soil moisture, SM, and vegetation optical depth, TAU). With this aim, the European Space Agency (ESA) has provided the Climatology from Satellites Group of the University of Valencia with an ELBARA-II microwave radiometer under a loan agreement since September 2009. During this time, brightness temperatures (TB) have continuously been acquired, except during normal maintenance or minor repair interruptions. ELBARA-II is an L-band dual-polarization radiometer with two channels (1400-1418 MHz, 1409-1427 MHz). It is continuously measuring over a vineyard field (El Renegado, Caudete de las Fuentes, Valencia) from a 15 m platform with a constant protocol for calibration and angular scanning measurements, with the aim of assisting the validation of SMOS land products and the calibration of the L-MEB (L-Band Emission of the Biosphere) model, the basis for the SMOS Level 2 Land Processor, over the VAS validation site. One of the advantages of using the VAS site is the possibility of studying two different environmental conditions over the course of the year. While the vine cycle extends mainly between April and October, during the rest of the year the area remains under bare soil conditions, adequate for the calibration of the soil model. The measurement protocol currently running has proven to be robust over the whole operation time and will be extended in time as much as possible to continue providing a long-term data set of ELBARA-II TB measurements and retrieved SM and TAU. This data set is also proving useful in support of SMOS scientific activities: the VAS area and, specifically, the ELBARA-II site offer good conditions for monitoring the long-term evolution of SMOS Level 2 and Level 3 land products and for interpreting possible anomalies that may obscure hidden sensor biases. In addition, the SM and TAU currently retrieved from the ELBARA-II TB data by inversion of the L-MEB model can also be compared to the Level 2 and Level 3 SMOS products. L-band ELBARA-II measurements provide area-integrated estimations of SM and TAU that are much more representative of the soil and vegetation conditions at field scale than ground measurements (from capacitive probes for SM and destructive measurements for TAU). For instance, Miernecki et al. (2012) and Wigneron et al. (2012) showed that very good correlations could be obtained between TB data and SM retrievals obtained from both SMOS and ELBARA-II over the 2010-2011 time period. The analysis of the quality of these correlations over a long time period can be very useful to evaluate the SMOS measurements and retrieved products (Level 2 and 3). The present work, which extends the analysis over almost four years (2010-2013), emphasizes the need to (i) maintain the long-term record of ELBARA-II measurements and (ii) enhance as much as possible the control over other parameters, especially soil roughness (SR), vegetation water content (VWC) and surface temperature, in order to interpret the retrieved results obtained from both the SMOS and ELBARA-II instruments.

  6. A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.

    ERIC Educational Resources Information Center

    Edmonston, Leon P.; Randall, Robert S.

    A decision model designed to determine the reliability and validity of criterion-referenced measures (CRMs) is presented. General procedures that pertain to the model are discussed with respect to measures of relationship, reliability, validity (content, criterion-oriented, and construct validation), and item analysis. The decision model is presented in

  7. Predictive validity of behavioural animal models for chronic pain

    PubMed Central

    Berge, Odd-Geir

    2011-01-01

    Rodent models of chronic pain may elucidate pathophysiological mechanisms and identify potential drug targets, but whether they predict clinical efficacy of novel compounds is controversial. Several potential analgesics have failed in clinical trials in spite of strong support for efficacy from animal modelling, but there are also examples of successful modelling. Significant differences in how methods are implemented and results are reported mean that a literature-based comparison between preclinical data and clinical trials will not reveal whether a particular model is generally predictive. Limited reporting of negative outcomes prevents a reliable estimate of the specificity of any model. Animal models tend to be validated with standard analgesics and may be biased towards tractable pain mechanisms. Moreover, preclinical publications rarely contain drug exposure data, and drugs are usually given in high doses and as a single administration, which may lead to drug distribution and exposure deviating significantly from clinical conditions. The greatest challenge for predictive modelling is, however, the heterogeneity of the target patient populations, in terms of both symptoms and pharmacology, probably reflecting differences in pathophysiology. In well-controlled clinical trials, a majority of patients show less than 50% reduction in pain. A model that responds well to current analgesics should therefore predict efficacy only in a subset of patients within a diagnostic group. It follows that successful translation requires several models for each indication, reflecting critical pathophysiological processes, combined with data linking exposure levels with effect on target. LINKED ARTICLES This article is part of a themed issue on Translational Neuropharmacology. To view the other articles in this issue visit http://dx.doi.org/10.1111/bph.2011.164.issue-4 PMID:21371010

  8. Development and validation of a liquid composite molding model

    NASA Astrophysics Data System (ADS)

    Bayldon, John Michael

    2007-12-01

    In composite manufacturing, Vacuum Assisted Resin Transfer Molding (VARTM) is becoming increasingly important as a cost-effective manufacturing method for structural composites. In this process the dry preform (reinforcement) is placed on a rigid tool and covered by a flexible film to form an airtight vacuum bag. Liquid resin is drawn under vacuum through the preform inside the vacuum bag. Modeling of this process relies on a good understanding of closely coupled phenomena: the resin flow depends on the preform permeability, which in turn depends on the local fluid pressure and the preform compaction behavior. VARTM models for predicting the flow rate in this process do exist; however, they are not able to properly predict the flow for all classes of reinforcement material. In this thesis, the continuity equation used in VARTM models is reexamined and a modified form proposed. In addition, the compaction behavior of the preform in both saturated and dry states is studied in detail and new models are proposed for the compaction behavior. To assess the validity of the proposed models, the shadow moiré method was adapted and used to perform full-field measurement of the preform thickness during infusion, in addition to the usual measurements of flow front position. A new method was developed and evaluated for the analysis of the moiré data related to the VARTM process; the method, however, has wider applicability to other full-field thickness measurements. The use of this measurement method demonstrated that although the new compaction models work well in the characterization tests, they do not properly describe all the preform features required for modeling the process. In particular, the effect of varying saturation on the preform's behavior requires additional study. The flow models developed did, however, improve the prediction of the flow rate for the more compliant preform material tested, and the experimental techniques have shown where additional test methods will be required to further improve the models.
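
    As a rough illustration of the kind of coupled flow/compaction calculation examined in this thesis, the sketch below advances a 1D Darcy flow front under a constant vacuum-driven pressure drop, with permeability tied to fiber volume fraction through a Kozeny-Carman-type relation. The constants, the crude "mean compaction state" shortcut, and all names are placeholders, not the models actually developed in the work.

```python
import numpy as np

MU = 0.1                     # resin viscosity (Pa.s), assumed
P_VAC = 1.0e5                # vacuum-driven pressure difference (Pa)
VF0, VF_MAX = 0.45, 0.55     # fiber volume fraction, relaxed vs. fully compacted
R_FIBER = 1.0e-5             # effective fiber radius (m), assumed

def permeability(vf, kozeny=0.2):
    """Kozeny-Carman-type permeability as a function of fiber volume fraction."""
    return kozeny * R_FIBER**2 * (1.0 - vf) ** 3 / vf**2

def flow_front(t_end=600.0, dt=0.1):
    """Advance a 1D VARTM flow front; the saturated region is assigned a mean
    compaction state between the wet inlet and the dry front as a crude stand-in
    for the coupled pressure-compaction solution."""
    x_f, t, history = 1.0e-4, 0.0, []
    while t < t_end:
        vf = 0.5 * (VF0 + VF_MAX)          # mean fiber volume fraction (placeholder)
        phi = 1.0 - vf                     # porosity
        k = permeability(vf)
        # Darcy with a linear pressure profile: dx_f/dt = K * dP / (mu * phi * x_f)
        x_f += dt * k * P_VAC / (MU * phi * x_f)
        t += dt
        history.append((t, x_f))
    return history

print("flow front after 10 min: %.3f m" % flow_front()[-1][1])
```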

  9. Literature-derived bioaccumulation models for earthworms: Development and validation

    SciTech Connect

    Sample, B.E.; Suter, G.W. II; Beauchamp, J.J.; Efroymson, R.A.

    1999-09-01

    Estimation of contaminant concentrations in earthworms is a critical component in many ecological risk assessments. Without site-specific data, literature-derived uptake factors or models are frequently used. Although considerable research has been conducted on contaminant transfer from soil to earthworms, most studies focus on only a single location. External validation of transfer models has not been performed. The authors developed a database of soil and tissue concentrations for nine inorganic and two organic chemicals. Only studies that presented total concentrations in depurated earthworms were included. Uptake factors and simple and multiple regression models of natural-log-transformed concentrations of each analyte in soil and earthworms were developed using data from 26 studies. These models were then applied to data from six additional studies. Estimated and observed earthworm concentrations were compared using nonparametric Wilcoxon signed-rank tests. Relative accuracy and quality of the different estimation methods were evaluated by calculating the proportional deviation of the estimate from the measured value. With the exception of Cr, significant single-variable (e.g., soil concentration) regression models were fit for each analyte. Inclusion of soil Ca improved model fits for Cd and Pb. Soil pH only marginally improved model fits. The best general estimates of chemical concentrations in earthworms were generated by simple ln-ln regression models for As, Cd, Cu, Hg, Mn, Pb, Zn, and polychlorinated biphenyls. No method accurately estimated Cr or Ni in earthworms. Although multiple regression models including pH generated better estimates for a few analytes, in general, the predictive utility gained by incorporating environmental variables was marginal.
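
    The simple ln-ln regression approach referred to above is straightforward to reproduce. The sketch below fits ln(tissue) against ln(soil) for one analyte and scores held-out data with the proportional-deviation metric; the data arrays and names are hypothetical, not values from the study.

```python
import numpy as np

def fit_lnln(soil, worm):
    """Fit ln(C_worm) = b0 + b1 * ln(C_soil) by ordinary least squares."""
    x, y = np.log(soil), np.log(worm)
    b1, b0 = np.polyfit(x, y, 1)
    return b0, b1

def predict(b0, b1, soil):
    """Back-transform the ln-ln model to concentration units."""
    return np.exp(b0 + b1 * np.log(soil))

def proportional_deviation(pred, obs):
    """Relative accuracy metric: (estimate - measured) / measured."""
    return (pred - obs) / obs

# Hypothetical calibration data (mg/kg dry weight) for a single analyte, e.g. Cd
soil_cal = np.array([0.5, 1.2, 3.0, 8.5, 20.0, 45.0])
worm_cal = np.array([2.1, 4.0, 7.5, 15.0, 28.0, 50.0])
b0, b1 = fit_lnln(soil_cal, worm_cal)

# Apply the fitted model to an independent (hypothetical) validation study
soil_val = np.array([2.0, 10.0, 30.0])
worm_val = np.array([6.0, 18.0, 40.0])
print(proportional_deviation(predict(b0, b1, soil_val), worm_val))
```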

  10. Test cell modeling and optimization for FPD-II

    SciTech Connect

    Haney, S.W.; Fenstermacher, M.E.

    1985-04-10

    The Fusion Power Demonstration, Configuration II (FPD-II), will be a DT-burning tandem mirror facility with thermal barriers, designed as the next-step engineering test reactor (ETR) to follow the tandem mirror ignition test machines. Current plans call for FPD-II to be a multi-purpose device. For approximately the first half of its lifetime, it will operate as a high-Q ignition machine designed to reach or exceed engineering break-even and to demonstrate the technological feasibility of tandem mirror fusion. The second half of its operation will focus on the evaluation of candidate reactor blanket designs using a neutral-beam-driven test cell inserted at the midplane of the 90 m long cell. This machine, called FPD-II+T, uses an insert configuration similar to that used in the MFTF-α+T study. The modeling and optimization of FPD-II+T are the topic of the present paper.

  11. Systematic approach to verification and validation: High explosive burn models

    SciTech Connect

    Menikoff, Ralph; Scovel, Christina A.

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data are scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in the electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time-consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web-based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code, run a simulation, and generate a comparison plot showing simulated and experimental velocity gauge data. These scripts are then applied to several series of experiments and to several HE burn models. The same systematic approach is applicable to other types of material models; for example, equation of state models and material strength models.
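
    The automation loop the authors describe (read experiment metadata from a standardized file header, generate a hydro-code input deck, run the simulation, and plot simulated against measured velocity-gauge records) could look roughly like the sketch below. The file format, field names, and the `hydro_code` command line are invented for illustration; they are not the actual HED or LANL tooling.

```python
import json, subprocess
import numpy as np
import matplotlib.pyplot as plt

def load_experiment(path):
    """Read a gauge-record file whose first line carries key experimental
    parameters as JSON meta-data (hypothetical format)."""
    with open(path) as f:
        header = json.loads(f.readline())      # e.g. {"HE": "PBX-9502", "impact_velocity": 0.9}
        data = np.loadtxt(f)                   # columns: time, gauge velocity
    return header, data

def write_input_deck(header, burn_model, deck_path):
    """Generate a hydro-code input file for one experiment and one burn model."""
    with open(deck_path, "w") as f:
        f.write(f"explosive        {header['HE']}\n")
        f.write(f"burn_model       {burn_model}\n")
        f.write(f"impact_velocity  {header['impact_velocity']}\n")

def run_and_compare(exp_path, burn_model):
    """Run one simulation and overlay simulated and measured gauge velocities."""
    header, exp = load_experiment(exp_path)
    write_input_deck(header, burn_model, "run.in")
    subprocess.run(["hydro_code", "run.in", "-o", "run.out"], check=True)  # hypothetical CLI
    sim = np.loadtxt("run.out")
    plt.figure()
    plt.plot(exp[:, 0], exp[:, 1], "k.", label="experiment")
    plt.plot(sim[:, 0], sim[:, 1], "r-", label=burn_model)
    plt.xlabel("time"); plt.ylabel("gauge velocity"); plt.legend()
    plt.savefig(f"{header['HE']}_{burn_model}.png")

for model in ["forest_fire", "ignition_growth"]:
    run_and_compare("shot_1043.dat", model)
```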

  12. Mathematical modelling in physics and engineering. II

    NASA Astrophysics Data System (ADS)

    Oke, K. H.; Jones, A. L.

    1982-11-01

    For pt.I see Phys. Educ., vol.17, p.220 (1982). The authors present an example of mathematical modelling in the classroom, called 'Power from windmills', which has considerable potential for development both as a model and as a series of modelling exercises of increasing difficulty for students with different backgrounds.

  13. Validation of a Global Hydrodynamic Flood Inundation Model

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Smith, A.; Sampson, C. C.; Alfieri, L.; Neal, J. C.

    2014-12-01

    In this work we present first validation results for a hyper-resolution global flood inundation model. We use a true hydrodynamic model (LISFLOOD-FP) to simulate flood inundation at 1 km resolution globally and then use downscaling algorithms to determine flood extent and depth at 90 m spatial resolution. Terrain data are taken from a custom version of the SRTM data set that has been processed specifically for hydrodynamic modelling. Return periods of flood flows along the entire global river network are determined using: (1) empirical relationships between catchment characteristics and index flood magnitude in different hydroclimatic zones, derived from global runoff data; and (2) an index flood growth curve, also empirically derived. Bankfull return-period flow is then used to set channel width and depth, and flood defence impacts are modelled using empirical relationships between GDP, urbanization and defence standard of protection. The results of these simulations are global flood hazard maps for a number of different return period events from 1 in 5 to 1 in 1000 years. We compare these predictions to flood hazard maps developed by national government agencies in the UK and Germany using similar methods but employing detailed local data, and to observed flood extent at a number of sites including St. Louis, USA, and Bangkok, Thailand. Results show that global flood hazard models can have considerable skill given careful treatment to overcome errors in the publicly available data that are used as their input.
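
    The index-flood/growth-curve construction mentioned above amounts to scaling a catchment-specific "index" (mean annual) flood by a regional growth factor for each return period. The sketch below shows the idea with a Gumbel-type growth curve; the regression coefficients and growth-curve parameters are illustrative placeholders, not the relationships the authors derived from global runoff data.

```python
import numpy as np

def index_flood(area_km2, mean_annual_precip_mm, a=0.5, b=0.8, c=1.1):
    """Empirical index flood (mean annual flood, m3/s) from catchment
    characteristics; coefficients are illustrative only."""
    return a * area_km2**b * (mean_annual_precip_mm / 1000.0)**c

def growth_factor(return_period_yr, loc=1.0, scale=0.35):
    """Gumbel-type regional growth curve: multiplier applied to the index flood."""
    return loc - scale * np.log(-np.log(1.0 - 1.0 / return_period_yr))

for T in (5, 100, 1000):
    q = index_flood(12_000, 900) * growth_factor(T)
    print(f"{T:>5}-yr flow: {q:8.0f} m3/s")
```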

  14. Standard solar model. II - g-modes

    NASA Technical Reports Server (NTRS)

    Guenther, D. B.; Demarque, P.; Pinsonneault, M. H.; Kim, Y.-C.

    1992-01-01

    The paper presents g-mode oscillation calculations for a set of modern solar models. Each solar model is based on a single modification or improvement to the physics of a reference solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The error in the predicted g-mode periods associated with the uncertainties in the model physics is estimated, and the specific sensitivities of the g-mode periods and their period spacings to the different model structures are described. In addition, these models are compared to a sample of published observations. A remarkably good agreement is found between the 'best' solar model and the observations of Hill and Gu (1990).

  15. Validation Analysis of the Shoal Groundwater Flow and Transport Model

    SciTech Connect

    A. Hassan; J. Chapman

    2008-11-01

    Environmental restoration at the Shoal underground nuclear test is following a process prescribed by a Federal Facility Agreement and Consent Order (FFACO) between the U.S. Department of Energy, the U.S. Department of Defense, and the State of Nevada. Characterization of the site included two stages of well drilling and testing in 1996 and 1999, and development and revision of numerical models of groundwater flow and radionuclide transport. Agreement on a contaminant boundary for the site and a corrective action plan was reached in 2006. Later that same year, three wells were installed for the purposes of model validation and site monitoring. The FFACO prescribes a five-year proof-of-concept period for demonstrating that the site groundwater model is capable of producing meaningful results with an acceptable level of uncertainty. The corrective action plan specifies a rigorous seven-step validation process. The accepted groundwater model is evaluated using that process in light of the newly acquired data. The conceptual model of groundwater flow for the Project Shoal Area considers groundwater flow through the fractured granite aquifer comprising the Sand Springs Range. Water enters the system by the infiltration of precipitation directly on the surface of the mountain range. Groundwater leaves the granite aquifer by flowing into alluvial deposits in the adjacent basins of Fourmile Flat and Fairview Valley. A groundwater divide is interpreted as coinciding with the western portion of the Sand Springs Range, west of the underground nuclear test, preventing flow from the test into Fourmile Flat. A very low conductivity shear zone east of the nuclear test roughly parallels the divide. The presence of these lateral boundaries, coupled with a regional discharge area to the northeast, is interpreted in the model as causing groundwater from the site to flow in a northeastward direction into Fairview Valley. Steady-state flow conditions are assumed given the absence of groundwater withdrawal activities in the area. The conceptual and numerical models were developed based upon regional hydrogeologic investigations conducted in the 1960s, site characterization investigations (including ten wells and various geophysical and geologic studies) at Shoal itself prior to and immediately after the test, and two site characterization campaigns in the 1990s for environmental restoration purposes (including eight wells and a year-long tracer test). The new wells are denoted MV-1, MV-2, and MV-3, and are located to the north-northeast of the nuclear test. The groundwater model was generally lacking data in the north-northeastern area; only HC-1 and the abandoned PM-2 wells existed in this area. The wells provide data on fracture orientation and frequency, water levels, hydraulic conductivity, and water chemistry for comparison with the groundwater model. A total of 12 real-number validation targets were available for the validation analysis, including five values of hydraulic head, three hydraulic conductivity measurements, three hydraulic gradient values, and one angle value for the lateral gradient in radians. In addition, the fracture dip and orientation data provide comparisons to the distributions used in the model, and radiochemistry is available for comparison to model output. Goodness-of-fit analysis indicates that some of the model realizations correspond well with the newly acquired conductivity, head, and gradient data, while others do not.
Other tests indicated that additional model realizations may be needed to test if the model input distributions need refinement to improve model performance. This approach (generating additional realizations) was not followed because it was realized that there was a temporal component to the data disconnect: the new head measurements are on the high side of the model distributions, but the heads at the original calibration locations themselves have also increased over time. This indicates that the steady-state assumption of the groundwater model is in error. To test the robustness of the model despite the transient nature of the heads, the newly acquired MV hydraulic head values were trended back to their likely values in 1999, the date of the calibration measurements. Additional statistical tests are performed using both the backward-projected MV heads and the observed heads to identify acceptable model realizations. A jackknife approach identified two possible threshold values to consider. For the analysis using the backward-trended heads, either 458 or 818 realizations (out of 1,000) are found acceptable, depending on the threshold chosen. The analysis using the observed heads found either 284 or 709 realizations acceptable. The impact of the refined set of realizations on the contaminant boundary was explored using an assumed starting mass of a single radionuclide and the acceptable realizations from the backward-trended analysis.
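
    A minimal sketch of the kind of realization screening described above: each stochastic realization is scored against the validation targets (heads, conductivities, gradients), and realizations whose normalized misfit exceeds a threshold derived from the score population are rejected. The target values, normalization, and threshold logic below are placeholders, not the FFACO seven-step procedure or the actual Shoal analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical validation targets: 5 heads (m), 3 log10-conductivities, 3 gradients
targets = np.array([1290.2, 1291.5, 1289.8, 1292.0, 1290.9,
                    -7.1, -6.8, -7.4,
                    0.0021, 0.0018, 0.0025])
scales = np.array([1.0] * 5 + [0.5] * 3 + [0.0005] * 3)   # per-target normalization

def misfit(simulated):
    """Normalized RMS misfit of one realization against all validation targets."""
    return np.sqrt(np.mean(((simulated - targets) / scales) ** 2))

# Stand-in for 1,000 realizations' simulated values at the target locations
realizations = targets + rng.normal(scale=scales * 2.0, size=(1000, targets.size))

scores = np.array([misfit(r) for r in realizations])

# Simple outlier-style acceptance threshold on the score population
threshold = np.mean(scores) + 2.0 * np.std(scores)
accepted = scores <= threshold
print(f"{accepted.sum()} of {len(scores)} realizations accepted")
```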

  16. NAIRAS aircraft radiation model development, dose climatology, and initial validation

    NASA Astrophysics Data System (ADS)

    Mertens, Christopher J.; Meier, Matthias M.; Brown, Steven; Norman, Ryan B.; Xu, Xiaojing

    2013-10-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis suggests that these single-point differences will be within 30% when a new deterministic pion-initiated electromagnetic cascade code is integrated into NAIRAS, an effort which is currently underway.

  17. Aqueous Solution Vessel Thermal Model Development II

    SciTech Connect

    Buechler, Cynthia Eileen

    2015-10-28

    The work presented in this report is a continuation of the work described in the May 2015 report, “Aqueous Solution Vessel Thermal Model Development”. This computational fluid dynamics (CFD) model aims to predict the temperature and bubble volume fraction in an aqueous solution of uranium. These values affect the reactivity of the fissile solution, so it is important to be able to calculate them and determine their effects on the reaction. Part A of this report describes some of the parameter comparisons performed on the CFD model using Fluent. Part B describes the coupling of the Fluent model with a Monte-Carlo N-Particle (MCNP) neutron transport model. The fuel tank geometry is the same as it was in the May 2015 report, annular with a thickness-to-height ratio of 0.16. An accelerator-driven neutron source provides the excitation for the reaction, and internal and external water cooling channels remove the heat. The model used in this work incorporates the Eulerian multiphase model with lift, wall lubrication, turbulent dispersion and turbulence interaction. The buoyancy-driven flow is modeled using the Boussinesq approximation, and the flow turbulence is determined using the k-ω Shear-Stress-Transport (SST) model. The dispersed turbulence multiphase model is employed to capture the multiphase turbulence effects.

  18. Geophysical Monitoring for Validation of Transient Permafrost Models (Invited)

    NASA Astrophysics Data System (ADS)

    Hauck, C.; Hilbich, C.; Marmy, A.; Scherler, M.

    2013-12-01

    Permafrost is a widespread phenomenon at high latitudes and high altitudes and describes the permanently frozen state of the subsurface in lithospheric material. In the context of climate change, both new monitoring and modelling techniques are required to observe and predict potential permafrost changes, e.g. the warming and degradation which may lead to the liberation of carbon (Arctic) and the destabilisation of permafrost slopes (mountains). Mountain permafrost occurrences in the European Alps are characterised by temperatures only a few degrees below zero and are therefore particularly sensitive to projected climate changes in the 21st century. Traditional permafrost observation techniques are mainly based on thermal monitoring in the vertical and horizontal dimensions, but they provide only weak indications of physical properties such as ice or liquid water content. Geophysical techniques can be used to characterise permafrost occurrences and to monitor their changes, as the physical properties of frozen and unfrozen ground measured by geophysical techniques are markedly different. In recent years, electromagnetic, seismic and especially electrical methods have been used to continuously monitor permafrost occurrences and to detect long-term changes within the active layer and in the ice content within the permafrost layer. On the other hand, coupled transient thermal/hydraulic models are used to predict the evolution of permafrost occurrences under different climate change scenarios. These models rely on suitable validation data for a certain observation period, which are usually restricted to data sets of ground temperature and active layer depth. Very important initialisation and validation data for permafrost models are, however, ground ice content and unfrozen water content in the active layer. In this contribution we will present a geophysical monitoring application to estimate ice and water content and their evolution in time at a permafrost station in the Swiss Alps. These data are then used to validate the coupled mass and energy balance soil model COUP, which is used for long-term projections of the permafrost evolution in the Swiss Alps. For this, we apply the recently developed 4-phase model, which is based on simple petrophysical relationships and uses geoelectric and seismic tomographic data sets as input data. In addition, we use continuously measured electrical resistivity tomography data sets and soil moisture data at daily resolution to compare modelled ice content changes and geophysical observations at high temporal resolution. The results still show large uncertainties in both model approaches regarding the absolute ice content values, but much smaller uncertainties regarding the changes in ice and unfrozen water content. We conclude that this approach is well suited for the analysis of permafrost changes in both model and monitoring studies, even though more effort is needed to obtain in situ ground truth data of ice content and porosity.
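
    To illustrate the petrophysical idea behind the 4-phase approach mentioned above, the sketch below combines Archie's law (resistivity gives the water fraction) with a time-averaged velocity mixing rule (P-wave velocity gives the ice fraction), closing the pore-space balance with air. The porosity, Archie parameters, and phase velocities are assumed placeholder values, not the site-specific calibration used in the study.

```python
import numpy as np

# Assumed petrophysical parameters (placeholders)
PHI = 0.4                                   # porosity
RHO_W, A, M, N = 100.0, 1.0, 2.0, 2.0       # pore-water resistivity (ohm.m), Archie a, m, n
V_W, V_I, V_A, V_R = 1500.0, 3500.0, 330.0, 5500.0   # phase P-wave velocities (m/s)

def four_phase(rho, v_p, phi=PHI):
    """Estimate water, ice and air fractions from bulk resistivity (ohm.m) and
    P-wave velocity (m/s): Archie's law fixes the water fraction, a time-average
    mixing rule fixes the ice fraction, and air closes the balance."""
    s_w = (A * RHO_W / (phi**M * rho)) ** (1.0 / N)     # water saturation from Archie
    f_w = phi * min(s_w, 1.0)
    f_i = ((1.0 / v_p - f_w * (1.0 / V_W - 1.0 / V_A)
            - phi / V_A - (1.0 - phi) / V_R)
           / (1.0 / V_I - 1.0 / V_A))
    f_a = phi - f_w - f_i
    return f_w, f_i, f_a

# Example: summer (wet, low resistivity) vs. winter (icy, high resistivity) readings
for label, rho, v_p in [("summer", 3000.0, 2200.0), ("winter", 30000.0, 3200.0)]:
    f_w, f_i, f_a = four_phase(rho, v_p)
    print(f"{label}: water={f_w:.2f}  ice={f_i:.2f}  air={f_a:.2f}")
```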

  19. PIV validation of blood-heart valve leaflet interaction modelling.

    PubMed

    Kaminsky, R; Dumont, K; Weber, H; Schroll, M; Verdonck, P

    2007-07-01

    The aim of this study was to validate the 2D computational fluid dynamics (CFD) results of a moving heart valve based on a fluid-structure interaction (FSI) algorithm with experimental measurements. Firstly, a pulsatile laminar flow through a monoleaflet valve model with a stiff leaflet was visualized by means of Particle Image Velocimetry (PIV). The inflow data sets were applied to a CFD simulation including blood-leaflet interaction. The measurement section with a fixed leaflet was enclosed in a standard mock loop in series with a Harvard Apparatus Pulsatile Blood Pump, a compliance chamber and a reservoir. Standard 2D PIV measurements were made at a frequency of 60 bpm. Average velocity magnitude results of 36 phase-locked measurements were evaluated at every 10 degrees of the pump cycle. For the CFD flow simulation, a commercially available package from Fluent Inc. was used in combination with in-house developed FSI code based on the Arbitrary Lagrangian-Eulerian (ALE) method. The CFD code was then applied to the leaflet to quantify the shear stress on it. Generally, the CFD results are in agreement with the PIV-evaluated data in the major flow regions, thereby validating the FSI simulation of a monoleaflet valve with a flexible leaflet. The applicability of the new CFD code for quantifying the shear stress on a flexible leaflet is thus demonstrated. PMID:17674341

  20. Case study for model validation : assessing a model for thermal decomposition of polyurethane foam.

    SciTech Connect

    Dowding, Kevin J.; Leslie, Ian H.; Hobbs, Michael L.; Rutherford, Brian Milne; Hills, Richard Guy; Pilch, Martin M.

    2004-10-01

    A case study is reported to document the details of a validation process to assess the accuracy of a mathematical model to represent experiments involving thermal decomposition of polyurethane foam. The focus of the report is to work through a validation process that addresses the following activities. The intended application of the mathematical model is discussed to better understand the pertinent parameter space. The parameter space of the validation experiments is mapped to the application parameter space. The mathematical models, the computer code used to solve the models, and the verification of that code are presented. Experimental data from two activities are used to validate the mathematical models. The first experiment assesses the chemistry model alone, and the second experiment assesses the model of coupled chemistry, conduction, and enclosure radiation. The model results of both experimental activities are summarized, and the uncertainty of the model in representing each experimental activity is estimated. The comparison between the experimental data and model results is quantified with various metrics. After addressing these activities, an assessment of the process for the case study is given. Weaknesses in the process are discussed and lessons learned are summarized.

  1. Richards model revisited: validation by and application to infection dynamics.

    PubMed

    Wang, Xiang-Sheng; Wu, Jianhong; Yang, Yong

    2012-11-21

    Ever since Richards proposed his flexible growth function more than half a century ago, it has been a mystery that this empirical function has fitted real ecological and epidemic data remarkably well, even though one of its parameters (the exponential term) does not seem to have a clear biological meaning. It is therefore a natural challenge to mathematical biologists to provide an explanation of these interesting coincidences and a biological interpretation of the parameter. Here we start from a simple epidemic SIR model to revisit the Richards model via an intrinsic relation between the two models. In particular, we prove that the exponential term in the Richards model has a one-to-one nonlinear correspondence to the basic reproduction number of the SIR model. This one-to-one relation provides an explicit formula for calculating the basic reproduction number. Another biologically significant observation is that the peak time is approximately one serial interval after the turning point. Moreover, we provide an explicit relation between the final outbreak size, the basic reproduction number and the peak epidemic size, which means that we can predict the final outbreak size shortly after the peak time. Finally, we introduce a constraint in the Richards model to address the overfitting problem observed in existing studies and then apply our method with the constraint to conduct validation analyses using data from recent outbreaks of prototypical infectious diseases, such as the Canada 2009 H1N1 outbreak, the GTA 2003 SARS outbreak, the Singapore 2005 dengue outbreak, and the Taiwan 2003 SARS outbreak. Our new formula gives much more stable and precise estimates of model parameters and key epidemic characteristics such as the final outbreak size, the basic reproduction number, and the turning point, compared with earlier simulations without constraints. PMID:22889641
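
    For reference, one common parameterization of the Richards growth function discussed above is given below (a sketch, not necessarily the exact notation of the paper): K is the final size, r the intrinsic growth rate, t_m the turning point, and a the exponential (shape) term whose one-to-one correspondence with the SIR basic reproduction number the authors establish; the explicit correspondence formula is given in the paper and is not reproduced here.

```latex
\frac{dI}{dt} \;=\; r\,I\left[1-\left(\frac{I}{K}\right)^{a}\right],
\qquad
I(t) \;=\; \frac{K}{\left[\,1 + a\,e^{-r a\,(t-t_m)}\,\right]^{1/a}} .
```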

  2. ADOPT: A Historically Validated Light Duty Vehicle Consumer Choice Model

    SciTech Connect

    Brooker, A.; Gonder, J.; Lopp, S.; Ward, J.

    2015-05-04

    The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy's Vehicle Technologies Office. It estimates technology improvement impacts on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit method and the mixed logit method to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes including vehicle price, fuel cost, acceleration, range and usable volume. The average importance of several attributes changes nonlinearly across the attribute's range and changes with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model, which captures the key aspects needed for summing petroleum use and greenhouse gas emissions: the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, limits on the introduction rate of new vehicle options, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data, and it matches historical sales in key dimensions including fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use, managing the inputs, simulation, and results.
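
    A minimal sketch of the logit-style sales-share calculation described above: each vehicle receives a utility from weighted attributes (price, fuel cost, acceleration, range, usable volume), and market shares follow the multinomial-logit formula. The vehicles and importance weights below are invented, and ADOPT's actual attribute weighting (nonlinear, income-dependent, with distributed importance) is richer than this linear form.

```python
import numpy as np

# Hypothetical attribute table: price ($), fuel cost ($/yr), 0-60 mph time (s),
# range (mi), usable volume (ft3)
vehicles = {
    "compact_ice": [24000, 1600, 8.5, 420, 90],
    "midsize_hev": [29000, 1100, 7.9, 550, 100],
    "bev_200mi":   [36000,  550, 6.5, 200, 95],
}

# Signed importance weights (utility per unit of each attribute); illustrative only
weights = np.array([-1.2e-4, -8.0e-4, -0.15, 4.0e-3, 1.0e-2])

def sales_shares(vehicles, weights):
    """Multinomial-logit market shares from linear-in-attributes utilities."""
    names = list(vehicles)
    u = np.array([np.dot(weights, vehicles[n]) for n in names])
    expu = np.exp(u - u.max())          # subtract the max for numerical stability
    return dict(zip(names, expu / expu.sum()))

for name, share in sales_shares(vehicles, weights).items():
    print(f"{name:12s} {share:5.1%}")
```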

  3. Empirical validation of SAR values predicted by FDTD modeling.

    PubMed

    Gajsek, P; Walters, T J; Hurt, W D; Ziriax, J M; Nelson, D A; Mason, P A

    2002-01-01

    The rapid increase in the use of numerical techniques to predict current density or specific absorption rate (SAR) in sophisticated three-dimensional anatomical computer models of man and animals has resulted in the need to understand how numerical solutions of the complex electrodynamic equations match empirical measurements. This aspect is particularly important because different numerical codes and computer models are used in research settings as a guide in designing clinical devices, telecommunication systems, and safety standards. To ensure compliance with safety guidelines during equipment design, manufacturing and maintenance, realistic and accurate models could be used as a bridge between empirical data and actual exposure conditions. Before these tools are transitioned into the hands of health safety officers and system designers, their accuracy and limitations must be verified under a variety of exposure conditions using available analytical and empirical dosimetry techniques. In this paper, empirical validation of SAR values predicted by a finite difference time domain (FDTD) numerical code on a sphere and a rat is presented. The results of this study show good agreement between empirical and theoretical methods and thus offer relatively high confidence in SAR predictions obtained from digital anatomical models based on the FDTD numerical code. PMID:11793404

  4. Development and validation of a railgun hydrogen pellet injector model

    SciTech Connect

    King, T.L.; Zhang, J.; Kim, K.

    1995-12-31

    A railgun hydrogen pellet injector model is presented and its predictions are compared with the experimental data. High-speed hydrogenic ice injection is the dominant refueling method for magnetically confined plasmas used in controlled thermonuclear fusion research. As experimental devices approach the scale of power-producing fusion reactors, the fueling requirements become increasingly more difficult to meet since, due to the large size and the high electron densities and temperatures of the plasma, hypervelocity pellets of a substantial size will need to be injected into the plasma continuously and at high repetition rates. Advanced technologies, such as the railgun pellet injector, are being developed to address this demand. Despite the apparent potential of electromagnetic launchers to produce hypervelocity projectiles, physical effects that were neither anticipated nor well understood have made it difficult to realize this potential. Therefore, it is essential to understand not only the theory behind railgun operation, but the primary loss mechanisms, as well. Analytic tools have been used by many researchers to design and optimize railguns and analyze their performance. This has led to a greater understanding of railgun behavior and opened the door for further improvement. A railgun hydrogen pellet injector model has been developed. The model is based upon a pellet equation of motion that accounts for the dominant loss mechanisms, inertial and viscous drag. The model has been validated using railgun pellet injectors developed by the Fusion Technology Research Laboratory at the University of Illinois at Urbana-Champaign.
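
    The pellet equation of motion described above (a Lorentz driving force opposed by inertial and viscous drag) can be integrated numerically as in the sketch below. The rail inductance gradient, pellet mass, drag coefficients, and current waveform are placeholders, not values from the University of Illinois injector model.

```python
import numpy as np

# Placeholder railgun and pellet parameters (illustrative only)
L_PRIME = 0.4e-6                    # rail inductance gradient (H/m)
M_PELLET = 2.0e-6                   # pellet (plus sabot) mass (kg)
A_BORE = np.pi * (1.5e-3) ** 2      # bore cross-sectional area (m^2)
RHO_GAS = 0.05                      # effective gas density ahead of the pellet (kg/m^3)
CD, C_VISC = 1.0, 1.0e-4            # inertial and viscous drag coefficients
BARREL = 1.0                        # barrel length (m)

def drive_current(t, i_peak=10e3, t_pulse=1.0e-3):
    """Idealized half-sine drive-current pulse (A)."""
    return i_peak * np.sin(np.pi * t / t_pulse) if t < t_pulse else 0.0

def launch(dt=1.0e-7):
    """Explicit Euler integration of m dv/dt = 0.5 L' I^2 - F_inertial - F_viscous."""
    x = v = t = 0.0
    while x < BARREL and t < 5.0e-3:
        f_em = 0.5 * L_PRIME * drive_current(t) ** 2
        f_drag = 0.5 * RHO_GAS * CD * A_BORE * v ** 2 + C_VISC * v
        v += dt * (f_em - f_drag) / M_PELLET
        x += dt * v
        t += dt
    return v, t

v_exit, t_exit = launch()
print(f"exit velocity ~{v_exit:.0f} m/s after {t_exit * 1e3:.2f} ms")
```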

  5. Analytical modeling and experimental validation of a magnetorheological mount

    NASA Astrophysics Data System (ADS)

    Nguyen, The; Ciocanel, Constantin; Elahinia, Mohammad

    2009-03-01

    Magnetorheological (MR) fluid has been increasingly researched and applied in vibration isolation devices. To date, the suspension systems of several high-performance vehicles have been equipped with MR fluid based dampers, and research is ongoing to develop MR fluid based mounts for engine and powertrain isolation. MR fluid based devices have received attention due to the MR fluid's capability to change its properties in the presence of a magnetic field. This characteristic places MR mounts in the class of semiactive isolators, making them a desirable substitute for passive hydraulic mounts. In this research, an analytical model of a mixed-mode MR mount was constructed. The magnetorheological mount employs flow (valve) mode and squeeze mode. Each mode is powered by an independent electromagnet, so one mode does not affect the operation of the other. The analytical model was used to predict the performance of the MR mount with different sets of parameters. Furthermore, in order to produce the actual prototype, the analytical model was used to identify the optimal geometry of the mount. The experimental phase of this research was carried out by fabricating and testing the actual MR mount. The manufactured mount was tested to evaluate the effectiveness of each mode individually and in combination. The experimental results were also used to validate the ability of the analytical model to predict the response of the MR mount. Based on the observed response of the mount, a suitable controller can be designed for it; however, the control scheme is not addressed in this study.

  6. Bioaerosol optical sensor model development and initial validation

    NASA Astrophysics Data System (ADS)

    Campbell, Steven D.; Jeys, Thomas H.; Eapen, Xuan Le

    2007-04-01

    This paper describes the development and initial validation of a bioaerosol optical sensor model. This model was used to help determine design parameters and estimate performance of a new low-cost optical sensor for detecting bioterrorism agents. In order to estimate sensor performance in detecting biowarfare simulants and rejecting environmental interferents, use was made of a previously reported catalog of EEM (excitation/emission matrix) fluorescence cross-section measurements and previously reported multiwavelength-excitation biosensor modeling work. In the present study, the biosensor modeled employs a single high-power 365 nm UV LED source plus an IR laser diode for particle size determination. The sensor has four output channels: IR size channel, UV elastic channel and two fluorescence channels. The sensor simulation was used to select the fluorescence channel wavelengths of 400-450 and 450-600 nm. Using these selected fluorescence channels, the performance of the sensor in detecting simulants and rejecting interferents was estimated. Preliminary measurements with the sensor are presented which compare favorably with the simulation results.

  7. Validation of modeled pharmacoeconomic claims in formulary submissions.

    PubMed

    Langley, Paul C

    2015-12-01

    Modeled or simulated claims for costs and outcomes are a key element in formulary submissions and comparative assessments of drug products and devices; however, all too often these claims are presented in a form that is either unverifiable or potentially verifiable but in a time frame that is of no practical use to formulary committees and others who may be committed to ongoing disease-area and therapeutic-class reviews. On the assumption that formulary committees are interested in testable predictions for product performance in target populations and ongoing disease area and therapeutic reviews, the methodological standards that should be applied are those that are accepted in the natural sciences. Claims should be presented in a form that is amenable to falsification. If not, they have no scientific standing. Certainly one can follow ISPOR-SMDM standards for validating the assumptions underpinning a model or simulation. There is clearly an important role for simulations as an input to policy initiatives and developing claims for healthcare interventions and testable hypotheses; however, one would not evaluate such claims on the realism or otherwise of the model. The only standard is one of the model's ability to predict outcomes successfully in a time frame that is practical and useful. No other standard is acceptable. This sets the stage for an active research agenda. PMID:26549802

  8. First principles Candu fuel model and validation experimentation

    SciTech Connect

    Corcoran, E.C.; Kaye, M.H.; Lewis, B.J.; Thompson, W.T.; Akbari, F.; Higgs, J.D.; Verrall, R.A.; He, Z.; Mouris, J.F.

    2007-07-01

    Many modeling projects on nuclear fuel rest on a quantitative understanding of the co-existing phases at various stages of burnup. Since the various fission products have considerably different abilities to chemically associate with oxygen, and the O/M ratio is slowly changing as well, the chemical potential (generally expressed as an equivalent oxygen partial pressure) is a function of burnup. Concurrently, well-recognized small fractions of new phases such as inert gas, noble metals, zirconates, etc. also develop. To further complicate matters, the dominant UO₂ fuel phase may be non-stoichiometric and most of the minor phases have a variable composition dependent on temperature and possible contact with the coolant in the event of a sheathing defect. A Thermodynamic Fuel Model to predict the phases in partially burned Candu nuclear fuel containing many major fission products has been under development. This model is capable of handling non-stoichiometry in the UO₂ fluorite phase, dilute solution behaviour of significant solute oxides, noble metal inclusions, a second metal solid solution U(Pd-Rh-Ru)₃, zirconate and uranate solutions as well as other minor solid phases, and volatile gaseous species. The treatment is a melding of several thermodynamic modeling projects dealing with isolated aspects of this important multi-component system. To simplify the computations, the number of elements has been limited to twenty major representative fission products known to appear in spent fuel. The proportion of elements must first be generated using SCALES-5. Oxygen is inferred from the concentration of the other elements. Provision to study the disposition of very minor fission products is included within the general treatment, but these are introduced only on an as-needed basis for a particular purpose. The building blocks of the model are the standard Gibbs energies of formation of the many possible compounds expressed as a function of temperature. To these data are added mixing terms associated with the appearance of the component species in particular phases. To validate the model, coulometric titration experiments relating the chemical potential of oxygen to the moles of oxygen introduced to SIMFUEL are underway. A description of the apparatus to oxidize and reduce samples in a controlled way in a H₂/H₂O mixture is presented. Existing measurements for irradiated fuel, both defected and non-defected, are also being incorporated into the validation process. (authors)

  9. Development, validation and application of numerical space environment models

    NASA Astrophysics Data System (ADS)

    Honkonen, Ilja

    2013-10-01

    Currently the majority of space-based assets are located inside the Earth's magnetosphere, where they must endure the effects of the near-Earth space environment, i.e. space weather, which is driven by the supersonic flow of plasma from the Sun. Space weather refers to the day-to-day changes in the temperature, magnetic field and other parameters of the near-Earth space, similarly to ordinary weather which refers to changes in the atmosphere above ground level. Space weather can also cause adverse effects on the ground, for example, by inducing large direct currents in power transmission systems. The performance of computers has been growing exponentially for many decades and as a result the importance of numerical modeling in science has also increased rapidly. Numerical modeling is especially important in space plasma physics because there are no in-situ observations of space plasmas outside of the heliosphere and it is not feasible to study all aspects of space plasmas in a terrestrial laboratory. With the increasing number of computational cores in supercomputers, the parallel performance of numerical models on distributed memory hardware is also becoming crucial. This thesis consists of an introduction and four peer-reviewed articles, and describes the process of developing numerical space environment/weather models and the use of such models to study the near-Earth space. A complete model development chain is presented, starting from initial planning and design to distributed memory parallelization and optimization, and finally testing, verification and validation of numerical models. A grid library that provides good parallel scalability on distributed memory hardware and several novel features, the distributed cartesian cell-refinable grid (DCCRG), is designed and developed. DCCRG is presently used in two numerical space weather models being developed at the Finnish Meteorological Institute. The first global magnetospheric test particle simulation based on the Vlasov description of plasma is carried out using the Vlasiator model. The test shows that the Vlasov equation for plasma in six-dimensional phase space is solved correctly by Vlasiator, that results are obtained beyond those of the magnetohydrodynamic (MHD) description of plasma, and that global magnetospheric simulations using a hybrid-Vlasov model are feasible on current hardware. For the first time, four global magnetospheric models using the MHD description of plasma (BATS-R-US, GUMICS, OpenGGCM, LFM) are run with identical solar wind input and the results compared to observations in the ionosphere and outer magnetosphere. Based on the results of the global magnetospheric MHD model GUMICS, a hypothesis is formulated for a new mechanism of plasmoid formation in the Earth's magnetotail.

  10. A photoemission model for low work function coated metal surfaces and its experimental validation

    NASA Astrophysics Data System (ADS)

    Jensen, Kevin L.; Feldman, Donald W.; Moody, Nathan A.; O'Shea, Patrick G.

    2006-06-01

    Photocathodes are a critical component of many linear-accelerator-based light sources. The development of a custom-engineered photocathode based on low work function coatings requires an experimentally validated photoemission model that accounts for the complexity of the emission process. We have developed a time-dependent model accounting for the effects of laser heating and thermal propagation on photoemission. It accounts for surface conditions (coating, field enhancement, and reflectivity), laser parameters (duration, intensity, and wavelength), and material characteristics (reflectivity, laser penetration depth, and scattering rates) to predict current distribution and quantum efficiency (QE) as a function of wavelength. The model is validated by (i) experimental measurements of the QE of cesiated surfaces, (ii) the QE and performance of commercial dispenser cathodes (B, M, and scandate), and (iii) comparison to QE values reported in the literature for bare metals and B-type dispenser cathodes, all for various wavelengths. Of particular note is that the highest QE for a commercial (M-type) dispenser cathode found here was measured to be 0.22% at 266 nm, and is projected to be 3.5 times larger for a 5 ps pulse delivering 0.6 mJ/cm2 under a 50 MV/m field.

  11. Comparison of LIDAR system performance for alternative single-mode receiver architectures: modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Toliver, Paul; Ozdur, Ibrahim; Agarwal, Anjali; Woodward, T. K.

    2013-05-01

    In this paper, we describe a detailed performance comparison of alternative single-pixel, single-mode LIDAR architectures including (i) linear-mode APD-based direct detection, (ii) optically preamplified PIN receiver, (iii) PIN-based coherent detection, and (iv) Geiger-mode single-photon-APD counting. Such a comparison is useful when considering next-generation LIDAR on a chip, which would allow one to leverage extensive waveguide-based structures and processing elements developed for telecom and apply them to small form-factor sensing applications. Models of four LIDAR transmit and receive systems are described in detail, which include not only the dominant sources of receiver noise commonly assumed in each of the four detection limits, but also additional noise terms present in realistic implementations. These receiver models are validated through the analysis of detection statistics collected from an experimental LIDAR testbed. The receiver is reconfigurable into four modes of operation, while transmit waveforms and channel characteristics are held constant. The use of a diffuse hard target highlights the importance of including speckle noise terms in the overall system analysis. All measurements are done at 1550 nm, which offers multiple system advantages including less stringent eye safety requirements and compatibility with available telecom components, optical amplification, and photonic integration. Ultimately, the experimentally validated detection statistics can be used as part of an end-to-end system model for projecting the rate, range, and resolution performance limits and tradeoffs of alternative integrated LIDAR architectures.

  12. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.

  13. Use of Synchronized Phasor Measurements for Model Validation in ERCOT

    NASA Astrophysics Data System (ADS)

    Nuthalapati, Sarma; Chen, Jian; Shrestha, Prakash; Huang, Shun-Hsien; Adams, John; Obadina, Diran; Mortensen, Tim; Blevins, Bill

    2013-05-01

    This paper discusses experiences in the use of synchronized phasor measurement technology in the Electric Reliability Council of Texas (ERCOT) interconnection, USA. Implementation of synchronized phasor measurement technology in the region is a collaborative effort involving ERCOT, ONCOR, AEP, SHARYLAND, EPG, CCET, and UT-Arlington. As several phasor measurement units (PMUs) have been installed in the ERCOT grid in recent years, phasor data with a resolution of 30 samples per second are being used to monitor power system status and record system events. Post-event analyses using recorded phasor data have successfully verified ERCOT dynamic stability simulation studies. The real-time monitoring software "RTDMS" enables ERCOT to analyze small-signal stability conditions by monitoring phase angles and oscillations. The recorded phasor data enable ERCOT to validate the existing dynamic models of conventional and wind generators.

  14. Utilizing Chamber Data for Developing and Validating Climate Change Models

    NASA Technical Reports Server (NTRS)

    Monje, Oscar

    2012-01-01

    Controlled environment chambers (e.g. growth chambers, SPAR chambers, or open-top chambers) are useful for measuring plant ecosystem responses to climatic variables and CO2 that affect plant water relations. However, data from chambers have been found to overestimate responses of C fluxes to CO2 enrichment. Chamber data may be confounded by numerous artifacts (e.g. sidelighting, edge effects, increased temperature and VPD), and this limits what can be measured accurately. Chambers can be used to measure canopy-level energy balance under controlled conditions, and plant transpiration responses to CO2 concentration can be elucidated. However, these measurements cannot be used directly in model development or validation. The response of stomatal conductance to CO2 will be the same as in the field, but the measured response must be recalculated in such a manner as to account for differences in aerodynamic conductance, temperature and VPD between the chamber and the field.

  15. Validation of atmospheric propagation models in littoral waters

    NASA Astrophysics Data System (ADS)

    de Jong, Arie N.; Schwering, Piet B. W.; van Eijk, Alexander M. J.; Gunter, Willem H.

    2013-04-01

    Various atmospheric propagation effects limit the long-range performance of electro-optical imaging systems. These effects include absorption and scattering by molecules and aerosols, refraction due to vertical temperature gradients, and scintillation and blurring due to turbulence. In maritime and coastal areas, ranges up to 25 km are relevant for detection and classification tasks on small targets (missiles, pirates). From November 2009 to October 2010 a measurement campaign was set up over a range of more than 15 km in False Bay, South Africa, where all of these propagation effects could be investigated quantitatively. The results have been used to provide statistical information on basic parameters such as visibility, air-sea temperature difference, absolute humidity and wind speed. In addition, various propagation models for aerosol particle size distribution, temperature profile, blur and scintillation under strong turbulence conditions could be validated. Examples of collected data and associated results are presented in this paper.

  16. A magnetospheric specification model validation study: Geosynchronous electrons

    NASA Astrophysics Data System (ADS)

    Hilmer, R. V.; Ginet, G. P.

    2000-09-01

    The Rice University Magnetospheric Specification Model (MSM) is an operational space environment model of the inner and middle magnetosphere designed to specify charged particle fluxes up to 100 keV. Validation test data taken between January 1996 and June 1998 consist of electron fluxes measured by a charge control system (CCS) on a defense satellite communications system (DSCS) spacecraft. The CCS includes both electrostatic analyzers to measure the particle environment and surface potential monitors to track differential charging between various materials and vehicle ground. While typical RMS error analysis methods provide a sense of the model's overall abilities, they do not specifically address physical situations critical to operations, i.e., how well the model specifies when a high differential charging state is probable. In this validation study, differential charging states observed by DSCS are used to determine several threshold fluxes for the associated 20-50 keV electrons, and joint probability distributions are constructed to determine Hit, Miss, and False Alarm rates for the models. An MSM run covering the two-and-one-half-year interval is performed using the minimum required input parameter set, consisting of only the magnetic activity index Kp, in order to statistically examine the model's seasonal and yearly performance. In addition, the relative merits of the input parameters, i.e., Kp, Dst, the equatorward boundary of the diffuse aurora at midnight, cross-polar cap potential, solar wind density and velocity, and interplanetary magnetic field values, are evaluated as drivers of shorter model runs of 100 days each. In an effort to develop operational tools that can address spacecraft charging issues, we also identify temporal features in the model output that can be directly linked to input parameter variations and model boundary conditions. All model output is interpreted using the full three-dimensional, dipole tilt-dependent algorithms currently in operational use at the Air Force 55th Space Weather Squadron (55 SWXS). Results indicate that both diurnal and seasonal activity-related variations in geosynchronous electrons are reproduced in a regular and consistent manner regardless of the input parameters used as drivers. The ability of the MSM to specify DSCS electrons in relation to thresholds indicative of spacecraft charging varies with the combination of input parameters used. The input parameter of greatest benefit to the MSM, after the required Kp index, is the polar cap potential drop as determined by DMSP spacecraft. Regarding the highest electron flux threshold, the model typically achieves high Hit rates paired with both high False Alarm rates and higher RMS error. Suggestions are made regarding the utilization of proxy values for the polar cap potential parameter and Kp-dependent model boundary conditions. The importance of generating accurate real-time proxy input data for operational use is stressed.
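
    The Hit and False Alarm statistics described above follow from a simple contingency table of threshold exceedances in the observed and modeled fluxes. The sketch below shows one common set of definitions (the false-alarm ratio variant); the time series, threshold, and definitions are placeholders and may differ from the exact metrics used in the study.

```python
import numpy as np

def contingency_rates(observed, modeled, threshold):
    """Hit rate and false-alarm ratio for exceedances of a flux threshold
    (e.g. a 20-50 keV electron flux associated with differential charging)."""
    obs_hi = observed >= threshold
    mod_hi = modeled >= threshold
    hits = np.sum(obs_hi & mod_hi)
    misses = np.sum(obs_hi & ~mod_hi)
    false_alarms = np.sum(~obs_hi & mod_hi)
    hit_rate = hits / max(hits + misses, 1)
    false_alarm_ratio = false_alarms / max(false_alarms + hits, 1)
    return hit_rate, false_alarm_ratio

# Hypothetical hourly flux time series (arbitrary units)
rng = np.random.default_rng(2)
obs = rng.lognormal(mean=0.0, sigma=1.0, size=24 * 365)
mod = obs * rng.lognormal(mean=0.1, sigma=0.5, size=obs.size)   # imperfect "model"
print(contingency_rates(obs, mod, threshold=3.0))
```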

  17. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
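
    A minimal sketch of the bootstrap-style convergence check described above, applied to a generic sensitivity estimator; the toy model, the estimator (a squared correlation) and the interpretation of "converged" are illustrative assumptions rather than the study's actual methods.

        import numpy as np

        def bootstrap_ci_width(x, y, estimator, n_boot=500, seed=0):
            # Width of the 95% bootstrap confidence interval of a sensitivity estimate;
            # a narrow interval suggests the sample size is adequate for that index.
            rng = np.random.default_rng(seed)
            n = len(x)
            stats = [estimator(x[idx], y[idx])
                     for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
            lo, hi = np.percentile(stats, [2.5, 97.5])
            return hi - lo

        # Illustrative "sensitivity index": squared correlation of one input factor
        # with the output of a toy model.
        corr_sq = lambda x, y: np.corrcoef(x, y)[0, 1] ** 2

        rng = np.random.default_rng(1)
        param = rng.uniform(0.0, 1.0, 2000)
        output = 2.0 * param + rng.normal(0.0, 0.5, 2000)
        print(bootstrap_ci_width(param, output, corr_sq))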

  18. Experimental validation of sheath models at intermediate radio frequencies

    NASA Astrophysics Data System (ADS)

    Sobolewski, Mark

    2013-09-01

    Sheaths in radio-frequency (rf) discharges play a dominant role in determining important properties such as the efficiency of power delivery and utilization, plasma spatial uniformity, and ion energy distributions (IEDs). To obtain high quality predictions for these properties requires sheath models that have been rigorously tested and validated. We have performed such tests in capacitively coupled and rf-biased inductively coupled discharges, for inert as well as reactive gases, over two or more orders of magnitude in frequency, voltage, and plasma density. We measured a complete set of model input and output parameters including rf current and voltage waveforms, rf plasma potential measured by a capacitive probe, electron temperature and ion saturation current measured by Langmuir probe and other techniques, and IEDs measured by mass spectrometers and gridded energy analyzers. Experiments concentrated on the complicated, intermediate-frequency regime of ion dynamics, where the ion transit time is comparable to the rf period and the ion current oscillates strongly during the rf cycle. The first models tested used several simplifying assumptions including fluid treatment of ions, neglect of electron inertia, and the oscillating step approximation for the electron profile. These models were nevertheless able to yield rather accurate predictions for current waveforms, sheath impedance, and the peak energies in IEDs. More recently, the oscillating step has been replaced by an exact solution of Poisson's equation. This results in a modest improvement in the agreement with measured electrical characteristics and IED peak amplitudes. The new model also eliminates the need for arbitrary or nonphysical boundary conditions that arises in step models, replacing them with boundary conditions that can be obtained directly from measurements or theories of the presheath.

  19. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA's 'virtual embryo' project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  1. Nyala and Bushbuck II: A Harvesting Model.

    ERIC Educational Resources Information Center

    Fay, Temple H.; Greeff, Johanna C.

    1999-01-01

    Adds a cropping or harvesting term to the animal overpopulation model developed in Part I of this article. Investigates various harvesting strategies that might suggest a solution to the overpopulation problem without actually culling any animals. (ASK)
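
    A minimal sketch of what adding a harvesting term to a logistic growth model looks like; the growth rate, carrying capacity and offtake fraction below are hypothetical values, not the parameters developed in the article.

        import numpy as np

        def simulate_logistic_harvest(P0, r, K, h, years, dt=0.01):
            # Euler integration of dP/dt = r*P*(1 - P/K) - h*P (proportional harvesting).
            steps = int(years / dt)
            P = np.empty(steps + 1)
            P[0] = P0
            for i in range(steps):
                P[i + 1] = P[i] + dt * (r * P[i] * (1.0 - P[i] / K) - h * P[i])
            return P

        # Hypothetical herd: growth rate 0.2/yr, carrying capacity 1500, 10% annual offtake.
        trajectory = simulate_logistic_harvest(P0=800, r=0.2, K=1500, h=0.10, years=50)
        print(f"population after 50 years: {trajectory[-1]:.0f}")  # settles near K*(1 - h/r)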

  2. Development and validation of a broad scheme for prediction of HLA class II restricted T cell epitopes.

    PubMed

    Paul, Sinu; Lindestam Arlehamn, Cecilia S; Scriba, Thomas J; Dillon, Myles B C; Oseroff, Carla; Hinz, Denise; McKinney, Denise M; Carrasco Pro, Sebastian; Sidney, John; Peters, Bjoern; Sette, Alessandro

    2015-07-01

    Computational prediction of HLA class II restricted T cell epitopes has great significance in many immunological studies including vaccine discovery. In recent years, prediction of HLA class II binding has improved significantly but a strategy to globally predict the most dominant epitopes has not been rigorously defined. Using human immunogenicity data associated with sets of 15-mer peptides overlapping by 10 residues spanning over 30 different allergens and bacterial antigens, and HLA class II binding prediction tools from the Immune Epitope Database and Analysis Resource (IEDB), we optimized a strategy to predict the top epitopes recognized by human populations. The most effective strategy was to select peptides based on predicted median binding percentiles for a set of seven DRB1 and DRB3/4/5 alleles. These results were validated with predictions on a blind set of 15 new allergens and bacterial antigens. We found that the top 21% predicted peptides (based on the predicted binding to seven DRB1 and DRB3/4/5 alleles) were required to capture 50% of the immune response. This corresponded to an IEDB consensus percentile rank of 20.0, which could be used as a universal prediction threshold. Utilizing actual binding data (as opposed to predicted binding data) did not appreciably change the efficacy of global predictions, suggesting that the imperfect predictive capacity is not due to poor algorithm performance, but intrinsic limitations of HLA class II epitope prediction schema based on HLA binding in genetically diverse human populations. PMID:25862607
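
    A minimal sketch of the median-percentile selection strategy described above, using a hypothetical matrix of per-allele percentile ranks (lower rank means stronger predicted binding); the panel size, random ranks and the cutoff of 20.0 are placeholders for illustration only.

        import numpy as np

        # Rows: overlapping 15-mer peptides; columns: predicted percentile ranks for a
        # hypothetical panel of seven DRB1/DRB3/4/5 alleles.
        rng = np.random.default_rng(0)
        percentile_ranks = rng.uniform(0.0, 100.0, size=(40, 7))

        # Rank peptides by their median percentile across the allele panel and keep
        # those below a consensus-style cutoff.
        median_rank = np.median(percentile_ranks, axis=1)
        cutoff = 20.0
        selected = np.flatnonzero(median_rank <= cutoff)
        print(f"{selected.size} of {median_rank.size} peptides selected:", selected)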

  3. [Neurobiology of parkinsonism. II. Experimental models].

    PubMed

    Ponzoni, S; Garcia-Cairasco, N

    1995-09-01

    The study of experimental models of parkinsonism has contributed to the knowledge of basal ganglia functions, as well as to the establishment of several hypotheses to explain the cause and expression of central neurodegenerative disorders. In this review we present and discuss several models, such as 6-hydroxydopamine, MPTP and manganese, all of them widely used to characterize the behavioral, cellular and molecular mechanisms of parkinsonism. PMID:8585836

  4. Vibroacoustic Model Validation for a Curved Honeycomb Composite Panel

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Robinson, Jay H.; Grosveld, Ferdinand W.

    2001-01-01

    Finite element and boundary element models are developed to investigate the vibroacoustic response of a curved honeycomb composite sidewall panel. Results from vibroacoustic tests conducted in the NASA Langley Structural Acoustic Loads and Transmission facility are used to validate the numerical predictions. The sidewall panel is constructed from a flexible honeycomb core sandwiched between carbon fiber reinforced composite laminate face sheets. This type of construction is being used in the development of an all-composite aircraft fuselage. In contrast to conventional rib-stiffened aircraft fuselage structures, the composite panel has nominally uniform thickness, resulting in a uniform distribution of mass and stiffness. Due to differences in the mass and stiffness distribution, the noise transmission mechanisms for the composite panel are expected to be substantially different from those of a conventional rib-stiffened structure. The development of accurate vibroacoustic models will aid in the understanding of the dominant noise transmission mechanisms and enable optimization studies to be performed that will determine the most beneficial noise control treatments. Finite element and boundary element models of the sidewall panel are described. Vibroacoustic response predictions are presented for forced vibration input and the results are compared with experimental data.

  5. Circulation Control Model Experimental Database for CFD Validation

    NASA Technical Reports Server (NTRS)

    Paschal, Keith B.; Neuhart, Danny H.; Beeler, George B.; Allan, Brian G.

    2012-01-01

    A 2D circulation control wing was tested in the Basic Aerodynamic Research Tunnel at the NASA Langley Research Center. A traditional circulation control wing employs tangential blowing along the span over a trailing-edge Coanda surface for the purpose of lift augmentation. This model has been tested extensively at the Georgia Tech Research Institute for the purpose of performance documentation at various blowing rates. The current study seeks to expand on the previous work by documenting additional flow-field data needed for validation of computational fluid dynamics. Two jet momentum coefficients were tested during this entry: 0.047 and 0.114. Boundary-layer transition was investigated and turbulent boundary layers were established on both the upper and lower surfaces of the model. Chordwise and spanwise pressure measurements were made, and tunnel sidewall pressure footprints were documented. Laser Doppler Velocimetry measurements were made on both the upper and lower surface of the model at two chordwise locations (x/c = 0.8 and 0.9) to document the state of the boundary layers near the spanwise blowing slot.

  6. Modal testing for model validation of structures with discrete nonlinearities

    PubMed Central

    Ewins, D. J.; Weekes, B.; delli Carri, A.

    2015-01-01

    Model validation using data from modal tests is now widely practiced in many industries for advanced structural dynamic design analysis, especially where structural integrity is a primary requirement. These industries tend to demand highly efficient designs for their critical structures which, as a result, are increasingly operating in regimes where traditional linearity assumptions are no longer adequate. In particular, many modern structures are found to contain localized areas, often around joints or boundaries, where the actual mechanical behaviour is far from linear. Such structures need to have appropriate representation of these nonlinear features incorporated into the otherwise largely linear models that are used for design and operation. This paper proposes an approach to this task which is an extension of existing linear techniques, especially in the testing phase, involving only just as much nonlinear analysis as is necessary to construct a model which is good enough, or ‘valid’: i.e. capable of predicting the nonlinear response behaviour of the structure under all in-service operating and test conditions with a prescribed accuracy. A short-list of methods described in the recent literature categorized using our framework is given, which identifies those areas in which further development is most urgently required. PMID:26303924

  7. Validation of an Acoustic Impedance Prediction Model for Skewed Resonators

    NASA Technical Reports Server (NTRS)

    Howerton, Brian M.; Parrott, Tony L.

    2009-01-01

    An impedance prediction model was validated experimentally to determine the composite impedance of a series of high-aspect ratio slot resonators incorporating channel skew and sharp bends. Such structures are useful for packaging acoustic liners into constrained spaces for turbofan noise control applications. A formulation of the Zwikker-Kosten Transmission Line (ZKTL) model, incorporating the Richards correction for rectangular channels, is used to calculate the composite normalized impedance of a series of six multi-slot resonator arrays with constant channel length. Experimentally, acoustic data were acquired in the NASA Langley Normal Incidence Tube over the frequency range of 500 to 3500 Hz at 120 and 140 dB OASPL. Normalized impedance was reduced using the Two-Microphone Method for the various combinations of channel skew and sharp 90° and 180° bends. Results show that the presence of skew and/or sharp bends does not significantly alter the impedance of a slot resonator as compared to a straight resonator of the same total channel length. ZKTL predicts the impedance of such resonators very well over the frequency range of interest. The model can be used to design arrays of slot resonators that can be packaged into complex geometries heretofore unsuitable for effective acoustic treatment.

  8. Using remote sensing for validation of a large scale hydrologic and hydrodynamic model in the Amazon

    NASA Astrophysics Data System (ADS)

    Paiva, R. C.; Bonnet, M.; Buarque, D. C.; Collischonn, W.; Frappart, F.; Mendes, C. B.

    2011-12-01

    We present the validation of the large-scale, catchment-based hydrological MGB-IPH model in the Amazon River basin. In this model, physically-based equations are used to simulate the hydrological processes, such as the Penman-Monteith method to estimate evapotranspiration, or the Moore and Clarke infiltration model. A new feature recently introduced in the model is a 1D hydrodynamic module for river routing. It uses the full Saint-Venant equations and a simple floodplain storage model. River and floodplain geometry parameters are extracted from the SRTM DEM using specially developed GIS algorithms that provide catchment discretization, estimation of river cross-section geometry and water storage volume variations in the floodplains. The model was forced using satellite-derived daily rainfall TRMM 3B42, calibrated against discharge data and first validated using daily discharges and water levels from 111 and 69 stream gauges, respectively. Then, we performed a validation against remote sensing derived hydrological products, including (i) monthly Terrestrial Water Storage (TWS) anomalies derived from GRACE, (ii) river water levels derived from ENVISAT satellite altimetry data (212 virtual stations from Santos da Silva et al., 2010) and (iii) a multi-satellite monthly global inundation extent dataset at ~25 x 25 km spatial resolution (Papa et al., 2010). Validation against river discharges shows good performance of the MGB-IPH model. For 70% of the stream gauges, the Nash-Sutcliffe efficiency index (ENS) is higher than 0.6, and at Óbidos, close to the Amazon River outlet, ENS equals 0.9 and the model bias equals -4.6%. The largest errors are located in drainage areas outside Brazil, and we speculate that this is due to the poor quality of rainfall datasets in these poorly monitored and/or mountainous areas. Validation against water levels shows that the model performs well in the major tributaries. For 60% of virtual stations, ENS is higher than 0.6. Similarly, however, the largest errors are also located in drainage areas outside Brazil, mostly the Japurá River, and in the lower Amazon River. In the latter, correlation with observations is high but the model underestimates the amplitude of water levels. We also found a large bias between model and ENVISAT water levels, ranging from -3 to -15 m. The model provided TWS in good accordance with GRACE estimates. The ENS value for TWS over the whole Amazon equals 0.93. We also analyzed results in 21 sub-regions of 4° x 4°. ENS is smaller than 0.8 in only 5 areas, and these are found mostly in the northwest part of the Amazon, possibly due to the same errors reported in the discharge results. Flood extent validation is under development, but a previous analysis in the Brazilian part of the Solimões River basin suggests good model performance. The authors are grateful for the financial and operational support from the Brazilian agencies FINEP, CNPq and ANA and from the French observatories HYBAM and SOERE RBV.
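
    A minimal sketch of the Nash-Sutcliffe efficiency (ENS) used as the headline skill metric above, computed for short hypothetical simulated and observed discharge series (the values are placeholders, not Amazon gauge data).

        import numpy as np

        def nash_sutcliffe(simulated, observed):
            # ENS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
            simulated = np.asarray(simulated, dtype=float)
            observed = np.asarray(observed, dtype=float)
            return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

        # Hypothetical daily discharges (m3/s).
        obs = np.array([1200.0, 1500.0, 1800.0, 2100.0, 1900.0, 1600.0])
        sim = np.array([1100.0, 1450.0, 1900.0, 2000.0, 2000.0, 1550.0])
        print(f"ENS = {nash_sutcliffe(sim, obs):.2f}")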

  9. VALIDATION OF A SUB-MODEL OF FORAGE GROWTH OF THE INTEGRATED FARM SYSTEM MODEL - IFSM

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A sub-model of forage production developed for temperate climate is being adapted to tropical conditions in Brazil. Sub-model predictive performance has been evaluated using data of Cynodon spp. Results from sensitivity and validation tests were consistent, but values of DM production for the wet se...

  10. An approach to model validation and model-based prediction -- polyurethane foam case study.

    SciTech Connect

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 °C, modeling the decomposition is critical to assessing a weapon's response.
In the validation analysis it is indicated that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model-based predictions. Several hypothetical prediction problems are created and addressed. Hypothetical problems are used because no guidance was provided concerning what was needed for this aspect of the analysis. The resulting predictions and corresponding uncertainty assessment demonstrate the flexibility of this approach.
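
    A minimal sketch of what a data-driven "model supplement" (bias-correction) term might look like in practice: a simple linear discrepancy is fitted between model predictions and experimental observations and then added back to new predictions. The linear form and the paired values below are illustrative assumptions, not the report's actual formulation or data.

        import numpy as np

        # Hypothetical paired results: model-predicted and measured foam decomposition
        # front positions (mm) at several test conditions.
        model_pred = np.array([4.0, 6.5, 9.2, 12.1, 15.4])
        measured = np.array([4.2, 6.1, 8.3, 10.6, 13.0])

        # Fit a linear supplement (discrepancy) term: delta(pred) ~ a*pred + b.
        a, b = np.polyfit(model_pred, measured - model_pred, deg=1)

        def corrected_prediction(pred):
            # Model prediction plus the fitted supplement term.
            return pred + (a * pred + b)

        print(f"bias-corrected prediction for a new case: {corrected_prediction(11.0):.2f}")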

  11. An open source lower limb model: Hip joint validation.

    PubMed

    Modenese, L; Phillips, A T M; Bull, A M J

    2011-08-11

    Musculoskeletal lower limb models have been shown to be able to predict hip contact forces (HCFs) that are comparable to in vivo measurements obtained from instrumented prostheses. However, the muscle recruitment predicted by these models does not necessarily compare well to measured electromyographic (EMG) signals. In order to verify whether it is possible to accurately estimate HCFs from muscle force patterns consistent with EMG measurements, a lower limb model based on a published anatomical dataset (Klein Horsman et al., 2007. Clinical Biomechanics. 22, 239-247) has been implemented in the open source software OpenSim. A cycle-to-cycle hip joint validation was conducted against HCFs recorded during gait and stair climbing trials of four arthroplasty patients (Bergmann et al., 2001. Journal of Biomechanics. 34, 859-871). Hip joint muscle tensions were estimated by minimizing a polynomial function of the muscle forces. The resulting muscle activation patterns obtained by assessing multiple powers of the objective function were compared against EMG profiles from the literature. Calculated HCFs showed a tendency to increase monotonically in magnitude as the power of the objective function was raised; the best estimation obtained from muscle forces consistent with experimental EMG profiles was found when a quadratic objective function was minimized (average overestimation at experimental peak frame: 10.1% for walking, 7.8% for stair climbing). The lower limb model can produce appropriate balanced sets of muscle forces and joint contact forces that can be used in a range of applications requiring accurate quantification of both. The developed model is available at the website https://simtk.org/home/low_limb_london. PMID:21742331
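
    A minimal sketch of the kind of static optimization described above: muscle forces are found by minimizing a polynomial criterion subject to a joint moment balance. The two-muscle geometry, moment arms and force limits are hypothetical and are not taken from the Klein Horsman dataset or from OpenSim.

        import numpy as np
        from scipy.optimize import minimize

        moment_arms = np.array([0.05, 0.03])   # m, hypothetical hip flexor moment arms
        f_max = np.array([3000.0, 1500.0])     # N, hypothetical maximum muscle forces
        joint_moment = 60.0                    # N*m, required net joint moment
        power = 2                              # quadratic criterion, as favoured above

        objective = lambda f: np.sum((f / f_max) ** power)
        constraints = {"type": "eq", "fun": lambda f: moment_arms @ f - joint_moment}
        bounds = [(0.0, fm) for fm in f_max]

        res = minimize(objective, x0=f_max / 2.0, bounds=bounds, constraints=constraints)
        print("muscle forces (N):", np.round(res.x, 1))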

  12. Validation of lateral boundary conditions for regional climate models

    NASA Astrophysics Data System (ADS)

    Pignotti, Angela J.

    California boasts a population of more than 34 million and is the tenth largest energy consumer in the world. As such, the California Energy Commission (CEC) is greatly concerned about the environmental impacts of global climate change on energy needs, production and distribution. In order to better understand future energy needs in California, the CEC depends upon international climate scientists who use results from simulations of western U.S. regional climate models (RCMs). High-resolution RCMs are driven by coupled Atmosphere/Ocean General Circulation Model (AOGCM) simulations along lateral surface boundaries outlining the region of interest. For projections of future climate, however, when the RCM is driven by future climate change output from an AOGCM, the performance of the RCM will depend to some degree on the merit of the AOGCM. The objective of this study is to provide tools to assist with model validation of coupled Atmosphere/Ocean General Circulation Model (AOGCM) simulations against present-day observations. A comparison technique frequently utilized by climate scientists is multiple hypothesis testing, which identifies statistically significant regions of difference between spatial fields. In order to use these methods, the AOGCM fields must be interpolated onto the reanalysis grid. In this work, I present an efficient interpolation technique using thin-plate splines. I then compare significant regions of difference identified using the Bonferroni multiple-testing procedure against those identified using the false discovery rate methodology. A major drawback of multiple hypothesis methods is that they do not account for correlation in the spatial field. I introduce and employ measures of comparison, including the Mahalanobis distance measure, that account for anisotropy within the spatial field. Bayesian techniques are applied to calculate comparison measures between the driver-GCM lateral surface boundaries and the NCEP/NCAR and ERA40 reanalysis data sets. I find that the Mahalanobis measure provides a systematic ranking of model performance against present-day observations.
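
    A minimal sketch of a Mahalanobis-type comparison measure between a model field and a reanalysis field, in which spatial correlation enters through a covariance matrix; the synthetic fields and the exponential-decay covariance below are placeholders, not the study's data or fitted covariance.

        import numpy as np

        def mahalanobis_distance(model_field, reference_field, covariance):
            # Mahalanobis distance between two flattened spatial fields, accounting
            # for spatial correlation through the covariance matrix.
            d = np.ravel(model_field) - np.ravel(reference_field)
            return float(np.sqrt(d @ np.linalg.solve(covariance, d)))

        # Synthetic five-point "fields" with an assumed exponential-decay covariance.
        rng = np.random.default_rng(0)
        n = 5
        lag = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        cov = np.exp(-lag / 2.0)
        model = rng.normal(0.0, 1.0, n)
        reanalysis = rng.normal(0.0, 1.0, n)
        print(mahalanobis_distance(model, reanalysis, cov))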

  13. Model of a mode II shear crack

    NASA Astrophysics Data System (ADS)

    Glagolev, V. V.; Devyatova, M. V.; Markin, A. A.

    2015-07-01

    Based on the model of a physical cut and a material layer on its continuation, elastic and elastoplastic problems of determining the stress-strain state inside and outside the layer in the case of loading of cut edges by an antisymmetric system of forces are posed and solved. The solution of the elastic problem is compared with the solution obtained within the framework of the Neuber-Novozhilov model. In contrast to the latter model, the proposed approach provides results consistent with experimental data on the process of formation of fracture regions. Based on the analysis of the discrete solution of the problem, regions of plastic deformation and regions of possible fracture are found.

  14. Stratospheric Heterogeneous Chemistry and Microphysics: Model Development, Validation and Applications

    NASA Technical Reports Server (NTRS)

    Turco, Richard P.

    1996-01-01

    The objectives of this project are to: define the chemical and physical processes leading to stratospheric ozone change that involve polar stratospheric clouds (PSCs) and the reactions occurring on the surfaces of PSC particles; study the formation processes, and the physical and chemical properties of PSCs, that are relevant to atmospheric chemistry and to the interpretation of field measurements taken during polar stratosphere missions; develop quantitative models describing PSC microphysics and heterogeneous chemical processes; assimilate laboratory and field data into these models; and calculate the extent of chemical processing on PSCs and the impact of specific microphysical processes on polar composition and ozone depletion. During the course of the project, a new coupled microphysics/physical-chemistry/photochemistry model for stratospheric sulfate aerosols and nitric acid and ice PSCs was developed and applied to analyze data collected during NASA's Arctic Airborne Stratospheric Expedition-II (AASE-II) and other missions. In this model, detailed treatments of multicomponent sulfate aerosol physical chemistry, sulfate aerosol microphysics, polar stratospheric cloud microphysics, PSC ice surface chemistry, as well as homogeneous gas-phase chemistry were included for the first time. In recent studies focusing on AASE measurements, the PSC model was used to analyze specific measurements from an aircraft deployment of an aerosol impactor, FSSP, and NO(y) detector. The calculated results are in excellent agreement with observations for particle volumes as well as NO(y) concentrations, thus confirming the importance of supercooled sulfate/nitrate droplets in PSC formation. The same model has been applied to perform a statistical study of PSC properties in the Northern Hemisphere using several hundred high-latitude air parcel trajectories obtained from Goddard. The rates of ozone depletion along trajectories with different meteorological histories are presently being systematically evaluated to identify the principal relationships between ozone loss and aerosol state. Under this project, we formulated a detailed quantitative model that predicts the multicomponent composition of sulfate aerosols under stratospheric conditions, including sulfuric, nitric, hydrochloric, hydrofluoric and hydrobromic acids. This work defined for the first time the behavior of liquid ternary-system type-1b PSCs. The model also allows the compositions and reactivities of sulfate aerosols to be calculated over the entire range of environmental conditions encountered in the stratosphere (and has been incorporated into a trajectory/microphysics model - see above). Important conclusions derived from this work over the last few years include the following: the HNO3 content of liquid-state aerosols dominates PSCs below about 195 K; the freezing of nitric acid ice from sulfate aerosol solutions is likely to occur within a few degrees K of the water vapor frost point; the uptake and reactions of HCl in liquid aerosols are a critical component of PSC heterogeneous chemistry. In a related application of this work, the inefficiency of chlorine injection into the stratosphere during major volcanic eruptions was explained on the basis of nucleation of sulfuric acid aerosols in rising volcanic plumes leading to the formation of supercooled water droplets on these aerosols, which efficiently scavenge HCl via precipitation.

  15. System modeling and simulation at EBR-II

    SciTech Connect

    Dean, E.M.; Lehto, W.K.; Larson, H.A.

    1986-01-01

    The codes being developed and verified using EBR-II data are the NATDEMO, DSNP and CSYRED. NATDEMO is a variation of the Westinghouse DEMO code coupled to the NATCON code previously used to simulate perturbations of reactor flow and inlet temperature and loss-of-flow transients leading to natural convection in EBR-II. CSYRED uses the Continuous System Modeling Program (CSMP) to simulate the EBR-II core, including power, temperature, control-rod movement reactivity effects and flow and is used primarily to model reactivity induced power transients. The Dynamic Simulator for Nuclear Power Plants (DSNP) allows a whole plant, thermal-hydraulic simulation using specific component and system models called from libraries. It has been used to simulate flow coastdown transients, reactivity insertion events and balance-of-plant perturbations.

  16. Criterion Validity, Severity Cut Scores, and Test-Retest Reliability of the Beck Depression Inventory-II in a University Counseling Center Sample

    ERIC Educational Resources Information Center

    Sprinkle, Stephen D.; Lurie, Daphne; Insko, Stephanie L.; Atkinson, George; Jones, George L.; Logan, Arthur R.; Bissada, Nancy N.

    2002-01-01

    The criterion validity of the Beck Depression Inventory-II (BDI-II; A. T. Beck, R. A. Steer, & G. K. Brown, 1996) was investigated by pairing blind BDI-II administrations with the major depressive episode portion of the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I; M. B. First, R. L. Spitzer, M. Gibbon, & J. B. W. Williams,

  17. Validation model for Raman based skin carotenoid detection.

    PubMed

    Ermakov, Igor V; Gellermann, Werner

    2010-12-01

    Raman spectroscopy holds promise as a rapid objective non-invasive optical method for the detection of carotenoid compounds in human tissue in vivo. Carotenoids are of interest due to their functions as antioxidants and/or optical absorbers of phototoxic light at deep blue and near UV wavelengths. In the macular region of the human retina, carotenoids may prevent or delay the onset of age-related tissue degeneration. In human skin, they may help prevent premature skin aging, and are possibly involved in the prevention of certain skin cancers. Furthermore, since carotenoids exist in high concentrations in a wide variety of fruits and vegetables, and are routinely taken up by the human body through the diet, skin carotenoid levels may serve as an objective biomarker for fruit and vegetable intake. Before the Raman method can be accepted as a widespread optical alternative for carotenoid measurements, direct validation studies are needed to compare it with the gold standard of high performance liquid chromatography. This is because the tissue Raman response is in general accompanied by a host of other optical processes which have to be taken into account. In skin, the most prominent is strongly diffusive, non-Raman scattering, leading to relatively shallow light penetration of the blue/green excitation light required for resonant Raman detection of carotenoids. Also, sizable light attenuation exists due to the combined absorption from collagen, porphyrin, hemoglobin, and melanin chromophores, and additional fluorescence is generated by collagen and porphyrins. In this study, we investigate for the first time the direct correlation of in vivo skin tissue carotenoid Raman measurements with subsequent chromatography derived carotenoid concentrations. As tissue site we use heel skin, in which the stratum corneum layer thickness exceeds the light penetration depth, which is free of optically confounding chromophores, which can be easily optically accessed for in vivo RRS measurement, and which can be easily removed for subsequent biochemical measurements. Excellent correlation (coefficient R=0.95) is obtained for this tissue site which could serve as a model site for scaled up future validation studies of large populations. The obtained results provide proof that resonance Raman spectroscopy is a valid non-invasive objective methodology for the quantitative assessment of carotenoid antioxidants in human skin in vivo. PMID:20678465

  18. NAIRAS aircraft radiation model development, dose climatology, and initial validation

    PubMed Central

    Mertens, Christopher J; Meier, Matthias M; Brown, Steven; Norman, Ryan B; Xu, Xiaojing

    2013-01-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis suggests that these single-point differences will be within 30% when a new deterministic pion-initiated electromagnetic cascade code is integrated into NAIRAS, an effort which is currently underway. PMID:26213513

  19. The Internal Validation of Level II and Level III Respiratory Therapy Examinations. Final Report.

    ERIC Educational Resources Information Center

    Jouett, Michael L.

    This project began with the delineation of the roles and functions of respiratory therapy personnel by the American Association for Respiratory Therapy. In Phase II, The Psychological Corporation used this delineation to develop six proficiency examinations, three at each of two levels. One exam at each level was designated for the purpose of the

  20. Results of site validation experiments. Volume II. Supporting documents 5 through 14

    SciTech Connect

    Not Available

    1983-01-01

    Volume II contains the following supporting documents: Summary of Geologic Mapping of Underground Investigations; Logging of Vertical Coreholes - "Double Box" Area and Exploratory Drift; WIPP High Precision Gravity Survey; Basic Data Reports for Drillholes; Brine Content of Facility Internal Strata; Mineralogical Content of Facility Interval Strata; Location and Characterization of Interbedded Materials; Characterization of Aquifers at Shaft Locations; and Permeability of Facility Interval Strata.

  1. Science Motivation Questionnaire II: Validation with Science Majors and Nonscience Majors

    ERIC Educational Resources Information Center

    Glynn, Shawn M.; Brickman, Peggy; Armstrong, Norris; Taasoobshirazi, Gita

    2011-01-01

    From the perspective of social cognitive theory, the motivation of students to learn science in college courses was examined. The students--367 science majors and 313 nonscience majors--responded to the Science Motivation Questionnaire II, which assessed five motivation components: intrinsic motivation, self-determination, self-efficacy, career…

  3. A Test of Model Validation from Observed Temperature Trends

    NASA Astrophysics Data System (ADS)

    Singer, S. F.

    2006-12-01

    How much of current warming is due to natural causes and how much is manmade? This requires a comparison of the patterns of observed warming with the best available models that incorporate both anthropogenic (greenhouse gases and aerosols) as well as natural climate forcings (solar and volcanic). Fortunately, we have the just published U.S.-Climate Change Science Program (CCSP) report (www.climatescience.gov/Library/sap/sap1-1/finalreport/default.htm), based on best current information. As seen in Fig. 1.3F of the report, modeled surface temperature trends change little with latitude, except for a stronger warming in the Arctic. The observations, however, show a strong surface warming in the northern hemisphere but not in the southern hemisphere (see Fig. 3.5C and 3.6D). The Antarctic is found to be cooling and Arctic temperatures, while currently rising, were higher in the 1930s than today. Although the Executive Summary of the CCSP report claims "clear evidence" for anthropogenic warming, based on comparing tropospheric and surface temperature trends, the report itself does not confirm this. Greenhouse models indicate that the tropics should provide the most sensitive location for their validation; trends there should increase by 200-300 percent with altitude, peaking at around 10 kilometers. The observations, however, show the opposite: flat or even decreasing tropospheric trend values (see Fig. 3.7 and also Fig. 5.7E). This disparity is demonstrated most strikingly in Fig. 5.4G, which shows the difference between surface and troposphere trends for a collection of models (displayed as a histogram) and for balloon and satellite data. [The disparities are less apparent in the Summary, which displays model results in terms of "range" rather than as histograms.] There may be several possible reasons for the disparity: Instrumental and other effects that exaggerate or otherwise distort observed temperature trends. Or, more likely: Shortcomings in models that result in much reduced values of climate sensitivity; for example, the neglect of important negative feedbacks. Allowing for uncertainties in the data and for imperfect models, there is only one valid conclusion from the failure of greenhouse models to explain the observations: The human contribution to global warming is still quite small, so that natural climate factors are dominant. This may also explain why the climate was cooling from 1940 to 1975 -- even as greenhouse-gas levels increased rapidly. An overall test for climate prediction may soon be possible by measuring the ongoing rise in sea level. According to my estimates, sea level should rise by 1.5 to 2.0 cm per decade (about the same rate as in past millennia); the U.N.-IPCC (4th Assessment Report) predicts 1.4 to 4.3 cm per decade. In the New York Review of Books (July 13, 2006), however, James Hansen suggests 20 feet or more per century -- equivalent to about 60 cm or more per decade.

  4. Using Laboratory Magnetospheres to Develop and Validate Space Weather Models

    NASA Astrophysics Data System (ADS)

    Mauel, M. E.; Garnier, D.; Kesner, J.

    2012-12-01

    Reliable space weather predictions can be used to plan satellite operations, predict radio outages, and protect the electrical transmission grid. While direct observation of the solar corona and satellite measurements of the solar wind give warnings of possible subsequent geomagnetic activity, more accurate and reliable models of how solar fluxes affect the Earth's space environment are needed. Recent developments in laboratory magnetic dipoles have yielded well confined high-beta plasmas with intense energetic electron belts similar to magnetospheres. With plasma diagnostics spanning from global to small spatial scales and user-controlled experiments, these devices can be used to study current issues in space weather such as fast particle excitation and rapid dipolarization events. In levitated dipole experiments, which remove the collisional loss along field lines that normally dominates laboratory dipole plasmas, slow radial convection processes can be observed. We describe ongoing experiments and investigations that (i) control interchange mixing through application of vorticity injection, (ii) make whole-plasma, high-speed images of turbulent plasma dynamics, (iii) simulate nonlinear gyrokinetic dynamics of bounded driven dipole plasma, and (iv) compare laboratory plasma measurements and global convection models. (Figure caption: Photographs of the LDX and CTX laboratory magnetospheres; trapped plasma and energetic particles are created and studied with a variety of imaging diagnostics, including multiple probes for simultaneous measurements of plasma structures and turbulent mixing.)

  5. Validity of Social, Moral and Emotional Facets of Self-Description Questionnaire II

    ERIC Educational Resources Information Center

    Leung, Kim Chau; Marsh, Herbert W.; Yeung, Alexander Seeshing; Abduljabbar, Adel S.

    2015-01-01

    Studies adopting a construct validity approach can be categorized into within- and between-network studies. Few studies have applied between-network approach and tested the correlations of the social (same-sex relations, opposite-sex relations, parent relations), moral (honesty-trustworthiness), and emotional (emotional stability) facets of the

  6. Comparing Validity and Reliability in Special Education Title II and IDEA Data

    ERIC Educational Resources Information Center

    Steinbrecher, Trisha D.; McKeown, Debra; Walther-Thomas, Chriss

    2013-01-01

    Previous researchers have found that special education teacher shortages are pervasive and exacerbated by federal policies regarding "highly qualified" teacher requirements. The authors examined special education teacher personnel data from 2 federal data sources to determine if these sources offer a reliable and valid means of

  7. The African American Acculturation Scale II: Cross-Validation and Short Form.

    ERIC Educational Resources Information Center

    Landrine, Hope; Klonoff, Elizabeth A.

    1995-01-01

    Studied African American culture, using a new, shortened, 33-item African American Acculturation Scale (AAAS-33) to assess the scale's validity and reliability. Comparisons between the original form and the AAAS-33 reveal high correlations; however, the longer form may be sensitive to some beliefs, practices, and attitudes not assessed by the short…

  8. Satellite data for diagnostics and for validation of model simulations

    NASA Technical Reports Server (NTRS)

    Collins, W. D.

    1993-01-01

    Two issues in the treatment of tropical convection in general circulation models are examined. First, several studies have found significant gradients in clear sky longwave fluxes near large convective systems. Increased upper tropospheric moisture associated with deep convection may explain the reduction in the longwave emission. Similar local gradients are not apparent in measurements from the Earth Radiation Budget Experiment (ERBE), an important data set for model validation. Thus the average cloud forcing and greenhouse effect derived from models and observations may differ systematically over warm tropical oceans. A comparison of ERBE fluxes with radiative calculations using coincident balloon-sonde atmospheric profiles indicates negligible systematic bias in the observations. The effect of convection on the clear sky fluxes may be localized to the edges of individual cloud systems. Second, the balance between shortwave and longwave cloud forcing is a persistent feature of tropical cloud systems in ERBE data. This cancellation effect has been used to diagnose problems in GCM (General Circulation Model) cloud fields on seasonal time scales. The daily record of net cloud radiative forcing is analyzed to determine the smallest spatial and temporal scales for the balance. The results show cancellation on periods as short as three days for regions smaller than 2.5 by 2.5 deg. The analysis indicates that the balance is primarily a local phenomenon characteristic of tropical convection. This is consistent with findings that the small cloud radiative forcing is due primarily to thick tropical cirrus. These results represent a particularly stringent test of convective parameterizations in GCMs with interactive ocean surfaces.

  9. Pilot test validation of the Coal Fouling Tendency model

    SciTech Connect

    Barta, L.E.; Beer, J.M.; Wood, V.J.

    1995-03-01

    Advances in our understanding of the details of the chemical and physical processes of deposit formation in pulverized coal-fired boiler plant have led to the development at MIT of a Coal Fouling Tendency (CFT) computer code. Through utilization of a number of mathematical models and computer sub-codes, the CFT is capable of predicting the relative fouling tendency of coals. The sub-models interpret computer-controlled scanning electron microscope analysis data in terms of mineral size and chemical composition distributions; follow the transformation of these mineral property distributions during the combustion of the coal; and determine the probability of the resultant fly ash particles impacting on boiler-tube surfaces and of their sticking upon impaction. The sub-models are probabilistic, and take account of the particle-to-particle variation of coal mineral matter and fly ash properties by providing mean values and variances for particle size, chemical composition and viscosity. Results of an independent pilot test to validate the predictions of the CFT code are presented in this publication, based on experimental data obtained in the Combustion Research Facility of ABB Combustion Engineering, a 3 MW furnace capable of simulating combustion conditions in utility boilers. Using various pulverized coals and coal blends as fuel, measurements were taken of the fly ash deposition on tubes inserted in the flame tunnel. Deposit formation was monitored for periods of several hours in duration on stainless steel tubes. The predictions of the CFT model were tested experimentally on four coals. The measured size and calculated viscosity distributions of fly ash were compared with predictions and good agreement was obtained. CFT predictions were also calculated for fly ash deposition rates over a wide temperature range. Based on these results, the relative fouling tendencies of the tested coals were given and compared with the pilot and field test results.

  10. Calibration and Validation of Airborne InSAR Geometric Model

    NASA Astrophysics Data System (ADS)

    Chunming, Han; huadong, Guo; Xijuan, Yue; Changyong, Dou; Mingming, Song; Yanbing, Zhang

    2014-03-01

    Image registration or geo-coding is a very important step for many applications of airborne interferometric Synthetic Aperture Radar (InSAR), especially for those involving Digital Surface Model (DSM) generation, which requires an accurate knowledge of the geometry of the InSAR system. The trajectory and attitude instabilities of the aircraft introduce severe distortions into the three-dimensional (3-D) geometric model. The 3-D geometric model of an airborne SAR image depends on the SAR processor itself. When working in squinted mode, i.e., with an offset angle (squint angle) of the radar beam from the broadside direction, the aircraft motion instabilities may produce distortions in the airborne InSAR geometric relationship which, if not properly compensated for during SAR imaging, may damage the image registration. The determination of locations in the SAR image depends on the irradiated topography and the exact knowledge of all signal delays: range delay and chirp delay (adjusted by the radar operator) and internal delays which are unknown a priori. Hence, in order to obtain reliable results, these parameters must be properly calibrated. An airborne InSAR mapping system has been developed by the Institute of Remote Sensing and Digital Earth (RADI), Chinese Academy of Sciences (CAS) to acquire three-dimensional geo-spatial data with high resolution and accuracy. To test the performance of the InSAR system, a Validation/Calibration (Val/Cal) campaign has been carried out in Sichuan province, south-west China, whose results will be reported in this paper.

  11. PEP-II vacuum system pressure profile modeling using EXCEL

    SciTech Connect

    Nordby, M.; Perkins, C.

    1994-06-01

    A generic, adaptable Microsoft EXCEL program to simulate molecular flow in beam line vacuum systems is introduced. Modeling using finite-element approximation of the governing differential equation is discussed, as well as error estimation and program capabilities. The ease of use and flexibility of the spreadsheet-based program is demonstrated. PEP-II vacuum system models are reviewed and compared with analytical models.
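
    A minimal sketch of the finite-difference style of molecular-flow pressure-profile calculation that such a spreadsheet model performs, here for a uniform beam pipe with distributed outgassing and a lumped pump at each end; the pipe length, conductance, outgassing rate and pump speeds are illustrative assumptions, not PEP-II values.

        import numpy as np

        L = 10.0        # beam-pipe length (m), assumed
        n = 101         # number of nodes
        c = 20.0        # specific conductance of the pipe (l*m/s), assumed
        q = 1e-8        # outgassing per unit length (Torr*l/s/m), assumed
        S_end = 100.0   # pump speed at each end (l/s), assumed

        dz = L / (n - 1)
        A = np.zeros((n, n))
        b = np.full(n, -q * dz**2 / c)   # interior nodes: c * d2P/dz2 = -q

        for i in range(1, n - 1):
            A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

        # Pump boundary conditions: throughput into each end pump, c*dP/dz = S_end*P.
        A[0, 0], A[0, 1] = -(c / dz + S_end), c / dz
        A[-1, -1], A[-1, -2] = -(c / dz + S_end), c / dz
        b[0] = b[-1] = 0.0

        P = np.linalg.solve(A, b)
        print(f"peak pressure {P.max():.2e} Torr, pressure at the pumps {P[0]:.2e} Torr")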

  12. Bidirectional reflectance function in coastal waters: modeling and validation

    NASA Astrophysics Data System (ADS)

    Gilerson, Alex; Hlaing, Soe; Harmel, Tristan; Tonizzo, Alberto; Arnone, Robert; Weidemann, Alan; Ahmed, Samir

    2011-11-01

    The current operational algorithm for the correction of bidirectional effects from the satellite ocean color data is optimized for typical oceanic waters. However, versions of bidirectional reflectance correction algorithms, specifically tuned for typical coastal waters and other case 2 conditions, are particularly needed to improve the overall quality of those data. In order to analyze the bidirectional reflectance distribution function (BRDF) of case 2 waters, a dataset of typical remote sensing reflectances was generated through radiative transfer simulations for a large range of viewing and illumination geometries. Based on this simulated dataset, a case 2 water focused remote sensing reflectance model is proposed to correct above-water and satellite water leaving radiance data for bidirectional effects. The proposed model is first validated with a one year time series of in situ above-water measurements acquired by collocated multi- and hyperspectral radiometers which have different viewing geometries installed at the Long Island Sound Coastal Observatory (LISCO). Match-ups and intercomparisons performed on these concurrent measurements show that the proposed algorithm outperforms the algorithm currently in use at all wavelengths.

  13. Validating the topographic climatology logic of the MTCLIM model

    SciTech Connect

    Glassy, J.M.; Running, S.W.

    1995-06-01

    The topographic climatology logic of the MTCLIM model was validated using a comparison of modeled air temperatures vs. remotely sensed, thermal infrared (TIR) surface temperatures from three Daedalus Thematic Mapper Simulator scenes. The TIR data were taken in 1990 near Sisters, Oregon, as part of the NASA OTTER project. The original air temperature calculation method was modified for the spatial context of this study. After stratifying by canopy closure and relative solar loading, r² values of 0.74, 0.89, and 0.97 were obtained for the March, June, and August scenes, respectively, using a modified air temperature algorithm. Consistently lower coefficients of determination were obtained using the original air temperature algorithm on the same data: r² values of 0.070, 0.52, and 0.66 for the March, June, and August samples, respectively. The difficulties of comparing screen height air temperatures with remotely sensed surface temperatures are discussed, and several ideas for follow-on studies are suggested.

  14. Validation of model based active control of combustion instability

    SciTech Connect

    Fleifil, M.; Ghoneim, Z.; Ghoniem, A.F.

    1998-07-01

    The demand for efficient, compact and clean combustion systems has spurred research into the fundamental mechanisms governing their performance and into means of interactively changing their performance characteristics. Thermoacoustic instability is frequently observed in combustion systems with high power density, when burning close to the lean flammability limit, or when using exhaust gas recirculation to meet more stringent emissions regulations. Its occurrence, and/or the passive means used to mitigate it, leads to performance degradation such as reduced combustion efficiency, high local heat transfer rates, an increase in the mixture equivalence ratio, or system failure due to structural damage. This paper reports on a study of the origin of thermoacoustic instability, its dependence on system parameters, and the means of actively controlling it. The authors have developed an analytical model of thermoacoustic instability in premixed combustors. The model combines a heat release dynamics model, constructed using the kinematics of a premixed flame stabilized behind a perforated plate, with the linearized conservation equations governing the system acoustics. This formulation allows model based controller design. In order to test the performance of the analytical model, a numerical solution of the partial differential equations governing the system has been carried out using the principle of harmonic separation and focusing on the dominant unstable mode. This leads to a system of ODEs governing the thermofluid variables. Analytical predictions of the frequency and growth rate of the unstable mode are shown to be in good agreement with the numerical simulations as well as with those obtained using experimental identification techniques when applied to a laboratory combustor. The authors use these results to confirm the validity of the assumptions used in formulating the analytical model. A controller based on the minimization of a cost function using the LQR technique has been designed using the analytical model and implemented on a bench-top laboratory combustor. The authors show that the controller is capable of suppressing the pressure oscillations in the combustor with a settling time much shorter than what had been attained before and without exciting secondary peaks.
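
    A minimal sketch of the LQR design step mentioned above, applied to a generic two-state oscillator representing a single unstable acoustic mode; the state-space matrices, growth rate and weighting matrices are illustrative placeholders, not the identified combustor model from the paper.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # x = [modal pressure amplitude, its time derivative], u = actuator input.
        omega = 2.0 * np.pi * 200.0   # modal frequency (rad/s), assumed
        growth = 30.0                 # positive growth rate (1/s) -> unstable mode, assumed
        A = np.array([[0.0, 1.0],
                      [-omega**2, 2.0 * growth]])
        B = np.array([[0.0],
                      [1.0]])

        Q = np.diag([1.0, 1e-4])      # state weights (penalize pressure amplitude)
        R = np.array([[1e-3]])        # control effort weight

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)            # optimal feedback gain, u = -K x
        print("LQR gain:", K)
        print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))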

  15. Validation of Community Models: 2. Development of a Baseline, Using the Wang-Sheeley-Arge Model

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    This paper is the second in a series providing independent validation of community models of the outer corona and inner heliosphere. Here I present a comprehensive validation of the Wang-Sheeley-Arge (WSA) model. These results will serve as a baseline against which to compare the next generation of comparable forecasting models. The WSA model is used by a number of agencies to predict Solar wind conditions at Earth up to 4 days into the future. Given its importance to both the research and forecasting communities, it is essential that its performance be measured systematically and independently. I offer just such an independent and systematic validation. I report skill scores for the model's predictions of wind speed and interplanetary magnetic field (IMF) polarity for a large set of Carrington rotations. The model was run in all its routinely used configurations. It ingests synoptic line of sight magnetograms. For this study I generated model results for monthly magnetograms from multiple observatories, spanning the Carrington rotation range from 1650 to 2074. I compare the influence of the different magnetogram sources and performance at quiet and active times. I also consider the ability of the WSA model to forecast both sharp transitions in wind speed from slow to fast wind and reversals in the polarity of the radial component of the IMF. These results will serve as a baseline against which to compare future versions of the model as well as the current and future generation of magnetohydrodynamic models under development for forecasting use.
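
    Skill scores of the kind reported here are commonly defined as SS = 1 - MSE(model)/MSE(reference), with a persistence or climatological baseline as the reference. The snippet below is a generic sketch of that calculation using invented wind-speed numbers, not the WSA validation set.

        import numpy as np

        def skill_score(observed, predicted, reference):
            """SS = 1 - MSE(prediction) / MSE(reference baseline)."""
            mse_model = np.mean((observed - predicted) ** 2)
            mse_ref = np.mean((observed - reference) ** 2)
            return 1.0 - mse_model / mse_ref

        obs = np.array([420.0, 510.0, 640.0, 580.0, 450.0])   # km/s, made-up
        wsa = np.array([400.0, 480.0, 600.0, 560.0, 470.0])   # model prediction
        clim = np.full_like(obs, obs.mean())                  # climatological baseline

        print(f"skill vs. climatology: {skill_score(obs, wsa, clim):.2f}")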

  16. Parental modelling of eating behaviours: observational validation of the Parental Modelling of Eating Behaviours scale (PARM).

    PubMed

    Palfreyman, Zoe; Haycraft, Emma; Meyer, Caroline

    2015-03-01

    Parents are important role models for their children's eating behaviours. This study aimed to further validate the recently developed Parental Modelling of Eating Behaviours Scale (PARM) by examining the relationships between maternal self-reports on the PARM with the modelling practices exhibited by these mothers during three family mealtime observations. Relationships between observed maternal modelling and maternal reports of children's eating behaviours were also explored. Seventeen mothers with children aged between 2 and 6 years were video recorded at home on three separate occasions whilst eating a meal with their child. Mothers also completed the PARM, the Children's Eating Behaviour Questionnaire and provided demographic information about themselves and their child. Findings provided validation for all three PARM subscales, which were positively associated with their observed counterparts on the observational coding scheme (PARM-O). The results also indicate that habituation to observations did not change the feeding behaviours displayed by mothers. In addition, observed maternal modelling was significantly related to children's food responsiveness (i.e., their interest in and desire for foods), enjoyment of food, and food fussiness. This study makes three important contributions to the literature. It provides construct validation for the PARM measure and provides further observational support for maternal modelling being related to lower levels of food fussiness and higher levels of food enjoyment in their children. These findings also suggest that maternal feeding behaviours remain consistent across repeated observations of family mealtimes, providing validation for previous research which has used single observations. PMID:25111293

  17. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    PubMed

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601
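
    A minimal sketch of the nonparametric ingredient of such a test is given below: the one-step transition density of the observable series is estimated as the ratio of a joint to a marginal kernel density estimate, which could then be compared against a parametric candidate. The AR(1) data and the default bandwidths are illustrative assumptions, not the paper's procedure.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)

        # Illustrative observable series: a Gaussian AR(1) process.
        n = 2000
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = 0.6 * y[t - 1] + rng.normal(scale=0.5)

        pairs = np.vstack([y[:-1], y[1:]])        # (y_{t-1}, y_t) pairs
        kde_joint = gaussian_kde(pairs)
        kde_marg = gaussian_kde(y[:-1])

        def transition_density(y_prev, y_curr):
            """Nonparametric estimate of p(y_t | y_{t-1}) = f(y_{t-1}, y_t) / f(y_{t-1})."""
            return kde_joint([[y_prev], [y_curr]])[0] / kde_marg([y_prev])[0]

        print(transition_density(0.0, 0.3))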

  18. Some Hamiltonian models of friction II

    SciTech Connect

    Egli, Daniel; Gang Zhou

    2012-10-15

    In the present paper we consider the motion of a very heavy tracer particle in a medium of a very dense, non-interacting Bose gas. We prove that, in a certain mean-field limit, the tracer particle will be decelerated and come to rest somewhere in the medium. Friction is caused by emission of Cerenkov radiation of gapless modes into the gas. Mathematically, a system of semilinear integro-differential equations, introduced in Froehlich et al. ['Some hamiltonian models of friction,' J. Math. Phys. 52(8), 083508 (2011)], describing a tracer particle in a dispersive medium is investigated, and decay properties of the solution are proven. This work is an extension of Froehlich et al. ['Friction in a model of hamiltonian dynamics,' Commun. Math. Phys. 315(2), 401-444 (2012)]; it is an extension because no weak coupling limit for the interaction between tracer particle and medium is assumed. The technical methods used are dispersive estimates and a contraction principle.

  19. Alaska North Slope Tundra Travel Model and Validation Study

    SciTech Connect

    Harry R. Bader; Jacynthe Guimond

    2006-03-01

    The Alaska Department of Natural Resources (DNR), Division of Mining, Land, and Water manages cross-country travel, typically associated with hydrocarbon exploration and development, on Alaska's arctic North Slope. This project is intended to provide natural resource managers with objective, quantitative data to assist decision making regarding opening of the tundra to cross-country travel. DNR designed standardized, controlled field trials, with baseline data, to investigate the relationships present between winter exploration vehicle treatments and the independent variables of ground hardness, snow depth, and snow slab thickness, as they relate to the dependent variables of active layer depth, soil moisture, and photosynthetically active radiation (a proxy for plant disturbance). Changes in the dependent variables were used as indicators of tundra disturbance. Two main tundra community types were studied: Coastal Plain (wet graminoid/moist sedge shrub) and Foothills (tussock). DNR constructed four models to address physical soil properties: two models for each main community type, one predicting change in depth of active layer and a second predicting change in soil moisture. DNR also investigated the limited potential management utility in using soil temperature, the amount of photosynthetically active radiation (PAR) absorbed by plants, and changes in microphotography as tools for the identification of disturbance in the field. DNR operated under the assumption that changes in the abiotic factors of active layer depth and soil moisture drive alteration in tundra vegetation structure and composition. Statistically significant differences in depth of active layer, soil moisture at a 15 cm depth, soil temperature at a 15 cm depth, and the absorption of photosynthetically active radiation were found among treatment cells and among treatment types. The models were unable to thoroughly investigate the interacting role between snow depth and disturbance due to a lack of variability in snow depth cover throughout the period of field experimentation. The amount of change in disturbance indicators was greater in the tundra communities of the Foothills than in those of the Coastal Plain. However the overall level of change in both community types was less than expected. In Coastal Plain communities, ground hardness and snow slab thickness were found to play an important role in change in active layer depth and soil moisture as a result of treatment. In the Foothills communities, snow cover had the most influence on active layer depth and soil moisture as a result of treatment. Once certain minimum thresholds for ground hardness, snow slab thickness, and snow depth were attained, it appeared that little or no additive effect was realized regarding increased resistance to disturbance in the tundra communities studied. DNR used the results of this modeling project to set a standard for maximum permissible disturbance of cross-country tundra travel, with the threshold set below the widely accepted standard of Low Disturbance levels (as determined by the U.S. Fish and Wildlife Service). DNR followed the modeling project with a validation study, which seemed to support the field trial conclusions and indicated that the standard set for maximum permissible disturbance exhibits a conservative bias in favor of environmental protection. Finally DNR established a quick and efficient tool for visual estimations of disturbance to determine when investment in field measurements is warranted. 
This Visual Assessment System (VAS) seemed to support the plot disturbance measurements taken during the modeling and validation phases of this project.

  20. Validation of transport models using additive flux minimization technique

    SciTech Connect

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
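
    The core idea — vary an additional effective diffusivity until the predicted profile best matches the measured one — can be caricatured with a toy steady-state slab model, as in the sketch below. The profile formula and all numbers are invented stand-ins; the actual implementation uses FACETS::Core and DAKOTA as described above.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Toy steady-state slab model: -D n'' = S, n(L) = 0, n'(0) = 0
        #   => n(x) = S (L^2 - x^2) / (2 D).  All numbers are illustrative.
        L, S = 1.0, 1.0
        x = np.linspace(0.0, L, 50)

        def profile(d_total):
            return S * (L**2 - x**2) / (2.0 * d_total)

        d_model = 0.5                  # diffusivity supplied by the transport model
        n_exp = profile(0.8)           # "experimental" profile (true diffusivity 0.8)

        def mismatch(d_add):
            return np.sum((profile(d_model + d_add) - n_exp) ** 2)

        result = minimize_scalar(mismatch, bounds=(0.0, 2.0), method="bounded")
        print(f"additional diffusivity needed: {result.x:.3f}")   # ~0.3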

  1. The role of global cloud climatologies in validating numerical models

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN

    1991-01-01

    Reliable estimates of the components of the surface radiation budget are important in studies of ocean-atmosphere interaction, land-atmosphere interaction, ocean circulation and in the validation of radiation schemes used in climate models. The methods currently under consideration must necessarily make certain assumptions regarding both the presence of clouds and their vertical extent. Because of the uncertainties in assumed cloudiness, all these methods involve perhaps unacceptable uncertainties. Here, a theoretical framework that avoids the explicit computation of cloud fraction and the location of cloud base in estimating the surface longwave radiation is presented. Estimates of the global surface downward fluxes and the oceanic surface net upward fluxes were made for four months (April, July, October and January) in 1985 to 1986. These estimates are based on a relationship between cloud radiative forcing at the top of the atmosphere and the surface obtained from a general circulation model. The radiation code is the version used in the UCLA/GLA general circulation model (GCM). The longwave cloud radiative forcing at the top of the atmosphere as obtained from Earth Radiation Budget Experiment (ERBE) measurements is used to compute the forcing at the surface by means of the GCM-derived relationship. This, along with clear-sky fluxes from the computations, yield maps of the downward longwave fluxes and net upward longwave fluxes at the surface. The calculated results are discussed and analyzed. The results are consistent with current meteorological knowledge and explainable on the basis of previous theoretical and observational works; therefore, it can be concluded that this method is applicable as one of the ways to obtain the surface longwave radiation fields from currently available satellite data.

  2. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, since a 65% permeability reduction around the wellbore was estimated. The design for this minifracture was from 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. Also, an induced fracture half-length of 100 feet was determined to have occurred, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  3. Gravitropic responses of the Avena coleoptile in space and on clinostats. II. Is reciprocity valid?

    NASA Technical Reports Server (NTRS)

    Johnsson, A.; Brown, A. H.; Chapman, D. K.; Heathcote, D.; Karlsson, C.

    1995-01-01

    Experiments were undertaken to determine if the reciprocity rule is valid for gravitropic responses of oat coleoptiles in the acceleration region below 1 g. The rule predicts that the gravitropic response should be proportional to the product of the applied acceleration and the stimulation time. Seedlings were cultivated on 1 g centrifuges and transferred to test centrifuges to apply a transverse g-stimulation. Since responses occurred in microgravity, the uncertainties about the validity of clinostat simulation of weightlessness were avoided. Plants at two stages of coleoptile development were tested. Plant responses were obtained using time-lapse video recordings that were analyzed after the flight. Stimulus intensities and durations were varied and ranged from 0.1 to 1.0 g and from 2 to 130 min, respectively. For threshold g-doses the reciprocity rule was obeyed. The threshold dose was of the order of 55 g s and 120 g s, respectively, for the two groups of plants investigated. Reciprocity was also studied for bending responses ranging from just above the detectable level to about 10 degrees. The validity of the rule could not be confirmed for higher g-doses, chiefly because the data were more variable. It was investigated whether the uniformity of the overall response data increased when the gravitropic dose was defined as (g^m x t) with m-values different from unity. This was not the case, and the reciprocity concept is, therefore, valid also in the hypogravity region. The concept of gravitropic dose, the product of the transverse acceleration and the stimulation time, is also well-defined in the acceleration region studied. With the same hardware, tests were done on earth where responses occurred on clinostats. The results did not contradict the reciprocity rule, but scatter in the data was large.

  4. Modeling short wave radiation and ground surface temperature: a validation experiment in the Western Alps

    NASA Astrophysics Data System (ADS)

    Pogliotti, P.; Cremonese, E.; Dallamico, M.; Gruber, S.; Migliavacca, M.; Morra di Cella, U.

    2009-12-01

    Permafrost distribution in high-mountain areas is influenced by topography (micro-climate) and by high variability of ground cover conditions. Its monitoring is very difficult due to logistical problems like accessibility, costs, weather conditions and reliability of instrumentation. For these reasons physically-based modeling of surface rock/ground temperatures (GST) is fundamental for the study of mountain permafrost dynamics. With this awareness, a 1D version of the GEOtop model (www.geotop.org) is tested in several high-mountain sites and its accuracy in reproducing GST and incoming short wave radiation (SWin) is evaluated using independent field measurements. In order to describe the influence of topography, both flat and near-vertical sites with different aspects are considered. Since the validation of SWin is difficult on steep rock faces (due to the lack of direct measures) and the validation of GST is difficult on flat sites (due to the presence of snow), the two parameters are validated as independent experiments: SWin only on flat morphologies, GST only on the steep ones. The main purpose is to investigate the effect of: (i) distance between the driving meteo station location and the simulation point location, (ii) cloudiness, (iii) simulation point aspect, (iv) winter/summer period. The temporal duration of the model runs varies from 3 years for the SWin experiment to 8 years for the validation of GST. The model parameterization is constant and tuned for a common massive bedrock of crystalline rock like granite. The ground temperature profile is not initialized because rock temperature is measured at only 10 cm depth. A set of 9 performance measures is used for comparing model predictions and observations (including: fractional mean bias (FB), coefficient of residual mass (CRM), mean absolute error (MAE), modelling efficiency (ME), coefficient of determination (R2)). Results are very encouraging. For both experiments the distance (km) between the location of the driving meteo station and the location of the simulation point does not have a significant effect (below 230 km) on ME and R2 values. The incoming short wave radiation on flat sites is very well modeled and only the cloudiness can be a significant source of error, in terms of underestimation. Also the GST on steep sites is very well modeled and very good values of both ME and R2 are obtained. MAE values are always quite large (15C), but the role of the fixed parameterization is probably strong in this respect. Over- and under-estimations occur during winter and summer, respectively, and can be an effect of imperfect modeling of SWin on near-vertical morphologies. In the future the direct validation of SWin on steep sites is needed, together with a validation of snow accumulation/melting on flat sites and a related analysis of the effect on the ground thermal regime. This requires very good precipitation datasets in middle- to high-mountain areas.
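
    For reference, most of the performance measures listed above can be computed as in the generic sketch below (placeholder arrays, not the study's data); the definitions are the ones implemented here and may differ in sign convention from the study.

        import numpy as np

        def performance(obs, sim):
            """A generic set of goodness-of-fit measures (definitions as implemented here)."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            mae = np.mean(np.abs(sim - obs))
            me = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
            r2 = np.corrcoef(obs, sim)[0, 1] ** 2
            fb = 2.0 * (sim.mean() - obs.mean()) / (sim.mean() + obs.mean())       # fractional mean bias
            crm = (obs.sum() - sim.sum()) / obs.sum()                              # coefficient of residual mass
            return {"MAE": mae, "ME": me, "R2": r2, "FB": fb, "CRM": crm}

        # Placeholder ground-surface temperatures (degrees C), not the study's data.
        observed = np.array([-2.1, 0.4, 3.5, 7.9, 12.2, 8.1, 2.0])
        modelled = np.array([-1.5, 0.9, 3.0, 8.4, 13.0, 7.2, 1.1])
        print(performance(observed, modelled))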

  5. EEG-neurofeedback for optimising performance. II: creativity, the performing arts and ecological validity.

    PubMed

    Gruzelier, John H

    2014-07-01

    As a continuation of a review of evidence of the validity of cognitive/affective gains following neurofeedback in healthy participants, including correlations in support of the gains being mediated by feedback learning (Gruzelier, 2014a), the focus here is on the impact on creativity, especially in the performing arts including music, dance and acting. The majority of research involves alpha/theta (A/T), sensory-motor rhythm (SMR) and heart rate variability (HRV) protocols. There is evidence of reliable benefits from A/T training with advanced musicians especially for creative performance, and reliable benefits from both A/T and SMR training for novice music performance in adults and in a school study with children with impact on creativity, communication/presentation and technique. Making the SMR ratio training context ecologically relevant for actors enhanced creativity in stage performance, with added benefits from the more immersive training context. A/T and HRV training have benefitted dancers. The neurofeedback evidence adds to the rapidly accumulating validation of neurofeedback, while performing arts studies offer an opportunity for ecological validity in creativity research for both creative process and product. PMID:24239853

  6. Development and validation of a quantification method for ziyuglycoside I and II in rat plasma: Application to their pharmacokinetic studies.

    PubMed

    Ye, Wei; Fu, Hanxu; Xie, Lin; Zhou, Lijun; Rao, Tai; Wang, Qian; Shao, Yuhao; Xiao, Jingcheng; Kang, Dian; Wang, Guangji; Liang, Yan

    2015-07-01

    This study provided a novel and generally applicable method to determine ziyuglycoside I and ziyuglycoside II in rat plasma based on liquid chromatography with tandem mass spectrometry. A single step of liquid-liquid extraction with n-butanol was utilized, and ginsenoside Rg3 was chosen as internal standard. Final extracts were analyzed based on liquid chromatography with tandem mass spectrometry. Chromatographic separation was achieved using a Thermo Golden C18 column, and the applied gradient elution program allowed for the simultaneous determination of two ziyuglycosides in a one-step chromatographic separation with a total run time of 10 min. The fully validated methodology for both analytes demonstrated high sensitivity (the lower limit of quantitation was 2.0 ng/mL), good accuracy (% RE ≤ ± 15) and precision (% RSD ≤ 15). The average recoveries of both ziyuglycosides and internal standard were all above 75% and no obvious matrix effect was found. This method was then successfully applied to the preclinical pharmacokinetic studies of ziyuglycoside I and ziyuglycoside II. The presently developed methodology would be useful for the preclinical and clinical pharmacokinetic studies for ziyuglycoside I and ziyuglycoside II. PMID:25885584

  7. Mitigation of turbidity currents in reservoirs with passive retention systems: validation of CFD modeling

    NASA Astrophysics Data System (ADS)

    Ferreira, E.; Alves, E.; Ferreira, R. M. L.

    2012-04-01

    Sediment deposition by continuous turbidity currents may affect eco-environmental river dynamics in natural reservoirs and hinder the maneuverability of bottom discharge gates in dam reservoirs. In recent years, innovative techniques have been proposed to enforce the deposition of turbidity currents further upstream in the reservoir (and away from the dam), namely, the use of solid and permeable obstacles such as water jet screens, geotextile screens, etc. The main objective of this study is to validate a computational fluid dynamics (CFD) code applied to the simulation of the interaction between a turbidity current and a passive retention system, designed to induce sediment deposition. To accomplish the proposed objective, laboratory tests were conducted where a simple obstacle configuration was subjected to the passage of currents with different initial sediment concentrations. The experimental data was used to build benchmark cases to validate the 3D CFD software ANSYS-CFX. Sensitivity tests of mesh design, turbulence models and discretization requirements were performed. The validation consisted in comparing experimental and numerical results, involving instantaneous and time-averaged sediment concentrations and velocities. In general, a good agreement between the numerical and the experimental values is achieved when: i) realistic outlet conditions are specified, ii) channel roughness is properly calibrated, iii) two-equation k-ε models are employed, and iv) a fine mesh is employed near the bottom boundary. Acknowledgements This study was funded by the Portuguese Foundation for Science and Technology through the project PTDC/ECM/099485/2008. The first author thanks the assistance of Professor Moitinho de Almeida from ICIST and all members of the project and of the Fluvial Hydraulics group of CEHIDRO.

  8. Validation of Community Models: Identifying Events in Space Weather Model Timelines

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    I develop and document a set of procedures which test the quality of predictions of solar wind speed and polarity of the interplanetary magnetic field (IMF) made by coupled models of the ambient solar corona and heliosphere. The Wang-Sheeley-Arge (WSA) model is used to illustrate the application of these validation procedures. I present an algorithm which detects transitions of the solar wind from slow to high speed. I also present an algorithm which processes the measured polarity of the outward directed component of the IMF. This removes high-frequency variations to expose the longer-scale changes that reflect IMF sector changes. I apply these algorithms to WSA model predictions made using a small set of photospheric synoptic magnetograms obtained by the Global Oscillation Network Group as input to the model. The results of this preliminary validation of the WSA model (version 1.6) are summarized.
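
    The flavor of the slow-to-fast transition detector can be sketched as follows: flag any time at which the wind speed rises by more than a chosen threshold within a chosen window. The threshold, window, and series below are illustrative assumptions, not the documented algorithm.

        import numpy as np

        def find_speed_transitions(t, v, rise=150.0, window=2.0):
            """Flag times where the solar wind speed increases by more than `rise` km/s
            over the next `window` days -- a simple slow-to-fast transition detector.
            Consecutive flags belong to the same event and would be merged in practice."""
            events = []
            for i, ti in enumerate(t):
                in_window = (t > ti) & (t <= ti + window)
                if in_window.any() and v[in_window].max() - v[i] > rise:
                    events.append(ti)
            return events

        # Made-up daily series: a high-speed stream arriving around day 5.
        t = np.arange(0.0, 10.0, 1.0)
        v = np.array([380, 390, 400, 395, 410, 620, 650, 600, 520, 460], float)
        print(find_speed_transitions(t, v))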

  9. Transient PVT measurements and model predictions for vessel heat transfer. Part II.

    SciTech Connect

    Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.; Evans, Gregory Herbert; Rice, Steven F.

    2010-07-01

    Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.

  10. Bow shock models of ultracompact H II regions

    NASA Technical Reports Server (NTRS)

    Mac Low, Mordecai-Mark; Van Buren, Dave; Wood, Douglas O. S.; Churchwell, ED

    1991-01-01

    This paper presents models of ultracompact H II regions as the bow shocks formed by massive stars, with strong stellar winds, moving supersonically through molecular clouds. The morphologies, sizes and brightnesses of observed objects match the models well. Plausible models are provided for the ultracompact H II regions G12.21 - 0.1, G29.96 - 0.02, G34.26 + 0.15, and G43.89 - 0.78. To do this, the equilibrium shape of the wind-blown shell is calculated, assuming momentum conservation. Then the shell is illuminated with ionizing radiation from the central star, radiative transfer for free-free emission through the shell is performed, and the resulting object is visualized at various angles for comparison with radio continuum maps. The model unifies most of the observed morphologies of ultracompact H II regions, excluding only those objects with spherical shells. Ram pressure confinement greatly lengthens the life of ultracompact H II regions, explaining the large number that exist in the Galaxy despite their low apparent kinematic ages.
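
    For orientation, the characteristic scale of such a bow shock follows from balancing the stellar wind momentum flux against the ram pressure of the ambient medium, giving a standoff distance R0 = sqrt(Mdot v_w / (4 pi rho_a v_*^2)). The snippet below evaluates this with round illustrative numbers, not the parameters fitted to the objects named above.

        import numpy as np

        # Physical constants (cgs)
        M_SUN = 1.989e33          # g
        YEAR = 3.156e7            # s
        KM = 1.0e5                # cm
        M_H = 1.673e-24           # g

        def standoff_distance(mdot_msun_yr, v_wind_km_s, n_ambient_cm3, v_star_km_s):
            """Momentum-balance standoff radius of a wind bow shock (cm)."""
            mdot = mdot_msun_yr * M_SUN / YEAR
            v_w = v_wind_km_s * KM
            rho = n_ambient_cm3 * M_H          # assumes pure hydrogen for simplicity
            v_star = v_star_km_s * KM
            return np.sqrt(mdot * v_w / (4.0 * np.pi * rho * v_star**2))

        # Round illustrative values for an O star moving through a dense clump.
        r0 = standoff_distance(1e-6, 2000.0, 1e5, 10.0)
        print(f"R0 ~ {r0:.2e} cm (~{r0 / 3.086e18:.3f} pc)")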

  11. An independent verification and validation of the Future Theater Level Model conceptual model

    SciTech Connect

    Hartley, D.S. III; Kruse, K.L.; Martellaro, A.J.; Packard, S.L.; Thomas, B. Jr.; Turley, V.K.

    1994-08-01

    This report describes the methodology and results of independent verification and validation performed on a combat model in its design stage. The combat model is the Future Theater Level Model (FTLM), under development by The Joint Staff/J-8. J-8 has undertaken its development to provide an analysis tool that addresses the uncertainties of combat more directly than previous models and yields more rapid study results. The methodology adopted for this verification and validation consisted of document analyses. Included were detailed examination of the FTLM design documents (at all stages of development), the FTLM Mission Needs Statement, and selected documentation for other theater level combat models. These documents were compared to assess the FTLM as to its design stage, its purpose as an analytical combat model, and its capabilities as specified in the Mission Needs Statement. The conceptual design passed those tests. The recommendations included specific modifications as well as a recommendation for continued development. The methodology is significant because independent verification and validation have not previously been reported as being performed on a combat model in its design stage. The results are significant because The Joint Staff/J-8 will be using the recommendations from this study in determining whether to proceed with development of the model.

  12. Geoid model computation and validation over Alaska/Yukon

    NASA Astrophysics Data System (ADS)

    Li, X.; Huang, J.; Roman, D. R.; Wang, Y.; Veronneau, M.

    2012-12-01

    The Alaska and Yukon area consists of very complex and dynamic geology. It features the two highest mountains in North America, Mount McKinley (20,320 ft) in Alaska, USA, and Mount Logan (19,541 ft) in Yukon, Canada, along with the Alaska trench along the plate boundaries. On the one hand, this complex geology gives rise to large horizontal geoid gradients across this area. On the other hand, geoid time variation is much stronger than in most other areas of the world due to tectonic movement, post-glacial rebound, and ice melting effects in this region. This type of geology poses great challenges for the determination of the North American geoid over this area, which demands proper gravity data coverage in both space and time on both the Alaska and Yukon sides. However, the coverage of the local gravity data is inhomogeneous in this area. The terrestrial gravity is sparse in Alaska, and spans a century in time. In contrast, the terrestrial gravity is relatively well-distributed in Yukon but with data gaps. In this paper, various new satellite models along with the newly acquired airborne data will be incorporated to augment the middle-to-long wavelength geoid components. Initial tests show clear geoid improvements at the local GPS benchmarks in the Yukon area after crustal motion is accounted for. Similar approaches will be employed on the Alaska side for a better validation to determine a continuous vertical datum across the US and Canada.

  13. On the verification and validation of detonation models

    NASA Astrophysics Data System (ADS)

    Quirk, James

    2013-06-01

    This talk will consider the verification and validation of detonation models, such as Wescott-Stewart-Davis (Journal of Applied Physics, 2005), from the perspective of the American Institute of Aeronautics and Astronautics policy on numerical accuracy (AIAA J. Vol. 36, No. 1, 1998). A key aspect of the policy is that accepted documentation procedures must be used for journal articles with the aim of allowing the reported work to be reproduced by the interested reader. With the rise of electronic documents since the policy was formulated, it is now possible to satisfy this mandate in its strictest sense: that is, it is now possible to run a computational verification study directly in a PDF, thereby allowing a technical author to report numerical subtleties that traditionally have been ignored. The motivation for this document-centric approach is discussed elsewhere (Quirk 2003, Adaptive Mesh Refinement Theory and Practice, Springer), leaving the talk to concentrate on specific detonation examples that should be of broad interest to the shock-compression community.

  14. Real-time infrared signature model validation for hardware-in-the-loop simulations

    NASA Astrophysics Data System (ADS)

    Sanders, Jeffrey S.; Peters, Trina S.

    1997-07-01

    Techniques and tools for validation of real-time infrared target signature models are presented. The model validation techniques presented in this paper were developed for hardware-in-the-loop (HWIL) simulations at the U.S. Army Missile Command's Research, Development, and Engineering Center. Real-time target model validation is a required deliverable to the customer of a HWIL simulation facility and is a critical part of ensuring the fidelity of a HWIL simulation. There are two levels of real-time target model validation. The first level is comparison of the target model to some baseline or measured data which answers the question `are the simulation inputs correct?'. The second level of validation is a simulation validation which answers the question `for a given target model input is the simulation hardware and software generating the correct output?'. This paper deals primarily with the first level of target model validation. IR target signature models have often been validated by subjective visual inspection or by objective, but limited, statistical comparisons. Subjective methods can be very satisfying to the simulation developer but offer little comfort to the simulation customer since subjective methods cannot be documented. Generic statistical methods offer a level of documentation, yet are often not robust enough to fully test the fidelity of an IR signature. Advances in infrared seeker and sensor technology have led to the necessity of system specific target model validation. For any HWIL simulation it must be demonstrated that the sensor responds to the real-time signature model in a manner which is functionally equivalent to the sensor's response to a baseline model. Depending on the application, a baseline method can be measured IR imagery or the output of a validated IR signature prediction code. Tools are described that generate validation data for HWIL simulations at MICOM and example real-time model validations are presented.

  15. Predictive data modeling of human type II diabetes related statistics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Kristina L.; Jaenisch, Holger M.; Handley, James W.; Albritton, Nathaniel G.

    2009-04-01

    During the course of routine Type II diabetes treatment of one of the authors, it was decided to derive predictive analytical Data Models of the daily sampled vital statistics: namely weight, blood pressure, and blood sugar, to determine if the covariance among the observed variables could yield a descriptive equation-based model, or better still, a predictive analytical model that could forecast the expected future trend of the variables and possibly reduce the number of finger sticks required to monitor blood sugar levels. The personal history and analysis with resulting models are presented.

  16. Experiments for calibration and validation of plasticity and failure material modeling: 304L stainless steel.

    SciTech Connect

    Lee, Kenneth L.; Korellis, John S.; McFadden, Sam X.

    2006-01-01

    Experimental data for material plasticity and failure model calibration and validation were obtained from 304L stainless steel. Model calibration data were taken from smooth tension, notched tension, and compression tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path dependent combinations of internal pressure, extension, and torsion.

  17. Validation of the REBUS-3/RCT methodologies for EBR-II core-follow analysis

    SciTech Connect

    McKnight, R.D.

    1992-01-01

    One of the many tasks to be completed at EBR-II/FCF (Fuel Cycle Facility) regarding fuel cycle closure for the Integral Fast Reactor (IFR) is to develop and install the systems to be used for fissile material accountancy and control. The IFR fuel cycle and pyrometallurgical process scheme determine the degree of actinide buildup in the reload fuel assemblies. Inventories of curium, americium and neptunium in the fuel will affect the radiation and thermal environmental conditions at the fuel fabrication stations, the chemistry of reprocessing, and the neutronic performance of the core. Thus, it is important that validated calculational tools be put in place for accurately determining isotopic mass and neutronic inputs to FCF for both operational and material control and accountancy purposes. The primary goal of this work is to validate the REBUS-3/RCT codes as tools which can adequately compute the burnup and isotopic distribution in binary- and ternary-fueled Mark-3, Mark-4, and Mark-5 subassemblies. 6 refs.

  18. Modeling and experimental validation of unsteady impinging flames

    SciTech Connect

    Fernandes, E.C.; Leandro, R.E.

    2006-09-15

    This study reports on a joint experimental and analytical study of premixed laminar flames impinging onto a plate at controlled temperature, with special emphasis on the study of periodically oscillating flames. Six types of flame structures were found, based on parametric variations of nozzle-to-plate distance (H), jet velocity (U), and equivalence ratio (f). They were classified as conical, envelope, disc, cool central core, ring, and side-lifted flames. Of these, the disc, cool central core, and envelope flames were found to oscillate periodically, with frequency and sound pressure levels increasing with Re and decreasing with nozzle-to-plate distance. The unsteady behavior of these flames was modeled using the formulation derived by Durox et al. [D. Durox, T. Schuller, S. Candel, Proc. Combust. Inst. 29 (2002) 69-75] for the cool central core flames, where the convergent burner acts as a Helmholtz resonator driven by an external pressure fluctuation dependent on a velocity fluctuation at the burner mouth after a convective time delay τ. Based on this model, the present work shows that τ = [Re(2j tanh⁻¹((2δω + (1+N)jω² − jω₀²) / (2δω + (1−N)jω² − jω₀²))) + 2πK]/ω, i.e., there is a relation between the oscillation frequency (ω), the burner acoustic characteristics (ω₀, δ), and the time delay τ that is not explicitly dependent on N, the flame-flow normalized interaction coefficient [D. Durox, T. Schuller, S. Candel, Proc. Combust. Inst. 29 (2002) 69-75], because ∂τ/∂N = 0. Based on flame motion and noise analysis, K was found to physically represent the integer number of perturbations on the flame surface or the number of coherent structures on the impinging jet. Additionally, assuming that τ = βH/U, where H is the nozzle-to-plate distance and U is the mean jet velocity, it is shown that β_Disc = 1.8, β_CCC = 1.03, and β_Env = 1.0. A physical analysis of the proportionality constant β showed that for the disc flames, τ corresponds to the ratio between H and the velocity of the coherent structures. In the case of envelope and cool central core flames, τ corresponds to the ratio between H and the mean jet velocity. The predicted frequency fits the experimental data, supporting the validity of the mathematical modeling, empirical formulation, and assumptions made. (author)

  19. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

    SciTech Connect

    Dojcsak, L.; Marriner, J.; /Fermilab

    2010-08-01

    In this study we look at the SALT-II model of Type IA supernova analysis, which determines the distance moduli based on the known absolute standard candle magnitude of the Type IA supernovae. We take a look at the determination of the shape and color parameter coefficients, {alpha} and {beta} respectively, in the SALT-II model with the intrinsic error that is determined from the data. Using the SNANA software package provided for the analysis of Type IA supernovae, we use a standard Monte Carlo simulation to generate data with known parameters to use as a tool for analyzing the trends in the model based on certain assumptions about the intrinsic error. In order to find the best standard candle model, we try to minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We can estimate the magnitude of the intrinsic errors required to obtain results with {chi}{sup 2}/degree of freedom = 1. We can use the simulation to estimate the amount of color smearing as indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that we are able to use the RMS distribution in the variables as one method of estimating the correct intrinsic errors needed by the data to obtain the correct results for {alpha} and {beta}. We then apply the resultant intrinsic error matrix to the real data and show our results.
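
    In SALT-II-style analyses the fitted light-curve parameters enter the distance estimate through a relation of the form mu = m_B - M + alpha*x1 - beta*c. The sketch below scans a small (alpha, beta) grid against a chi-square of Hubble residuals; the toy sample, intrinsic scatter, cosmology, and absolute magnitude are all invented for illustration and stand in for the SNANA-based machinery described above.

        import numpy as np

        C_KM_S, H0, M_ABS = 299792.458, 70.0, -19.3   # assumed cosmology / absolute magnitude

        def mu_cosmo(z):
            """Low-redshift luminosity distance modulus (adequate for this toy example)."""
            d_l = C_KM_S * z / H0          # Mpc
            return 5.0 * np.log10(d_l) + 25.0

        def chi2(alpha, beta, z, m_b, x1, c, sig_int=0.12):
            mu_sn = m_b - M_ABS + alpha * x1 - beta * c    # Tripp-style estimator
            return np.sum((mu_sn - mu_cosmo(z)) ** 2 / sig_int**2)

        # Invented toy sample (z, x1, c, m_B) -- not SDSS-II data.
        z = np.array([0.03, 0.05, 0.08, 0.10])
        x1 = np.array([0.5, -1.2, 0.1, 1.0])
        c = np.array([0.02, -0.05, 0.10, 0.00])
        m_b = mu_cosmo(z) + M_ABS - 0.14 * x1 + 3.1 * c   # generated with alpha=0.14, beta=3.1

        grid = [(a, b, chi2(a, b, z, m_b, x1, c))
                for a in np.linspace(0.10, 0.18, 5) for b in np.linspace(2.5, 3.7, 5)]
        print(min(grid, key=lambda item: item[2]))   # recovers (0.14, 3.1) with chi2 ~ 0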

  20. Importance of Sea Ice for Validating Global Climate Models

    NASA Technical Reports Server (NTRS)

    Geiger, Cathleen A.

    1997-01-01

    Reproduction of current day large-scale physical features and processes is a critical test of global climate model performance. Without this benchmark, prognoses of future climate conditions are at best speculation. A fundamental question relevant to this issue is, which processes and observations are both robust and sensitive enough to be used for model validation and furthermore are they also indicators of the problem at hand? In the case of global climate, one of the problems at hand is to distinguish between anthropogenic and naturally occuring climate responses. The polar regions provide an excellent testing ground to examine this problem because few humans make their livelihood there, such that anthropogenic influences in the polar regions usually spawn from global redistribution of a source originating elsewhere. Concomitantly, polar regions are one of the few places where responses to climate are non-anthropogenic. Thus, if an anthropogenic effect has reached the polar regions (e.g. the case of upper atmospheric ozone sensitivity to CFCs), it has most likely had an impact globally but is more difficult to sort out from local effects in areas where anthropogenic activity is high. Within this context, sea ice has served as both a monitoring platform and sensitivity parameter of polar climate response since the time of Fridtjof Nansen. Sea ice resides in the polar regions at the air-sea interface such that changes in either the global atmospheric or oceanic circulation set up complex non-linear responses in sea ice which are uniquely determined. Sea ice currently covers a maximum of about 7% of the earth's surface but was completely absent during the Jurassic Period and far more extensive during the various ice ages. It is also geophysically very thin (typically <10 m in Arctic, <3 m in Antarctic) compared to the troposphere (roughly 10 km) and deep ocean (roughly 3 to 4 km). Because of these unique conditions, polar researchers regard sea ice as one of the more important features to monitor in terms of heat, mass, and momentum transfer between the air and sea and furthermore, the impact of such responses to global climate.

  1. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of the catastrophe models for the insurance industry in order to assure the model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of the country-specific vulnerability modifiers by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship was proven to be the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties considering the description of the building stock given by the World Housing Encyclopaedia and the local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for the client market portfolio align with the repeatability of such catastrophe losses in the country. The validation process also included collaboration between Aon Benfield and its client in order to account for the insurance market penetration in Algeria, estimated at approximately 5%. Thus, we believe that the applied approach led towards the production of an earthquake model for Algeria that is scientifically sound and reliable on one side and market and client oriented on the other.
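
    The rebuilding cost factors quoted above turn a damage-grade distribution into a mean damage ratio in the usual way, expected loss ratio = sum over grades of P(grade) x cost factor, as in the sketch below. Only the cost factors come from the description above; the damage-grade probabilities and replacement value are hypothetical.

        # Rebuilding cost factors by EMS-98 damage grade, as given in the abstract.
        cost_factor = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

        # Hypothetical probability of reaching each damage grade for one building class
        # at a given intensity (illustrative numbers only; the remainder is "no damage").
        p_grade = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.10, 5: 0.05}

        mean_damage_ratio = sum(p_grade[g] * cost_factor[g] for g in cost_factor)
        expected_loss = mean_damage_ratio * 1.0e6   # for a 1,000,000-unit replacement value
        print(f"mean damage ratio = {mean_damage_ratio:.3f}, expected loss = {expected_loss:,.0f}")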

  2. Validation of the integration of CFD and SAS4A/SASSYS-1: Analysis of EBR-II shutdown heat removal test 17

    SciTech Connect

    Thomas, J. W.; Fanning, T. H.; Vilim, R.; Briggs, L. L.

    2012-07-01

    Recent analyses have demonstrated the need to model multidimensional phenomena, particularly thermal stratification in outlet plena, during safety analyses of loss-of-flow transients of certain liquid-metal cooled reactor designs. Therefore, Argonne's reactor systems safety code SAS4A/SASSYS-1 is being enhanced by integrating 3D computational fluid dynamics models of the plena. A validation exercise of the new tool is being performed by analyzing the protected loss-of-flow event demonstrated by the EBR-II Shutdown Heat Removal Test 17. In this analysis, the behavior of the coolant in the cold pool is modeled using the CFD code STAR-CCM+, while the remainder of the cooling system and the reactor core are modeled with SAS4A/SASSYS-1. This paper summarizes the code integration strategy and provides the predicted 3D temperature and velocity distributions inside the cold pool during SHRT-17. The results of the coupled analysis should be considered preliminary at this stage, as the exercise pointed to the need to improve the CFD model of the cold pool tank. (authors)

  3. Transfer matrix modeling and experimental validation of cellular porous material with resonant inclusions.

    PubMed

    Doutres, Olivier; Atalla, Noureddine; Osman, Haisam

    2015-06-01

    Porous materials are widely used for improving sound absorption and sound transmission loss of vibrating structures. However, their efficiency is limited to medium and high frequencies of sound. A solution for improving their low frequency behavior while keeping an acceptable thickness is to embed resonant structures such as Helmholtz resonators (HRs). This work investigates the absorption and transmission acoustic performances of a cellular porous material with a two-dimensional periodic arrangement of HR inclusions. A low frequency model of a resonant periodic unit cell based on the parallel transfer matrix method is presented. The model is validated by comparison with impedance tube measurements and simulations based on both the finite element method and a homogenization based model. At the HR resonance frequency (i) the transmission loss is greatly improved and (ii) the sound absorption of the foam can be either decreased or improved depending on the HR tuning frequency and on the thickness and properties of the host foam. Finally, the diffuse field sound absorption and diffuse field sound transmission loss performance of a 2.6 m(2) resonant cellular material are measured. It is shown that the improvements observed at the Helmholtz resonant frequency on a single cell are confirmed at a larger scale. PMID:26093437

  4. MATSIM -The Development and Validation of a Numerical Voxel Model based on the MATROSHKA Phantom

    NASA Astrophysics Data System (ADS)

    Beck, Peter; Rollet, Sofia; Berger, Thomas; Bergmann, Robert; Hajek, Michael; Latocha, Marcin; Vana, Norbert; Zechner, Andrea; Reitz, Guenther

    The AIT Austrian Institute of Technology coordinates the project MATSIM (MATROSHKA Simulation) in collaboration with the Vienna University of Technology and the German Aerospace Center. The aim of the project is to develop a voxel-based model of the MATROSHKA anthropomorphic torso used at the International Space Station (ISS) as a foundation to perform Monte Carlo high-energy particle transport simulations for different irradiation conditions. Funded by the Austrian Space Applications Programme (ASAP), MATSIM is a co-investigation with the European Space Agency (ESA) ELIPS project MATROSHKA, an international collaboration of more than 18 research institutes and space agencies from all over the world, under the science and project lead of the German Aerospace Center. The MATROSHKA facility is designed to determine the radiation exposure of an astronaut onboard ISS and especially during an extravehicular activity. The numerical model developed in the frame of MATSIM is validated by reference measurements. In this report we give an overview of the model development and compare photon and neutron irradiations of the detector-equipped phantom torso with Monte Carlo simulations using FLUKA. Exposure to Co-60 photons was realized in the standard irradiation laboratory at Seibersdorf, while investigations with neutrons were performed at the thermal column of the Vienna TRIGA Mark-II reactor. The phantom was loaded with passive thermoluminescence dosimeters. In addition, first results of the calculated dose distribution within the torso are presented for a simulated exposure in low-Earth orbit.

  5. Predicting Pilot Error in Nextgen: Pilot Performance Modeling and Validation Efforts

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky

    2012-01-01

    We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and the relevance of the modeling and validation efforts to Next Gen technology and procedures.

  6. Validation of the updated Umkehr ozone retrieval algorithm against SAGE II data

    NASA Astrophysics Data System (ADS)

    Petropavlovskikh, I. V.; Weatherhead, E. C.; Bhartia, P. K.

    2002-05-01

    Improvements to the Umkehr ozone profile retrieval algorithm have been developed and are now being implemented. One of the changes in the algorithm eliminates the bias due to total ozone trends, which is known to affect the existing Umkehr ozone profile record. The updated algorithm is able to simulate observations more accurately and provides data output that is easier to analyze. Among the new diagnostic capabilities that the updated algorithm provides is the averaging kernel (AK) method. The AK approach allows studying how the algorithm responds when a small perturbation is made in a particular layer of the atmosphere [Rodgers 1976, 1990]. For the first time, we will use the AK method to define precisely what Umkehr should measure given a set of profiles measured by other platforms. This method allows us to compare trends more accurately than it has been done in the past. The updated Umkehr retrievals will be compared with SAGE II ozone profiles. Considerable variability of the ozone profile within the 10-degree latitude envelope creates noise in the SAGE matching dataset and makes trend analysis difficult. To eliminate this problem, the SAGE and Umkehr data will be de-seasonalized by subtracting the latitude/season dependent ozone climatology. The SAGE data will be also used to evaluate whether changing the solar zenith angle (SZA) normalization in Umkehr retrievals reduces instrumental errors. The comparison of zenith-sky radiances synthesized for a given set of SAGE profiles will be used to determine whether SAGE-derived N-values agree with the Umkehr-measured N-values. The updated long-term Umkehr dataset can be used to provide high quality information for identifying signs of ozone recovery. Recovery may be detected earlier in some layers than in others, for instance, at around 40 km altitude where CFC chemistry is the prevailing factor in ozone destruction. Both Umkehr and SAGE II measurements have very solid information at the mid- and upper (40 km) levels. The long Umkehr historical record can provide additional information for separating the dynamic and chemical mechanisms of depletion, and can help the community better understand climate change effects.
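
    The averaging kernel comparison alluded to above typically follows the Rodgers formalism: a high-resolution profile x (here from SAGE II) is mapped into what the coarser retrieval should report via x_conv = x_a + A (x - x_a), where A is the averaging kernel matrix and x_a the retrieval a priori. The snippet below is a generic sketch with made-up layer counts, kernel, and profiles.

        import numpy as np

        def convolve_with_averaging_kernel(x_highres, x_apriori, A):
            """Rodgers-style smoothing: what a low-resolution retrieval 'should' report
            for a given high-resolution truth profile."""
            return x_apriori + A @ (x_highres - x_apriori)

        n_layers = 8                              # e.g., Umkehr layers (illustrative)
        rng = np.random.default_rng(1)
        A = 0.6 * np.eye(n_layers) + 0.05 * rng.random((n_layers, n_layers))  # made-up kernel
        x_apriori = np.linspace(50.0, 10.0, n_layers)                         # DU per layer, made up
        x_sage = x_apriori * (1.0 + 0.1 * rng.standard_normal(n_layers))      # made-up "SAGE" profile

        x_expected = convolve_with_averaging_kernel(x_sage, x_apriori, A)
        print(np.round(x_expected, 1))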

  7. Optimal temperature input design for estimation of the square root model parameters: parameter accuracy and model validity restrictions.

    PubMed

    Bernaerts, Kristel; Servaes, Roos D; Kooyman, Steven; Versyck, Karina J; Van Impe, Jan F

    2002-03-01

    As part of the model building process, parameter estimation is of great importance in view of accurate prediction making. Confidence limits on the predicted model output are largely determined by the parameter estimation accuracy that is reflected by its parameter estimation covariance matrix. In view of the accurate estimation of the Square Root model parameters, Bernaerts et al. have successfully applied the techniques of optimal experiment design for parameter estimation [Int. J. Food Microbiol. 54 (1-2) (2000) 27]. Simulation-based results have proved that dynamic (i.e., time-varying) temperature conditions characterised by a large abrupt temperature increase yield highly informative cell density data enabling precise estimation of the Square Root model parameters. In this study, it is shown by bioreactor experiments with detailed and precise sampling that extreme temperature shifts disturb the exponential growth of Escherichia coli K12. A too large shift results in an intermediate lag phase. Because common growth models lack the ability to model this intermediate lag phase, temperature conditions should be designed such that exponential growth persist even though the temperature may be changing. The current publication presents (i) the design of an optimal temperature input guaranteeing model validity yet yielding accurate Square Root model parameters, and (ii) the experimental implementation of the optimal input in a computer-controlled bioreactor. Starting values for the experiment design are generated by a traditional two-step procedure based on static experiments. Opposed to the single step temperature profile, the novel temperature input comprises a sequence of smaller temperature increments. The structural development of the temperature input is extensively explained. High quality data of E. coli K12 under optimally varying temperature conditions realised in a computer-controlled bioreactor yield accurate estimates for the Square Root model parameters. The latter is illustrated by means of the individual confidence intervals and the joint confidence region. PMID:11934023
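
    For context, the Square Root model referred to above expresses the temperature dependence of the maximum specific growth rate as sqrt(mu_max) = b (T - T_min). The sketch below evaluates and least-squares fits that form on invented growth-rate data; the parameter values recovered are illustrative, not the paper's estimates.

        import numpy as np
        from scipy.optimize import curve_fit

        def sqrt_mu(T, b, T_min):
            """Square Root model: sqrt(mu_max) = b * (T - T_min), valid for T > T_min."""
            return b * (T - T_min)

        # Invented growth-rate data for an E. coli-like organism (1/h vs degrees C).
        T = np.array([12.0, 16.0, 20.0, 24.0, 28.0, 32.0, 36.0])
        mu = np.array([0.05, 0.15, 0.31, 0.52, 0.78, 1.10, 1.45])

        popt, _ = curve_fit(sqrt_mu, T, np.sqrt(mu), p0=[0.03, 5.0])
        b_hat, T_min_hat = popt
        print(f"b = {b_hat:.4f} (1/h)^0.5 per deg C, T_min = {T_min_hat:.1f} deg C")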

  8. Assessment of the Validity of the Double Superhelix Model for Reconstituted High Density Lipoproteins

    PubMed Central

    Jones, Martin K.; Zhang, Lei; Catte, Andrea; Li, Ling; Oda, Michael N.; Ren, Gang; Segrest, Jere P.

    2010-01-01

    For several decades, the standard model for high density lipoprotein (HDL) particles reconstituted from apolipoprotein A-I (apoA-I) and phospholipid (apoA-I/HDL) has been a discoidal particle ∼100 Å in diameter and the thickness of a phospholipid bilayer. Recently, Wu et al. (Wu, Z., Gogonea, V., Lee, X., Wagner, M. A., Li, X. M., Huang, Y., Undurti, A., May, R. P., Haertlein, M., Moulin, M., Gutsche, I., Zaccai, G., Didonato, J. A., and Hazen, S. L. (2009) J. Biol. Chem. 284, 36605–36619) used small angle neutron scattering to develop a new model they termed double superhelix (DSH) apoA-I that is dramatically different from the standard model. Their model possesses an open helical shape that wraps around a prolate ellipsoidal type I hexagonal lyotropic liquid crystalline phase. Here, we used three independent approaches, molecular dynamics, EM tomography, and fluorescence resonance energy transfer spectroscopy (FRET) to assess the validity of the DSH model. (i) By using molecular dynamics, two different approaches, all-atom simulated annealing and coarse-grained simulation, show that initial ellipsoidal DSH particles rapidly collapse to discoidal bilayer structures. These results suggest that, compatible with current knowledge of lipid phase diagrams, apoA-I cannot stabilize hexagonal I phase particles of phospholipid. (ii) By using EM, two different approaches, negative stain and cryo-EM tomography, show that reconstituted apoA-I/HDL particles are discoidal in shape. (iii) By using FRET, reconstituted apoA-I/HDL particles show a 28–34-Å intermolecular separation between terminal domain residues 40 and 240, a distance that is incompatible with the dimensions of the DSH model. Therefore, we suggest that, although novel, the DSH model is energetically unfavorable and not likely to be correct. Rather, we conclude that all evidence supports the likelihood that reconstituted apoA-I/HDL particles, in general, are discoidal in shape. PMID:20974855
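
    The FRET argument above rests on the standard Foerster relation E = R0^6 / (R0^6 + r^6). The short Python sketch below inverts that relation to obtain a donor-acceptor separation from a measured transfer efficiency; the Foerster radius and efficiency used are illustrative assumptions chosen only to land in the ~28 Å range the abstract reports, not values from the study.

      def fret_distance(E, R0):
          # Invert the Foerster relation E = R0**6 / (R0**6 + r**6) for the
          # donor-acceptor separation r (same length units as R0).
          return R0 * ((1.0 - E) / E) ** (1.0 / 6.0)

      # Illustrative values only: a Foerster radius of ~50 Angstrom and an
      # efficiency near 0.97 give a separation of about 28 Angstrom.
      print(round(fret_distance(0.97, 50.0), 1))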

  9. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    NASA Technical Reports Server (NTRS)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficiently to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  10. Dynamic characterization of hysteresis elements in mechanical systems. II. Experimental validation.

    PubMed

    Symens, W; Al-Bender, F

    2005-03-01

    The industrial demand for machine tools with ever increasing speed and accuracy calls for a closer look at the physical phenomena that are present at small movements of those machines' slides. One of these phenomena, and probably the most dominant one, is the dependence of the friction force on displacement, which can be described by a rate-independent hysteresis function with nonlocal memory. The influence of this highly nonlinear effect on the dynamics of the system has been theoretically analyzed in Part I of this paper. This part (II) aims at verifying these theoretical results on three experimental setups. Two setups, consisting of linearly driven rolling-element guideways, have been built to specifically study the hysteretic friction behavior. The experiments performed on these specially designed setups are then repeated on one axis of an industrial pick-and-place device, driven by a linear motor and guided by commercial guideways. The results of the experiments on all the setups agree qualitatively well with the theoretically predicted ones and point to the inherent difficulty of accurate quantitative identification of the hysteretic behavior. They further show that the hysteretic friction behavior has a direct bearing on the dynamics of machine tools, and its presence should therefore be carefully considered in the dynamic identification process of these systems. PMID:15836260
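
    Rate-independent hysteresis with nonlocal memory of the kind discussed here is often illustrated with a Maxwell-slip construction: a parallel bank of elasto-slip elements whose summed force traces hysteresis loops that depend on the displacement history but not on its rate. The Python sketch below is such a generic illustration under assumed stiffnesses and slip forces; it is not the identification procedure or parameter set used in the paper.

      import numpy as np

      def maxwell_slip_force(x_path, stiffness, slip_force):
          # Parallel bank of Maxwell-slip elements: each spring (stiffness k_i)
          # sticks until its force reaches W_i, then slips. The summed force over a
          # displacement path is rate independent and exhibits nonlocal memory.
          z = np.zeros(len(stiffness))                 # elastic deflection per element
          forces = []
          x_prev = x_path[0]
          for x in x_path:
              z = z + (x - x_prev)                     # elements follow the motion...
              limit = slip_force / stiffness
              z = np.clip(z, -limit, limit)            # ...until they slip
              forces.append(float(np.sum(stiffness * z)))
              x_prev = x
          return np.array(forces)

      # Small back-and-forth displacement path with assumed parameters:
      x = np.concatenate([np.linspace(0.0, 2.0, 50), np.linspace(2.0, -1.0, 75)])  # um
      k = np.array([5.0, 2.0, 1.0])                    # N/um
      W = np.array([1.0, 2.0, 3.0])                    # N
      print(maxwell_slip_force(x, k, W)[-1])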

  11. Theoretical models for Type I and Type II supernova

    SciTech Connect

    Woosley, S.E.; Weaver, T.A.

    1985-01-01

    Recent theoretical progress in understanding the origin and nature of Type I and Type II supernovae is discussed. New Type II presupernova models characterized by a variety of iron core masses at the time of collapse are presented and the sensitivity to the reaction rate ¹²C(α,γ)¹⁶O explained. Stars heavier than about 20 M☉ must explode by a "delayed" mechanism not directly related to the hydrodynamical core bounce and a subset is likely to leave black hole remnants. The isotopic nucleosynthesis expected from these massive stellar explosions is in striking agreement with the sun. Type I supernovae result when an accreting white dwarf undergoes a thermonuclear explosion. The critical role of the velocity of the deflagration front in determining the light curve, spectrum, and, especially, isotopic nucleosynthesis in these models is explored. 76 refs., 8 figs.

  12. 2013 CEF RUN - PHASE 1 DATA ANALYSIS AND MODEL VALIDATION

    SciTech Connect

    Choi, A.

    2014-05-08

    Phase 1 of the 2013 Cold cap Evaluation Furnace (CEF) test was completed on June 3, 2013 after a 5-day round-the-clock feeding and pouring operation. The main goal of the test was to characterize the CEF off-gas produced from a nitric-formic acid flowsheet feed and confirm whether the CEF platform is capable of producing scalable off-gas data necessary for the revision of the DWPF melter off-gas flammability model; the revised model will be used to define new safety controls on the key operating parameters for the nitric-glycolic acid flowsheet feeds, including total organic carbon (TOC). Whether the CEF off-gas data were scalable for the purpose of predicting the potential flammability of the DWPF melter exhaust was determined by comparing the H₂ and CO concentrations predicted by the current DWPF melter off-gas flammability model to those measured during Phase 1; data were deemed scalable if the calculated fractional conversions of TOC-to-H₂ and TOC-to-CO at varying melter vapor space temperatures were found to trend with and further bound the respective measured data with some margin of safety. Being scalable thus means that, for a given feed chemistry, the instantaneous flow rates of H₂ and CO in the DWPF melter exhaust can be estimated with some degree of conservatism by multiplying those of the respective gases from a pilot-scale melter by the feed rate ratio. This report documents the results of the Phase 1 data analysis and the calculations performed to determine the scalability of the CEF off-gas data. A total of six steady-state runs were made during Phase 1 under non-bubbled conditions by varying the CEF vapor space temperature from near 700 to below 300°C, as measured in a thermowell (T_tw). At each steady-state temperature, the off-gas composition was monitored continuously for two hours using MS, GC, and FTIR in order to track mainly H₂, CO, CO₂, NOₓ, and organic gases such as CH₄. The standard deviation of the average vapor space temperature during each steady state ranged from 2 to 6°C; however, those of the measured off-gas data were much larger due to the inherent cold cap instabilities in slurry-fed melters. In order to predict the off-gas composition at the sampling location downstream of the film cooler, the measured feed composition was charge-reconciled and input into the DWPF melter off-gas flammability model, which was then run under the conditions for each of the six Phase 1 steady states. In doing so, it was necessary to perform an overall heat/mass balance calculation from the melter to the Off-Gas Condensate Tank (OGCT) in order to estimate the rate of air inleakage as well as the true gas temperature in the CEF vapor space (T_gas) during each steady state by taking into account the effects of thermal radiation on the measured temperature (T_tw). The results of the Phase 1 data analysis and subsequent model runs showed that the concentrations of H₂ and CO predicted by the DWPF model correctly trended with and further bounded the respective measured data in the CEF off-gas, overpredicting the TOC-to-H₂ and TOC-to-CO conversion ratios by a factor of 2 to 5; an exception was the 7× overprediction of the latter at T_gas = 371°C, but the impact of CO on the off-gas flammability potential is only minor compared to that of H₂.
More importantly, the seemingly excessive overprediction of the TOC-to-H₂ conversion by a factor of 4 or higher at T_gas < ~350°C was attributed to the conservative antifoam decomposition scheme added recently to the model, and is therefore considered a modeling issue rather than a design issue. At T_gas > ~350°C, the predicted TOC-to-H₂ conversions were closer to but still higher than the measured data by a factor of 2, which may be regarded as adequate from the safety margin standpoint. The heat/mass balance calculations also showed that the correlation between T_tw and T_gas in the CEF vapor space was close to that of the ½-scale SGM, whose data were taken as directly applicable to the DWPF melter and thus used to set all the parameters of the original model. Based on these results of the CEF Phase 1 off-gas and thermal data analyses, it is concluded that: (1) the thermal characteristics of the CEF vapor space are prototypic thanks to its prototypic design; and (2) the CEF off-gas data are scalable in terms of predicting the flammability potential of the DWPF melter off-gas. These results also show that the existing DWPF safety controls on the TOC and antifoam as a function of nitrate are conservative by the same order of magnitude shown by the Phase 1 data at T_gas < ~350°C, since they were set at T_gas = 294°C, which falls into the region of excessive conservatism for the current DWPF model in terms of predicting the TOC-to-H₂ conversion. In order to remedy the overly conservative antifoam decomposition scheme used in the current DWPF model, the data from two recent tests will be analyzed in detail in order to gain additional insights into the antifoam decomposition chemistry in the cold cap. The first test was run in a temperature-programmed furnace using both normal and spiked feeds with fresh antifoam under inert and slightly oxidizing vapor space conditions. Phase 2 of the CEF test was run with the baseline nitric-glycolic acid flowsheet feeds that contained the “processed antifoam” and those spiked with fresh antifoam in order to study the effects of antifoam concentration as well as processing history on its decomposition chemistry under actual melter conditions. The goal is to develop an improved antifoam decomposition model from the analysis of these test data and incorporate it into a new multistage cold cap model to be developed concurrently for the nitric-glycolic acid flowsheet feeds. These activities will be documented in the Phase 2 report. Finally, it is recommended that some of the conservatism in the existing DWPF safety controls be removed by improving the existing measured-vs.-true gas temperature correlation used in the melter vapor space combustion calculations. The basis for this recommendation comes from the fact that the existing correlation was developed by linearly extrapolating the SGM data taken over a relatively narrow temperature range down to the safety basis minimum of 460°C, thereby underpredicting the true gas temperature considerably, as documented in this report.
Specifically, the task of improving the current temperature correlation will involve: (1) performing a heat/mass balance analysis similar to the one used in this study on actual DWPF data, (2) validating the measured-vs.-true gas temperature correlation for the CEF developed in this study against the DWPF melter heat/mass balance results, and (3) making adjustments to the CEF correlation, if necessary, before incorporating it into the DWPF safety basis calculations. The steps described here can be completed with relatively little effort.
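
    The scalability argument above reduces to two simple relations: a fractional TOC-to-gas conversion computed from molar rates, and a full-scale flow estimated by multiplying the pilot-scale flow by the feed rate ratio. The Python sketch below writes these out with made-up numbers purely for illustration; the values are not CEF or DWPF data.

      def scale_offgas_flow(pilot_gas_flow, pilot_feed_rate, dwpf_feed_rate):
          # Scale a pilot-scale (CEF) off-gas flow to full scale by the feed rate ratio.
          return pilot_gas_flow * (dwpf_feed_rate / pilot_feed_rate)

      def toc_conversion(gas_molar_rate, toc_molar_rate):
          # Fractional conversion of feed carbon (TOC) to a given off-gas species.
          return gas_molar_rate / toc_molar_rate

      # Made-up numbers, for illustration only (not CEF or DWPF data):
      h2_pilot = 0.02                                  # mol/h of H2 at the pilot melter
      feed_pilot, feed_dwpf = 5.0, 100.0               # feed rates, consistent units
      print(scale_offgas_flow(h2_pilot, feed_pilot, feed_dwpf))   # 0.4 mol/h
      print(toc_conversion(h2_pilot, 1.0))                        # 0.02 (2% of TOC)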

  13. A Validity Agenda for Growth Models: One Size Doesn't Fit All!

    ERIC Educational Resources Information Center

    Patelis, Thanos

    2012-01-01

    This is a keynote presentation given at AERA on developing a validity agenda for growth models in a large-scale (e.g., state) setting. The emphasis of this presentation was to indicate that growth models, and the validity agenda designed to provide evidence supporting the claims to be made, need to be personalized to meet the local or…

  14. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface map for model validation without introducing re-sampling errors, while also eliminating existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.
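
    One way to down-sample a height map without introducing re-sampling error is simply to retain a subset of the measured grid points rather than interpolate onto a new grid. The Python sketch below shows that decimation step only; it is a generic illustration consistent with the abstract's goal and is not the two NASA techniques themselves, which also address measurement noise and errors.

      import numpy as np

      def decimate_surface(height_map, step):
          # Keep every `step`-th measured grid point in each direction. Because only
          # existing samples are retained (no interpolation), no re-sampling error
          # is introduced.
          return height_map[::step, ::step]

      # Illustrative: reduce a 1024 x 1024 map to 128 x 128 for a faster model run.
      rng = np.random.default_rng(0)
      surface = rng.normal(scale=10e-9, size=(1024, 1024))    # heights in metres
      print(decimate_surface(surface, step=8).shape)           # (128, 128)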

  15. The Validity of the Job Characteristics Model: A Review and Meta-Analysis.

    ERIC Educational Resources Information Center

    Fried, Yitzhak; Ferris, Gerald R.

    1987-01-01

    Assessed the validity of Hackman and Oldham's Job Characteristics Model by conducting a comprehensive review of nearly 200 relevant studies on the model as well as by applying meta-analytic procedures to much of the data. Available correlational results were reasonably valid and support the multidimensionality of job characteristics and their…

  16. Atomic Data and Spectral Model for Fe II

    NASA Astrophysics Data System (ADS)

    Bautista, Manuel A.; Fivet, Vanessa; Ballance, Connor; Quinet, Pascal; Ferland, Gary; Mendoza, Claudio; Kallman, Timothy R.

    2015-08-01

    We present extensive calculations of radiative transition rates and electron impact collision strengths for Fe II. The data sets involve 52 levels from the 3d⁷, 3d⁶4s, and 3d⁵4s² configurations. Computations of A-values are carried out with a combination of state-of-the-art multiconfiguration approaches, namely the relativistic Hartree-Fock, Thomas-Fermi-Dirac potential, and Dirac-Fock methods, while the R-matrix plus intermediate coupling frame transformation, Breit-Pauli R-matrix, and Dirac R-matrix packages are used to obtain collision strengths. We examine the advantages and shortcomings of each of these methods, and estimate rate uncertainties from the resulting data dispersion. We proceed to construct excitation balance spectral models, and compare the predictions from each data set with observed spectra from various astronomical objects. We are thus able to establish benchmarks in the spectral modeling of [Fe II] emission in the IR and optical regions as well as in the UV Fe II absorption spectra. Finally, we provide diagnostic line ratios and line emissivities for emission spectroscopy as well as column densities for absorption spectroscopy. All atomic data and models are available online and through the AtomPy atomic data curation environment.
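
    Diagnostic line ratios of the kind mentioned here are built from electron-impact excitation rate coefficients derived from effective collision strengths. The Python sketch below evaluates the standard rate-coefficient expression, q = 8.63e-6 * Upsilon / (g_l * sqrt(T_e)) * exp(-dE / (k T_e)), and forms a simple low-density ratio of photon emission rates; the collision strengths, statistical weight, and energies are placeholder values, not the Fe II data of the paper.

      import numpy as np

      def excitation_rate(T_e, upsilon, g_lower, delta_E_eV):
          # Electron-impact excitation rate coefficient (cm^3 s^-1) from an
          # effective collision strength:
          #     q_lu = 8.63e-6 / (g_l * sqrt(T_e)) * upsilon * exp(-dE / (k * T_e))
          k_B_eV = 8.617e-5                            # Boltzmann constant, eV/K
          return 8.63e-6 / (g_lower * np.sqrt(T_e)) * upsilon * np.exp(-delta_E_eV / (k_B_eV * T_e))

      # Low-density limit: each collisional excitation yields one photon, so the
      # ratio of two lines sharing a lower level tracks the ratio of their
      # excitation rates. Placeholder inputs only:
      q1 = excitation_rate(1.0e4, upsilon=2.5, g_lower=10, delta_E_eV=1.67)
      q2 = excitation_rate(1.0e4, upsilon=1.1, g_lower=10, delta_E_eV=1.96)
      print(q1 / q2)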

  17. Clinical Validation of Anyplex II HPV HR Detection Test for Cervical Cancer Screening in Korea.

    PubMed

    Jung, Sunkyung; Lee, Byungdoo; Lee, Kap No; Kim, Yonggoo; Oh, Eun-Jee

    2016-03-01

    Context: The Anyplex II HPV HR detection kit (Seegene Inc, Seoul, Korea) is a new, multiplex, real-time polymerase chain reaction assay to detect 14 individual high-risk (HR) human papillomavirus (HPV) types in a single tube. Objective: To evaluate the clinical performance of the HPV HR kit in predicting high-grade squamous intraepithelial lesions and cervical intraepithelial lesions grade 2 or worse in cervical cancer screening. Design: We analyzed 1137 cervical samples in Huro Path medium (CelltraZone, Seoul, Korea) from Korean women. The clinical performance of the HPV HR kit was compared with Hybrid Capture 2 (Qiagen, Valencia, California) using the noninferiority score test in a routine cervical cancer screening setting. The intralaboratory and interlaboratory agreements of HPV HR were also evaluated. Results: Overall agreement between the 2 assays was 92.4% (1051 of 1137) with a κ value of 0.787. Clinical sensitivity of HPV HR for high-grade squamous intraepithelial lesions and cervical intraepithelial lesions grade 2 or worse was 94.4% (95% confidence interval [CI], 89.2-99.7) and 92.5% (95% CI, 84.3-100.0), respectively. The respective values for Hybrid Capture 2 were 93.1% (95% CI, 87.2-98.9) and 87.5% (95% CI, 77.3-99.7). Clinical sensitivity and specificity of HPV HR were not inferior to those of Hybrid Capture 2 (P = .005 and P = .04, respectively). The HPV HR showed good intralaboratory and interlaboratory reproducibility at 98.0% (κ = 0.953) and 97.4% (κ = 0.940), respectively. Conclusions: The HPV HR demonstrates performance comparable to that of the Hybrid Capture 2 test and can be useful for HPV-based cervical cancer screening. PMID:26927723
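
    The agreement statistics quoted above (overall agreement and Cohen's kappa) follow directly from the 2x2 cross-tabulation of the two assays. The Python sketch below computes both from such a table; the counts used are hypothetical, chosen only so that the output reproduces the reported 92.4% agreement and κ of about 0.787, and are not the study's actual cross-tabulation.

      def agreement_and_kappa(both_pos, a_pos_b_neg, a_neg_b_pos, both_neg):
          # Overall percent agreement and Cohen's kappa for two assays from a 2x2 table.
          n = both_pos + a_pos_b_neg + a_neg_b_pos + both_neg
          p_obs = (both_pos + both_neg) / n
          p_a = (both_pos + a_pos_b_neg) / n           # positivity rate, assay A
          p_b = (both_pos + a_neg_b_pos) / n           # positivity rate, assay B
          p_exp = p_a * p_b + (1 - p_a) * (1 - p_b)    # chance agreement
          return p_obs, (p_obs - p_exp) / (1 - p_exp)

      # Hypothetical counts chosen only to reproduce ~92.4% agreement and kappa ~0.787:
      print(agreement_and_kappa(220, 43, 43, 831))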

  18. Contributions to the