Code of Federal Regulations, 2014 CFR
2014-07-01
... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... experimental conditions for the validation study and subsequent use during decontamination. The following experimental conditions apply for any solvent: (a) Temperature and pressure. Conduct the validation study and...
Solar-Diesel Hybrid Power System Optimization and Experimental Validation
NASA Astrophysics Data System (ADS)
Jacobus, Headley Stewart
As of 2008, 1.46 billion people, or 22 percent of the world's population, were without electricity. Many of these people live in remote areas where decentralized generation is the only method of electrification. Most mini-grids are powered by diesel generators, but new hybrid power systems are becoming a reliable method to incorporate renewable energy while also reducing total system cost. This thesis quantifies the measurable operational costs for an experimental hybrid power system in Sierra Leone. Two software programs, Hybrid2 and HOMER, are used during the system design and subsequent analysis. Experimental data from the installed system are used to validate the two programs and to quantify the savings created by each component within the hybrid system. This thesis bridges the gap between design optimization studies, which frequently lack subsequent validation, and experimental hybrid system performance studies.
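The operational-cost comparison described above can be illustrated with a toy dispatch calculation. This is a minimal sketch with entirely hypothetical load, PV, and fuel-consumption numbers (not data from the thesis); it only shows how PV generation offsets diesel fuel use in a mini-grid:

```python
# Minimal sketch (hypothetical numbers): daily dispatch of a PV-diesel
# hybrid mini-grid, comparing diesel fuel use with and without solar.
def diesel_fuel_use(load_kw, pv_kw, hours, fuel_l_per_kwh=0.3):
    """Fuel (litres) consumed when the diesel generator covers load not met by PV."""
    total = 0.0
    for load, pv in zip(load_kw, pv_kw):
        residual = max(load - pv, 0.0)  # diesel covers the shortfall
        total += residual * hours * fuel_l_per_kwh
    return total

load = [5, 8, 10, 7]   # kW averaged over four 6-hour blocks (illustrative)
pv   = [0, 6, 9, 1]    # kW of solar output in the same blocks (illustrative)
hours = 6
fuel_hybrid = diesel_fuel_use(load, pv, hours)
fuel_diesel_only = diesel_fuel_use(load, [0, 0, 0, 0], hours)
print(fuel_hybrid, fuel_diesel_only)
```

The difference between the two totals is the fuel saving attributable to the PV component, which is the kind of per-component saving the thesis quantifies from measured data.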
Schütte, Judith; Wang, Huange; Antoniou, Stella; Jarratt, Andrew; Wilson, Nicola K; Riepsaame, Joey; Calero-Nieto, Fernando J; Moignard, Victoria; Basilico, Silvia; Kinston, Sarah J; Hannah, Rebecca L; Chan, Mun Chiang; Nürnberg, Sylvia T; Ouwehand, Willem H; Bonzanni, Nicola; de Bruijn, Marella FTR; Göttgens, Berthold
2016-01-01
Transcription factor (TF) networks determine cell-type identity by establishing and maintaining lineage-specific expression profiles, yet reconstruction of mammalian regulatory network models has been hampered by a lack of comprehensive functional validation of regulatory interactions. Here, we report comprehensive ChIP-Seq, transgenic and reporter gene experimental data that have allowed us to construct an experimentally validated regulatory network model for haematopoietic stem/progenitor cells (HSPCs). Model simulation coupled with subsequent experimental validation using single cell expression profiling revealed potential mechanisms for cell state stabilisation, and also how a leukaemogenic TF fusion protein perturbs key HSPC regulators. The approach presented here should help to improve our understanding of both normal physiological and disease processes. DOI: http://dx.doi.org/10.7554/eLife.11469.001 PMID:26901438
Experimental economics' inconsistent ban on deception.
Hersch, Gil
2015-08-01
According to what I call the 'argument from public bads', if a researcher deceived subjects in the past, there is a chance that subjects will discount the information that a subsequent researcher provides, thus compromising the validity of the subsequent researcher's experiment. While this argument is taken to justify an existing informal ban on explicit deception in experimental economics, it can also apply to implicit deception, yet implicit deception is not banned and is sometimes used in experimental economics. Thus, experimental economists are being inconsistent when they appeal to the argument from public bads to justify banning explicit deception but not implicit deception. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ravikumar, Balaguru; Parri, Elina; Timonen, Sanna; Airola, Antti; Wennerberg, Krister
2017-01-01
Due to relatively high costs and labor required for experimental profiling of the full target space of chemical compounds, various machine learning models have been proposed as cost-effective means to advance this process in terms of predicting the most potent compound-target interactions for subsequent verification. However, most of the model predictions lack direct experimental validation in the laboratory, making their practical benefits for drug discovery or repurposing applications largely unknown. Here, we therefore introduce and carefully test a systematic computational-experimental framework for the prediction and pre-clinical verification of drug-target interactions using a well-established kernel-based regression algorithm as the prediction model. To evaluate its performance, we first predicted unmeasured binding affinities in a large-scale kinase inhibitor profiling study, and then experimentally tested 100 compound-kinase pairs. The relatively high correlation of 0.77 (p < 0.0001) between the predicted and measured bioactivities supports the potential of the model for filling the experimental gaps in existing compound-target interaction maps. Further, we subjected the model to a more challenging task of predicting target interactions for a new candidate drug compound that lacks prior binding profile information. As a specific case study, we used tivozanib, an investigational VEGF receptor inhibitor with a currently unknown off-target profile. Among 7 kinases with high predicted affinity, we experimentally validated 4 new off-targets of tivozanib, namely the Src-family kinases FRK and FYN A, the non-receptor tyrosine kinase ABL1, and the serine/threonine kinase SLK. Our subsequent experimental validation protocol effectively avoids any possible information leakage between the training and validation data, and therefore enables rigorous model validation for practical applications.
These results demonstrate that the kernel-based modeling approach offers practical benefits for probing novel insights into the mode of action of investigational compounds, and for the identification of new target selectivities for drug repurposing applications. PMID:28787438
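The prediction model above is a kernel-based regression. As a minimal sketch (not the authors' implementation, with synthetic descriptors standing in for real compound-kinase features), kernel ridge regression with an RBF kernel can be written as:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_ridge(X_train, y_train, X_test, gamma=0.5, lam=1e-3):
    """Fit kernel ridge regression and predict affinities for X_test."""
    K = rbf_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))   # toy compound/kinase descriptors (synthetic)
y = X[:, 0] - 0.5 * X[:, 1]    # synthetic "binding affinity"
pred = kernel_ridge(X, y, X[:5])
print(np.round(pred - y[:5], 2))
```

With a small regularization parameter the model nearly interpolates the training affinities; in practice gamma and lam would be chosen by cross-validation, and the leakage-free evaluation stressed in the abstract requires splitting compounds, not just data points, between training and validation sets.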
NASA Astrophysics Data System (ADS)
Lock, S. S. M.; Lau, K. K.; Lock Sow Mei, Irene; Shariff, A. M.; Yeong, Y. F.; Bustam, A. M.
2017-08-01
A sequence of molecular modelling procedures has been proposed to simulate an experimentally validated membrane structure characterizing the effect of CO2 plasticization, which can subsequently be employed to elucidate the depression in glass transition temperature (Tg). Based on this motivation, unswollen and swollen polysulfone membrane structures with different CO2 loadings have been constructed, and their accuracy has been validated through good agreement with experimentally measured physical properties. It is found that the presence of CO2 contributes to enhanced polymeric chain relaxation, which consequently promotes the enlargement of molecular spacing and causes dilation of the membrane matrix. A series of glass transition temperature treatments has been conducted on the verified molecular structure to elucidate the effect of CO2 loading on the depression in Tg induced by plasticization. Subsequently, a modified Michaelis-Menten (M-M) function has been implemented to quantify the effect of CO2 loading, attributed to plasticization, on Tg.
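A Michaelis-Menten-type saturation function of the kind mentioned above can be sketched as follows; the parameters dtg_max and k are illustrative fitting constants, not values from the study:

```python
def tg_depression(c_co2, dtg_max=40.0, k=5.0):
    """Michaelis-Menten-type saturation of Tg depression with CO2 loading.

    dtg_max (maximum depression, K) and k (half-saturation loading) are
    hypothetical fitting parameters for illustration only.
    """
    return dtg_max * c_co2 / (k + c_co2)

for c in (0.0, 5.0, 20.0):
    print(c, tg_depression(c))  # 0.0 -> 0.0, 5.0 -> 20.0, 20.0 -> 32.0
```

The saturating form captures the physical picture in the abstract: each additional increment of sorbed CO2 plasticizes the matrix less than the previous one, so Tg depression levels off at high loading.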
Lance, Blake W.; Smith, Barton L.
2016-06-23
Transient convection has been investigated experimentally for the purpose of providing Computational Fluid Dynamics (CFD) validation benchmark data. A specialized facility for validation benchmark experiments called the Rotatable Buoyancy Tunnel was used to acquire thermal and velocity measurements of flow over a smooth, vertical heated plate. The initial condition was forced convection downward with subsequent transition to mixed convection, ending with natural convection upward after a flow reversal. Data acquisition through the transient was repeated for ensemble-averaged results. With simple flow geometry, validation data were acquired at the benchmark level. All boundary conditions (BCs) were measured and their uncertainties quantified. Temperature profiles on all four walls and the inlet were measured, as well as as-built test section geometry. Inlet velocity profiles and turbulence levels were quantified using Particle Image Velocimetry. System Response Quantities (SRQs) were measured for comparison with CFD outputs and include velocity profiles, wall heat flux, and wall shear stress. Extra effort was invested in documenting and preserving the validation data. Details about the experimental facility, instrumentation, experimental procedure, materials, BCs, and SRQs are made available through this paper. As a result, the latter two are available for download and the other details are included in this work.
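Ensemble averaging of repeated transient runs, as used above, reduces to averaging corresponding time samples across repetitions. A minimal sketch with made-up velocity traces:

```python
import numpy as np

# Each row is one repetition of the transient; columns are time samples
# (illustrative velocity values, not the facility's data).
runs = np.array([
    [1.0, 2.0, 4.0],
    [1.2, 1.8, 4.2],
    [0.8, 2.2, 3.8],
])
ensemble_mean = runs.mean(axis=0)         # average across repetitions at each instant
ensemble_std = runs.std(axis=0, ddof=1)   # run-to-run variability at each instant
print(ensemble_mean)  # approximately [1. 2. 4.]
```

The run-to-run standard deviation is one ingredient of the uncertainty quantification that benchmark-level validation data require.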
Verification and Validation of Monte Carlo N-Particle 6 for Computing Gamma Protection Factors
2015-03-26
... methods for evaluating RPFs, which it used for the subsequent 30 years. These approaches included computational modeling, radioisotopes, and a high... [front-matter fragments: 1.2.1 Past Methods of Experimental Evaluation; 1.2.2 Modeling Efforts; Other Considerations; 2.4 Monte Carlo Methods]
Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei
2013-08-01
A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.
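The Tait equation of state mentioned above relates pressure to density in water. A sketch using commonly quoted Tait constants for water (assumed here, not taken from the paper):

```python
def tait_pressure(rho, rho0=998.0, p0=1.01e5, B=3.046e8, n=7.15):
    """Tait equation of state for water: p = (p0 + B) * (rho/rho0)**n - B.

    rho0, B and n are commonly quoted values for water at room temperature,
    used here as assumptions; the paper's exact constants may differ.
    """
    return (p0 + B) * (rho / rho0) ** n - B

print(tait_pressure(998.0))    # recovers p0 at the reference density
print(tait_pressure(1050.0))   # a ~5% compression already gives very high pressure
```

The large exponent n makes pressure extremely stiff in density, which is why even the modest compressions behind a focusing lithotripter pulse steepen into a shock front.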
Du, Yongxing; Zhang, Lingze; Sang, Lulu; Wu, Daocheng
2016-04-29
In this paper, an Archimedean planar spiral antenna for the application of thermotherapy was designed. This type of antenna was chosen for its compact structure, flexible application and wide heating area. The temperature field generated by this two-armed spiral antenna in a muscle-equivalent phantom was simulated and subsequently validated by experiment. First, the specific absorption rate (SAR) of the field was calculated using the Finite Element Method (FEM) in Ansoft's High Frequency Structure Simulator (HFSS). Then, the temperature elevation in the phantom was simulated by an explicit finite difference approximation of the bioheat equation (BHE). The temperature distribution was then validated by a phantom heating experiment. The results showed that this antenna had a good heating ability and a wide heating area. A comparison between calculation and measurement showed fair agreement in the temperature elevation. The validated model could be applied to the analysis of electromagnetic-temperature distribution in phantoms during antenna design or thermotherapy experimentation.
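The explicit finite-difference (FTCS) approximation of the bioheat equation can be sketched in one dimension. For a non-perfused phantom the perfusion term drops out; the material constants below are generic phantom-like values, not those of the paper:

```python
import numpy as np

def bioheat_step(T, sar, dx, dt, k=0.5, rho=1000.0, c=3500.0):
    """One explicit (FTCS) time step of the 1-D bioheat equation in a
    non-perfused phantom: rho*c*dT/dt = k*d2T/dx2 + rho*SAR.

    k, rho, c are illustrative phantom-like material values (assumptions).
    """
    T_new = T.copy()
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2   # second spatial derivative
    T_new[1:-1] = T[1:-1] + dt / (rho * c) * (k * lap + rho * sar[1:-1])
    return T_new  # boundary nodes held fixed (Dirichlet)

T = np.full(11, 25.0)                # uniform initial temperature, deg C
sar = np.zeros(11); sar[5] = 50.0    # localized absorption, W/kg (illustrative)
for _ in range(100):
    T = bioheat_step(T, sar, dx=2e-3, dt=0.1)
print(T[5] > 25.0, T[0] == 25.0)
```

The time step must satisfy the explicit stability limit dt <= dx**2 * rho * c / (2 * k); the values above are well inside it.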
Modeling and Validation of a Three-Stage Solidification Model for Sprays
NASA Astrophysics Data System (ADS)
Tanner, Franz X.; Feigl, Kathleen; Windhab, Erich J.
2010-09-01
A three-stage freezing model and its validation are presented. In the first stage, the cooling of the droplet down to the freezing temperature is described as a convective heat transfer process in turbulent flow. In the second stage, when the droplet has reached the freezing temperature, the solidification process is initiated via nucleation and crystal growth. The latent heat release is related to the amount of heat convected away from the droplet and the rate of solidification is expressed with a freezing progress variable. After completion of the solidification process, in stage three, the cooling of the solidified droplet (particle) is described again by a convective heat transfer process until the particle approaches the temperature of the gaseous environment. The model has been validated by experimental data of a single cocoa butter droplet suspended in air. The subsequent spray validations have been performed with data obtained from a cocoa butter melt in an experimental spray tower using the open-source computational fluid dynamics code KIVA-3.
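The three-stage structure of the model can be sketched as a lumped-parameter time integration; all parameter values are illustrative, and h_factor and latent_steps lump the heat-transfer and latent-heat terms rather than reproduce the paper's nucleation and crystal-growth correlations:

```python
def droplet_freezing(T0, T_air, T_freeze, h_factor, latent_steps, dt, t_end):
    """Lumped three-stage sketch: (1) Newton cooling of the liquid droplet
    to T_freeze, (2) isothermal solidification tracked by a freezing
    progress variable, (3) Newton cooling of the solid particle toward T_air.

    h_factor lumps h*A/(m*c); latent_steps lumps the latent-heat balance.
    Both are hypothetical constants, not fitted to cocoa butter data.
    """
    T, frozen, stage, t = T0, 0.0, 1, 0.0
    while t < t_end:
        if stage == 1:                              # stage 1: liquid cooling
            T += -h_factor * (T - T_air) * dt
            if T <= T_freeze:
                T, stage = T_freeze, 2
        elif stage == 2:                            # stage 2: solidification
            frozen += dt / latent_steps             # freezing progress variable
            if frozen >= 1.0:
                frozen, stage = 1.0, 3
        else:                                       # stage 3: solid cooling
            T += -h_factor * (T - T_air) * dt
        t += dt
    return T, frozen, stage

T, frozen, stage = droplet_freezing(T0=45.0, T_air=10.0, T_freeze=28.0,
                                    h_factor=0.05, latent_steps=20.0,
                                    dt=0.1, t_end=120.0)
print(stage, frozen, round(T, 2))
```

The plateau at T_freeze during stage 2 is the signature the single-droplet validation data would show: temperature holds while latent heat is released, then decays again once solidification completes.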
Code of Federal Regulations, 2014 CFR
2014-07-01
... test system has been designed that is buffered to maintain pH and is pre-aged in sunlight to produce, subsequently, a predictable bleaching behavior. (v) The purpose of Phase 1 is to prepare, pre-age, and dilute... reason, kpE, which contains kIE, is likewise valid only for the experimental data and latitude. (8) The...
NASA Technical Reports Server (NTRS)
Johannsen, G.; Govindaraj, T.
1980-01-01
The influence of different types of predictor displays in a longitudinal vertical takeoff and landing (VTOL) hover task is analyzed in a theoretical study. Several cases with differing amounts of predictive and rate information are compared. The optimal control model of the human operator is used to estimate human and system performance in terms of root-mean-square (rms) values and to compute optimized attention allocation. The only part of the model which is varied to predict these data is the observation matrix. Typical cases are selected for a subsequent experimental validation. The rms values as well as eye-movement data are recorded. The results agree favorably with those of the theoretical study in terms of relative differences. Better matching is achieved by revised model input data.
NASA Astrophysics Data System (ADS)
Henn, Philipp; Liewald, Mathias; Sindel, Manfred
2018-05-01
As lightweight design as well as crash performance are crucial to future car body design, exact material characterisation is important to use materials to their full potential and reach maximum efficiency. Within the scope of this paper, the potential of a newly established bending-tension test procedure to characterise material crashworthiness is investigated. In this test setup for the determination of material failure, a buckling-bending test is coupled with a subsequent tensile test. If the prior bending load is critical, tensile strength and elongation in the subsequent tensile test are dramatically reduced. The new test procedure therefore offers an applicable definition of failure, as the incapacity for energy consumption in subsequent phases of the crash represents failure of a component. In addition, the correlation of the loading condition with actual crash scenarios (buckling and free bending) is improved compared to the three-point bending test. The potential of the newly established bending-tension test procedure to characterise material crashworthiness is investigated in this experimental study on two aluminium sheet alloys. Experimental results are validated against existing ductility characterisation from the edge compression test.
A model of fluid and solute exchange in the human: validation and implications.
Bert, J L; Gyenge, C C; Bowen, B D; Reed, R K; Lund, T
2000-11-01
In order to better understand the complex, dynamic behaviour of the redistribution and exchange of fluid and solutes administered to normal individuals or to those with acute hypovolemia, mathematical models are used in addition to direct experimental investigation. Initial validation of a model developed by our group involved data from animal experiments (Gyenge, C.C., Bowen, B.D., Reed, R.K. & Bert, J.L. 1999b. Am J Physiol 277 (Heart Circ Physiol 46), H1228-H1240). For a first validation involving humans, we compare the results of simulations with a wide range of different types of data from two experimental studies. These studies involved administration of normal saline or hypertonic saline with Dextran to both normal and 10% haemorrhaged subjects. We compared simulations with data including the dynamic changes in plasma and interstitial fluid volumes (VPL and VIT, respectively), plasma and interstitial colloid osmotic pressures (πPL and πIT, respectively), haematocrit (Hct), plasma solute concentrations and transcapillary flow rates. The model predictions were overall in very good agreement with the wide range of experimental results considered. Based on the conditions investigated, the model was also validated for humans. We used the model both to investigate mechanisms associated with the redistribution and transport of fluid and solutes administered following a mild haemorrhage and to speculate on the relationship between the timing and amount of fluid infusions and subsequent blood volume expansion.
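Transcapillary fluid exchange in compartmental models of this kind is typically driven by a Starling-type flux. A sketch with hypothetical pressures (the full model couples many such terms with compliances and solute transport):

```python
def starling_flux(kf, p_cap, p_int, sigma, pi_cap, pi_int):
    """Transcapillary fluid flux from the Starling equation:
    Jv = Kf * [(Pc - Pi) - sigma * (pi_c - pi_i)].

    All argument values used below are illustrative textbook-style numbers,
    not parameters from the validated human model.
    """
    return kf * ((p_cap - p_int) - sigma * (pi_cap - pi_int))

# Hypothetical values (pressures in mmHg, Kf in arbitrary units):
jv = starling_flux(kf=0.1, p_cap=25.0, p_int=-3.0, sigma=0.9,
                   pi_cap=28.0, pi_int=8.0)
print(jv)  # positive: net filtration out of the capillary
```

An infusion that raises plasma colloid osmotic pressure (e.g. Dextran) makes the bracketed term smaller or negative, pulling interstitial fluid back into plasma, which is the mechanism behind the blood volume expansion discussed in the abstract.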
BATMAN-TCM: a Bioinformatics Analysis Tool for Molecular mechANism of Traditional Chinese Medicine.
Liu, Zhongyang; Guo, Feifei; Wang, Yong; Li, Chun; Zhang, Xinlei; Li, Honglei; Diao, Lihong; Gu, Jiangyong; Wang, Wei; Li, Dong; He, Fuchu
2016-02-16
Traditional Chinese Medicine (TCM), with a history of thousands of years of clinical practice, is gaining more and more attention and application worldwide, and TCM-based new drug development, especially for the treatment of complex diseases, is promising. However, owing to TCM's diverse ingredients and their complex interactions with the human body, it is still quite difficult to uncover its molecular mechanism, which greatly hinders TCM modernization and internationalization. Here we developed the first online Bioinformatics Analysis Tool for Molecular mechANism of TCM (BATMAN-TCM). Its main functions include 1) TCM ingredients' target prediction; 2) functional analyses of targets, including biological pathway, Gene Ontology functional term and disease enrichment analyses; 3) visualization of the ingredient-target-pathway/disease association network and KEGG biological pathways with highlighted targets; 4) comparison analysis of multiple TCMs. Finally, we applied BATMAN-TCM to Qishen Yiqi dripping Pill (QSYQ) and, combined with subsequent experimental validation, revealed the functions of the renin-angiotensin system responsible for QSYQ's cardioprotective effects for the first time. BATMAN-TCM will contribute to the understanding of the "multi-component, multi-target and multi-pathway" combinational therapeutic mechanism of TCM, and provide valuable clues for subsequent experimental validation, accelerating the elucidation of TCM's molecular mechanism. BATMAN-TCM is available at http://bionet.ncpsb.org/batman-tcm.
Martins, Raquel R; McCracken, Andrew W; Simons, Mirre J P; Henriques, Catarina M; Rera, Michael
2018-02-05
The Smurf Assay (SA) was initially developed in the model organism Drosophila melanogaster where a dramatic increase of intestinal permeability has been shown to occur during aging (Rera et al. , 2011). We have since validated the protocol in multiple other model organisms (Dambroise et al. , 2016) and have utilized the assay to further our understanding of aging (Tricoire and Rera, 2015; Rera et al. , 2018). The SA has now also been used by other labs to assess intestinal barrier permeability (Clark et al. , 2015; Katzenberger et al. , 2015; Barekat et al. , 2016; Chakrabarti et al. , 2016; Gelino et al. , 2016). The SA in itself is simple; however, numerous small details can have a considerable impact on its experimental validity and subsequent interpretation. Here, we provide a detailed update on the SA technique and explain how to catch a Smurf while avoiding the most common experimental fallacies.
Improved Navy Maintenance Through Corrosion-Fatigue Assessment Program
2008-03-11
during the run-in period. Rvk (Reduced Valley Depth): the lowest portion of the surface that will retain lubricant. Htp: defined by setting the tp1... can be accounted for using Morrow's approach: N* = X(σmean)Nf, where the life in the Coffin-Manson relation is subsequently interpreted as N* and X... experimental setting. ESRD will undertake to perform interpretation of validation experiments utilizing its software product StressCheck® and apply appropriate
Small-scale experimental study of vaporization flux of liquid nitrogen released on water.
Gopalaswami, Nirupama; Olewski, Tomasz; Véchot, Luc N; Mannan, M Sam
2015-10-30
A small-scale experimental study was conducted using liquid nitrogen to investigate the convective heat transfer behavior of cryogenic liquids released on water. The experiment was performed by spilling five different amounts of liquid nitrogen at different release rates and initial water temperatures. The vaporization mass fluxes of liquid nitrogen were determined directly from the mass loss measured during the experiment. A variation of initial vaporization fluxes and a subsequent shift in heat transfer mechanism were observed with changes in initial water temperature. The initial vaporization fluxes were directly dependent on the liquid nitrogen spill rate. The heat flux from water to liquid nitrogen determined from experimental data was validated with two theoretical correlations for convective boiling. It was also observed from validation with correlations that liquid nitrogen was found to be predominantly in the film boiling regime. The substantial results provide a suitable procedure for predicting the heat flux from water to cryogenic liquids that is required for source term modeling. Copyright © 2015 Elsevier B.V. All rights reserved.
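Determining the vaporization mass flux "directly from the mass loss measured during the experiment" amounts to differentiating the mass time series and normalizing by the spill area. A sketch with illustrative numbers (not the study's measurements):

```python
import numpy as np

def vaporization_flux(t, m, area):
    """Vaporization mass flux (kg/m^2/s) from a measured mass-loss time
    series: flux = -(dm/dt) / A, with dm/dt from finite differences."""
    dmdt = np.gradient(m, t)   # central differences inside, one-sided at edges
    return -dmdt / area

t = np.array([0.0, 1.0, 2.0, 3.0])     # s
m = np.array([2.0, 1.7, 1.45, 1.25])   # kg of liquid remaining (illustrative)
flux = vaporization_flux(t, m, area=0.05)   # 0.05 m^2 pool area (assumed)
print(np.round(flux, 3))
```

The decreasing flux values mirror the behaviour in the abstract: an initially high vaporization rate that declines as ice formation and the shift in boiling regime reduce the heat flux from the water.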
Manufacturing of tailored tubes with a process integrated heat treatment
NASA Astrophysics Data System (ADS)
Hordych, Illia; Boiarkin, Viacheslav; Rodman, Dmytro; Nürnberger, Florian
2017-10-01
The usage of workpieces with tailored properties allows for reducing costs and materials. One example is tailored tubes, which can be used as end parts, e.g. in the automotive industry or in domestic applications, as well as semi-finished products for subsequent controlled deformation processes. An innovative technology to manufacture such tubes is roll forming with subsequent inductive heating and adapted quenching to obtain tailored properties in the longitudinal direction. This processing offers great potential for the production of tubes with a wide range of properties, although this novel approach still requires a suited process design. Based on experimental data, a process simulation is being developed. The simulation shall be suitable for a virtual design of the tubes and allow for gaining a deeper understanding of the required processing. The proposed model shall predict microstructural and mechanical tube properties by considering process parameters, different geometries, batch-related influences, etc. A validation is carried out using experimental data from tubes manufactured from various steel grades.
Modelling and validation of electromechanical shock absorbers
NASA Astrophysics Data System (ADS)
Tonoli, Andrea; Amati, Nicola; Girardello Detoni, Joaquim; Galluzzi, Renato; Gasparin, Enrico
2013-08-01
Electromechanical vehicle suspension systems represent a promising substitute to conventional hydraulic solutions. However, the design of electromechanical devices that are able to supply high damping forces without exceeding geometric dimension and mass constraints is a difficult task. All these challenges meet in off-road vehicle suspension systems, where the power density of the dampers is a crucial parameter. In this context, the present paper outlines a particular shock absorber configuration where a suitable electric machine and a transmission mechanism are utilised to meet off-road vehicle requirements. A dynamic model is used to represent the device. Subsequently, experimental tests are performed on an actual prototype to verify the functionality of the damper and validate the proposed model.
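The damping behaviour of such an electromechanical shock absorber can be sketched with a lumped model: a transmission of ratio tau drives a motor whose winding resistance dissipates energy, giving an equivalent viscous damping coefficient. All parameter values below are illustrative assumptions, not the prototype's:

```python
def em_damper_force(v, kt=0.1, ke=0.1, tau=100.0, R=0.5):
    """Equivalent damping force of a motor-based shock absorber.

    The transmission converts suspension speed v (m/s) into motor speed
    tau*v; with the motor terminals shorted through resistance R, the
    back-EMF drives a current that reacts as F = (kt*ke*tau**2 / R) * v.
    kt (N*m/A), ke (V*s/rad), tau and R are hypothetical values.
    """
    c_eq = kt * ke * tau ** 2 / R   # equivalent damping coefficient, N*s/m
    return c_eq * v

print(em_damper_force(0.5))  # force in N at 0.5 m/s
```

The tau-squared term is why the transmission mechanism matters so much for power density: a modest gear or screw ratio multiplies the achievable damping without a larger machine, which is the design lever the paper exploits for off-road requirements.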
DOE Office of Scientific and Technical Information (OSTI.GOV)
Utgikar, Vivek; Sun, Xiaodong; Christensen, Richard
2016-12-29
The overall goal of the research project was to model the behavior of the advanced reactor-intermediate heat exchange system and to develop advanced control techniques for off-normal conditions. The specific objectives defined for the project were: 1. To develop the steady-state thermal hydraulic design of the intermediate heat exchanger (IHX); 2. To develop mathematical models to describe the advanced nuclear reactor-IHX-chemical process/power generation coupling during normal and off-normal operations, and to simulate the models using multiphysics software; 3. To develop control strategies using genetic algorithm or neural network techniques and couple these techniques with the multiphysics software; 4. To validate the models experimentally. The project objectives were accomplished by defining and executing four different tasks corresponding to these specific objectives. The first task involved selection of IHX candidates and developing steady-state designs for those. The second task involved modeling of the transient and off-normal operation of the reactor-IHX system. The subsequent task dealt with the development of control strategies and involved algorithm development and simulation. The last task involved experimental validation of the thermal hydraulic performance of the two prototype heat exchangers designed and fabricated for the project at steady-state and transient conditions to simulate the coupling of the reactor-IHX-process plant system. The experimental work utilized two test facilities at The Ohio State University (OSU), including one existing High-Temperature Helium Test Facility (HTHF) and the newly developed high-temperature molten salt facility.
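Objective 3 mentions genetic-algorithm-based control strategies. As a generic sketch (not the project's algorithm), a tiny real-coded GA can tune a scalar controller gain against a stand-in performance index:

```python
import random

def ga_minimize(cost, lo, hi, pop_size=30, gens=60, seed=1):
    """Tiny real-coded genetic algorithm: tournament selection, blend
    crossover, Gaussian mutation. A sketch of GA-based tuning only; a real
    controller study would evaluate cost() via a plant simulation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 2), key=cost)   # tournament selection
            p2 = min(rng.sample(pop, 2), key=cost)
            w = rng.random()
            child = w * p1 + (1.0 - w) * p2          # blend crossover
            if rng.random() < 0.3:                   # Gaussian mutation
                child += rng.gauss(0.0, 0.05)
            nxt.append(min(hi, max(lo, child)))      # clamp to bounds
        pop = nxt
    return min(pop, key=cost)

# Stand-in performance index: quadratic cost with optimum gain k* = 2.5
# (hypothetical; in the project this would be a transient-response metric).
best_gain = ga_minimize(lambda k: (k - 2.5) ** 2, lo=0.0, hi=10.0)
print(round(best_gain, 3))
```

Coupling such a search loop to multiphysics software means each cost evaluation launches a simulation run, which is why population size and generation count dominate the computational budget.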
Physical validation of a patient-specific contact finite element model of the ankle.
Anderson, Donald D; Goldsworthy, Jane K; Li, Wendy; James Rudert, M; Tochigi, Yuki; Brown, Thomas D
2007-01-01
A validation study was conducted to determine the extent to which computational ankle contact finite element (FE) results agreed with experimentally measured tibio-talar contact stress. Two cadaver ankles were loaded in separate test sessions, during which ankle contact stresses were measured with a high-resolution (Tekscan) pressure sensor. Corresponding contact FE analyses were subsequently performed for comparison. The agreement was good between FE-computed and experimentally measured mean (3.2% discrepancy for one ankle, 19.3% for the other) and maximum (1.5% and 6.2%) contact stress, as well as for contact area (1.7% and 14.9%). There was also excellent agreement between histograms of fractional areas of cartilage experiencing specific ranges of contact stress. Finally, point-by-point comparisons between the computed and measured contact stress distributions over the articular surface showed substantial agreement, with correlation coefficients of 90% for one ankle and 86% for the other. In the past, general qualitative, but little direct quantitative, agreement has been demonstrated with articular joint contact FE models. The methods used for this validation enable formal comparison of computational and experimental results, and open the way for objective statistical measures of regional correlation between FE-computed contact stress distributions from articular joint surfaces being compared (e.g., an intact surface versus one with residual intra-articular fracture incongruity).
Greenwood, Charles R.; Hops, Hyman; Walker, Hill M.; Guild, Jacqueline J.; Stokes, Judith; Young, K. Richard; Keleman, Kenneth S.; Willardson, Marlyn
1979-01-01
A comprehensive validation study was conducted of the Program for Academic Survival Skills (PASS), a consultant-based, teacher-mediated program for student classroom behavior. The study addressed questions related to: (a) brief consultant training, (b) subsequent teacher training by consultants using PASS manuals, (c) contrasts between PASS experimental teachers and students and equivalent controls on measures of teacher management skills, student classroom behavior, teacher ratings of student problem behaviors, and academic achievement, (d) reported satisfaction of participants, and (e) replication of effects across two separate school sites. Results indicated that in both sites significant effects were noted in favor of the PASS experimental group for (a) teacher approval, (b) student appropriate classroom behavior, and (c) four categories of student inappropriate behavior. Program satisfaction ratings of students, teachers, and consultants were uniformly positive, and continued use of the program was reported a year later. Discussion focused upon issues of cost-effectiveness, differential site effects, and the relationship between appropriate classroom behavior and academic achievement. PMID:16795604
Rohanova, Miroslava; Balikova, Marie
2009-04-01
para-Methoxymethamphetamine (PMMA) is an abused psychedelic compound, with several intoxications and deaths reported after ingestion. However, its pharmacokinetics based on a controlled study is unknown, and only partial information on its biotransformation is available. Our experimental study was designed to establish the time-disposition profile of PMMA and its metabolites para-methoxyamphetamine (PMA), para-hydroxymethamphetamine (OH-MAM) and para-hydroxyamphetamine (OH-AM) in blood and biological tissues in rats after a bolus subcutaneous dose of 40 mg/kg, using a validated GC-MS method. The results obtained could be useful for subsequent evaluation of PMMA's psychotropic or neurotoxic effects and for the diagnosis of intoxication.
Rectification of General Relativity, Experimental Verifications, and Errors of the Wheeler School
NASA Astrophysics Data System (ADS)
Lo, C. Y.
2013-09-01
General relativity is not yet consistent. Pauli misinterpreted Einstein's 1916 equivalence principle, from which a valid field equation can be derived. The Wheeler School has distorted Einstein's 1916 principle into his 1911 assumption of equivalence, and created new errors. Moreover, errors concerning dynamic solutions have allowed the implicit assumption of a unique coupling sign that violates the principle of causality. This leads to the space-time singularity theorems of Hawking and Penrose, who "refute" applications to microscopic phenomena and obstruct efforts to obtain a valid equation for the dynamic case. These errors also explain the mistakes in the press release of the 1993 Nobel Committee, which was unaware of the non-existence of dynamic solutions. To illustrate the damage to education, the MIT Open Course Phys. 8.033 is chosen. Rectification of these errors confirms that E = mc2 is only conditionally valid, and leads to the discovery of the charge-mass interaction, which has been experimentally confirmed, and subsequently to the unification of gravitation and electromagnetism. The charge-mass interaction together with the unification predicts the weight reduction (instead of increment) of charged capacitors and heated metals, and helps to explain NASA's Pioneer anomaly and potentially other anomalies as well.
Adaptive vector validation in image velocimetry to minimise the influence of outlier clusters
NASA Astrophysics Data System (ADS)
Masullo, Alessandro; Theunissen, Raf
2016-03-01
The universal outlier detection scheme (Westerweel and Scarano in Exp Fluids 39:1096-1100, 2005) and the distance-weighted universal outlier detection scheme for unstructured data (Duncan et al. in Meas Sci Technol 21:057002, 2010) are the most common PIV data validation routines. However, such techniques rely on a spatial comparison of each vector with those in a fixed-size neighbourhood, and their performance consequently suffers in the presence of clusters of outliers. This paper proposes an advancement to render outlier detection more robust while reducing the probability of mistakenly invalidating correct vectors. Velocity fields undergo a preliminary evaluation in terms of local coherency, which parametrises the extent of the neighbourhood with which each vector is subsequently compared. Such adaptivity is shown to reduce the number of undetected outliers, even when implemented within the aforementioned validation schemes. In addition, the authors present an alternative residual definition that considers vector magnitude and angle, adopting a modified Gaussian-weighted, distance-based averaging median. This procedure is able to adapt the degree of acceptable background fluctuation in velocity to the local displacement magnitude. The traditional, extended and recommended validation methods are numerically assessed on the basis of flow fields from an isolated vortex, a turbulent channel flow and a DNS simulation of forced isotropic turbulence. The resulting validation method is adaptive, requires no user-defined parameters and is demonstrated to yield the best performance in terms of outlier under- and over-detection. Finally, the novel validation routine is applied to the PIV analysis of experimental studies of the near wake behind a porous disc and of a supersonic jet, illustrating the potential gains in spatial resolution and accuracy.
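The fixed-neighbourhood normalized median test underlying the universal outlier detection scheme can be sketched in a few lines. The Python sketch below is a minimal illustration on a structured 2D field of one velocity component, using the commonly cited threshold 2.0 and noise level ε = 0.1; it is not the adaptive scheme the paper proposes.

```python
from statistics import median

def universal_outlier_detection(u, thresh=2.0, eps=0.1):
    """Flag outliers in a 2D velocity-component field via the
    normalized median test (Westerweel & Scarano, 2005).
    `u` is a list of lists; returns a same-shaped boolean mask."""
    rows, cols = len(u), len(u[0])
    mask = [[False] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # gather the 3x3 neighbourhood, excluding the centre vector
            nb = [u[a][b]
                  for a in range(max(0, i - 1), min(rows, i + 2))
                  for b in range(max(0, j - 1), min(cols, j + 2))
                  if (a, b) != (i, j)]
            um = median(nb)                         # local median
            rm = median([abs(v - um) for v in nb])  # median of residuals
            # eps absorbs measurement noise in quiescent regions
            if abs(u[i][j] - um) / (rm + eps) > thresh:
                mask[i][j] = True
    return mask
```

A single spurious vector in an otherwise smooth field is flagged, while its neighbours, whose medians are unaffected by the single spike, are retained.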
Radiated Sound Power from a Curved Honeycomb Panel
NASA Technical Reports Server (NTRS)
Robinson, Jay H.; Buehrle, Ralph D.; Klos, Jacob; Grosveld, Ferdinand W.
2003-01-01
The validation of finite element and boundary element models for the vibro-acoustic response of a curved honeycomb-core composite aircraft panel has been completed. The finite element and boundary element models were previously validated separately. The validation process was hampered significantly by the method in which the panel was installed in the test facility: the fixture used was made primarily of fiberboard, and the panel was held in a groove in the fiberboard by a compression fitting made of plastic tubing. The validated model is intended to be used to evaluate noise reduction concepts on both an experimental and an analytical basis simultaneously. An initial parametric study of the influence of core thickness on the radiated sound power from this panel, using this numerical model, was subsequently conducted. This study was significantly influenced by strong boundary condition effects, but it indicated that the radiated sound power from this panel is insensitive to core thickness in the frequency range investigated, primarily because of the offsetting effects of added mass and added stiffness.
Numerical Analysis of a Pulse Detonation Cross Flow Heat Load Experiment
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Naples, Andrew; Hoke, John L.; Schauer, Fred
2011-01-01
A comparison between experimentally measured and numerically simulated, time-averaged, point heat transfer rates in a pulse detonation engine (PDE) is presented. The comparison includes measurements and calculations for heat transfer to a cylinder in crossflow and to the tube wall itself using a novel spool design. Measurements are obtained at several locations and under several operating conditions. The measured and computed results are shown to be in substantial agreement, thereby validating the modeling approach. The model, which is based on computational fluid dynamics (CFD), is then used to interpret the results. A preheating of the incoming fuel charge is predicted, which results in increased volumetric flow and subsequent overfilling. The effect is validated with additional measurements.
Lucia, Umberto; Grazzini, Giuseppe; Montrucchio, Bartolomeo; Grisolia, Giulia; Borchiellini, Romano; Gervino, Gianpiero; Castagnoli, Carlotta; Ponzetto, Antonio; Silvagno, Francesca
2015-01-01
The aim of this work was to evaluate differences in energy flows between normal and immortalized cells when these distinct biological systems are exposed to environmental stimulation. These differences were considered using a constructal thermodynamic approach, and were subsequently verified experimentally. The application of constructal law to cell analysis led to the conclusion that temperature differences between cells with distinct behaviour can be amplified by interaction between cells and external fields. Experimental validation of the principle was carried out on two cellular models exposed to electromagnetic fields. By infrared thermography we were able to assess small changes in heat dissipation measured as a variation in cell internal energy. The experimental data thus obtained are in agreement with the theoretical calculation, because they show a different thermal dispersion pattern when normal and immortalized cells are exposed to electromagnetic fields. By using two methods that support and validate each other, we have demonstrated that the cell/environment interaction can be exploited to enhance cell behavior differences, in particular heat dissipation. We propose infrared thermography as a technique effective in discriminating distinct patterns of thermal dispersion and therefore able to distinguish a normal phenotype from a transformed one. PMID:26100383
Penloglou, Giannis; Vasileiadou, Athina; Chatzidoukas, Christos; Kiparissides, Costas
2017-08-01
An integrated metabolic-polymerization-macroscopic model, describing the microbial production of polyhydroxybutyrate (PHB) in Azohydromonas lata bacteria, was developed and validated using a comprehensive series of experimental measurements. The model accounted for biomass growth, biopolymer accumulation, carbon and nitrogen sources utilization, oxygen mass transfer and uptake rates and average molecular weights of the accumulated PHB, produced under batch and fed-batch cultivation conditions. Model predictions were in excellent agreement with experimental measurements. The validated model was subsequently utilized to calculate optimal operating conditions and feeding policies for maximizing PHB productivity for desired PHB molecular properties. More specifically, two optimal fed-batch strategies were calculated and experimentally tested: (1) a nitrogen-limited fed-batch policy and (2) a nitrogen sufficient one. The calculated optimal operating policies resulted in a maximum PHB content (94% g/g) in the cultivated bacteria and a biopolymer productivity of 4.2 g/(l h), respectively. Moreover, it was demonstrated that different PHB grades with weight average molecular weights of up to 1513 kg/mol could be produced via the optimal selection of bioprocess operating conditions.
Automatic recognition of ship types from infrared images using superstructure moment invariants
NASA Astrophysics Data System (ADS)
Li, Heng; Wang, Xinyu
2007-11-01
Automatic object recognition is an active area of interest for military and commercial applications. In this paper, a system for autonomous recognition of ship types in infrared images is proposed. First, a segmentation approach based on detection of salient target features, with subsequent shadow removal, is proposed as the basis for the ensuing object recognition. Considering that the differences between the shapes of various ships lie mainly in their superstructures, we then use superstructure moment functions that are invariant to translation, rotation and scale differences in the input patterns, and develop a robust algorithm for extracting the ship superstructure. A back-propagation neural network is subsequently used as the classifier in the recognition stage, with projection images of simulated three-dimensional ship models as the training sets. Our recognition model was implemented and experimentally validated using both simulated three-dimensional ship model images and real images derived from video of an AN/AAS-44V Forward-Looking Infrared (FLIR) sensor.
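Moment invariants of the general kind used for shape recognition can be illustrated with the first Hu invariant, φ1 = η20 + η02, built from normalized central moments. The sketch below (plain Python, binary image as a list of lists) is a generic illustration of the technique, not the superstructure-specific moment functions developed in the paper; the test exercises its translation invariance, which holds exactly on a discrete grid.

```python
def hu_phi1(img):
    """First Hu moment invariant (eta20 + eta02) of a grayscale or
    binary image given as a list of rows of pixel intensities."""
    # raw moments m00, m10, m01 give the mass and centroid
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    cx, cy = m10 / m00, m01 / m00
    # second-order central moments about the centroid
    mu20 = mu02 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            mu20 += (x - cx) ** 2 * v
            mu02 += (y - cy) ** 2 * v
    # normalisation by m00**2 removes the dependence on scale
    eta20 = mu20 / m00 ** 2
    eta02 = mu02 / m00 ** 2
    return eta20 + eta02
```

Shifting a shape anywhere in the frame leaves φ1 unchanged, which is what makes such features usable for classifying segmented objects regardless of their position.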
Hemmerich, Joshua A; Elstein, Arthur S; Schwarze, Margaret L; Moliski, Elizabeth G; Dale, William
2013-01-01
The present study tested predictions derived from the Risk as Feelings hypothesis about the effects of prior patients' negative treatment outcomes on physicians' subsequent treatment decisions. Two experiments at The University of Chicago, U.S.A., utilized a computer simulation of an abdominal aortic aneurysm (AAA) patient with enhanced realism to present participants with one of three experimental conditions: AAA rupture causing a watchful waiting death (WWD), perioperative death (PD), or a successful operation (SO), as well as the statistical treatment guidelines for AAA. Experiment 1 tested effects of these simulated outcomes on (n=76) laboratory participants' (university student sample) self-reported emotions, and their ratings of valence and arousal of the AAA rupture simulation and other emotion inducing picture stimuli. Experiment 2 tested two hypotheses: 1) that experiencing a patient WWD in the practice trial's experimental condition would lead physicians to choose surgery earlier, and 2) experiencing a patient PD would lead physicians to choose surgery later with the next patient. Experiment 2 presented (n=132) physicians (surgeons and geriatricians) with the same experimental manipulation and a second simulated AAA patient. Physicians then chose to either go to surgery or continue watchful waiting. The results of Experiment 1 demonstrated that the WWD experimental condition significantly increased anxiety, and was rated similarly to other negative and arousing pictures. The results of Experiment 2 demonstrated that, after controlling for demographics, baseline anxiety, intolerance for uncertainty, risk attitudes, and the influence of simulation characteristics, the WWD experimental condition significantly expedited decisions to choose surgery for the next patient. The results support the Risk as Feelings hypothesis on physicians' treatment decisions in a realistic AAA patient computer simulation. Bad outcomes affected emotions and decisions, even with statistical AAA rupture risk guidance present. These results suggest that bad patient outcomes cause physicians to experience anxiety and regret that influences their subsequent treatment decision-making for the next patient. PMID:22571890
Bayesian Inference in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2008-01-01
This paper provides an elementary tutorial overview of Bayesian inference and its potential for application in aerospace experimentation in general and wind tunnel testing in particular. Bayes Theorem is reviewed and examples are provided to illustrate how it can be applied to objectively revise prior knowledge by incorporating insights subsequently obtained from additional observations, resulting in new (posterior) knowledge that combines information from both sources. A logical merger of Bayesian methods and certain aspects of Response Surface Modeling is explored. Specific applications to wind tunnel testing, computational code validation, and instrumentation calibration are discussed.
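The prior-to-posterior revision that Bayes' Theorem formalizes can be sketched with a discrete update: multiply the prior over each hypothesis by the likelihood of the new observations, then renormalize. The coin-bias hypotheses and binomial likelihood below are an illustrative toy, not an example from the paper.

```python
from math import comb

def bayes_update(prior, likelihood, data):
    """Revise a discrete prior over hypotheses with the likelihood of
    newly observed data; returns the normalised posterior dict."""
    post = {h: p * likelihood(h, data) for h, p in prior.items()}
    z = sum(post.values())  # total evidence, normalisation constant
    return {h: p / z for h, p in post.items()}

def binom_lik(h, data):
    """Binomial likelihood of k successes in n trials given success
    probability h (a toy stand-in for an experimental observation)."""
    k, n = data
    return comb(n, k) * h ** k * (1 - h) ** (n - k)
```

For instance, starting from equal belief in success probabilities 0.5 and 0.8 and observing 8 successes in 10 trials, the update shifts most of the probability mass to the 0.8 hypothesis, exactly the kind of objective revision of prior knowledge the tutorial describes.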
Inductive detection of the free surface of liquid metals
NASA Astrophysics Data System (ADS)
Zürner, Till; Ratajczak, Matthias; Wondrak, Thomas; Eckert, Sven
2017-11-01
A novel measurement system to determine the surface position and topology of liquid metals is presented. It is based on the induction of eddy currents by a time-harmonic magnetic field and the subsequent measurement of the resulting secondary magnetic field using gradiometric induction coils. The system is validated experimentally for static and dynamic surfaces of the low-melting liquid metal alloy gallium-indium-tin in a narrow vessel. It is shown that a precision below 1 mm and a time resolution of at least 20 Hz can be achieved.
Horton, pipe hydraulics, and the atmospheric boundary layer (The Robert E. Horton Memorial Lecture)
NASA Technical Reports Server (NTRS)
Brutsaert, Wilfried
1993-01-01
The early stages of Horton's scientific career, which provided the opportunity and stimulus to delve into the origins of some contemporary concepts on the atmospheric boundary layer, are reviewed. The study of Saph and Schoder provided the basis for the experimental verification and validation of similarity by Blasius and by Stanton and Pannell, and for the subsequent developments that led to the present understanding of the turbulent boundary layer. Particular attention is given to the incorporation of similarity and scaling in the analysis of turbulent flow.
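The similarity result that emerged from this line of pipe-hydraulics work is commonly summarized by the Blasius smooth-pipe correlation for the Darcy friction factor, f = 0.3164 Re^(-1/4), valid roughly for turbulent flow with 4×10^3 < Re < 10^5. A one-function sketch:

```python
def blasius_friction(re: float) -> float:
    """Darcy friction factor from the Blasius smooth-pipe correlation,
    f = 0.3164 * Re**-0.25, for turbulent flow in smooth pipes
    (validity roughly 4e3 < Re < 1e5)."""
    return 0.3164 * re ** -0.25
```

As Reynolds number grows within the correlation's range, the friction factor falls, the trend the early smooth-pipe data of Saph and Schoder exhibited.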
Computational fluid dynamic modeling of a medium-sized surface mine blasthole drill shroud
Zheng, Y.; Reed, W.R.; Zhou, L.; Rider, J.P.
2016-01-01
The Pittsburgh Mining Research Division of the U.S. National Institute for Occupational Safety and Health (NIOSH) recently developed a series of models using computational fluid dynamics (CFD) to study airflows and respirable dust distribution associated with a medium-sized surface blasthole drill shroud with a dry dust collector system. Previously run experiments conducted in NIOSH’s full-scale drill shroud laboratory were used to validate the models. The setup values in the CFD models were calculated from experimental data obtained from the drill shroud laboratory and measurements of test material particle size. Subsequent simulation results were compared with the experimental data for several test scenarios, including 0.14 m3/s (300 cfm) and 0.24 m3/s (500 cfm) bailing airflow with 2:1, 3:1 and 4:1 dust collector-to-bailing airflow ratios. For the 2:1 and 3:1 ratios, the calculated dust concentrations from the CFD models were within the 95 percent confidence intervals of the experimental data. This paper describes the methodology used to develop the CFD models, to calculate the model input and to validate the models based on the experimental data. Problem regions were identified and revealed by the study. The simulation results could be used for future development of dust control methods for a surface mine blasthole drill shroud. PMID:27932851
A sealed capsule system for biological and liquid shock-recovery experiments.
Leighs, James A; Appleby-Thomas, Gareth J; Stennett, Chris; Hameed, Amer; Wilgeroth, James M; Hazell, Paul J
2012-11-01
This paper presents an experimental method designed to one-dimensionally shock load and subsequently recover liquid samples. Resultant loading profiles have been interrogated via hydrocode simulation as the nature of the target did not allow for direct application of the diagnostics typically employed in shock physics (e.g., manganin stress gauges or Heterodyne velocimeter (Het-V)). The target setup has been experimentally tested using aluminium flyer plates accelerated by a 50-mm bore single-stage gas-gun reaching projectile impact velocities of up to ~500 m/s (corresponding to peak pressures of up to ca. 4 GPa being experienced by fluid samples). Recovered capsules survived well showing only minor signs of damage. Modelled gauge traces have been validated through the use of a (slightly modified) experiment in which a Het-V facing the rear of the inner capsule was employed. In these tests, good correlation between simulated and experimental traces was observed.
Computational and experimental analysis of DNA shuffling
Maheshri, Narendra; Schaffer, David V.
2003-01-01
We describe a computational model of DNA shuffling based on the thermodynamics and kinetics of this process. The model independently tracks a representative ensemble of DNA molecules and records their states at every stage of a shuffling reaction. These data can subsequently be analyzed to yield information on any relevant metric, including reassembly efficiency, crossover number, type and distribution, and DNA sequence length distributions. The predictive ability of the model was validated by comparison to three independent sets of experimental data, and analysis of the simulation results led to several unique insights into the DNA shuffling process. We examine a tradeoff between crossover frequency and reassembly efficiency and illustrate the effects of experimental parameters on this relationship. Furthermore, we discuss conditions that promote the formation of useless “junk” DNA sequences or multimeric sequences containing multiple copies of the reassembled product. This model will therefore aid in the design of optimal shuffling reaction conditions. PMID:12626764
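The paper's model tracks molecule states thermodynamically and kinetically; as a drastically simplified illustration of the crossover-counting idea only (every name and parameter below is invented for the sketch, not taken from the model), one can Monte Carlo the number of parent switches along a reassembled sequence when fragments are drawn at random from two parents:

```python
import random

def simulate_crossovers(seq_len=100, frag_len=20, n_molecules=500, seed=1):
    """Toy Monte Carlo of DNA shuffling reassembly: each molecule is a
    chain of fragments, each drawn at random from one of two parents
    ('A' or 'B'); a crossover is counted whenever consecutive fragments
    come from different parents. Returns the mean crossover count."""
    rng = random.Random(seed)
    n_frags = seq_len // frag_len
    total = 0
    for _ in range(n_molecules):
        parents = [rng.choice("AB") for _ in range(n_frags)]
        total += sum(1 for a, b in zip(parents, parents[1:]) if a != b)
    return total / n_molecules
```

With 5 fragments per molecule and unbiased parent choice the expected crossover count is (5 - 1)/2 = 2; a real model, like the one in the paper, would instead derive the fragment origins from annealing thermodynamics and reaction kinetics.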
A Computational and Experimental Study of Slit Resonators
NASA Technical Reports Server (NTRS)
Tam, C. K. W.; Ju, H.; Jones, M. G.; Watson, W. R.; Parrott, T. L.
2003-01-01
Computational and experimental studies are carried out to offer validation of the results obtained from direct numerical simulation (DNS) of the flow and acoustic fields of slit resonators. The test cases include slits with 90-degree corners and slits with 45-degree bevel angle housed inside an acoustic impedance tube. Three slit widths are used. Six frequencies from 0.5 to 3.0 kHz are chosen. Good agreement is found between computed and measured reflection factors. In addition, incident sound waves having white noise spectrum and a prescribed pseudo-random noise spectrum are used in subsequent series of tests. The computed broadband results are again found to agree well with experimental data. It is believed the present results provide strong support that DNS can eventually be a useful and accurate prediction tool for liner aeroacoustics. The usage of DNS as a design tool is discussed and illustrated by a simple example.
Studies on the Parametric Effects of Plasma Arc Welding of 2205 Duplex Stainless Steel
NASA Astrophysics Data System (ADS)
Selva Bharathi, R.; Siva Shanmugam, N.; Murali Kannan, R.; Arungalai Vendan, S.
2018-03-01
This research study attempts to create an optimized parametric window, by employing the Taguchi algorithm, for plasma arc welding (PAW) of 2 mm thick 2205 duplex stainless steel. The parameters considered for experimentation and optimization are the welding current, welding speed and pilot arc length. The experiments involve varying these parameters and recording the resulting depth of penetration and bead width. The parameters are varied over the ranges of 60-70 A (welding current), 250-300 mm/min (welding speed) and 1-2 mm (pilot arc length). Design of experiments is used for the experimental trials. A back-propagation neural network, a genetic algorithm and Taguchi techniques are used to predict the bead width and depth of penetration, and the predictions are in good agreement with the experimentally achieved results. Additionally, micro-structural characterizations are carried out to examine the weld quality. The extrapolation of these optimized parametric values yields enhanced weld strength with reduced cost and time.
Experimental and analytical investigation of a modified ring cusp NSTAR engine
NASA Technical Reports Server (NTRS)
Sengupta, Anita
2005-01-01
A series of experimental measurements on a modified laboratory NSTAR engine were used to validate a zero-dimensional analytical discharge performance model of a ring-cusp ion thruster. The model predicts the discharge performance of a ring-cusp NSTAR thruster as a function of the magnetic field configuration, thruster geometry and throttle level. Analytical formalisms for electron and ion confinement are used to predict the ionization efficiency for a given thruster design. Explicit determination of discharge loss and volume-averaged plasma parameters is also obtained. The model was used to predict the performance of the nominal and modified three- and four-ring-cusp 30-cm ion thruster configurations operating at the full-power (2.3 kW) NSTAR throttle level. Experimental measurements of the discharge loss of the modified engine configuration compare well with the predicted values for propellant utilizations from 80 to 95%. The theory, as validated by experiment, indicates that increasing the strength of the minimum closed magnetic field reduces Maxwellian electron diffusion and electrostatically confines the ion population, reducing subsequent loss to the anode wall. The theory also indicates that increasing the cusp strength and minimizing the cusp area improve primary electron confinement, increasing the probability of an ionization collision prior to loss at the cusp.
Gitifar, Vahid; Eslamloueyan, Reza; Sarshar, Mohammad
2013-11-01
In this study, pretreatment of sugarcane bagasse and subsequent enzymatic hydrolysis are investigated using two categories of pretreatment methods: dilute acid (DA) pretreatment and a combined DA-ozonolysis (DAO) method. Both methods are carried out at different solid ratios, sulfuric acid concentrations, autoclave residence times, bagasse moisture contents and ozonolysis times. The results show that DAO pretreatment can significantly increase the production of glucose compared to the DA method. Applying the k-fold cross-validation method, two optimal artificial neural networks (ANNs) are trained to estimate glucose concentrations for the DA and DAO pretreatment methods. Comparing the modeling results with experimental data indicates that the proposed ANNs have good estimation abilities.
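K-fold cross validation of the kind used above to select the ANNs splits the data into k disjoint folds and rotates which fold serves as the validation set. A minimal, index-level sketch (the model fitting itself is omitted; fold assignment here is simple interleaving, one of several common choices):

```python
def k_fold_splits(n, k):
    """Yield (train_idx, val_idx) index pairs for k-fold cross
    validation over n samples; folds are interleaved and disjoint,
    so every sample is validated on exactly once."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        # train on all folds except fold i, validate on fold i
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, folds[i]
```

Each candidate model is fitted k times and its validation errors averaged, giving a less optimistic estimate of generalization than a single train/test split, which is why the technique suits small experimental datasets like pretreatment trials.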
NASA Astrophysics Data System (ADS)
Kassem, A.; Sawan, M.; Boukadoum, M.; Haidar, A.
2005-12-01
We are concerned with the design, implementation, and validation of a perception SoC based on an ultrasonic array of sensors. The proposed SoC is dedicated to ultrasonic echography applications. A rapid prototyping platform is used to implement and validate the new architecture of the digital signal processing (DSP) core. The proposed DSP core efficiently integrates all of the necessary ultrasonic B-mode processing modules. It includes digital beamforming, quadrature demodulation of RF signals, digital filtering, and envelope detection of the received signals. This system handles 128 scan lines and 6400 samples per scan line with a [InlineEquation not available: see fulltext] angle of view span. The design uses a minimum-size lookup memory to store the initial scan information. Rapid prototyping using an ARM/FPGA combination is used to validate the operation of the described system. This system offers significant advantages of portability and a rapid time to market.
Validation of High Displacement Piezoelectric Actuator Finite Element Models
NASA Technical Reports Server (NTRS)
Taleghani, B. K.
2000-01-01
The paper presents the results obtained by using NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) finite element codes to predict doming of the THUNDER piezoelectric actuators during the manufacturing process and subsequent straining due to an applied input voltage. To effectively use such devices in engineering applications, modeling and characterization are essential. Length, width, dome height, and thickness are important parameters for users of such devices. Therefore, finite element models were used to assess the effects of these parameters. NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) used different methods for modeling piezoelectric effects. In NASTRAN(Registered Trademark), a thermal analogy was used to represent voltage at nodes as equivalent temperatures, while ANSYS(Registered Trademark) processed the voltage directly using piezoelectric finite elements. The results of the finite element models were validated against the experimental results.
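The thermal analogy mentioned above can be illustrated numerically: an in-plane piezoelectric strain d31·V/t is reproduced by an equivalent thermal expansion coefficient, with the nodal voltage supplied as a temperature. The material values below (a generic PZT d31, layer thickness, and drive voltage) are illustrative assumptions, not values from the paper.

```python
# Thermal analogy for piezoelectric actuation (illustrative numbers):
# piezoelectric in-plane strain  e = d31 * V / t  is reproduced thermally as
# e = alpha_eq * dT  by setting  alpha_eq = d31 / t  and feeding dT = V
# (volts interpreted as "degrees" at the nodes).
d31 = -171e-12   # m/V, typical soft-PZT value (assumed)
t = 0.25e-3      # m, piezoelectric layer thickness (assumed)
V = 200.0        # applied voltage, V (assumed)

alpha_eq = d31 / t                 # equivalent thermal expansion coefficient
strain_piezo = d31 * V / t         # direct piezoelectric strain
strain_thermal = alpha_eq * V      # same strain via the thermal analogy
print(strain_piezo, strain_thermal)
```

Both routes yield the same in-plane strain, which is why a purely structural-thermal solver can emulate the converse piezoelectric effect for this class of problem.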
Automated identification of reference genes based on RNA-seq data.
Carmona, Rosario; Arroyo, Macarena; Jiménez-Quesada, María José; Seoane, Pedro; Zafra, Adoración; Larrosa, Rafael; Alché, Juan de Dios; Claros, M Gonzalo
2017-08-18
Gene expression analyses demand appropriate reference genes (RGs) for normalization in order to obtain reliable assessments. Ideally, RG expression levels should remain constant in all cells, tissues or experimental conditions under study. Housekeeping genes traditionally fulfilled this requirement, but they have been reported to be less invariant than expected; therefore, RGs should be tested and validated for every particular situation. Microarray data have been used to propose new RGs, but only a limited set of model species and conditions are available; by contrast, RNA-seq experiments are increasingly frequent and constitute a new source of candidate RGs. An automated workflow based on mapped NGS reads has been constructed to obtain highly and invariantly expressed RGs, based on normalized expression in reads per mapped million and the coefficient of variation. This workflow has been tested with Roche/454 reads from reproductive tissues of olive tree (Olea europaea L.), as well as with Illumina paired-end reads from two different accessions of Arabidopsis thaliana and three different human cancers (prostate, small-cell lung cancer and lung adenocarcinoma). Candidate RGs have been proposed for each species, and many of them have been previously reported as RGs in the literature. Experimental validation of significant RGs in olive tree is provided to support the algorithm. Regardless of sequencing technology, number of replicates and library sizes, when RNA-seq experiments are designed and performed, the same datasets can be analyzed with our workflow to extract suitable RGs for subsequent PCR validation. Moreover, different subsets of experimental conditions can provide different suitable RGs.
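The core of the selection criterion above (normalize to reads per mapped million, then keep genes that are both highly and invariantly expressed by coefficient of variation) can be sketched on a toy count matrix. The thresholds and counts below are illustrative assumptions, not the published workflow's parameters.

```python
import numpy as np

# Toy count matrix: genes x samples (e.g., mapped RNA-seq read counts).
counts = np.array([
    [100, 120, 110, 105],   # stably expressed
    [500, 480, 510, 495],   # stably and highly expressed
    [ 10, 300,   5, 200],   # variable -> poor reference gene
])
lib_sizes = counts.sum(axis=0)

# Normalize to reads per mapped million (RPM), then rank by coefficient
# of variation (CV = std/mean): good RGs are highly and invariantly expressed.
rpm = counts / lib_sizes * 1e6
cv = rpm.std(axis=1) / rpm.mean(axis=1)
mean_expr = rpm.mean(axis=1)

# Candidate RGs: low CV among genes above a minimum expression level
candidates = np.where((cv < 0.2) & (mean_expr > np.median(mean_expr)))[0]
print(candidates)  # -> [1]
```

On this toy matrix, only gene 1 passes both filters; in the real workflow the surviving candidates would then go to PCR validation.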
NASA Astrophysics Data System (ADS)
Starke, R.; Schober, G. A. H.
2018-03-01
We provide a systematic theoretical, experimental, and historical critique of the standard derivation of Fresnel's equations, which shows in particular that these well-established equations actually contradict the traditional, macroscopic approach to electrodynamics in media. Subsequently, we give a rederivation of Fresnel's equations which is exclusively based on the microscopic Maxwell equations and hence in accordance with modern first-principles materials physics. In particular, as a main outcome of this analysis that is of more general interest, we propose the most general boundary conditions on electric and magnetic fields that are valid on the microscopic level.
Tasolamprou, Anna C; Zhang, Lei; Kafesaki, Maria; Koschny, Thomas; Soukoulis, Costas M
2015-06-01
We demonstrate the numerical design and the experimental validation of frequency dependent directional emission from a dielectric photonic crystal structure. The wave propagates through a photonic crystal line-defect waveguide, while a surface layer at the termination of the photonic crystal enables the excitation of surface modes and a subsequent grating layer transforms the surface energy into outgoing propagating waves of the form of a directional beam. The angle of the beam is controlled by the frequency and the structure operates as a frequency splitter in the intermediate and far field region.
Hong, Young-Joo; Makita, Shuichi; Sugiyama, Satoshi; Yasuno, Yoshiaki
2014-01-01
Polarization mode dispersion (PMD) degrades the performance of Jones-matrix-based polarization-sensitive multifunctional optical coherence tomography (JM-OCT). The problem is especially acute for optically buffered JM-OCT, because the long fiber in the optical buffering module induces a large amount of PMD. This paper presents a method to correct the effect of PMD in JM-OCT. We first mathematically model the PMD in JM-OCT and then derive a method to correct it. This method is a combination of a simple hardware modification and a subsequent software correction. The hardware modification is the introduction of two polarizers, which transform the PMD into a global complex modulation of the Jones matrix. Subsequently, the software correction demodulates this global modulation. The method is validated with an experimentally obtained point spread function from a mirror sample, as well as by in vivo measurement of a human retina. PMID:25657888
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuniga, Cristal; Li, Chien -Ting; Huelsman, Tyler
The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Moreover, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine.
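Growth-rate predictions from a genome-scale model of this kind are typically obtained by flux balance analysis: maximize the biomass flux subject to steady-state mass balance and flux bounds. The sketch below solves a deliberately tiny 2-metabolite, 4-reaction network as a linear program; it is not the iCZ843 reconstruction, whose stoichiometric matrix has thousands of reactions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (FBA) -- NOT the iCZ843 model itself.
# Reactions: R1 uptake->A, R2 A->B, R3 B->biomass (objective), R4 A->byproduct.
S = np.array([[1, -1,  0, -1],   # metabolite A balance
              [0,  1, -1,  0]])  # metabolite B balance
c = [0, 0, -1, 0]                # maximize v3 (biomass) == minimize -v3
bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000)]  # uptake capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(res.x)  # optimal flux distribution; biomass flux hits the uptake cap
```

Medium alterations (such as adding tryptophan or methionine) enter this formulation as changed exchange-reaction bounds, and the LP is simply re-solved.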
Zuñiga, Cristal; Li, Chien-Ting; Huelsman, Tyler; Levering, Jennifer; Zielinski, Daniel C; McConnell, Brian O; Long, Christopher P; Knoshaug, Eric P; Guarnieri, Michael T; Antoniewicz, Maciek R; Betenbaugh, Michael J; Zengler, Karsten
2016-09-01
The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. © 2016 American Society of Plant Biologists. All rights reserved.
Perry, Jeffrey J; Losier, Justin H; Stiell, Ian G; Sharma, Mukul; Abdulaziz, Kasim
2016-01-01
Five percent of transient ischemic attack (TIA) patients have a subsequent stroke within 7 days. The Canadian TIA Score uses clinical findings to calculate the risk of subsequent stroke within 7 days. Our objectives were to assess 1) anticipated use; 2) component face validity; 3) risk strata for stroke within 7 days; and 4) actions required for a given risk of subsequent stroke. After a rigorous development process, a survey questionnaire was administered to a random sample of 300 emergency physicians selected from those registered in a national medical directory. The surveys were distributed using a modified Dillman technique. From a total of 271 eligible surveys, we received 131 (48.3%) completed surveys; 96.2% of emergency physicians would use a validated Canadian TIA Score; 8 of 13 components comprising the Canadian TIA Score were rated as Very Important or Important by survey respondents. Risk categories for subsequent stroke were defined, ranging from minimal risk up to a 10% risk of subsequent stroke within 7 days. A validated Canadian TIA Score will likely be used by emergency physicians. Most components of the TIA Score have high face validity. Risk strata are definable, which may allow physicians to determine immediate actions in the emergency department based on subsequent stroke risk.
NASA Astrophysics Data System (ADS)
Dhakal, B.; Nicholson, D. E.; Saleeb, A. F.; Padula, S. A., II; Vaidyanathan, R.
2016-09-01
Shape memory alloy (SMA) actuators often operate under a complex state of stress for an extended number of thermomechanical cycles in many aerospace and engineering applications. Hence, it becomes important to account for multi-axial stress states and deformation characteristics (which evolve with thermomechanical cycling) when calibrating any SMA model for implementation in large-scale simulation of actuators. To this end, the present work is focused on the experimental validation of an SMA model calibrated for the transient and cyclic evolutionary behavior of shape memory Ni49.9Ti50.1, for the actuation of axially loaded helical-coil springs. The approach requires both experimental and computational aspects to appropriately assess the thermomechanical response of these multi-dimensional structures. As such, an instrumented and controlled experimental setup was assembled to obtain temperature, torque, degree of twist and extension, while controlling end constraints during heating and cooling of an SMA spring under a constant externally applied axial load. The computational component assesses the capabilities of a general, multi-axial, SMA material-modeling framework, calibrated for Ni49.9Ti50.1 with regard to its usefulness in the simulation of SMA helical-coil spring actuators. Axial extension, being the primary response, was examined on an axially-loaded spring with multiple active coils. Two different conditions of end boundary constraint were investigated in both the numerical simulations as well as the validation experiments: Case (1) where the loading end is restrained against twist (and the resulting torque measured as the secondary response) and Case (2) where the loading end is free to twist (and the degree of twist measured as the secondary response). The present study focuses on the transient and evolutionary response associated with the initial isothermal loading and the subsequent thermal cycles under applied constant axial load. 
The experimental results for the helical-coil actuator under the two boundary conditions agree, within experimental error, with their counterparts in the numerical simulations. The numerical simulation and the experimental validation demonstrate similar transient and evolutionary behavior in the deformation response under the complex, inhomogeneous, multi-axial stress state and large deformations of the helical-coil actuator. This response, although substantially different in magnitude, exhibited evolutionary characteristics similar to those of the simple, uniaxial, homogeneous stress state of the isobaric tensile tests used for model calibration. There was no significant difference in the axial displacement (primary response) magnitudes between Cases (1) and (2) for the number of cycles investigated here. The simulated secondary responses of the two cases evolved in a manner similar to the experimental results for the respective cases.
Collisional Cooling of Light Ions by Cotrapped Heavy Atoms.
Dutta, Sourav; Sawant, Rahul; Rangwala, S A
2017-03-17
We experimentally demonstrate cooling of trapped ions by collisions with cotrapped, higher-mass neutral atoms. It is shown that the lighter ^{39}K^{+} ions, created by ionizing ^{39}K atoms in a magneto-optical trap (MOT), when trapped in an ion trap and subsequently allowed to cool by collisions with ultracold, heavier ^{85}Rb atoms in a MOT, exhibit a longer trap lifetime than without the localized ^{85}Rb MOT atoms. A similar cooling of trapped ^{85}Rb^{+} ions by ultracold ^{133}Cs atoms in a MOT is also demonstrated in a different experimental configuration to validate this mechanism of ion cooling by localized and centered ultracold neutral atoms. Our results suggest that the cooling of ions by localized cold atoms holds for any mass ratio, thereby enabling studies on a wider class of atom-ion systems irrespective of their masses.
Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci
2018-02-10
Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion and pre-combined tension-torsion are measured, where single-sample and multi-sample methods are applied respectively to determine the yield stresses at a specified offset strain. The rule and characteristics of the evolution of the subsequent yield surface are investigated. Under different pre-strain conditions, the influence of the number of test points, the test sequence and the specified offset strain on the measurement of the subsequent yield surface, as well as the concave appearance of some measured yield surfaces, are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are as follows: (1) for either the single- or multi-sample method, the measured subsequent yield surfaces differ remarkably from the cylindrical yield surfaces proposed by classical plasticity theory; (2) there are apparent differences between the results of the two methods: the multi-sample method is not influenced by the number of test points, the test order or the cumulative residual plastic strain left by other test points, whereas these are very influential in the single-sample method; and (3) the measured subsequent yield surface may appear concave; for the single-sample method the surface can be rendered convex by changing the test sequence, while for the multi-sample method the concavity disappears when a larger offset strain is specified.
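Determining a yield stress at a specified offset strain, as done for every probing direction above, amounts to intersecting the stress-strain curve with an elastic line shifted by the offset. The sketch below does this for a synthetic uniaxial curve; the modulus, hardening slope and offset are illustrative assumptions, not the paper's copper data.

```python
import numpy as np

# Yield stress at a specified offset strain from a synthetic stress-strain curve.
E = 110e3        # elastic modulus, MPa (approximate copper value, assumed)
offset = 0.002   # specified offset strain (0.2%)

eps = np.linspace(0, 0.01, 500)
# Elastic up to 0.1% strain, then linear hardening (illustrative shape)
sigma = np.where(eps < 0.001, E * eps, 110.0 + 2000.0 * (eps - 0.001))

# Yield point: first intersection of the curve with the offset-shifted elastic line
offset_line = E * (eps - offset)
idx = np.argmax(sigma <= offset_line)   # first index where the line overtakes the curve
print(round(float(sigma[idx]), 1))      # offset yield stress, MPa
```

In the single-sample method this calculation is repeated at many probing directions on one specimen, so the residual plastic strain from each probe perturbs the next, which is exactly the order dependence the abstract reports.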
NASA Astrophysics Data System (ADS)
van Ness, Katherine; Hill, Craig; Aliseda, Alberto; Polagye, Brian
2017-11-01
Experimental measurements of a 0.45-m diameter, variable-pitch marine hydrokinetic (MHK) turbine were collected in a tow tank at different tip speed ratios and blade pitch angles. The coefficients of power and thrust are computed from direct measurements of torque, force, and angular speed at the hub. Loads on individual blades were measured with a six-degree-of-freedom load cell mounted at the root of one of the turbine blades. This information is used to validate the performance predictions of the blade element model (BEM) simulations used in the turbine design, specifically the open-source code WTPerf developed by the National Renewable Energy Laboratory (NREL). Predictions of blade and hub loads by NREL's AeroDyn are also validated for the first time for an axial-flow MHK turbine. The influence of the design twist angle, combined with the variable pitch angle, on flow separation and the subsequent blade loading will be analyzed with the complementary information from simulations and experiments. Funding for this research was provided by the United States Naval Facilities Engineering Command.
NASA Astrophysics Data System (ADS)
Ma, Zhisai; Liu, Li; Zhou, Sida; Naets, Frank; Heylen, Ward; Desmet, Wim
2017-03-01
The problem of linear time-varying (LTV) system modal analysis is considered based on time-dependent state space representations, since classical modal analysis of linear time-invariant systems and current LTV system modal analysis under the "frozen-time" assumption cannot determine the dynamic stability of LTV systems. Time-dependent state space representations of LTV systems are first introduced, and the corresponding modal analysis theories are subsequently presented via a stability-preserving state transformation. The concept of time-varying modes of LTV systems is extended in terms of uniqueness, and the modes are further interpreted to determine the system's stability. An extended modal identification is proposed to estimate the time-varying modes, consisting of the estimation of the state transition matrix via a subspace-based method and the extraction of the time-varying modes by QR decomposition. The proposed approach is validated numerically on three cases and experimentally on a simply supported beam coupled with a moving mass. The approach accurately estimates the time-varying modes and provides a new way to determine the dynamic stability of LTV systems from the estimated time-varying modes.
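The final interpretation step can be illustrated with a toy example: once a sequence of discrete-time state transition matrices has been estimated, each matrix's eigenvalues give the instantaneous modes, which map back to continuous-time frequencies and damping. The transition matrices below are synthetic (a discretized oscillator with a drifting natural frequency); the paper's subspace-based estimation and QR-based extraction are not reproduced here.

```python
import numpy as np

dt = 0.001
freqs = []
for wn in [10.0, 11.0, 12.0]:                   # slowly varying natural frequency, rad/s
    A = np.array([[0.0, 1.0], [-wn**2, -0.2]])  # continuous-time state matrix
    Phi = np.eye(2) + A * dt                    # first-order discretization of the
                                                # state transition matrix over one step
    lam = np.linalg.eigvals(Phi)                # discrete-time eigenvalues (modes)
    s = np.log(lam) / dt                        # map back to continuous-time poles
    freqs.append(float(np.abs(s.imag).max()))   # instantaneous modal frequency
print([round(f, 1) for f in freqs])             # -> [10.0, 11.0, 12.0]
```

Stability can then be read off the real parts of the continuous-time poles at each instant, which is the information a frozen-time analysis of the original system matrices cannot reliably provide.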
Xin, F X; Lu, T J
2009-03-01
The air-borne sound insulation performance of a rectangular double-panel partition clamp-mounted on an infinite acoustically rigid baffle is investigated both analytically and experimentally, and compared with that of a simply supported one. With the clamped (or simply supported) boundary accounted for by the method of modal functions, a double-series solution for the sound transmission loss (STL) of the structure is obtained by employing the weighted residual (Galerkin) method. Experimental measurements on aluminum double-panel partitions with an air cavity are subsequently carried out to validate the theoretical model for both types of boundary condition, and good overall agreement is achieved. A consistency check of the two models (based separately on clamped and simply supported modal functions) is performed by extending the panel dimensions to infinity, where no boundaries exist. Significant discrepancies between the two boundary conditions are demonstrated in terms of STL-versus-frequency plots as well as panel deflection mode shapes.
Momota, Yutaka; Shimada, Kenichiro; Gin, Azusa; Matsubara, Takako; Azakami, Daigo; Ishioka, Katsumi; Nakamura, Yuka; Sako, Toshinori
2016-10-01
A closed chamber evaporimeter is suitable for measuring transepidermal water loss (TEWL) in cats because of the compact device size, tolerance to sudden movement and short measuring time. TEWL is a representative parameter for skin barrier dysfunction, which is one of the clinical signs of atopic dermatitis in humans and dogs. Measurement of feline TEWL has been reported, but the applicability of this parameter has not been validated. The aims of this study were to determine if tape stripping is a valid experimental model in cats for studying TEWL and to determine if a closed chamber system is a suitable measurement tool for cats. Ten clinically normal cats were studied. In order to evaluate variation in the measured values, TEWL was measured on the right and left sides of three clipped regions (axillae, lateral thigh and groin). Subsequently, TEWL was measured using sequential tape stripping of the stratum corneum as a model of acute barrier disruption. The variations between the two sides of the three regions showed no significant difference. Sequential tape stripping was associated with increasing TEWL values. Feline TEWL was shown to reflect changes in the skin barrier in an experimental model using a closed chamber system and has the potential for evaluating skin barrier function in cats with skin diseases. © 2016 ESVD and ACVD.
Jayaswal, Vivek; Lutherborrow, Mark; Ma, David D F; Hwa Yang, Yee
2009-05-01
Over the past decade, a class of small RNA molecules called microRNAs (miRNAs) has been shown to regulate gene expression at the post-transcription stage. While early work focused on the identification of miRNAs using a combination of experimental and computational techniques, subsequent studies have focused on identification of miRNA-target mRNA pairs as each miRNA can have hundreds of mRNA targets. The experimental validation of some miRNAs as oncogenic has provided further motivation for research in this area. In this article we propose an odds-ratio (OR) statistic for identification of regulatory miRNAs. It is based on integrative analysis of matched miRNA and mRNA time-course microarray data. The OR-statistic was used for (i) identification of miRNAs with regulatory potential, (ii) identification of miRNA-target mRNA pairs and (iii) identification of time lags between changes in miRNA expression and those of its target mRNAs. We applied the OR-statistic to a cancer data set and identified a small set of miRNAs that were negatively correlated to mRNAs. A literature survey revealed that some of the miRNAs that were predicted to be regulatory, were indeed oncogenic or tumor suppressors. Finally, some of the predicted miRNA targets have been shown to be experimentally valid.
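The integrative idea described above, matched miRNA and mRNA time courses screened for negative association with a possible lag between miRNA changes and target mRNA changes, can be sketched as follows. The odds-ratio statistic itself is not specified in the abstract, so this sketch uses a plain lagged Pearson correlation on synthetic series as a stand-in.

```python
import numpy as np

t = np.arange(8)
mirna = np.sin(t)                    # synthetic miRNA time course
mrna = -np.sin(t - 2) + 0.01 * t     # target repressed with a 2-step lag, plus drift

def lagged_corr(x, y, max_lag=3):
    """Pearson correlation of x against y shifted by each lag; returns best (lag, r)."""
    best = (0, 0.0)
    for lag in range(max_lag + 1):
        r = np.corrcoef(x[: len(x) - lag], y[lag:])[0, 1]
        if abs(r) > abs(best[1]):
            best = (lag, r)
    return best

lag, r = lagged_corr(mirna, mrna)
print(lag, round(r, 2))   # strongest (negative) association found at a 2-step lag
```

A strongly negative correlation at a nonzero lag is the signature of interest: the miRNA change precedes the drop in its putative target's expression.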
Large Eddy Simulation of Flame Flashback in Swirling Premixed Flames
NASA Astrophysics Data System (ADS)
Lietz, Christopher; Raman, Venkatramanan
2014-11-01
In the design of high-hydrogen content gas turbines for power generation, flashback of the turbulent flame by propagation through the low velocity boundary layers in the premixing region is an operationally dangerous event. Predictive models that could accurately capture the onset and subsequent behavior of flashback would be indispensable in gas turbine design. The large eddy simulation (LES) approach is used here to model this process. The goal is to examine the validity of a probability distribution function (PDF) based model in the context of a lean premixed flame in a confined geometry. A turbulent swirling flow geometry and corresponding experimental data is used for validation. A suite of LES calculations are performed on a large unstructured mesh for varying fuel compositions operating at several equivalence ratios. It is shown that the PDF based method can predict some statistical properties of the flame front, with improvement over other models in the same application.
Oberg, Tomas
2004-01-01
Halogenated aliphatic compounds have many technical uses, but substances within this group are also ubiquitous environmental pollutants that can affect the ozone layer and contribute to global warming. The establishment of quantitative structure-property relationships is of interest not only to fill in gaps in the available database but also to validate experimental data already acquired. The three-dimensional structures of 240 compounds were modeled with molecular mechanics prior to the generation of empirical descriptors. Two bilinear projection methods, principal component analysis (PCA) and partial-least-squares regression (PLSR), were used to identify outliers. PLSR was subsequently used to build a multivariate calibration model by extracting the latent variables that describe most of the covariation between the molecular structure and the boiling point. Boiling points were also estimated with an extension of the group contribution method of Stein and Brown.
Khanfar, Mohammad A; Banat, Fahmy; Alabed, Shada; Alqtaishat, Saja
2017-02-01
High expression of Nek2 has been detected in several types of cancer, and it represents a novel target in human cancer. In the current study, structure-based pharmacophore modeling combined with multiple linear regression (MLR)-based QSAR analyses was applied to disclose the structural requirements for Nek2 inhibition. Generated pharmacophoric models were initially validated with receiver operating characteristic (ROC) curves, and the optimum models were subsequently implemented in QSAR modeling together with other physicochemical descriptors. QSAR-selected models were employed as 3D search filters to mine the National Cancer Institute (NCI) database for novel Nek2 inhibitors, and the associated QSAR model prioritized the bioactivities of the captured hits for in vitro evaluation. Experimental validation identified several potent Nek2 inhibitors with novel structural scaffolds. The most potent captured hit exhibited an [Formula: see text] value of 237 nM.
King, Marika R.; Binger, Cathy; Kent-Walsh, Jennifer
2015-01-01
The developmental readiness of four 5-year-old children to produce basic sentences using graphic symbols on an augmentative and alternative communication (AAC) device during a dynamic assessment (DA) task was examined. Additionally, the ability of the DA task to predict performance on a subsequent experimental task was evaluated. A graduated prompting framework was used during DA. Measures included amount of support required to produce the targets, modifiability (change in participant performance) within a DA session, and predictive validity of DA. Participants accurately produced target structures with varying amounts of support. Modifiability within DA sessions was evident for some participants, and partial support was provided for the measures of predictive validity. These initial results indicate that DA may be a viable way to measure young children’s developmental readiness to learn how to sequence simple, rule-based messages via aided AAC. PMID:25621928
Piroth, Tobias; Pauly, Marie-Christin; Schneider, Christian; Wittmer, Annette; Möllers, Sven; Döbrössy, Máté; Winkler, Christian; Nikkhah, Guido
2014-01-01
Restorative cell therapy concepts in neurodegenerative diseases are aimed at replacing lost neurons. Despite advances in research on pluripotent stem cells, fetal tissue from routine elective abortions is still regarded as the only safe cell source. Progenitor cells isolated from distinct first-trimester fetal CNS regions have already been used in clinical trials and will be used again in a new multicenter trial funded by the European Union (TRANSEURO). Bacterial contamination of human fetal tissue poses a potential risk of causing infections in the brain of the recipient. Thus, effective methods of microbial decontamination and validation of these methods are required prior to approval of a neurorestorative cell therapy trial. We have developed a protocol consisting of subsequent washing steps at different stages of tissue processing. Efficacy of microbial decontamination was assessed on rat embryonic tissue incubated with high concentrations of defined microbe solutions including representative bacterial and fungal species. Experimental microbial contamination was reduced by several log ranks. Subsequently, we have analyzed the spectrum of microbial contamination and the effect of subsequent washing steps on aborted human fetal tissue; 47.7% of the samples taken during human fetal tissue processing were positive for a microbial contamination, but after washing, no sample exhibited bacterial growth. Our data suggest that human fetal tissue for neural repair can carry microbes of various species, highlighting the need for decontamination procedures. The decontamination protocol described in this report has been shown to be effective as no microbes could be detected at the end of the procedure.
NASA Astrophysics Data System (ADS)
Percoco, Gianluca; Sánchez Salmerón, Antonio J.
2015-09-01
The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges that accurately control the position of the sensors. Photogrammetry would lower the cost of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro parts of large objects. Unfortunately, the behaviour of photogrammetry when applied to micro-features is not well characterized. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) at the micro-scale, taking into account research papers in the literature stating that an angle of view (AOV) of around 10° is the lower limit for the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. First, a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently, the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnifications of up to approximately 2×, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitations of the laser printing technology used to produce the bi-dimensional pattern on common paper were overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with those of existing, more expensive commercial techniques.
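The AOV figures discussed above (the 10° literature limit and the 3.4° achieved here) follow from simple pinhole geometry. A minimal sketch, using an illustrative sensor width and treating the extension tubes as added effective focal length (a rough approximation, not the authors' optical model):

```python
import math

def angle_of_view_deg(sensor_dim_mm: float, focal_length_mm: float) -> float:
    """Pinhole angle of view across one sensor dimension, in degrees."""
    return math.degrees(2.0 * math.atan(sensor_dim_mm / (2.0 * focal_length_mm)))

# Illustrative values: a 23.6 mm wide APS-C sensor behind a 60 mm macro lens;
# adding 20 + 32 mm of extension tubes is treated here as simply lengthening
# the effective lens-to-sensor distance, which narrows the AOV further.
aov_60 = angle_of_view_deg(23.6, 60.0)
aov_extended = angle_of_view_deg(23.6, 60.0 + 32.0 + 20.0)
print(round(aov_60, 1), round(aov_extended, 1))
```

The trend, not the exact numbers, is the point: longer effective focal length at fixed sensor size means a narrower AOV, pushing the setup below the conventional calibration limit.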
Experimental rhinovirus infection in volunteers.
Bardin, P G; Sanderson, G; Robinson, B S; Holgate, S T; Tyrrell, D A
1996-11-01
Experimental viral disease studies in volunteers have clarified many aspects of the pathogenesis of human viral disease. Recently, interest has focused on rhinovirus-associated asthma exacerbations, and new volunteer studies have suggested that airway responsiveness (AR) is enhanced during a cold. For scientific, ethical and safety reasons, it is important to use validated methods for the preparation of a virus inoculum and to ensure that its virological characteristics and the host responses it elicits are not altered. We have prepared a new human rhinovirus (HRV) inoculum using recent guidelines and assessed whether disease characteristics (for example, severity of colds or changes in AR) were retained. Studies were conducted in 25 clinically healthy volunteers using a validated HRV inoculum in the first 17 and a new inoculum in the subsequent eight subjects. Severity of cold symptoms, nasal wash albumin levels and airway responsiveness were measured, and the new inoculum was prepared from nasal washes obtained during the cold. The new inoculum was tested using standard virological and serological techniques, as well as a polymerase chain reaction for Mycoplasma pneumoniae. No contaminating viruses or organisms were detected, and the suggested methods proved workable. Good clinical colds developed in 20 of the 25 subjects, and median symptom scores were similar in the validated and new inoculum groups (18 and 17.5, respectively; p=0.19). All subjects shed virus, and there were no differences noted in viral culture scores, nasal wash albumin or rates of seroconversion between the two groups. Although airway responsiveness increased in both groups (p=0.02 and p=0.05), the degree of change was similar. We have performed experimental rhinovirus infection studies and demonstrated similar clinical disease in two inoculum groups. Amplified airway responsiveness was induced; continuing studies will define the mechanisms and suggest modes of treatment.
Buras, Zachary J; Chu, Te-Chun; Jamal, Adeel; Yee, Nathan W; Middaugh, Joshua E; Green, William H
2018-05-16
The C9H11 potential energy surface (PES) was experimentally and theoretically explored because it is a relatively simple, prototypical alkylaromatic radical system. Although the C9H11 PES has already been extensively studied both experimentally (under single-collision and thermal conditions) and theoretically, new insights were gained in this work by taking a new experimental approach: flash photolysis combined with time-resolved molecular beam mass spectrometry (MBMS) and visible laser absorbance. The C9H11 PES was experimentally accessed by photolytic generation of the phenyl radical and subsequent reaction with excess propene (C6H5 + C3H6). The overall kinetics of C6H5 + C3H6 was measured using laser absorbance with high time resolution from 300 to 700 K and was found to be in agreement with earlier measurements over a lower temperature range. Five major product channels of C6H5 + C3H6 were observed with MBMS at 600 and 700 K, four of which were expected: hydrogen (H)-abstraction (measured by the stable benzene, C6H6, product), methyl radical (CH3)-loss (styrene detected), H-loss (phenylpropene isomers detected) and radical adduct stabilization. The fifth, unexpected product observed was the benzyl radical, which was rationalized by the inclusion of a previously unreported pathway on the C9H11 PES: aromatic-catalyzed 1,2-H-migration and subsequent resonance-stabilized radical (RSR, benzyl radical in this case) formation. The current theoretical understanding of the C9H11 PES (including the aromatic-catalyzed pathway) was supported by quantitative comparisons between modeled and experimental MBMS results. At 700 K, the branching to styrene + CH3 was 2-4 times greater than that of any other product channel, while benzyl radical + C2H4 from the aromatic-catalyzed pathway accounted for ∼10% of the branching.
Single-collision conditions were also simulated on the updated PES to explain why previous crossed molecular beam experiments did not see evidence of the aromatic-catalyzed pathway. This experimentally validated knowledge of the C9H11 PES was added to the database of the open-source Reaction Mechanism Generator (RMG), which was then used to generalize the findings on the C9H11 PES to a slightly more complicated alkylaromatic system.
A Recipe for Soft Fluidic Elastomer Robots
Marchese, Andrew D.; Katzschmann, Robert K.
2015-01-01
This work provides approaches to designing and fabricating soft fluidic elastomer robots. That is, three viable actuator morphologies composed entirely from soft silicone rubber are explored, and these morphologies are differentiated by their internal channel structure, namely, ribbed, cylindrical, and pleated. Additionally, three distinct casting-based fabrication processes are explored: lamination-based casting, retractable-pin-based casting, and lost-wax-based casting. Furthermore, two ways of fabricating a multiple DOF robot are explored: casting the complete robot as a whole and casting single degree of freedom (DOF) segments with subsequent concatenation. We experimentally validate each soft actuator morphology and fabrication process by creating multiple physical soft robot prototypes. PMID:27625913
A Recipe for Soft Fluidic Elastomer Robots.
Marchese, Andrew D; Katzschmann, Robert K; Rus, Daniela
2015-03-01
This work provides approaches to designing and fabricating soft fluidic elastomer robots. That is, three viable actuator morphologies composed entirely from soft silicone rubber are explored, and these morphologies are differentiated by their internal channel structure, namely, ribbed, cylindrical, and pleated. Additionally, three distinct casting-based fabrication processes are explored: lamination-based casting, retractable-pin-based casting, and lost-wax-based casting. Furthermore, two ways of fabricating a multiple DOF robot are explored: casting the complete robot as a whole and casting single degree of freedom (DOF) segments with subsequent concatenation. We experimentally validate each soft actuator morphology and fabrication process by creating multiple physical soft robot prototypes.
A model for the solution structure of the rod arrestin tetramer.
Hanson, Susan M; Dawson, Eric S; Francis, Derek J; Van Eps, Ned; Klug, Candice S; Hubbell, Wayne L; Meiler, Jens; Gurevich, Vsevolod V
2008-06-01
Visual rod arrestin has the ability to self-associate at physiological concentrations. We previously demonstrated that only monomeric arrestin can bind the receptor and that the arrestin tetramer in solution differs from that in the crystal. We employed the Rosetta docking software to generate molecular models of the physiologically relevant solution tetramer based on the monomeric arrestin crystal structure. The resulting models were filtered using the Rosetta energy function, experimental intersubunit distances measured with DEER spectroscopy, and intersubunit contact sites identified by mutagenesis and site-directed spin labeling. This resulted in a unique model for subsequent evaluation. The validity of the model is strongly supported by model-directed crosslinking and targeted mutagenesis that yields arrestin variants deficient in self-association. The structure of the solution tetramer explains its inability to bind rhodopsin and paves the way for experimental studies of the physiological role of rod arrestin self-association.
Liebl, Hans; Garcia, Eduardo Grande; Holzner, Fabian; Noel, Peter B.; Burgkart, Rainer; Rummeny, Ernst J.; Baum, Thomas; Bauer, Jan S.
2015-01-01
Purpose To experimentally validate a non-linear finite element analysis (FEA) modeling approach assessing in-vitro fracture risk at the proximal femur, and to transfer the method to standard in-vivo multi-detector computed tomography (MDCT) data of the hip, aiming to predict additional hip fracture risk in subjects with and without osteoporosis-associated vertebral fractures, using bone mineral density (BMD) measurements as the gold standard. Methods One fresh-frozen human femur specimen was mechanically tested and fractured, simulating stance and clinically relevant fall loading configurations of the hip. After experimental in-vitro validation, the FEA simulation protocol was transferred to standard contrast-enhanced in-vivo MDCT images to calculate individual hip fracture risk for 4 subjects each with and without a history of osteoporotic vertebral fractures, matched by age and gender. In addition, FEA-based risk factor calculations were compared to manual femoral BMD measurements for all subjects. Results The in-vitro simulations showed good correlation with the experimentally measured strains in both the stance (R2 = 0.963) and fall (R2 = 0.976) configurations. The simulated maximum stress overestimated the experimental failure load (4743 N) by 14.7% (5440 N), while the simulated maximum strain overestimated it by 4.7% (4968 N). The simulated failed elements coincided precisely with the experimentally determined fracture locations. BMD measurements in subjects with a history of osteoporotic vertebral fractures did not differ significantly from those in subjects without fragility fractures (femoral head: p = 0.989; femoral neck: p = 0.366), but the former showed higher FEA-based risk factors for additional incident hip fractures (p = 0.028). Conclusion The FEA simulations were successfully validated by elastic and destructive in-vitro experiments. In the subsequent in-vivo analyses, MDCT-based FEA risk factor differences for additional hip fractures were not mirrored by corresponding BMD measurements.
Our data suggest that MDCT-derived FEA models may assess bone strength more accurately than BMD measurements alone, providing a valuable in-vivo fracture risk assessment tool. PMID:25723187
Ning, Bo; Guo, Geng; Liu, Hong; Ning, Lei; Sun, Bao-Liang; Li, Zhen; Wang, Shuo; Lv, Zheng-Wen; Fan, Cun-Dong
2017-09-01
MSK (mitogen- and stress-activated protein kinase) proteins are a family of mitogen-activated protein kinases. MSKs represent a novel type of pro-survival gene, potentially enhancing the phosphorylation of the Bcl2-associated agonist of cell death. However, MSK function and expression are poorly understood in the central nervous system. In the present study, a subarachnoid hemorrhage (SAH) model was established in SD rats and the expression of MSK1 in the brain subsequent to experimental SAH was investigated. In response to SAH, MSK1 mRNA and protein levels gradually declined, reaching their lowest point at 3 days, and increased thereafter. The expression of active caspase-3 was negatively correlated with the MSK1 level. Colocalization and correlated changes in the expression of MSK1 and active caspase-3 in neurons and astrocytes indicated that MSK1 downregulation may contribute to SAH-induced apoptosis, suggesting that MSK1 may be involved in the pathophysiology of the brain cortex subsequent to SAH.
Brine reuse in ion-exchange softening: salt discharge, hardness leakage, and capacity tradeoffs.
Flodman, Hunter R; Dvorak, Bruce I
2012-06-01
Ion-exchange water softening results in the discharge of excess sodium chloride to the aquatic environment during the regeneration cycle. In order to reduce sodium chloride use and subsequent discharge from ion-exchange processes, either brine reclaim operations can be implemented or salt application during regeneration can be reduced. Both result in tradeoffs related to loss of bed volumes treated per cycle and increased hardness leakage. An experimentally validated model was used to compare concurrent water softening operations at various salt application quantities with and without the direct reuse of waste brine for treated tap water of typical midwestern water quality. Both approaches were able to reduce salt use and subsequent discharge. Reducing salt use and discharge by lowering the salt application rate during regeneration consequently increased hardness leakage and decreased treatment capacity. Single or two tank brine recycling systems are capable of reducing salt use and discharge without increasing hardness leakage, although treatment capacity is reduced.
NASA Astrophysics Data System (ADS)
Altan-Bonnet, Gregoire
The immune system is a collection of cells whose function is to eradicate pathogenic infections and malignant tumors while protecting healthy tissues. Recent work has delineated key molecular and cellular mechanisms associated with the ability to discriminate self from non-self agents. For example, structural studies have quantified the biophysical characteristics of antigenic molecules (those prone to trigger lymphocyte activation and a subsequent immune response). However, such molecular mechanisms were found to be highly unreliable at the individual cellular level. We will present recent efforts to build experimentally validated computational models of the immune responses at the collective cell level. Such models have become critical to delineate how higher-level integration through nonlinear amplification in signal transduction, dynamic feedback in lymphocyte differentiation and cell-to-cell communication allows the immune system to enforce reliable self/non-self discrimination at the organism level. In particular, we will present recent results demonstrating how T cells tune their antigen discrimination according to cytokine cues, and how competition for cytokine within polyclonal populations of cells shape the repertoire of responding clones. Additionally, we will present recent theoretical and experimental results demonstrating how competition between diffusion and consumption of cytokines determine the range of cell-cell communications within lymphoid organs. Finally, we will discuss how biochemically explicit models, combined with quantitative experimental validation, unravel the relevance of new feedbacks for immune regulations across multiple spatial and temporal scales.
Carlson, Jean M.
2018-01-01
In this paper we study antibiotic-induced C. difficile infection (CDI), caused by the toxin-producing C. difficile (CD), and implement clinically-inspired simulated treatments in a computational framework that synthesizes a generalized Lotka-Volterra (gLV) model with SIR modeling techniques. The gLV model uses parameters derived from an experimental mouse model, in which the mice are administered antibiotics and subsequently dosed with CD. We numerically identify which of the experimentally measured initial conditions are vulnerable to CD colonization, then formalize the notion of CD susceptibility analytically. We simulate fecal transplantation, a clinically successful treatment for CDI, and discover that both the transplant timing and transplant donor are relevant to the efficacy of the treatment, a result which has clinical implications. We incorporate two nongeneric yet dangerous attributes of CD into the gLV model, sporulation and antibiotic-resistant mutation, and for each identify relevant SIR techniques that describe the desired attribute. Finally, we rely on the results of our framework to analyze an experimental study of fecal transplants in mice, and are able to explain observed experimental results, validate our simulated results, and suggest model-motivated experiments. PMID:29451873
Jones, Eric W; Carlson, Jean M
2018-02-01
In this paper we study antibiotic-induced C. difficile infection (CDI), caused by the toxin-producing C. difficile (CD), and implement clinically-inspired simulated treatments in a computational framework that synthesizes a generalized Lotka-Volterra (gLV) model with SIR modeling techniques. The gLV model uses parameters derived from an experimental mouse model, in which the mice are administered antibiotics and subsequently dosed with CD. We numerically identify which of the experimentally measured initial conditions are vulnerable to CD colonization, then formalize the notion of CD susceptibility analytically. We simulate fecal transplantation, a clinically successful treatment for CDI, and discover that both the transplant timing and transplant donor are relevant to the efficacy of the treatment, a result which has clinical implications. We incorporate two nongeneric yet dangerous attributes of CD into the gLV model, sporulation and antibiotic-resistant mutation, and for each identify relevant SIR techniques that describe the desired attribute. Finally, we rely on the results of our framework to analyze an experimental study of fecal transplants in mice, and are able to explain observed experimental results, validate our simulated results, and suggest model-motivated experiments.
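The generalized Lotka-Volterra backbone of the framework above can be sketched in a few lines of numerics. The species count, growth rates, and interaction matrix below are illustrative placeholders, not the mouse-derived parameters used in the paper:

```python
import numpy as np

def glv_step(x, r, A, dt):
    """One forward-Euler step of the gLV system dx_i/dt = x_i*(r_i + sum_j A_ij x_j)."""
    return np.maximum(x + dt * x * (r + A @ x), 0.0)  # clamp at zero (extinction)

# Illustrative 3-species community: two commensals and a weakly competitive invader.
r = np.array([1.0, 0.8, 0.5])            # intrinsic growth rates
A = np.array([[-1.0,  0.1, -0.3],        # self-limitation on the diagonal,
              [ 0.1, -1.0, -0.2],        # pairwise interactions off-diagonal
              [-0.4, -0.5, -1.0]])
x = np.array([0.1, 0.1, 0.01])           # initial abundances

for _ in range(20000):                   # integrate to t = 20
    x = glv_step(x, r, A, dt=0.001)
print(np.round(x, 3))
```

With these made-up parameters the two commensals settle near a stable equilibrium while the invader is competitively excluded, the qualitative behavior the paper probes when classifying initial conditions as vulnerable or resistant to CD colonization.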
Fluid Merging Viscosity Measurement (FMVM) Experiment on the International Space Station
NASA Technical Reports Server (NTRS)
Antar, Basil N.; Ethridge, Edwin; Lehman, Daniel; Kaukler, William
2007-01-01
The concept of using low-gravity experimental data together with fluid dynamical numerical simulations for measuring the viscosity of highly viscous liquids was recently validated on the International Space Station (ISS). After testing the proof of concept for this method with parabolic flight experiments, an ISS experiment was proposed and later conducted onboard the ISS in July 2004 and subsequently in May 2005. In each experiment, a pair of liquid drops was brought together manually until they touched and were then allowed to merge under the action of capillary forces alone. The merging process was recorded visually in order to measure the contact radius speed as the merging proceeded. Several liquids were tested, and for each liquid several drop diameters were used. It has been shown that when the coefficient of surface tension for the liquid is known, the contact radius speed can determine the coefficient of viscosity for that liquid. The viscosity is determined by fitting the experimental speed to the theoretically calculated contact radius speed for the same experimental parameters. Experimental and numerical results will be presented in which the viscosities of different highly viscous liquids were determined, to a high degree of accuracy, using this technique.
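The measurement principle, inferring viscosity from how fast the contact neck between two merging drops grows, can be illustrated with a simplified early-time scaling for viscous coalescence, x(t) ≈ R·sqrt(σ·t/(η·R)). This is a textbook-style approximation used only to show the inversion, not the authors' full numerical model, and all numbers are illustrative:

```python
import numpy as np

def neck_radius(t, sigma, eta, R):
    """Simplified early-time viscous-coalescence scaling for the contact
    (neck) radius of two merging drops of radius R."""
    return R * np.sqrt(sigma * t / (eta * R))

# Forward problem with an assumed viscosity, then invert it from the synthetic
# "measurement": since x**2 = sigma*R*t/eta, x**2 is linear in t, and the
# slope of that line gives eta = sigma*R/slope.
sigma, R, eta_true = 0.021, 0.01, 50.0   # N/m, m, Pa*s (illustrative values)
t = np.linspace(0.01, 0.5, 50)
x = neck_radius(t, sigma, eta_true, R)

slope = np.linalg.lstsq(t[:, None], (x ** 2)[:, None], rcond=None)[0][0, 0]
eta_fit = sigma * R / slope
print(eta_fit)
```

The ISS experiment fits measured contact-radius speeds against full simulations rather than this closed-form scaling, but the inversion logic is the same: surface tension known, growth rate measured, viscosity recovered.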
A magneto-rheological fluid mount featuring squeeze mode: analysis and testing
NASA Astrophysics Data System (ADS)
Chen, Peng; Bai, Xian-Xu; Qian, Li-Jun; Choi, Seung-Bok
2016-05-01
This paper presents a mathematical model for a new semi-active vehicle engine mount utilizing magneto-rheological (MR) fluid in squeeze mode (MR mount, in short) and validates the model by comparing analysis results with experimental tests. The proposed MR mount is mainly comprised of a frame for installation, a main rubber, a squeeze plate and a bobbin for coil winding. When the magnetic field is on, the MR effect occurs in the upper gap between the squeeze plate and the bobbin, and the dynamic stiffness can be controlled by tuning the applied current. Employing the Bingham model and the flow properties of MR fluids between parallel plates, a mathematical model for the squeeze-mode MR mount is formulated with consideration of the fluid inertia, the MR effect and the hysteresis property. The field-dependent dynamic stiffness of the MR mount is then analyzed using the established mathematical model. Subsequently, in order to validate the mathematical model, an appropriately sized MR mount is fabricated and tested. The field-dependent force and dynamic stiffness of the proposed MR mount are evaluated and compared between the model and the experimental tests in both the time and frequency domains to verify the model's accuracy. In addition, it is shown that both the damping and stiffness properties of the proposed MR mount can be controlled simultaneously.
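The Bingham idealization at the heart of such models combines a field-dependent yield stress with a Newtonian viscous term. A minimal sketch, where the yield-stress-versus-current law and all coefficients are hypothetical placeholders rather than the paper's identified parameters:

```python
import math

def bingham_shear_stress(shear_rate, tau_y, eta):
    """Bingham-plastic constitutive law: tau = tau_y*sgn(gamma_dot) + eta*gamma_dot."""
    if shear_rate == 0.0:
        return 0.0  # below yield, the material behaves as a rigid solid
    return tau_y * math.copysign(1.0, shear_rate) + eta * shear_rate

def yield_stress(current_A, alpha=2.0e3):
    """Hypothetical monotonic coil-current -> yield-stress map, in Pa."""
    return alpha * current_A ** 1.5

# Trend behind the controllable dynamic stiffness: more coil current raises
# the yield stress, so the transmitted stress at a fixed shear rate grows.
for i in (0.0, 0.5, 1.0, 2.0):
    print(i, round(bingham_shear_stress(100.0, yield_stress(i), eta=0.2), 1))
```

The real mount model adds fluid inertia and hysteresis on top of this constitutive law, but the current-to-stiffness trend it produces is the one shown here.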
Finite element modeling of human brain response to football helmet impacts.
Darling, T; Muthuswamy, J; Rajan, S D
2016-10-01
The football helmet is used to help mitigate the occurrence of impact-related traumatic brain injuries (TBI) and mild traumatic brain injuries (mTBI) in the game of American football. While the current helmet design methodology may be adequate for reducing linear acceleration of the head and minimizing TBI, it has had less effect in minimizing mTBI. The objectives of this study are (a) to develop and validate a coupled finite element (FE) model of a football helmet and the human body, and (b) to assess the responses of different regions of the brain under two different impact conditions - frontal oblique and crown impacts. The FE helmet model was validated using experimental results of drop tests. Subsequently, the integrated helmet-human body FE model was used to assess the responses of different regions of the brain to impact loads. Strain-rate, strain, and stress measures in the corpus callosum, midbrain, and brain stem were assessed. Results show that maximum strain-rates of 27 and 19 s⁻¹ are observed in the brain stem and midbrain, respectively. This could potentially lead to axonal injuries and neuronal cell death during crown impact conditions. The developed experimental-numerical framework can be used in the study of other helmet-related impact conditions.
Predictive QSAR modeling workflow, model applicability domains, and virtual screening.
Tropsha, Alexander; Golbraikh, Alexander
2007-01-01
Quantitative Structure-Activity Relationship (QSAR) modeling has traditionally been applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting establishes it as a predictive, as opposed to evaluative, modeling approach.
Luo, Wen; Medrek, Sarah; Misra, Jatin; Nohynek, Gerhard J
2007-02-01
The objective of this study was to construct and validate a quantitative structure-activity relationship (QSAR) model for skin absorption. Such models are valuable tools for screening and prioritization in safety and efficacy evaluation, and for risk assessment of drugs and chemicals. A database of 340 chemicals with percutaneous absorption data was assembled. Two models were derived from the training set consisting of 306 chemicals (90/10 random split). In addition to the experimental K(ow) values, over 300 2D and 3D atomic and molecular descriptors were analyzed using MDL's QsarIS computer program. Subsequently, the models were validated using both internal (leave-one-out) and external (test set) validation procedures. Using stepwise regression analysis, three molecular descriptors were determined to have a significant statistical correlation with K(p) (R2 = 0.8225): logK(ow), X0 (quantifying both molecular size and the degree of skeletal branching), and SsssCH (count of aromatic carbon groups). In conclusion, two models to estimate skin absorption were developed. Compared to other skin absorption QSAR models in the literature, our model incorporated more chemicals and explored a larger number of descriptors. Additionally, our models are reasonably predictive and have met both internal and external statistical validation.
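The reported model has the form of a three-descriptor linear regression, log Kp = a·logKow + b·X0 + c·SsssCH + d. The fitting step can be sketched on synthetic data; the coefficients, descriptor ranges, and noise level below are made up for illustration, since the paper's actual regression was fit to its 306-chemical training set:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 306
# Synthetic descriptor matrix: [logKow, X0, SsssCH] plus an intercept column.
X = np.column_stack([rng.uniform(-2, 6, n),      # logKow
                     rng.uniform(2, 20, n),      # X0 (size/branching index)
                     rng.integers(0, 8, n),      # SsssCH-style count
                     np.ones(n)])
coef_true = np.array([0.6, -0.15, -0.1, -2.5])   # made-up coefficients
y = X @ coef_true + rng.normal(0.0, 0.05, n)     # "log Kp" with small noise

# Ordinary least squares, plus the coefficient of determination R^2.
coef_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
ss_res = np.sum((y - X @ coef_fit) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(np.round(coef_fit, 2), round(r2, 3))
```

Stepwise descriptor selection and the internal/external validation described in the abstract sit on top of exactly this kind of least-squares fit.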
Taffarel, Marilda Onghero; Luna, Stelio Pacca Loureiro; de Oliveira, Flavia Augusta; Cardoso, Guilherme Schiess; Alonso, Juliana de Moura; Pantoja, Jose Carlos; Brondani, Juliana Tabarelli; Love, Emma; Taylor, Polly; White, Kate; Murrell, Joanna C
2015-04-01
Quantification of pain plays a vital role in the diagnosis and management of pain in animals. In order to refine and validate an acute pain scale for horses, a prospective, randomized, blinded study was conducted. Twenty-four client-owned adult horses were recruited and allocated to one of the following four groups: anaesthesia only (GA); pre-emptive analgesia and anaesthesia (GAA); anaesthesia, castration and postoperative analgesia (GC); or pre-emptive analgesia, anaesthesia and castration (GCA). One investigator, unaware of the treatment group, assessed all horses at time-points before and after intervention and completed the pain scale. Videos were also obtained at these time-points and were evaluated by a further four blinded evaluators, who also completed the scale. The data were used to investigate the relevance, specificity, criterion validity and inter- and intra-observer reliability of each item on the pain scale, and to evaluate the construct validity and responsiveness of the scale. Construct validity was demonstrated by the observed differences in scores between the groups, four hours after anaesthetic recovery and before administration of systemic analgesia in the GC group. Inter- and intra-observer reliability for the items was only satisfactory. Subsequently, the pain scale was refined based on the results for relevance, specificity and total item correlation. Scale refinement and exclusion of items that did not meet predefined requirements generated a selection of relevant pain behaviours in horses. After further validation for reliability, these may be used to evaluate pain under clinical and experimental conditions.
Three-dimensional numerical and experimental studies on transient ignition of hybrid rocket motor
NASA Astrophysics Data System (ADS)
Tian, Hui; Yu, Ruipeng; Zhu, Hao; Wu, Junfeng; Cai, Guobiao
2017-11-01
This paper presents transient simulations and experimental studies of the ignition process of hybrid rocket motors (HRMs) using 90% hydrogen peroxide (HP) as the oxidizer and polymethyl methacrylate (PMMA) and polyethylene (PE) as fuels. A fluid-solid coupled numerical method is established based on the conserved form of the three-dimensional unsteady Navier-Stokes (N-S) equations, accounting for gas-phase chemical reactions and heat transfer between the fluid and solid regions. Experiments are subsequently conducted using a high-speed camera to record the ignition process. The flame propagation, chamber pressurization process and average fuel regression rate of the numerical simulation results show good agreement with the experimental ones, which demonstrates the validity of the simulations in this study. The results also indicate that the flame propagation time is mainly governed by the fluid dynamics and increases with an increasing grain port area. The chamber pressurization process begins when flame propagation is complete in the grain port. Furthermore, the chamber pressurization time is about 4 times longer than the flame propagation time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sileghem, L.; Wallner, T.; Verhelst, S.
As knock is one of the main factors limiting the efficiency of spark-ignition engines, the introduction of alcohol blends could help to mitigate knock concerns due to the elevated knock resistance of these blends. A model that can accurately predict their autoignition behavior would be of great value to engine designers. The current work aims to develop such a model for alcohol–gasoline blends. First, a mixing rule for the autoignition delay time of alcohol–gasoline blends is proposed. Subsequently, this mixing rule is used together with an autoignition delay time correlation of gasoline and an autoignition delay time correlation of methanol in a knock integral model that is implemented in a two-zone engine code. The predictive performance of the resulting model is validated through comparison against experimental measurements on a CFR engine for a range of gasoline–methanol blends. The knock limited spark advance, the knock intensity, the knock onset crank angle and the value of the knock integral at the experimental knock onset have been simulated and compared to the experimental values derived from in-cylinder pressure measurements.
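The knock integral (Livengood-Wu) approach referenced above predicts knock onset when the accumulated reciprocal ignition delay reaches unity. A minimal sketch with a hypothetical Arrhenius-style delay correlation; the coefficients and the toy compression trace are placeholders, not the paper's gasoline/methanol correlations or its mixing rule:

```python
import math

def ignition_delay(p_bar, T_K, A=0.01, n=1.7, B=3800.0):
    """Hypothetical Arrhenius-type autoignition delay tau = A * p^-n * exp(B/T), in s."""
    return A * p_bar ** (-n) * math.exp(B / T_K)

def knock_onset(pressure_trace, temperature_trace, dt):
    """Livengood-Wu integral: knock is predicted at the first step where the
    running integral of dt/tau reaches 1; returns (step index or None, integral)."""
    ki = 0.0
    for i, (p, T) in enumerate(zip(pressure_trace, temperature_trace)):
        ki += dt / ignition_delay(p, T)
        if ki >= 1.0:
            return i, ki
    return None, ki

# Toy end-gas trace: pressure and temperature rising together during compression.
steps = 400
p = [1.0 + 40.0 * i / steps for i in range(steps)]
T = [600.0 + 400.0 * i / steps for i in range(steps)]
idx, ki = knock_onset(p, T, dt=1e-4)
print(idx, round(ki, 3))
```

In the paper's model, the single-fuel delay correlation used here would be replaced by the blend delay produced by the proposed mixing rule, while the integral criterion stays the same.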
Ng, Candy K S; Osuna-Sanchez, Hector; Valéry, Eric; Sørensen, Eva; Bracewell, Daniel G
2012-06-15
An integrated experimental and modeling approach for the design of high-productivity protein A chromatography is presented to maximize productivity in bioproduct manufacture. The approach consists of four steps: (1) small-scale experimentation, (2) model parameter estimation, (3) productivity optimization and (4) model validation with process verification. The integrated use of process experimentation and modeling enables fewer experiments to be performed, and thus minimizes the time and materials required to gain process understanding, which is of key importance during process development. The application of the approach is demonstrated for the capture of antibody by a novel silica-based high-performance protein A adsorbent named AbSolute. In the example, a series of pulse injections and breakthrough experiments were performed to develop a lumped parameter model, which was then used to find the design that optimizes the productivity of a batch protein A chromatographic process for human IgG capture. An optimum productivity of 2.9 kg L⁻¹ day⁻¹ for a column of 5 mm diameter and 8.5 cm length was predicted, and subsequently verified experimentally, completing the whole process design approach in only 75 person-hours (approximately 2 weeks).
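The productivity objective being optimized, mass of product captured per column volume per unit cycle time, can be sketched with a toy saturating-capture surrogate. The capacity, uptake rate, and fixed cycle overhead below are invented for illustration and are not the fitted lumped-parameter model of the AbSolute adsorbent:

```python
import numpy as np

def productivity(t_load_h, q_max=50.0, k=1.0, t_fixed_h=1.5):
    """Toy cycle productivity in g per L of column per hour.

    Captured mass follows a saturating surrogate q_max*(1 - exp(-k*t_load));
    the rest of the cycle (wash/elute/regenerate) costs a fixed t_fixed_h.
    """
    captured_g_per_L = q_max * (1.0 - np.exp(-k * t_load_h))
    return captured_g_per_L / (t_load_h + t_fixed_h)

# The tradeoff: loading longer captures more product per cycle but dilutes the
# fixed overhead less and less, so productivity peaks at a finite load time.
t = np.linspace(0.05, 6.0, 500)
p = productivity(t)
t_opt = t[np.argmax(p)]
print(round(t_opt, 2), round(p.max(), 2))
```

Replacing the exponential surrogate with a calibrated breakthrough model turns this one-line scan into the paper's step (3), productivity optimization.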
Demonstration of full 4×4 Mueller polarimetry through an optical fiber for endoscopic applications.
Manhas, Sandeep; Vizet, Jérémy; Deby, Stanislas; Vanel, Jean-Charles; Boito, Paola; Verdier, Mireille; De Martino, Antonello; Pagnoux, Dominique
2015-02-09
A novel technique to measure the full 4 × 4 Mueller matrix of a sample through an optical fiber is proposed, opening the way for endoscopic applications of Mueller polarimetry for biomedical diagnosis. The technique is based on two successive Mueller matrix measurements: one for characterizing the fiber only, and another for the assembly of fiber and sample. From this differential measurement, we proved theoretically that the polarimetric properties of the sample can be deduced. The proof of principle was experimentally validated by measuring various polarimetric parameters of known optical components. Images of manufactured and biological samples acquired by using this approach are also presented.
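The differential idea above can be illustrated numerically. The sketch below is a deliberately simplified single-pass picture (the paper's endoscopic configuration, where light traverses the fiber twice, is more involved): model the fiber and sample as linear retarders, "measure" the fiber alone and then the fiber-plus-sample cascade, and recover the sample matrix by right-multiplying with the fiber's inverse. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def mueller_retarder(delta):
    """Mueller matrix of an ideal linear retarder with retardance delta
    (rad) and horizontal fast axis; invertible, unlike a polarizer."""
    c, s = np.cos(delta), np.sin(delta)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0,   c,   s],
                     [0.0, 0.0,  -s,   c]])

# "Fiber" and "sample" with arbitrary retardances (assumed values).
M_fiber = mueller_retarder(0.7)          # first measurement: fiber only
M_sample = mueller_retarder(1.2)         # unknown to be recovered
M_total = M_sample @ M_fiber             # second measurement: fiber + sample

# Differential step: strip the fiber's contribution.
M_recovered = M_total @ np.linalg.inv(M_fiber)
```

The recovery is exact here because both matrices are non-singular; real fibers add depolarization and bending-dependent birefringence, which is why the fiber must be re-characterized for each measurement.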
Production of extra quarks decaying to dark matter beyond the narrow width approximation at the LHC
NASA Astrophysics Data System (ADS)
Moretti, Stefano; O'Brien, Dermot; Panizzi, Luca; Prager, Hugo
2017-08-01
This paper explores the effects of finite width in processes of pair production of an extra heavy quark with charge 2/3 (top partner) and its subsequent decay into a bosonic dark matter (DM) candidate—either scalar or vector—and SM up-type quarks at the Large Hadron Collider (LHC). This dynamics has been ignored so far in standard experimental searches of heavy quarks decaying to DM and we assess herein the regions of validity of current approaches, based on the assumption that the extra quarks have a narrow width. Further, we discuss the configurations of masses, widths and couplings where the latter breaks down.
Role of metabolism and viruses in aflatoxin-induced liver cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groopman, John D.; Kensler, Thomas W.
The use of biomarkers in molecular epidemiology studies for identifying stages in the progression of development of the health effects of environmental agents has the potential for providing important information for critical regulatory, clinical and public health problems. Investigations of aflatoxins probably represent one of the most extensive data sets in the field and this work may serve as a template for future studies of other environmental agents. The aflatoxins are naturally occurring mycotoxins found on foods such as corn, peanuts, various other nuts and cottonseed, and they have been demonstrated to be carcinogenic in many experimental models. As a result of nearly 30 years of study, experimental data and epidemiological studies in human populations, aflatoxin B1 was classified as carcinogenic to humans by the International Agency for Research on Cancer. The long-term goal of the research described herein is the application of biomarkers to the development of preventative interventions for use in human populations at high risk for cancer. Several of the aflatoxin-specific biomarkers have been validated in epidemiological studies and are now being used as intermediate biomarkers in prevention studies. The development of these aflatoxin biomarkers has been based upon the knowledge of the biochemistry and toxicology of aflatoxins gleaned from both experimental and human studies. These biomarkers have subsequently been utilized in experimental models to provide data on the modulation of these markers under different situations of disease risk. This systematic approach provides encouragement for preventive interventions and should serve as a template for the development, validation and application of other chemical-specific biomarkers to cancer or other chronic diseases.
Validation of the FEA of a deep drawing process with additional force transmission
NASA Astrophysics Data System (ADS)
Behrens, B.-A.; Bouguecha, A.; Bonk, C.; Grbic, N.; Vucetic, M.
2017-10-01
In order to meet automotive industry requirements such as reduced CO2 emissions, which translate into lower vehicle mass in the car body, chassis and powertrain, continuous innovation and further development of existing production processes are required. In sheet metal forming processes, the process limits and component characteristics are defined by the process-specific loads. When the load limits are exceeded, failure occurs in the material; this can be avoided by an additional force transmission activated in the deep drawing process before the process limit is reached. This contribution deals with experimental investigations of a forming process with additional force transmission with regard to the extension of the process limits. Based on FEA, a tool system is designed and developed by IFUM. For this purpose, the steel material HCT600 is analyzed numerically. Within the experimental investigations, deep drawing processes with and without the additional force transmission are carried out, and the produced rectangular cups are compared. Subsequently, the identical deep drawing processes are investigated numerically. Thereby, the values of the punch reaction force and displacement are estimated and compared with experimental results; thus, the validation of the material model is successfully carried out on the process scale. For further quantitative verification of the FEA results, the experimentally determined geometry of the rectangular cup is measured optically with the ATOS system of the company GOM mbH and digitally compared using the external software Geomagic® Qualify™. The goal of this paper is to verify the transferability of the FEA model for a conventional deep drawing process to a deep drawing process with additional force transmission with a counter punch.
Hemmerich, Joshua A; Elstein, Arthur S; Schwarze, Margaret L; Moliski, Elizabeth Ghini; Dale, William
2012-07-01
The present study tested predictions derived from the Risk as Feelings hypothesis about the effects of prior patients' negative treatment outcomes on physicians' subsequent treatment decisions. Two experiments at The University of Chicago, U.S.A., utilized a computer simulation of an abdominal aortic aneurysm (AAA) patient with enhanced realism to present participants with one of three experimental conditions: AAA rupture causing a watchful waiting death (WWD), perioperative death (PD), or a successful operation (SO), as well as the statistical treatment guidelines for AAA. Experiment 1 tested effects of these simulated outcomes on (n = 76) laboratory participants' (university student sample) self-reported emotions, and their ratings of valence and arousal of the AAA rupture simulation and other emotion-inducing picture stimuli. Experiment 2 tested two hypotheses: 1) that experiencing a patient WWD in the practice trial's experimental condition would lead physicians to choose surgery earlier, and 2) experiencing a patient PD would lead physicians to choose surgery later with the next patient. Experiment 2 presented (n = 132) physicians (surgeons and geriatricians) with the same experimental manipulation and a second simulated AAA patient. Physicians then chose to either go to surgery or continue watchful waiting. The results of Experiment 1 demonstrated that the WWD experimental condition significantly increased anxiety, and was rated similarly to other negative and arousing pictures. The results of Experiment 2 demonstrated that, after controlling for demographics, baseline anxiety, intolerance for uncertainty, risk attitudes, and the influence of simulation characteristics, the WWD experimental condition significantly expedited decisions to choose surgery for the next patient. The results support the Risk as Feelings hypothesis on physicians' treatment decisions in a realistic AAA patient computer simulation. 
Bad outcomes affected emotions and decisions, even with statistical AAA rupture risk guidance present. These results suggest that bad patient outcomes cause physicians to experience anxiety and regret that influences their subsequent treatment decision-making for the next patient. Copyright © 2012 Elsevier Ltd. All rights reserved.
Automatic three-dimensional registration of intravascular optical coherence tomography images
NASA Astrophysics Data System (ADS)
Ughi, Giovanni J.; Adriaenssens, Tom; Larsson, Matilda; Dubois, Christophe; Sinnaeve, Peter R.; Coosemans, Mark; Desmet, Walter; D'hooge, Jan
2012-02-01
Intravascular optical coherence tomography (IV-OCT) is a catheter-based high-resolution imaging technique able to visualize the inner wall of the coronary arteries and implanted devices in vivo with an axial resolution below 20 μm. IV-OCT is being used in several clinical trials aiming to quantify the vessel response to stent implantation over time. However, stent analysis is currently performed manually and corresponding images taken at different time points are matched through a very labor-intensive and subjective procedure. We present an automated method for the spatial registration of IV-OCT datasets. Stent struts are segmented through consecutive images and three-dimensional models of the stents are created for both datasets to be registered. The two models are initially roughly registered through an automatic initialization procedure and an iterative closest point algorithm is subsequently applied for a more precise registration. To correct for nonuniform rotational distortions (NURDs) and other potential acquisition artifacts, the registration is consecutively refined on a local level. The algorithm was first validated by using an in vitro experimental setup based on a polyvinyl-alcohol gel tubular phantom. Subsequently, an in vivo validation was obtained by exploiting stable vessel landmarks. The mean registration error in vitro was quantified to be 0.14 mm in the longitudinal axis and 7.3-deg mean rotation error. In vivo validation resulted in 0.23 mm in the longitudinal axis and 10.1-deg rotation error. These results indicate that the proposed methodology can be used for automatic registration of in vivo IV-OCT datasets. Such a tool will be indispensable for larger studies on vessel healing pathophysiology and reaction to stent implantation. As such, it will be valuable in testing the performance of new generations of intracoronary devices and new therapeutic drugs.
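The core of the registration pipeline described above, iterative closest point refinement after a rough initialization, can be sketched in a few lines. This is a generic point-to-point ICP with brute-force nearest neighbours and an SVD (Kabsch) rigid fit, not the paper's stent-strut segmentation or its local NURD correction; all names are illustrative.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    """Minimal iterative closest point: match each point of P to its
    nearest neighbour in Q, fit a rigid transform, apply, repeat."""
    for _ in range(iters):
        d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        matched = Q[d.argmin(1)]
        R, t = best_rigid_transform(P, matched)
        P = P @ R.T + t
    return P
```

In the paper's setting, P and Q would be the 3D stent-strut models of the two IV-OCT pullbacks; the automatic initialization keeps the nearest-neighbour matching from falling into a wrong local minimum.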
Stable Eutectoid Transformation in Nodular Cast Iron: Modeling and Validation
NASA Astrophysics Data System (ADS)
Carazo, Fernando D.; Dardati, Patricia M.; Celentano, Diego J.; Godoy, Luis A.
2017-01-01
This paper presents a new microstructural model of the stable eutectoid transformation in a spheroidal cast iron. The model takes into account the nucleation and growth of ferrite grains and the growth of graphite spheroids. Different laws are assumed for the growth of both phases during and below the intercritical stable eutectoid. At a microstructural level, the initial conditions for the phase transformations are obtained from the microstructural simulation of solidification of the material, which considers the divorced eutectic and the subsequent growth of graphite spheroids up to the initiation of the stable eutectoid transformation. The temperature field is obtained by solving the energy equation by means of finite elements. The microstructural (phase change) and macrostructural (energy balance) models are coupled by a sequential multiscale procedure. Experimental validation of the model is achieved by comparison with measured values of phase fractions and radii of ferrite grains in 2D views. Agreement with such experiments indicates that the present model is capable of predicting ferrite phase fraction and grain size with reasonable accuracy.
A validation study of the simulation software gprMax by varying antenna stand-off height
NASA Astrophysics Data System (ADS)
Wilkinson, Josh; Davidson, Nigel
2018-04-01
The design and subsequent testing of suitable antennas and of complete ground-penetrating radar (GPR) systems can be both time consuming and expensive, with the need to understand the performance of a system in realistic environments of great importance to the end user. Through the use of suitably validated simulations, these costs could be significantly reduced, allowing an economical capability to be built which can accurately predict the performance of novel GPR antennas and existing commercial-off-the-shelf (COTS) systems in a user-defined environment. This paper focuses on a preliminary validation of the open-source software gprMax, which features the ability to custom define antennas, targets, clutter objects and realistic heterogeneous soils. As an initial step in the assessment of the software, a comparison of the modelled response of targets buried in sand to experimental data has been undertaken, with the variation in response with antenna stand-off height investigated. This was conducted for both a simple bespoke bow-tie antenna design as well as for a Geophysical Survey Systems, Inc. (GSSI) commercial system, building upon previous work which explored the fidelity of gprMax in reproducing the S11 of simple antenna designs.
Modeling Adhesive Anchors in a Discrete Element Framework
Marcon, Marco; Vorel, Jan; Ninčević, Krešimir; Wan-Wendner, Roman
2017-01-01
In recent years, post-installed anchors have become widely used to connect structural members and to fix appliances to load-bearing elements. A bonded anchor typically denotes a threaded bar placed into a borehole filled with adhesive mortar. The high complexity of the problem, owing to the multiple materials and failure mechanisms involved, requires numerical support for the experimental investigation. A reliable model able to reproduce a system's short-term behavior is needed before the development of a more complex framework for the subsequent investigation of the lifetime of fasteners subjected to various deterioration processes can commence. The focus of this contribution is the development and validation of such a model for bonded anchors under pure tension load. Compression, modulus, fracture and splitting tests are performed on standard concrete specimens. These serve for the calibration and validation of the concrete constitutive model. The behavior of the adhesive mortar layer is modeled with a stress-slip law, calibrated on a set of confined pull-out tests. The model validation is performed on tests with different configurations comparing load-displacement curves, crack patterns and concrete cone shapes. A model sensitivity analysis and the evaluation of the bond stress and slippage along the anchor complete the study. PMID:28786964
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Y; Rottmann, J; Myronakis, M
2016-06-15
Purpose: The purpose of this study was to validate the use of a cascaded linear system model for MV cone-beam CT (CBCT) using a multi-layer (MLI) electronic portal imaging device (EPID) and provide experimental insight into image formation. A validated 3D model provides insight into salient factors affecting reconstructed image quality, allowing potential for optimizing detector design for CBCT applications. Methods: A cascaded linear system model was developed to investigate the potential improvement in reconstructed image quality for MV CBCT using an MLI EPID. Inputs to the three-dimensional (3D) model include projection space MTF and NPS. Experimental validation was performed on a prototype MLI detector installed on the portal imaging arm of a Varian TrueBeam radiotherapy system. CBCT scans of up to 898 projections over 360 degrees were acquired at exposures of 16 and 64 MU. Image volumes were reconstructed using a Feldkamp-type (FDK) filtered backprojection (FBP) algorithm. Flat field images and scans of a Catphan model 604 phantom were acquired. The effect of 2×2 and 4×4 detector binning was also examined. Results: Using projection flat fields as an input, examination of the modeled and measured NPS in the axial plane exhibits good agreement. Binning projection images was shown to improve axial slice SDNR by a factor of approximately 1.4. This improvement is largely driven by a decrease in image noise of roughly 20%. However, this effect is accompanied by a subsequent loss in image resolution. Conclusion: The measured axial NPS shows good agreement with the theoretical calculation using a linear system model. Binning of projection images improves SNR of large objects on the Catphan phantom by decreasing noise. Specific imaging tasks will dictate the implementation of image binning for two-dimensional projection images. The project was partially supported by a grant from Varian Medical Systems, Inc. and grant No. R01CA188446-01 from the National Cancer Institute.
Using Numerical Modeling to Simulate Space Capsule Ground Landings
NASA Technical Reports Server (NTRS)
Heymsfield, Ernie; Fasanella, Edwin L.
2009-01-01
Experimental work is being conducted at the National Aeronautics and Space Administration's (NASA) Langley Research Center (LaRC) to investigate ground landing capabilities of the Orion crew exploration vehicle (CEV). The Orion capsule is NASA's replacement for the Space Shuttle. The Orion capsule will service the International Space Station and be used for future space missions to the Moon and to Mars. To evaluate the feasibility of Orion ground landings, a series of capsule impact tests are being performed at the NASA Langley Landing and Impact Research Facility (LandIR). The experimental results derived at LandIR provide means to validate and calibrate nonlinear dynamic finite element models, which are also being developed during this study. Because of the high cost and time involvement intrinsic to full-scale testing, numerical simulations are favored over experimental work. Once a numerical model has been validated against actual test responses, impact simulations will be conducted to study multiple impact scenarios not practical to test. Twenty-one swing tests using the LandIR gantry were conducted between June 2007 and October 2007 to evaluate the Orion's impact response. Results for two capsule initial pitch angles, 0 deg and -15 deg, along with their computer simulations using LS-DYNA are presented in this article. A soil-vehicle friction coefficient of 0.45 was determined by comparing the test stopping distance with computer simulations. In addition, soil modeling accuracy is presented by comparing vertical penetrometer impact tests with computer simulations for the soil model used during the swing tests.
Galle, J; Hoffmann, M; Aust, G
2009-01-01
Collective phenomena in multi-cellular assemblies can be approached on different levels of complexity. Here, we discuss a number of mathematical models which consider the dynamics of each individual cell, so-called agent-based or individual-based models (IBMs). As a special feature, these models make it possible to account for intracellular decision processes which are triggered by biomechanical cell-cell or cell-matrix interactions. We discuss their impact on the growth and homeostasis of multi-cellular systems as simulated by lattice-free models. Our results demonstrate that cell polarisation subsequent to cell-cell contact formation can be a source of stability in epithelial monolayers. Stroma contact-dependent regulation of tumour cell proliferation and migration is shown to result in invasion dynamics in accordance with the migrating cancer stem cell hypothesis. However, we demonstrate that different regulation mechanisms can equally well comply with present experimental results. Thus, we suggest a panel of experimental studies for the in-depth validation of the model assumptions.
Mozumder, Md Salatul Islam; Garcia-Gonzalez, Linsey; De Wever, Heleen; Volcke, Eveline I P
2015-09-01
This study evaluates the effect of sodium (Na⁺) concentration on growth and PHB production by Cupriavidus necator. Both biomass growth and PHB production were inhibited by Na⁺: biomass growth ceased at a Na⁺ concentration of 8.9 g/L, while PHB production stopped completely at 10.5 g/L Na⁺. A mathematical model for pure culture heterotrophic PHB production was set up to describe the Na⁺ inhibition effect. The parameters related to Na⁺ inhibition were estimated based on shake flask experiments. The accumulated Na⁺ showed a non-linear inhibition effect on biomass growth but a linear inhibition effect on PHB production kinetics. Fed-batch experiments revealed that a high accumulation of Na⁺ due to a prolonged growth phase, using NaOH for pH control, decreased the subsequent PHB production. The model was validated based on independent experimental data sets, showing a good agreement between experimental data and simulation results. Copyright © 2015 Elsevier Ltd. All rights reserved.
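The two inhibition terms described above (non-linear for growth, linear for PHB synthesis, each vanishing at its reported critical Na⁺ concentration) can be sketched as simple rate-modifier functions. The functional forms and the exponent below are assumptions for illustration; only the cut-off concentrations (8.9 and 10.5 g/L) come from the abstract.

```python
def growth_rate(mu_max, na, na_max=8.9, n=2.0):
    """Hypothetical non-linear Na+ inhibition of biomass growth;
    the rate falls to zero at na_max g/L (exponent n is an assumption)."""
    if na >= na_max:
        return 0.0
    return mu_max * (1.0 - (na / na_max) ** n)

def phb_rate(q_max, na, na_max=10.5):
    """Linear Na+ inhibition of PHB production, zero at na_max g/L."""
    return max(0.0, q_max * (1.0 - na / na_max))
```

In a fed-batch simulation these modifiers would multiply the uninhibited Monod-type growth and production kinetics while Na⁺ accumulates from NaOH dosing.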
Scattered acoustic field above a grating of parallel rectangular cavities
NASA Astrophysics Data System (ADS)
Khanfir, A.; Faiz, A.; Ducourneau, J.; Chatillon, J.; Skali Lami, S.
2013-02-01
The aim of this research project was to predict the sound pressure above a wall facing composed of N parallel rectangular cavities. The diffracted acoustic field is processed by generalizing the Kobayashi Potential (KP) method used for determining the electromagnetic field diffracted by a rectangular cavity set in a thick screen. This model enables the diffracted field to be expressed in modal form. Modal amplitudes are subsequently calculated using matrix equations obtained by enforcing boundary conditions. Solving these equations allows the determination of the total reflected acoustic field above the wall facing. This model was compared with experimental results obtained in a semi-anechoic room for a single cavity, a periodic array of three rectangular cavities and an aperiodic grating of nine rectangular cavities of different size and spacing. These facings were insonified by an incident spherical acoustic field, which was decomposed into plane waves. The validity of this model is supported by the agreement between the numerical and experimental results observed.
Thirunathan, Praveena; Arnz, Patrik; Husny, Joeska; Gianfrancesco, Alessandro; Perdana, Jimmy
2018-03-01
Accurate description of moisture diffusivity is key to precisely understanding and predicting moisture transfer behaviour in a matrix. Unfortunately, measuring moisture diffusivity is not trivial, especially at low moisture values and/or elevated temperatures. This paper presents a novel experimental procedure to accurately measure moisture diffusivity based on a thermogravimetric approach. The procedure is capable of measuring diffusivity even at elevated temperatures (>70°C) and low moisture values (>1%). Diffusivity was extracted from experimental data based on the "regular regime approach". The approach was tailored to determine diffusivity from thin films and from poly-disperse powdered samples. Subsequently, the measured diffusivity was validated by comparison with available literature data, showing good agreement. The ability of this approach to accurately measure diffusivity over a wider range of temperatures provides better insight into the temperature dependency of diffusivity. Thus, this approach can be crucial to ensure good accuracy of moisture transfer description/prediction, especially when elevated temperatures are involved. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cai, Jun; Wang, Kuaishe; Shi, Jiamin; Wang, Wen; Liu, Yingying
2018-01-01
Constitutive analysis for hot working of BFe10-1-2 alloy was carried out using experimental stress-strain data from isothermal hot compression tests, over a wide temperature range of 1,023–1,273 K and a strain rate range of 0.001–10 s⁻¹. A constitutive equation based on modified double multiple nonlinear regression was proposed, considering the independent effects of strain, strain rate and temperature as well as their interrelation. The predicted flow stress data calculated from the developed equation were compared with the experimental data. The correlation coefficient (R), average absolute relative error (AARE) and relative errors were introduced to verify the validity of the developed constitutive equation. Subsequently, a comparative study was made on the capability of a strain-compensated Arrhenius-type constitutive model. The results showed that the developed constitutive equation based on modified double multiple nonlinear regression could predict the flow stress of BFe10-1-2 alloy with good correlation and generalization.
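For the Arrhenius-type benchmark mentioned above, the flow stress is commonly recovered from the Zener-Hollomon parameter Z = ε̇·exp(Q/RT) by inverting the hyperbolic-sine law Z = A·[sinh(ασ)]ⁿ. The sketch below implements that standard inversion; the material constants passed in are placeholders, not the fitted values for BFe10-1-2.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def flow_stress(strain_rate, T, Q, A, alpha, n):
    """Arrhenius-type (hyperbolic sine) flow stress from the
    Zener-Hollomon parameter; Q, A, alpha, n are material constants.
    Uses sigma = (1/alpha) * asinh((Z/A)**(1/n))."""
    Z = strain_rate * math.exp(Q / (R * T))
    x = (Z / A) ** (1.0 / n)
    return (1.0 / alpha) * math.log(x + math.sqrt(x * x + 1.0))
```

In strain-compensated form, Q, A, alpha and n would each be polynomial functions of strain rather than constants.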
NASA Astrophysics Data System (ADS)
Pandey, Saurabh; Majhi, Somanath; Ghorai, Prasenjit
2017-07-01
In this paper, the conventional relay feedback test has been modified for modelling and identification of a class of real-time dynamical systems in terms of linear transfer function models with time delay. An ideal relay and the unknown system are connected through a negative feedback loop to induce sustained oscillations in the output around a non-zero setpoint. Thereafter, the obtained limit cycle information is substituted into the derived mathematical equations for accurate identification of unknown plants in terms of overdamped, underdamped and critically damped second-order plus dead time and stable first-order plus dead time transfer function models. Typical examples from the literature are included for the validation of the proposed identification scheme through computer simulations. Subsequently, comparisons between the estimated model and the true system are drawn through the integral absolute error criterion and frequency response plots. Finally, the output responses obtained through simulations are verified experimentally on a real-time liquid level control system using a Yokogawa Distributed Control System CENTUM CS3000 set-up.
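A classical first step in relay-based identification, which the modified test above builds on, is the describing-function approximation: from the relay amplitude and the measured limit cycle one obtains a point on the plant's frequency response. The fragment below shows only that textbook step, not the paper's derived equations; names are illustrative.

```python
import math

def relay_estimates(h, a, Pu):
    """Describing-function approximation for an ideal relay test:
    h  - relay amplitude,
    a  - amplitude of the output limit cycle,
    Pu - limit-cycle (ultimate) period.
    Returns the estimated ultimate gain Ku and ultimate frequency wu,
    i.e. |G(j*wu)| = 1/Ku with phase -180 degrees."""
    Ku = 4.0 * h / (math.pi * a)
    wu = 2.0 * math.pi / Pu
    return Ku, wu
```

A transfer function model (e.g. first-order plus dead time) can then be fitted so that its frequency response passes through the identified point.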
NASA Technical Reports Server (NTRS)
Wayner, P. C., Jr.; Plawsky, J. L.; Wong, Harris
2004-01-01
The major accomplishments of the experimental portion of the research were documented in Ling Zheng's doctoral dissertation. Using pentane, he obtained a considerable amount of data on the stability and heat transfer characteristics of an evaporating meniscus. The important point is that experimental equipment to obtain data on the stability and heat transfer characteristics of an evaporating meniscus was built and successfully operated. The data and subsequent analyses were accepted by the Journal of Heat Transfer for publication in 2004 [PU4]. The work was continued by a new graduate student using HFE-7000 [PU3] and then pentane at lower heat fluxes. The pentane results are being analyzed for publication. The experimental techniques are currently being used in our other NASA grant. The oscillation of the contact line observed in the experiments involves evaporation (retraction part) and spreading. Since both processes occur with finite contact angles, it is important to derive a precise equation of the intermolecular forces (disjoining pressure) valid for non-zero contact angles. This theoretical derivation was accepted for publication by the Journal of Fluid Mechanics [PU5]. The evaporation process near the contact line is complicated, and an idealized micro heat pipe has been proposed to help in elucidating the detailed evaporation process [manuscripts in preparation].
NASA Astrophysics Data System (ADS)
Ottewill, J. R.; Ruszczyk, A.; Broda, D.
2017-02-01
Time-varying transmission paths and inaccessibility can increase the difficulty in both acquiring and processing vibration signals for the purpose of monitoring epicyclic gearboxes. Recent work has shown that the synchronous signal averaging approach may be applied to measured motor currents in order to diagnose tooth faults in parallel shaft gearboxes. In this paper we further develop the approach, so that it may also be applied to monitor tooth faults in epicyclic gearboxes. A low-degree-of-freedom model of an epicyclic gearbox which incorporates the possibility of simulating tooth faults, as well as any subsequent tooth contact loss due to these faults, is introduced. By combining this model with a simple space-phasor model of an induction motor it is possible to show that, in theory, tooth faults in epicyclic gearboxes may be identified from motor currents. Applying the synchronous averaging approach to experimentally recorded motor currents and angular displacements recorded from a shaft-mounted encoder validates this finding. Comparison between experiments and theory highlights the influence of operating conditions, backlash and shaft couplings on the transient response excited in the currents by the tooth fault. The results obtained suggest that the method may be a viable alternative or complement to more traditional methods for monitoring gearboxes. However, general observations also indicate that further investigations into the sensitivity and robustness of the method would be beneficial.
Design validation and performance of closed loop gas recirculation system
NASA Astrophysics Data System (ADS)
Kalmani, S. D.; Joshi, A. V.; Majumder, G.; Mondal, N. K.; Shinde, R. R.
2016-11-01
A pilot experimental set-up of the India-based Neutrino Observatory's ICAL detector has been operational for the last 4 years at TIFR, Mumbai. Twelve glass RPC detectors of size 2 × 2 m², with a gas gap of 2 mm, are under test in a closed loop gas recirculation system. These RPCs are continuously purged individually, with a gas mixture of R134a (C2H2F4), isobutane (iC4H10) and sulphur hexafluoride (SF6) at a steady rate of 360 ml/h to maintain about one volume change a day. To economize gas mixture consumption and to reduce the effluents released into the atmosphere, a closed loop system has been designed, fabricated and installed at TIFR. The pressure and flow rate in the loop are controlled by mass flow controllers and pressure transmitters. The performance and integrity of RPCs in the pilot experimental set-up are being monitored to assess the effect of periodic fluctuations and transients in atmospheric pressure and temperature, room pressure variation, flow pulsations, uniformity of gas distribution and power failures. The capability of the closed loop gas recirculation system to respond to these changes is also studied. The conclusions from the above experiment are presented. The validation of the first design considerations and subsequent modifications has provided improved guidelines for the future design of the engineering module gas system.
Genome-Scale Screening of Drug-Target Associations Relevant to Ki Using a Chemogenomics Approach
Cao, Dong-Sheng; Liang, Yi-Zeng; Deng, Zhe; Hu, Qian-Nan; He, Min; Xu, Qing-Song; Zhou, Guang-Hua; Zhang, Liu-Xia; Deng, Zi-xin; Liu, Shao
2013-01-01
The identification of interactions between drugs and target proteins plays a key role in genomic drug discovery. In the present study, the quantitative binding affinities (Ki) of drug-target pairs are used as the criterion for defining whether a drug interacts with a protein, and a chemogenomics framework using an unbiased set of general integrated features and random forest (RF) is employed to construct a predictive model which can accurately classify drug-target pairs. The predictability of the model is further investigated and validated by several independent validation sets. The built model is used to predict drug-target associations, some of which were confirmed by comparison with experimental data from public biological resources. A drug-target interaction network with high-confidence drug-target pairs was also reconstructed. This network provides further insight into the action of drugs and targets. Finally, a web-based server called PreDPI-Ki was developed to predict drug-target interactions for drug discovery. In addition to providing a high-confidence list of drug-target associations to guide subsequent experimental investigation, these results also contribute to the understanding of drug-target interactions. We can also see that quantitative information on drug-target associations could greatly promote the development of more accurate models. The PreDPI-Ki server is freely available via: http://sdd.whu.edu.cn/dpiki. PMID:23577055
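The data-preparation step described above, binarizing a quantitative Ki into an interaction label and building one feature vector per drug-target pair, can be sketched as follows. The threshold value and the plain concatenation of descriptors are illustrative assumptions; the paper's actual feature set and cut-off may differ.

```python
def label_by_ki(ki_nM, threshold_nM=1000.0):
    """Binarize a quantitative binding affinity: pairs with Ki at or
    below the (assumed) threshold are labeled interacting (1)."""
    return 1 if ki_nM <= threshold_nM else 0

def pair_features(drug_fp, target_desc):
    """Chemogenomic pair descriptor: concatenate a drug fingerprint
    with a protein descriptor vector (simplest possible integration)."""
    return list(drug_fp) + list(target_desc)
```

The resulting labeled feature matrix would then be fed to a random forest classifier (e.g. scikit-learn's `RandomForestClassifier`) and evaluated on held-out validation sets, as in the study.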
Icing Simulation Research Supporting the Ice-Accretion Testing of Large-Scale Swept-Wing Models
NASA Technical Reports Server (NTRS)
Yadlin, Yoram; Monnig, Jaime T.; Malone, Adam M.; Paul, Bernard P.
2018-01-01
The work summarized in this report is a continuation of NASA's Large-Scale, Swept-Wing Test Articles Fabrication; Research and Test Support for NASA IRT contract (NNC10BA05-NNC14TA36T) performed by Boeing under the NASA Research and Technology for Aerospace Propulsion Systems (RTAPS) contract. In the study conducted under RTAPS, a series of icing tests in the Icing Research Tunnel (IRT) was conducted to characterize ice formations on large-scale swept wings representative of modern commercial transport airplanes. The outcome of that campaign was a large database of ice-accretion geometries that can be used for subsequent aerodynamic evaluation in other experimental facilities and for validation of ice-accretion prediction codes.
Birth and evolution of an optical vortex.
Vallone, Giuseppe; Sponselli, Anna; D'Ambrosio, Vincenzo; Marrucci, Lorenzo; Sciarrino, Fabio; Villoresi, Paolo
2016-07-25
When a phase singularity is suddenly imprinted on the axis of an ordinary Gaussian beam, an optical vortex appears and starts to grow radially, by effect of diffraction. This radial growth and the subsequent evolution of the optical vortex under focusing or imaging can be well described in general within the recently introduced theory of circular beams, which generalize the hypergeometric-Gaussian beams and which obey novel kinds of ABCD rules. Here, we investigate experimentally these vortex propagation phenomena and test the validity of circular-beam theory. Moreover, we analyze the difference in radial structure between the newly generated optical vortex and the vortex obtained in the image plane, where perfect imaging would lead to complete closure of the vortex core.
Rational design of new electrolyte materials for electrochemical double layer capacitors
NASA Astrophysics Data System (ADS)
Schütter, Christoph; Husch, Tamara; Viswanathan, Venkatasubramanian; Passerini, Stefano; Balducci, Andrea; Korth, Martin
2016-09-01
The development of new electrolytes is a centerpiece of many strategies to improve electrochemical double layer capacitor (EDLC) devices. We present here a computational screening-based rational design approach to finding new electrolyte materials. As an example application, the known chemical space of almost 70 million compounds is searched for electrochemically more stable solvents. Cyano esters are identified as an especially promising new compound class. Theoretical predictions are validated with subsequent experimental studies on a selected case. These studies show that, based on theoretical predictions alone, a previously untested but very well performing compound class was identified. We thus find that our rational design strategy is indeed able to successfully identify completely new materials with substantially improved properties.
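A screening funnel of this kind reduces, in its simplest form, to filtering candidates on hard constraints and ranking the survivors by a computed stability proxy. The sketch below is purely illustrative: the compound names, property values, and the use of a HOMO-energy proxy with a viscosity cutoff are all invented, not the paper's actual descriptors or thresholds:

```python
# Hypothetical screening step: discard candidates failing a viscosity
# constraint, then rank the rest by an oxidative-stability proxy
# (here a HOMO energy: lower = harder to oxidize). Values invented.
candidates = {
    "cyano ester A": {"homo_eV": -8.1, "viscosity_mPas": 1.9},
    "cyano ester B": {"homo_eV": -8.4, "viscosity_mPas": 2.4},
    "acetonitrile":  {"homo_eV": -7.9, "viscosity_mPas": 0.4},
    "carbonate X":   {"homo_eV": -7.2, "viscosity_mPas": 3.5},
}

viable = {name: p for name, p in candidates.items()
          if p["viscosity_mPas"] < 3.0}                 # hard constraint
ranked = sorted(viable, key=lambda name: viable[name]["homo_eV"])
print(ranked)  # most stable proxy first
```

At the scale of 70 million compounds the same funnel structure applies, with cheap filters applied before expensive quantum-chemical scoring.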
Vallabhajosyula, Prashanth; Hirakata, Atsushi; Weiss, Matthew; Griesemer, Adam; Shimizu, Akira; Hong, Hanzhou; Habertheuer, Andreas; Tchipashvili, Vaja; Yamada, Kazuhiko; Sachs, David H
2017-11-01
In islet transplantation, in addition to immunologic and ischemic factors, the diabetic/hyperglycemic state of the recipient has been proposed, although not yet validated, as a possible cause of islet toxicity, contributing to islet loss during the engraftment period. Using a miniature swine model of islet transplantation, we have now assessed the effect of a persistent state of hyperglycemia on islet engraftment and subsequent function. An islet-kidney (IK) model previously described by our laboratory was utilized. Three experimental donor animals underwent total pancreatectomy and autologous islet transplantation underneath the renal capsule to prepare an IK at a load of ≤1,000 islet equivalents (IE)/kg donor weight, leading to a chronic diabetic state during the engraftment period (fasting blood glucose >250 mg/dL). Three control donor animals underwent partial pancreatectomy (sufficient to maintain normoglycemia during the islet engraftment period) and IK preparation. As an in vivo functional readout for islet engraftment, the IKs were transplanted across an immunologic minor or class I mismatch barrier into diabetic, nephrectomized recipients at an islet load of ∼4,500 IE/kg recipient weight. A 12-d course of cyclosporine was administered for tolerance induction. All experimental donors became diabetic and showed signs of end-organ injury, while control donors maintained normoglycemia. All recipients of IKs from both experimental and control donors achieved glycemic control over long-term follow-up, with reversal of diabetic nephropathy and similar glucose tolerance tests. In this preclinical, large-animal model, neither islet engraftment nor subsequent long-term islet function after transplantation appears to be affected by the diabetic state.
Hirakata, Atsushi; Weiss, Matthew; Griesemer, Adam; Shimizu, Akira; Hong, Hanzhou; Habertheuer, Andreas; Tchipashvili, Vaja; Yamada, Kazuhiko; Sachs, David H.
2018-01-01
In islet transplantation, in addition to immunologic and ischemic factors, the diabetic/hyperglycemic state of the recipient has been proposed, although not yet validated, as a possible cause of islet toxicity, contributing to islet loss during the engraftment period. Using a miniature swine model of islet transplantation, we have now assessed the effect of a persistent state of hyperglycemia on islet engraftment and subsequent function. An islet–kidney (IK) model previously described by our laboratory was utilized. Three experimental donor animals underwent total pancreatectomy and autologous islet transplantation underneath the renal capsule to prepare an IK at a load of ≤1,000 islet equivalents (IE)/kg donor weight, leading to a chronic diabetic state during the engraftment period (fasting blood glucose >250 mg/dL). Three control donor animals underwent partial pancreatectomy (sufficient to maintain normoglycemia during the islet engraftment period) and IK preparation. As an in vivo functional readout for islet engraftment, the IKs were transplanted across an immunologic minor or class I mismatch barrier into diabetic, nephrectomized recipients at an islet load of ∼4,500 IE/kg recipient weight. A 12-d course of cyclosporine was administered for tolerance induction. All experimental donors became diabetic and showed signs of end-organ injury, while control donors maintained normoglycemia. All recipients of IKs from both experimental and control donors achieved glycemic control over long-term follow-up, with reversal of diabetic nephropathy and similar glucose tolerance tests. In this preclinical, large-animal model, neither islet engraftment nor subsequent long-term islet function after transplantation appears to be affected by the diabetic state. PMID:29338381
Experimental validation of structural optimization methods
NASA Technical Reports Server (NTRS)
Adelman, Howard M.
1992-01-01
The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods, as a way of effecting a greater and accelerated acceptance of formal optimization methods by practicing engineering designers, is described. The range of validation strategies is defined, including comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokuhiro, Akira; Potirniche, Gabriel; Cogliati, Joshua
2014-07-08
An experimental and computational study, consisting of modeling and simulation (M&S), of key thermal-mechanical issues affecting the design and safety of pebble-bed (PB) reactors was conducted. The objective was to broaden understanding of, and experimentally validate, the thermal-mechanical phenomena of nuclear-grade graphite, specifically spheres in frictional contact as anticipated in the bed under reactor-relevant pressures and temperatures. The contact generates graphite dust particulates that can subsequently be transported into the flowing gaseous coolant. Under postulated depressurization transients, and with the potential for leaked fission products to be adsorbed onto graphite 'dust', there is the potential for fission products to escape from the primary volume. This is a design safety concern. Furthermore, an earlier safety assessment identified the distinct possibility for the dispersed dust to combust in contact with air if sufficient conditions are met. Both of these phenomena were noted as important to design review and as carrying enough uncertainty to warrant study. The team designed and conducted two separate-effects tests to study and benchmark the potential dust-generation rate, as well as the conditions under which a dust explosion may occur in a standardized, instrumented explosion chamber.
Structural health monitoring for DOT using magnetic shape memory alloy cables in concrete
NASA Astrophysics Data System (ADS)
Davis, Allen; Mirsayar, Mirmilad; Sheahan, Emery; Hartl, Darren
2018-03-01
Embedding shape memory alloy (SMA) wires in concrete components offers the potential to monitor their structural health via external magnetic field sensing. Currently, structural health monitoring (SHM) is dominated by acoustic emission and vibration-based methods. Thus, it is attractive to pursue alternative damage sensing techniques that may lower the cost or increase the accuracy of SHM. In this work, SHM via magnetic field detection applied to embedded magnetic shape memory alloy (MSMA) is demonstrated both experimentally and using computational models. A concrete beam containing iron-based MSMA wire is subjected to a 3-point bend test where structural damage is induced, thereby resulting in a localized phase change of the MSMA wire. Magnetic field lines passing through the embedded MSMA domain are altered by this phase change and can thus be used to detect damage within the structure. A good correlation is observed between the computational and experimental results. Additionally, the implementation of stranded MSMA cables in place of the MSMA wire is assessed through similar computational models. The combination of these computational models and their subsequent experimental validation provide sufficient support for the feasibility of SHM using magnetic field sensing via MSMA embedded components.
Zhou, Bailing; Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu; Yang, Yuedong; Zhou, Yaoqi; Wang, Jihua
2018-01-04
Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were used to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases have been established for experimentally validated lncRNAs. However, these databases are small in scale (with only a few hundred lncRNAs) and specific in focus (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset of experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNlncRbase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species and is 2.9 times larger than the current largest database of experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing experimentally validated databases. The established database allows users to browse, search and download, as well as to submit, experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
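The browse/search/download workflow the database offers typically ends with a user filtering a downloaded table locally. A minimal sketch, assuming a hypothetical tab-separated export with `name`, `species` and `disease` columns (not the database's documented export format):

```python
# Filter a hypothetical EVLncRNAs-style TSV export for human entries.
# Column names and rows are invented for illustration.
import csv
import io

tsv = """name\tspecies\tdisease
HOTAIR\tHomo sapiens\tbreast cancer
Xist\tMus musculus\t-
MALAT1\tHomo sapiens\tlung cancer
"""

rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
human = [r["name"] for r in rows if r["species"] == "Homo sapiens"]
print(human)  # → ['HOTAIR', 'MALAT1']
```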
Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu
2018-01-01
Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were used to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases have been established for experimentally validated lncRNAs. However, these databases are small in scale (with only a few hundred lncRNAs) and specific in focus (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset of experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNlncRbase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species and is 2.9 times larger than the current largest database of experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing experimentally validated databases. The established database allows users to browse, search and download, as well as to submit, experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. PMID:28985416
Ribeiro, Pedro Leite; Camacho, Agustín; Navas, Carlos Arturo
2012-01-01
The thermal limits of individual animals were originally proposed as a link between animal physiology and thermal ecology. Although this link is valid in theory, the evaluation of physiological tolerances involves some problems that are the focus of this study. One rationale was that heating rates should influence upper critical limits, so that ecological thermal limits need to account for experimental heating rates. In addition, if thermal limits are not surpassed in experiments, subsequent tests of the same individual should yield similar results or produce evidence of hardening. Finally, several non-controlled variables, such as time under experimental conditions and procedures, may affect results. To analyze these issues we conducted an integrative study of upper critical temperatures in a single species, the ant Atta sexdens rubropilosa, an animal model providing large numbers of individuals of diverse sizes but similar genetic makeup. Our specific aims were to test 1) the influence of heating rates on the experimental evaluation of upper critical temperature, 2) the assumptions of absence of physical damage and of reproducibility, and 3) sources of variance often overlooked in the thermal-limits literature; and 4) to introduce some experimental approaches that may help researchers separate physiological from methodological issues. The upper thermal limits were influenced by both heating rates and body mass. In the latter case, the effect was physiological rather than methodological. The critical temperature decreased during subsequent tests performed on the same individual ants, even one week after the initial test. Accordingly, upper thermal limits may have been overestimated by our (and typical) protocols. Heating rates, body mass, procedures independent of temperature and other variables may affect the estimation of upper critical temperatures. Therefore, based on our data, we offer suggestions to enhance the quality of measurements, and recommendations to authors aiming to compile and analyze databases from the literature. PMID:22384147
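The central claim, that the apparent upper critical temperature depends on the heating rate, is usually quantified by regressing measured CTmax on (log) heating rate. The sketch below uses invented numbers with an assumed relationship, not the study's data:

```python
# Regress apparent CTmax on log2(heating rate) to estimate how much the
# measured limit shifts per doubling of ramp speed. Data are synthetic:
# a true slope of 1.5 degC/doubling plus small measurement noise.
import numpy as np

rng = np.random.default_rng(1)
heating_rate = np.array([0.25, 0.5, 1.0, 2.0, 4.0])   # degC/min (assumed)
ctmax = 42.0 + 1.5 * np.log2(heating_rate) + rng.normal(0, 0.2, 5)

slope, intercept = np.polyfit(np.log2(heating_rate), ctmax, 1)
print(f"apparent CTmax rises ~{slope:.2f} degC per doubling of heating rate")
```

A non-zero slope in such a fit is exactly why the authors argue that ecological comparisons of thermal limits must control for, or report, the heating rate used.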
Genome-Wide Analysis of A-to-I RNA Editing.
Savva, Yiannis A; Laurent, Georges St; Reenan, Robert A
2016-01-01
Adenosine (A)-to-inosine (I) RNA editing is a fundamental posttranscriptional modification that ensures the deamination of A to I in double-stranded (ds) RNA molecules. Intriguingly, the A-to-I RNA editing system is particularly active in the nervous system of higher eukaryotes, altering a plethora of noncoding and coding sequences. Abnormal RNA editing is highly associated with many neurological phenotypes and neurodevelopmental disorders. However, the molecular mechanisms underlying RNA editing-mediated pathogenesis remain enigmatic and have attracted increasing attention from researchers. Over the last decade, methods available to perform genome-wide transcriptome analysis have evolved rapidly. Within the RNA editing field, researchers have adopted next-generation sequencing technologies to identify RNA-editing sites within genomes and to elucidate the underlying process. However, technical challenges associated with editing-site discovery have hindered efforts to uncover comprehensive editing-site datasets, resulting in the general perception that the collections of annotated editing sites represent only a small minority of the total number of sites in a given organism, tissue, or cell type of interest. In addition to doubts about sensitivity, existing RNA-editing site lists often contain high percentages of false positives, leading to uncertainty about their validity and usefulness in downstream studies. An accurate investigation of A-to-I editing requires properly validated datasets of editing sites with demonstrated and transparent levels of sensitivity and specificity. Here, we describe a high signal-to-noise method for RNA-editing site detection using single-molecule sequencing (SMS). With this method, authentic RNA-editing sites may be differentiated from artifacts. Machine learning approaches provide a procedure to improve upon and experimentally validate sequencing outcomes through computationally predicted, iterative feedback loops. Subsequent extensive Sanger sequencing validation can then generate accurate editing-site lists. This approach has broad application, and accurate genome-wide editing analysis of various tissues from clinical specimens or experimental organisms is now a possibility.
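At its simplest, editing-site calling reduces to scoring each candidate position by the fraction of reads showing the A-to-G signature (inosine is read as G) and filtering on coverage and frequency thresholds; this is a toy version of that step, not the SMS pipeline itself, with invented counts and cutoffs:

```python
# Toy editing-site caller: keep candidate sites whose A->G read fraction
# and absolute edited-read count clear minimal thresholds, rejecting
# likely sequencing errors. All counts and thresholds are invented.
sites = {
    "chr2:1005": {"A": 90, "G": 10},   # 10% edited
    "chr2:2040": {"A": 30, "G": 70},   # 70% edited
    "chr3:0333": {"A": 99, "G": 1},    # likely sequencing error
}

MIN_FRACTION = 0.05
MIN_EDITED_READS = 5

called = []
for site, c in sites.items():
    frac = c["G"] / (c["A"] + c["G"])
    if frac >= MIN_FRACTION and c["G"] >= MIN_EDITED_READS:
        called.append((site, round(frac, 2)))
print(called)  # → [('chr2:1005', 0.1), ('chr2:2040', 0.7)]
```

The machine-learning feedback loop described in the abstract would then refine such thresholds against Sanger-validated positives and negatives.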
Finite element study of the thermo-chemo-mechanical behavior of ramming paste (Etude par elements finis du comportement thermo-chimiomecanique de la pâte monolithique)
NASA Astrophysics Data System (ADS)
Girard, Pierre-Luc
The aluminum industry faces fierce international competition, requiring constant improvement of electrolysis cell effectiveness and longevity. The selection of the cell's component materials becomes an important factor in increasing the cell's life. The ramming paste, used to seal the cathode lining, is compacted in the joints between the cathode and the side wall of the cell. It is a complex thermo-chemo-reactive material whose properties change with the evolution of its baking level. The objective of this project is therefore to propose a thermo-chemo-mechanical constitutive law for the ramming paste and implement it in the finite element software ANSYS. A constitutive model was first chosen from the available literature on the subject. It is a pressure-dependent model that uses hardening, softening and baking mechanisms in its definition to mimic the behavior of carbon-based materials. Subsequently, the numerical tool was validated using the finite element toolbox FESh++, which currently contains the most representative thermo-chemo-mechanical constitutive law for carbon-based materials. Finally, a validation of the experimental setup BERTA (Banc d'essai de resistance thermomecanique ALCAN) was carried out in anticipation of a larger-scale experimental validation of the constitutive law in the near future. However, the analysis of the results shows that BERTA is not suited to adequately measuring the mechanical deformation of this kind of material. Following this project, the numerical tool will be used in numerical simulations to introduce the various effects of the baking of the ramming paste during cell startup. This new tool will help the industrial partner enhance the understanding of Hall-Héroult cell start-up and optimize this critical step.
Assessment of protein set coherence using functional annotations
Chagoyen, Monica; Carazo, Jose M; Pascual-Montano, Alberto
2008-01-01
Background: Analysis of large-scale experimental datasets frequently produces one or more sets of proteins that are subsequently mined for functional interpretation and validation. To this end, a number of computational methods have been devised that rely on the analysis of functional annotations. Although current methods provide valuable information (e.g. significantly enriched annotations, pairwise functional similarities), they do not specifically measure the degree of homogeneity of a protein set. Results: In this work we present a method that scores the degree of functional homogeneity, or coherence, of a set of proteins on the basis of the global similarity of their functional annotations. The method uses statistical hypothesis testing to assess the significance of the set in the context of the functional space of a reference set. As such, it can be used as a first step in the validation of sets expected to be homogeneous prior to further functional interpretation. Conclusion: We evaluate our method by analysing known biologically relevant sets as well as random ones. The known relevant sets comprise macromolecular complexes, cellular components and pathways described for Saccharomyces cerevisiae, which are mostly significantly coherent. Finally, we illustrate the usefulness of our approach for validating 'functional modules' obtained from computational analysis of protein-protein interaction networks. Matlab code and supplementary data are available. PMID:18937846
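The coherence-scoring idea can be illustrated with a small empirical hypothesis test: score a protein set by its mean pairwise annotation similarity, then compare that score against random same-size sets drawn from a reference pool. The annotations, similarity measure (Jaccard over GO-term sets) and sizes below are invented for illustration, not the paper's actual scoring scheme:

```python
# Coherence of a protein set as mean pairwise Jaccard similarity of
# annotation-term sets, with an empirical p-value from random sets of
# the same size. All annotations are synthetic.
import itertools
import random

random.seed(0)
GO_TERMS = [f"GO:{i:07d}" for i in range(20)]

def jaccard(a, b):
    return len(a & b) / len(a | b)

def coherence(annotations, proteins):
    pairs = list(itertools.combinations(proteins, 2))
    return sum(jaccard(annotations[p], annotations[q]) for p, q in pairs) / len(pairs)

# Reference pool with random annotations; test set deliberately coherent.
annotations = {f"p{i}": set(random.sample(GO_TERMS, 4)) for i in range(50)}
test_set = ["c1", "c2", "c3", "c4"]
for p in test_set:
    annotations[p] = set(GO_TERMS[:4]) | {random.choice(GO_TERMS)}

obs = coherence(annotations, test_set)
pool = [p for p in annotations if p not in test_set]
null = [coherence(annotations, random.sample(pool, 4)) for _ in range(500)]
p_value = sum(n >= obs for n in null) / len(null)
print(f"coherence = {obs:.2f}, empirical p = {p_value:.3f}")
```

A small p-value flags the set as significantly more homogeneous than random sets from the reference space, which is the pre-screening role the authors propose.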
A single secreted luciferase-based gene reporter assay.
Barriscale, Kathy A; O'Sullivan, Sharon A; McCarthy, Tommie V
2014-05-15
Promoter analysis typically employs a reporter gene fused to a test promoter, combined with a second reporter fused to a control promoter that is used for normalization. However, this approach is not valid when experimental conditions affect the control promoter. We have developed and validated a single secreted luciferase reporter (SSLR) assay for promoter analysis that avoids the use of a control reporter. The approach uses an early level of expression of a secreted luciferase linked to a test promoter as an internal normalization control for subsequent analysis of the same promoter. Comparison of the SSLR assay with the dual luciferase reporter (DLR) assay using HMGCR (3-hydroxy-3-methylglutaryl-coenzyme A reductase) and LDLR (low-density lipoprotein receptor) promoter constructs, which are down-regulated by 25-hydroxycholesterol, shows that both assays yield similar results. The response of the HMGCR promoter in transient SSLR assays also compared very favorably with the response of the same promoter in a stable cell line. Overall, the SSLR assay proved to be a valid alternative to the DLR assay for certain applications, with the significant advantages that measurement of only one luciferase is required and that monitoring can be continuous because cell lysis is not necessary. Copyright © 2014 Elsevier Inc. All rights reserved.
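The normalization logic of the SSLR assay is simple enough to sketch numerically: an early reading of the same secreted-luciferase construct serves as its own baseline, and later readings are expressed as fold-change over that baseline. The relative-light-unit values below are invented for illustration:

```python
# SSLR-style self-normalization: a t0 reading of the secreted luciferase
# is the baseline; later readings become fold-changes over it, so no
# second (control-promoter) reporter is needed. Values are invented.
readings = {                 # relative light units in the culture medium
    "t0_baseline": 1200.0,
    "t24h_vehicle": 4800.0,
    "t24h_25OHC": 1800.0,    # 25-hydroxycholesterol treatment
}

fold_vehicle = readings["t24h_vehicle"] / readings["t0_baseline"]
fold_treated = readings["t24h_25OHC"] / readings["t0_baseline"]
repression = 1 - fold_treated / fold_vehicle
print(f"promoter activity down {repression:.0%} versus vehicle")
```

Because the luciferase is secreted, such readings can be taken repeatedly from the medium without lysing the cells, which is the continuous-monitoring advantage the abstract highlights.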
MicroRNA expression in benign breast tissue and risk of subsequent invasive breast cancer.
Rohan, Thomas; Ye, Kenny; Wang, Yihong; Glass, Andrew G; Ginsberg, Mindy; Loudig, Olivier
2018-01-01
MicroRNAs are endogenous, small non-coding RNAs that control gene expression by directing their target mRNAs for degradation and/or posttranscriptional repression. Abnormal expression of microRNAs is thought to contribute to the development and progression of cancer. A history of benign breast disease (BBD) is associated with increased risk of subsequent breast cancer. However, no large-scale study has examined the association between microRNA expression in BBD tissue and risk of subsequent invasive breast cancer (IBC). We conducted discovery and validation case-control studies nested in a cohort of 15,395 women diagnosed with BBD in a large health plan between 1971 and 2006 and followed to mid-2015. Cases were women with BBD who developed subsequent IBC; controls were matched 1:1 to cases on age, age at diagnosis of BBD, and duration of plan membership. The discovery stage (316 case-control pairs) entailed use of the Illumina MicroRNA Expression Profiling Assay (in duplicate) to identify breast cancer-associated microRNAs. MicroRNAs identified at this stage were ranked by the strength of the correlation between Illumina array and quantitative PCR results for 15 case-control pairs. The top ranked 14 microRNAs entered the validation stage (165 case-control pairs) which was conducted using quantitative PCR (in triplicate). In both stages, linear regression was used to evaluate the association between the mean expression level of each microRNA (response variable) and case-control status (independent variable); paired t-tests were also used in the validation stage. None of the 14 validation stage microRNAs was associated with breast cancer risk. The results of this study suggest that microRNA expression in benign breast tissue does not influence the risk of subsequent IBC.
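The validation-stage analysis described above centers on a paired comparison of expression between matched case-control pairs. A minimal sketch with invented qPCR-style values, constructed to illustrate the null result the study reports:

```python
# Paired t-test on microRNA expression in matched case-control pairs
# (e.g., qPCR delta-Ct values). Numbers are invented and symmetric, so
# the test illustrates a null result like the study's.
from scipy import stats

cases    = [5.1, 4.8, 5.0, 5.3, 4.9, 5.2]   # BBD tissue, women who developed IBC
controls = [5.0, 4.9, 5.1, 5.2, 5.0, 5.1]   # matched controls

t, p = stats.ttest_rel(cases, controls)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```

The study additionally modeled expression with linear regression on case-control status; the paired test shown here is the complementary analysis named in the abstract.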
MicroRNA expression in benign breast tissue and risk of subsequent invasive breast cancer
Ye, Kenny; Wang, Yihong; Ginsberg, Mindy; Loudig, Olivier
2018-01-01
MicroRNAs are endogenous, small non-coding RNAs that control gene expression by directing their target mRNAs for degradation and/or posttranscriptional repression. Abnormal expression of microRNAs is thought to contribute to the development and progression of cancer. A history of benign breast disease (BBD) is associated with increased risk of subsequent breast cancer. However, no large-scale study has examined the association between microRNA expression in BBD tissue and risk of subsequent invasive breast cancer (IBC). We conducted discovery and validation case-control studies nested in a cohort of 15,395 women diagnosed with BBD in a large health plan between 1971 and 2006 and followed to mid-2015. Cases were women with BBD who developed subsequent IBC; controls were matched 1:1 to cases on age, age at diagnosis of BBD, and duration of plan membership. The discovery stage (316 case-control pairs) entailed use of the Illumina MicroRNA Expression Profiling Assay (in duplicate) to identify breast cancer-associated microRNAs. MicroRNAs identified at this stage were ranked by the strength of the correlation between Illumina array and quantitative PCR results for 15 case-control pairs. The top ranked 14 microRNAs entered the validation stage (165 case-control pairs) which was conducted using quantitative PCR (in triplicate). In both stages, linear regression was used to evaluate the association between the mean expression level of each microRNA (response variable) and case-control status (independent variable); paired t-tests were also used in the validation stage. None of the 14 validation stage microRNAs was associated with breast cancer risk. The results of this study suggest that microRNA expression in benign breast tissue does not influence the risk of subsequent IBC. PMID:29432432
Electrophysiological biomarkers of epileptogenicity after traumatic brain injury.
Perucca, Piero; Smith, Gregory; Santana-Gomez, Cesar; Bragin, Anatol; Staba, Richard
2018-06-05
Post-traumatic epilepsy is the archetype of acquired epilepsies, wherein a brain insult initiates an epileptogenic process culminating in an unprovoked seizure after weeks, months or years. Identifying biomarkers of this process is a prerequisite for developing and implementing targeted therapies aimed at preventing the development of epilepsy. Currently, there are no validated electrophysiological biomarkers of post-traumatic epileptogenesis. Experimental EEG studies using the lateral fluid percussion injury model have identified three candidate biomarkers of post-traumatic epileptogenesis: pathological high-frequency oscillations (HFOs, 80-300 Hz); repetitive HFOs and spikes (rHFOSs); and a reduction in sleep spindle duration and dominant frequency at the transition from stage III to rapid eye movement sleep. EEG studies in humans have yielded conflicting data; recent evidence suggests that epileptiform abnormalities detected acutely after traumatic brain injury carry a significantly increased risk of subsequent epilepsy. Well-designed studies are required to validate these promising findings, and ultimately to establish whether there are post-traumatic electrophysiological features that can guide the development of 'antiepileptogenic' therapies. Copyright © 2018 Elsevier Inc. All rights reserved.
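Detecting the first candidate biomarker, pathological HFOs in the 80-300 Hz band, typically involves band-pass filtering the EEG and thresholding the amplitude envelope. The sketch below runs this on a synthetic trace with one injected 150 Hz burst; the sampling rate, amplitudes and threshold are assumptions, not parameters from any cited study:

```python
# Toy HFO detector: band-pass a synthetic EEG trace to 80-300 Hz and
# flag samples whose Hilbert envelope exceeds an (assumed) threshold.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 2000                                    # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1 / FS)
eeg = 50 * np.sin(2 * np.pi * 8 * t)         # background theta rhythm
burst = (t > 1.0) & (t < 1.1)
eeg[burst] += 30 * np.sin(2 * np.pi * 150 * t[burst])  # injected HFO burst

b, a = butter(4, [80, 300], btype="band", fs=FS)
envelope = np.abs(hilbert(filtfilt(b, a, eeg)))
threshold = 15.0                             # assumed amplitude threshold
detected = t[envelope > threshold]
print(f"HFO detected between {detected.min():.2f}s and {detected.max():.2f}s")
```

Zero-phase filtering (`filtfilt`) keeps the detected burst aligned in time with the raw trace, which matters when events must be localized relative to injury or sleep stage.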
NIMROD Modeling of Sawtooth Modes Using Hot-Particle Closures
NASA Astrophysics Data System (ADS)
Kruger, Scott; Jenkins, T. G.; Held, E. D.; King, J. R.
2015-11-01
In DIII-D shot 96043, RF heating gives rise to an energetic ion population that alters the sawtooth stability boundary, replacing conventional sawtooth cycles by longer-period, larger-amplitude `giant sawtooth' oscillations. We explore the use of particle-in-cell closures within the NIMROD code to numerically represent the RF-induced hot-particle distribution, and investigate the role of this distribution in determining the altered mode onset threshold and subsequent nonlinear evolution. Equilibrium reconstructions from the experimental data are used to enable these detailed validation studies. Effects of other parameters on the sawtooth behavior, such as the plasma Lundquist number and hot-particle beta-fraction, are also considered. The fast energetic particles present many challenges for the PIC closure. We review new algorithm and performance improvements to address these challenges, and provide a preliminary assessment of the efficacy of the PIC closure versus a continuum model for energetic particle modeling. We also compare our results with those of, and discuss plans for a more complete validation campaign for this discharge. Supported by US Department of Energy via the SciDAC Center for Extended MHD Modeling (CEMM).
Validation of a coupled core-transport, pedestal-structure, current-profile and equilibrium model
NASA Astrophysics Data System (ADS)
Meneghini, O.
2015-11-01
The first workflow capable of predicting the self-consistent solution to the coupled core-transport, pedestal-structure, and equilibrium problems from first principles, together with its experimental tests, is presented. Validation with DIII-D discharges in high-confinement regimes shows that the workflow is capable of robustly predicting the kinetic profiles from the magnetic axis to the separatrix and matching the experimental measurements to within their uncertainty, with no prior knowledge of the pedestal height and no measurement of the temperature or pressure. Self-consistent coupling has proven to be essential to match the experimental results and capture the non-linear physics that governs the core and pedestal solutions. In particular, clear stabilization of the pedestal peeling-ballooning instabilities by the global Shafranov shift and destabilization by additional edge bootstrap current, and the subsequent effect on the core plasma profiles, have been clearly observed and documented. In our model, self-consistency is achieved by iterating between the TGYRO core transport solver (with NEO and TGLF for neoclassical and turbulent flux) and the pedestal structure predicted by the EPED model. A self-consistent equilibrium is calculated by EFIT, while the ONETWO transport package evolves the current profile and calculates the particle and energy sources. The capabilities of this workflow are shown to be critical for the design of future experiments such as ITER and FNSF, which operate in a regime where the equilibrium, the pedestal, and the core transport problems are strongly coupled, and for which none of these quantities can be assumed to be known. Self-consistent core-pedestal predictions for ITER, as well as initial optimizations, will be presented. Supported by the US Department of Energy under DE-FC02-04ER54698, DE-SC0012652.
Bayesian Adaptive Trial Design for a Newly Validated Surrogate Endpoint
Renfro, Lindsay A.; Carlin, Bradley P.; Sargent, Daniel J.
2011-01-01
The evaluation of surrogate endpoints for primary use in future clinical trials is an increasingly important research area, due to demands for more efficient trials coupled with recent regulatory acceptance of some surrogates as ‘valid.’ However, little consideration has been given to how a trial that utilizes a newly validated surrogate endpoint as its primary endpoint might be appropriately designed. We propose a novel Bayesian adaptive trial design that allows the new surrogate endpoint to play a dominant role in assessing the effect of an intervention, while remaining realistically cautious about its use. By incorporating multi-trial historical information on the validated relationship between the surrogate and clinical endpoints, and then evaluating accumulating data against this relationship as the new trial progresses, we adaptively guard against an erroneous assessment of treatment based upon a truly invalid surrogate. When the joint outcomes in the new trial seem plausible given similar historical trials, we proceed with the surrogate endpoint as the primary endpoint, and do so adaptively, perhaps stopping the trial for early success or inferiority of the experimental treatment, or for futility. Otherwise, we discard the surrogate and switch adaptive determinations to the original primary endpoint. We use simulation to test the operating characteristics of this new design compared to a standard O’Brien-Fleming approach, as well as the ability of our design to discriminate trustworthy from untrustworthy surrogates in hypothetical future trials. Furthermore, we investigate possible benefits using patient-level data from 18 adjuvant therapy trials in colon cancer, where disease-free survival is considered a newly validated surrogate endpoint for overall survival. PMID:21838811
Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare.
Flannelly, Kevin J; Flannelly, Laura T; Jankowski, Katherine R B
2018-01-01
The article defines, describes, and discusses the seven threats to the internal validity of experiments discussed by Donald T. Campbell in his classic 1957 article: history, maturation, testing, instrument decay, statistical regression, selection, and mortality. These concepts are said to be threats to the internal validity of experiments because they pose alternate explanations for the apparent causal relationship between the independent variable and dependent variable of an experiment if they are not adequately controlled. A series of simple diagrams illustrate three pre-experimental designs and three true experimental designs discussed by Campbell in 1957 and several quasi-experimental designs described in his book written with Julian C. Stanley in 1966. The current article explains why each design controls for or fails to control for these seven threats to internal validity.
A simplified focusing and astigmatism correction method for a scanning electron microscope
NASA Astrophysics Data System (ADS)
Lu, Yihua; Zhang, Xianmin; Li, Hai
2018-01-01
Defocus and astigmatism can lead to blurred images and poor resolution. This paper presents a simplified method for focusing and astigmatism correction of a scanning electron microscope (SEM). The method consists of two steps. In the first step, the fast Fourier transform (FFT) of the SEM image is performed and the FFT is subsequently processed with a threshold to achieve a suitable result. In the second step, the threshold FFT is used for ellipse fitting to determine the presence of defocus and astigmatism. The proposed method clearly provides the relationships between the defocus, the astigmatism and the direction of stretching of the FFT, and it can determine the astigmatism in a single image. Experimental studies are conducted to demonstrate the validity of the proposed method.
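The two-step procedure described above (threshold the centered FFT, then fit an ellipse to the surviving points) can be sketched in a few lines; the second-moment ellipse fit, the threshold fraction, and the synthetic test image below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spectrum_ellipse(image, thresh_frac=0.1):
    """Threshold the centered FFT magnitude and fit an ellipse to the
    surviving points via second moments. Astigmatism shows up as a
    major/minor ratio well above 1; the angle gives the stretching
    direction of the spectrum."""
    f = np.fft.fftshift(np.abs(np.fft.fft2(image)))
    f[f.shape[0] // 2, f.shape[1] // 2] = 0.0        # drop the DC peak
    ys, xs = np.nonzero(f > thresh_frac * f.max())
    xs = xs - xs.mean()
    ys = ys - ys.mean()
    evals, evecs = np.linalg.eigh(np.cov(np.vstack([xs, ys])))
    minor, major = np.sqrt(evals)                    # eigh sorts ascending
    angle = np.arctan2(evecs[1, 1], evecs[0, 1])     # major-axis direction
    return major, minor, angle

# Synthetic check: blurring only along x suppresses high horizontal
# frequencies, so the thresholded spectrum is elongated along y.
rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))
img = np.apply_along_axis(
    lambda r: np.convolve(r, np.ones(9) / 9, mode="same"), 1, img)
major, minor, angle = spectrum_ellipse(img)
```

For an in-focus, stigmatic image the fitted ellipse is close to a circle, so the major/minor ratio doubles as a single-image focus metric.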
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)
2012-01-01
This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.
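As a schematic of the prognostic pipeline (a degradation model whose parameter is tracked by a particle filter, then extrapolated to an end-of-life threshold), the toy below tracks a single capacity fade rate; the exponential fade model, noise levels, and 70% threshold are invented for illustration and are not the patented model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fade model: C[k] = C[k-1] * exp(-lam), lam unknown.
true_lam, cycles = 0.01, 20
cap = np.exp(-true_lam * np.arange(cycles))
meas = cap + rng.normal(0.0, 0.005, cycles)        # noisy capacity data

# Bootstrap particle filter over the unknown fade rate lam.
n = 2000
lam = rng.uniform(0.0, 0.1, n)                     # prior particles
for z_prev, z in zip(meas[:-1], meas[1:]):
    pred = z_prev * np.exp(-lam)                   # one-step prediction
    w = np.exp(-0.5 * ((z - pred) / 0.005) ** 2)   # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)               # resample
    lam = np.clip(lam[idx] + rng.normal(0.0, 1e-3, n), 1e-6, None)

lam_hat = lam.mean()
# Remaining useful life: cycles until capacity falls below 70% of nominal.
rul = np.log(meas[-1] / 0.7) / lam_hat
```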
Jordheim, Lars Petter; Barakat, Khaled H; Heinrich-Balard, Laurence; Matera, Eva-Laure; Cros-Perrial, Emeline; Bouledrak, Karima; El Sabeh, Rana; Perez-Pineiro, Rolando; Wishart, David S; Cohen, Richard; Tuszynski, Jack; Dumontet, Charles
2013-07-01
The benefit of cancer chemotherapy based on alkylating agents is limited because of the action of DNA repair enzymes, which mitigate the damage induced by these agents. The interaction between the proteins ERCC1 and XPF involves two major components of the nucleotide excision repair pathway. Here, novel inhibitors of this interaction were identified by virtual screening based on available structures with use of the National Cancer Institute diversity set and a panel of DrugBank small molecules. Subsequently, experimental validation of the in silico screening was undertaken. Top hits were evaluated on A549 and HCT116 cancer cells. In particular, the compound labeled NSC 130813 [4-[(6-chloro-2-methoxy-9-acridinyl)amino]-2-[(4-methyl-1-piperazinyl)methyl
A validated approach for modeling collapse of steel structures
NASA Astrophysics Data System (ADS)
Saykin, Vitaliy Victorovich
A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during a lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state-of-the-art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as a function of triaxiality is employed. The models to determine fracture initiation, softening and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate.
The calibration and validation of these models are shown. The calibration is performed using a particle swarm optimization algorithm to establish accurate parameters when calibrated to circumferentially notched tensile coupons. It is shown that consistent, accurate predictions are attained using the chosen models. The variation of triaxiality in steel material during plastic hardening and softening is reported. The range of triaxiality in steel structures undergoing collapse is investigated in detail and the accuracy of the chosen finite element deletion approaches is discussed. This is done through validation of different structural components and structural frames undergoing severe fracture and collapse.
Cross-Validation of Levenson's Psychopathy Scale in a Sample of Federal Female Inmates
ERIC Educational Resources Information Center
Brinkley, Chad A.; Diamond, Pamela M.; Magaletta, Philip R.; Heigel, Caron P.
2008-01-01
Levenson, Kiehl, and Fitzpatrick's Self-Report Psychopathy Scale (LSRPS) is evaluated to determine the factor structure and concurrent validity of the instrument among 430 federal female inmates. Confirmatory factor analysis fails to validate the expected 2-factor structure. Subsequent exploratory factor analysis reveals a 3-factor structure…
A Comprehensive Validation Methodology for Sparse Experimental Data
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Blattnig, Steve R.
2010-01-01
A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
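The two metrics can be illustrated schematically; the relative-deviation forms below are plausible readings of "cumulative" and "median" uncertainty, not necessarily the exact definitions used for NUCFRG2 and QMSFRG.

```python
import numpy as np

def cumulative_uncertainty(model, exp):
    """Aggregate relative deviation over the whole database: a single
    overall-accuracy score, dominated by the largest cross sections."""
    model, exp = np.asarray(model, float), np.asarray(exp, float)
    return np.sum(np.abs(model - exp)) / np.sum(exp)

def median_uncertainty(model, exp):
    """Median relative deviation: robust to the handful of large
    outliers a sparse experimental database inevitably contains."""
    model, exp = np.asarray(model, float), np.asarray(exp, float)
    return np.median(np.abs(model - exp) / exp)

exp = np.array([100.0, 220.0, 55.0, 10.0])     # "measured" cross sections
model = np.array([110.0, 200.0, 50.0, 30.0])   # one bad outlier (30 vs 10)
cum = cumulative_uncertainty(model, exp)
med = median_uncertainty(model, exp)
```

The contrast is the point: the median metric barely moves when one sparse data point disagrees badly, which makes it the better tracker of model-development progress on subsets of the parameter space.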
NASA Astrophysics Data System (ADS)
Wagner, David R.; Holmgren, Per; Skoglund, Nils; Broström, Markus
2018-06-01
The design and validation of a newly commissioned entrained flow reactor is described in the present paper. The reactor was designed for advanced studies of fuel conversion and ash formation in powder flames, and the capabilities of the reactor were experimentally validated using two different solid biomass fuels. The drop tube geometry was equipped with a flat flame burner to heat and support the powder flame, optical access ports, a particle image velocimetry (PIV) system for in situ conversion monitoring, and probes for extraction of gases and particulate matter. A detailed description of the system is provided based on simulations and measurements, establishing the detailed temperature distribution and gas flow profiles. Mass balance closures of approximately 98% were achieved by combining gas analysis and particle extraction. Biomass fuel particles were successfully tracked using shadow imaging PIV, and the resulting data were used to determine the size, shape, velocity, and residence time of converting particles. Successful extractive sampling of coarse and fine particles during combustion while retaining their morphology was demonstrated, and it opens the way for detailed time-resolved studies of rapid ash transformation reactions; in the validation experiments, clear and systematic fractionation trends for K, Cl, S, and Si were observed for the two fuels tested. The combination of in situ access, accurate residence time estimations, and precise particle sampling for subsequent chemical analysis allows for a wide range of future studies, with implications and possibilities discussed in the paper.
2012-08-01
Experimental Characterization and Validation of Simultaneous Gust Alleviation and Energy Harvesting for Multifunctional Wing Spars (AFOSR). Only fragments of this record survive: gust simulation using the Dryden PSD (U0 = 15 m/s, Lv = 350 m), an energy control law based on limited energy constraints, and experimental validation of simultaneous energy harvesting and vibration control.
Jonker, Michiel T O
2016-06-01
Octanol-water partition coefficients (KOW) are widely used in fate and effects modeling of chemicals. Still, high-quality experimental KOW data are scarce, in particular for very hydrophobic chemicals. This hampers reliable assessments of several fate and effect parameters and the development and validation of new models. One reason for the limited availability of experimental values may relate to the challenging nature of KOW measurements. In the present study, KOW values for 13 polycyclic aromatic hydrocarbons were determined with the gold-standard "slow-stirring" method (log KOW 4.6-7.2). These values were then used as reference data for the development of an alternative method for measuring KOW. This approach combined slow stirring with equilibrium sampling of the extremely low aqueous concentrations using polydimethylsiloxane-coated solid-phase microextraction fibers, applying experimentally determined fiber-water partition coefficients. It resulted in KOW values matching the slow-stirring data very well. The method was therefore subsequently applied to a series of 17 moderately to extremely hydrophobic petrochemical compounds. The obtained KOW values spanned almost 6 orders of magnitude, with the highest value measuring 10^10.6. The present study demonstrates that the hydrophobicity domain within which experimental KOW measurements are possible can be extended with the help of solid-phase microextraction and that experimentally determined KOW values can exceed the proposed upper limit of 10^9. Environ Toxicol Chem 2016;35:1371-1377. © 2015 SETAC.
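The back-calculation step the method relies on is simple: the measured fiber concentration and a known fiber-water partition coefficient recover the otherwise unmeasurable aqueous concentration, from which KOW follows. The numbers below are invented for illustration; the study's calibration values differ.

```python
import math

def log_kow(c_octanol, c_fiber, log_kfw):
    """Estimate log KOW when the aqueous concentration is too low to
    measure directly: C_water = C_fiber / K_fw, then KOW = C_oct / C_water.
    Concentrations may be in any consistent unit."""
    c_water = c_fiber / 10.0 ** log_kfw
    return math.log10(c_octanol / c_water)

# Hypothetical very hydrophobic solute: quantifiable in the PDMS fiber
# and in the octanol phase, but not in water.
lk = log_kow(c_octanol=5.0e3, c_fiber=2.0e2, log_kfw=6.8)
```

Note the identity log KOW = log C_oct - log C_fiber + log K_fw, which shows why the accuracy of the fiber-water partition coefficient carries straight through to the result.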
An electromechanical coupling model of a bending vibration type piezoelectric ultrasonic transducer.
Zhang, Qiang; Shi, Shengjun; Chen, Weishan
2016-03-01
An electromechanical coupling model of a bending vibration type piezoelectric ultrasonic transducer is proposed. The transducer is a Langevin type transducer composed of an exponential horn, four groups of PZT ceramics and a back beam. The exponential horn focuses the vibration energy and efficiently enlarges the vibration amplitude and velocity. A bending vibration model of the transducer is first constructed, and an electromechanical coupling model is subsequently constructed based on the vibration model. In order to obtain the most suitable excitation position of the PZT ceramics, the effective electromechanical coupling coefficient is optimized by means of the quadratic interpolation method. When the effective electromechanical coupling coefficient reaches its peak value of 42.59%, the optimal excitation position (L1=22.52 mm) is found. The FEM method and the experimental method are used to validate the developed analytical model. Two FEM models (in Group A the center bolt is not considered, while in Group B it is) are constructed and separately compared with the analytical model and the experimental model. Four prototype transducers around the peak value are fabricated and tested to validate the analytical model. A scanning laser Doppler vibrometer is employed to test the bending vibration shape and resonance frequency. Finally, the electromechanical coupling coefficient is tested indirectly through an impedance analyzer. Comparisons of the analytical, FEM and experimental results are presented, and the results show good agreement.
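The quadratic interpolation method named above is a standard one-dimensional optimizer: fit a parabola through three trial points and jump to its vertex, repeating until the estimate stops moving. The sketch below is generic; the coupling-coefficient curve is a hypothetical stand-in that peaks at the paper's reported optimum.

```python
def quadratic_peak(f, a, b, c, tol=1e-6, max_iter=100):
    """Successive quadratic interpolation for a 1-D maximum: fit a
    parabola through three points and move toward its vertex."""
    for _ in range(max_iter):
        fa, fb, fc = f(a), f(b), f(c)
        num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if den == 0:
            break
        x = b - 0.5 * num / den          # vertex of the fitted parabola
        if abs(x - b) < tol:
            return x
        # keep the three best points, re-sorted by position
        pts = sorted([a, b, c, x], key=lambda p: -f(p))[:3]
        a, b, c = sorted(pts)
    return b

# Hypothetical coupling-vs-position curve peaking at L1 = 22.52 mm
# with k_eff = 42.59% (the values reported in the abstract).
peak = 22.52
k_eff = lambda L1: 0.4259 - 0.001 * (L1 - peak) ** 2
L1_opt = quadratic_peak(k_eff, 10.0, 20.0, 30.0)
```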
An approach to an acute emotional stress reference scale.
Garzon-Rey, J M; Arza, A; de-la-Camara, C; Lobo, A; Armario, A; Aguilo, J
2017-06-16
The clinical diagnosis aims to identify the degree of affectation of the psycho-physical state of the patient as a guide to therapeutic intervention. In stress, the lack of a measurement tool based on a reference makes it difficult to assess this degree quantitatively. The objective is to define, and perform a primary assessment of, a standard reference for measuring acute emotional stress from the markers identified as indicators of its degree. Psychometric tests and biochemical variables are, in general, the stress measurements most accepted by the scientific community, and each of them probably responds to different and complementary processes in the reaction to a stress stimulus. The proposed reference is a weighted mean of these indicators, with relative weights assigned according to a principal components analysis. An experimental study was conducted on 40 healthy young people subjected to the psychosocial stress stimulus of the Trier Social Stress Test in order to perform a primary assessment and consistency check of the proposed reference. The proposed scale clearly differentiates between the induced relaxed and stressed states. Accepting the subjectivity of the definition, and pending validation against new experimental data, the proposed standard differentiates between a relaxed state and the emotional stress state triggered by a moderate stimulus such as the Trier Social Stress Test. The scale is robust: variations in the percentage composition slightly affect the score but do not affect the valid differentiation between states.
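The construction of such a reference (standardize each marker, weight by first-principal-component loadings, take the weighted mean) can be sketched on synthetic data; the three "markers", the group shift, and the sample size below are invented for the example, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic markers (think cortisol, heart rate, a psychometric score):
# the stress condition shifts every marker upward. Illustrative only.
n = 40
relax = rng.normal(0.0, 1.0, (n, 3))
stress = rng.normal(1.5, 1.0, (n, 3))
X = np.vstack([relax, stress])

# Standardize, then take the loadings of the largest principal
# component of the correlation matrix as the relative weights.
Z = (X - X.mean(0)) / X.std(0)
evals, evecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
w = np.abs(evecs[:, -1])
w /= w.sum()                      # weights sum to 1
score = Z @ w                     # weighted-mean composite stress score

# The scale should separate the induced relaxed and stressed states.
gap = score[n:].mean() - score[:n].mean()
```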
2017-09-01
Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions, by Matthew D. Bouwense. (Only report-documentation-page fragments of this record survive.)
Data Analysis for the LISA Pathfinder Mission
NASA Technical Reports Server (NTRS)
Thorpe, James Ira
2009-01-01
The LTP (LISA Technology Package) is the core part of the Laser Interferometer Space Antenna (LISA) Pathfinder mission. The main goal of the mission is to study the sources of any disturbances that perturb the motion of the freely-falling test masses from their geodesic trajectories, as well as to test various technologies needed for LISA. The LTP experiment is designed as a sequence of experimental runs in which the performance of the instrument is studied and characterized under different operating conditions. In order to best optimize subsequent experimental runs, each run must be promptly analysed to ensure that the following ones make best use of the available knowledge of the instrument. To do this, all analyses must be designed and tested in advance of the mission and have sufficient built-in flexibility to account for unexpected results or behaviour. To support this activity, a robust and flexible data analysis software package is also required. This poster presents two of the main components that make up the data analysis effort: the data analysis software and the mock-data challenges used to validate analysis procedures and experiment designs.
Zhang, Tao; Wei, Dong-Qing; Chou, Kuo-Chen
2012-03-01
Comparative molecular field analysis (CoMFA) is a widely used 3D-QSAR method by which we can investigate the potential relation between the biological activity of compounds and their structural features. In this study, a new application of this approach is presented by combining molecular modeling with a newly developed pharmacophore model specific to the CYP1A2 active site. In constructing the model, we used molecular dynamics simulation and molecular docking to select sensible binding conformations for 17 CYP1A2 substrates based on the experimental data. Subsequently, the results obtained via the alignment of the binding conformations of the substrates were projected onto the active-site residues, upon which a simple blueprint of the active site was produced. It was validated by the experimental and computational results that the model did exhibit a high degree of rationality and provide useful insights into substrate binding. It is anticipated that our approach can be extended to investigate protein-ligand interactions for many other enzyme-catalyzed systems as well.
Modeling and Analysis of the Reverse Water Gas Shift Process for In-Situ Propellant Production
NASA Technical Reports Server (NTRS)
Whitlow, Jonathan E.
2000-01-01
This report focuses on the development of mathematical models and simulation tools developed for the Reverse Water Gas Shift (RWGS) process. This process is a candidate technology for oxygen production on Mars under the In-Situ Propellant Production (ISPP) project. An analysis of the RWGS process was performed using a material balance for the system. The material balance is very complex due to the downstream separations and subsequent recycle inherent with the process. A numerical simulation was developed for the RWGS process to provide a tool for analysis and optimization of experimental hardware, which will be constructed later this year at Kennedy Space Center (KSC). Attempts to solve the material balance for the system, which can be defined by 27 nonlinear equations, initially failed. A convergence scheme was developed which led to successful solution of the material balance, however the simplified equations used for the gas separation membrane were found insufficient. Additional more rigorous models were successfully developed and solved for the membrane separation. Sample results from these models are included in this report, with recommendations for experimental work needed for model validation.
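The convergence difficulty with recycle balances, and the kind of scheme that fixes it, can be shown on a one-variable toy: plain successive substitution diverges when |g'| > 1 at the solution, while under-relaxation restores convergence. The function and damping factor below are illustrative only; the actual RWGS balance couples 27 equations.

```python
import math

def solve_damped(g, x0, alpha=0.3, tol=1e-10, max_iter=500):
    """Damped successive substitution for a fixed point x = g(x):
    x_new = (1 - alpha) * x + alpha * g(x). The damping shrinks the
    effective slope of the iteration map to (1 - alpha) + alpha * g'."""
    x = x0
    for i in range(max_iter):
        x_new = (1 - alpha) * x + alpha * g(x)   # under-relaxation
        if abs(x_new - x) < tol:
            return x_new, i
        x = x_new
    raise RuntimeError("did not converge")

# Toy balance with |g'| > 1 at the root, so plain substitution
# (alpha = 1) oscillates and diverges, but damping converges.
g = lambda x: 3.0 * math.cos(x)
x_star, iters = solve_damped(g, 1.0)
```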
Cavity-coupled double-quantum dot at finite bias: Analogy with lasers and beyond
NASA Astrophysics Data System (ADS)
Kulkarni, Manas; Cotlet, Ovidiu; Türeci, Hakan E.
2014-09-01
We present a theoretical and experimental study of photonic and electronic transport properties of a voltage biased InAs semiconductor double quantum dot (DQD) that is dipole coupled to a superconducting transmission line resonator. We obtain the master equation for the reduced density matrix of the coupled system of cavity photons and DQD electrons accounting systematically for both the presence of phonons and the effect of leads at finite voltage bias. We subsequently derive analytical expressions for transmission, phase response, photon number, and the nonequilibrium steady-state electron current. We show that the coupled system under finite bias realizes an unconventional version of a single-atom laser and analyze the spectrum and the statistics of the photon flux leaving the cavity. In the transmission mode, the system behaves as a saturable single-atom amplifier for the incoming photon flux. Finally, we show that the back action of the photon emission on the steady-state current can be substantial. Our analytical results are compared to exact master equation results establishing regimes of validity of various analytical models. We compare our findings to available experimental measurements.
Song, Mingkai; Cui, Linlin; Kuang, Han; Zhou, Jingwei; Yang, Pengpeng; Zhuang, Wei; Chen, Yong; Liu, Dong; Zhu, Chenjie; Chen, Xiaochun; Ying, Hanjie; Wu, Jinglan
2018-08-10
An intermittent simulated moving bed (3F-ISMB) operation scheme, an extension of the 3W-ISMB to the non-linear adsorption region, has been introduced for the separation of a glucose, lactic acid and acetic acid ternary mixture. This work focuses on exploring the feasibility of the proposed process theoretically and experimentally. Firstly, the real 3F-ISMB model, coupled with the transport dispersive model (TDM) and the Modified-Langmuir isotherm, was established to build up the separation parameter plane. Subsequently, three operating conditions were selected from the plane to run the 3F-ISMB unit. The experimental results were used to verify the model. Afterwards, the influences of the various flow rates on the separation performance were investigated systematically by means of the validated 3F-ISMB model. The intermittently retained component, lactic acid, was finally obtained with a purity of 98.5%, a recovery of 95.5% and an average concentration of 38 g/L. The proposed 3F-ISMB process can efficiently separate a mixture with low selectivity into three fractions.
Intralaminar and Interlaminar Progressive Failure Analysis of Composite Panels with Circular Cutouts
NASA Technical Reports Server (NTRS)
Goyal, Vinay K.; Jaunky, Navin; Johnson, Eric R.; Ambur, Damodar
2002-01-01
A progressive failure methodology is developed and demonstrated to simulate the initiation and material degradation of a laminated panel due to intralaminar and interlaminar failures. Initiation of intralaminar failure can be by a matrix-cracking mode, a fiber-matrix shear mode, and a fiber failure mode. Subsequent material degradation is modeled using damage parameters for each mode to selectively reduce lamina material properties. The interlaminar failure mechanism such as delamination is simulated by positioning interface elements between adjacent sublaminates. A nonlinear constitutive law is postulated for the interface element that accounts for a multi-axial stress criteria to detect the initiation of delamination, a mixed-mode fracture criteria for delamination progression, and a damage parameter to prevent restoration of a previous cohesive state. The methodology is validated using experimental data available in the literature on the response and failure of quasi-isotropic panels with centrally located circular cutouts loaded into the postbuckling regime. Very good agreement between the progressive failure analyses and the experimental results is achieved if the failure analysis includes the interaction of intralaminar and interlaminar failures.
The Validation and Qualification Sciences Experimental Complex (VQSEC) at Sandia. (Only navigation fragments of this web record survive.)
Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2
NASA Technical Reports Server (NTRS)
Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)
1998-01-01
The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.
Experimental Design and Some Threats to Experimental Validity: A Primer
ERIC Educational Resources Information Center
Skidmore, Susan
2008-01-01
Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…
Fischer, Peter; Fischer, Julia; Weisweiler, Silke; Frey, Dieter
2010-12-01
We investigated whether different modes of decision making (deliberate, intuitive, distracted) affect subsequent confirmatory processing of decision-consistent and inconsistent information. Participants showed higher levels of confirmatory information processing when they made a deliberate or an intuitive decision versus a decision under distraction (Studies 1 and 2). As soon as participants have a cognitive (i.e., deliberate cognitive analysis) or affective (i.e., intuitive and gut feeling) reason for their decision, the subjective confidence in the validity of their decision increases, which results in increased levels of confirmatory information processing (Study 2). In contrast, when participants are distracted during decision making, they are less certain about the validity of their decision and thus are subsequently more balanced in the processing of decision-relevant information.
Evidence of Construct Validity in Published Achievement Tests.
ERIC Educational Resources Information Center
Nolet, Victor; Tindal, Gerald
Valid interpretation of test scores is the shared responsibility of the test designer and the test user. Test publishers must provide evidence of the validity of the decisions their tests are intended to support, while test users are responsible for analyzing this evidence and subsequently using the test in the manner indicated by the publisher.…
Protein adsorption in microengraving immunoassays.
Song, Qing
2015-10-16
Microengraving is a novel immunoassay for characterizing multiple protein secretions from single cells. During the immunoassay, characteristic diffusion and kinetic time scales τD and τK determine the time for molecular diffusion of proteins secreted from the activated single lymphocytes and their subsequent binding onto the glass slide surface, respectively. Our results demonstrate that molecular diffusion plays an important role in the early stage of protein adsorption dynamics, which shifts to a kinetically controlled mechanism in the later stage. Similar dynamic pathways are observed for protein adsorption, with significantly faster rates and rapid shifts in transport mechanisms, when C0* is increased a hundred times from 0.313 to 31.3. Theoretical adsorption isotherms follow the trend of the experimentally obtained data. The adsorption isotherms indicate that the amount of protein secreted from individual cells and subsequently captured on a clean glass slide surface increases monotonically with time. Our study directly validates that protein secretion rates can be quantified by the microengraving immunoassay. This will enable us to apply microengraving immunoassays to quantify secretion rates from 10⁴-10⁵ single cells in parallel, screen antigen-specific cells with the highest secretion rate for clonal expansion, and quantitatively reveal cellular heterogeneity within a small cell sample.
Protein Adsorption in Microengraving Immunoassays
Song, Qing
2015-01-01
Microengraving is a novel immunoassay for characterizing multiple protein secretions from single cells. During the immunoassay, characteristic diffusion and kinetic time scales τD and τK determine the time for molecular diffusion of proteins secreted from the activated single lymphocytes and subsequent binding onto the glass slide surface respectively. Our results demonstrate that molecular diffusion plays important roles in the early stage of protein adsorption dynamics which shifts to a kinetic controlled mechanism in the later stage. Similar dynamic pathways are observed for protein adsorption with significantly fast rates and rapid shifts in transport mechanisms when C0* is increased a hundred times from 0.313 to 31.3. Theoretical adsorption isotherms follow the trend of experimentally obtained data. Adsorption isotherms indicate that amount of proteins secreted from individual cells and subsequently captured on a clean glass slide surface increases monotonically with time. Our study directly validates that protein secretion rates can be quantified by the microengraving immunoassay. This will enable us to apply microengraving immunoassays to quantify secretion rates from 10⁴-10⁵ single cells in parallel, screen antigen-specific cells with the highest secretion rate for clonal expansion and quantitatively reveal cellular heterogeneity within a small cell sample. PMID:26501282
Methodological convergence of program evaluation designs.
Chacón-Moscoso, Salvador; Anguera, M Teresa; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa
2014-01-01
The dichotomous opposition between experimental/quasi-experimental and non-experimental/ethnographic studies persists but, despite the extensive use of non-experimental/ethnographic studies, the most systematic work on methodological quality has been developed from experimental and quasi-experimental studies. This hinders evaluators' and planners' practice of empirical program evaluation, a sphere in which the distinction between types of study is changing continually and is less clear. Based on the classical validity framework of experimental/quasi-experimental studies, we review the literature in order to analyze the convergence of design elements bearing on methodological quality in primary studies in systematic reviews and in ethnographic research. We specify the relevant design elements that should be taken into account in order to improve validity and generalization in program evaluation practice across different methodologies, from a practical and complementary methodological view, and we recommend ways to improve these design elements accordingly.
Simulation of thin aluminium-foil in the packaging industry
NASA Astrophysics Data System (ADS)
Andreasson, Eskil; Lindström, Tommy; Käck, Britta; Malmberg, Christoffer; Asp, Ann-Magret
2017-10-01
This work presents an approach for accounting for the anisotropic mechanical material behaviour in simulation models of the thin aluminium foil layer (≈10 µm) used in the packaging industry. The experimental results from uniaxial tensile tests are parameterised into an analytical expression, and the hardening slope is subsequently extrapolated well beyond the experimental data points in order to accommodate the locally high stresses present in the experiments at neck formation. An analytical expression of Ramberg-Osgood type is used to describe the non-linear mechanical behaviour. Moreover, a direct method makes it possible to translate the experimental uniaxial tensile test results into useful numerical material model parameters in Abaqus™. In addition to the extended material behaviour, including the plastic flow (i.e., hardening) valid after the onset of localisation, the described procedure can also capture the microscopic events (i.e., geometrical thinning) occurring during deformation of the aluminium foil. This method has previously been applied successfully by Petri Mäkelä to paperboard material [1]. The sound, parameterised engineering description of the mechanical material behaviour facilitates an efficient categorisation of different aluminium foil alloys and aids the identification of the correct anisotropic (RD/TD/45°) mechanical material behaviour derived from the physical testing.
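The Ramberg-Osgood expression mentioned above describes non-linear stress-strain behaviour as an elastic term plus a power-law plastic term. The abstract does not give the parameter values used, so the sketch below uses one common form of the relation with illustrative (hypothetical) numbers, not those fitted to the foil in the study:

```python
def ramberg_osgood_strain(stress, E, sigma0, alpha, n):
    """One common Ramberg-Osgood form:
    eps = s/E + alpha * (s/E) * (s/sigma0)**(n - 1)."""
    elastic = stress / E
    plastic = alpha * elastic * (stress / sigma0) ** (n - 1)
    return elastic + plastic

# Illustrative (hypothetical) parameters for a thin aluminium foil:
E = 70e3        # Young's modulus, MPa
sigma0 = 60.0   # reference (yield-like) stress, MPa
alpha, n = 0.3, 8.0

for s in (30.0, 60.0, 90.0):
    print(f"stress {s} MPa -> strain {ramberg_osgood_strain(s, E, sigma0, alpha, n):.5f}")
```

At the reference stress σ0 the plastic contribution equals α times the elastic strain, which gives a quick sanity check on any implementation; extrapolation beyond the measured data simply evaluates the same expression at higher stresses.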
Simulations of viscous and compressible gas-gas flows using high-order finite difference schemes
NASA Astrophysics Data System (ADS)
Capuano, M.; Bogey, C.; Spelt, P. D. M.
2018-05-01
A computational method for the simulation of viscous and compressible gas-gas flows is presented. It consists of solving the Navier-Stokes equations, together with a convection equation governing the motion of the interface between two gases, using high-order finite-difference schemes. A discontinuity-capturing methodology based on sensors and a spatial filter enables capturing shock waves and deformable interfaces. One-dimensional test cases are performed as validation and to justify choices in the numerical method. The results compare well with analytical solutions. Shock waves and interfaces are accurately propagated and remain sharp. Subsequently, two-dimensional flows are considered, including viscosity and thermal conductivity. In a Richtmyer-Meshkov instability generated on an air-SF6 interface, the influence of mesh refinement on the instability shape is studied, and the temporal variations of the instability amplitude are compared with experimental data. Finally, for a plane shock wave propagating in air and impacting a cylindrical bubble filled with helium or R22, numerical Schlieren pictures obtained using different grid refinements are found to compare well with experimental shadow-photographs. Mass conservation is verified from the temporal variations of the mass of the bubble. The mean velocities of the pressure waves and the bubble interface are similar to those obtained experimentally.
Experimental Validation Techniques for the Heleeos Off-Axis Laser Propagation Model
2010-03-01
Thesis by John Haiducek, 1st Lt, USAF (BS, Physics), March 2010, report no. AFIT/GAP/ENP/10-M07; approved for public release, distribution unlimited [cover-page and standard disclaimer text garbled in extraction]. Abstract (truncated): The High Energy Laser End-to-End
A research program in empirical computer science
NASA Technical Reports Server (NTRS)
Knight, J. C.
1991-01-01
During the grant reporting period our primary activities have been to begin preparation for the establishment of a research program in experimental computer science. The focus of research in this program will be safety-critical systems. Many questions that arise in the effort to improve software dependability can only be addressed empirically. For example, there is no way to predict the performance of the various proposed approaches to building fault-tolerant software. Performance models, though valuable, are parameterized and cannot be used to make quantitative predictions without experimental determination of underlying distributions. In the past, experimentation has been able to shed some light on the practical benefits and limitations of software fault tolerance. It is common, also, for experimentation to reveal new questions or new aspects of problems that were previously unknown. A good example is the Consistent Comparison Problem that was revealed by experimentation and subsequently studied in depth. The result was a clear understanding of a previously unknown problem with software fault tolerance. The purpose of a research program in empirical computer science is to perform controlled experiments in the area of real-time, embedded control systems. The goal of the various experiments will be to determine better approaches to the construction of the software for computing systems that have to be relied upon. As such it will validate research concepts from other sources, provide new research results, and facilitate the transition of research results from concepts to practical procedures that can be applied with low risk to NASA flight projects. The target of experimentation will be the production software development activities undertaken by any organization prepared to contribute to the research program. Experimental goals, procedures, data analysis and result reporting will be performed for the most part by the University of Virginia.
Reliability, Validity, and Usability of Data Extraction Programs for Single-Case Research Designs.
Moeyaert, Mariola; Maggin, Daniel; Verkuilen, Jay
2016-11-01
Single-case experimental designs (SCEDs) have been increasingly used in recent years to inform the development and validation of effective interventions in the behavioral sciences. An important aspect of this work has been the extension of meta-analytic and other statistical innovations to SCED data. Standard practice within SCED methods is to display data graphically, which requires subsequent users to extract the data, either manually or using data extraction programs. Previous research has examined the reliability and validity of data extraction programs, but typically at an aggregate level. Little is known, however, about the coding of individual data points. We focused on four different software programs that can be used for this purpose (i.e., Ungraph, DataThief, WebPlotDigitizer, and XYit), and examined the reliability of numeric coding, the validity compared with real data, and overall program usability. This study indicates that the reliability and validity of the retrieved data are independent of the specific software program, but are dependent on the individual single-case study graphs. Differences were found in program usability in terms of user friendliness, data retrieval time, and license costs. Ungraph and WebPlotDigitizer received the highest usability scores. DataThief was perceived as unacceptable, and the time needed to retrieve the data was double that of the other three programs. WebPlotDigitizer was the only program free to use. As a consequence, WebPlotDigitizer turned out to be the best option in terms of usability, time to retrieve the data, and costs, although the usability scores of Ungraph were also strong. © The Author(s) 2016.
Literature Mining for the Discovery of Hidden Connections between Drugs, Genes and Diseases
Frijters, Raoul; van Vugt, Marianne; Smeets, Ruben; van Schaik, René; de Vlieg, Jacob; Alkema, Wynand
2010-01-01
The scientific literature represents a rich source for retrieval of knowledge on associations between biomedical concepts such as genes, diseases and cellular processes. A commonly used method to establish relationships between biomedical concepts from literature is co-occurrence. Apart from its use in knowledge retrieval, the co-occurrence method is also well-suited to discover new, hidden relationships between biomedical concepts following a simple ABC-principle, in which A and C have no direct relationship, but are connected via shared B-intermediates. In this paper we describe CoPub Discovery, a tool that mines the literature for new relationships between biomedical concepts. Statistical analysis using ROC curves showed that CoPub Discovery performed well over a wide range of settings and keyword thesauri. We subsequently used CoPub Discovery to search for new relationships between genes, drugs, pathways and diseases. Several of the newly found relationships were validated using independent literature sources. In addition, new predicted relationships between compounds and cell proliferation were validated and confirmed experimentally in an in vitro cell proliferation assay. The results show that CoPub Discovery is able to identify novel associations between genes, drugs, pathways and diseases that have a high probability of being biologically valid. This makes CoPub Discovery a useful tool to unravel the mechanisms behind disease, to find novel drug targets, or to find novel applications for existing drugs. PMID:20885778
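The ABC principle described above is straightforward to implement: C is a "hidden" link of A when the two never co-occur directly but share co-occurring B-intermediates. A minimal sketch on a toy corpus follows; the concept names are invented, and CoPub Discovery additionally scores candidates with co-occurrence statistics and thesauri, which this sketch omits:

```python
# Toy "literature": each abstract is the set of biomedical concepts it mentions.
# All concept names here are invented for illustration.
abstracts = [
    {"drug_X", "gene_B1"},
    {"gene_B1", "disease_C"},
    {"drug_X", "gene_B2"},
    {"gene_B2", "disease_C"},
    {"drug_X", "disease_D"},   # a direct A-C link, hence not "hidden"
]

def cooccurring(concept):
    """Every concept that shares at least one abstract with `concept`."""
    out = set()
    for doc in abstracts:
        if concept in doc:
            out |= doc - {concept}
    return out

def hidden_links(a):
    """ABC principle: C is a hidden link of A when A and C never co-occur
    directly but are connected via shared B-intermediates."""
    direct = cooccurring(a)
    links = {}
    for b in direct:
        for c in cooccurring(b) - direct - {a}:
            links.setdefault(c, set()).add(b)
    return links

print(hidden_links("drug_X"))
```

Here disease_C is surfaced as a candidate association for drug_X via the two shared intermediates, while disease_D is excluded because the link is already explicit in the corpus.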
Modeling dioxygen reduction at multicopper oxidase cathodes.
Agbo, Peter; Heath, James R; Gray, Harry B
2014-10-01
We report a general kinetics model for catalytic dioxygen reduction on multicopper oxidase (MCO) cathodes. Our rate equation combines Butler-Volmer (BV) electrode kinetics and the Michaelis-Menten (MM) formalism for enzymatic catalysis, with the BV model accounting for interfacial electron transfer (ET) between the electrode surface and the MCO type 1 copper site. Extending the principles of MM kinetics to this system produced an analytical expression incorporating the effects of subsequent intramolecular ET and dioxygen binding to the trinuclear copper cluster into the cumulative model. We employed experimental electrochemical data on Thermus thermophilus laccase as benchmarks to validate our model, which we suggest will aid in the design of more efficient MCO cathodes. In addition, we demonstrate the model's utility in determining estimates for both the electronic coupling and average distance between the laccase type-1 active site and the cathode substrate.
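The abstract does not reproduce the paper's exact rate equation; the sketch below only illustrates the general idea of gating a cathodic Butler-Volmer exponential with a Michaelis-Menten saturation term for dioxygen binding. All parameter values are hypothetical:

```python
import math

F, R, T = 96485.0, 8.314, 298.15   # Faraday const, gas const, temperature (K)

def cathode_current(eta, o2_mM, i_max=1.0, alpha=0.5, km_mM=0.05):
    """Illustrative combination (not the paper's exact expression):
    a cathodic Butler-Volmer factor for interfacial ET to the type 1 copper,
    gated by Michaelis-Menten saturation for O2 binding at the trinuclear
    cluster. eta: overpotential (V, negative = cathodic drive)."""
    bv = math.exp(-alpha * F * eta / (R * T))   # cathodic branch only
    mm = o2_mM / (km_mM + o2_mM)                # O2 saturation, between 0 and 1
    return i_max * bv * mm

# Driving the cathode harder, or supplying more O2, increases the current:
print(cathode_current(-0.05, 0.25), cathode_current(-0.10, 0.25))
```

The multiplicative structure captures the qualitative behavior the model targets: interfacial ET limits the current at low overpotential, while substrate binding saturates it at high O2 concentration.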
Evaluation of MARC for the analysis of rotating composite blades
NASA Technical Reports Server (NTRS)
Bartos, Karen F.; Ernst, Michael A.
1993-01-01
The suitability of the MARC code for the analysis of rotating composite blades was evaluated using a four-task process. A nonlinear displacement analysis and subsequent eigenvalue analysis were performed on a rotating spring mass system to ensure that displacement-dependent centrifugal forces were accounted for in the eigenvalue analysis. Normal modes analyses were conducted on isotropic plates with various degrees of twist to evaluate MARC's ability to handle blade twist. Normal modes analyses were conducted on flat composite plates to validate the newly developed coupled COBSTRAN-MARC methodology. Finally, normal modes analyses were conducted on four composite propfan blades that were designed, analyzed, and fabricated at NASA Lewis Research Center. Results were compared with experimental data. The research documented herein presents MARC as a viable tool for the analysis of rotating composite blades.
Localized viscoelasticity measurements with untethered intravitreal microrobots.
Pokki, Juho; Ergeneman, Olgaç; Bergeles, Christos; Torun, Hamdi; Nelson, Bradley J
2012-01-01
Microrobots are a promising tool for medical interventions and micromanipulation. In this paper, we explore the concept of using microrobots for microrheology. Untethered magnetically actuated microrobots were used to characterize one of the most complex biofluids, the vitreous humor. In this work, we began by experimentally characterizing the viscoelastic properties of an artificial vitreous humor. For comparison, its properties were also measured using special microcantilevers in an atomic force microscope (AFM) setup. Subsequently, an untethered device was used to study the vitreous humor of a porcine eye, which is a valid ex-vivo model of a human eye. Its viscoelasticity model was extracted and was in agreement with the model of the artificial vitreous. The existing characterization methodology requires eye and vitreous humor dissection for the microrheology measurements. We envision that the method proposed here can be used in vivo.
MOSAIC - A space-multiplexing technique for optical processing of large images
NASA Technical Reports Server (NTRS)
Athale, Ravindra A.; Astor, Michael E.; Yu, Jeffrey
1993-01-01
A technique for Fourier processing of images larger than the space-bandwidth products of conventional or smart spatial light modulators and two-dimensional detector arrays is described. The technique involves a spatial combination of subimages displayed on individual spatial light modulators to form a phase-coherent image, which is subsequently processed with Fourier optical techniques. Because of the technique's similarity with the mosaic technique used in art, the processor used is termed an optical MOSAIC processor. The phase accuracy requirements of this system were studied by computer simulation. It was found that phase errors of less than lambda/8 did not degrade the performance of the system and that the system was relatively insensitive to amplitude nonuniformities. Several schemes for implementing the subimage combination are described. Initial experimental results demonstrating the validity of the mosaic concept are also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cada, G.F.; Solomon, J.A.; Loar, J.M.
This report provides a review of literature concerning the effects of sublethal stresses on predator-prey interactions in aquatic systems. In addition, the results of a preliminary laboratory study of the susceptibility of entrainment-stressed juvenile bluegill to striped bass predation are presented. Juvenile bluegill were exposed to thermal and physical entrainment stresses in the ORNL Power Plant Simulator and subsequently to predation by juvenile striped bass in a susceptibility-to-predation experimental design. None of the entrainment stresses tested (thermal shock, physical effects of pump and condenser passage, and a combination of thermal and physical shock) was found to significantly increase predation rates as compared to controls, and no significant interactions between thermal and physical stresses were detected. The validity of laboratory predator-prey studies and the application of indirect mortality information for setting protective standards and predicting environmental impacts are discussed.
CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave/Boundary-Layer Interaction
NASA Technical Reports Server (NTRS)
Davis, David O.
2015-01-01
Experimental investigations of specific flow phenomena, e.g., shock-wave/boundary-layer interactions (SWBLI), provide great insight into the flow behavior but often lack the details necessary to be useful as CFD validation experiments. Reasons include: (1) undefined boundary conditions, (2) inconsistent results, (3) undocumented 3D effects (centerline-only measurements), and (4) lack of uncertainty analysis. While there are a number of good subsonic experimental investigations that are sufficiently documented to be considered test cases for CFD and turbulence-model validation, the number of supersonic and hypersonic cases is much smaller. This was highlighted by Settles and Dodson's [1] comprehensive review of available supersonic and hypersonic experimental studies. In all, several hundred studies were considered for their database. Of these, over a hundred were subjected to rigorous acceptance criteria. Based on those criteria, only 19 (12 supersonic, 7 hypersonic) were considered of sufficient quality to be used for validation purposes. Aeschliman and Oberkampf [2] recognized the need to develop a specific methodology for experimental studies intended specifically for validation purposes.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-31
... factors as the approved models, are validated by experimental test data, and receive the Administrator's... stage of the MEP involves applying the model against a database of experimental test cases including..., particularly the requirement for validation by experimental test data. That guidance is based on the MEP's...
Bakire, Serge; Yang, Xinya; Ma, Guangcai; Wei, Xiaoxuan; Yu, Haiying; Chen, Jianrong; Lin, Hongjun
2018-01-01
Organic chemicals in the aquatic ecosystem may inhibit algae growth and subsequently lead to a decline in primary productivity. Growth inhibition tests are required for ecotoxicological assessments for regulatory purposes. In silico studies play an important role in replacing or reducing animal tests and in decreasing experimental expense, owing to their efficiency. In this work, a series of theoretical models was developed for predicting algal growth inhibition (log EC50) after 72 h of exposure to diverse chemicals. In total, 348 organic compounds were classified into five modes of toxic action using the Verhaar scheme. Each model was established using molecular descriptors that characterize electronic and structural properties. External validation and leave-one-out cross-validation proved the statistical robustness of the derived models; thus they can be used to predict log EC50 values of chemicals that lack authorized 72-h algal growth inhibition values. This work systematically studied algal growth inhibition according to toxic mode, and the developed model suite covers all five toxic modes. The outcome of this research will promote toxic mechanism analysis and is applicable to structurally diverse chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.
Error analysis of mechanical system and wavelength calibration of monochromator
NASA Astrophysics Data System (ADS)
Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong
2018-02-01
This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
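The sine-drive geometry underlying the analysis can be sketched as follows: with sin θ = x/L (x = screw travel, L = sine-bar length), a grating equation of the form mλ = 2d sin θ cos(φ/2) makes wavelength linear in screw travel, so a relative error in the sine-bar length produces an equal relative wavelength error. The groove spacing and geometry values below are illustrative, not those of the instrument in the study:

```python
import math

def wavelength_nm(x_mm, L_mm, d_nm=1000.0, half_angle_deg=10.0, order=1):
    """Sine drive: sin(theta) = x/L. Grating equation for a Czerny-Turner
    style mount: m * lambda = 2 * d * sin(theta) * cos(phi/2).
    d_nm = 1000 nm corresponds to a 1000 lines/mm grating (illustrative)."""
    sin_theta = x_mm / L_mm
    return 2.0 * d_nm * sin_theta * math.cos(math.radians(half_angle_deg)) / order

L_true, L_wrong = 100.0, 100.05     # a 50 um error in the sine-bar length
x = 25.0                            # screw travel, mm
lam_true = wavelength_nm(x, L_true)
lam_wrong = wavelength_nm(x, L_wrong)
# Since lambda is proportional to 1/L, a relative length error dL/L shifts
# every wavelength by about -lambda * dL / L; this systematic, wavelength-
# proportional error is what adjusting the sine-bar length must remove:
print(lam_true, lam_true - lam_wrong)
```

The linearity in x is what makes the sine-bar length the single dominant calibration parameter: correcting L corrects all wavelengths at once, consistent with the calibration strategy described in the abstract.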
Lee, Hyun; Mittal, Anuradha; Patel, Kavankumar; Gatuz, Joseph L; Truong, Lena; Torres, Jaime; Mulhearn, Debbie C; Johnson, Michael E
2014-01-01
We have used a combination of virtual screening (VS) and high-throughput screening (HTS) techniques to identify novel, non-peptidic small molecule inhibitors against human SARS-CoV 3CLpro. A structure-based VS approach integrating docking and pharmacophore based methods was employed to computationally screen 621,000 compounds from the ZINC library. The screening protocol was validated using known 3CLpro inhibitors and was optimized for speed, improved selectivity, and for accommodating receptor flexibility. Subsequently, a fluorescence-based enzymatic HTS assay was developed and optimized to experimentally screen approximately 41,000 compounds from four structurally diverse libraries chosen mainly based on the VS results. False positives from initial HTS hits were eliminated by a secondary orthogonal binding analysis using surface plasmon resonance (SPR). The campaign identified a reversible small molecule inhibitor exhibiting mixed-type inhibition with a K(i) value of 11.1 μM. Together, these results validate our protocols as suitable approaches to screen virtual and chemical libraries, and the newly identified compound reported in our study represents a promising structural scaffold to pursue for further SARS-CoV 3CLpro inhibitor development. Copyright © 2013. Published by Elsevier Ltd.
Accelerating cine-MR Imaging in Mouse Hearts Using Compressed Sensing
Wech, Tobias; Lemke, Angela; Medway, Debra; Stork, Lee-Anne; Lygate, Craig A; Neubauer, Stefan; Köstler, Herbert; Schneider, Jürgen E
2011-01-01
Purpose To combine global cardiac function imaging with compressed sensing (CS) in order to reduce scan time and to validate this technique in normal mouse hearts and in a murine model of chronic myocardial infarction. Materials and Methods To determine the maximally achievable acceleration factor, fully acquired cine data, obtained in sham and chronically infarcted (MI) mouse hearts were 2–4-fold undersampled retrospectively, followed by CS reconstruction and blinded image segmentation. Subsequently, dedicated CS sampling schemes were implemented at a preclinical 9.4 T magnetic resonance imaging (MRI) system, and 2- and 3-fold undersampled cine data were acquired in normal mouse hearts with high temporal and spatial resolution. Results The retrospective analysis demonstrated that an undersampling factor of three is feasible without impairing accuracy of cardiac functional parameters. Dedicated CS sampling schemes applied prospectively to normal mouse hearts yielded comparable left-ventricular functional parameters, and intra- and interobserver variability between fully and 3-fold undersampled data. Conclusion This study introduces and validates an alternative means to speed up experimental cine-MRI without the need for expensive hardware. J. Magn. Reson. Imaging 2011. © 2011 Wiley Periodicals, Inc. PMID:21932360
Ostracism Online: A social media ostracism paradigm.
Wolf, Wouter; Levordashka, Ana; Ruff, Johanna R; Kraaijeveld, Steven; Lueckmann, Jan-Matthis; Williams, Kipling D
2015-06-01
We describe Ostracism Online, a novel, social media-based ostracism paradigm designed to (1) keep social interaction experimentally controlled, (2) provide researchers with the flexibility to manipulate the properties of the social situation to fit their research purposes, (3) be suitable for online data collection, (4) be convenient for studying subsequent within-group behavior, and (5) be ecologically valid. After collecting data online, we compared the Ostracism Online paradigm with the Cyberball paradigm (Williams & Jarvis Behavior Research Methods, 38, 174-180, 2006) on need-threat and mood questionnaire scores (van Beest & Williams Journal of Personality and Social Psychology 91, 918-928, 2006). We also examined whether ostracized targets of either paradigm would be more likely to conform to their group members than if they had been included. Using a Bayesian analysis of variance to examine the individual effects of the different paradigms and to compare these effects across paradigms, we found analogous effects on need-threat and mood. Perhaps because we examined conformity to the ostracizers (rather than neutral sources), neither paradigm showed effects of ostracism on conformity. We conclude that Ostracism Online is a cost-effective, easy to use, and ecologically valid research tool for studying the psychological and behavioral effects of ostracism.
NASA Astrophysics Data System (ADS)
Widanage, W. D.; Barai, A.; Chouchelamane, G. H.; Uddin, K.; McGordon, A.; Marco, J.; Jennings, P.
2016-08-01
The Pulse Power Current (PPC) profile is often the signal of choice for obtaining the parameters of a Lithium-ion (Li-ion) battery Equivalent Circuit Model (ECM). Subsequently, a drive-cycle current profile is used as a validation signal. Such a profile, in contrast to a PPC, is more dynamic in both amplitude and frequency bandwidth. Modelling errors can occur when using PPC data for parametrisation, since the model is optimised over a narrower bandwidth than the validation profile. A signal more representative of a drive-cycle, while maintaining a degree of generality, is needed to reduce such modelling errors. In Part 1 of this 2-part paper a signal design technique, defined as a pulse-multisine, is presented. This superimposes a signal known as a multisine on a discharge, rest and charge base signal to achieve a profile more dynamic in amplitude and frequency bandwidth, and thus more similar to a drive-cycle. The signal improves modelling accuracy and reduces the experimentation time, per state-of-charge (SoC) and temperature, to several minutes, compared to several hours for a PPC experiment.
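A pulse-multisine of the kind described can be sketched as a discharge/rest/charge base current with a sum of sines superimposed. The frequencies, phases, and amplitudes below are illustrative placeholders, not the paper's optimised design (which chooses harmonics and phases to match a drive-cycle's bandwidth and crest factor):

```python
import math

def multisine(t, freqs_hz, amp, phases):
    """Sum of equal-amplitude sines at the chosen frequencies."""
    return sum(amp * math.sin(2.0 * math.pi * f * t + p)
               for f, p in zip(freqs_hz, phases))

def base_current(t, period=60.0, i_dis=2.0, i_chg=-1.0):
    """Discharge / rest / charge base profile (amplitudes illustrative)."""
    phase = t % period
    if phase < period / 3.0:
        return i_dis      # discharge third
    if phase < 2.0 * period / 3.0:
        return 0.0        # rest third
    return i_chg          # charge third

freqs = [0.1, 0.35, 0.8, 1.7]    # Hz: wider band than a step pulse excites
phases = [0.0, 1.1, 2.3, 4.0]    # fixed "random" phases to limit crest factor
dt, n = 0.05, 1200               # one 60 s period, sampled at 20 Hz
profile = [base_current(k * dt) + multisine(k * dt, freqs, 0.2, phases)
           for k in range(n)]
print(min(profile), max(profile))
```

Because the multisine rides on top of the base signal, the profile keeps the PPC-like net charge throughput per period while exciting the battery over a much broader frequency band, which is the property the abstract credits for the improved modelling accuracy.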
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doinikov, Alexander A., E-mail: doinikov@bsu.by; Bouakaz, Ayache; Sheeran, Paul S.
2014-10-15
Purpose: Perfluorocarbon (PFC) microdroplets, called phase-change contrast agents (PCCAs), are a promising tool in ultrasound imaging and therapy. Interest in PCCAs is motivated by the fact that they can be triggered to transition from the liquid state to the gas state by an externally applied acoustic pulse. This property opens up new approaches to applications in ultrasound medicine. Insight into the physics of vaporization of PFC droplets is vital for effective use of PCCAs and for anticipating bioeffects. PCCAs composed of volatile PFCs (with low boiling point) exhibit complex dynamic behavior: after vaporization by a short acoustic pulse, a PFC droplet turns into a vapor bubble which undergoes overexpansion and damped radial oscillation until settling to a final diameter. This behavior has not been well described theoretically so far. The purpose of our study is to develop an improved theoretical model that describes the vaporization dynamics of volatile PFC droplets and to validate this model by comparison with in vitro experimental data. Methods: The derivation of the model is based on applying the mathematical methods of fluid dynamics and thermodynamics to the process of the acoustic vaporization of PFC droplets. The used approach corrects shortcomings of the existing models. The validation of the model is carried out by comparing simulated results with in vitro experimental data acquired by ultrahigh speed video microscopy for octafluoropropane (OFP) and decafluorobutane (DFB) microdroplets of different sizes. Results: The developed theory allows one to simulate the growth of a vapor bubble inside a PFC droplet until the liquid PFC is completely converted into vapor, and the subsequent overexpansion and damped oscillations of the vapor bubble, including the influence of an externally applied acoustic pulse. 
To evaluate quantitatively the difference between simulated and experimental results, the L2-norm errors were calculated for all cases where the simulated and experimental results are compared. These errors were found to be in the ranges of 0.043–0.067 and 0.037–0.088 for OFP and DFB droplets, respectively. These values allow one to consider agreement between the simulated and experimental results as good. This agreement is attained by varying only 2 of 16 model parameters which describe the material properties of gaseous and liquid PFCs and the liquid surrounding the PFC droplet. The fitting parameters are the viscosity and the surface tension of the surrounding liquid. All other model parameters are kept invariable. Conclusions: The good agreement between the theoretical and experimental results suggests that the developed model is able to correctly describe the key physical processes underlying the vaporization dynamics of volatile PFC droplets. The necessity of varying the parameters of the surrounding liquid for fitting the experimental curves can be explained by the fact that the parts of the initial phospholipid shell of PFC droplets remain on the surface of vapor bubbles at the oscillatory stage and their presence affects the bubble dynamics.
Digital signal processing techniques for coherent optical communication
NASA Astrophysics Data System (ADS)
Goldfarb, Gilad
Coherent detection with subsequent digital signal processing (DSP) is developed, analyzed theoretically and numerically, and experimentally demonstrated in various fiber-optic transmission scenarios. The use of DSP in conjunction with coherent detection unleashes the benefits of coherent detection, which rely on the preservation of the full information of the incoming field. These benefits include high receiver sensitivity, the ability to achieve high spectral efficiency, and the use of advanced modulation formats. With the immense advancements in DSP speeds, many of the problems hindering the use of coherent detection in optical transmission systems have been eliminated. Most notably, DSP alleviates the need for hardware phase-locking and polarization tracking, which can now be achieved in the digital domain. The complexity previously associated with coherent detection is hence significantly diminished, and coherent detection is once again considered a feasible detection alternative. In this thesis, several aspects of coherent detection (with or without subsequent DSP) are addressed. Coherent detection is presented as a means to extend the dispersion limit of a duobinary signal using an analog decision-directed phase-locked loop. Analytical bit-error-ratio estimation for quadrature phase-shift keying signals is derived. To validate the promise of high spectral efficiency, the orthogonal-wavelength-division-multiplexing scheme is suggested. In this scheme the WDM channels are spaced at the symbol rate, thus achieving the spectral efficiency limit. Theory, simulation, and experimental results demonstrate the feasibility of this approach. Infinite impulse response filtering is shown to be an efficient alternative to finite impulse response filtering for chromatic dispersion compensation. Theory, design considerations, simulation, and experimental results relating to this topic are presented. 
Interaction between fiber dispersion and nonlinearity remains the last major challenge deterministic effects pose for long-haul optical data transmission. Experimental results which demonstrate the possibility to digitally mitigate both dispersion and nonlinearity are presented. Impairment compensation is achieved using backward propagation by implementing the split-step method. Efficient realizations of the dispersion compensation operator used in this implementation are considered. Infinite-impulse response and wavelet-based filtering are both investigated as a means to reduce the required computational load associated with signal backward-propagation. Possible future research directions conclude this dissertation.
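The backward-propagation scheme described above can be sketched with a minimal scalar split-step loop. This is an illustrative sketch, not the thesis implementation: the step size, dispersion, and nonlinearity values below are arbitrary, and the symmetric D/2-N-D/2 stepping is one standard form of the split-step method.

```python
import numpy as np

def split_step(field, n_steps, dz, beta2, gamma, dt):
    """Symmetric split-step propagation of a complex baseband field.

    Each step applies a half-step of dispersion (frequency domain), a full
    step of the Kerr nonlinearity (time domain), then another half-step of
    dispersion. Negating beta2 and gamma backward-propagates the field,
    i.e. digitally undoes the fiber, as in impairment compensation.
    """
    w = 2 * np.pi * np.fft.fftfreq(field.size, d=dt)    # angular frequencies
    half_disp = np.exp(0.5j * beta2 * w**2 * (dz / 2))  # half-step dispersion operator
    a = field.astype(complex)
    for _ in range(n_steps):
        a = np.fft.ifft(np.fft.fft(a) * half_disp)      # D/2
        a = a * np.exp(1j * gamma * np.abs(a)**2 * dz)  # N (Kerr phase rotation)
        a = np.fft.ifft(np.fft.fft(a) * half_disp)      # D/2
    return a
```

Because the nonlinear step preserves the field magnitude and each step is symmetric, backward propagation with negated parameters and the same step size exactly inverts the forward split-step, up to floating-point roundoff.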
Experimental investigation of an RNA sequence space
NASA Technical Reports Server (NTRS)
Lee, Youn-Hyung; Dsouza, Lisa; Fox, George E.
1993-01-01
Modern rRNAs are the historic consequence of an ongoing evolutionary exploration of a sequence space. These extant sequences belong to a special subset of the sequence space that is comprised only of those primary sequences that can validly perform the biological function(s) required of the particular RNA. If it were possible to readily identify all such valid sequences, stochastic predictions could be made about the relative likelihood of various evolutionary pathways available to an RNA. Herein an experimental system which can assess whether a particular sequence is likely to have validity as a eubacterial 5S rRNA is described. A total of ten naturally occurring, and hence known to be valid, sequences and two point mutants of unknown validity were used to test the usefulness of the approach. Nine of the ten valid sequences tested positive whereas both mutants tested as clearly defective. The tenth valid sequence gave results that would be interpreted as reflecting a borderline status were the answer not known. These results demonstrate that it is possible to experimentally determine which sequences in local regions of the sequence space are potentially valid 5S rRNAs.
Experimental Validation and Combustion Modeling of a JP-8 Surrogate in a Single Cylinder Diesel Engine
Shrestha, Amit; Joshi, Umashankar; Zheng, Ziliang; Badawy, Tamer; Henein, Naeim A.
2014-04-15
Validation of a two-component JP-8 surrogate in a single-cylinder diesel engine (Wayne State University, Detroit, MI, USA); validation parameters include ignition delay.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, Yassin; Anand, Nk
2016-03-30
A 1/16th scaled VHTR experimental model was constructed and the preliminary test was performed in this study. To produce benchmark data for CFD validation in the future, the facility was first run at partial operation with five pipes being heated. PIV was performed to extract the vector velocity field for three adjacent naturally convective jets at statistically steady state. A small recirculation zone was found between the pipes, and the jets entered the merging zone at 3 cm from the pipe outlet but diverged as the flow approached the top of the test geometry. Turbulence analysis shows the turbulence intensity peaked at 41-45% as the jets mixed. A sensitivity analysis confirmed that 1000 frames were sufficient to measure the statistically steady state. The results were then validated by extracting the flow rate from the PIV jet velocity profile and comparing it with an analytic flow rate and an ultrasonic flowmeter; all flow rates lie within the uncertainty of the other two methods for Tests 1 and 2. This test facility can be used for further analysis of naturally convective mixing, and eventually produce benchmark data for CFD validation for the VHTR during a PCC or DCC accident scenario. Next, a PTV study of 3000 images (1500 image pairs) was used to quantify the velocity field in the upper plenum. A sensitivity analysis confirmed that 1500 frames were sufficient to precisely estimate the flow. Subsequently, three Y-lines (3, 9, and 15 cm from the pipe outlet) were extracted to compare results over 50 to 1500 frames. The average velocity field and standard deviation error that accrued in the three different tests were calculated to assess repeatability. The error varied from 1 to 14%, depending on Y-elevation, and decreased as the flow moved farther from the output pipe. In addition, turbulent intensity was calculated and found to be high near the output.
Reynolds stresses and turbulent intensity were used to validate the data by comparing it with benchmark data. The experimental data gave the same pattern as the benchmark data. A turbulent single buoyant jet study was performed for the case of LOFC in the upper plenum of the scaled VHTR. Time-averaged profiles show that 3,000 frames of images were sufficient for the study up to second-order statistics. Self-similarity is an important feature of jets, since self-similar jet behavior is independent of Reynolds number and a sole function of geometry. Self-similarity was well observed in the axial velocity and velocity magnitude profiles regardless of z/D, whereas the radial velocity did not show any similarity pattern. The normal components of the Reynolds stresses have self-similarity within the expected range. The study shows that large vortices were observed close to the dome wall, indicating that the geometry of the VHTR has a significant impact on its safety and performance. Near the dome surface, large vortices were shown to inhibit the flows, resulting in reduced axial jet velocity. These vortices subsequently reduce the Reynolds stresses and their impact on the integrity of the VHTR upper plenum surface. Multiple-jet studies, including two, three, and five jets, were investigated.
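The frame-count sensitivity checks above (e.g. confirming that 1500 frames suffice) amount to asking when the running mean of a velocity sample settles near the full-ensemble mean. A minimal sketch of that idea follows; the function name, tolerance, and data are illustrative assumptions, not the study's actual procedure.

```python
import numpy as np

def frames_to_converge(samples, tol=0.01):
    """Smallest frame count after which the running mean of a velocity
    sample stays within a relative tolerance of the full-ensemble mean.

    Assumes a nonzero ensemble mean; `samples` holds one velocity
    component at a fixed point, one value per frame.
    """
    samples = np.asarray(samples, dtype=float)
    full_mean = samples.mean()
    running = np.cumsum(samples) / np.arange(1, samples.size + 1)
    ok = np.abs(running - full_mean) <= tol * abs(full_mean)
    bad = np.nonzero(~ok)[0]            # frames where the mean is still off
    return 1 if bad.size == 0 else int(bad[-1]) + 2
```

Running this over successively longer image sequences gives the kind of sufficiency argument quoted in the abstract.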
Method for exploratory cluster analysis and visualisation of single-trial ERP ensembles.
Williams, N J; Nasuto, S J; Saddy, J D
2015-07-30
The validity of ensemble averaging on event-related potential (ERP) data has been questioned, due to its assumption that the ERP is identical across trials. Thus, there is a need for preliminary testing for cluster structure in the data. We propose a complete pipeline for the cluster analysis of ERP data. To increase the signal-to-noise ratio (SNR) of the raw single-trials, we used a denoising method based on Empirical Mode Decomposition (EMD). Next, we used a bootstrap-based method to determine the number of clusters, through a measure called the Stability Index (SI). We then used a clustering algorithm based on a Genetic Algorithm (GA) to define initial cluster centroids for subsequent k-means clustering. Finally, we visualised the clustering results through a scheme based on Principal Component Analysis (PCA). After validating the pipeline on simulated data, we tested it on data from two experiments - a P300 speller paradigm on a single subject and a language processing study on 25 subjects. Results revealed evidence for the existence of 6 clusters in one experimental condition from the language processing study. Further, a two-way chi-square test revealed an influence of subject on cluster membership. Our analysis operates on denoised single-trials, the number of clusters is determined in a principled manner and the results are presented through an intuitive visualisation. Given the cluster structure in some experimental conditions, we suggest application of cluster analysis as a preliminary step before ensemble averaging.
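The clustering core of such a pipeline can be sketched with a plain k-means over a trials-by-features matrix. This is a sketch only: the paper seeds centroids with a genetic algorithm and chooses k via the bootstrap Stability Index, whereas here, purely for illustration, the first k trials seed the centroids.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Naive k-means over trials-by-features data X.

    Illustrative seeding: the first k rows become the initial centroids
    (the pipeline described above instead evolves seeds with a GA).
    """
    centroids = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each trial to its nearest centroid (squared Euclidean)
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(d2, axis=1)
        # move each centroid to the mean of its assigned trials
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

In the full pipeline, denoised single-trial ERPs would form the rows of X, and the resulting labels would feed the PCA-based visualisation.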
NASA Astrophysics Data System (ADS)
Petrie, Christian M.; Koyanagi, Takaaki; McDuffee, Joel L.; Deck, Christian P.; Katoh, Yutai; Terrani, Kurt A.
2017-08-01
The purpose of this work is to design an irradiation vehicle for testing silicon carbide (SiC) fiber-reinforced SiC matrix composite cladding materials under conditions representative of a light water reactor in order to validate thermo-mechanical models of stress states in these materials due to irradiation swelling and differential thermal expansion. The design allows for a constant tube outer surface temperature in the range of 300-350 °C under a representative high heat flux (~0.66 MW/m2) during one cycle of irradiation in an un-instrumented “rabbit” capsule in the High Flux Isotope Reactor. An engineered aluminum foil was developed to absorb the expansion of the cladding tubes, due to irradiation swelling, without changing the thermal resistance of the gap between the cladding and irradiation capsule. Finite-element analyses of the capsule were performed, and the models used to calculate thermal contact resistance were validated by out-of-pile testing and post-irradiation examination of the foils and passive SiC thermometry. Six irradiated cladding tubes (both monoliths and composites) were irradiated and subsequently disassembled in a hot cell. The calculated temperatures of passive SiC thermometry inside the capsules showed good agreement with temperatures measured post-irradiation, with two calculated temperatures falling within 10 °C of experimental measurements. The success of this design could lead to new opportunities for irradiation applications with materials that suffer from irradiation swelling, creep, or other dimensional changes that can affect the specimen temperature during irradiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrie, Christian M.; Koyanagi, Takaaki; McDuffee, Joel L.
The purpose of this work is to design an irradiation vehicle for testing silicon carbide (SiC) fiber-reinforced SiC matrix composite cladding materials under conditions representative of a light water reactor in order to validate thermo-mechanical models of stress states in these materials due to irradiation swelling and differential thermal expansion. The design allows for a constant tube outer surface temperature in the range of 300–350 °C under a representative high heat flux (~0.66 MW/m2) during one cycle of irradiation in an un-instrumented “rabbit” capsule in the High Flux Isotope Reactor. An engineered aluminum foil was developed to absorb the expansion of the cladding tubes, due to irradiation swelling, without changing the thermal resistance of the gap between the cladding and irradiation capsule. Finite-element analyses of the capsule were performed, and the models used to calculate thermal contact resistance were validated by out-of-pile testing and post-irradiation examination of the foils and passive SiC thermometry. Six irradiated cladding tubes (both monoliths and composites) were irradiated and subsequently disassembled in a hot cell. The calculated temperatures of passive SiC thermometry inside the capsules showed good agreement with temperatures measured post-irradiation, with two calculated temperatures falling within 10 °C of experimental measurements. Furthermore, the success of this design could lead to new opportunities for irradiation applications with materials that suffer from irradiation swelling, creep, or other dimensional changes that can affect the specimen temperature during irradiation.
Oberg, T
2007-01-01
The vapour pressure is the most important property of an anthropogenic organic compound in determining its partitioning between the atmosphere and the other environmental media. The enthalpy of vaporisation quantifies the temperature dependence of the vapour pressure and its value around 298 K is needed for environmental modelling. The enthalpy of vaporisation can be determined by different experimental methods, but estimation methods are needed to extend the current database and several approaches are available from the literature. However, these methods have limitations, such as a need for other experimental results as input data, a limited applicability domain, a lack of domain definition, and a lack of predictive validation. Here we have attempted to develop a quantitative structure-property relationship (QSPR) that has general applicability and is thoroughly validated. Enthalpies of vaporisation at 298 K were collected from the literature for 1835 pure compounds. The three-dimensional (3D) structures were optimised and each compound was described by a set of computationally derived descriptors. The compounds were randomly assigned into a calibration set and a prediction set. Partial least squares regression (PLSR) was used to estimate a low-dimensional QSPR model with 12 latent variables. The predictive performance of this model, within the domain of application, was estimated at n=560, q2Ext=0.968 and s=0.028 (log transformed values). The QSPR model was subsequently applied to a database of 100,000+ structures, after a similar 3D optimisation and descriptor generation. Reliable predictions can be reported for compounds within the previously defined applicability domain.
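The predictive performance figure reported above (q2Ext) is, in a common QSAR formulation, one minus the prediction error sum of squares on the external set divided by that set's sum of squares about the training-set mean. The sketch below uses that convention; note that several q² variants exist in the literature, so this is an assumption about the formula, not taken verbatim from the paper.

```python
def q2_external(y_true, y_pred, y_train_mean):
    """External predictive q2 = 1 - PRESS / SS.

    PRESS: squared prediction error on the external (prediction) set.
    SS: squared deviation of the external observations from the
    TRAINING-set mean (one common QSAR convention).
    """
    press = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    ss = sum((yt - y_train_mean) ** 2 for yt in y_true)
    return 1.0 - press / ss
```

A value of 1 means perfect external prediction; 0 means the model predicts no better than the training mean.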
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fort, James A.; Pfund, David M.; Sheen, David M.
2007-04-01
The MFDRC was formed in 1998 to advance the state-of-the-art in simulating multiphase turbulent flows by developing advanced computational models for gas-solid flows that are experimentally validated over a wide range of industrially relevant conditions. The goal was to transfer the resulting validated models to interested US commercial CFD software vendors, who would then propagate the models as part of new code versions to their customers in the US chemical industry. Since the lack of detailed data sets at industrially relevant conditions is the major roadblock to developing and validating multiphase turbulence models, a significant component of the work involved flow measurements on an industrial-scale riser contributed by Westinghouse, which was subsequently installed at SNL. Model comparisons were performed against these datasets by LANL. A parallel Office of Industrial Technology (OIT) project within the consortium made similar comparisons between riser measurements and models at NETL. Measured flow quantities of interest included volume fraction, velocity, and velocity-fluctuation profiles for both gas and solid phases at various locations in the riser. Some additional techniques were required for these measurements beyond what was currently available. PNNL’s role on the project was to work with the SNL experimental team to develop and test two new measurement techniques, acoustic tomography and millimeter-wave velocimetry. Acoustic tomography is a promising technique for gas-solid flow measurements in risers and PNNL has substantial related experience in this area. PNNL is also active in developing millimeter wave imaging techniques, and this technology presents an additional approach to make desired measurements.
PNNL supported the advanced-diagnostics development part of this project by evaluating these techniques, then adapting the selected technologies to bulk gas-solids flows and implementing them for testing in the SNL riser testbed.
Chaudhury, Sidhartha; Abdulhameed, Mohamed Diwan M.; Singh, Narender; Tawa, Gregory J.; D’haeseleer, Patrik M.; Zemla, Adam T.; Navid, Ali; Zhou, Carol E.; Franklin, Matthew C.; Cheung, Jonah; Rudolph, Michael J.; Love, James; Graf, John F.; Rozak, David A.; Dankmeyer, Jennifer L.; Amemiya, Kei; Daefler, Simon; Wallqvist, Anders
2013-01-01
In the future, we may be faced with the need to provide treatment for an emergent biological threat against which existing vaccines and drugs have limited efficacy or availability. To prepare for this eventuality, our objective was to use a metabolic network-based approach to rapidly identify potential drug targets and prospectively screen and validate novel small-molecule antimicrobials. Our target organism was the fully virulent Francisella tularensis subspecies tularensis Schu S4 strain, a highly infectious intracellular pathogen that is the causative agent of tularemia and is classified as a category A biological agent by the Centers for Disease Control and Prevention. We proceeded with a staggered computational and experimental workflow that used a strain-specific metabolic network model, homology modeling and X-ray crystallography of protein targets, and ligand- and structure-based drug design. Selected compounds were subsequently filtered based on physiological-based pharmacokinetic modeling, and we selected a final set of 40 compounds for experimental validation of antimicrobial activity. We began screening these compounds in whole bacterial cell-based assays in biosafety level 3 facilities in the 20th week of the study and completed the screens within 12 weeks. Six compounds showed significant growth inhibition of F. tularensis, and we determined their respective minimum inhibitory concentrations and mammalian cell cytotoxicities. The most promising compound had a low molecular weight, was non-toxic, and abolished bacterial growth at 13 µM, with putative activity against pantetheine-phosphate adenylyltransferase, an enzyme involved in the biosynthesis of coenzyme A, encoded by gene coaD. The novel antimicrobial compounds identified in this study serve as starting points for lead optimization, animal testing, and drug development against tularemia. 
Our integrated in silico/in vitro approach had an overall 15% success rate in terms of active versus tested compounds over an elapsed time period of 32 weeks, from pathogen strain identification to selection and validation of novel antimicrobial compounds. PMID:23704901
ERIC Educational Resources Information Center
Rossi, Robert Joseph
Methods drawn from four logical theories associated with studies of inductive processes are applied to the assessment and evaluation of experimental episode construct validity. It is shown that this application provides for estimates of episode informativeness with respect to the person examined in terms of the construct and to the construct…
The first experiments in SST-1
NASA Astrophysics Data System (ADS)
Pradhan, S.; Khan, Z.; Tanna, V. L.; Sharma, A. N.; Doshi, K. J.; Prasad, U.; Masand, H.; Kumar, Aveg; Patel, K. B.; Bhandarkar, M. K.; Dhongde, J. R.; Shukla, B. K.; Mansuri, I. A.; Varadarajulu, A.; Khristi, Y. S.; Biswas, P.; Gupta, C. N.; Sharma, D. K.; Raval, D. C.; Srinivasan, R.; Pandya, S. P.; Atrey, P. K.; Sharma, P. K.; Patel, P. J.; Patel, H. S.; Santra, P.; Parekh, T. J.; Dhanani, K. R.; Paravastu, Y.; Pathan, F. S.; Chauhan, P. K.; Khan, M. S.; Tank, J. K.; Panchal, P. N.; Panchal, R. N.; Patel, R. J.; George, S.; Semwal, P.; Gupta, P.; Mahesuriya, G. I.; Sonara, D. P.; Jayswal, S. P.; Sharma, M.; Patel, J. C.; Varmora, P. P.; Patel, D. J.; Srikanth, G. L. N.; Christian, D. R.; Garg, A.; Bairagi, N.; Babu, G. R.; Panchal, A. G.; Vora, M. M.; Singh, A. K.; Sharma, R.; Raju, D.; Kulkarni, S. V.; Kumar, M.; Manchanda, R.; Joisa, S.; Tahiliani, K.; Pathak, S. K.; Patel, K. M.; Nimavat, H. D.; Shah, P. R.; Chudasma, H. H.; Raval, T. Y.; Sharma, A. L.; Ojha, A.; Parghi, B. R.; Banaudha, M.; Makwana, A. R.; Chowdhuri, M. B.; Ramaiya, N.; kumar, A.; Raval, J. V.; Gupta, S.; Purohit, S.; Kaur, R.; Adhiya, A. N.; Jha, R.; Kumar, S.; Nagora, U. C.; Siju, V.; Thomas, J.; Chaudhari, V. R.; Patel, K. G.; Ambulkar, K. K.; Dalakoti, S.; Virani, C. G.; Parmar, P. R.; Thakur, A. L.; Das, A.; Bora, D.; the SST-1 Team
2015-10-01
A steady state superconducting tokamak (SST-1) has been commissioned after the successful experimental and engineering validations of its critical sub-systems. During the ‘engineering validation phase’ of SST-1, the cryostat was demonstrated to be leak-tight in all operational scenarios, the 80 K thermal shields were demonstrated to be uniformly cooled without regions of ‘thermal runaway and hot spots’, and the superconducting toroidal field magnets were demonstrated to be cooled to their nominal operational conditions and charged up to 1.5 T of the field at the major radius. The engineering validations further demonstrated the assembled SST-1 machine shell to be a graded, stress-strain optimized and distributed thermo-mechanical device, apart from the integrated vacuum vessel being validated to be UHV compatible. Subsequently, ‘field error components’ in SST-1 were measured to be acceptable for plasma discharges. A successful breakdown was obtained in SST-1 in June 2013, assisted with electron cyclotron pre-ionization in the second harmonic mode, thus marking the ‘first plasma’ in SST-1 and the arrival of SST-1 into the league of contemporary steady state devices. Subsequent to the first plasma, successful repeatable plasma start-ups with E ≈ 0.4 V m-1, and plasma current in excess of 70 kA for 400 ms assisted with electron cyclotron heating pre-ionization at a field of 1.5 T, have so far been achieved in SST-1. Lengthening the plasma pulse duration with lower hybrid current drive, confinement and transport in SST-1 plasmas, and magnetohydrodynamic activities typical of large aspect ratio SST-1 discharges are presently being investigated in SST-1.
In parallel, SST-1 has uniquely demonstrated reliable cryo-stable high field operation of superconducting TF magnets in the two-phase cooling mode, operation of vapour-cooled current leads with cold gas instead of liquid helium, and an order-of-magnitude lower dc joint resistance in superconducting magnet winding packs with high transport currents. SST-1 is also continually being upgraded with first wall integration, superconducting central solenoid installation and over-loaded MgB2-brass based current leads. Phase-1 of the SST-1 upgrade is scheduled for the first half of 2015, after which long pulse plasma experiments in both circular and elongated configurations have been planned in SST-1.
Drug Use Disorder (DUD) Questionnaire: Scale Development and Validation
ERIC Educational Resources Information Center
Scherer, Michael; Furr-Holden, C. Debra; Voas, Robert B.
2013-01-01
Background: Despite the ample interest in the measurement of substance abuse and dependence, obtaining biological samples from participants as a means to validate a scale is considered time and cost intensive and is, subsequently, largely overlooked. Objectives: To report the psychometric properties of the drug use disorder (DUD) questionnaire…
McAuliff, Bradley D; Kovera, Margaret Bull; Nunez, Gabriel
2009-06-01
This study examined the ability of jury-eligible community members (N = 248) to detect internal validity threats in psychological science presented during a trial. Participants read a case summary in which an expert testified about a study that varied in internal validity (valid, missing control group, confound, and experimenter bias) and ecological validity (high, low). Ratings of expert evidence quality and expert credibility were higher for the valid versus missing control group versions only. Internal validity did not influence verdict or ratings of plaintiff credibility and no differences emerged as a function of ecological validity. Expert evidence quality, expert credibility, and plaintiff credibility were positively correlated with verdict. Implications for the scientific reasoning literature and for trials containing psychological science are discussed.
Validation of WIND for a Series of Inlet Flows
NASA Technical Reports Server (NTRS)
Slater, John W.; Abbott, John M.; Cavicchi, Richard H.
2002-01-01
Validation assessments compare WIND CFD simulations to experimental data for a series of inlet flows ranging in Mach number from low subsonic to hypersonic. The validation procedures follow the guidelines of the AIAA. The WIND code performs well in matching the available experimental data. The assessments demonstrate the use of WIND and provide confidence in its use for the analysis of aircraft inlets.
Experimental Validation of a Thermoelastic Model for SMA Hybrid Composites
NASA Technical Reports Server (NTRS)
Turner, Travis L.
2001-01-01
This study presents results from experimental validation of a recently developed model for predicting the thermomechanical behavior of shape memory alloy hybrid composite (SMAHC) structures, composite structures with an embedded SMA constituent. The model captures the material nonlinearity of the material system with temperature and is capable of modeling constrained, restrained, or free recovery behavior from experimental measurement of fundamental engineering properties. A brief description of the model and analysis procedures is given, followed by an overview of a parallel effort to fabricate and characterize the material system of SMAHC specimens. Static and dynamic experimental configurations for the SMAHC specimens are described and experimental results for thermal post-buckling and random response are presented. Excellent agreement is achieved between the measured and predicted results, fully validating the theoretical model for constrained recovery behavior of SMAHC structures.
Sauter, Jennifer L; Grogg, Karen L; Vrana, Julie A; Law, Mark E; Halvorson, Jennifer L; Henry, Michael R
2016-02-01
The objective of the current study was to establish a process for validating immunohistochemistry (IHC) protocols for use on the Cellient cell block (CCB) system. Thirty antibodies were initially tested on CCBs using IHC protocols previously validated on formalin-fixed, paraffin-embedded tissue (FFPE). Cytology samples were split to generate thrombin cell blocks (TCB) and CCBs. IHC was performed in parallel. Antibody immunoreactivity was scored, and concordance or discordance in immunoreactivity between the TCBs and CCBs for each sample was determined. Criteria for validation of an antibody were defined as concordant staining in expected positive and negative cells, in at least 5 samples each, and concordance in at least 90% of the samples total. Antibodies that failed initial validation were retested after alterations in IHC conditions. Thirteen of the 30 antibodies (43%) did not meet initial validation criteria. Of those, 8 antibodies (calretinin, clusters of differentiation [CD] 3, CD20, CDX2, cytokeratin 20, estrogen receptor, MOC-31, and p16) were optimized for CCBs and subsequently validated. Despite several alterations in conditions, 3 antibodies (Ber-EP4, D2-40, and paired box gene 8 [PAX8]) were not successfully validated. Nearly one-half of the antibodies tested in the current study failed initial validation using IHC conditions that were established in the study laboratory for FFPE material. Although some antibodies subsequently met validation criteria after optimization of conditions, a few continued to demonstrate inadequate immunoreactivity. These findings emphasize the importance of validating IHC protocols for methanol-fixed tissue before clinical use and suggest that optimization for alcohol fixation may be needed to obtain adequate immunoreactivity on CCBs.
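The validation criteria quoted above (concordant staining in at least 5 expected-positive and 5 expected-negative samples, plus at least 90% concordance overall) reduce to a small predicate. The sketch below encodes one reading of those criteria; the data representation is an assumption, not the study's actual scoring sheet.

```python
def antibody_validated(samples):
    """samples: list of (expected_positive, concordant) boolean pairs,
    one per split cytology sample.

    Interpretation (an assumption, per the criteria quoted above):
    >= 5 concordant expected-positive samples, >= 5 concordant
    expected-negative samples, and >= 90% concordance overall.
    """
    pos_conc = sum(1 for expected_pos, conc in samples if expected_pos and conc)
    neg_conc = sum(1 for expected_pos, conc in samples if not expected_pos and conc)
    total_conc = sum(1 for _, conc in samples if conc)
    return (pos_conc >= 5 and neg_conc >= 5
            and total_conc / len(samples) >= 0.9)
```

An antibody failing this predicate would, per the study's process, go back for optimization of IHC conditions and retesting.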
Compact Heat Exchanger Design and Testing for Advanced Reactors and Advanced Power Cycles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Xiaodong; Zhang, Xiaoqin; Christensen, Richard
The goal of the proposed research is to demonstrate the thermal hydraulic performance of innovative surface geometries in compact heat exchangers used as intermediate heat exchangers (IHXs) and recuperators for the supercritical carbon dioxide (s-CO2) Brayton cycle. Printed-circuit heat exchangers (PCHEs) are the primary compact heat exchangers of interest. The overall objectives are: To develop optimized PCHE designs for different working fluid combinations including helium to s-CO2, liquid salt to s-CO2, sodium to s-CO2, and liquid salt to helium; To experimentally and numerically investigate thermal performance, thermal stress and failure mechanism of PCHEs under various transients; and To study diffusion bonding techniques for elevated-temperature alloys and examine post-test material integrity of the PCHEs. The project objectives were accomplished by defining and executing five different tasks corresponding to these specific objectives. The first task involved a thorough literature review and a selection of IHX candidates with different surface geometries as well as a summary of prototypic operational conditions. The second task involved optimization of PCHE design with numerical analyses of thermal-hydraulic performances and mechanical integrity. The subsequent task dealt with the development of testing facilities and engineering design of PCHE to be tested in s-CO2 fluid conditions. The next task involved experimental investigation and validation of the thermal-hydraulic performances and thermal stress distribution of prototype PCHEs manufactured with particular surface geometries. The last task involved an investigation of diffusion bonding process and posttest destructive testing to validate mechanical design methods adopted in the design process.
The experimental work utilized two test facilities at The Ohio State University (OSU), the existing High-Temperature Helium Test Facility (HTHF) and the newly developed s-CO2 test loop (STL), as well as the s-CO2 test facility at the University of Wisconsin–Madison (UW).
Predicting bone strength with ultrasonic guided waves
Bochud, Nicolas; Vallet, Quentin; Minonzio, Jean-Gabriel; Laugier, Pascal
2017-01-01
Recent bone quantitative ultrasound approaches exploit the multimode waveguide response of long bones for assessing properties such as cortical thickness and stiffness. Clinical applications remain, however, challenging, as the impact of soft tissue on guided-wave characteristics is not fully understood yet. In particular, it must be clarified whether soft tissue must be incorporated in the waveguide models needed to infer reliable cortical bone properties. We hypothesize that an inverse procedure using a free plate model can be applied to retrieve the thickness and stiffness of cortical bone from experimental data. This approach is first validated on a series of laboratory-controlled measurements performed on assemblies of bone- and soft tissue-mimicking phantoms and then on in vivo measurements. The accuracy of the estimates is evaluated by comparison with reference values. To further support our hypothesis, these estimates are subsequently inserted into a bilayer model to test its accuracy. Our results show that the free plate model allows retrieving reliable waveguide properties, despite the presence of soft tissue. They also suggest that the more sophisticated bilayer model, although more precise in predicting experimental data in the forward problem, could turn out to be hardly manageable for solving the inverse problem. PMID:28256568
Mansouri, K; Grulke, C M; Richard, A M; Judson, R S; Williams, A J
2016-11-01
The increasing availability of large collections of chemical structures and associated experimental data provides an opportunity to build robust QSAR models for applications in different fields. One common concern is the quality of both the chemical structure information and associated experimental data. Here we describe the development of an automated KNIME workflow to curate and correct errors in the structure and identity of chemicals using the publicly available PHYSPROP physicochemical properties and environmental fate datasets. The workflow first assembles structure-identity pairs using up to four provided chemical identifiers, including chemical name, CASRNs, SMILES, and MolBlock. Problems detected included errors and mismatches in chemical structure formats, identifiers and various structure validation issues, including hypervalency and stereochemistry descriptions. Subsequently, a machine learning procedure was applied to evaluate the impact of this curation process. The performance of QSAR models built on only the highest-quality subset of the original dataset was compared with the larger curated and corrected dataset. The latter showed statistically improved predictive performance. The final workflow was used to curate the full list of PHYSPROP datasets, and is being made publicly available for further usage and integration by the scientific community.
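One concrete identifier-level check of the kind such a curation workflow performs is validating a CASRN's check digit, which by the CAS convention equals the weighted sum of the other digits modulo 10. The sketch below is a stand-alone illustration, not part of the KNIME workflow described above.

```python
import re

def valid_casrn(casrn):
    """Check a CAS Registry Number's format and check digit.

    The check digit equals the sum of the other digits, each multiplied
    by its 1-based position counted from the right, modulo 10.
    """
    m = re.fullmatch(r"(\d{2,7})-(\d{2})-(\d)", casrn)
    if not m:
        return False
    digits = [int(c) for c in m.group(1) + m.group(2)]
    check = sum(i * d for i, d in enumerate(reversed(digits), start=1)) % 10
    return check == int(m.group(3))
```

For water, CASRN 7732-18-5, the weighted sum of 8, 1, 2, 3, 7, 7 (right to left) is 105, and 105 mod 10 matches the check digit 5.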
Background of SAM atom-fraction profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ernst, Frank
Atom-fraction profiles acquired by SAM (scanning Auger microprobe) have important applications, e.g. in the context of alloy surface engineering by infusion of carbon or nitrogen through the alloy surface. However, such profiles often exhibit an artifact in the form of a background whose level anti-correlates with the local atom fraction. This article presents a theory explaining this phenomenon as a consequence of the way in which random noise in the spectrum propagates into the discretized differentiated spectrum that is used for quantification. The resulting model of "energy channel statistics" leads to a useful semi-quantitative background reduction procedure, which is validated by applying it to simulated data. Subsequently, the procedure is applied to an example of experimental SAM data. The analysis leads to conclusions regarding optimum experimental acquisition conditions. The proposed method of background reduction is based on general principles and should be useful for a broad variety of applications. Highlights: • Atom-fraction-depth profiles of carbon measured by scanning Auger microprobe. • Strong background that varies with the local carbon concentration. • Correction is needed, e.g. for quantitative comparison with simulations. • A quantitative theory explains the background. • Provides a background removal strategy and practical advice for acquisition.
Chuprom, Julalak; Bovornreungroj, Preeyanuch; Ahmad, Mehraj; Kantachote, Duangporn; Dueramae, Sawitree
2016-06-01
A new potent halophilic protease producer, Halobacterium sp. strain LBU50301, was isolated from salt-fermented fish samples (budu) and identified by phenotypic analysis and 16S rDNA gene sequencing. Thereafter, a sequential statistical strategy was used to optimize halophilic protease production from Halobacterium sp. strain LBU50301 by shake-flask fermentation. The classical one-factor-at-a-time (OFAT) approach determined that gelatin was the best nitrogen source. Based on a Plackett-Burman (PB) experimental design, gelatin, MgSO4·7H2O, NaCl, and pH significantly influenced halophilic protease production. A central composite design (CCD) determined the optimum levels of the medium components. Subsequently, an 8.78-fold increase in halophilic protease yield (156.22 U/mL) was obtained, compared with that produced in the original medium (17.80 U/mL). Validation experiments proved the adequacy and accuracy of the model, and the predicted values agreed well with the experimental values. An overall 13-fold increase in halophilic protease yield (231.33 U/mL) was achieved using a 3 L laboratory fermenter and the optimized medium.
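The screening stage and the reported yield arithmetic can be sketched briefly. A full two-level factorial design stands in here for the actual Plackett-Burman matrix, which the abstract does not reproduce:

```python
from itertools import product

# Two-level screening design over the four significant factors (a full
# factorial as a stand-in for the Plackett-Burman matrix), plus the
# reported fold-increase arithmetic.
factors = ["gelatin", "MgSO4.7H2O", "NaCl", "pH"]
design = list(product([-1, +1], repeat=len(factors)))  # 2^4 = 16 runs
print(len(design), "runs")

# 156.22 U/mL (optimized medium) vs. 17.80 U/mL (original medium)
fold = 156.22 / 17.80
print(round(fold, 2))
```

The ratio reproduces the 8.78-fold improvement quoted in the abstract; a true Plackett-Burman design would need fewer runs than the full factorial for the same screening purpose.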
Design, fabrication, and testing of a low frequency MEMS piezoelectromagnetic energy harvester
NASA Astrophysics Data System (ADS)
Fernandes, Egon; Martin, Blake; Rua, Isabel; Zarabi, Sid; Debéda, Hélène; Nairn, David; Wei, Lan; Salehian, Armaghan
2018-03-01
This paper details a power solution for smart grid applications to replace batteries by harvesting the electromagnetic energy from a current-carrying wire. A MEMS piezoelectromagnetic energy harvester has been fabricated using PZT screen-printing technology with a centrally-supported meandering geometry. The energy-harvesting device employs a symmetric geometry to increase its power output by reducing the effects of the torsional modes, and the resultant overall strain nodes in the system subsequently reduce the complexity of the electrode fabrication. The unit is modelled using COMSOL to determine mode shapes and frequency response functions. A 12.7 mm by 14.7 mm unit is fabricated by screen-printing 75 μm-thick PZT on a stainless steel substrate and then experimentally tested to validate the FEA results. Experimentally, the harvester is shown to produce 9 μW from a wire carrying 7 A while operating at a distance of 6.5 mm from the wire. The design of the current work yields a greater normalized power density than other MEMS-based piezoelectromagnetic devices and shows great potential relative to larger devices that use bulk or thin-film piezoelectrics.
Compressive strength of delaminated aerospace composites.
Butler, Richard; Rhead, Andrew T; Liu, Wenli; Kontis, Nikolaos
2012-04-28
An efficient analytical model is described which predicts the value of compressive strain below which buckle-driven propagation of delaminations in aerospace composites will not occur. An extension of this efficient strip model which accounts for propagation transverse to the direction of applied compression is derived. In order to validate the strip model, a number of laminates were artificially delaminated, producing a range of thin anisotropic sub-laminates made up of 0°, ±45° and 90° plies that displayed varied buckling and delamination propagation phenomena. These laminates were subsequently subjected to experimental compression testing and nonlinear finite element analysis (FEA) using cohesive elements. Comparison of strip model results with those from experiments indicates that the model can conservatively predict the strain at which propagation occurs to within 10 per cent of experimental values, provided (i) the thin-film assumption made in the modelling methodology holds and (ii) full elastic coupling effects do not play a significant role in the post-buckling of the sub-laminate. With such provision, the model was more accurate and produced fewer non-conservative results than FEA. The accuracy and efficiency of the model make it well suited to application in optimum ply-stacking algorithms to maximize laminate strength.
NASA Astrophysics Data System (ADS)
Banerjee, Ipsita
2009-03-01
Knowledge of the pathways governing cellular differentiation to a specific phenotype would enable the generation of desired cell fates through careful alteration of the governing network by adequate manipulation of the cellular environment. With this aim, we have developed a novel method to reconstruct the underlying regulatory architecture of a differentiating cell population from discrete temporal gene expression data. We exploit an inherent feature of biological networks, namely sparsity, in formulating the network reconstruction problem as a bi-level mixed-integer programming problem. The formulation optimizes the network topology at the upper level and the network connectivity strength at the lower level. The method is first validated on in silico data before being applied to the complex system of embryonic stem (ES) cell differentiation. This formulation enables efficient identification of the underlying network topology, which could accurately predict steps necessary for directing differentiation to subsequent stages. Concurrent experimental verification demonstrated excellent agreement with model predictions.
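A toy sketch of the sparsity idea (not the paper's bi-level mixed-integer formulation): score candidate edges from temporal expression data and retain only the strongest ones. Gene names and expression values below are invented for illustration.

```python
# Sparsity-driven toy network inference: correlate gene expression time
# series and keep only edges above a cutoff. This illustrates the sparsity
# principle only; the published method solves a bi-level MIP instead.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Discrete temporal expression of three hypothetical genes.
expr = {
    "g1": [1.0, 2.0, 3.0, 4.0],
    "g2": [2.1, 4.0, 6.2, 7.9],   # tracks g1
    "g3": [5.0, 1.0, 4.0, 2.0],   # unrelated
}
edges = {}
genes = list(expr)
for i, a in enumerate(genes):
    for b in genes[i + 1:]:
        edges[(a, b)] = pearson(expr[a], expr[b])

# Sparsity constraint: retain only edges with |r| above a cutoff.
network = [e for e, r in edges.items() if abs(r) > 0.95]
print(network)
```

Only the g1-g2 edge survives the cutoff, giving the kind of sparse topology the optimization formulation searches for directly.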
NASA Astrophysics Data System (ADS)
Ma, Hao; Li, Chen; Tang, Shixiong; Yan, Jiaqiang; Alatas, Ahmet; Lindsay, Lucas; Sales, Brian C.; Tian, Zhiting
2016-12-01
Cubic boron arsenide (BAs) was predicted to have an exceptionally high thermal conductivity (k) of ~2000 W m-1 K-1 at room temperature, comparable to that of diamond, based on first-principles calculations. Subsequent experimental measurements, however, only obtained a k of ~200 W m-1 K-1. To gain insight into this discrepancy, we measured phonon dispersion of single-crystal BAs along high symmetry directions using inelastic x-ray scattering and compared these with first-principles calculations. Based on the measured phonon dispersion, we have validated the theoretical prediction of a large frequency gap between acoustic and optical modes and bunching of acoustic branches, which were considered the main reasons for the predicted ultrahigh k. This supports its potential to be a super thermal conductor if very-high-quality single-crystal samples can be synthesized.
Modaresi, Seyed Mohamad Sadegh; Faramarzi, Mohammad Ali; Soltani, Arash; Baharifar, Hadi; Amani, Amir
2014-01-01
Streptokinase is a potent fibrinolytic agent which is widely used in the treatment of deep vein thrombosis (DVT), pulmonary embolism (PE) and acute myocardial infarction (MI). A major limitation of this enzyme is its short biological half-life in the blood stream. Our previous report showed that complexing streptokinase with chitosan could be a solution to overcome this limitation. The aim of this research was to establish an artificial neural network (ANN) model for identifying the main factors influencing the loading efficiency of streptokinase, an essential parameter determining the efficacy of the enzyme. Three variables, namely chitosan concentration, buffer pH and enzyme concentration, were considered as inputs, and the loading efficiency was used as the output. Subsequently, the experimental data were modeled and the model was validated against a set of unseen data. The developed model indicated chitosan concentration as probably the most important factor, having an inverse effect on the loading efficiency. PMID:25587327
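The kind of three-input, one-output feedforward network described above can be sketched as a plain forward pass. The weights here are arbitrary illustrative values, not the trained model from the study.

```python
import math

# Minimal forward pass of a small feedforward ANN: three inputs (chitosan
# concentration, buffer pH, enzyme concentration), one tanh hidden layer,
# one linear output (loading efficiency). All weights are illustrative.

def forward(x, w_hidden, b_hidden, w_out, b_out):
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

x = [0.5, 7.4, 0.2]  # hypothetical (normalized) input values
w_hidden = [[0.3, -0.1, 0.8], [-0.5, 0.2, 0.1]]
b_hidden = [0.0, 0.1]
w_out = [0.6, -0.4]
b_out = 0.5
print(round(forward(x, w_hidden, b_hidden, w_out, b_out), 3))
```

In practice the weights would be fitted to the experimental loading-efficiency data and the model validated against held-out samples, as the abstract describes.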
Evaluating two process scale chromatography column header designs using CFD.
Johnson, Chris; Natarajan, Venkatesh; Antoniou, Chris
2014-01-01
Chromatography is an indispensable unit operation in the downstream processing of biomolecules. Scaling of chromatographic operations typically involves a significant increase in the column diameter. At this scale, the flow distribution within a packed bed could be severely affected by the distributor design in process scale columns. Different vendors offer process scale columns with varying design features. The effect of these design features on the flow distribution in packed beds and the resultant effect on column efficiency and cleanability needs to be properly understood in order to prevent unpleasant surprises on scale-up. Computational Fluid Dynamics (CFD) provides a cost-effective means to explore the effect of various distributor designs on process scale performance. In this work, we present a CFD tool that was developed and validated against experimental dye traces and tracer injections. Subsequently, the tool was employed to compare and contrast two commercially available header designs. © 2014 American Institute of Chemical Engineers.
Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B
2013-03-23
Mass spectrometry (MS) has evolved to become the primary high-throughput tool for proteomics-based biomarker discovery. To date, multiple challenges in protein MS data analysis remain: management of large-scale and complex data sets; MS peak identification and indexing; and high-dimensional peak differential analysis with concurrent statistical tests and the associated false discovery rate (FDR) control. "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets and identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution, which provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. The presented web application supports large-scale online MS data uploading and analysis with a simple user interface. This bioinformatic tool will facilitate the discovery of potential protein biomarkers using MS.
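FDR control over many concurrent peak tests is commonly implemented with the Benjamini-Hochberg procedure; a minimal sketch follows (the portal's exact statistical machinery is not specified in the abstract, so this is a representative choice):

```python
# Benjamini-Hochberg FDR control: sort p-values, find the largest rank k
# with p_(k) <= (k/m) * alpha, and declare the k smallest p-values
# significant. A standard choice for high-dimensional peak testing.

def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of p-values declared significant at FDR level alpha."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    max_rank = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * alpha:
            max_rank = rank
    return sorted(order[:max_rank])

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]))
```

With these eight p-values only the first two survive at alpha = 0.05, whereas naive per-test thresholding at 0.05 would have accepted five.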
Theoretical analysis of ozone generation by pulsed dielectric barrier discharge in oxygen
NASA Astrophysics Data System (ADS)
Wei, L. S.; Zhou, J. H.; Wang, Z. H.; Cen, K. F.
2007-08-01
The use of very short high-voltage pulses combined with a dielectric layer results in high-energy electrons that dissociate oxygen molecules into atoms, which are a prerequisite for the subsequent production of ozone by collisions with oxygen molecules and third bodies. The production of ozone depends on both the electrical and the physical parameters. For ozone generation by pulsed dielectric barrier discharge in oxygen, a mathematical model, which describes the relation between ozone concentration and those parameters that are of importance in its design, is developed according to dimensional analysis theory. A formula considering the ozone destruction factor is derived for predicting the characteristics of the ozone generation, within the range from the corona inception voltage to the gap breakdown voltage. The trend showing the dependence of the concentration of ozone in oxygen on these parameters generally agrees with the experimental results, thus confirming the validity of the mathematical model.
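The generation/destruction balance described above can be illustrated with a simple first-order model. This is a hedged stand-in, not the paper's dimensional-analysis formula, and the rates are hypothetical:

```python
import math

# First-order generation/destruction balance for ozone concentration:
#   dC/dt = G - kd * C   =>   C(t) = (G / kd) * (1 - exp(-kd * t))
# G and kd below are arbitrary illustrative values, not fitted parameters.
G, kd = 2.0, 0.5  # generation rate and destruction factor (arbitrary units)

def concentration(t):
    return G / kd * (1.0 - math.exp(-kd * t))

print(round(concentration(10.0), 3))  # approaches the plateau G / kd = 4.0
```

The saturation toward G/kd captures, in miniature, why a destruction factor must appear in any formula predicting steady ozone concentration.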
SEE rate estimation based on diffusion approximation of charge collection
NASA Astrophysics Data System (ADS)
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. This paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the need for arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.
Interaction of 〈1 0 0〉 dislocation loops with dislocations studied by dislocation dynamics in α-iron
NASA Astrophysics Data System (ADS)
Shi, X. J.; Dupuy, L.; Devincre, B.; Terentyev, D.; Vincent, L.
2015-05-01
Interstitial dislocation loops with Burgers vector of 〈1 0 0〉 type are formed in α-iron under neutron or heavy ion irradiation. As the density and size of these loops increase with radiation dose and temperature, these defects are thought to play a key role in the hardening and subsequent embrittlement of iron-based steels. The aim of the present work is to study the pinning strength of the loops on mobile dislocations. Prior to running massive Dislocation Dynamics (DD) simulations involving an experimentally representative array of radiation defects and dislocations, the DD code and its parameterization are validated by comparing individual loop-dislocation reactions with those obtained from direct atomistic Molecular Dynamics (MD) simulations. Several loop-dislocation reaction mechanisms are successfully reproduced, as well as the values of the unpinning stress needed to detach mobile dislocations from the defects.
Gas Core Reactor Numerical Simulation Using a Coupled MHD-MCNP Model
NASA Technical Reports Server (NTRS)
Kazeminezhad, F.; Anghaie, S.
2008-01-01
This report provides analysis of using two head-on magnetohydrodynamic (MHD) shocks to achieve supercritical nuclear fission in an axially elongated cylinder filled with UF4 gas as an energy source for deep space missions. The motivation for each aspect of the design is explained and supported by theory and numerical simulations. A subsequent report will provide detail on relevant experimental work to validate the concept. Here the focus is on the theory of and simulations for the proposed gas core reactor conceptual design, from the onset of shock generation to the supercritical state achieved when the shocks collide. The MHD model is coupled to a standard nuclear code (MCNP) to observe the neutron flux and fission power attributed to the supercritical state brought about by the shock collisions. Throughout the modeling, realistic parameters are used for the initial ambient gaseous state and currents to ensure a resulting supercritical state upon shock collision.
Advanced Interrogation of Fiber-Optic Bragg Grating and Fabry-Perot Sensors with KLT Analysis
Tosi, Daniele
2015-01-01
The Karhunen-Loeve Transform (KLT) is applied to accurate detection of optical fiber sensors in the spectral domain. By processing an optical spectrum, although coarsely sampled, through the KLT, and subsequently processing the obtained eigenvalues, it is possible to decode a plurality of optical sensor results. The KLT returns higher accuracy than other demodulation techniques, despite coarse sampling, and exhibits higher resilience to noise. Three case studies of KLT-based processing are presented, representing most of the current challenges in optical fiber sensing: (1) demodulation of individual sensors, such as Fiber Bragg Gratings (FBGs) and Fabry-Perot Interferometers (FPIs); (2) demodulation of dual (FBG/FPI) sensors; (3) application of reverse KLT to isolate different sensors operating on the same spectrum. A simulative outline is provided to demonstrate the KLT operation and estimate performance; a brief experimental section is also provided to validate accurate FBG and FPI decoding. PMID:26528975
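The eigenvalue-processing idea can be illustrated on toy data: the KLT of a set of coarsely sampled spectra reduces to an eigen-decomposition of their covariance matrix, and a dominant eigenvalue captures the correlated signal. The samples below are invented, and power iteration stands in for a full eigen-solver.

```python
# Toy KLT sketch: covariance of coarsely sampled "spectra" followed by
# power iteration for the dominant eigenvalue. Data are illustrative only.

def covariance(samples):
    n, d = len(samples), len(samples[0])
    means = [sum(s[j] for s in samples) / n for j in range(d)]
    return [[sum((s[i] - means[i]) * (s[j] - means[j]) for s in samples) / n
             for j in range(d)] for i in range(d)]

def dominant_eigenvalue(mat, iters=100):
    v = [1.0] * len(mat)
    lam = 0.0
    for _ in range(iters):
        w = [sum(m * x for m, x in zip(row, v)) for row in mat]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Four acquisitions of a two-channel spectrum with fully correlated channels.
spectra = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]]
cov = covariance(spectra)
lam = dominant_eigenvalue(cov)
print(round(lam, 3))
```

Because the two channels are perfectly correlated, all of the variance collapses into a single dominant eigenvalue, which is the energy-compaction property the KLT demodulation exploits even with coarse sampling.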
Sgrignani, Jacopo; Grazioso, Giovanni; De Amici, Marco
2016-09-13
The fast and constant development of drug resistant bacteria represents a serious medical emergency. To overcome this problem, the development of drugs with new structures and modes of action is urgently needed. In this work, we investigated, at the atomistic level, the mechanisms of hydrolysis of Meropenem by OXA-23, a class D β-lactamase, combining unbiased classical molecular dynamics and umbrella sampling simulations with classical force field-based and quantum mechanics/molecular mechanics potentials. Our calculations provide a detailed structural and dynamic picture of the molecular steps leading to the formation of the Meropenem-OXA-23 covalent adduct, the subsequent hydrolysis, and the final release of the inactive antibiotic. In this mechanistic framework, the predicted activation energy is in good agreement with experimental kinetic measurements, validating the expected reaction path.
Renal Aging: Causes and Consequences
Hughes, Jeremy; Ferenbach, David A.
2017-01-01
Individuals aged >65 years are the fastest-expanding population demographic throughout the developed world. Consequently, more aged patients than ever before are receiving diagnoses of impaired renal function and nephrosclerosis, the age-associated histologic changes in the kidneys. Recent studies have shown that the aged kidney undergoes a range of structural changes and has altered transcriptomic, hemodynamic, and physiologic behavior at rest and in response to renal insults. These changes impair the ability of the kidney to withstand and recover from injury, contributing to the high susceptibility of the aged population to AKI and their increased propensity to develop subsequent progressive CKD. In this review, we examine these features of the aged kidney and explore the various validated and putative pathways contributing to the changes observed with aging in both experimental animal models and humans. We also discuss the potential for additional study to increase understanding of the aged kidney and lead to novel therapeutic strategies. PMID:28143966
Modeling of skin cooling, blood flow, and optical properties in wounds created by electrical shock
NASA Astrophysics Data System (ADS)
Nguyen, Thu T. A.; Shupp, Jeffrey W.; Moffatt, Lauren T.; Jordan, Marion H.; Jeng, James C.; Ramella-Roman, Jessica C.
2012-02-01
High-voltage electrical injuries may lead to irreversible tissue damage or even death. Research on tissue injury following high-voltage shock is needed and may yield stage-appropriate therapy to reduce amputation rates. One of the mechanisms by which electricity damages tissue is Joule heating, with subsequent protein denaturation. Previous studies have shown that blood flow has a significant effect on the cooling rate of heated subcutaneous tissue. To assess thermal damage in tissue, this study focused on monitoring changes in temperature and optical properties of skin next to high-voltage wounds. The burns were created between the left forelimb and right hindlimb extremities of adult male Sprague-Dawley rats by a 1000 VDC shock delivery system. A thermal camera was utilized to record temperature variation during the exposure. The experimental results were then validated using a thermal-electric finite element model (FEM).
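A minimal one-dimensional finite-difference sketch shows the heat diffusion underlying such cooling curves. The study used a full thermal-electric FEM with blood-flow effects; the geometry, diffusivity, and initial hot spot below are illustrative assumptions.

```python
# 1D explicit finite-difference sketch of tissue cooling after Joule heating.
# Parameters are illustrative, not the study's validated FEM values.
alpha = 1.4e-7            # approx. thermal diffusivity of soft tissue, m^2/s
dx, dt = 1e-3, 1.0        # 1 mm grid spacing, 1 s time step
r = alpha * dt / dx ** 2  # = 0.14; explicit scheme is stable for r <= 0.5

T = [37.0] * 21           # 2 cm of tissue at body temperature (deg C)
T[10] = 60.0              # localized hot spot from Joule heating

for _ in range(600):      # 10 minutes of cooling, boundaries held at 37 C
    T = [T[0]] + [T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
                  for i in range(1, len(T) - 1)] + [T[-1]]

print(round(max(T), 2))   # peak temperature after the cooling interval
```

The peak decays toward the 37 C baseline as heat spreads; adding a perfusion sink term to the update would mimic the blood-flow-dependent cooling rate the abstract highlights.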
A layered modulation method for pixel matching in online phase measuring profilometry
NASA Astrophysics Data System (ADS)
Li, Hongru; Feng, Guoying; Bourgade, Thomas; Yang, Peng; Zhou, Shouhuan; Asundi, Anand
2016-10-01
An online phase measuring profilometry with a new layered modulation method for pixel matching is presented. In this method, and in contrast with previous modulation matching methods, the captured images are enhanced by Retinex theory for a better modulation distribution, and all the different layer modulation masks are fully used to determine the displacement of a rectilinearly moving object. High, medium and low modulation masks are obtained by performing binary segmentation with an iterative Otsu method. The final shifting pixels are calculated based on the centroid concept, and after that the aligned fringe patterns can be extracted from each frame. After applying the Stoilov algorithm and a series of subsequent operations, the profile of the object on a translation stage is reconstructed. All procedures are carried out automatically, without setting specific parameters in advance. Numerical simulations are detailed, and experimental results verify the validity and feasibility of the proposed approach.
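The binary segmentation step relies on Otsu's criterion of maximizing between-class variance; here is a minimal sketch on illustrative grey levels (the paper applies an iterative variant to modulation masks):

```python
# Otsu thresholding: pick the threshold that maximizes the between-class
# variance w0*w1*(m0 - m1)^2. Input values are illustrative grey levels.

def otsu_threshold(values):
    """Return the threshold maximizing between-class variance."""
    best_t, best_var = None, -1.0
    for t in sorted(set(values))[:-1]:
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        w0, w1 = len(lo) / len(values), len(hi) / len(values)
        m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

vals = [12, 14, 13, 90, 95, 92, 11, 94]
print(otsu_threshold(vals))
```

On this bimodal sample the threshold lands between the dark and bright clusters, which is exactly the property that lets the method separate high, medium and low modulation regions without manually chosen parameters.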
Estimation of road profile variability from measured vehicle responses
NASA Astrophysics Data System (ADS)
Fauriat, W.; Mattrand, C.; Gayton, N.; Beakou, A.; Cembrzynski, T.
2016-05-01
When assessing the statistical variability of fatigue loads acting throughout the life of a vehicle, the question of the variability of road roughness naturally arises, as both quantities are strongly related. For car manufacturers, gathering information on the environment in which vehicles evolve is a long and costly but necessary process for adapting their products to durability requirements. In the present paper, a data processing algorithm is proposed to estimate the road profiles covered by a given vehicle from the dynamic responses measured on that vehicle. The algorithm, based on Kalman filtering theory, aims at solving a so-called inverse problem in a stochastic framework. It is validated using data obtained from both simulations and real measurements. The proposed method is subsequently applied to extract valuable statistical information on road roughness from an existing load characterisation campaign carried out by Renault within one of its markets.
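The Kalman filtering idea at the core of the algorithm can be sketched in scalar form. The paper's state-space model is far richer; the random-walk state model and noise values below are illustrative:

```python
# Scalar Kalman filter: predict a random-walk state, then correct it with
# each noisy measurement. Process/measurement noise values are illustrative.

def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q               # predict: random-walk state, variance grows
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update state with the measurement residual
        p *= (1 - k)         # update (shrink) the error variance
        estimates.append(x)
    return estimates

zs = [1.2, 0.9, 1.1, 1.0, 0.95, 1.05]
est = kalman_1d(zs)
print(round(est[-1], 2))
```

The estimate converges toward the underlying level of the noisy measurements; the road-profile problem replaces this scalar state with a vector of profile and vehicle states driven by a measured-response observation model.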
PARC Navier-Stokes code upgrade and validation for high speed aeroheating predictions
NASA Technical Reports Server (NTRS)
Liver, Peter A.; Praharaj, Sarat C.; Seaford, C. Mark
1990-01-01
Applications of the PARC full Navier-Stokes code for hypersonic flowfield and aeroheating predictions around blunt bodies such as the Aeroassist Flight Experiment (AFE) and Aeroassisted Orbital Transfer Vehicle (AOTV) are evaluated. Two-dimensional/axisymmetric and three-dimensional perfect gas versions of the code were upgraded and tested against benchmark wind tunnel cases of hemisphere-cylinder, three-dimensional AFE forebody, and axisymmetric AFE and AOTV aerobrake/wake flowfields. PARC calculations are in good agreement with experimental data and with the results of similar computer codes. Difficulties encountered in flowfield and heat transfer predictions due to the effects of grid density, boundary conditions such as the singular stagnation-line axis, and artificial dissipation terms are presented, together with the subsequent improvements made to the code. The experience gained with the perfect gas code is currently being utilized in applications of an equilibrium air real gas PARC version developed at REMTECH.
Prediction of enzymatic pathways by integrative pathway mapping
Wichelecki, Daniel J; San Francisco, Brian; Zhao, Suwen; Rodionov, Dmitry A; Vetting, Matthew W; Al-Obaidi, Nawar F; Lin, Henry; O'Meara, Matthew J; Scott, David A; Morris, John H; Russel, Daniel; Almo, Steven C; Osterman, Andrei L
2018-01-01
The functions of most proteins are yet to be determined. The function of an enzyme is often defined by its interacting partners, including its substrate and product, and its role in larger metabolic networks. Here, we describe a computational method that predicts the functions of orphan enzymes by organizing them into a linear metabolic pathway. Given candidate enzyme and metabolite pathway members, this aim is achieved by finding those pathways that satisfy structural and network restraints implied by varied input information, including that from virtual screening, chemoinformatics, genomic context analysis, and ligand-binding experiments. We demonstrate this integrative pathway mapping method by predicting the L-gulonate catabolic pathway in Haemophilus influenzae Rd KW20. The prediction was subsequently validated experimentally by enzymology, crystallography, and metabolomics. Integrative pathway mapping by satisfaction of structural and network restraints is extensible to molecular networks in general and thus formally bridges the gap between structural biology and systems biology. PMID:29377793
Mechanical low-frequency filter via modes separation in 3D periodic structures
NASA Astrophysics Data System (ADS)
D'Alessandro, L.; Belloni, E.; Ardito, R.; Braghin, F.; Corigliano, A.
2017-12-01
This work presents a strategy to design three-dimensional elastic periodic structures endowed with complete bandgaps, the first of which is ultra-wide; in terms of wave transmission in the finite structure, the top limits of the first two bandgaps are overstepped. Subsequent bandgaps are thus merged, approaching the behaviour of a three-dimensional low-pass mechanical filter. This result relies on a proper organization of the modal characteristics, and it is validated by performing numerical and analytical calculations over the unit cell. A prototype of the analysed layout, made of Nylon by means of additive manufacturing, is experimentally tested to assess the transmission spectrum of the finite structure, obtaining good agreement with numerical predictions. The presented strategy paves the way for the development of a class of periodic structures to be used in robust and reliable wave attenuation over a wide frequency band.
Moris, Demetrios; Avgerinos, Efthymios; Makris, Marinos; Bakoyiannis, Chris; Pikoulis, Emmanuel; Georgopoulos, Sotirios
2014-01-01
Abdominal aortic aneurysm (AAA) is a prevalent and potentially life-threatening disease. Early detection by screening programs and subsequent surveillance have been shown to be effective at reducing the risk of mortality due to aneurysm rupture. The aim of this review is to summarize the developments in the literature concerning the latest biomarkers (from 2008 to date) and their potential screening and therapeutic value. Our search included human studies in English and found numerous novel biomarkers under research, which were categorized into 6 groups. Most of these studies are either experimental or hampered by their low numbers of patients. We conclude that currently no specific laboratory markers allow screening for the disease, monitoring its progression, or assessing the results of treatment. Further studies, including studies in larger patient groups, are required in order to validate biomarkers as cost-effective tools in AAA disease. PMID:24967416
Rayne, Sierra; Forest, Kaya
2016-09-18
The air-water partition coefficients (Kaw) for 86 large polycyclic aromatic hydrocarbons and their unsaturated relatives were estimated using high-level G4(MP2) gas and aqueous phase calculations with the SMD, IEFPCM-UFF, and CPCM solvation models. An extensive method validation effort was undertaken which involved confirming that, via comparisons to experimental enthalpies of formation, gas-phase energies at the G4(MP2) level for the compounds of interest were at or near thermochemical accuracy. Investigations of the three solvation models using a range of neutral and ionic compounds suggested that while no clear preferential solvation model could be chosen in advance for accurate Kaw estimates of the target compounds, the employment of increasingly higher levels of theory would result in lower Kaw errors. Subsequent calculations on the polycyclic aromatic and unsaturated hydrocarbons at the G4(MP2) level revealed excellent agreement for the IEFPCM-UFF and CPCM models against limited available experimental data. The IEFPCM-UFF-G4(MP2) and CPCM-G4(MP2) solvation energy calculation approaches are anticipated to give Kaw estimates within typical experimental ranges, each having general Kaw errors of less than 0.5 log10 units. When applied to other large organic compounds, the method should allow development of a broad and reliable Kaw database for multimedia environmental modeling efforts on various contaminants.
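The thermodynamic link behind such estimates can be sketched as the conversion from a computed solvation (hydration) free energy to log10(Kaw). Sign conventions vary between studies, and the example free energy below is hypothetical:

```python
import math

# Dimensionless air-water partition coefficient from a hydration free
# energy: Kaw = exp(dG_solv / RT), so log10(Kaw) = dG_solv / (ln(10)*R*T).
# dG_solv < 0 denotes favorable hydration; the -5 kJ/mol input is invented.
R, T = 8.314, 298.15  # J/(mol K), K

def log10_kaw(dg_solv_kj_per_mol):
    return dg_solv_kj_per_mol * 1000.0 / (math.log(10.0) * R * T)

print(round(log10_kaw(-5.0), 2))
```

At 298 K each 5.7 kJ/mol of hydration free energy shifts log10(Kaw) by one unit, which is why the quoted 0.5 log10-unit accuracy demands solvation energies good to roughly 3 kJ/mol.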
NASA Astrophysics Data System (ADS)
Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng
2017-12-01
Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstration of these parameters presents a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs related to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, a performance degradation model based on the mechanism of contamination failure and the material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with that under normal life testing. Subsequently, the experimental program of ADT was designed for SPTRs. Then, three groups of gaseous contamination ADT were performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, and the estimated lifetimes of the SPTRs under normal conditions were obtained through an acceleration model (the Arrhenius model). The results show good fitting of the degradation model to the experimental data. Finally, we obtained reliability estimates for the SPTRs using the Weibull distribution. The proposed methodology makes it possible to estimate, in less than one year of testing, the reliability of SPTRs designed for more than 10 years of operation.
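The Arrhenius extrapolation step can be sketched as follows; the activation energy used here is an illustrative assumption, not the paper's fitted value:

```python
import math

# Arrhenius acceleration factor between a use temperature and an elevated
# ADT stress temperature: AF = exp((Ea/kB) * (1/T_use - 1/T_stress)).
# Ea = 0.7 eV is an illustrative assumption.
K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between use and stress temperatures."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

af = acceleration_factor(0.7, 20.0, 60.0)  # 20 C use vs. 60 C stress
print(round(af, 1))
```

An acceleration factor of a few tens is what allows a degradation mechanism spanning 10 years at use conditions to be exercised within months at the elevated ADT temperatures, provided the failure mechanism stays unchanged, which is exactly what the preliminary test in the study checks.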
NASA Astrophysics Data System (ADS)
Gaillac, Alexis; Ly, Céline
2018-05-01
Within the forming route of Zirconium alloy cladding tubes, hot extrusion is used to deform the forged billets into tube hollows, which are then cold rolled to produce the final tubes with the properties suitable for in-reactor use. The goals of hot extrusion are to give the appropriate geometry for cold pilgering without creating surface defects and microstructural heterogeneities, which are detrimental to subsequent rolling. In order to ensure a good quality of the tube hollows, the hot extrusion parameters have to be carefully chosen. For this purpose, finite element models are used in addition to experimental tests. These models can take into account the thermo-mechanical coupling conditions obtained in the tube and the tools during extrusion, and provide a good prediction of the extrusion load and the thermo-mechanical history of the extruded product. This last result can be used to calculate the fragmentation of the microstructure in the die and the meta-dynamic recrystallization after extrusion. To further optimize the manufacturing route, a numerical model of the cold pilgering process is also applied, taking into account the complex geometry of the tools and the pseudo-steady-state rolling sequence of this incremental forming process. The strain and stress history of the tube during rolling can then be used to assess the damage risk through the use of ductile damage models. Once validated against experimental data, both numerical models were used to optimize the manufacturing route and the quality of zirconium cladding tubes. This goal was achieved by selecting hot extrusion parameters that give a better recrystallized microstructure and improve subsequent formability. Cold pilgering parameters were also optimized in order to reduce the potential ductile damage in the cold rolled tubes.
Nies, H; Harms, I H; Karcher, M J; Dethleff, D; Bahe, C
1999-09-30
The paper presents the results of a joint project carried out in Germany to assess the consequences in the marine environment of the dumping of nuclear wastes in the Kara and Barents Seas. The project consisted of experimental work on measurements of radionuclides in samples from the Arctic marine environment and numerical modelling of the potential pathways and dispersion of contaminants in the Arctic Ocean. Water and sediment samples were collected for determination of radionuclides such as 137Cs, 90Sr, 239+240Pu, 238Pu, and 241Am and various organic micropollutants. In addition, a few water and numerous surface sediment samples collected in the Kara Sea and from the Kola peninsula were taken by Russian colleagues and analysed for artificial radionuclides by the BSH laboratory. The role of transport by sea ice from the Kara Sea into the Arctic Ocean was assessed by a small subgroup at GEOMAR. This transport process might be considered a rapid contribution due to entrainment of contaminated sediments into sea ice, followed by export from the Kara Sea into the transpolar ice drift and subsequent release into the Atlantic Ocean in the area of the East Greenland Current. Numerical modelling of the dispersion of pollutants from the Kara and Barents Seas was carried out both on a local scale for the Barents and Kara Seas and for long-range dispersion into the Arctic and Atlantic Oceans. Three-dimensional baroclinic circulation models were applied to trace the transport of pollutants. Experimental results, such as the discharges from the nuclear reprocessing plant at Sellafield and the subsequent contamination of the North Sea up to the Arctic Seas, were used to validate the model results.
Kashuba, Corinna M.; Benson, James D.; Critser, John K.
2014-01-01
In Part I, we documented differences in cryopreservation success measured by membrane integrity in four mouse embryonic stem cell (mESC) lines from different genetic backgrounds (BALB/c, CBA, FVB, and 129R1), and we demonstrated a potential biophysical basis for these differences through a comparative study characterizing the membrane permeability characteristics and osmotic tolerance limits of each cell line. Here we use these values to predict optimal cryoprotectants, cooling rates, warming rates, and plunge temperatures. We subsequently verified these predictions experimentally for their effects on post-thaw recovery. From this study, we determined that a cryopreservation protocol utilizing 1 M propylene glycol, a cooling rate of 1 °C/minute, and plunging into liquid nitrogen at −41 °C, combined with subsequent warming in a 22 °C water bath with agitation, significantly improved post-thaw recovery for three of the four mESC lines, and did not diminish post-thaw recovery for our single exception. It is proposed that this protocol can be successfully applied to most mESC lines beyond those included within this study once the effect of propylene glycol on mESC gene expression, growth characteristics, and germ-line transmission has been determined. Mouse ESC lines with poor survival using current standard cryopreservation protocols or our proposed protocol can be optimized on a case-by-case basis using the method we have outlined over two papers. For our single exception, the CBA cell line, a cooling rate of 5 °C/minute in the presence of 1.0 M dimethyl sulfoxide or 1.0 M propylene glycol, combined with plunge temperature of −80 °C was optimal. PMID:24560712
NASA Astrophysics Data System (ADS)
Chandra, Shubham; Rao, Balkrishna C.
2017-06-01
The process of laser engineered net shaping (LENSTM) is an additive manufacturing technique that employs the coaxial flow of metallic powders with a high-power laser to form a melt pool and the subsequent deposition of the specimen on a substrate. Although research done over the past decade on the LENSTM processing of alloys of steel, titanium, nickel and other metallic materials typically reports superior mechanical properties in as-deposited specimens, when compared to the bulk material, there is anisotropy in the mechanical properties of the melt deposit. The current study involves the development of a numerical model of the LENSTM process, using the principles of computational fluid dynamics (CFD), and the subsequent prediction of the volume fraction of equiaxed grains to predict process parameters required for the deposition of workpieces with isotropy in their properties. The numerical simulation is carried out on ANSYS-Fluent, whose data on thermal gradient are used to determine the volume fraction of the equiaxed grains present in the deposited specimen. This study has been validated against earlier efforts on the experimental studies of LENSTM for alloys of nickel. Besides being applicable to the wider family of metals and alloys, the results of this study will also facilitate effective process design to improve both product quality and productivity.
Brown, Alexander L; Wagner, Gregory J; Metzinger, Kurt E
2012-06-01
Transportation accidents frequently involve liquids dispersing in the atmosphere. An example is that of aircraft impacts, which often result in spreading fuel and a subsequent fire. Predicting the resulting environment is of interest for design, safety, and forensic applications. This environment is challenging for many reasons, one among them being the disparate time and length scales that are necessary to resolve for an accurate physical representation of the problem. A recent computational method appropriate for this class of problems has been described for modeling the impact and subsequent liquid spread. Because the environment is difficult to instrument and costly to test, the existing validation data are of limited scope and quality. A comparatively well instrumented test involving a rocket propelled cylindrical tank of water was performed, the results of which are helpful to understand the adequacy of the modeling methods. Existing data include estimates of drop sizes at several locations, final liquid surface deposition mass integrated over surface area regions, and video evidence of liquid cloud spread distances. Comparisons are drawn between the experimental observations and the predicted results of the modeling methods to provide evidence regarding the accuracy of the methods, and to provide guidance on the application and use of these methods.
Ecological Validity Revisited: A 10-Year Comparison of Two Journals.
ERIC Educational Resources Information Center
Ford, Jerry; Gaylord-Ross, Robert
1991-01-01
This study examined 40 articles published in the "American Journal on Mental Retardation" or the "Journal of the Association for Persons with Severe Handicaps" (JASH) from 1976-78 and 1986-88. Both journals published low numbers of articles with ecological validity in the late 1970s, but JASH subsequently increased…
Improving the Performance of the Listening Competency Scale: Revision and Validation
ERIC Educational Resources Information Center
Mickelson, William T.; Welch, S. A.
2013-01-01
Measuring latent traits is central to quantitative listening research and has been the focus of many studies. One such prominent measurement instrument, based on the Wolvin and Coakley (1993) listening taxonomy, was developed by Ford, Wolvin, and Chung (2000). Subsequent validation research (Mickelson & Welch, 2012) called for revisiting and…
Reliability and Validity of Autism Diagnostic Interview-Revised, Japanese Version
ERIC Educational Resources Information Center
Tsuchiya, Kenji J.; Matsumoto, Kaori; Yagi, Atsuko; Inada, Naoko; Kuroda, Miho; Inokuchi, Eiko; Koyama, Tomonori; Kamio, Yoko; Tsujii, Masatsugu; Sakai, Saeko; Mohri, Ikuko; Taniike, Masako; Iwanaga, Ryoichiro; Ogasahara, Kei; Miyachi, Taishi; Nakajima, Shunji; Tani, Iori; Ohnishi, Masafumi; Inoue, Masahiko; Nomura, Kazuyo; Hagiwara, Taku; Uchiyama, Tokio; Ichikawa, Hironobu; Kobayashi, Shuji; Miyamoto, Ken; Nakamura, Kazuhiko; Suzuki, Katsuaki; Mori, Norio; Takei, Nori
2013-01-01
To examine the inter-rater reliability of Autism Diagnostic Interview-Revised, Japanese Version (ADI-R-JV), the authors recruited 51 individuals aged 3-19 years, interviewed by two independent raters. Subsequently, to assess the discriminant and diagnostic validity of ADI-R-JV, the authors investigated 317 individuals aged 2-19 years, who were…
ERIC Educational Resources Information Center
Montirosso, Rosario; Cozzi, Patrizia; Putnam, Samuel P.; Gartstein, Maria A.; Borgatti, Renato
2011-01-01
An Italian translation of the Infant Behavior Questionnaire-Revised (IBQ-R) was developed and evaluated with 110 infants, demonstrating satisfactory internal consistency, discriminant validity, and construct validity in the form of gender and age differences, as well as factorial integrity. Cross-cultural differences were subsequently evaluated…
Numerical simulation and validation of SI-CAI hybrid combustion in a CAI/HCCI gasoline engine
NASA Astrophysics Data System (ADS)
Wang, Xinyan; Xie, Hui; Xie, Liyan; Zhang, Lianfang; Li, Le; Chen, Tao; Zhao, Hua
2013-02-01
SI-CAI hybrid combustion, also known as spark-assisted compression ignition (SACI), is a promising concept to extend the operating range of CAI (Controlled Auto-Ignition) and achieve a smooth transition between spark ignition (SI) and CAI in the gasoline engine. In this study, a SI-CAI hybrid combustion model (HCM) has been constructed on the basis of the 3-Zones Extended Coherent Flame Model (ECFM3Z). An ignition model is included to initiate the ECFM3Z calculation and induce the flame propagation. In order to precisely depict the subsequent auto-ignition process of the unburned fuel and air mixture independently after the initiation of flame propagation, the tabulated chemistry concept is adopted to describe the auto-ignition chemistry. The methodology for extracting tabulated parameters from the chemical kinetics calculations is developed so that both cool flame reactions and main auto-ignition combustion can be well captured under a wide range of thermodynamic conditions. The SI-CAI hybrid combustion model (HCM) is then applied in the three-dimensional computational fluid dynamics (3-D CFD) engine simulation. The simulation results are compared with the experimental data obtained from a single-cylinder VVA engine. The detailed analysis of the simulations demonstrates that the SI-CAI hybrid combustion process is characterised by early flame propagation and subsequent multi-site auto-ignition around the main flame front, which is consistent with the optical results reported by other researchers. In addition, the systematic study of the in-cylinder conditions reveals the influence mechanism of the early flame propagation on the subsequent auto-ignition.
Validation of the thermal challenge problem using Bayesian Belief Networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McFarland, John; Swiler, Laura Painton
The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, in physical quantities, and model uncertainties. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes' factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for ''solving'' the BBN to develop the posterior distribution of model output through Markov chain Monte Carlo sampling is discussed in detail. The use of the BBN to compute a Bayes' factor is demonstrated.
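The Bayes' factor at the heart of this validation metric is a ratio of marginal likelihoods; for two point hypotheses it reduces to a simple likelihood ratio. A toy sketch of that idea only (not Sandia's BBN machinery; the measurement data and hypothesized means are invented):

```python
import math

def gauss_logpdf(x: float, mu: float, sigma: float) -> float:
    """Log-density of a normal distribution."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def bayes_factor(data, mu_model, mu_alt, sigma):
    """Bayes factor B = p(data | model prediction) / p(data | alternative)
    for two point hypotheses about the mean of noisy measurements.
    B >> 1 supports the model; B << 1 supports the alternative."""
    log_b = sum(gauss_logpdf(x, mu_model, sigma) - gauss_logpdf(x, mu_alt, sigma)
                for x in data)
    return math.exp(log_b)

# Measurements clustered near the model prediction of 1.0 favour the model:
b = bayes_factor([1.02, 0.97, 1.05, 0.99], mu_model=1.0, mu_alt=1.3, sigma=0.1)
print(b)
```

The full methodology replaces the point hypotheses with the prior and posterior distributions of model output, so the ratio becomes an integral over those distributions rather than a single likelihood evaluation.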
NASA Astrophysics Data System (ADS)
Drexler, Andreas; Ecker, Werner; Hessert, Roland; Oberwinkler, Bernd; Gänser, Hans-Peter; Keckes, Jozef; Hofmann, Michael; Fischersworring-Bunk, Andreas
2017-10-01
In this work the evolution of the residual stress field in a forged and heat treated turbine disk of Alloy 718 and its subsequent relaxation during machining was simulated and measured. After forging at around 1000 °C the disks were natural air cooled to room temperature and direct aged in a furnace at 720 °C for 8 hours and at 620 °C for 8 hours. The machining of the Alloy 718 turbine disk was performed in two steps: First, from the forging contour to a contour used for ultra-sonic testing. Second, from the latter to the final contour. The thermal boundary conditions in the finite element model for air cooling and furnace heating were estimated based on analytical equations from literature. A constitutive model developed for the unified description of rate dependent and rate independent mechanical material behavior of Alloy 718 under in-service conditions up to temperatures of 1000 °C was extended and parametrized to meet the manufacturing conditions with temperatures up to 1000 °C. The results of the finite element model were validated with measurements on real-scale turbine disks. The thermal boundary conditions were validated in-field with measured cooling curves. For that purpose holes were drilled at different positions into the turbine disk and thermocouples were mounted in these holes to record the time-temperature curves during natural cooling and heating. The simulated residual stresses were validated by using the hole drilling method and the neutron diffraction technique. The accuracy of the finite element model for the final manufacturing step investigated was ±50 MPa.
Validation of the Soil Moisture Active Passive mission using USDA-ARS experimental watersheds
USDA-ARS?s Scientific Manuscript database
The calibration and validation program of the Soil Moisture Active Passive mission (SMAP) relies upon an international cooperative of in situ networks to provide ground truth references across a variety of landscapes. The USDA Agricultural Research Service operates several experimental watersheds wh...
Fischer, Kenneth J; Johnson, Joshua E; Waller, Alexander J; McIff, Terence E; Toby, E Bruce; Bilgen, Mehmet
2011-10-01
The objective of this study was to validate the MRI-based joint contact modeling methodology in the radiocarpal joints by comparison of model results with invasive specimen-specific radiocarpal contact measurements from four cadaver experiments. We used a single validation criterion for multiple outcome measures to characterize the utility and overall validity of the modeling approach. For each experiment, a Pressurex film and a Tekscan sensor were sequentially placed into the radiocarpal joints during simulated grasp. Computer models were constructed based on MRI visualization of the cadaver specimens without load. Images were also acquired during the loaded configuration used with the direct experimental measurements. Geometric surface models of the radius, scaphoid and lunate (including cartilage) were constructed from the images acquired without the load. The carpal bone motions from the unloaded state to the loaded state were determined using a series of 3D image registrations. Cartilage thickness was assumed uniform at 1.0 mm with an effective compressive modulus of 4 MPa. Validation was based on experimental versus model contact area, contact force, average contact pressure and peak contact pressure for the radioscaphoid and radiolunate articulations. Contact area was also measured directly from images acquired under load and compared to the experimental and model data. Qualitatively, there was good correspondence between the MRI-based model data and experimental data, with consistent relative size, shape and location of radioscaphoid and radiolunate contact regions. Quantitative data from the model generally compared well with the experimental data for all specimens. Contact area from the MRI-based model was very similar to the contact area measured directly from the images. For all outcome measures except average and peak pressures, at least two specimen models met the validation criteria with respect to experimental measurements for both articulations. 
Only the model for one specimen met the validation criteria for average and peak pressure of both articulations; however, the experimental measures for peak pressure also exhibited high variability. MRI-based modeling can reliably be used for evaluating contact area and contact force with similar confidence as currently available experimental techniques. Average contact pressure and peak contact pressure were more variable across all measurement techniques, and these measures from MRI-based modeling should be used with some caution.
NASA Astrophysics Data System (ADS)
Dartevelle, S.
2006-12-01
Large-scale volcanic eruptions are inherently hazardous events and so cannot be described by detailed and accurate in situ measurements; hence, volcanic explosive phenomenology is inadequately constrained in terms of initial and inflow conditions. Consequently, little to no real-time data exist to Verify and Validate computer codes developed to model these geophysical events as a whole. However, code Verification and Validation remains a necessary step, particularly when volcanologists use numerical data for mitigation of volcanic hazards, as is more often done nowadays. The Verification and Validation (V&V) process formally assesses the level of 'credibility' of numerical results produced within a range of specific applications. The first step, Verification, is 'the process of determining that a model implementation accurately represents the conceptual description of the model', which requires either exact analytical solutions or highly accurate simplified experimental data. The second step, Validation, is 'the process of determining the degree to which a model is an accurate representation of the real world', which requires complex experimental data of the 'real world' physics. The Verification step is rather simple to achieve formally, while, in the 'real world' explosive volcanism context, the second step, Validation, is all but impossible. Hence, instead of validating computer codes against the whole large-scale unconstrained volcanic phenomenology, we suggest focusing on the key physics which control these volcanic clouds, viz., momentum-driven supersonic jets and multiphase turbulence. We propose to compare numerical results against a set of simple but well-constrained analog experiments, which uniquely and unambiguously represent these two key phenomenologies separately.
Herewith, we use GMFIX (Geophysical Multiphase Flow with Interphase eXchange, v1.62), a set of multiphase-CFD FORTRAN codes, which have been recently redeveloped to meet the strict Quality Assurance, verification, and validation requirements of the Office of Civilian Radioactive Waste Management of the US Dept of Energy. GMFIX solves the Navier-Stokes and energy partial differential equations for each phase with appropriate turbulence and interfacial coupling between phases. For momentum-driven single- to multi-phase underexpanded jets, the position of the first Mach disk is known empirically as a function of both the pressure ratio, K, and the particle mass fraction, Phi, at the nozzle: the higher K, the further downstream the Mach disk; and the higher Phi, the further upstream the first Mach disk. We show that GMFIX captures these two essential features. In addition, GMFIX displays all the properties found in these jets, such as expansion fans, incident and reflected shocks, and subsequent downstream Mach disks, which make this code ideal for further investigations of equivalent volcanological phenomena. One of the other most challenging aspects of volcanic phenomenology is the multiphase nature of turbulence. We also validated GMFIX by comparing velocity profiles and turbulence quantities against well-constrained analog experiments. The computed velocity profiles agree with the analog ones, as do the profiles of turbulent-quantity production. Overall, the Verification and Validation experiments, although inherently challenging, suggest GMFIX captures the most essential dynamical properties of multiphase and supersonic flows and jets.
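For reference, the downstream trend described for single-phase jets is commonly captured by the classical Ashkenas-Sherman correlation. A minimal sketch of that correlation only; the particle-loading (Phi) correction the abstract mentions is not attempted here:

```python
import math

def mach_disk_distance(pressure_ratio_k: float, nozzle_diam: float) -> float:
    """Classical Ashkenas-Sherman estimate of the distance from the nozzle
    exit to the first Mach disk of an underexpanded single-phase jet:
    x/d = 0.67 * sqrt(K), where K is the stagnation-to-ambient pressure
    ratio.  Particle loading moves the disk upstream of this estimate."""
    return 0.67 * nozzle_diam * math.sqrt(pressure_ratio_k)

# Doubling the pressure ratio moves the Mach disk further downstream:
print(mach_disk_distance(10.0, 0.01), mach_disk_distance(20.0, 0.01))
```

Checking a simulated Mach-disk position against this closed-form estimate is exactly the kind of simple, well-constrained comparison the Verification step calls for.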
Geist, Rebecca E; DuBois, Chase H; Nichols, Timothy C; Caughey, Melissa C; Merricks, Elizabeth P; Raymer, Robin; Gallippi, Caterina M
2016-09-01
Acoustic radiation force impulse (ARFI) Surveillance of Subcutaneous Hemorrhage (ASSH) has been previously demonstrated to differentiate bleeding phenotype and responses to therapy in dogs and humans, but to date, the method has lacked experimental validation. This work explores experimental validation of ASSH in a poroelastic tissue-mimic and in vivo in dogs. The experimental design exploits calibrated flow rates and infusion durations of evaporated milk in tofu or heparinized autologous blood in dogs. The validation approach enables controlled comparisons of ASSH-derived bleeding rate (BR) and time to hemostasis (TTH) metrics. In tissue-mimicking experiments, halving the calibrated flow rate yielded ASSH-derived BRs that decreased by 44% to 48%. Furthermore, for calibrated flow durations of 5.0 minutes and 7.0 minutes, average ASSH-derived TTH was 5.2 minutes and 7.0 minutes, respectively, with ASSH predicting the correct TTH in 78% of trials. In dogs undergoing calibrated autologous blood infusion, ASSH measured a 3-minute increase in TTH, corresponding to the same increase in the calibrated flow duration. For a measured 5% decrease in autologous infusion flow rate, ASSH detected a 7% decrease in BR. These tissue-mimicking and in vivo preclinical experimental validation studies suggest the ASSH BR and TTH measures reflect bleeding dynamics. © The Author(s) 2015.
Pal, Debojyoti; Sharma, Deepak; Kumar, Mukesh; Sandur, Santosh K
2016-09-01
S-glutathionylation of proteins plays an important role in various biological processes and is known to be a protective modification during oxidative stress. Since experimental detection of S-glutathionylation is labor intensive and time consuming, a bioinformatics-based approach is a viable alternative. Available methods require relatively long sequence information, which may prevent prediction if sequence information is incomplete. Here, we present a model to predict glutathionylation sites from pentapeptide sequences. It is based upon the differential association of amino acids with glutathionylated and non-glutathionylated cysteines from a database of experimentally verified sequences. This data was used to calculate position-dependent F-scores, which measure how a particular amino acid at a particular position may affect the likelihood of a glutathionylation event. A glutathionylation score (G-score), indicating the propensity of a sequence to undergo glutathionylation, was calculated using the position-dependent F-scores for each amino acid. Cut-off values were used for prediction. Our model returned an accuracy of 58% with a Matthews correlation coefficient (MCC) value of 0.165. On an independent dataset, our model outperformed the currently available model, in spite of needing much less sequence information. Pentapeptide motifs with high abundance among glutathionylated proteins were identified. A list of potential glutathionylation hotspot sequences was obtained by assigning G-scores, and subsequent Protein-BLAST analysis revealed a total of 254 putative glutathionylatable proteins, a number of which were already known to be glutathionylated. Our model predicted glutathionylation sites in 93.93% of experimentally verified glutathionylated proteins. The outcome of this study may assist in discovering novel glutathionylation sites and finding candidate proteins for glutathionylation.
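The scoring scheme described above can be sketched in a few lines. The F-score table here is entirely invented for illustration; the published model derives its values from a database of experimentally verified glutathionylated and non-glutathionylated cysteines:

```python
# Hypothetical position-dependent F-scores: F_SCORES[pos][aa] measures how an
# amino acid at pentapeptide position pos (0-4; position 2 is the candidate
# Cys) shifts the likelihood of glutathionylation.  All values are invented.
F_SCORES = {
    0: {"K": 0.8, "L": -0.3},   # position -2 relative to the central Cys
    1: {"E": 0.5, "G": 0.1},    # position -1
    3: {"D": 0.4, "V": -0.2},   # position +1
    4: {"R": 0.6, "A": 0.0},    # position +2
}

def g_score(pentapeptide: str) -> float:
    """Sum position-dependent F-scores over a 5-residue window whose
    centre residue is the candidate cysteine."""
    assert len(pentapeptide) == 5 and pentapeptide[2] == "C"
    return sum(F_SCORES.get(i, {}).get(aa, 0.0)
               for i, aa in enumerate(pentapeptide))

def predict_glutathionylation(pentapeptide: str, cutoff: float = 0.5) -> bool:
    """Label a site as glutathionylated when its G-score clears the cut-off."""
    return g_score(pentapeptide) >= cutoff

print(g_score("KECDR"), predict_glutathionylation("KECDR"))
```

Because the model needs only the five residues around each cysteine, it can score sites even when the surrounding sequence is incomplete, which is the practical advantage the abstract emphasizes.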
Rathnayaka, C M; Karunasena, H C P; Senadeera, W; Gu, Y T
2018-03-14
Numerical modelling has gained popularity in many science and engineering streams due to its economic feasibility and advanced analytical features compared to conventional experimental and theoretical models. Food drying is one of the areas where numerical modelling is increasingly applied to improve drying process performance and product quality. This investigation applies a three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) and Coarse-Grained (CG) numerical approach to predict the morphological changes of different categories of food-plant cells such as apple, grape, potato and carrot during drying. To validate the model predictions, experimental findings from in-house experimental procedures (for apple) and sources of literature (for grape, potato and carrot) have been utilised. The subsequent comparison indicates that the model predictions demonstrate a reasonable agreement with the experimental findings, both qualitatively and quantitatively. In this numerical model, a higher computational accuracy has been maintained by limiting the consistency error to below 1% for all four cell types. The proposed meshfree-based approach is well-equipped to predict the morphological changes of plant cellular structure over a wide range of moisture contents (10% to 100% dry basis). Compared to the previous 2-D meshfree-based models developed for plant cell drying, the proposed model can draw more useful insights on morphological behaviour due to its 3-D nature. In addition, the proposed computational modelling approach has a high potential to be used as a comprehensive tool in many other tissue-morphology-related investigations.
Hains, Guy; Boucher, Pierre B; Lamy, Anne-Marie
2015-03-01
The aim of this study was to evaluate the efficacy of myofascial therapy involving ischemic compression on trigger points in combination with mobilization therapy on patients with chronic nonspecific foot pain. Two quasi-experimental before-and-after studies involving two different baseline states. Foot pain patients at a private clinic were divided into two separate cohorts: A, custom orthotic users; and B, non-users. In Study A, 31 users received 15 experimental treatments consisting of ischemic compressions on trigger points and mobilization of articulations through the foot immediately after study enrollment. In study B, ten non-users were prescribed a soft prefabricated insole and were monitored for five weeks before subsequently receiving 15 experimental treatments after the initial five-week delay. The Foot Function Index (FFI) and patients' perceived improvement score (PIS) on a scale from 0% to 100%. The Study A group (n=31) maintained a significant reduction in the FFI at all three follow-up evaluations. Mean improvement from baseline in FFI was 47%, 49% and 56% at 0, 1 and 6 months, respectively, post-treatment. Mean PIS was 58%, 57%, and 58%, again at 0, 1 and 6 months post-treatment. For the Study B group, mean improvement in FFI was only 19% after the monitoring period, and 64% after the experimental treatment period. Mean PIS was 31% after monitoring, and 78% after experimental treatment. In repeated measures analyses, experimental treatment was associated with a significant main effect in both of these before-and after studies (all P values<0.01). Combined treatment involving ischemic compression and joint mobilization for chronic foot pain is associated with significant improvements in functional and self-perceived improvement immediately and at up to six-months post-treatment. Further validation of this treatment approach within a randomized controlled trial is needed.
20 CFR 655.750 - What is the validity period of the labor condition application?
Code of Federal Regulations, 2010 CFR
2010-04-01
... ETA 9035E or ETA 9035. The validity period of an LCA will not begin before the application is... in writing and must be sent to ETA, Office of Foreign Labor Certification. ETA will publish the... is superseded by a subsequent application which is certified by ETA. (4) An employer's obligation to...
ERIC Educational Resources Information Center
Ellett, Chad D.; Monsaas, Judy
2011-01-01
This article describes the development and validation of the Inventory of Teaching and Learning (ITAL) as a new measure of teacher perceptions of science and mathematics learning environments. The ITAL was initially developed and administered in 2004 and has subsequently been revised annually. The ITAL is administered using a confidential…
Bell, Nicole S.; Williams, Jeffrey O.; Senier, Laura; Strowman, Shelley R.; Amoroso, Paul J.
2007-01-01
Background The reliability and validity of self-reported drinking behaviors from the Army Health Risk Appraisal (HRA) survey are unknown. Methods We compared demographics and health experiences of those who completed the HRA with those who did not (1991–1998). We also evaluated the reliability and validity of eight HRA alcohol-related items, including the CAGE, weekly drinking quantity, and drinking and driving measures. We used Cohen’s κ and Pearson’s r to assess reliability and convergent validity. To assess criterion (predictive) validity, we used proportional hazards and logistic regression models predicting alcohol-related hospitalizations and alcohol-related separations from the Army, respectively. Results A total of 404,966 soldiers completed an HRA. No particular demographic group seems to be over- or underrepresented. Although few respondents skipped alcohol items, those who did tended to be older and of minority race. The alcohol items demonstrate a reasonable degree of reliability, with Cronbach’s α = 0.69 and test-retest reliability associations in the 0.75–0.80 range for most items over 2- to 30-day intervals between surveys. The alcohol measures showed good criterion-related validity: those consuming more than 21 drinks per week were at 6 times the risk for subsequent alcohol-related hospitalization versus those who abstained from drinking (hazard ratio, 6.36; 95% confidence interval=5.79, 6.99). Those who said their friends worried about their drinking were almost 5 times more likely to be discharged due to alcoholism (risk ratio, 4.9; 95% confidence interval=4.00, 6.04) and 6 times more likely to experience an alcohol-related hospitalization (hazard ratio, 6.24; 95% confidence interval=5.74, 6.77). Conclusions The Army’s HRA alcohol items seem to elicit reliable and valid responses. Because HRAs contain identifiers, alcohol use can be linked with subsequent health and occupational outcomes, making the HRA a useful epidemiological research tool.
Associations between perceived peer opinions of drinking and subsequent problems deserve further exploration. PMID:12766628
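The agreement statistics used in the study above can be illustrated with a short sketch. This is not the study's analysis code; it simply computes Cohen's κ (chance-corrected agreement) for two lists of categorical responses, as one might for repeated administrations of a single HRA item:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items rated identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of p_a(c) * p_b(c).
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)
```

Identical response vectors give κ = 1, while agreement no better than chance gives κ = 0.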
Validation of the Mayo Hip Score: construct validity, reliability and responsiveness to change.
Singh, Jasvinder A; Schleck, Cathy; Harmsen, W Scott; Lewallen, David G
2016-01-19
Previous studies have provided initial evidence for the construct validity and test-retest reliability of the Mayo Hip Score. Instruments used for total hip arthroplasty (THA) outcomes assessment should be valid, reliable and responsive to change. Our main objective was to examine the responsiveness to change, the association with subsequent revision, and the construct validity of the Mayo Hip Score. Discriminant ability was assessed by calculating effect size (ES), standardized response mean (SRM) and Guyatt's responsiveness index (GRI). Minimal clinically important improvement (MCII) and moderate improvement thresholds were calculated. We assessed construct validity by examining the association of scores with preoperative patient characteristics and correlation with the Harris hip score, and assessed the association of scores with the risk of subsequent revision. Five thousand three hundred seven patients provided baseline data; of those, 2,278 and 2,089 (39%) provided 2- and 5-year data, respectively. Large ES, SRM and GRI, ranging 2.66-2.78, 2.42-2.61 and 1.67-1.88 respectively, were noted for Mayo hip scores with THA. The MCII and moderate improvement thresholds were 22.4-22.7 and 39.4-40.5, respectively. Hazard ratios for revision surgery were higher with a lower final score or less improvement in Mayo hip score at 2 years, and borderline significant/non-significant at 5 years, respectively: (1) score ≤55, with hazard ratios of 2.24 (95% CI, 1.45, 3.46; p = 0.0003) and 1.70 (95% CI, 1.00, 2.92; p = 0.05) for subsequent implant revision, compared to 72-80 points; (2) no improvement or worsening score, with hazard ratios of 3.94 (95% CI, 1.50, 10.30; p = 0.005) and 2.72 (95% CI, 0.85, 8.70; p = 0.09), compared to improvement of >50 points. The Mayo hip score had significant positive correlations with younger age, male gender, lower BMI, lower ASA class and lower Deyo-Charlson index (p ≤ 0.003 for each) and with Harris hip scores (p < 0.001).
Mayo Hip Score is valid, sensitive to change and associated with future risk of revision surgery in patients with primary THA.
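The responsiveness indices named above have simple definitions: the effect size (ES) divides the mean score change by the baseline standard deviation, and the standardized response mean (SRM) divides it by the standard deviation of the change scores. A minimal sketch with made-up scores, not the authors' code (Guyatt's responsiveness index, which uses the change SD among stable patients as the denominator, is omitted):

```python
import statistics

def responsiveness(baseline, followup):
    """Effect size (ES) and standardized response mean (SRM) for paired scores."""
    change = [f - b for b, f in zip(baseline, followup)]
    mean_change = statistics.mean(change)
    es = mean_change / statistics.stdev(baseline)   # change scaled by baseline SD
    srm = mean_change / statistics.stdev(change)    # change scaled by SD of change
    return es, srm
```

For example, baseline scores [40, 50, 60] improving to [75, 80, 95] give an ES above 3, comparable in magnitude to the values reported for THA.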
Ng, John Y.; Boelen, Lies; Wong, Jason W. H.
2013-01-01
Protein 3-nitrotyrosine is a post-translational modification that commonly arises from the nitration of tyrosine residues. This modification has been detected under a wide range of pathological conditions and has been shown to alter protein function. Whether 3-nitrotyrosine is important in normal cellular processes or is likely to affect specific biological pathways remains unclear. Using GPS-YNO2, a recently described 3-nitrotyrosine prediction algorithm, a set of predictions for nitrated residues in the human proteome was generated. In total, 9.27 per cent of the proteome was predicted to be nitratable (27 922/301 091). By matching the predictions against a set of curated and experimentally validated 3-nitrotyrosine sites in human proteins, it was found that GPS-YNO2 is able to predict 73.1 per cent (404/553) of these sites. Furthermore, of these sites, 42 have been shown to be nitrated endogenously, with 85.7 per cent (36/42) of these predicted to be nitrated. This demonstrates the feasibility of using the predicted dataset for a whole proteome analysis. A comprehensive bioinformatics analysis was subsequently performed on predicted and all experimentally validated nitrated tyrosine. This found mild but specific biophysical constraints that affect the susceptibility of tyrosine to nitration, and these may play a role in increasing the likelihood of 3-nitrotyrosine to affect processes, including phosphorylation and DNA binding. Furthermore, examining the evolutionary conservation of predicted 3-nitrotyrosine showed that, relative to non-nitrated tyrosine residues, 3-nitrotyrosine residues are generally less conserved. This suggests that, at least in the majority of cases, 3-nitrotyrosine is likely to have a deleterious effect on protein function and less likely to be important in normal cellular function. PMID:23389939
Prediction of microRNAs Associated with Human Diseases Based on Weighted k Most Similar Neighbors
Guo, Maozu; Guo, Yahong; Li, Jinbao; Ding, Jian; Liu, Yong; Dai, Qiguo; Li, Jin; Teng, Zhixia; Huang, Yufei
2013-01-01
Background The identification of human disease-related microRNAs (disease miRNAs) is important for further investigating their involvement in the pathogenesis of diseases. More experimentally validated miRNA-disease associations have been accumulated recently. On the basis of these associations, it is essential to predict disease miRNAs for various human diseases. It is useful in providing reliable disease miRNA candidates for subsequent experimental studies. Methodology/Principal Findings It is known that miRNAs with similar functions are often associated with similar diseases and vice versa. Therefore, the functional similarity of two miRNAs has been successfully estimated by measuring the semantic similarity of their associated diseases. To effectively predict disease miRNAs, we calculated the functional similarity by incorporating the information content of disease terms and phenotype similarity between diseases. Furthermore, the members of miRNA family or cluster are assigned higher weight since they are more probably associated with similar diseases. A new prediction method, HDMP, based on weighted k most similar neighbors is presented for predicting disease miRNAs. Experiments validated that HDMP achieved significantly higher prediction performance than existing methods. In addition, the case studies examining prostatic neoplasms, breast neoplasms, and lung neoplasms, showed that HDMP can uncover potential disease miRNA candidates. Conclusions The superior performance of HDMP can be attributed to the accurate measurement of miRNA functional similarity, the weight assignment based on miRNA family or cluster, and the effective prediction based on weighted k most similar neighbors. The online prediction and analysis tool is freely available at http://nclab.hit.edu.cn/hdmpred. PMID:23950912
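The core of the weighted k-most-similar-neighbors idea described above can be sketched as follows. This is a simplified illustration, not the published HDMP implementation: `similarity` and `family_weight` are hypothetical inputs standing in for the paper's miRNA functional similarity measure and family/cluster weighting.

```python
def score_candidate(candidate, disease_mirnas, similarity, family_weight, k=3):
    """Score a candidate miRNA for a disease from its k most similar neighbors.

    similarity: dict mapping (miRNA_a, miRNA_b) -> functional similarity in [0, 1]
    family_weight: dict mapping miRNA -> weight (>1 for family/cluster members)
    A neighbor contributes its weighted similarity only if it is a known
    disease miRNA; the score is normalized by the total weighted similarity.
    """
    neighbors = sorted(
        (m for m in family_weight if m != candidate),
        key=lambda m: similarity.get((candidate, m), 0.0),
        reverse=True,
    )[:k]
    num = sum(family_weight[m] * similarity.get((candidate, m), 0.0)
              for m in neighbors if m in disease_mirnas)
    den = sum(family_weight[m] * similarity.get((candidate, m), 0.0)
              for m in neighbors)
    return num / den if den else 0.0
```

A candidate whose closest, most heavily weighted neighbors are known disease miRNAs scores near 1; one whose neighbors are unrelated scores near 0.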
Sperschneider, Jana; Williams, Angela H; Hane, James K; Singh, Karam B; Taylor, Jennifer M
2015-01-01
The steadily increasing number of sequenced fungal and oomycete genomes has enabled detailed studies of how these eukaryotic microbes infect plants and cause devastating losses in food crops. During infection, fungal and oomycete pathogens secrete effector molecules which manipulate host plant cell processes to the pathogen's advantage. Proteinaceous effectors are synthesized intracellularly and must be externalized to interact with host cells. Computational prediction of secreted proteins from genomic sequences is an important technique to narrow down the candidate effector repertoire for subsequent experimental validation. In this study, we benchmark secretion prediction tools on experimentally validated fungal and oomycete effectors. We observe that for a set of fungal SwissProt protein sequences, SignalP 4 and the neural network predictors of SignalP 3 (D-score) and SignalP 2 perform best. For effector prediction in particular, the use of a sensitive method can be desirable to obtain the most complete candidate effector set. We show that the neural network predictors of SignalP 2 and 3, as well as TargetP were the most sensitive tools for fungal effector secretion prediction, whereas the hidden Markov model predictors of SignalP 2 and 3 were the most sensitive tools for oomycete effectors. Thus, previous versions of SignalP retain value for oomycete effector prediction, as the current version, SignalP 4, was unable to reliably predict the signal peptide of the oomycete Crinkler effectors in the test set. Our assessment of subcellular localization predictors shows that cytoplasmic effectors are often predicted as not extracellular. This limits the reliability of secretion predictions that depend on these tools. We present our assessment with a view to informing future pathogenomics studies and suggest revised pipelines for secretion prediction to obtain optimal effector predictions in fungi and oomycetes.
Further Validation of a CFD Code for Calculating the Performance of Two-Stage Light Gas Guns
NASA Technical Reports Server (NTRS)
Bogdanoff, David W.
2017-01-01
Earlier validations of a higher-order Godunov code for modeling the performance of two-stage light gas guns are reviewed. These validation comparisons were made between code predictions and experimental data from the NASA Ames 1.5" and 0.28" guns and covered muzzle velocities of 6.5 to 7.2 km/s. In the present report, five more series of code validation comparisons, involving experimental data from the Ames 0.22" (1.28" pump tube diameter), 0.28", 0.50", 1.00" and 1.50" guns, are presented. The total muzzle velocity range of the validation data presented herein is 3 to 11.3 km/s. The agreement between the experimental data and CFD results is judged to be very good. Muzzle velocities were predicted within 0.35 km/s for 74% of the cases studied; the maximum differences were 0.5 km/s, except for 4 out of 50 cases at 0.5 - 0.7 km/s.
NASA Astrophysics Data System (ADS)
Clegg, R. A.; White, D. M.; Hayhurst, C.; Ridel, W.; Harwick, W.; Hiermaier, S.
2003-09-01
The development and validation of an advanced material model for orthotropic materials, such as fibre reinforced composites, is described. The model is specifically designed to facilitate the numerical simulation of impact and shock wave propagation through orthotropic materials and the prediction of subsequent material damage. Initial development of the model concentrated on correctly representing shock wave propagation in composite materials under high and hypervelocity impact conditions [1]. This work has now been extended to further concentrate on the development of improved numerical models and material characterisation techniques for the prediction of damage, including residual strength, in fibre reinforced composite materials. The work is focused on Kevlar-epoxy; however, materials such as CFRP are also being considered. The paper describes our most recent activities in relation to the implementation of advanced material modelling options in this area. These enable refined non-linear directional characteristics of composite materials to be modelled, in addition to the correct thermodynamic response under shock wave loading. The numerical work is backed by an extensive experimental programme covering a wide range of static and dynamic tests to facilitate derivation of model input data and to validate the predicted material response. Finally, the capability of the developing composite material model is discussed in relation to a hypervelocity impact problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bragg-Sitton, S.M.; Propulsion Research Center, NASA Marshall Space Flight Center, Huntsville, AL 35812; Kapernick, R.
2004-02-04
Experiments have been designed to characterize the coolant gas flow in two space reactor concepts that are currently under investigation by NASA Marshall Space Flight Center and Los Alamos National Laboratory: the direct-drive gas-cooled reactor (DDG) and the SAFE-100 heatpipe-cooled reactor (HPR). For the DDG concept, initial tests have been completed to measure pressure drop versus flow rate for a prototypic core flow channel, with gas exiting to atmospheric pressure conditions. The experimental results of the completed DDG tests presented in this paper validate the predicted results to within a reasonable margin of error. These tests have resulted in a re-design of the flow annulus to reduce the pressure drop. Subsequent tests will be conducted with the re-designed flow channel and with the outlet pressure held at 150 psi (1 MPa). Design of a similar test for a nominal flow channel in the HPR heat exchanger (HPR-HX) has been completed and hardware is currently being assembled for testing this channel at 150 psi. When completed, these test programs will provide the data necessary to validate calculated flow performance for these reactor concepts (pressure drop and film temperature rise).
Learned helplessness: validity and reliability of depressive-like states in mice.
Chourbaji, S; Zacher, C; Sanchis-Segura, C; Dormann, C; Vollmayr, B; Gass, P
2005-12-01
The learned helplessness paradigm is a depression model in which animals are exposed to unpredictable and uncontrollable stress, e.g. electroshocks, and subsequently develop coping deficits for aversive but escapable situations (J.B. Overmier, M.E. Seligman, Effects of inescapable shock upon subsequent escape and avoidance responding, J. Comp. Physiol. Psychol. 63 (1967) 28-33). It represents a model with good similarity to the symptoms of depression, as well as construct and predictive validity, in rats. Despite an increased need to investigate emotional, in particular depression-like, behaviors in transgenic mice, so far only a few studies have been published using the learned helplessness paradigm. One reason may be the fact that, in contrast to rats (B. Vollmayr, F.A. Henn, Learned helplessness in the rat: improvements in validity and reliability, Brain Res. Brain Res. Protoc. 8 (2001) 1-7), there is no generally accepted learned helplessness protocol available for mice. This prompted us to develop a reliable helplessness procedure in C57BL/6N mice, to exclude possible artifacts, and to establish a protocol which yields a consistent fraction of helpless mice following the shock exposure. Furthermore, we validated this protocol pharmacologically using the tricyclic antidepressant imipramine. Here, we present a mouse model with good face and predictive validity that can be used for transgenic, behavioral, and pharmacological studies.
2003-03-01
Different?," Jour. of Experimental & Theoretical Artificial Intelligence, Special Issue on AI for Systems Validation and Verification, 12(4), 2000, pp...Hamilton, D., "Experiences in Improving the State of Practice in Verification and Validation of Knowledge-Based Systems," Workshop Notes of the AAAI...Unsuspected Power of the Standard Turing Test," Jour. of Experimental & Theoretical Artificial Intelligence, 12, 2000, pp. 331-340. [30] Gaschnig
NASA Astrophysics Data System (ADS)
Alpert, Peter A.; Knopf, Daniel A.
2016-02-01
Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature, T, and relative humidity, RH, at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling-rate-dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nucleating particles (INPs) all have the same INP surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. Descriptions of ice active sites and variability of contact angles have been successfully formulated to describe ice nucleation experimental data in previous research; however, we consider the ability of a stochastic freezing model founded on classical nucleation theory to reproduce previous results and to explain experimental uncertainties and data scatter. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses parameters including the total number of droplets, Ntot, and the heterogeneous ice nucleation rate coefficient, Jhet(T). This model is applied to address if (i) a time and ISA-dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing. 
The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods such as: droplets on a cold-stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous-flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time-dependent isothermal frozen fractions exhibiting non-exponential behavior can be readily explained by this model considering varying ISA. An apparent cooling-rate dependence of Jhet is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling-rate dependence of ice nucleation kinetics vanishes as expected from classical nucleation theory. The model simulations allow for a quantitative experimental uncertainty analysis for parameters Ntot, T, RH, and the ISA variability. The implications of our results for experimental analysis and interpretation of the immersion freezing process are discussed.
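The central relation of such a stochastic model is that a droplet containing INP surface area A freezes as a Poisson process with rate Jhet(T)·A, so its survival (unfrozen) probability after time t is exp(-Jhet·A·t). The sketch below, which uses an assumed Jhet and an illustrative lognormal ISA distribution rather than the paper's fitted values, shows how ISA variability makes the ensemble frozen fraction non-exponential in time:

```python
import math
import random

def frozen_fraction(t, j_het, areas):
    """Expected frozen fraction at time t for droplets with per-droplet INP
    surface areas `areas` (cm^2), given a heterogeneous nucleation rate
    coefficient j_het (cm^-2 s^-1). Each droplet freezes as a Poisson
    process with rate j_het * A_i, so P(unfrozen) = exp(-j_het * A_i * t)."""
    return 1.0 - sum(math.exp(-j_het * a * t) for a in areas) / len(areas)

random.seed(1)
j_het = 1.0e2                                  # assumed value, cm^-2 s^-1
uniform = [1.0e-3] * 1000                      # identical ISA per droplet
varied = [random.lognormvariate(math.log(1.0e-3), 1.0) for _ in range(1000)]
# With identical ISA, the unfrozen fraction decays exponentially in t;
# with variable ISA, large-ISA droplets freeze early and the survival
# curve of the remaining population flattens (non-exponential behavior).
```

With identical ISA the log-survival is linear in t; with variable ISA the ensemble decays ever more slowly, which is the non-exponential isothermal behavior noted above.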
Vivaldi: visualization and validation of biomacromolecular NMR structures from the PDB.
Hendrickx, Pieter M S; Gutmanas, Aleksandras; Kleywegt, Gerard J
2013-04-01
We describe Vivaldi (VIsualization and VALidation DIsplay; http://pdbe.org/vivaldi), a web-based service for the analysis, visualization, and validation of NMR structures in the Protein Data Bank (PDB). Vivaldi provides access to model coordinates and several types of experimental NMR data using interactive visualization tools, augmented with structural annotations and model-validation information. The service presents information about the modeled NMR ensemble, validation of experimental chemical shifts, residual dipolar couplings, distance and dihedral angle constraints, as well as validation scores based on empirical knowledge and databases. Vivaldi was designed for both expert NMR spectroscopists and casual non-expert users who wish to obtain a better grasp of the information content and quality of NMR structures in the public archive. Copyright © 2013 Wiley Periodicals, Inc.
STR-validator: an open source platform for validation and process control.
Hansson, Oskar; Gill, Peter; Egeland, Thore
2014-11-01
This paper addresses two problems faced when short tandem repeat (STR) systems are validated for forensic purposes: (1) validation is extremely time consuming and expensive, and (2) there is strong consensus about what to validate but not how. The first problem is solved by powerful data processing functions to automate calculations. Utilising an easy-to-use graphical user interface, strvalidator (hereafter referred to as STR-validator) can greatly increase the speed of validation. The second problem is exemplified by a series of analyses, and subsequent comparison with published material, highlighting the need for a common validation platform. If adopted by the forensic community STR-validator has the potential to standardise the analysis of validation data. This would not only facilitate information exchange but also increase the pace at which laboratories are able to switch to new technology. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Experimental validation of predicted cancer genes using FRET
NASA Astrophysics Data System (ADS)
Guala, Dimitri; Bernhem, Kristoffer; Ait Blal, Hammou; Jans, Daniel; Lundberg, Emma; Brismar, Hjalmar; Sonnhammer, Erik L. L.
2018-07-01
Huge amounts of data are generated in genome wide experiments, designed to investigate diseases with complex genetic causes. Follow up of all potential leads produced by such experiments is currently cost prohibitive and time consuming. Gene prioritization tools alleviate these constraints by directing further experimental efforts towards the most promising candidate targets. Recently a gene prioritization tool called MaxLink was shown to outperform other widely used state-of-the-art prioritization tools in a large scale in silico benchmark. An experimental validation of predictions made by MaxLink has however been lacking. In this study we used Fluorescence Resonance Energy Transfer, an established experimental technique for detection of protein-protein interactions, to validate potential cancer genes predicted by MaxLink. Our results provide confidence in the use of MaxLink for selection of new targets in the battle with polygenic diseases.
Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.
2017-01-01
A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides a well-defined acceptance criterion, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing model validity. The validation criterion developed following the threshold approach is not only a function of the Comparison Error, E (the difference between experiments and simulations), but also takes into account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test.
However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that, relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. In contrast, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in the experiments, simulations, and the threshold, or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and evaluating medical devices. PMID:28594889
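The threshold-based comparison can be sketched as a two-sample test of the comparison errors |S-D| against the margins to the safety threshold |Threshold-S|. This is a simplified illustration with hypothetical numbers, not the study's data; the paper's full procedure also propagates input uncertainties per ASME V&V 20, which is omitted here:

```python
import math
import statistics

def welch_t(x, y):
    """Welch's two-sample t statistic for mean(x) - mean(y)."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(
        vx / len(x) + vy / len(y))

def threshold_validation(sim, exp, threshold):
    """Compare error-to-data against margin-to-threshold.

    comparison_error[i] = |S_i - D_i|; margin[i] = |threshold - S_i|.
    A strongly negative t (errors well below margins) supports calling
    the model sufficiently validated for the context of use."""
    comparison_error = [abs(s - d) for s, d in zip(sim, exp)]
    margin = [abs(threshold - s) for s in sim]
    return welch_t(comparison_error, margin)
```

With simulated stresses far from the threshold, the statistic is strongly negative; as the stresses approach the threshold, the margins shrink toward the errors and the validation claim weakens, mirroring the Re = 6500 case above.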
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
NASA Astrophysics Data System (ADS)
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open source code to model wave energy converter (WEC) performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and, as a result, are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable power take-off system which can be used to generate or absorb wave energy.
Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time series, state-space radiation, and WEC-Sim compatibility with BEMIO (an open source AQWA/WAMIT/NEMOH coefficient parser).
Deep phenotyping to predict live birth outcomes in in vitro fertilization
Banerjee, Prajna; Choi, Bokyung; Shahine, Lora K.; Jun, Sunny H.; O’Leary, Kathleen; Lathi, Ruth B.; Westphal, Lynn M.; Wong, Wing H.; Yao, Mylene W. M.
2010-01-01
Nearly 75% of in vitro fertilization (IVF) treatments do not result in live births, and patients are largely guided by a generalized age-based prognostic stratification. We sought to provide personalized and validated prognosis by using available clinical and embryo data from prior, failed treatments to predict live birth probabilities in the subsequent treatment. We generated a boosted tree model, IVFBT, by training it with IVF outcomes data from 1,676 first cycles (C1s) from 2003–2006, followed by external validation with 634 cycles from 2007–2008. We tested whether this model could predict the probability of having a live birth in the subsequent treatment (C2). By using nondeterministic methods to identify prognostic factors and their relative nonredundant contribution, we generated a prediction model, IVFBT, that was superior to the age-based control by providing over 1,000-fold improvement in fit to new data (p < 0.05) and increased discrimination by receiver operating characteristic (ROC) analysis (area under the curve, 0.80 vs. 0.68 for C1, 0.68 vs. 0.58 for C2). IVFBT provided predictions that were more accurate for ∼83% of C1 and ∼60% of C2 cycles that were out of the range predicted by age. Over half of those patients were reclassified to have higher live birth probabilities. We showed that data from a prior cycle could be used effectively to provide personalized and validated live birth probabilities in a subsequent cycle. Our approach may be replicated and further validated in other IVF clinics. PMID:20643955
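The discrimination metric quoted above, the area under the ROC curve, has a convenient rank-based interpretation: it is the probability that a randomly chosen positive case (a live-birth cycle) is scored above a randomly chosen negative case. A minimal sketch with hypothetical scores, not the IVFBT model itself:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a randomly chosen positive case receives a higher
    score than a randomly chosen negative case (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level discrimination (the age-only baseline is closer to this), while 1.0 is perfect ranking; the reported 0.80 for C1 sits between these.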
Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview
NASA Technical Reports Server (NTRS)
Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard
1996-01-01
The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal de-icing and anti-icing) and ANTICE (hot-gas and electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis icing program, and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions, for comparison with analytical predictions. This paper will present an overview of the test, including a detailed description of: (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results will be presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort at this point will be summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2 - The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3 - The Validation of ANTICE'.
2016-05-24
...is to obtain high-fidelity experimental data critically needed to validate research codes at relevant conditions, and to develop systematic and...validated with experimental data. However, the time and length scales, and energy deposition rates in the canonical laboratory flames that have been studied over the...
2016-06-02
Retrieval of droplet-size density distribution from multiple-field-of-view cross-polarized lidar signals: theory and experimental validation. ...theoretical and experimental studies of multiple scattering and multiple-field-of-view (MFOV) lidar detection have made possible the retrieval of cloud...droplet cloud are typical of Rayleigh scattering, with a signature close to a dipole (phase function quasi-flat and a zero-depolarization ratio)...
2001-08-30
Body with Thermo-Chemical destribution of Heat-Protected System. In: Physical and Gasdynamic Phenomena in Supersonic Flows Over Bodies. Edit. by... Final Report on ISTC Contract #1809p: Parametric Study of Advanced Mixing of Fuel/Oxidant System in High Speed Gaseous Flows and Experimental Validation Planning.
Experimental validation of ultrasonic guided modes in electrical cables by optical interferometry.
Mateo, Carlos; de Espinosa, Francisco Montero; Gómez-Ullate, Yago; Talavera, Juan A
2008-03-01
In this work, the dispersion curves of elastic waves propagating in electrical cables and in bare copper wires are obtained theoretically and validated experimentally. The theoretical model, based on Gazis equations formulated according to the global matrix methodology, is resolved numerically. Viscoelasticity and attenuation are modeled theoretically using the Kelvin-Voigt model. Experimental tests are carried out using interferometry. There is good agreement between the simulations and the experiments despite the peculiarities of electrical cables.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-05
... modeling needs and experimental validation techniques for complex flow phenomena in and around off- shore... experimental validation. Ultimately, research in this area may lead to significant improvements in wind plant... meeting will consist of an initial plenary session in which invited speakers will survey available...
Validation of a Monte Carlo simulation of the Inveon PET scanner using GATE
NASA Astrophysics Data System (ADS)
Lu, Lijun; Zhang, Houjin; Bian, Zhaoying; Ma, Jianhua; Feng, Qiangjin; Chen, Wufan
2016-08-01
The purpose of this study is to validate the application of the GATE (Geant4 Application for Tomographic Emission) Monte Carlo simulation toolkit for modeling the performance characteristics of the Siemens Inveon small animal PET system. The simulation results were validated against experimental/published data in accordance with the NEMA NU-4 2008 protocol for standardized evaluation of spatial resolution, sensitivity, scatter fraction (SF) and noise equivalent counting rate (NECR) of a preclinical PET system. An agreement of less than 18% was obtained between the radial, tangential and axial spatial resolutions of the simulated and experimental results. The simulated peak NECR of the mouse-size phantom agreed with the experimental result, while for the rat-size phantom the simulated value was higher than the experimental result. The simulated and experimental SFs of the mouse- and rat-size phantoms both reached an agreement of less than 2%. These results demonstrate the feasibility of our GATE model to accurately simulate, within certain limits, all major performance characteristics of the Inveon PET system.
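For orientation, NECR combines trues (T), scatters (S) and randoms (R) into a single figure of merit. A commonly quoted NEMA-style form is NECR = T²/(T + S + R); several denominator variants exist in the literature, so treat the exact expression here as an assumption rather than the study's definition.

```python
def necr(trues, scatters, randoms):
    """Noise-equivalent count rate, NECR = T^2 / (T + S + R).
    All arguments are rates in counts/s; the denominator is the prompt rate."""
    return trues ** 2 / (trues + scatters + randoms)

def scatter_fraction(trues, scatters):
    """Scatter fraction SF = S / (T + S), the scattered share of true+scatter events."""
    return scatters / (trues + scatters)
```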
A new simple local muscle recovery model and its theoretical and experimental validation.
Ma, Liang; Zhang, Wei; Wu, Su; Zhang, Zhanwu
2015-01-01
This study was conducted to provide theoretical and experimental validation of a local muscle recovery model. Muscle recovery has been modeled in different empirical and theoretical approaches to determine work-rest allowance for musculoskeletal disorder (MSD) prevention. However, time-related parameters and individual attributes have not been sufficiently considered in conventional approaches. A new muscle recovery model was proposed by integrating time-related task parameters and individual attributes. Theoretically, this muscle recovery model was compared to other theoretical models mathematically. Experimentally, a total of 20 subjects participated in the experimental validation. Hand grip force recovery and shoulder joint strength recovery were measured after a fatiguing operation. The recovery profile was fitted by using the recovery model, and individual recovery rates were calculated as well after fitting. Good fitting values (r² > .8) were found for all the subjects. Significant differences in recovery rates were found among different muscle groups (p < .05). The theoretical muscle recovery model was primarily validated by characterization of the recovery process after fatiguing operation. The determined recovery rate may be useful to represent individual recovery attribute.
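The abstract does not give the model's closed form, so purely as an illustrative sketch, assume a one-parameter exponential recovery profile R(t) = 1 − exp(−r·t), a common shape in fatigue/recovery modeling. An individual recovery rate r can then be fitted by linearizing ln(1 − R) = −r·t:

```python
import math

def fit_recovery_rate(times, recovered):
    """Least-squares slope through the origin for ln(1 - R) = -r * t,
    where R is the recovered fraction at each time point (0 <= R < 1)."""
    num = sum(t * math.log(1.0 - R) for t, R in zip(times, recovered))
    den = sum(t * t for t in times)
    return -num / den
```

Fitting synthetic data generated with a known rate recovers that rate, which is the kind of per-subject fit the r² > .8 values above describe.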
Poor replication validity of biomedical association studies reported by newspapers
Smith, Andy; Boraud, Thomas; Gonon, François
2017-01-01
Objective To investigate the replication validity of biomedical association studies covered by newspapers. Methods We used a database of 4723 primary studies included in 306 meta-analysis articles. These studies associated a risk factor with a disease in three biomedical domains, psychiatry, neurology and four somatic diseases. They were classified into a lifestyle category (e.g. smoking) and a non-lifestyle category (e.g. genetic risk). Using the database Dow Jones Factiva, we investigated the newspaper coverage of each study. Their replication validity was assessed using a comparison with their corresponding meta-analyses. Results Among the 5029 articles of our database, 156 primary studies (of which 63 were lifestyle studies) and 5 meta-analysis articles were reported in 1561 newspaper articles. The percentage of covered studies and the number of newspaper articles per study strongly increased with the impact factor of the journal that published each scientific study. Newspapers almost equally covered initial (5/39 12.8%) and subsequent (58/600 9.7%) lifestyle studies. In contrast, initial non-lifestyle studies were covered more often (48/366 13.1%) than subsequent ones (45/3718 1.2%). Newspapers never covered initial studies reporting null findings and rarely reported subsequent null observations. Only 48.7% of the 156 studies reported by newspapers were confirmed by the corresponding meta-analyses. Initial non-lifestyle studies were less often confirmed (16/48) than subsequent ones (29/45) and than lifestyle studies (31/63). Psychiatric studies covered by newspapers were less often confirmed (10/38) than the neurological (26/41) or somatic (40/77) ones. This is correlated to an even larger coverage of initial studies in psychiatry. Whereas 234 newspaper articles covered the 35 initial studies that were later disconfirmed, only four press articles covered a subsequent null finding and mentioned the refutation of an initial claim. 
Conclusion Journalists preferentially cover initial findings although they are often contradicted by meta-analyses and rarely inform the public when they are disconfirmed. PMID:28222122
NASA Astrophysics Data System (ADS)
Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank
2016-10-01
Thermoforming of continuously fiber reinforced thermoplastics (CFRTP) is ideally suited to thin walled and complex shaped products. By means of forming simulation, an initial validation of the producibility of a specific geometry, an optimization of the forming process and the prediction of fiber-reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore, a method is presented that enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. As a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by the proposed validation method.
Validation of an automated mite counter for Dermanyssus gallinae in experimental laying hen cages.
Mul, Monique F; van Riel, Johan W; Meerburg, Bastiaan G; Dicke, Marcel; George, David R; Groot Koerkamp, Peter W G
2015-08-01
For integrated pest management (IPM) programs to be maximally effective, monitoring of the growth and decline of the pest populations is essential. Here, we present the validation results of a new automated monitoring device for the poultry red mite (Dermanyssus gallinae), a serious pest in laying hen facilities world-wide. This monitoring device (called an "automated mite counter") was validated in experimental laying hen cages with live birds and a growing population of D. gallinae. This validation study resulted in 17 data points of 'number of mites counted' by the automated mite counter and the 'number of mites present' in the experimental laying hen cages. The study demonstrated that the automated mite counter was able to track the D. gallinae population effectively. A wider evaluation showed that this automated mite counter can become a useful tool in IPM of D. gallinae in laying hen facilities.
Children Selectively Trust Individuals Who Have Imitated Them
ERIC Educational Resources Information Center
Over, Harriet; Carpenter, Malinda; Spears, Russell; Gattis, Merideth
2013-01-01
We investigated the influence of being imitated on children's subsequent trust. Five- to six-year-olds interacted with one experimenter who mimicked their choices and another experimenter who made different choices. Children were then presented with two tests. In a preference test, the experimenters offered conflicting preferences for the contents…
van de Streek, Jacco; Neumann, Marcus A
2010-10-01
This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry-breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect.
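The r.m.s. Cartesian displacement used as the correctness indicator above can be sketched generically as follows (this is a plain implementation under the assumption that the experimental and minimized structures are supplied as matched lists of non-H atomic coordinates in Å; it is not the authors' code):

```python
import math

def rms_displacement(coords_a, coords_b):
    """Root-mean-square Cartesian displacement between two matched lists
    of (x, y, z) coordinates, e.g. experimental vs. energy-minimized atoms."""
    n = len(coords_a)
    sq = sum((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
             for (xa, ya, za), (xb, yb, zb) in zip(coords_a, coords_b))
    return math.sqrt(sq / n)
```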
Gaillard, María Emilia; Bottero, Daniela; Zurita, María Eugenia; Carriquiriborde, Francisco; Martin Aispuro, Pablo; Bartel, Erika; Sabater-Martínez, David; Bravo, María Sol; Castuma, Celina; Hozbor, Daniela Flavia
2017-01-01
Maternal safety through pertussis vaccination and subsequent maternal–fetal-antibody transfer are well documented, but information on infant protection from pertussis by such antibodies and by subsequent vaccinations is scarce. Since mice are used extensively for maternal-vaccination studies, we adopted that model to narrow those gaps in our understanding of maternal pertussis immunization. Accordingly, we vaccinated female mice with commercial acellular pertussis (aP) vaccine and measured offspring protection against Bordetella pertussis challenge and specific-antibody levels with or without revaccination. Maternal immunization protected the offspring against pertussis, with that immune protection transferred to the offspring lasting for several weeks, as evidenced by a reduction (4–5 logs, p < 0.001) in the colony-forming-units recovered from the lungs of 16-week-old offspring. Moreover, maternal-vaccination-acquired immunity from the first pregnancy still conferred protection to offspring up to the fourth pregnancy. Under the conditions of our experimental protocol, protection to offspring from the aP-induced immunity is transferred both transplacentally and through breastfeeding. Adoptive-transfer experiments demonstrated that transferred antibodies were more responsible for the protection detected in offspring than transferred whole spleen cells. In contrast to reported findings, the protection transferred was not lost after the vaccination of infant mice with the same or other vaccine preparations, and conversely, the immunity transferred from mothers did not interfere with the protection conferred by infant vaccination with the same or different vaccines. These results indicated that aP-vaccine immunization of pregnant female mice conferred protective immunity that is transferred both transplacentally and via offspring breastfeeding without compromising the protection boostered by subsequent infant vaccination. 
These results—though admittedly not necessarily immediately extrapolatable to humans—nevertheless enabled us to test hypotheses under controlled conditions through detailed sampling and data collection. These findings will hopefully refine hypotheses that can then be validated in subsequent human studies. PMID:28932228
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
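The optimization step described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: a closed-form KL divergence between two Gaussians stands in for the expected cross entropy between model prediction and experimental observation, and simulated annealing searches over a single design variable.

```python
import math
import random

def kl_gaussian(mu0, s0, mu1, s1):
    """KL divergence between N(mu0, s0^2) and N(mu1, s1^2), closed form."""
    return math.log(s1 / s0) + (s0 ** 2 + (mu0 - mu1) ** 2) / (2 * s1 ** 2) - 0.5

def anneal(objective, x0, step=0.5, t0=1.0, cooling=0.995, iters=2000, seed=0):
    """Minimize objective(x) by simulated annealing; returns the best x seen.
    Worse candidates are accepted with probability exp(-dE / T) as T cools."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.gauss(0.0, step)
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fc < best_f:
                best_x, best_f = cand, fc
        t *= cooling
    return best_x

# Toy "experiment design": choose the input mu that makes the predicted
# distribution N(mu, 1) closest to the observed N(2, 1).
```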
Computational fluid dynamics modeling of laboratory flames and an industrial flare.
Singh, Kanwar Devesh; Gangadharan, Preeti; Chen, Daniel H; Lou, Helen H; Li, Xianchang; Richmond, Peyton
2014-11-01
A computational fluid dynamics (CFD) methodology for simulating the combustion process has been validated with experimental results. Three different types of experimental setups were used to validate the CFD model: an industrial-scale flare setup and two lab-scale flames. The CFD study also involved three different fuels: C3H6/CH/Air/N2, C2H4/O2/Ar and CH4/Air. In the first setup, flare efficiency data from the Texas Commission on Environmental Quality (TCEQ) 2010 field tests were used to validate the CFD model. In the second setup, a McKenna burner with flat flames was simulated. Temperature and mass fractions of important species were compared with the experimental data. Finally, results of an experimental study done at Sandia National Laboratories to generate a lifted jet flame were used for the purpose of validation. The reduced 50-species mechanism, LU 1.1, the realizable k-epsilon turbulence model, and the EDC turbulence-chemistry interaction model were used for this work. Flare efficiency, axial profiles of temperature, and mass fractions of various intermediate species obtained in the simulation were compared with experimental data and a good agreement between the profiles was clearly observed. In particular, the simulation match with the TCEQ 2010 flare tests has been significantly improved (within 5% of the data) compared to the results reported by Singh et al. in 2012. Validation of the speciated flat flame data supports the view that flares can be a primary source of formaldehyde emission.
Sources of Self-Efficacy in Mathematics: A Validation Study
ERIC Educational Resources Information Center
Usher, Ellen L.; Pajares, Frank
2009-01-01
The purpose of this study was to develop and validate items with which to assess A. Bandura's (1997) theorized sources of self-efficacy among middle school mathematics students. Results from Phase 1 (N=1111) were used to develop and refine items for subsequent use. In Phase 2 of the study (N=824), a 39-item, four-factor exploratory model fit best.…
ERIC Educational Resources Information Center
Machingambi, Zadzisai
2017-01-01
The principal focus of this study was to undertake a multilevel assessment of the predictive validity of teacher made tests in the Zimbabwean primary education sector. A correlational research design was adopted for the study, mainly to allow for statistical treatment of data and subsequent classical hypotheses testing using Spearman's rho.…
34 CFR 690.65 - Transfer student: attendance at more than one institution during an award year.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Grant at one institution subsequently enrolls at a second institution in the same award year, the... valid SAR to the second institution; or (2) The second institution obtains a valid ISIR. (b) The second... Federal Pell Grant only for that portion of the academic year in which a student is enrolled at that...
34 CFR 690.65 - Transfer student: attendance at more than one institution during an award year.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Grant at one institution subsequently enrolls at a second institution in the same award year, the... valid SAR to the second institution; or (2) The second institution obtains a valid ISIR. (b) The second... Federal Pell Grant only for that portion of the academic year in which a student is enrolled at that...
34 CFR 690.65 - Transfer student: attendance at more than one institution during an award year.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Grant at one institution subsequently enrolls at a second institution in the same award year, the... valid SAR to the second institution; or (2) The second institution obtains a valid ISIR. (b) The second... Federal Pell Grant only for that portion of the academic year in which a student is enrolled at that...
34 CFR 690.65 - Transfer student: attendance at more than one institution during an award year.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Grant at one institution subsequently enrolls at a second institution in the same award year, the... valid SAR to the second institution; or (2) The second institution obtains a valid ISIR. (b) The second... Federal Pell Grant only for that portion of the academic year in which a student is enrolled at that...
NASA Technical Reports Server (NTRS)
Hazelton, R. C.; Yadlowsky, E. J.; Churchill, R. J.; Parker, L. W.; Sellers, B.
1981-01-01
The effect of differential charging of spacecraft thermal control surfaces is assessed by studying the dynamics of the charging process. A program to experimentally validate a computer model of the charging process was established. Time resolved measurements of the surface potential were obtained for samples of Kapton and Teflon irradiated with a monoenergetic electron beam. Results indicate that the computer model and experimental measurements agree well and that for Teflon, secondary emission is the governing factor. Experimental data indicate that bulk conductivities play a significant role in the charging of Kapton.
FY2017 Pilot Project Plan for the Nuclear Energy Knowledge and Validation Center Initiative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Weiju
To prepare for technical development of computational code validation under the Nuclear Energy Knowledge and Validation Center (NEKVAC) initiative, several meetings were held by a group of experts of the Idaho National Laboratory (INL) and the Oak Ridge National Laboratory (ORNL) to develop requirements of, and formulate a structure for, a transient fuel database through leveraging existing resources. It was concluded in discussions of these meetings that a pilot project is needed to address the most fundamental issues that can generate immediate stimulus to near-future validation developments as well as long-lasting benefits to NEKVAC operation. The present project is proposed based on the consensus of these discussions. Analysis of common scenarios in code validation indicates that the incapability of acquiring satisfactory validation data is often a showstopper that must first be tackled before any confident validation developments can be carried out. Validation data are usually found scattered in different places most likely with interrelationships among the data not well documented, incomplete with information for some parameters missing, nonexistent, or unrealistic to experimentally generate. Furthermore, with very different technical backgrounds, the modeler, the experimentalist, and the knowledgebase developer that must be involved in validation data development often cannot communicate effectively without a data package template that is representative of the data structure for the information domain of interest to the desired code validation. This pilot project is proposed to use the legendary TREAT Experiments Database to provide core elements for creating an ideal validation data package. Data gaps and missing data interrelationships will be identified from these core elements. All the identified missing elements will then be filled in with experimental data if available from other existing sources or with dummy data if nonexistent.
The resulting hybrid validation data package (composed of experimental and dummy data) will provide a clear and complete instance delineating the structure of the desired validation data and enabling effective communication among the modeler, the experimentalist, and the knowledgebase developer. With a good common understanding of the desired data structure by the three parties of subject matter experts, further existing data hunting will be effectively conducted, new experimental data generation will be realistically pursued, knowledgebase schema will be practically designed, and code validation will be confidently planned.
Progress Towards a Microgravity CFD Validation Study Using the ISS SPHERES-SLOSH Experiment
NASA Technical Reports Server (NTRS)
Storey, Jedediah M.; Kirk, Daniel; Marsell, Brandon (Editor); Schallhorn, Paul (Editor)
2017-01-01
Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and improving performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many CFD programs have been validated by slosh experiments using various fluids in earth gravity, but prior to the ISS SPHERES-Slosh experiment, little experimental data for long-duration, zero-gravity slosh existed. This paper presents the current status of an ongoing CFD validation study using the ISS SPHERES-Slosh experimental data.
Karanci, A Nuray; Dirik, Gülay; Yorulmaz, Orçun
2007-01-01
The aim of the present study was to examine the reliability and the validity of the Turkish translation of the Eysenck Personality Questionnaire Revised-Abbreviated Form (EPQR-A) (Francis et al., 1992), which consists of 24 items that assess neuroticism, extraversion, psychoticism, and lying. The questionnaire was first translated into Turkish and then back translated. Subsequently, it was administered to 756 students from 4 different universities. The Fear Survey Inventory-III (FSI-III), Rosenberg Self-Esteem Scales (RSES), and Egna Minnen Beträffande Uppfostran (EMBU-C) were also administered in order to assess the questionnaire's validity. The internal consistency, test-retest reliability, and validity were subsequently evaluated. Factor analysis, similar to the original scale, yielded 4 factors; the neuroticism, extraversion, psychoticism, and lie scales. Kuder-Richardson alpha coefficients for the extraversion, neuroticism, psychoticism, and lie scales were 0.78, 0.65, 0.42, and 0.64, respectively, and the test-retest reliability of the scales was 0.84, 0.82, 0.69, and 0.69, respectively. The relationships between EPQR-A-48, FSI-III, EMBU-C, and RSES were examined in order to evaluate the construct validity of the scale. Our findings support the construct validity of the questionnaire. To investigate gender differences in scores on the subscales, MANOVA was conducted. The results indicated that there was a gender difference only in the lie scale scores. Our findings largely supported the reliability and validity of the questionnaire in a Turkish student sample. The psychometric characteristics of the Turkish version of the EPQR-A were discussed in light of the relevant literature.
Tomei, M Concetta; Mosca Angelucci, Domenica; Ademollo, Nicoletta; Daugulis, Andrew J
2015-03-01
Solid phase extraction performed with commercial polymer beads to treat soil contaminated by chlorophenols (4-chlorophenol, 2,4-dichlorophenol and pentachlorophenol) as single compounds and in a mixture has been investigated in this study. Soil-water-polymer partition tests were conducted to determine the relative affinities of single compounds in soil-water and polymer-water pairs. Subsequent soil extraction tests were performed with Hytrel 8206, the polymer showing the highest affinity for the tested chlorophenols. Factors that were examined were polymer type, moisture content, and contamination level. Increased moisture content (up to 100%) improved the extraction efficiency for all three compounds. Extraction tests at this upper level of moisture content showed removal efficiencies ≥70% for all the compounds and their ternary mixture, for 24 h of contact time, which is in contrast to the weeks and months normally required for conventional ex situ remediation processes. A dynamic model characterizing the rate and extent of decontamination was also formulated, calibrated and validated with the experimental data. The proposed model, based on the simplified approach of "lumped parameters" for the mass transfer coefficients, provided very good predictions of the experimental data for the absorptive removal of contaminants from soil at different individual solute levels. Parameters evaluated from calibration by fitting of single compound data have been successfully applied to predict mixture data, with differences between experimental and predicted data in all cases being ≤3%.
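A "lumped parameter" first-order mass-transfer model of the kind described can be sketched as follows. This is a generic form, not the authors' exact equations: concentration relaxes toward equilibrium as dC/dt = −k(C − Ceq), integrated here with explicit Euler and checked against the closed-form solution.

```python
import math

def euler_decay(c0, c_eq, k, t_end, dt=1e-3):
    """Explicit Euler integration of dC/dt = -k * (C - C_eq),
    e.g. contaminant concentration in soil relaxing toward the
    soil-water-polymer equilibrium value with lumped rate constant k."""
    c, t = c0, 0.0
    while t < t_end:
        c += -k * (c - c_eq) * dt
        t += dt
    return c

def analytic(c0, c_eq, k, t):
    """Closed-form solution C(t) = C_eq + (C0 - C_eq) * exp(-k * t)."""
    return c_eq + (c0 - c_eq) * math.exp(-k * t)
```

In a calibration like the one described, k would be fitted so the integrated curve matches measured decontamination data.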
NASA Astrophysics Data System (ADS)
Moret-Fernández, David; Angulo, Marta; Latorre, Borja; González-Cebollada, César; López, María Victoria
2017-04-01
Determination of the saturated hydraulic conductivity, Ks, and the α and n parameters of the van Genuchten (1980) water retention curve, θ(h), is fundamental to fully understand and predict soil water distribution. This work presents a new procedure to estimate the soil hydraulic properties from the inverse analysis of a single cumulative upward infiltration curve followed by an overpressure step at the end of the wetting process. Firstly, Ks is calculated by Darcy's law from the overpressure step. The soil sorptivity (S) is then estimated using the Haverkamp et al. (1994) equation. Next, a relationship between α and n, f(α,n), is calculated from the estimated S and Ks. The α and n values are finally obtained by the inverse analysis of the experimental data after applying the f(α,n) relationship to the HYDRUS-1D model. The method was validated on theoretical synthetic curves for three different soils (sand, loam and clay), and subsequently tested on experimental sieved soils (sand, loam, clay loam and clay) of known hydraulic properties. A robust relationship was observed between the theoretical α and n values (R2 > 0.99) of the different synthetic soils and those estimated from inverse analysis of the upward infiltration curve. Consistent results were also obtained for the experimental soils (R2 > 0.85). These results demonstrated that this technique allows accurate estimates of the soil hydraulic properties for a wide range of textures, including clay soils.
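The first step above is plain Darcy's-law arithmetic. A minimal sketch of computing Ks from the steady flow measured during the overpressure step (variable names are mine; units SI):

```python
def darcy_ks(flow_rate, sample_length, area, head_difference):
    """Saturated hydraulic conductivity from Darcy's law:
    Ks = Q * L / (A * dH),
    with Q the steady flow (m^3/s), L the sample length (m),
    A the cross-sectional area (m^2) and dH the head difference (m)."""
    return flow_rate * sample_length / (area * head_difference)

# e.g. Q = 1e-6 m^3/s through a 5 cm column of 20 cm^2 under a 0.5 m head
```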
Marvel Analysis of the Measured High-resolution Rovibronic Spectra of TiO
NASA Astrophysics Data System (ADS)
McKemmish, Laura K.; Masseron, Thomas; Sheppard, Samuel; Sandeman, Elizabeth; Schofield, Zak; Furtenbacher, Tibor; Császár, Attila G.; Tennyson, Jonathan; Sousa-Silva, Clara
2017-02-01
Accurate, experimental rovibronic energy levels, with associated labels and uncertainties, are reported for 11 low-lying electronic states of the diatomic ⁴⁸Ti¹⁶O molecule, determined using the Marvel (Measured Active Rotational-Vibrational Energy Levels) algorithm. All levels are based on lines corresponding to critically reviewed and validated high-resolution experimental spectra taken from 24 literature sources. The transition data are in the 2-22,160 cm⁻¹ region. Out of the 49,679 measured transitions, 43,885 are triplet-triplet, 5710 are singlet-singlet, and 84 are triplet-singlet transitions. A careful analysis of the resulting experimental spectroscopic network (SN) allows 48,590 transitions to be validated. The transitions determine 93 vibrational band origins of ⁴⁸Ti¹⁶O, including 71 triplet and 22 singlet ones. There are 276 (73) triplet-triplet (singlet-singlet) band heads derived from Marvel experimental energies, 123 (38) of which have never been assigned in low- or high-resolution experiments. The highest J value, where J stands for the total angular momentum, for which an energy level is validated is 163. The numbers of experimentally derived triplet and singlet ⁴⁸Ti¹⁶O rovibrational energy levels are 8682 and 1882, respectively. The lists of validated lines and levels for ⁴⁸Ti¹⁶O are deposited in the supporting information to this paper.
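The spectroscopic-network idea behind Marvel can be illustrated with a toy: measured transition wavenumbers connect energy levels into a graph, and absolute energies follow by propagating from the ground level. The real algorithm performs a weighted least-squares inversion with uncertainties; this sketch only shows the network propagation, on invented labels and wavenumbers.

```python
# Toy spectroscopic network: propagate level energies from measured
# transitions. Labels, wavenumbers, and the function name are invented.
from collections import deque

def energies_from_transitions(transitions, ground="X(0,0)"):
    """transitions: list of (upper_label, lower_label, wavenumber_cm-1).
    Returns {label: energy} with the ground level fixed at 0."""
    graph = {}
    for up, lo, wn in transitions:
        graph.setdefault(up, []).append((lo, -wn))  # stepping down loses wn
        graph.setdefault(lo, []).append((up, +wn))  # stepping up gains wn
    energy = {ground: 0.0}
    queue = deque([ground])
    while queue:                                    # breadth-first propagation
        node = queue.popleft()
        for nbr, delta in graph.get(node, []):
            if nbr not in energy:
                energy[nbr] = energy[node] + delta
                queue.append(nbr)
    return energy

# Invented three-level network: two lines sharing an upper level
lines = [("A(1)", "X(0,0)", 14000.0), ("A(1)", "X(0,1)", 13850.0)]
levels = energies_from_transitions(lines)
print(levels["X(0,1)"])  # 150.0, the combination difference of the two lines
```

The "combination difference" in the last line is the standard consistency check that a network analysis exploits: two lines sharing an upper level fix the spacing of the two lower levels.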
Houdek, Petr
2017-01-01
The aim of this perspective article is to show that current experimental evidence on factors influencing dishonesty has limited external validity. Most experimental studies are built on random assignment, in which control and experimental groups of subjects face varied sizes of the expected reward for behaving dishonestly, opportunities for cheating, means of rationalizing dishonest behavior, etc., and the groups' mean reactions are observed. These studies have internal validity in assessing the causal influence of these and other factors, but they lack external validity in organizational, market and other environments. If people can opt into or out of diverse real-world environments, an experiment aimed at studying the factors influencing real-life dishonesty should allow for such an option. The behavior of such self-selected groups of marginal subjects would probably contain a larger level of (non)deception than the behavior of average people. The article warns that there are few studies that enable self-selection or sorting of participants into varying environments, and that this limits current knowledge of the extent and dynamics of dishonest and fraudulent behavior. The article focuses on suggestions for improving dishonesty research, especially for avoiding experimenter demand bias.
Fatigue Failure of Space Shuttle Main Engine Turbine Blades
NASA Technical Reports Server (NTRS)
Swanson, Gregory R.; Arakere, Nagaraj K.
2000-01-01
Experimental validation of finite element modeling of single-crystal turbine blades is presented. Experimental results from uniaxial high cycle fatigue (HCF) test specimens and from full-scale Space Shuttle Main Engine test firings with the High Pressure Fuel Turbopump/Alternate Turbopump (HPFTP/AT) provide the data used for the validation. The conclusions show the significant contribution of the crystal orientation within the blade to the resulting life of the component, that the analysis can predict this variation, and that experimental testing demonstrates it.
Ali, Azhar A; Shalhoub, Sami S; Cyr, Adam J; Fitzpatrick, Clare K; Maletsky, Lorin P; Rullkoetter, Paul J; Shelburne, Kevin B
2016-01-25
Healthy patellofemoral (PF) joint mechanics are critical to optimal function of the knee joint. Patellar maltracking may lead to large joint reaction loads and high stresses on the articular cartilage, increasing the risk of cartilage wear and the onset of osteoarthritis. While the mechanical sources of PF joint dysfunction are not well understood, links have been established between PF tracking and abnormal kinematics of the tibiofemoral (TF) joint, specifically following cruciate ligament injury and repair. The objective of this study was to create a validated finite element (FE) representation of the PF joint in order to predict PF kinematics and quadriceps force across healthy and pathological specimens. Measurements from a series of dynamic in vitro cadaveric experiments were used to develop finite element models of the knee for three specimens. Specimens were loaded under intact, ACL-resected and both ACL- and PCL-resected conditions. Finite element models of each specimen were constructed and calibrated to the outputs of the intact knee condition, and subsequently used to predict PF kinematics, contact mechanics, quadriceps force, patellar tendon moment arm and patellar tendon angle of the cruciate-resected conditions. Model results for the intact and cruciate-resected trials successfully matched experimental kinematics (avg. RMSE 4.0°, 3.1 mm) and peak quadriceps forces (avg. difference 5.6%). Cruciate resections demonstrated either increased patellar tendon loads or increased joint reaction forces. The current study advances the standard for evaluation of PF mechanics through direct validation of cruciate-resected conditions, including specimen-specific representations of PF anatomy. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ligament-induced sacral fractures of the pelvis are possible.
Steinke, Hanno; Hammer, Niels; Lingslebe, Uwe; Höch, Andreas; Klink, Thomas; Böhme, Jörg
2014-07-01
Pelvic ring stability is maintained passively by both the osseous and the ligamentous apparatus. Therapeutic approaches focus mainly on fracture patterns, so ligaments are often neglected. When they rupture along with the bone after pelvic ring fractures, disrupting stability, ligaments need to be considered during reconstruction and rehabilitation. Our aim was to determine the influence of ligaments on open-book injury using two experimental models with body donors. Mechanisms of bone avulsion related to open-book injury were investigated. Open-book injuries were induced in human pelves and subsequently investigated by anatomical dissection and endoscopy. The findings were compared to CT and MRI scans of open-book injuries. Relevant structures were further analyzed using plastinated cross-sections of the posterior pelvic ring. A fragment of the distal sacrum was observed, related to open-book injury. Two ligaments were found to be responsible for this avulsion phenomenon: the caudal portion of the anterior sacroiliac ligament and another ligament running along the ventral surface of the third sacral vertebra. The sacral fragment remained attached to the coxal bone by this second ligament after open-book injury. These results were validated using plastination and the structures were identified. Pelvic ligaments are probably involved in sacral avulsion caused by lateral traction. Therefore, ligaments should be taken into account in the diagnosis of open-book injury and subsequent therapy. Copyright © 2014 Wiley Periodicals, Inc.
Jones, Brendon R; Brouwers, Luke B; Van Tonder, Warren D; Dippenaar, Matthys A
2017-05-01
The vadose zone typically comprises soil underlain by fractured rock. Often, surface water and groundwater parameters are readily available, but variably saturated flow through soil and rock is oversimplified or estimated as input for hydrological models. In this paper, a series of geotechnical centrifuge experiments is conducted to address knowledge gaps in: (i) variably saturated flow and dispersion in soil and (ii) variably saturated flow in discrete vertical and horizontal fractures. Findings from the research show that the hydraulic gradient, and not the hydraulic conductivity, is scaled for seepage flow in the geotechnical centrifuge. Furthermore, geotechnical centrifuge modelling has proven to be a viable experimental tool for the modelling of hydrodynamic dispersion as well as the replication of flow mechanisms for unsaturated fracture flow similar to those previously observed in the literature. Despite the inherent challenges of modelling variable saturation in the vadose zone, the geotechnical centrifuge offers a powerful experimental tool to physically model and observe variably saturated flow. This can give valuable insight into mechanisms associated with solid-fluid interaction problems under these conditions. Findings from future research can be used to validate current numerical modelling techniques and address the subsequent influence on aquifer recharge and vulnerability, contaminant transport, waste disposal, dam construction, slope stability and seepage into subsurface excavations.
Micro Blowing Simulations Using a Coupled Finite-Volume Lattice-Boltzmann LES Approach
NASA Technical Reports Server (NTRS)
Menon, S.; Feiz, H.
1990-01-01
Three-dimensional large-eddy simulations (LES) of single and multiple jets in cross-flow (JICF) are conducted using the 19-bit Lattice Boltzmann Equation (LBE) method coupled with a conventional finite-volume (FV) scheme. In this coupled LBE-FV approach, the LBE-LES is employed to simulate the flow inside the jet nozzles while the FV-LES is used to simulate the crossflow. The key application of this technique is the study of the micro-blowing technique (MBT) for drag control, similar to the recent experiments at NASA/GRC. It is necessary to resolve the flow inside the micro-blowing and suction holes with high resolution without being restricted by the FV time-step limit. The coupled LBE-FV-LES approach achieves this objective in a computationally efficient manner. A single jet in crossflow case is used for validation purposes, and the results are compared with experimental data and a full LBE-LES simulation. Good agreement with the data is obtained. Subsequently, MBT over a flat plate with a porosity of 25% is simulated using 9 jets in a compressible cross-flow at a Mach number of 0.4. It is shown that MBT suppresses the near-wall vortices and reduces the skin friction by up to 50 percent, in good agreement with experimental data.
Wurtmann, Elisabeth J.; Ratushny, Alexander V.; Pan, Min; Beer, Karlyn D.; Aitchison, John D.; Baliga, Nitin S.
2014-01-01
It is known that environmental context influences the degree of regulation at the transcriptional and post-transcriptional levels. However, the principles governing the differential usage and interplay of regulation at these two levels are not clear. Here, we show that the integration of transcriptional and post-transcriptional regulatory mechanisms in a characteristic network motif drives efficient environment-dependent state transitions. Through phenotypic screening, systems analysis, and rigorous experimental validation, we discovered an RNase (VNG2099C) in Halobacterium salinarum that is transcriptionally co-regulated with genes of the aerobic physiologic state but acts on transcripts of the anaerobic state. Through modeling and experimentation we show that this arrangement generates an efficient state-transition switch, within which RNase repression of a transcriptional positive autoregulation (RPAR) loop is critical for shutting down ATP-consuming active potassium uptake to reserve the energy required for salinity adaptation under aerobic, high-potassium, or dark conditions. Subsequently, we discovered that many Escherichia coli operons with energy-associated functions are also putatively controlled by RPAR, indicating that this network motif may have evolved independently in phylogenetically distant organisms. Thus, our data suggest that the interplay of transcriptional and post-transcriptional regulation in the RPAR motif is a generalized principle for efficient environment-dependent state transitions across prokaryotes. PMID:24612392
Rubino, Stefano; Akhtar, Sultan; Leifer, Klaus
2016-02-01
We present a simple, fast method for thickness characterization of suspended graphene/graphite flakes that is based on transmission electron microscopy (TEM). We derive an analytical expression for the intensity of the transmitted electron beam, I0(t), as a function of the specimen thickness t (t < λ, where λ is the absorption constant for graphite). We show that in thin graphite crystals the transmitted intensity is a linear function of t. Furthermore, high-resolution (HR) TEM simulations are performed to obtain λ for a [001] zone axis orientation, in a two-beam case, and in a low-symmetry orientation. Subsequently, HR images (used to determine t) and bright-field images (to measure I0(0) and I0(t)) were acquired to experimentally determine λ. The experimental value measured in the low-symmetry orientation matches the calculated value (λ = 225 ± 9 nm). The simulations also show that the linear approximation is valid up to a sample thickness of 3-4 nm regardless of the orientation, and up to several tens of nanometers for a low-symmetry orientation. When compared with standard techniques for thickness determination of graphene/graphite, the proposed method has the advantage of being simple and fast, requiring only the acquisition of bright-field images.
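The thin-crystal limit described above amounts to linearizing exponential attenuation, I0(t) ≈ I0(0)(1 - t/λ), so thickness follows from two bright-field intensity readings. A minimal sketch, using the reported low-symmetry λ = 225 nm and invented intensities:

```python
# Sketch of the linear thickness estimate implied above:
# I0(t) ≈ I0(0) * (1 - t/λ)  =>  t ≈ λ * (1 - I0(t)/I0(0)).
# λ = 225 nm is the reported low-symmetry value; intensities are invented.

LAMBDA_NM = 225.0  # absorption constant for graphite, low-symmetry orientation

def thickness_nm(i_vacuum, i_sample, lam=LAMBDA_NM):
    """Thickness from vacuum (through-hole) and on-sample bright-field
    intensities, valid only in the thin-crystal linear regime."""
    return lam * (1.0 - i_sample / i_vacuum)

# A flake transmitting 98% of the incident intensity:
t = thickness_nm(1.0, 0.98)
print(round(t, 1))  # 4.5 nm, near the stated 3-4 nm limit of linearity
```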
Charek, Daniel B; Meyer, Gregory J; Mihura, Joni L
2016-10-01
We investigated the impact of ego depletion on selected Rorschach cognitive processing variables and self-reported affect states. Research indicates acts of effortful self-regulation transiently deplete a finite pool of cognitive resources, impairing performance on subsequent tasks requiring self-regulation. We predicted that relative to controls, ego-depleted participants' Rorschach protocols would have more spontaneous reactivity to color, less cognitive sophistication, and more frequent logical lapses in visualization, whereas self-reports would reflect greater fatigue and less attentiveness. The hypotheses were partially supported; despite a surprising absence of self-reported differences, ego-depleted participants had Rorschach protocols with lower scores on two variables indicative of sophisticated combinatory thinking, as well as higher levels of color receptivity; they also had lower scores on a composite variable computed across all hypothesized markers of complexity. In addition, self-reported achievement striving moderated the effect of the experimental manipulation on color receptivity, and in the Depletion condition it was associated with greater attentiveness to the tasks, more color reactivity, and less global synthetic processing. Results are discussed with an emphasis on the response process, methodological limitations and strengths, implications for calculating refined Rorschach scores, and the value of using multiple methods in research and experimental paradigms to validate assessment measures. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Liu, T.; Reißner, R.; Schiller, G.; Ansar, A.
2018-01-01
The aim of this work is to improve the performance of electrodes prepared via atmospheric plasma spray by means of gas shrouding, which is expected to appreciably reduce the oxygen content of the plasma plume and thereby improve the coating quality. Electrodes with a dual-layer coating for alkaline water electrolysis were deposited on Ni-coated perforated substrates. Microstructure and morphology were studied by SEM, and element content was measured by EDS. An enthalpy probe was employed to measure the plasma temperature and velocity as well as the gas composition. To verify and better understand the shrouding effect, numerical calculations were carried out according to the experimental settings, and electrochemical tests were performed to validate it. The results showed a slight protective effect of gas shrouding on the plasma plume and the final coating. Over the dual-layer section, the measured oxygen fraction was 3.46% without gas shrouding and 3.15% with gas shrouding. With gas shrouding the coating exhibited element contents similar to those of a coating sprayed by VPS, although no obvious improvement was observed in the microstructure or morphology. A clear electrochemical improvement was nevertheless achieved: with gas shrouding the electrode exhibited performance similar to that of the VPS-sprayed electrode.
Optical hysteresis in SPR structures with amorphous As2S3 film under low-power laser irradiation
NASA Astrophysics Data System (ADS)
Stafe, M.; Popescu, A. A.; Savastru, D.; Negutu, C.; Vasile, G.; Mihailescu, M.; Ducariu, A.; Savu, V.; Tenciu, D.; Miclos, S.; Baschir, L.; Verlan, V. V.; Bordian, O.; Puscas, N. N.
2018-03-01
Optical hysteresis is a fundamental phenomenon that can lead to optical bistability and high-speed signal processing. Here, we present a theoretical and experimental study of the optical hysteresis phenomenon in amorphous As2S3 chalcogenide-based waveguide structures under surface plasmon resonance (SPR) conditions. The SPR structure is irradiated with low-power CW Ar laser radiation at 514 nm wavelength, with photon energy near the optical band gap of As2S3, in a Kretschmann-Raether configuration. First, we determined the incidence angle on the SPR structure for resonant coupling of the laser radiation into the waveguide structure. Subsequently, setting the incidence angle near resonance, we analyzed the variation of the laser power reflected from the SPR structure with incident power. We demonstrated that, with the incidence angle set at a value slightly smaller than the resonance angle, the increase followed by the decrease of the incident power led to a wide (up to 60%) hysteresis loop of the reflected power. This behavior is related to the slow and persistent photo-induced modification of the complex refractive index of As2S3 under 514 nm laser irradiation. The experimental and theoretical results are in good agreement, demonstrating the validity of the theoretical model presented here.
Determination of effective loss factors in reduced SEA models
NASA Astrophysics Data System (ADS)
Chimeno Manguán, M.; Fernández de las Heras, M. J.; Roibás Millán, E.; Simón Hidalgo, F.
2017-01-01
The definition of Statistical Energy Analysis (SEA) models for large complex structures is highly conditioned by the classification of the structure's elements into a set of coupled subsystems and the subsequent determination of the loss factors representing both the internal damping and the coupling between subsystems. An accurate definition of the complete system can lead to excessively large models as the size and complexity increase. This fact can also raise practical issues for the experimental determination of the loss factors. This work presents a formulation of reduced SEA models for incomplete systems defined by a set of effective loss factors. The reduced SEA model provides a feasible number of subsystems for the application of the Power Injection Method (PIM). For structures of high complexity, access to some components can be restricted, for instance internal equipment or panels. In such cases, the use of PIM to carry out an experimental SEA analysis is not possible. New methods are presented for this case in combination with the reduced SEA models. These methods allow some of the model loss factors that could not be obtained through PIM to be defined. The methods are validated with a numerical analysis case and are also applied to an actual spacecraft structure with accessibility restrictions: a solar wing in folded configuration.
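The Power Injection Method the abstract builds on can be sketched numerically: inject a known power into each subsystem in turn, measure all subsystem energies, and solve the energy balance P = ω·L·E for the loss-factor matrix L. The two-subsystem example below uses invented numbers and a plain 2×2 inverse; it is an illustration of the matrix inversion only, not the paper's reduced-model formulation.

```python
# Minimal two-subsystem Power Injection Method sketch: L = (1/omega) P E^-1,
# with P the diagonal matrix of injected powers and E[i][j] the measured
# energy of subsystem i when injecting into subsystem j. All values invented.

def pim_loss_factors(omega, injected_powers, energy_matrix):
    """Solve P = omega * L * E for L with two subsystems."""
    (e11, e12), (e21, e22) = energy_matrix
    det = e11 * e22 - e12 * e21                      # 2x2 determinant
    inv = [[ e22 / det, -e12 / det],
           [-e21 / det,  e11 / det]]                 # E^-1
    p1, p2 = injected_powers
    return [[p1 * inv[0][j] / omega for j in range(2)],
            [p2 * inv[1][j] / omega for j in range(2)]]

# Example: 1 W injections at omega = 1000 rad/s, invented energies in joules
E = [[2e-3, 5e-4],
     [4e-4, 1.5e-3]]
L = pim_loss_factors(1000.0, [1.0, 1.0], E)
```

As a sanity check, multiplying ω·L·E back should reproduce the diagonal injected-power matrix, which is exactly the consistency property PIM relies on.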
Continuous odour measurement from fattening pig units
NASA Astrophysics Data System (ADS)
Romain, Anne-Claude; Nicolas, Jacques; Cobut, Pierre; Delva, Julien; Nicks, Baudouin; Philippe, François-Xavier
2013-10-01
A study in experimental slatted-system fattening pig units was conducted with the aim of estimating the odour emission factor (in ou s⁻¹ pig⁻¹), which can subsequently be used in dispersion models to assess the odour annoyance zone. Dynamic olfactometry measurements carried out at different development stages of the pigs showed a logical trend of the mean assessed odour emission factor with pig mass. However, the variation within a given mass class was much larger than the variation between classes. Possible causes of this variation were identified as the evolution of the ventilation rate during the day and the circadian rhythm of the pigs. To continuously monitor the daily variation of the odour, an electronic nose was used with a suitable regression model calibrated against olfactometric measurements. After an appropriate validation check, the electronic nose proved to be a convenient complementary tool to dynamic olfactometry for recording the daily variation of the odour emission factor in the pig barn. It was demonstrated that, under the controlled conditions of the experimental pens, the daily variation of the odour emission rate could be attributed mainly to the sole influence of the circadian rhythm of the pigs. As a consequence, determining a representative odour emission factor in a real case cannot be based on a snapshot odour sampling.
A genome-wide longitudinal transcriptome analysis of the aging model Podospora anserina.
Philipp, Oliver; Hamann, Andrea; Servos, Jörg; Werner, Alexandra; Koch, Ina; Osiewacz, Heinz D
2013-01-01
Aging of biological systems is controlled by various processes which have a potential impact on gene expression. Here we report a genome-wide transcriptome analysis of the fungal aging model Podospora anserina. Total RNA of three individuals of defined age were pooled and analyzed by SuperSAGE (serial analysis of gene expression). A bioinformatics analysis identified different molecular pathways affected during aging. While the abundance of transcripts linked to ribosomes and to the proteasome quality-control system decreases during aging, that of transcripts associated with autophagy increases, suggesting that autophagy may act as a compensatory quality-control pathway. Transcript profiles associated with energy metabolism, including mitochondrial functions, were found to fluctuate during aging. Comparison of wild-type transcripts that are continuously down-regulated during aging with those down-regulated in the long-lived copper-uptake mutant grisea validated the relevance of age-related changes in cellular copper metabolism. Overall, we (i) present a unique age-related data set of a longitudinal study of the experimental aging model P. anserina, which represents a reference resource for future investigations in a variety of organisms, (ii) suggest autophagy to be a key quality-control pathway that becomes active once other pathways fail, and (iii) present testable predictions for subsequent experimental investigations.
A user-friendly model for spray drying to aid pharmaceutical product development.
Grasmeijer, Niels; de Waard, Hans; Hinrichs, Wouter L J; Frijlink, Henderik W
2013-01-01
The aim of this study was to develop a user-friendly model for spray drying that can aid in the development of a pharmaceutical product by shifting from a trial-and-error towards a quality-by-design approach. To achieve this, a spray dryer model was developed in commercial and open-source spreadsheet software. The output of the model was first fitted to the experimental output of a Büchi B-290 spray dryer and subsequently validated. The predicted outlet temperatures of the spray dryer model matched the experimental values very well over the entire range of spray dryer settings that were tested. Finally, the model was applied to the production of glassy sugars by spray drying, these being often-used excipients in formulations of biopharmaceuticals. For the production of glassy sugars, the model was extended to predict the relative humidity at the outlet, which is not measured in the spray dryer by default. This extended model was then successfully used to predict whether specific settings were suitable for producing glassy trehalose and inulin by spray drying. In conclusion, a spray dryer model was developed that is able to predict the output parameters of the spray drying process. The model can aid the development of spray-dried pharmaceutical products by shifting from a trial-and-error towards a quality-by-design approach.
siRNAmod: A database of experimentally validated chemically modified siRNAs.
Dar, Showkat Ahmad; Thakur, Anamika; Qureshi, Abid; Kumar, Manoj
2016-01-28
Small interfering RNA (siRNA) technology has vast potential for functional genomics and the development of therapeutics. However, it faces many obstacles, predominantly the instability of siRNAs due to nuclease digestion and their consequently short biological half-life. Chemical modifications of siRNAs provide a means to overcome these shortcomings and improve their stability and potency. Despite their enormous utility, a bioinformatics resource for these chemically modified siRNAs (cm-siRNAs) has been lacking. Therefore, we have developed siRNAmod, a specialized databank for chemically modified siRNAs. Currently, our repository contains a total of 4894 chemically modified siRNA sequences, comprising 128 unique chemical modifications at different positions with various permutations and combinations. It incorporates important information on siRNA sequence, chemical modification, their number and respective positions, structure, canonical simplified molecular-input line-entry system (SMILES), efficacy of the modified siRNA, target gene, cell line, experimental methods, reference, etc. It is developed and hosted using the Linux Apache MySQL PHP (LAMP) software bundle. Standard user-friendly browse and search facilities and analysis tools are also integrated. siRNAmod should assist in understanding the effects of chemical modifications and in the further development of stable and efficacious siRNAs for research as well as therapeutics. siRNAmod is freely available at: http://crdd.osdd.net/servers/sirnamod.
Broadband acoustic properties of a murine skull.
Estrada, Héctor; Rebling, Johannes; Turner, Jake; Razansky, Daniel
2016-03-07
It has been well recognized that the presence of a skull imposes harsh restrictions on the use of ultrasound and optoacoustic techniques in the study, treatment and modulation of the brain function. We propose a rigorous modeling and experimental methodology for estimating the insertion loss and the elastic constants of the skull over a wide range of frequencies and incidence angles. A point-source-like excitation of ultrawideband acoustic radiation was induced via the absorption of nanosecond duration laser pulses by a 20 μm diameter microsphere. The acoustic waves transmitted through the skull are recorded by a broadband, spherically focused ultrasound transducer. A coregistered pulse-echo ultrasound scan is subsequently performed to provide accurate skull geometry to be fed into an acoustic transmission model represented in an angular spectrum domain. The modeling predictions were validated by measurements taken from a glass cover-slip and ex vivo adult mouse skulls. The flexible semi-analytical formulation of the model allows for seamless extension to other transducer geometries and diverse experimental scenarios involving broadband acoustic transmission through locally flat solid structures. It is anticipated that accurate quantification and modeling of the skull transmission effects would ultimately allow for skull aberration correction in a broad variety of applications employing transcranial detection or transmission of high frequency ultrasound.
NASA Astrophysics Data System (ADS)
Bonazzi, Enrico; Colombini, Elena; Panari, Davide; Vergnano, Alberto; Leali, Francesco; Veronesi, Paolo
2017-01-01
The integration of experiments with numerical simulations can efficiently support a quick evaluation of a welded joint. In this work, the MIG welding of an aluminum T-joint thin plate has been studied through the integration of both simulation and experiments. The aims of the paper are to enlarge the global database, to promote the use of thin aluminum sheets in automotive body industries and to provide new data. Since the welding of thin aluminum plates is difficult to control due to the high speed of the heat source and the high heat flows during heating and cooling, a simulation model can be considered an effective design tool to predict the real phenomena. This integrated approach enables new evaluation possibilities for MIG-welded thin aluminum T-joints, such as the correspondence between the extension of the microstructural zones and the simulation parameters, material hardness, transient 3D temperature distribution on the surface and inside the material, stresses, strains, and deformations. The results of the mechanical simulations are comparable with the experimental measurements along the welding path, especially considering the variability of the process. The simulations could predict the welding-induced distortion well; together with local heating during welding, this distortion must be anticipated and subsequently minimized and counterbalanced.
NASA Astrophysics Data System (ADS)
Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin
2018-04-01
This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three-phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results from previous studies and experiments were used for the validation. The results showed that the correlation coefficients for the simulated and experimental water contents at different soil depths were between 0.83 and 0.92, and those for the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. With these high accuracies, the developed model can be used with confidence to predict the water contents at different soil depths and temperatures.
Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.
Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan
2013-01-01
In this chapter, an overview of experimental designs for developing chiral capillary electrophoresis (CE) and capillary electrochromatography (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, two phases can often be distinguished, i.e., a screening phase and an optimization phase. In method validation, the method is evaluated for its fitness for purpose; one validation item that also applies experimental designs is robustness testing. In the screening phase and in robustness testing, screening designs are applied, while during the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated with examples of chiral CE and CEC methods.
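The screening-design idea mentioned above can be made concrete with the simplest case: a two-level full factorial design, which enumerates every low/high combination of the method factors. The factor names below (pH, selector concentration, voltage) are invented examples of typical chiral CE variables, not taken from any cited method.

```python
# Hedged illustration of a two-level full factorial screening design:
# 2**k runs for k factors. Factor names and levels are invented examples.
from itertools import product

def full_factorial(factors):
    """factors: {name: (low, high)} -> list of run dicts, 2**k runs."""
    names = list(factors)
    runs = []
    for levels in product(*[factors[n] for n in names]):
        runs.append(dict(zip(names, levels)))   # one experimental run
    return runs

design = full_factorial({"pH": (2.5, 3.5),
                         "selector_mM": (5, 15),
                         "voltage_kV": (15, 25)})
print(len(design))  # 8 runs for 3 factors at 2 levels
```

In practice, screening uses fractional factorial or Plackett-Burman designs to cut the run count when many factors are screened; the full factorial above is just the easiest design to enumerate.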
Olondo, C; Legarda, F; Herranz, M; Idoeta, R
2017-04-01
This paper presents the procedure used to validate the migration equation and the migration parameter values presented in a previous paper (Legarda et al., 2011) regarding the migration of 137Cs in Spanish mainland soils. In this paper, this model validation has been carried out by checking experimentally obtained activity concentration values against those predicted by the model. The experimental data come from the measured vertical activity profiles of 8 new sampling points located in northern Spain. Before testing the predicted values of the model, their uncertainty has been assessed with an appropriate uncertainty analysis. Once the uncertainty of the model was established, the experimental and model-predicted activity concentration values were compared. Model validation has been performed by analyzing the model's accuracy, both as a whole and at different depth intervals. As a result, this model has been validated as a tool to predict 137Cs behaviour in a Mediterranean environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
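The "simulated experimental errors" above amount to randomizing the recorded activities of a chosen fraction of the modeling-set compounds. One plausible reading of that procedure, sketched in Python (the randomization scheme, function names, and data are assumptions for illustration, not the authors' code):

```python
import random

def inject_label_noise(activities, error_ratio, rng):
    """Simulate experimental errors by replacing the activity of a random
    fraction of compounds with a value drawn uniformly from the observed
    activity range (one plausible reading of the study's procedure)."""
    noisy = list(activities)
    n_bad = int(round(error_ratio * len(noisy)))
    bad_idx = rng.sample(range(len(noisy)), n_bad)
    lo, hi = min(activities), max(activities)
    for i in bad_idx:
        noisy[i] = rng.uniform(lo, hi)  # randomized "questionable" activity
    return noisy, set(bad_idx)

rng = random.Random(0)
clean = [float(i) for i in range(100)]           # toy continuous endpoint
noisy, bad = inject_label_noise(clean, 0.2, rng)  # 20% simulated errors
```

Rebuilding QSAR models on `noisy` and comparing fivefold cross-validation statistics against the clean baseline reproduces the kind of performance-versus-error-ratio curve the study examines.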
NASA Astrophysics Data System (ADS)
Nir, A.; Doughty, C.; Tsang, C. F.
Validation methods that were developed in the context of deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure.
There is no attempt to validate a specific model; instead, several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26] that different constituencies have different objectives for the validation process, and that their acceptance criteria therefore differ as well.
Selecting and Improving Quasi-Experimental Designs in Effectiveness and Implementation Research.
Handley, Margaret A; Lyles, Courtney R; McCulloch, Charles; Cattamanchi, Adithya
2018-04-01
Interventional researchers face many design challenges when assessing intervention implementation in real-world settings. Intervention implementation requires holding fast on internal validity needs while incorporating external validity considerations (such as uptake by diverse subpopulations, acceptability, cost, and sustainability). Quasi-experimental designs (QEDs) are increasingly employed to achieve a balance between internal and external validity. Although these designs are often referred to and summarized in terms of logistical benefits, there is still uncertainty about (a) selecting from among various QEDs and (b) developing strategies to strengthen the internal and external validity of QEDs. We focus here on commonly used QEDs (prepost designs with nonequivalent control groups, interrupted time series, and stepped-wedge designs) and discuss several variants that maximize internal and external validity at the design, execution and implementation, and analysis stages.
A Systematic Method for Verification and Validation of Gyrokinetic Microstability Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bravenec, Ronald
My original proposal for the period Feb. 15, 2014 through Feb. 14, 2017 called for an integrated validation and verification effort carried out by myself with collaborators. The validation component would require experimental profile and power-balance analysis. In addition, it would require running the gyrokinetic codes varying the input profiles within experimental uncertainties to seek agreement with experiment before discounting a code as invalidated. Therefore, validation would require a major increase of effort over my previous grant periods, which covered only code verification (code benchmarking). Consequently, I had requested full-time funding. Instead, I am being funded at somewhat less than half time (5 calendar months per year). As a consequence, I decided to forego the validation component and to only continue the verification efforts.
ERIC Educational Resources Information Center
Makransky, Guido; Havmose, Philip; Vang, Maria Louison; Andersen, Tonny Elmose; Nielsen, Tine
2017-01-01
The aim of this study was to evaluate the predictive validity of a two-step admissions procedure that included a cognitive ability test followed by multiple mini-interviews (MMIs) used to assess non-cognitive skills, compared to grade-based admissions relative to subsequent drop-out rates and academic achievement after one and two years of study.…
2008-09-01
performance criteria including passing/failing training, training grades, class rank (Carretta & Ree, 2003; Olea & Ree, 1994), and several non... are consistent with prior validations of the AFOQT versus academic performance criteria in pilot (Carretta & Ree, 1995; Olea & Ree, 1994; Ree, Carretta, & Teachout, 1995) and navigator (Olea & Ree, 1994) training. Subsequent analyses took three different approaches to examine the
Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reboud, C.; Premel, D.; Lesselier, D.
2007-03-21
Eddy current testing (ECT) is widely used in iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. The achievement of experimental validations led us to the integration of these models into the CIVA platform. Modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.
Chaudhuri, Santanu; Graetz, Jason; Ignatov, Alex; Reilly, James J; Muckerman, James T
2006-09-06
We report the results of an experimental and theoretical study of hydrogen storage in sodium alanate (NaAlH4). Reversible hydrogen storage in this material is dependent on the presence of 2-4% Ti dopant. Our combined study shows that the role of Ti may be linked entirely to Ti-containing active catalytic sites in the metallic Al phase present in the dehydrogenated NaAlH4. The EXAFS data presented here show that dehydrogenated samples contain a highly disordered distribution of Ti-Al distances with no long-range order beyond the second coordination sphere. We have used density functional theory techniques to calculate the chemical potential of possible Ti arrangements on an Al(001) surface for Ti coverages ranging from 0.125 to 0.5 monolayer (ML) and have identified those that can chemisorb molecular hydrogen via spontaneous or only moderately activated pathways. The chemisorption process exhibits a characteristic nodal symmetry property for the low-barrier sites: the incipient doped surface-H2 adduct's highest occupied molecular orbital (HOMO) incorporates the sigma antibonding molecular orbital of hydrogen, allowing the transfer of charge density from the surface to dissociate the molecular hydrogen. This work also proposes a plausible mechanism for the transport of an aluminum hydride species back into the NaH lattice that is supported by Car-Parrinello molecular dynamics (CPMD) simulations of the stability and mobility of aluminum clusters (alanes) on Al(001). As an experimental validation of the proposed role of titanium and the subsequent diffusion of alanes, we demonstrate experimentally that AlH3 reacts with NaH to form NaAlH4 without any requirement of a catalyst or hydrogen overpressure.
Discriminative Learning of Receptive Fields from Responses to Non-Gaussian Stimulus Ensembles
Meyer, Arne F.; Diepenbrock, Jan-Philipp; Happel, Max F. K.; Ohl, Frank W.; Anemüller, Jörn
2014-01-01
Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. 
Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design. PMID:24699631
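To illustrate the classification-based idea, though not the authors' actual large-margin implementation, here is a toy Python sketch in which a simple perceptron is trained to separate spike-eliciting from non-spike-eliciting stimuli, and its weight vector is read out as the receptive-field estimate. The ground-truth filter and stimuli are invented:

```python
import random

def train_perceptron(stimuli, spikes, epochs=20, lr=0.1):
    """Linear classifier whose learned weights serve as the receptive-field
    estimate (a perceptron stand-in for CbRF's large-margin classifier)."""
    d = len(stimuli[0])
    w = [0.0] * d
    for _ in range(epochs):
        for x, y in zip(stimuli, spikes):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if pred != y:
                step = lr if y == 1 else -lr
                w = [wi + step * xi for wi, xi in zip(w, x)]
    return w

rng = random.Random(1)
true_rf = [1.0, -0.5, 0.25, -0.125]  # hypothetical ground-truth filter
stimuli = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(200)]
# Spike whenever the true filter's response to the stimulus is positive
spikes = [1 if sum(f * s for f, s in zip(true_rf, x)) > 0 else 0 for x in stimuli]
w_hat = train_perceptron(stimuli, spikes)
```

On this separable toy problem the learned weights align with the generating filter, which is the property the CbRF method exploits on real spike data.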
Experimental validation of the DARWIN2.3 package for fuel cycle applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
San-Felice, L.; Eschbach, R.; Bourdot, P.
2012-07-01
The DARWIN package, developed by the CEA and its French partners (AREVA and EDF), provides the required parameters for fuel cycle applications: fuel inventory, decay heat, activity, neutron, γ, α, β sources and spectra, and radiotoxicity. This paper presents the DARWIN2.3 experimental validation for fuel inventory and decay heat calculations on Pressurized Water Reactors (PWR). In order to validate this code system for spent fuel inventory, a large program has been undertaken, based on spent fuel chemical assays. This paper deals with the experimental validation of DARWIN2.3 for the Pressurized Water Reactor (PWR) Uranium Oxide (UOX) and Mixed Oxide (MOX) fuel inventory calculation, focused on the isotopes involved in Burn-Up Credit (BUC) applications and decay heat computations. The calculation-experiment (C/E-1) discrepancies are calculated with the latest European evaluation file JEFF-3.1.1 associated with the SHEM energy mesh. An overview of the tendencies is obtained on a complete burn-up range from 10 to 85 GWd/t (10 to 60 GWd/t for MOX fuel). The experimental validation of the DARWIN2.3 package for decay heat calculation is performed using calorimetric measurements carried out at the Swedish Interim Spent Fuel Storage Facility for Pressurized Water Reactor (PWR) assemblies, covering a large burn-up (20 to 50 GWd/t) and cooling time range (10 to 30 years). (authors)
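The (C/E-1) notation used in such depletion-code validation work is the ratio of the calculated to the experimental value, minus one. A trivial Python helper makes the convention explicit; the inventory values are invented for illustration:

```python
def ce_discrepancy(calculated, experimental):
    """Calculation-to-experiment discrepancy (C/E - 1): zero means perfect
    agreement; +0.05 means the code overpredicts by 5%."""
    return calculated / experimental - 1.0

# Hypothetical nuclide inventory: calculated 1.02 units vs. measured 1.00
disc = ce_discrepancy(1.02, 1.00)
```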
Viability of Cross-Flow Fan with Helical Blades for Vertical Take-off and Landing Aircraft
2012-09-01
Using computational fluid dynamics (CFD) software, ANSYS-CFX, a three-dimensional (3-D) straight-bladed model was validated against a previous study's experimental results...
2011-09-01
a quality evaluation with limited data, a model-based assessment must be... that affect system performance, a multistage approach to system validation, a modeling and experimental methodology for efficiently addressing a wide range
Valid and Reliable Science Content Assessments for Science Teachers
NASA Astrophysics Data System (ADS)
Tretter, Thomas R.; Brown, Sherri L.; Bush, William S.; Saderholm, Jon C.; Holmes, Vicki-Lynn
2013-03-01
Science teachers' content knowledge is an important influence on student learning, highlighting an ongoing need for programs, and assessments of those programs, designed to support teacher learning of science. Valid and reliable assessments of teacher science knowledge are needed for direct measurement of this crucial variable. This paper describes multiple sources of validity and reliability (Cronbach's alpha greater than 0.8) evidence for physical, life, and earth/space science assessments—part of the Diagnostic Teacher Assessments of Mathematics and Science (DTAMS) project. Validity was strengthened by systematic synthesis of relevant documents, extensive use of external reviewers, and field tests with 900 teachers during assessment development process. Subsequent results from 4,400 teachers, analyzed with Rasch IRT modeling techniques, offer construct and concurrent validity evidence.
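Cronbach's alpha, the reliability statistic cited above (greater than 0.8), is computed from the per-item score variances and the variance of respondents' total scores. A minimal Python sketch with invented toy scores, not DTAMS data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per test item, aligned over the same
    respondents. Returns the internal-consistency coefficient alpha."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Three highly consistent items over five respondents
alpha = cronbach_alpha([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 4]])
```

Values above 0.8, as reported for the DTAMS assessments, indicate that the items measure a common construct consistently.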
NASA Astrophysics Data System (ADS)
Rizzo, Axel; Vaglio-Gaudard, Claire; Martin, Julie-Fiona; Noguère, Gilles; Eschbach, Romain
2017-09-01
DARWIN2.3 is the reference package used for fuel cycle applications in France. It solves the Boltzmann and Bateman equations in a coupled manner, with the European JEFF-3.1.1 nuclear data library, to compute the fuel cycle values of interest. It includes both the deterministic transport codes APOLLO2 (for light water reactors) and ERANOS2 (for fast reactors), and the DARWIN/PEPIN2 depletion code, each of them developed by CEA/DEN with the support of its industrial partners. The DARWIN2.3 package has been experimentally validated for pressurized and boiling water reactors, as well as for sodium fast reactors; this experimental validation relies on the analysis of post-irradiation experiments (PIE). The DARWIN2.3 experimental validation work points out some isotopes for which the depleted concentration calculation can be improved. Some other nuclides have no available experimental validation, and their concentration calculation uncertainty is provided by the propagation of a priori nuclear data uncertainties. This paper describes the work plan of studies initiated this year to improve the accuracy of the DARWIN2.3 depleted material balance calculation for some nuclides of interest for the fuel cycle.
NASA Astrophysics Data System (ADS)
Hemmer, H.; Grong, Ø.; Klokkehaug, S.
2000-03-01
In the present investigation, a process model for electron beam (EB) welding of different grades of duplex stainless steels (i.e., SAF 2205 and 2507) has been developed. A number of attractive features are built into the original finite element code, including (1) a separate module for prediction of the penetration depth and the distribution of the heat source into the plate, (2) adaptive refinement of the three-dimensional (3-D) element mesh for quick and reliable solution of the differential heat flow equation, and (3) special subroutines for calculation of the heat-affected zone (HAZ) microstructure evolution. The process model has been validated by comparison with experimental data obtained from in situ thermocouple measurements and optical microscope examinations. Subsequently, its suitability for alloy design and for optimization of welding conditions for duplex stainless steels is illustrated in different numerical examples and case studies pertaining to EB welding of tubular joints.
Faure, Emmanuel; Savy, Thierry; Rizzi, Barbara; Melani, Camilo; Stašová, Olga; Fabrèges, Dimitri; Špir, Róbert; Hammons, Mark; Čúnderlík, Róbert; Recher, Gaëlle; Lombardot, Benoît; Duloquin, Louise; Colin, Ingrid; Kollár, Jozef; Desnoulez, Sophie; Affaticati, Pierre; Maury, Benoît; Boyreau, Adeline; Nief, Jean-Yves; Calvat, Pascal; Vernier, Philippe; Frain, Monique; Lutfalla, Georges; Kergosien, Yannick; Suret, Pierre; Remešíková, Mariana; Doursat, René; Sarti, Alessandro; Mikula, Karol; Peyriéras, Nadine; Bourgine, Paul
2016-01-01
The quantitative and systematic analysis of embryonic cell dynamics from in vivo 3D+time image data sets is a major challenge at the forefront of developmental biology. Despite recent breakthroughs in the microscopy imaging of living systems, producing an accurate cell lineage tree for any developing organism remains a difficult task. We present here the BioEmergences workflow integrating all reconstruction steps from image acquisition and processing to the interactive visualization of reconstructed data. Original mathematical methods and algorithms underlie image filtering, nucleus centre detection, nucleus and membrane segmentation, and cell tracking. They are demonstrated on zebrafish, ascidian and sea urchin embryos with stained nuclei and membranes. Subsequent validation and annotations are carried out using Mov-IT, a custom-made graphical interface. Compared with eight other software tools, our workflow achieved the best lineage score. Delivered in standalone or web service mode, BioEmergences and Mov-IT offer a unique set of tools for in silico experimental embryology. PMID:26912388
Matsuzaki, Ryosuke; Tachikawa, Takeshi; Ishizuka, Junya
2018-03-01
Accurate simulations of carbon fiber-reinforced plastic (CFRP) molding are vital for the development of high-quality products. However, such simulations are challenging and previous attempts to improve the accuracy of simulations by incorporating the data acquired from mold monitoring have not been completely successful. Therefore, in the present study, we developed a method to accurately predict various CFRP thermoset molding characteristics based on data assimilation, a process that combines theoretical and experimental values. The degree of cure as well as temperature and thermal conductivity distributions during the molding process were estimated using both temperature data and numerical simulations. An initial numerical experiment demonstrated that the internal mold state could be determined solely from the surface temperature values. A subsequent numerical experiment to validate this method showed that estimations based on surface temperatures were highly accurate in the case of degree of cure and internal temperature, although predictions of thermal conductivity were more difficult.
Identifying Interactions that Determine Fragment Binding at Protein Hotspots.
Radoux, Chris J; Olsson, Tjelvar S G; Pitt, Will R; Groom, Colin R; Blundell, Tom L
2016-05-12
Locating a ligand-binding site is an important first step in structure-guided drug discovery, but current methods do little to suggest which interactions within a pocket are the most important for binding. Here we illustrate a method that samples atomic hotspots with simple molecular probes to produce fragment hotspot maps. These maps specifically highlight fragment-binding sites and their corresponding pharmacophores. For ligand-bound structures, they provide an intuitive visual guide within the binding site, directing medicinal chemists where to grow the molecule and alerting them to suboptimal interactions within the original hit. The fragment hotspot map calculation is validated using experimental binding positions of 21 fragments and subsequent lead molecules. The ligands are found in high scoring areas of the fragment hotspot maps, with fragment atoms having a median percentage rank of 97%. Protein kinase B and pantothenate synthetase are examined in detail. In each case, the fragment hotspot maps are able to rationalize a Free-Wilson analysis of SAR data from a fragment-based drug design project.
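The 97% figure above is a median percentage rank of fragment-atom scores within the hotspot map's score distribution. As an illustration of the statistic itself (the scoring grid and atom scores below are invented, and the paper's exact definition may differ):

```python
from statistics import median

def percentage_rank(score, map_scores):
    """Percentage of map grid points scoring below the given value."""
    below = sum(1 for s in map_scores if s < score)
    return 100.0 * below / len(map_scores)

grid = list(range(1, 101))   # toy hotspot-map score grid
atom_scores = [96, 98, 99]   # toy fragment-atom scores
ranks = [percentage_rank(a, grid) for a in atom_scores]
med = median(ranks)
```

A high median rank means the fragment atoms sit in the best-scoring regions of the map, which is the validation criterion the study reports.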
NASA Technical Reports Server (NTRS)
Murthy, P. L. N.; Chamis, C. C.
1992-01-01
A generic unit cell model which includes a unique fiber substructuring concept is proposed for the development of micromechanics equations for continuous-fiber-reinforced ceramic composites. The unit cell consists of three constituents: fiber, matrix, and an interphase. In the present approach, the unit cell is further subdivided into several slices, and the equations of micromechanics are derived for each slice. These are subsequently integrated to obtain ply-level properties. A stand-alone computer code containing the micromechanics model as a module is currently being developed specifically for the analysis of ceramic matrix composites. Towards this development, equivalent ply property results for a SiC/Ti-15-3 composite with a 0.5 fiber volume ratio are presented and compared with those obtained from customary micromechanics models to illustrate the concept. Also, comparisons with limited experimental data for the ceramic matrix composite SiC/RBSN (Reaction Bonded Silicon Nitride) with a 0.3 fiber volume ratio are given to validate the concepts.
Modelling Sawing of Metal Tubes Through FEM Simulation
NASA Astrophysics Data System (ADS)
Bort, C. M. Giorgio; Bosetti, P.; Bruschi, S.
2011-05-01
The paper presents the development of a numerical model of the sawing process of AISI 304 thin tubes, which are cut by a circular blade with alternating roughing and finishing teeth. The numerical simulation environment is the three-dimensional FEM software Deform™ v.10.1. The actual tooth trajectories were determined by a blade kinematics analysis developed in Matlab™. Due to the manufacturing rolling steps and the subsequent welding stage, the tube material is characterized by a gradient of properties along its thickness. Consequently, a simplified cutting test was set up and carried out in order to identify the values of the relevant material parameters to be used in the numerical model. The dedicated test was the Orthogonal Tube Cutting (OTC) test, which was performed on an instrumented lathe. The proposed numerical model was validated by comparing numerical results and experimental data obtained from sawing tests carried out on an industrial machine. The following outputs were compared: the cutting force, the chip thickness, and the chip contact area.
Quantitative subsurface analysis using frequency modulated thermal wave imaging
NASA Astrophysics Data System (ADS)
Subhani, S. K.; Suresh, B.; Ghali, V. S.
2018-01-01
Quantitative estimation of the depth of a subsurface anomaly with enhanced depth resolution is a challenging task in thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the resulting thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing separate the frequencies with limited frequency resolution and therefore yield a limited depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which can further improve the depth resolution to axially resolve the finest subsurface features. Quantitative depth analysis with this augmented depth resolution is proposed to provide the closest possible estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a novel solution for quantitative depth estimation in frequency modulated thermal wave imaging.
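The spectral zooming mentioned above relies on the chirp z-transform, which evaluates the z-transform on an arbitrary spiral contour and so permits a dense frequency grid over a narrow band. A direct O(NM) Python sketch follows; Bluestein's FFT-based algorithm is the fast equivalent, and the parameters here are illustrative:

```python
import cmath

def czt(x, m, w, a):
    """Chirp z-transform by direct evaluation: X_k = sum_n x[n] * z_k**(-n)
    at the m contour points z_k = a * w**(-k)."""
    out = []
    for k in range(m):
        zk = a * w ** (-k)
        out.append(sum(xn * zk ** (-n) for n, xn in enumerate(x)))
    return out

# With a = 1 and w = exp(-2*pi*j/N) the CZT reduces to the ordinary DFT;
# choosing w with a smaller angular step zooms into a narrow frequency band.
x = [1.0, 2.0, 3.0, 4.0]
X = czt(x, 4, cmath.exp(-2j * cmath.pi / 4), 1.0)
```

Shrinking the angular step of `w` concentrates all m output bins inside a chosen band, which is what yields the enhanced frequency (and hence depth) resolution.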
An interactive web-based application for Comprehensive Analysis of RNAi-screen Data.
Dutta, Bhaskar; Azhir, Alaleh; Merino, Louis-Henri; Guo, Yongjian; Revanur, Swetha; Madhamshettiwar, Piyush B; Germain, Ronald N; Smith, Jennifer A; Simpson, Kaylene J; Martin, Scott E; Buehler, Eugen; Fraser, Iain D C
2016-02-23
RNAi screens are widely used in functional genomics. Although the screen data can be susceptible to a number of experimental biases, many of these can be corrected by computational analysis. For this purpose, here we have developed a web-based platform for integrated analysis and visualization of RNAi screen data named CARD (for Comprehensive Analysis of RNAi Data; available at https://card.niaid.nih.gov). CARD allows the user to seamlessly carry out sequential steps in a rigorous data analysis workflow, including normalization, off-target analysis, integration of gene expression data, optimal thresholds for hit selection and network/pathway analysis. To evaluate the utility of CARD, we describe the analysis of three genome-scale siRNA screens and demonstrate: (i) a significant increase both in selection of subsequently validated hits and in rejection of false positives, (ii) an increased overlap of hits from independent screens of the same biology and (iii) insight into microRNA (miRNA) activity based on siRNA seed enrichment.
NASA Astrophysics Data System (ADS)
Leonard, T.; Spence, S.; Early, J.; Filsinger, D.
2013-12-01
Mixed flow turbines represent a potential solution to the increasing requirement for high pressure, low velocity ratio operation in turbocharger applications. While literature exists on the use of these turbines at such operating conditions, there is a lack of detailed design guidance for defining the basic geometry of the turbine, in particular the cone angle, the angle at which the inlet of the mixed flow turbine is inclined to the axis. This study investigates the effect and interaction of such mixed flow turbine design parameters. Computational Fluid Dynamics (CFD) was initially used to investigate the performance of a modern radial turbine to create a baseline for subsequent mixed flow designs. Existing experimental data were used to validate this model. Using the CFD model, a number of mixed flow turbine designs were investigated, including studies varying the cone angle and the associated inlet blade angle. The results of this analysis provide insight into the performance of a mixed flow turbine with respect to cone and inlet blade angle.
Rezapour, Ehsan; Pettersen, Kristin Y; Liljebäck, Pål; Gravdahl, Jan T; Kelasidi, Eleni
This paper considers path following control of planar snake robots using virtual holonomic constraints. In order to present a model-based path following control design for the snake robot, we first derive the Euler-Lagrange equations of motion of the system. Subsequently, we define geometric relations among the generalized coordinates of the system, using the method of virtual holonomic constraints. These appropriately defined constraints shape the geometry of a constraint manifold for the system, which is a submanifold of the configuration space of the robot. Furthermore, we show that the constraint manifold can be made invariant by a suitable choice of feedback. In particular, we analytically design a smooth feedback control law to exponentially stabilize the constraint manifold. We show that enforcing the appropriately defined virtual holonomic constraints for the configuration variables implies that the robot converges to and follows a desired geometric path. Numerical simulations and experimental results are presented to validate the theoretical approach.
Off-axis digital holographic microscopy with LED illumination based on polarization filtering.
Guo, Rongli; Yao, Baoli; Gao, Peng; Min, Junwei; Zhou, Meiling; Han, Jun; Yu, Xun; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan; Ye, Tong
2013-12-01
A reflection mode digital holographic microscope with light emitting diode (LED) illumination and off-axis interferometry is proposed. The setup is comprised of a Linnik interferometer and a grating-based 4f imaging unit. Both object and reference waves travel coaxially and are split into multiple diffraction orders in the Fourier plane by the grating. The zeroth and first orders are filtered by a polarizing array to select orthogonally polarized object and reference waves. Subsequently, the object and reference waves are recombined in the output plane of the 4f system, and the hologram, with uniform contrast over the entire field of view, can be acquired with the aid of a polarizer. The one-shot nature of the off-axis configuration enables an interferometric recording time on a millisecond scale. The validity of the proposed setup is illustrated by imaging nanostructured substrates, and the experimental results demonstrate that the phase noise is reduced by 68% compared to a He-Ne laser-based result.
Inhomogeneous Poisson process rate function inference from dead-time limited observations.
Verma, Gunjan; Drost, Robert J
2017-05-01
The estimation of an inhomogeneous Poisson process (IHPP) rate function from a set of process observations is an important problem arising in optical communications and a variety of other applications. However, because of practical limitations of detector technology, one is often only able to observe a corrupted version of the original process. In this paper, we consider how inference of the rate function is affected by dead time, a period of time after the detection of an event during which a sensor is insensitive to subsequent IHPP events. We propose a flexible nonparametric Bayesian approach to infer an IHPP rate function given dead-time limited process realizations. Simulation results illustrate the effectiveness of our inference approach and suggest its ability to extend the utility of existing sensor technology by permitting more accurate inference on signals whose observations are dead-time limited. We apply our inference algorithm to experimentally collected optical communications data, demonstrating the practical utility of our approach in the context of channel modeling and validation.
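For intuition, dead-time censoring of an IHPP can be sketched as follows. This is a minimal simulation using Lewis-Shedler thinning, not the authors' inference code; the rate function and dead-time value are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ihpp(rate, rate_max, t_end, rng):
    """Lewis-Shedler thinning: sample an IHPP with intensity rate(t) on [0, t_end]."""
    n = rng.poisson(rate_max * t_end)              # candidate events at the bounding rate
    cand = np.sort(rng.uniform(0.0, t_end, n))
    keep = rng.uniform(0.0, rate_max, n) < rate(cand)   # accept with prob rate(t)/rate_max
    return cand[keep]

def apply_dead_time(events, tau):
    """Censor events arriving within tau of the last *detected* event."""
    detected, last = [], -np.inf
    for t in events:
        if t - last >= tau:
            detected.append(t)
            last = t
    return np.asarray(detected)

rate = lambda t: 50.0 * (1.0 + np.sin(t))          # illustrative, invented rate function
events = sample_ihpp(rate, rate_max=100.0, t_end=10.0, rng=rng)
tau = 0.05                                         # illustrative dead time, seconds
detected = apply_dead_time(events, tau)
```

Comparing `events` and `detected` shows how dead time preferentially thins the high-rate intervals, which is exactly the bias the paper's Bayesian inference must undo.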
Gao, Li; Duan, Dan-Dan; Zhang, Jian-Qin; Zhou, Yu-Zhi; Qin, Xue-Mei; Du, Guan-Hua
2016-03-15
Aging is one of the most complicated phenomena and is the main risk factor for age-related diseases. Based on public aging-related gene data, we propose a computational approach to predict the antiaging activities of compounds. This approach integrates network pharmacology and target fishing methods with the aim of identifying a potential antiaging compound from Scutellaria baicalensis Georgi. Utilizing this approach and subsequent experimental validation, it was found that baicalein at concentrations of 0.04, 0.2, and 1 mg/mL extended the mean, median, and maximum life spans in Drosophila melanogaster. In particular, 0.2 mg/mL baicalein extended the mean and median life spans in male flies by 19.80% and 25.64%, respectively. Meanwhile, it was discovered that baicalein improved fertility in flies. Baicalein likely exerts its antiaging effects through attenuating oxidative stress, including increases in CAT activity and GSH level and a decrease in GSSG level.
A Robust Inner and Outer Loop Control Method for Trajectory Tracking of a Quadrotor
Xia, Dunzhu; Cheng, Limei; Yao, Yanhong
2017-01-01
In order to achieve complex trajectory tracking of a quadrotor, a geometric inner and outer loop control scheme is presented. The outer loop generates the desired rotation matrix for the inner loop. To improve the response speed and robustness, a geometric sliding mode control (SMC) controller is designed for the inner loop. The outer loop is also designed via SMC. By Lyapunov theory and cascade theory, closed-loop system stability is guaranteed. Next, the tracking performance is validated by tracking three representative trajectories. Then, the robustness of the proposed control method is illustrated by trajectory tracking in the presence of model uncertainty and disturbances. Subsequently, experiments are carried out to verify the method. In the experiment, ultra wideband (UWB) is used for indoor positioning, and an extended Kalman filter (EKF) fuses inertial measurement unit (IMU) and UWB measurements. The experimental results show the feasibility of the designed controller in practice, and comparative experiments with PD control demonstrate the robustness of the proposed control method. PMID:28925984
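The UWB/IMU fusion idea can be illustrated with a minimal one-dimensional linear Kalman filter. The paper uses a full EKF on the quadrotor state; all noise levels and the motion profile below are invented for the sketch.

```python
import numpy as np

# Minimal 1-D position/velocity Kalman filter fusing IMU acceleration
# (prediction input) with a UWB-derived position fix (measurement).
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (position, velocity)
B = np.array([0.5 * dt**2, dt])          # control input map for acceleration
H = np.array([[1.0, 0.0]])               # UWB observes position only
Q = 1e-4 * np.eye(2)                     # invented process noise
R = np.array([[0.01]])                   # invented UWB noise, ~10 cm std

rng = np.random.default_rng(0)
x_true = np.zeros(2)
x_est, P = np.zeros(2), np.eye(2)
for k in range(1000):                    # 10 s of motion
    a = 0.5                              # true constant acceleration
    x_true = F @ x_true + B * a
    a_meas = a + rng.normal(0.0, 0.05)   # noisy IMU reading
    # Predict with the IMU, then correct with the UWB fix
    x_est = F @ x_est + B * a_meas
    P = F @ P @ F.T + Q
    z = x_true[0] + rng.normal(0.0, 0.1)
    y = z - H @ x_est
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ y
    P = (np.eye(2) - K @ H) @ P
```

The estimate tracks both position and velocity despite the UWB fix measuring position alone, which is the essence of the fusion step used for indoor positioning.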
NASA Astrophysics Data System (ADS)
Harahap, S. A. A.; Nazar, A.; Yunita, M.; Pasaribu, RA; Panjaitan, F.; Yanuar, F.; Misran, E.
2018-02-01
Adsorption of β-carotene in crude palm oil (CPO) was studied using activated carbon produced from tea waste (ACTW) as an adsorbent. Isothermal studies were carried out at 60 °C with activated carbon to CPO ratios of 1:3, 1:4, 1:5, and 1:6. The ACTW showed excellent performance, with the percentage of β-carotene adsorbed from CPO exceeding 99%. The best percentage removal (R) of 99.61% was achieved at an ACTW to CPO ratio of 1:3. The Freundlich isotherm was the most appropriate model for this study. The combination of the Freundlich isotherm equation and the mass balance equation showed good agreement when validated against the experimental data. The combined equation was subsequently used to predict the removal efficiency under given sets of operating conditions: at a targeted R, the CPO volume can be estimated for a given initial β-carotene concentration C0 and mass M of ACTW adsorbent.
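The combined isotherm/mass-balance prediction described above can be sketched as a small root-finding problem. The Kf and n values below are invented placeholders, not the study's fitted Freundlich constants.

```python
# Predict the removal efficiency R by combining the Freundlich isotherm
# qe = Kf * Ce**(1/n) with the batch mass balance (C0 - Ce) * V = qe * M.
# Kf and n are illustrative placeholders, not the study's fitted values.

def equilibrium_removal(C0, V, M, Kf, n):
    """Solve (C0 - Ce)*V = M*Kf*Ce**(1/n) for Ce by bisection; return R = 1 - Ce/C0."""
    f = lambda Ce: (C0 - Ce) * V - M * Kf * Ce ** (1.0 / n)
    lo, hi = 1e-12, C0            # f(lo) > 0, f(hi) < 0, f is monotone decreasing
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    Ce = 0.5 * (lo + hi)
    return 1.0 - Ce / C0

# Example with invented inputs: 500 mg/L pigment, 0.1 L oil, 30 g adsorbent
R = equilibrium_removal(C0=500.0, V=0.1, M=30.0, Kf=5.0, n=2.0)
```

With these illustrative constants the predicted removal also lands above 99%, mirroring the qualitative behaviour reported in the abstract.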
Analysis of Protein Kinetics Using Fluorescence Recovery After Photobleaching (FRAP).
Giakoumakis, Nickolaos Nikiforos; Rapsomaniki, Maria Anna; Lygerou, Zoi
2017-01-01
Fluorescence recovery after photobleaching (FRAP) is a cutting-edge live-cell functional imaging technique that enables the exploration of protein dynamics in individual cells and thus permits the elucidation of protein mobility, function, and interactions at a single-cell level. During a typical FRAP experiment, fluorescent molecules in a defined region of interest within the cell are bleached by a short and powerful laser pulse, while the recovery of the fluorescence in the region is monitored over time by time-lapse microscopy. FRAP experimental setup and image acquisition involve a number of steps that need to be carefully executed to avoid technical artifacts. Equally important is the subsequent computational analysis of FRAP raw data, to derive quantitative information on protein diffusion and binding parameters. Here we present an integrated in vivo and in silico protocol for the analysis of protein kinetics using FRAP. We focus on the most commonly encountered challenges and technical or computational pitfalls and their troubleshooting so that valid and robust insight into protein dynamics within living cells is gained.
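The computational analysis step can be illustrated with a minimal recovery-curve fit: a single-exponential model applied to synthetic data. Real FRAP analysis often requires reaction-diffusion models, and all parameters here are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def frap_model(t, mobile_frac, k):
    """Single-exponential recovery, normalized so pre-bleach = 1 and post-bleach = 0."""
    return mobile_frac * (1.0 - np.exp(-k * t))

# Synthetic recovery curve with invented ground-truth parameters
rng = np.random.default_rng(1)
t = np.linspace(0.0, 60.0, 120)            # seconds after the bleach pulse
true_mf, true_k = 0.8, 0.15                # 80% mobile fraction, k in 1/s
y = frap_model(t, true_mf, true_k) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(frap_model, t, y, p0=[0.5, 0.05])
mf_hat, k_hat = popt
half_time = np.log(2.0) / k_hat            # recovery half-time from the rate
```

The fitted mobile fraction and half-time are the kind of quantitative outputs the protocol derives from raw FRAP intensities.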
An In Vitro Translation, Selection, and Amplification System for Peptide Nucleic Acids
Brudno, Yevgeny; Birnbaum, Michael E.; Kleiner, Ralph E.; Liu, David R.
2009-01-01
Methods to evolve synthetic, rather than biological, polymers could significantly expand the functional potential of polymers that emerge from in vitro evolution. Requirements for synthetic polymer evolution include: (i) sequence-specific polymerization of synthetic building blocks on an amplifiable template; (ii) display of the newly translated polymer strand in a manner that allows it to adopt folded structures; (iii) selection of synthetic polymer libraries for desired binding or catalytic properties; and (iv) amplification of template sequences surviving selection in a manner that allows subsequent translation. Here we report the development of such a system for peptide nucleic acids (PNAs) using a set of twelve PNA pentamer building blocks. We validated the system by performing six iterated cycles of translation, selection, and amplification on a library of 4.3 × 10⁸ PNA-encoding DNA templates and observed >1,000,000-fold overall enrichment of a template encoding a biotinylated (streptavidin-binding) PNA. These results collectively provide an experimental foundation for PNA evolution in the laboratory. PMID:20081830
The QSAR study of flavonoid-metal complexes scavenging the ·OH free radical
NASA Astrophysics Data System (ADS)
Wang, Bo-chu; Qian, Jun-zhen; Fan, Ying; Tan, Jun
2014-10-01
Flavonoid-metal complexes have antioxidant activities. However, the quantitative structure-activity relationships (QSAR) between flavonoid-metal complexes and their antioxidant activities have still not been tackled. On the basis of 21 structures of flavonoid-metal complexes and their antioxidant activities for scavenging the ·OH free radical, we optimised the structures using the Gaussian 03 software package and subsequently calculated and selected 18 quantum chemistry descriptors, such as dipole moment, charge, and energy. We then chose, by stepwise linear regression, the quantum chemistry descriptors most important to the IC50 of flavonoid-metal complexes for scavenging the ·OH free radical, and obtained 4 new variables through principal component analysis. Finally, we built QSAR models using an Artificial Neural Network (ANN), with the important quantum chemistry descriptors and the 4 new variables as independent variables and the IC50 as the dependent variable, and we validated the two models using experimental data. The results show that the two models in this paper are reliable and predictive.
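The descriptor-reduction and ANN modelling pipeline can be sketched as follows. Synthetic data stand in for the 21 complexes and 18 quantum chemistry descriptors, and scikit-learn's MLPRegressor stands in for the authors' ANN; nothing below reproduces the study's data or fitted models.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Invented stand-ins: 21 complexes x 18 descriptors generated from 4 latent
# factors, with an invented structure-activity relationship for the target.
Z = rng.normal(size=(21, 4))                              # latent factors
X = Z @ rng.normal(size=(4, 18)) + 0.05 * rng.normal(size=(21, 18))
y = Z @ np.array([1.0, -0.5, 0.3, 0.8]) + 0.05 * rng.normal(size=21)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=4),        # the "4 new variables" of the abstract
    MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                 max_iter=5000, random_state=0),
)
model.fit(X, y)
r2_train = model.score(X, y)    # training fit of the PCA + ANN pipeline
```

The pipeline mirrors the abstract's workflow (standardize descriptors, extract principal components, regress activity with a neural network), though real QSAR practice would also require external validation.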
Identification of consensus biomarkers for predicting non-genotoxic hepatocarcinogens
Huang, Shan-Han; Tung, Chun-Wei
2017-01-01
The assessment of non-genotoxic hepatocarcinogens (NGHCs) currently relies on two-year rodent bioassays. Toxicogenomics biomarkers provide a potential alternative method for the prioritization of NGHCs that could be useful for risk assessment. However, previous studies using inconsistently classified chemicals as the training set and a single microarray dataset identified no consensus biomarkers. In this study, 4 consensus biomarkers, A2m, Ca3, Cxcl1, and Cyp8b1, were identified from four large-scale microarray datasets of the one-day single maximum tolerated dose and a large set of chemicals without inconsistent classifications. Machine learning techniques were subsequently applied to develop prediction models for NGHCs. The final bagging decision tree models were constructed with an average AUC performance of 0.803 on an independent test. A set of 16 chemicals with controversial classifications were reclassified according to the consensus biomarkers. The developed prediction models and identified consensus biomarkers are expected to be potential alternative methods for prioritization of NGHCs for further experimental validation. PMID:28117354
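The bagging decision tree approach can be sketched with scikit-learn. Synthetic data stand in for the four biomarker features, and the resulting AUC says nothing about the study's 0.803 result.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Invented stand-in: 4 features playing the role of the consensus
# biomarkers (A2m, Ca3, Cxcl1, Cyp8b1); labels are synthetic.
X, y = make_classification(n_samples=300, n_features=4, n_informative=4,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Bagging over shallow decision trees, evaluated by AUC on a held-out split
clf = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                        n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

Bagging many shallow trees trades a little bias for a large variance reduction, which is why it is a common choice for small, noisy toxicogenomics feature sets.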
An interactive web-based application for Comprehensive Analysis of RNAi-screen Data
Dutta, Bhaskar; Azhir, Alaleh; Merino, Louis-Henri; Guo, Yongjian; Revanur, Swetha; Madhamshettiwar, Piyush B.; Germain, Ronald N.; Smith, Jennifer A.; Simpson, Kaylene J.; Martin, Scott E.; Beuhler, Eugen; Fraser, Iain D. C.
2016-01-01
RNAi screens are widely used in functional genomics. Although the screen data can be susceptible to a number of experimental biases, many of these can be corrected by computational analysis. For this purpose, here we have developed a web-based platform for integrated analysis and visualization of RNAi screen data named CARD (for Comprehensive Analysis of RNAi Data; available at https://card.niaid.nih.gov). CARD allows the user to seamlessly carry out sequential steps in a rigorous data analysis workflow, including normalization, off-target analysis, integration of gene expression data, optimal thresholds for hit selection and network/pathway analysis. To evaluate the utility of CARD, we describe the analysis of three genome-scale siRNA screens and demonstrate: (i) a significant increase both in selection of subsequently validated hits and in rejection of false positives, (ii) an increased overlap of hits from independent screens of the same biology and (iii) insight into microRNA (miRNA) activity based on siRNA seed enrichment. PMID:26902267
Park, Lora E
2007-04-01
Appearance-Based Rejection Sensitivity (Appearance-RS) is a personality-processing system characterized by anxious concerns and expectations about being rejected based on one's physical attractiveness. People differ in their sensitivity to rejection based on appearance, with consequences for mental and physical health, self-esteem, affect, and feelings of belonging. Study 1 describes the development and validation of the Appearance-RS scale, its relation to personality variables and to health-related outcomes. Study 2 provides experimental evidence that high Appearance-RS people feel more alone and rejected when asked to think about negative aspects of their appearance. Finally, Study 3 tests ways to reduce the negative effects of receiving an appearance threat among high Appearance-RS participants. Specifically, high Appearance-RS participants who engaged in self-affirmation (thought of their personal strengths) or received a secure attachment prime (thought of a close, caring relationship) were buffered from the negative effects of an appearance threat on subsequent state self-esteem and mood.
Torfs, Elena; Martí, M Carmen; Locatelli, Florent; Balemans, Sophie; Bürger, Raimund; Diehl, Stefan; Laurent, Julien; Vanrolleghem, Peter A; François, Pierre; Nopens, Ingmar
2017-02-01
A new perspective on the modelling of settling behaviour in water resource recovery facilities is introduced. The ultimate goal is to describe in a unified way the processes taking place in both primary settling tanks (PSTs) and secondary settling tanks (SSTs), enabling more detailed operation and control. First, experimental evidence is provided that points to distributed particle properties (such as size, shape, density, porosity, and flocculation state) as an important common source of distributed settling behaviour in different settling unit processes and throughout different settling regimes (discrete, hindered, and compression settling). Subsequently, a unified model framework that considers several particle classes is proposed in order to describe distributions in settling behaviour as well as the effect of variations in particle properties on the settling process. The result is a set of partial differential equations (PDEs) that are valid from dilute concentrations, where they correspond to discrete settling, to concentrated suspensions, where they correspond to compression settling. Consequently, these PDEs model both PSTs and SSTs.
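A one-class caricature of such a settling PDE can be solved with a simple conservative scheme. The snippet uses a Lax-Friedrichs discretization of a Vesilind-type hindered-settling flux; the parameters are invented, and the paper's multi-class framework with a compression term is not reproduced.

```python
import numpy as np

# One-class batch-settling PDE dC/dt + d f(C)/dz = 0 on a closed column,
# with a Vesilind-type hindered-settling flux f(C) = v0*C*exp(-rh*C).
v0, rh = 1.0e-3, 0.4            # m/s and m3/kg, illustrative values
nz, H = 100, 1.0                # grid cells, column depth in m (z downward)
dz = H / nz
C = np.full(nz, 3.0)            # uniform initial concentration, kg/m3
mass0 = C.sum() * dz

def f(C):
    return v0 * C * np.exp(-rh * C)

dt = 0.4 * dz / v0              # CFL-safe step: |f'(C)| <= v0 for this flux
for _ in range(500):
    Fi = f(C)
    # Lax-Friedrichs interface fluxes; zero flux at the closed top and bottom
    Fh = 0.5 * (Fi[:-1] + Fi[1:]) - 0.5 * (dz / dt) * (C[1:] - C[:-1])
    Fh = np.concatenate(([0.0], Fh, [0.0]))
    C = C - (dt / dz) * (Fh[1:] - Fh[:-1])
mass = C.sum() * dz
```

Because the scheme is conservative with closed boundaries, total mass is preserved to round-off while a concentrated layer builds at the bottom, the qualitative behaviour any settler model must reproduce.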
Research on dynamic creep strain and settlement prediction under the subway vibration loading.
Luo, Junhui; Miao, Linchang
2016-01-01
This research aims to explore the dynamic characteristics and settlement prediction of soft soil. Accordingly, a dynamic shear modulus formula accounting for the vibration frequency was utilized, and dynamic triaxial tests were conducted to verify the validity of the formula. Subsequently, the formula was applied to the dynamic creep strain function, and the factors influencing the improved dynamic creep strain curve of soft soil were analyzed. Meanwhile, the variation of dynamic stress with sampling depth was obtained through a finite element simulation of the subway foundation, and the improved dynamic creep strain curve of each soil layer was determined based on this dynamic stress. Thereafter, the long-term settlement under subway vibration loading could be estimated in accordance with the relevant norms. The results revealed that the dynamic shear modulus formula is straightforward and practical in its treatment of the vibration frequency. The values predicted using the improved dynamic creep strain formula were close to the experimental values, and the estimated settlement was close to the values measured in the field test.
Ai, Haiming; Wu, Shuicai; Gao, Hongjian; Zhao, Lei; Yang, Chunlan; Zeng, Yi
2012-01-01
The temperature distribution in the region near a microwave antenna is a critical factor that affects the entire temperature field during microwave ablation of tissue. It is challenging to predict this distribution precisely, because the temperature in the near-antenna region varies greatly. The effects of water vaporisation and subsequent tissue carbonisation in an ex vivo porcine liver were therefore studied experimentally and in simulations. The enthalpy and high-temperature specific absorption rate (SAR) of liver tissues were calculated and incorporated into the simulation process. The accuracy of the near-field temperature predictions in our simulations reached an average maximum error of less than 5 °C. In addition, a modified thermal model that accounts for water vaporisation and the change in the SAR distribution pattern is proposed and validated experimentally. The results from this study may be useful in the clinical practice of microwave ablation and can be applied to predict the temperature field in surgical planning.
Ràfols, Clara; Bosch, Elisabeth; Barbas, Rafael; Prohens, Rafel
2016-07-01
A study of the suitability of the chelation reaction of Ca(2+) with ethylenediaminetetraacetic acid (EDTA) as a validation standard for Isothermal Titration Calorimeter measurements has been performed, exploring the common experimental variables (buffer, pH, ionic strength, and temperature). Results obtained under a variety of experimental conditions have been corrected for the side reactions involved in the main process and for the experimental ionic strength and, finally, validated by comparison with the potentiometric reference values. It is demonstrated that the chelation reaction performed in 0.1 M acetate buffer at 25 °C gives accurate and precise results and is robust enough to be adopted as a standard calibration process.
Experimental validation of an ultrasonic flowmeter for unsteady flows
NASA Astrophysics Data System (ADS)
Leontidis, V.; Cuvier, C.; Caignaert, G.; Dupont, P.; Roussette, O.; Fammery, S.; Nivet, P.; Dazin, A.
2018-04-01
An ultrasonic flowmeter was developed for further applications in cryogenic conditions and for measuring flow rate fluctuations in the range of 0 to 70 Hz. The prototype was installed in a flow test rig and validated experimentally under both steady and unsteady water flow conditions. A Coriolis flowmeter was used for calibration under steady state conditions, whereas in the unsteady case the validation was done simultaneously against two methods: particle image velocimetry (PIV), and pressure transducers installed flush on the wall of the pipe. The results show that the developed flowmeter and the proposed methodology can accurately measure the frequency and amplitude of unsteady fluctuations over the experimental range of 0-9 l s⁻¹ mean flow rate and 0-70 Hz imposed disturbance frequency.
Flight Research and Validation Formerly Experimental Capabilities Supersonic Project
NASA Technical Reports Server (NTRS)
Banks, Daniel
2009-01-01
This slide presentation reviews the work of the Experimental Capabilities Supersonic project, which is being reorganized into Flight Research and Validation. The work of the Experimental Capabilities project in FY '09 is reviewed, and the specific centers assigned to the work are given. The portfolio of the newly formed Flight Research and Validation (FRV) group is also reviewed. The various FY '10 projects of the FRV are detailed. These projects include: Eagle Probe, Channeled Centerbody Inlet Experiment (CCIE), Supersonic Boundary Layer Transition test (SBLT), Aero-elastic Test Wing-2 (ATW-2), G-V External Vision Systems (G5 XVS), Air-to-Air Schlieren (A2A), In-Flight Background Oriented Schlieren (BOS), Dynamic Inertia Measurement Technique (DIM), and Advanced In-Flight IR Thermography (AIR-T).
Aerodynamic Database Development for Mars Smart Lander Vehicle Configurations
NASA Technical Reports Server (NTRS)
Bobskill, Glenn J.; Parikh, Paresh C.; Prabhu, Ramadas K.; Tyler, Erik D.
2002-01-01
An aerodynamic database has been generated for the Mars Smart Lander Shelf-All configuration using computational fluid dynamics (CFD) simulations. Three different CFD codes were used: USM3D and FELISA, based on unstructured grid technology, and LAURA, an established and validated structured CFD code. As part of this database development, the results for the Mars continuum were validated with experimental data and comparisons made where applicable. The validation of USM3D and LAURA with the Unitary experimental data, the use of intermediate LAURA check analyses, and the validation of FELISA with the Mach 6 CF₄ experimental data provided higher confidence in the ability of CFD to provide aerodynamic data for determining the static trim characteristics for longitudinal stability. The analyses of the noncontinuum regime showed the existence of multiple trim angles of attack that can be unstable or stable trim points. This information is needed to design a guidance controller for the entire trajectory.
Issues and approach to develop validated analysis tools for hypersonic flows: One perspective
NASA Technical Reports Server (NTRS)
Deiwert, George S.
1993-01-01
Critical issues concerning the modeling of low density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools, and the activity in the NASA Ames Research Center's Aerothermodynamics Branch is described. Inherent in the process is a strong synergism between ground test and real gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flowfield simulation codes are discussed. These models were partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse, and reliance must be placed on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high enthalpy flow facilities, such as shock tubes and ballistic ranges.
NASA Technical Reports Server (NTRS)
Baumeister, Joseph F.
1994-01-01
A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.
Issues and approach to develop validated analysis tools for hypersonic flows: One perspective
NASA Technical Reports Server (NTRS)
Deiwert, George S.
1992-01-01
Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools. A description of the activity in the Ames Research Center's Aerothermodynamics Branch is also given. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flow-field simulation codes are discussed. These models have been partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse; reliance must be placed on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.
NASA Astrophysics Data System (ADS)
Paul, A.; Reuther, F.; Neumann, S.; Albert, A.; Landgrebe, D.
2017-09-01
One field of work at the Fraunhofer Institute for Machine Tools and Forming Technology IWU in Chemnitz is applied industrial research into Hot Metal Gas Forming combined with press hardening in a single process step. In this paper the results of investigations on new press hardening steels from SSAB AB (Docol®1800 Bor and Docol®2000 Bor) are presented. Hot tensile tests recorded by the project partner (University of West Bohemia, Faculty of Mechanical Engineering) were used to create a material model for thermo-mechanical forming simulations. For this purpose the provided raw data were converted into flow curve approximations of the real stress-real strain curves for both materials and then integrated into an LS-DYNA simulation model of Hot Metal Gas Forming with all relevant boundary conditions and sub-stages. Preliminary experimental tests were carried out using a tool at room temperature to permit evaluation of the forming behaviour of Docol 1800 Bor and Docol 2000 Bor tubes as well as validation of the simulation model. Using this demonstrator geometry (outer diameter 57 mm, tube length 300 mm, wall thickness 1.5 mm), a series of tests with different furnace temperatures (from 870 °C to 1035 °C), maximum internal pressures (up to 67 MPa) and pressure build-up rates (up to 40 MPa/s) was performed to evaluate the formability of Docol 1800 Bor and Docol 2000 Bor. Selected demonstrator parts produced in this way were subsequently analysed by wall thickness and hardness measurements. The tests were carried out using the completely modernized Dunkes/AP&T HS3-1500 hydroforming press at the Fraunhofer IWU. In summary, a consistent simulation model including all relevant sub-stages was successfully set up in LS-DYNA. The computed results show a high correlation with the experimental data regarding the thinning behaviour. The Hot Metal Gas Forming of the demonstrator geometry was likewise successfully demonstrated.
Different hardness values could be achieved depending on the furnace temperature and the investigated material; hardness values of up to 620 HV were measured on components with a fully martensitic structure.
Testing the Validity of Local Flux Laws in an Experimental Eroding Landscape
NASA Astrophysics Data System (ADS)
Sweeney, K. E.; Roering, J. J.; Ellis, C.
2015-12-01
Linking sediment transport to landscape evolution is fundamental to interpreting climate and tectonic signals from topography and sedimentary deposits. Most geomorphic process laws consist of simple continuum relationships between sediment flux and local topography. However, recent work has shown that nonlocal formulations, whereby sediment flux depends on upslope conditions, are more accurate descriptions of sediment motion, particularly in steep topography. Discriminating between local and nonlocal processes in natural landscapes is complicated by the scarcity of high-resolution topographic data and by the difficulty of measuring sediment flux. To test the validity of local formulations of sediment transport, we use an experimental erosive landscape that combines disturbance-driven, diffusive sediment transport and surface runoff. We conducted our experiments in the eXperimental Landscape Model at St. Anthony Falls Laboratory, a 0.5 x 0.5 m test flume filled with crystalline silica (D50 = 30 μm) mixed with water to increase cohesion and preclude surface infiltration. Topography is measured with a sheet laser scanner; total sediment flux is tracked with a series of load cells. We simulate uplift (relative base-level fall) by dropping two parallel weirs at the edges of the experiment. Diffusive sediment transport in our experiments is driven by rainsplash from a constant-head drip tank fitted with 625 blunt needles of fixed diameter; sediment is mobilized both through drop impact and the subsequent runoff of the drops. To drive advective transport, we produce surface runoff via a ring of misters that produce droplets too small to disturb the sediment surface on impact.
Using the results from five experiments that systematically vary the time of drip box rainfall relative to misting rainfall, we calculate local erosion in our experiments by differencing successive time-slices of topography and test whether these patterns are related to local topographic metrics. By examining these patterns over different timescales, we are able to assess whether there is a signature of nonlocal transport in long-term topographic evolution or if, instead, local formulations are appropriate over timescales much greater than individual transport events.
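The differencing step described above can be sketched in a few lines; the grid values, time step, and uplift term below are hypothetical, not the experimental data:

```python
import numpy as np

def local_erosion_rate(z_old, z_new, dt, uplift_rate=0.0):
    """Estimate local erosion rate (positive = surface lowering) by
    differencing two gridded topography scans separated by time dt.
    Uplift (relative base-level fall) is added back so the result
    reflects erosion rather than net elevation change."""
    dz = np.asarray(z_new) - np.asarray(z_old)  # net elevation change
    return -(dz / dt) + uplift_rate             # erosion per unit time

# Two synthetic 3x3 "scans" (mm): surface lowers by 2 mm over 10 minutes
z0 = np.full((3, 3), 100.0)
z1 = z0 - 2.0
rate = local_erosion_rate(z0, z1, dt=10.0)
print(rate[0, 0])  # 0.2  (mm per minute)
```

In practice the differenced field would then be regressed against local topographic metrics (slope, curvature) computed from the same grids.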
NASA Astrophysics Data System (ADS)
Greiner, Nathan J.
Modern turbine engines require high turbine inlet temperatures and pressures to maximize thermal efficiency. Increasing the turbine inlet temperature drives higher heat loads on the turbine surfaces. In addition, increasing the pressure ratio raises the turbine coolant temperature, so the ability to remove heat decreases. As a result, highly effective external film cooling is required to reduce the heat transfer to turbine surfaces. Testing of film cooling on engine hardware at engine temperatures and pressures can be exceedingly difficult and expensive. Thus, modern studies of film cooling are often performed at near-ambient conditions. However, these studies are missing an important aspect in their characterization of film cooling effectiveness: they do not model the effect of thermal property variations that occur within the boundary and film cooling layers at engine conditions. Also, turbine surfaces can experience significant radiative heat transfer that is not trivial to estimate analytically. The present research first computationally examines the effect of large temperature variations on a turbulent boundary layer. Subsequently, a method to model the effect of large temperature variations within a turbulent boundary layer in an environment coupled with significant radiative heat transfer is proposed and experimentally validated. Next, a method to scale turbine cooling from ambient to engine conditions via non-dimensional matching is developed computationally and then experimentally validated at combustion temperatures. Increasing demands on engine efficiency and thrust-to-weight ratio have driven increased combustor fuel-air ratios. Increased fuel-air ratios increase the possibility of unburned fuel species entering the turbine. Alternatively, advanced ultra-compact combustor designs have been proposed to decrease combustor length, increase thrust, or generate power for directed energy weapons.
However, the ultra-compact combustor design requires a film cooled vane within the combustor. In both these environments, the unburned fuel in the core flow encounters the oxidizer rich film cooling stream, combusts, and can locally heat the turbine surface rather than the intended cooling of the surface. Accordingly, a method to quantify film cooling performance in a fuel rich environment is prescribed. Finally, a method to film cool in a fuel rich environment is experimentally demonstrated.
NASA Astrophysics Data System (ADS)
Mashayekhi, Somayeh; Miles, Paul; Hussaini, M. Yousuff; Oates, William S.
2018-02-01
In this paper, fractional and non-fractional viscoelastic models for elastomeric materials are derived and analyzed in comparison to experimental results. The viscoelastic models are derived by expanding thermodynamic balance equations for both fractal and non-fractal media. The order of the fractional time derivative is shown to strongly affect the accuracy of the viscoelastic constitutive predictions. Model validation uses experimental data describing viscoelasticity of the dielectric elastomer Very High Bond (VHB) 4910. Since these materials are known for their broad applications in smart structures, it is important to characterize and accurately predict their behavior across a large range of time scales. Whereas integer order viscoelastic models can yield reasonable agreement with data, the model parameters often lack robustness in prediction at different deformation rates. Fractional order models of viscoelasticity, by contrast, provide a framework to more accurately quantify complex rate-dependent behavior. Prior research that has considered fractional order viscoelasticity lacks experimental validation and contains limited links between viscoelastic theory and fractional order derivatives. To address these issues, we use fractional order operators to experimentally validate fractional and non-fractional viscoelastic models in elastomeric solids using Bayesian uncertainty quantification. The fractional order model is found to be advantageous as predictions are significantly more accurate than integer order viscoelastic models for deformation rates spanning four orders of magnitude.
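As a sketch of the fractional time derivative such models rest on, here is a Grünwald-Letnikov approximation; the step size and the test function are illustrative, not the paper's formulation:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights g_j = (-1)^j * C(alpha, j),
    built with the standard recursion g_j = g_{j-1} * (1 - (alpha+1)/j)."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Approximate the order-alpha derivative of f at time t (t > 0)
    as h^(-alpha) * sum_j g_j * f(t - j*h)."""
    n = int(t / h)
    w = gl_weights(alpha, n)
    return sum(w[j] * f(t - j * h) for j in range(n + 1)) / h ** alpha

# Sanity check: for alpha = 1 the weights reduce to [1, -1, 0, ...], i.e. a
# backward difference, so the derivative of f(t) = t is exactly 1.
print(round(gl_fractional_derivative(lambda t: t, 1.0, 1.0), 6))  # 1.0
```

Non-integer `alpha` interpolates between elastic (`alpha = 0`) and viscous (`alpha = 1`) limiting behavior, which is what gives springpot-type elements their rate-spanning flexibility.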
Experimentally Induced Learned Helplessness: How Far Does it Generalize?
ERIC Educational Resources Information Center
Tuffin, Keith; And Others
1985-01-01
Assessed whether experimentally induced learned helplessness on a cognitive training task generalized to a situationally dissimilar social interaction test task. No significant differences were observed between groups on the subsequent test task, showing that helplessness failed to generalize. (Author/ABB)
Development, reliability, and validity of the My Child's Play (MCP) questionnaire.
Schneider, Eleanor; Rosenblum, Sara
2014-01-01
This article describes the development, reliability, and validity of My Child's Play (MCP), a parent questionnaire designed to evaluate the play of children ages 3-9 yr. The first phase of the study determined the questionnaire's content and face validity. Subsequently, the internal reliability consistency and construct and concurrent validity were demonstrated using 334 completed questionnaires. The MCP showed good internal consistency (α = .86). The factor analysis revealed four distinct factors with acceptable levels of internal reliability (Cronbach's αs = .63-.81) and gender- and age-related differences in play characteristics; both findings attest to the tool's construct validity. Significant correlations (r = .33, p < .0001) with the Parent as a Teacher Inventory demonstrate the MCP's concurrent validity. The MCP demonstrated acceptable reliability and validity. It appears to be a promising standardized assessment tool for use in research and practice to promote understanding of a child's play. Copyright © 2014 by the American Occupational Therapy Association, Inc.
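The internal-consistency statistic reported above (Cronbach's α) can be computed directly from item scores; the scores below are hypothetical, not MCP data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a questionnaire: items is a list of columns,
    one list of respondent scores per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical scores from 4 respondents on 3 items
scores = [[3, 4, 5, 2], [3, 5, 4, 2], [2, 4, 5, 3]]
print(round(cronbach_alpha(scores), 2))  # 0.89
```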
Systems analysis identifies miR-29b regulation of invasiveness in melanoma.
Andrews, Miles C; Cursons, Joseph; Hurley, Daniel G; Anaka, Matthew; Cebon, Jonathan S; Behren, Andreas; Crampin, Edmund J
2016-11-16
In many cancers, microRNAs (miRs) contribute to metastatic progression by modulating phenotypic reprogramming processes such as epithelial-mesenchymal plasticity. This can be driven by miRs targeting multiple mRNA transcripts, inducing regulated changes across large sets of genes. The miR-target databases TargetScan and DIANA-microT predict putative relationships by examining sequence complementarity between miRs and mRNAs. However, it remains a challenge to identify which miR-mRNA interactions are active at endogenous expression levels, and of biological consequence. We developed a workflow to integrate TargetScan and DIANA-microT predictions into the analysis of data-driven associations calculated from transcript abundance (RNASeq) data, specifically the mutual information and Pearson's correlation metrics. We use this workflow to identify putative relationships of miR-mediated mRNA repression with strong support from both lines of evidence. Applying this approach systematically to a large, published collection of unique melanoma cell lines - the Ludwig Melbourne melanoma (LM-MEL) cell line panel - we identified putative miR-mRNA interactions that may contribute to invasiveness. This guided the selection of interactions of interest for further in vitro validation studies. Several miR-mRNA regulatory relationships supported by TargetScan and DIANA-microT demonstrated differential activity across cell lines of varying matrigel invasiveness. Strong negative statistical associations for these putative regulatory relationships were consistent with target mRNA inhibition by the miR, and suggest that differential activity of such miR-mRNA relationships contribute to differences in melanoma invasiveness. Many of these relationships were reflected across the skin cutaneous melanoma TCGA dataset, indicating that these observations also show graded activity across clinical samples. 
Several of these miRs are implicated in cancer progression (miR-211, -340, -125b, -221, and -29b). The specific role of miR-29b-3p in melanoma has not been well studied. We experimentally validated the predicted miR-29b-3p regulation of LAMC1, PPIC, and LASP1, and show that dysregulation of miR-29b-3p or these mRNA targets can influence cellular invasiveness in vitro. This analytic strategy provides a comprehensive, systems-level approach to identify miR-mRNA regulation in high-throughput cancer data, identifies novel putative interactions with functional phenotypic relevance, and can be used to direct experimental resources toward subsequent experimental validation. Computational scripts are available at http://github.com/uomsystemsbiology/LMMEL-miR-miner.
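The two data-driven association metrics named in the workflow, Pearson's correlation and mutual information, can be sketched as follows; the expression vectors are toy values, not LM-MEL data:

```python
import math

def pearson(x, y):
    """Pearson's correlation coefficient between two expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mutual_information(x, y, bins=4):
    """Histogram-based mutual information estimate (in bits)."""
    def discretize(v):
        lo, hi = min(v), max(v)
        return [min(int((a - lo) / (hi - lo) * bins), bins - 1) for a in v]
    dx, dy = discretize(x), discretize(y)
    n = len(x)
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            pxy = sum(1 for a, b in zip(dx, dy) if a == i and b == j) / n
            if pxy > 0:
                mi += pxy * math.log2(pxy / ((dx.count(i) / n) * (dy.count(j) / n)))
    return mi

# Toy pattern consistent with repression: miR up while target mRNA goes down
mir = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
mrna = [8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]
print(round(pearson(mir, mrna), 3))              # -1.0
print(round(mutual_information(mir, mrna), 2))   # 2.0
```

A strongly negative correlation together with high mutual information is the statistical signature the workflow looks for before checking sequence-based support in TargetScan or DIANA-microT.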
Petitti, Tommasangelo; Candela, Maria Luigia; Ianni, Andrea; de Belvis, Antonio Giulio; Ricciardi, Walter; De Marinis, Maria Grazia
2015-01-01
No validated Italian-language questionnaire exists to evaluate patient-perceived quality in digestive endoscopy. The objective was the validation of an Italian translation of an English-language questionnaire measuring patient satisfaction. We conducted a prospective study validating the Italian version of the GHAA-9m, a short questionnaire adapted for endoscopy by the American Society for Gastrointestinal Endoscopy. It was tested with a questionnaire/interview technique on 80 outpatients who underwent endoscopic examinations of the gastrointestinal tract in September 2014. The patients were divided into two groups of 40: group 1 completed the questionnaire first and was interviewed afterwards, whereas group 2 was interviewed first and completed the questionnaire afterwards. The results of the two groups were compared using inter-rater agreement, and the internal consistency of the questions was also evaluated. The results show that patients found the instrument simple and quick to use. Data analysis allowed us to conclude that the Italian translation is valid and consistent. The interview phase revealed some aspects suggesting changes that, in a future development of this tool, could increase its accuracy and informational content. The Italian version of the GHAA-9m questionnaire has good validity and reliability, shows measurement properties comparable to those of the American version, and can therefore be used in daily digestive endoscopy practice.
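Inter-rater agreement of the kind used to compare the two groups is commonly quantified with Cohen's kappa; a minimal sketch with hypothetical ratings:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa between two paired categorical rating series:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    po = sum(1 for a, b in zip(r1, r2) if a == b) / n               # observed
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)   # chance
    return (po - pe) / (1 - pe)

# Hypothetical questionnaire-vs-interview satisfaction ratings
q = ["good", "good", "fair", "good", "fair", "good"]
i = ["good", "good", "fair", "fair", "fair", "good"]
print(round(cohens_kappa(q, i), 2))  # 0.67
```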
Chen, Kaizhen; Seng, Kok-Yong
2012-09-01
A physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model has been developed for low, medium and high levels of soman intoxication in the rat, marmoset, guinea pig and pig. The primary objective of this model was to describe the pharmacokinetics of soman after intravenous, intramuscular and subcutaneous administration in the rat, marmoset, guinea pig, and pig as well as its subsequent pharmacodynamic effects on blood acetylcholinesterase (AChE) levels, relating dosimetry to physiological response. The reactions modelled in each physiologically realistic compartment are: (1) partitioning of C(±)P(±) soman from the blood into the tissue; (2) inhibition of AChE and carboxylesterase (CaE) by soman; (3) elimination of soman by enzymatic hydrolysis; (4) de novo synthesis and degradation of AChE and CaE; and (5) aging of AChE-soman and CaE-soman complexes. The model was first calibrated for the rat, then extrapolated for validation in the marmoset, guinea pig and pig. Adequate fits to experimental data on the time course of soman pharmacokinetics and AChE inhibition were achieved in the mammalian models. In conclusion, the present model adequately predicts the dose-response relationship resulting from soman intoxication and can potentially be applied to predict soman pharmacokinetics and pharmacodynamics in other species, including human. Copyright © 2011 John Wiley & Sons, Ltd.
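The reaction scheme (1)-(5) can be illustrated with a toy one-compartment simulation; every rate constant below is invented for illustration and is not the calibrated published model:

```python
def simulate(c0=1.0, e0=1.0, k_hyd=0.5, k_i=2.0, k_syn=0.01, k_deg=0.01,
             dt=0.001, t_end=5.0):
    """Toy one-compartment sketch of the scheme described above: soman
    concentration C is cleared by enzymatic hydrolysis and by irreversible
    binding to AChE (E), while E is replenished by de novo synthesis and
    lost to degradation. Forward-Euler integration with illustrative rates."""
    c, e = c0, e0
    for _ in range(int(t_end / dt)):
        dc = -k_hyd * c - k_i * c * e          # hydrolysis + AChE binding
        de = k_syn - k_deg * e - k_i * c * e   # synthesis - degradation - inhibition
        c += dc * dt
        e += de * dt
    return c, e

c_end, e_end = simulate()
print(f"soman remaining: {c_end:.4f}, AChE fraction: {e_end:.2f}")
```

Even this caricature reproduces the qualitative dose-response shape: the agent is cleared while the enzyme pool is left depressed below baseline, which is the pharmacodynamic readout the full PBPK/PD model fits across species.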
Computer Simulations of Coronary Blood Flow Through a Constriction
2014-03-01
interventional procedures (e.g., stent deployment). Building off previous models that have been partially validated with experimental data, this thesis continues to develop the ... the artery and increase blood flow. Generally a stent, or a mesh wire tube, is permanently inserted in order to scaffold open the artery wall
Perceptions vs Reality: A Longitudinal Experiment in Influenced Judgement Performance
2003-03-25
validity were manifested equally between treatment and control groups, thereby lending further validity to the experimental research design. External... Stanley (1975) identify this as a True Experimental Design: Pretest-Posttest Control Group Design. However, due to the longitudinal aspect required to... (1975:43). Nonequivalence will be ruled out as pretest equivalence is shown between treatment and control groups (1975:47). For quasi
Assessing the stability of human locomotion: a review of current measures
Bruijn, S. M.; Meijer, O. G.; Beek, P. J.; van Dieën, J. H.
2013-01-01
Falling poses a major threat to the steadily growing population of the elderly in modern-day society. A major challenge in the prevention of falls is the identification of individuals who are at risk of falling owing to an unstable gait. At present, several methods are available for estimating gait stability, each with its own advantages and disadvantages. In this paper, we review the currently available measures: the maximum Lyapunov exponent (λS and λL), the maximum Floquet multiplier, variability measures, long-range correlations, extrapolated centre of mass, stabilizing and destabilizing forces, foot placement estimator, gait sensitivity norm and maximum allowable perturbation. We explain what these measures represent and how they are calculated, and we assess their validity, divided up into construct validity, predictive validity in simple models, convergent validity in experimental studies, and predictive validity in observational studies. We conclude that (i) the validity of variability measures and λS is best supported across all levels, (ii) the maximum Floquet multiplier and λL have good construct validity, but negative predictive validity in models, negative convergent validity and (for λL) negative predictive validity in observational studies, (iii) long-range correlations lack construct validity and predictive validity in models and have negative convergent validity, and (iv) measures derived from perturbation experiments have good construct validity, but data are lacking on convergent validity in experimental studies and predictive validity in observational studies. In closing, directions for future research on dynamic gait stability are discussed. PMID:23516062
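As an illustration of what the maximum Lyapunov exponent measures, here is a sketch on the logistic map, a standard chaotic toy system rather than gait data:

```python
import math

def lyapunov_logistic(r=4.0, n=100_000, x0=0.2):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1 - x),
    estimated as the orbit average of log|f'(x)| = log|r*(1 - 2x)|.
    A positive exponent means nearby states diverge exponentially,
    which is the property that lambda-based gait measures probe."""
    x = x0
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

# For r = 4 the exact value is ln 2 ~ 0.693
print(round(lyapunov_logistic(), 2))  # ~0.69
```

Gait applications instead estimate λS and λL from delay-embedded kinematic time series (e.g., Rosenstein's nearest-neighbor divergence method), but the quantity being estimated is the same average exponential divergence rate.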
Yu, Zhangbin; Han, Shuping; Wu, Jinxia; Li, Mingxia; Wang, Huaiyan; Wang, Jimei; Liu, Jiebo; Pan, Xinnian; Yang, Jie; Chen, Chao
2014-01-01
The aim was to prospectively validate a previously constructed transcutaneous bilirubin (TcB) nomogram for identifying severe hyperbilirubinemia in healthy Chinese term and late-preterm infants. This was a multicenter study that included 9,174 healthy term and late-preterm infants in eight hospitals in China. TcB measurements were performed using a JM-103 bilirubinometer, and TcB values were plotted on a previously developed TcB nomogram to assess its ability to predict subsequent significant hyperbilirubinemia. In the present study, 972 neonates (10.6%) developed significant hyperbilirubinemia. The 40th percentile of the nomogram identified all neonates at risk of significant hyperbilirubinemia, but with a low positive predictive value (PPV) of 18.9%. Of the 453 neonates above the 95th percentile, 275 subsequently developed significant hyperbilirubinemia, giving a high PPV (60.7%) but low sensitivity (28.3%). The 75th percentile was highly specific (81.9%) and moderately sensitive (79.8%). The area under the curve (AUC) for the TcB nomogram was 0.875. This study validated the previously developed TcB nomogram, which can be used to predict subsequent significant hyperbilirubinemia in healthy Chinese term and late-preterm infants. However, combining the TcB nomogram with clinical risk factors could improve predictive accuracy for severe hyperbilirubinemia, which was not assessed in this study; further studies are necessary to confirm this combination. Copyright © 2014 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
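The reported PPV and sensitivity for the 95th-percentile cutoff follow directly from the counts given in the abstract:

```python
def screening_stats(tp, flagged, total_cases):
    """Positive predictive value and sensitivity of a screening cutoff."""
    ppv = tp / flagged               # of those flagged, how many became cases
    sensitivity = tp / total_cases   # of all cases, how many were flagged
    return ppv, sensitivity

# Numbers reported for the 95th-percentile cutoff in this study:
# 453 neonates above the cutoff, 275 of whom developed significant
# hyperbilirubinemia, out of 972 total cases.
ppv, sens = screening_stats(tp=275, flagged=453, total_cases=972)
print(f"PPV = {ppv:.1%}, sensitivity = {sens:.1%}")  # PPV = 60.7%, sensitivity = 28.3%
```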
Berghoff, Bork A; Karlsson, Torgny; Källman, Thomas; Wagner, E Gerhart H; Grabherr, Manfred G
2017-01-01
Measuring how gene expression changes in the course of an experiment assesses how an organism responds on a molecular level. Sequencing of RNA molecules, and their subsequent quantification, aims to assess global gene expression changes on the RNA level (transcriptome). While advances in high-throughput RNA-sequencing (RNA-seq) technologies allow for inexpensive data generation, accurate post-processing and normalization across samples is required to eliminate any systematic noise introduced by the biochemical and/or technical processes. Existing methods thus either normalize on selected known reference genes that are invariant in expression across the experiment, assume that the majority of genes are invariant, or that the effects of up- and down-regulated genes cancel each other out during the normalization. Here, we present a novel method, moose2, which predicts invariant genes in silico through a dynamic programming (DP) scheme and applies a quadratic normalization based on this subset. The method allows for specifying a set of known or experimentally validated invariant genes, which guides the DP. We experimentally verified the predictions of this method in the bacterium Escherichia coli, and show how moose2 is able to (i) estimate the expression value distances between RNA-seq samples, (ii) reduce the variation of expression values across all samples, and (iii) subsequently reveal new functional groups of genes during the late stages of DNA damage. We further applied the method to three eukaryotic data sets, on which its performance compares favourably to that of other methods. The software is implemented in C++ and is publicly available from http://grabherr.github.io/moose2/.
The proposed RNA-seq normalization method, moose2, is a valuable alternative to existing methods, with two major advantages: (i) the in silico prediction of invariant genes provides a list of potential reference genes for downstream analyses, and (ii) non-linear artefacts in RNA-seq data are handled adequately to minimize variation between replicates.
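The quadratic normalization step can be sketched as a least-squares fit on the predicted invariant genes; the gene values and the distortion below are toy assumptions, not the moose2 implementation:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit y ~ a*x^2 + b*x + c via the 3x3 normal equations,
    solved by Cramer's rule (fine at this illustrative scale)."""
    S = [sum(x ** k for x in xs) for k in range(5)]                 # sum x^k
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]  # sum y*x^k
    A = [[S[4], S[3], S[2]],
         [S[3], S[2], S[1]],
         [S[2], S[1], S[0]]]
    rhs = [T[2], T[1], T[0]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    out = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = rhs[r]
        out.append(det3(Ai) / d)
    return out  # [a, b, c]

# Invariant genes: sample log-counts xs vs reference log-counts ys related
# by a known quadratic distortion y = 0.1*x^2 + 0.8*x + 1 (toy values)
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [0.1 * x ** 2 + 0.8 * x + 1.0 for x in xs]
a, b, c = fit_quadratic(xs, ys)
# Apply the fitted curve to map any sample value onto the reference scale
corrected = a * 3.5 ** 2 + b * 3.5 + c
print(round(a, 3), round(b, 3), round(c, 3))  # 0.1 0.8 1.0
```

Fitting only on the invariant subset is the key idea: genes that truly change between conditions do not pull the normalization curve toward themselves.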
Understanding the ignition mechanism of high-pressure spray flames
Dahms, Rainer N.; Paczko, Günter A.; Skeen, Scott A.; ...
2016-10-25
A conceptual model for turbulent ignition in high-pressure spray flames is presented. The model is motivated by first-principles simulations and optical diagnostics applied to the Sandia n-dodecane experiment. The Lagrangian flamelet equations are combined with full LLNL kinetics (2755 species; 11,173 reactions) to resolve all time and length scales and chemical pathways of the ignition process at engine-relevant pressures and turbulence intensities unattainable using classic DNS. The first-principles value of the flamelet equations is established by a novel chemical explosive mode-diffusion time scale analysis of the fully-coupled chemical and turbulent time scales. Contrary to conventional wisdom, this analysis reveals that the high Damköhler number limit, a key requirement for the validity of the flamelet derivation from the reactive Navier-Stokes equations, applies during the entire ignition process. Corroborating Rayleigh-scattering and formaldehyde-PLIF measurements, with simultaneous schlieren imaging of mixing and combustion, are presented. Our combined analysis establishes a characteristic temporal evolution of the ignition process. First, a localized first-stage ignition event consistently occurs in the highest temperature mixture regions. This initiates, owing to the intense scalar dissipation, a turbulent cool flame wave propagating from this ignition spot through the entire flow field. This wave significantly decreases the ignition delay of lower temperature mixture regions in comparison to their homogeneous reference. This explains the experimentally observed formaldehyde formation across the entire spray head prior to high-temperature ignition, which consistently occurs first in a broad range of rich mixture regions. There, the combination of first-stage ignition delay, shortened by the cool flame wave, and the subsequent delay until second-stage ignition becomes minimal.
A turbulent flame subsequently propagates rapidly through the entire mixture over time scales consistent with experimental observations. As a result, we demonstrate that the neglect of turbulence-chemistry interactions fundamentally fails to capture the key features of this ignition process.
Lucente, Giuseppe; Lam, Steven; Schneider, Heike; Picht, Thomas
2018-02-01
Non-invasive pre-surgical mapping of eloquent brain areas with navigated transcranial magnetic stimulation (nTMS) is a useful technique linked to improved surgical planning and patient outcomes. The stimulator output intensity and subsequent resting motor threshold (rMT) determination are based on the motor-evoked potential (MEP) elicited in the target muscle with an amplitude above a predetermined threshold of 50 μV. However, a subset of patients is unable to achieve complete relaxation of the target muscles, resulting in false positives that jeopardize mapping validity with conventional MEP determination protocols. Our aim was to explore the feasibility and reproducibility of a novel mapping approach that investigates how increasing the MEP amplitude threshold to 300 and 500 μV affects the resulting motor maps. Seven healthy subjects underwent motor mapping with nTMS. The rMT was calculated with the conventional methodology in conjunction with experimental 300- and 500-μV MEP amplitude thresholds. Motor mapping was performed at 105% of rMT stimulator intensity using the FDI as the target muscle. Motor mapping was possible in all subjects with both the conventional and experimental setups. Motor area maps with the conventional 50-μV threshold showed poor correlation with 300-μV maps (α = 0.446, p < 0.001), but excellent consistency with 500-μV motor area maps (α = 0.974, p < 0.001). MEP latencies were significantly less variable (23 ms for 50 μV vs. 23.7 ms for 300 μV vs. 23.7 ms for 500 μV, p < 0.001). A slight but significant increase in the electric field (EF) value was found (EF: 60.8 V/m vs. 64.8 V/m vs. 66 V/m, p < 0.001). Our study demonstrates the feasibility of increasing the MEP detection threshold to 500 μV in rMT determination and motor area mapping with nTMS without losing precision.
Experimentally validated finite element model of electrocaloric multilayer ceramic structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, N. A. S., E-mail: nadia.smith@npl.co.uk; Correia, T. M., E-mail: tatiana.correia@npl.co.uk; Rokosz, M. K., E-mail: maciej.rokosz@npl.co.uk
2014-07-28
A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.
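A minimal sketch of the model's core ingredient, transient heat conduction with the electrocaloric effect as a source term, reduced here to 1-D explicit finite differences with illustrative (not NPL) values:

```python
def step(T, alpha, dx, dt, source):
    """One explicit finite-difference step of dT/dt = alpha*d2T/dx2 + source,
    with fixed-temperature (Dirichlet) boundaries at both ends."""
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + dt * (alpha * (T[i-1] - 2*T[i] + T[i+1]) / dx**2 + source[i])
    return new

# Toy MLCC-like slab: 21 nodes, uniform electrocaloric heating pulse, then decay
n, alpha, dx = 21, 1e-6, 1e-4        # illustrative diffusivity (m^2/s) and grid
dt = 0.4 * dx**2 / (2 * alpha)       # respect the explicit stability limit
T = [300.0] * n
heat = [1000.0] * n                  # K/s source term while the field is applied
for _ in range(200):
    T = step(T, alpha, dx, dt, heat)
peak = max(T)
off = [0.0] * n                      # field removed: source switched off
for _ in range(2000):
    T = step(T, alpha, dx, dt, off)
print(peak > 300.0, max(T) < peak)  # True True: interior heats, then relaxes
```

The real model is 2-D and adds radiative and convective boundary losses instead of fixed-temperature ends, but the source-term structure is the same.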
Experimental and Quasi-Experimental Design.
ERIC Educational Resources Information Center
Cottrell, Edward B.
With an emphasis on the problems of control of extraneous variables and threats to internal and external validity, the arrangement or design of experiments is discussed. The purpose of experimentation in an educational institution, and the principles governing true experimentation (randomization, replication, and control) are presented, as are…
Psychometric Research in Reading.
ERIC Educational Resources Information Center
Davis, Frederick B.
This review of psychometric research in reading analyzes the factors which seem related to reading comprehension skills. Experimental analysis of reading comprehension by E. L. Thorndike revealed two major components: knowledge of word meanings and verbal reasoning abilities. Subsequent analysis of experimental studies of reading comprehension…
Huang, Hsin-Chung; Yang, Hwai-I; Chang, Yu-Hsun; Chang, Rui-Jane; Chen, Mei-Huei; Chen, Chien-Yi; Chou, Hung-Chieh; Hsieh, Wu-Shiun; Tsao, Po-Nien
2012-12-01
The aim of this study was to identify high-risk newborns who will subsequently develop significant hyperbilirubinemia during Days 4 to 10 of life by using clinical data from the first three days of life. We retrospectively collected exclusively breastfed healthy term and near-term newborns born in our nursery between May 1, 2002, and June 30, 2005. Clinical data, including serum bilirubin, were collected and the significant predictors were identified. A bilirubin level ≥15 mg/dL during Days 4 to 10 of life was defined as significant hyperbilirubinemia. A prediction model for subsequent hyperbilirubinemia was established. This model was externally validated in another group of newborns, enrolled by the same criteria, to test its discrimination capability. In total, 1,979 neonates were collected and 1,208 cases were excluded by our exclusion criteria. Finally, 771 newborns were enrolled and 182 (23.6%) developed significant hyperbilirubinemia during Days 4 to 10 of life. In the logistic regression analysis, gestational age, maximal body weight loss percentage, and peak bilirubin level during the first 72 hours of life were significantly associated with subsequent hyperbilirubinemia. A prediction model was derived with an area under the receiver operating characteristic (AUROC) curve of 0.788. Model validation in the separate cohort (N = 209) showed similar discrimination capability (AUROC = 0.834). Gestational age, maximal body weight loss percentage, and peak serum bilirubin level during the first 3 days of life have the highest predictive value for subsequent significant hyperbilirubinemia. We provide a good model to predict the risk of subsequent significant hyperbilirubinemia. Copyright © 2012. Published by Elsevier B.V.
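A logistic prediction model of the kind derived here can be sketched as follows; the coefficients and the centering are hypothetical, not the published model:

```python
import math

def predicted_risk(ga_weeks, weight_loss_pct, peak_bili_mgdl,
                   b0=-8.0, b_ga=-0.5, b_wl=0.3, b_tb=0.6):
    """Hedged sketch of a three-predictor logistic model:
    risk = 1 / (1 + exp(-(b0 + b_ga*(GA - 38) + b_wl*loss% + b_tb*bilirubin))).
    All coefficients are hypothetical placeholders."""
    logit = (b0 + b_ga * (ga_weeks - 38.0)
             + b_wl * weight_loss_pct + b_tb * peak_bili_mgdl)
    return 1.0 / (1.0 + math.exp(-logit))

# Lower gestational age, more weight loss, higher early bilirubin -> higher risk
low = predicted_risk(40, 2.0, 8.0)
high = predicted_risk(36, 8.0, 12.0)
print(low < high)  # True
```

The published model would fix the coefficients by maximum-likelihood fitting on the derivation cohort and be judged by the AUROC on the validation cohort, as reported above.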
Antoniewicz, Franziska; Brand, Ralf
2016-04-01
This multistudy report used an experimental approach to alter automatic evaluations of exercise (AEE). First, we investigated the plasticity of AEE (study 1). A computerized evaluative conditioning task was developed that altered the AEE of participants in two experimental groups (acquisition of positive/negative associations involving exercising) and a control group (ηp² = .11). Second, we examined connections between changes in AEE and subsequent exercise behavior (chosen intensity on a bike ergometer; study 2) in individuals who were placed in groups according to their baseline AEE. Group differences in exercise behavior were detected (ηp² = .29). The effect was driven by the performance of the group with preexisting negative AEE that acquired more positive associations. This illustrates the effect of altered AEE on subsequent exercise behavior and the potential of AEE as a target for exercise interventions.
Can We Study Autonomous Driving Comfort in Moving-Base Driving Simulators? A Validation Study.
Bellem, Hanna; Klüver, Malte; Schrauf, Michael; Schöner, Hans-Peter; Hecht, Heiko; Krems, Josef F
2017-05-01
To lay the basis for studying autonomous driving comfort using driving simulators, we assessed the behavioral validity of two moving-base simulator configurations by contrasting them with a test-track setting. With increasing levels of automation, driving comfort becomes increasingly important. Simulators provide a safe environment to study perceived comfort in autonomous driving. To date, however, no studies had been conducted on comfort in autonomous driving to determine the extent to which results from simulator studies can be transferred to on-road driving conditions. Participants (N = 72) experienced six differently parameterized lane-change and deceleration maneuvers and subsequently rated the comfort of each scenario. One group of participants experienced the maneuvers in a test-track setting, whereas two other groups experienced them in one of two moving-base simulator configurations. We could demonstrate relative and absolute validity for one of the two simulator configurations. Subsequent analyses revealed that the validity of the simulator depends highly on the parameterization of the motion system. Moving-base simulation can be a useful research tool to study driving comfort in autonomous vehicles. However, our results point to a preference for subunity scaling factors for both lateral and longitudinal motion cues, which might be explained by an underestimation of speed in virtual environments. In line with previous studies, we recommend lateral- and longitudinal-motion scaling factors of approximately 50% to 60% in order to obtain valid results for both active and passive driving tasks.
Self-esteem among nursing assistants: reliability and validity of the Rosenberg Self-Esteem Scale.
McMullen, Tara; Resnick, Barbara
2013-01-01
To establish the reliability and validity of the Rosenberg Self-Esteem Scale (RSES) when used with nursing assistants (NAs). Testing the RSES used baseline data from a randomized controlled trial testing the Res-Care Intervention. Female NAs were recruited from nursing homes (n = 508). Validity testing for the positive and negative subscales of the RSES was based on confirmatory factor analysis (CFA) using structural equation modeling and Rasch analysis. Estimates of reliability were based on Rasch analysis and the person separation index. Evidence supports the reliability and validity of the RSES in NAs although we recommend minor revisions to the measure for subsequent use. Establishing reliable and valid measures of self-esteem in NAs will facilitate testing of interventions to strengthen workplace self-esteem, job satisfaction, and retention.
NASA Astrophysics Data System (ADS)
Giardina, G.; Mandaglio, G.; Nasirov, A. K.; Anastasi, A.; Curciarello, F.; Fazio, G.
2018-02-01
Experimental and theoretical results for the fusion probability P_CN of reactants in the entrance channel and the survival probability W_sur against fission during deexcitation of the compound nucleus formed in heavy-ion collisions are discussed. The theoretical results for a set of nuclear reactions leading to formation of compound nuclei (CNs) with charge number Z = 102-122 reveal a strong sensitivity of P_CN to the characteristics of the colliding nuclei in the entrance channel, the dynamics of the reaction mechanism, and the excitation energy of the system. We discuss the validity of assumptions and procedures for the analysis of experimental data, as well as the limits of validity of theoretical results obtained by the use of phenomenological models. The comparison of results obtained in many investigated reactions reveals serious limits of validity of the data analysis and calculation procedures.
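For context, P_CN and W_sur are the middle and final factors in the conventional factorization of the evaporation-residue cross section (the standard form in this subfield, though not written out in the abstract):

```latex
\sigma_{\mathrm{ER}}(E) \;=\; \sum_{\ell} \sigma_{\mathrm{cap}}(E,\ell)\, P_{\mathrm{CN}}(E,\ell)\, W_{\mathrm{sur}}(E,\ell)
```

Here σ_cap is the capture cross section in the entrance channel, P_CN the probability that the captured system evolves into a compact compound nucleus rather than quasifissioning, and W_sur the probability that the excited compound nucleus survives fission during deexcitation.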
Hovering Dual-Spin Vehicle Groundwork for Bias Momentum Sizing Validation Experiment
NASA Technical Reports Server (NTRS)
Rothhaar, Paul M.; Moerder, Daniel D.; Lim, Kyong B.
2008-01-01
Angular bias momentum offers significant stability augmentation for hovering flight vehicles. The reliance of the vehicle on thrust vectoring for agility and disturbance rejection is greatly reduced with significant levels of stored angular momentum in the system. A methodical procedure for bias momentum sizing has been developed in previous studies. This current study provides groundwork for experimental validation of that method using an experimental vehicle called the Dual-Spin Test Device, a thrust-levitated platform. Using measured data the vehicle's thrust vectoring units are modeled and a gust environment is designed and characterized. Control design is discussed. Preliminary experimental results of the vehicle constrained to three rotational degrees of freedom are compared to simulation for a case containing no bias momentum to validate the simulation. A simulation of a bias momentum dominant case is presented.
WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruehl, Kelley; Michelen, Carlos; Bosma, Bret
2016-08-01
The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation are necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.
Examining students' views about validity of experiments: From introductory to Ph.D. students
NASA Astrophysics Data System (ADS)
Hu, Dehui; Zwickl, Benjamin M.
2018-06-01
We investigated physics students' epistemological views on measurements and validity of experimental results. The roles of experiments in physics have been underemphasized in previous research on students' personal epistemology, and there is a need for a broader view of personal epistemology that incorporates experiments. An epistemological framework incorporating the structure, methodology, and validity of scientific knowledge guided the development of an open-ended survey. The survey was administered to students in algebra-based and calculus-based introductory physics courses, upper-division physics labs, and physics Ph.D. students. Within our sample, we identified several differences in students' ideas about validity and uncertainty in measurement. The majority of introductory students justified the validity of results through agreement with theory or with results from others. Alternatively, Ph.D. students frequently justified the validity of results based on the quality of the experimental process and repeatability of results. When asked about the role of uncertainty analysis, introductory students tended to focus on the representational roles (e.g., describing imperfections, data variability, and human mistakes). However, advanced students focused on the inferential roles of uncertainty analysis (e.g., quantifying reliability, making comparisons, and guiding refinements). The findings suggest that lab courses could emphasize a variety of approaches to establish validity, such as by valuing documentation of the experimental process when evaluating the quality of student work. In order to emphasize the role of uncertainty in an authentic way, labs could provide opportunities to iterate, make repeated comparisons, and make decisions based on those comparisons.
NASA Technical Reports Server (NTRS)
Geng, Tao; Paxson, Daniel E.; Zheng, Fei; Kuznetsov, Andrey V.; Roberts, William L.
2008-01-01
Pulsed combustion is receiving renewed interest as a potential route to higher performance in air breathing propulsion systems. Pulsejets offer a simple experimental device with which to study unsteady combustion phenomena and validate simulations. Previous computational fluid dynamic (CFD) simulation work focused primarily on the pulsejet combustion and exhaust processes. This paper describes a new inlet sub-model which simulates the fluidic and mechanical operation of a valved pulsejet head. The governing equations for this sub-model are described. Sub-model validation is provided through comparisons of simulated and experimentally measured reed valve motion, and time averaged inlet mass flow rate. The updated pulsejet simulation, with the inlet sub-model implemented, is validated through comparison with experimentally measured combustion chamber pressure, inlet mass flow rate, operational frequency, and thrust. Additionally, the simulated pulsejet exhaust flowfield, which is dominated by a starting vortex ring, is compared with particle imaging velocimetry (PIV) measurements on the bases of velocity, vorticity, and vortex location. The results show good agreement between simulated and experimental data. The inlet sub-model is shown to be critical for the successful modeling of pulsejet operation. This sub-model correctly predicts both the inlet mass flow rate and its phase relationship with the combustion chamber pressure. As a result, the predicted pulsejet thrust agrees very well with experimental data.
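As a rough illustration of what a fluidic inlet sub-model reduces to in its simplest form, the sketch below applies a quasi-steady incompressible orifice law gated by the reed-valve state. This is a simplification for illustration only, not the paper's actual sub-model, and all numerical values (pressures, density, effective discharge area) are hypothetical.

```python
import math

def inlet_mass_flow(p_ambient, p_chamber, rho, cd_area):
    # Quasi-steady incompressible orifice flow; the reed valve passes flow
    # only while chamber pressure sits below ambient, otherwise it is closed.
    dp = p_ambient - p_chamber
    return cd_area * math.sqrt(2.0 * rho * dp) if dp > 0.0 else 0.0

# Hypothetical values: sea-level ambient, sub-ambient chamber during intake,
# air density, and an effective (discharge coefficient x area) product.
mdot = inlet_mass_flow(101325.0, 95000.0, 1.2, 1.5e-4)  # kg/s
```

The gating on `dp > 0` is what gives the inlet flow its phase relationship with chamber pressure in such a model.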
Inhibitor-based validation of a homology model of the active-site of tripeptidyl peptidase II.
De Winter, Hans; Breslin, Henry; Miskowski, Tamara; Kavash, Robert; Somers, Marijke
2005-04-01
A homology model of the active site region of tripeptidyl peptidase II (TPP II) was constructed based on the crystal structures of four subtilisin-like templates. The resulting model was subsequently validated by judging expectations of the model versus observed activities for a broad set of prepared TPP II inhibitors. The structure-activity relationships observed for the prepared TPP II inhibitors correlated nicely with the structural details of the TPP II active site model, supporting the validity of this model and its usefulness for structure-based drug design and pharmacophore searching experiments.
Experimental Validation of an Integrated Controls-Structures Design Methodology
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.
1996-01-01
The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Guassian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.
Detection of overreported psychopathology with the MMPI-2-RF [corrected] validity scales.
Sellbom, Martin; Bagby, R Michael
2010-12-01
We examined the utility of the validity scales on the recently released Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2 RF; Ben-Porath & Tellegen, 2008) to detect overreported psychopathology. This set of validity scales includes a newly developed scale and revised versions of the original MMPI-2 validity scales. We used an analogue, experimental simulation in which MMPI-2 RF responses (derived from archived MMPI-2 protocols) of undergraduate students instructed to overreport psychopathology (in either a coached or noncoached condition) were compared with those of psychiatric inpatients who completed the MMPI-2 under standardized instructions. The MMPI-2 RF validity scale Infrequent Psychopathology Responses best differentiated the simulation groups from the sample of patients, regardless of experimental condition. No other validity scale added consistent incremental predictive utility to Infrequent Psychopathology Responses in distinguishing the simulation groups from the sample of patients. Classification accuracy statistics confirmed the recommended cut scores in the MMPI-2 RF manual (Ben-Porath & Tellegen, 2008).
English, Sangeeta B.; Shih, Shou-Ching; Ramoni, Marco F.; Smith, Lois E.; Butte, Atul J.
2014-01-01
Though genome-wide technologies, such as microarrays, are widely used, data from these methods are considered noisy; there is still varied success in downstream biological validation. We report a method that increases the likelihood of successfully validating microarray findings using real time RT-PCR, including genes at low expression levels and with small differences. We use a Bayesian network to identify the most relevant sources of noise based on the successes and failures in validation for an initial set of selected genes, and then improve our subsequent selection of genes for validation based on eliminating these sources of noise. The network displays the significant sources of noise in an experiment, and scores the likelihood of validation for every gene. We show how the method can significantly increase validation success rates. In conclusion, in this study, we have successfully added a new automated step to determine the contributory sources of noise that determine successful or unsuccessful downstream biological validation. PMID:18790084
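The abstract does not give the network's structure, but the scoring idea can be illustrated with a naive-Bayes stand-in: each gene carries binary noise indicators (e.g., low expression, small fold change), and the posterior probability of successful validation follows from assumed class-conditional likelihoods. All probabilities below are hypothetical.

```python
def validation_likelihood(features, likelihoods, prior=0.5):
    # Naive-Bayes stand-in for the paper's Bayesian network: returns
    # P(validation success | noise indicators) for one gene.
    # likelihoods[name] = (P(indicator present | success),
    #                      P(indicator present | failure))
    p_success, p_failure = prior, 1.0 - prior
    for name, present in features.items():
        ls, lf = likelihoods[name]
        p_success *= ls if present else (1.0 - ls)
        p_failure *= lf if present else (1.0 - lf)
    return p_success / (p_success + p_failure)

# Hypothetical class-conditional rates for two noise sources
likelihoods = {"low_expression": (0.2, 0.7), "small_fold_change": (0.3, 0.8)}
score = validation_likelihood(
    {"low_expression": 0, "small_fold_change": 0}, likelihoods)
```

A gene free of both noise indicators scores high, matching the paper's strategy of preferring candidates whose dominant noise sources are absent.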
Fault-tolerant clock synchronization validation methodology. [in computer systems
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.
1987-01-01
A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
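A minimal sketch of the stochastic step described above, assuming the measured clock-read errors are independent samples: the probability that the upper bound is exceeded is estimated as the empirical exceedance fraction, which would then feed the reliability analysis. The sample values and bound are hypothetical.

```python
def exceedance_probability(read_errors, epsilon):
    # Empirical fraction of measured clock-read errors exceeding the bound.
    return sum(1 for e in read_errors if abs(e) > epsilon) / len(read_errors)

# Hypothetical measured read errors (microseconds) and an assumed design bound
samples = [0.8, 1.1, 0.6, 1.4, 0.9, 2.3, 0.7, 1.0, 1.2, 0.5]
p = exceedance_probability(samples, epsilon=2.0)
```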
ERIC Educational Resources Information Center
Cory, Charles H.
This report presents data concerning the validity of a set of experimental computerized and paper-and-pencil tests as measures of on-job performance on global and job elements. It reports on the usefulness of 30 experimental and operational variables for predicting marks on 42 job elements and on a global criterion for Electrician's Mate,…
Viscoelasticity of Axisymmetric Composite Structures: Analysis and Experimental Validation
2013-02-01
compressive stress at the interface between the composite and steel prior to the sheath's cut-off. Accordingly, the viscoelastic analysis is used...The hoop-stress profile in figure 6 shows the steel region is in compression, resulting from the winding tension of the composite overwrap. The stress...mechanical and thermal loads. Experimental validation of the model is conducted using a high-tensioned composite overwrapped on a steel cylinder. The creep...
Verification of Experimental Techniques for Flow Surface Determination
NASA Technical Reports Server (NTRS)
Lissenden, Cliff J.; Lerch, Bradley A.; Ellis, John R.; Robinson, David N.
1996-01-01
The concept of a yield surface is central to the mathematical formulation of a classical plasticity theory. However, at elevated temperatures, material response can be highly time-dependent, which is beyond the realm of classical plasticity. Viscoplastic theories have been developed for just such conditions. In viscoplastic theories, the flow law is given in terms of inelastic strain rate rather than the inelastic strain increment used in time-independent plasticity. Thus, surfaces of constant inelastic strain rate or flow surfaces are to viscoplastic theories what yield surfaces are to classical plasticity. The purpose of the work reported herein was to validate experimental procedures for determining flow surfaces at elevated temperatures. Since experimental procedures for determining yield surfaces in axial/torsional stress space are well established, they were employed -- except inelastic strain rates were used rather than total inelastic strains. In yield-surface determinations, the use of small-offset definitions of yield minimizes the change of material state and allows multiple loadings to be applied to a single specimen. The key to the experiments reported here was precise, decoupled measurement of axial and torsional strain. With this requirement in mind, the performance of a high-temperature multi-axial extensometer was evaluated by comparing its results with strain gauge results at room temperature. Both the extensometer and strain gauges gave nearly identical yield surfaces (both initial and subsequent) for type 316 stainless steel (316 SS). The extensometer also successfully determined flow surfaces for 316 SS at 650 C. Furthermore, to judge the applicability of the technique for composite materials, yield surfaces were determined for unidirectional tungsten/Kanthal (Fe-Cr-Al).
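The small-offset definition used in these probes can be illustrated as follows: given a stress-strain record from one loading direction, the surface point is the first stress at which the inelastic strain (total strain minus stress/E) exceeds the chosen offset. The toy hardening law below and the nominal 316 SS modulus are assumed values for illustration, not the paper's data.

```python
def offset_yield_stress(stress, strain, young_modulus, offset=1e-5):
    # First stress at which inelastic strain (total minus stress/E)
    # exceeds the small offset defining the yield/flow-surface probe.
    for s, e in zip(stress, strain):
        if e - s / young_modulus > offset:
            return s
    return None

E = 193e9  # nominal Young's modulus of 316 SS, Pa (assumed value)
stress = [i * 10e6 for i in range(1, 31)]  # probing ramp, 10..300 MPa
# Toy response: purely elastic below 200 MPa, mild inelastic flow above it
strain = [s / E + max(0.0, s - 200e6) * 4e-12 for s in stress]
yield_point = offset_yield_stress(stress, strain, E)
```

Keeping the offset small is what limits the change of material state and lets one specimen yield multiple probe directions.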
Zelenyak, Andreea-Manuela; Schorer, Nora; Sause, Markus G R
2018-02-01
This paper presents a method for embedding realistic defect geometries of a fiber reinforced material in a finite element modeling environment in order to simulate active ultrasonic inspection. When ultrasonic inspection is used experimentally to investigate the presence of defects in composite materials, the microscopic defect geometry may cause signal characteristics that are difficult to interpret. Hence, modeling of this interaction is key to improve our understanding and way of interpreting the acquired ultrasonic signals. To model the true interaction of the ultrasonic wave field with such defect structures as pores, cracks or delamination, a realistic three dimensional geometry reconstruction is required. We present a 3D-image based reconstruction process which converts computed tomography data into adequate surface representations ready to be embedded for processing with finite element methods. Subsequent modeling using these geometries uses a multi-scale and multi-physics simulation approach which results in quantitative A-Scan ultrasonic signals which can be directly compared with experimental signals. Therefore, besides the properties of the composite material, a full transducer implementation, piezoelectric conversion and simultaneous modeling of the attached circuit is applied. Comparison between simulated and experimental signals provides very good agreement in electrical voltage amplitude and signal arrival time and thus validates the proposed modeling approach. Simulating ultrasound wave propagation in a medium with a realistic shape of the geometry clearly shows a difference in how the disturbance of the waves takes place and finally allows more realistic modeling of A-scans. Copyright © 2017 Elsevier B.V. All rights reserved.
Protective pathways against colitis mediated by appendicitis and appendectomy.
Cheluvappa, R; Luo, A S; Palmer, C; Grimm, M C
2011-09-01
Appendicitis followed by appendectomy (AA) at a young age protects against inflammatory bowel disease (IBD). Using a novel murine appendicitis model, we showed that AA protected against subsequent experimental colitis. To delineate genes/pathways involved in this protection, AA was performed and samples harvested from the most distal colon. RNA was extracted from four individual colonic samples per group (AA group and double-laparotomy control group) and each sample microarray analysed followed by gene-set enrichment analysis (GSEA). The gene-expression study was validated by quantitative reverse transcription-polymerase chain reaction (RT-PCR) of 14 selected genes across the immunological spectrum. Distal colonic expression of 266 gene-sets was up-regulated significantly in AA group samples (false discovery rates < 1%; P-value < 0·001). Time-course RT-PCR experiments involving the 14 genes displayed down-regulation over 28 days. The IBD-associated genes tnfsf10, SLC22A5, C3, ccr5, irgm, ptger4 and ccl20 were modulated in AA mice 3 days after surgery. Many key immunological and cellular function-associated gene-sets involved in the protective effect of AA in experimental colitis were identified. The down-regulation of 14 selected genes over 28 days after surgery indicates activation, repression or de-repression of these genes leading to downstream AA-conferred anti-colitis protection. Further analysis of these genes, profiles and biological pathways may assist in developing better therapeutic strategies in the management of intractable IBD. © 2011 The Authors. Clinical and Experimental Immunology © 2011 British Society for Immunology.
Human motor unit recordings: origins and insight into the integrated motor system.
Duchateau, Jacques; Enoka, Roger M
2011-08-29
Soon after Edward Liddell [1895-1981] and Charles Sherrington [1857-1952] introduced the concept of a motor unit in 1925 and the necessary technology was developed, the recording of single motor unit activity became feasible in humans. It was quickly discovered by Edgar Adrian [1889-1977] and Detlev Bronk [1897-1975] that the force exerted by muscle during voluntary contractions was the result of the concurrent recruitment of motor units and modulation of the rate at which they discharged action potentials. Subsequent studies found that the relation between discharge frequency and motor unit force was characterized by a sigmoidal function. Based on observations on experimental animals, Elwood Henneman [1915-1996] proposed a "size principle" in 1957 and most studies in humans focussed on validating this concept during various types of muscle contractions. By the end of the 20th century, the experimental evidence indicated that the recruitment order of human motor units was determined primarily by motoneuron size and that the occasional changes in recruitment order were not an intended strategy of the central nervous system. Fundamental knowledge on the function of Sherrington's "common final pathway" was expanded with observations on motor unit rotation, minimal and maximal discharge rates, discharge variability, and self-sustained firing. Despite the great amount of work on characterizing motor unit activity during the first century of inquiry, however, many basic questions remain unanswered and these limit the extent to which findings on humans and experimental animals can be integrated and generalized to all movements. 2011 Elsevier B.V. All rights reserved.
A biphasic approach for the study of lift generation in soft porous media
NASA Astrophysics Data System (ADS)
Wu, Qianhong; Santhanam, Sridhar; Nathan, Rungun; Wang, Qiuyun
2017-04-01
Lift generation in highly compressible porous media under rapid compression continues to be an important topic in porous media flow. Although significant progress has been made, how to model different lifting forces during the compression process remains unclear. This is mainly because the input parameters of the existing theoretical studies, including the Darcy permeability of the porous media and the viscous damping coefficient of its solid phase, were manually adjusted so as to match the experimental data. In the current paper, we report a biphasic approach to experimentally and theoretically treat this limitation. Synthetic fibrous porous materials, whose permeability was precisely measured, were subsequently exposed to sudden impacts using a porous-walled cylinder-piston apparatus. The obtained time-dependent compression of the porous media, along with the permeability data, was applied in two different theoretical models to predict the pore pressure generation, a plug flow model and a consolidation model [Q. Wu et al., J. Fluid Mech. 542, 281 (2005a)]. Comparison between the theory and the experiments on the pore pressure distribution proved the validity of the consolidation model. Furthermore, a viscoelastic model, containing a nonlinear spring in conjunction with a linear viscoelastic generalized Maxwell mechanical module, was developed to characterize the solid phase lifting force. The model matched the experimental data very well. The paper presented herein, as one of the series studies on this topic, provides an important biphasic approach to characterize different forces that contribute to the lift generation in a soft porous medium under rapid compression.
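The generalized Maxwell portion of the solid-phase model can be sketched as a relaxation modulus G(t) = G_inf + sum_i G_i exp(-t/tau_i); the nonlinear spring of the authors' full model is omitted here, and the branch parameters below are hypothetical.

```python
import math

def maxwell_relaxation(t, g_inf, branches):
    # Generalized Maxwell relaxation modulus:
    #   G(t) = G_inf + sum_i G_i * exp(-t / tau_i)
    return g_inf + sum(g * math.exp(-t / tau) for g, tau in branches)

branches = [(5.0, 0.01), (2.0, 0.1)]  # hypothetical (G_i, tau_i) pairs
G0 = maxwell_relaxation(0.0, 1.0, branches)      # instantaneous stiffness
G_late = maxwell_relaxation(10.0, 1.0, branches)  # relaxed, near G_inf
```

Under rapid compression the short time constants dominate, which is why such a module can capture a stiff initial solid-phase lifting force that relaxes away.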
NASA Astrophysics Data System (ADS)
Etxeberria, A.; Vechiu, I.; Baudoin, S.; Camblong, H.; Kreckelbergh, S.
2014-02-01
The increasing use of distributed generators, which are mainly based on renewable sources, can create several issues in the operation of the electric grid. The microgrid is being analysed as a solution for integrating renewable sources into the grid at a high penetration level in a controlled way. The storage systems play a vital role in keeping the energy and power balance of the microgrid. Due to the technical limitations of the currently available storage systems, it is necessary to use more than one storage technology to satisfy the requirements of the microgrid application. This work validates, in simulation and experimentally, the use of a Three-Level Neutral Point Clamped converter to control the power flow of a hybrid storage system formed by a SuperCapacitor and a Vanadium Redox Battery. The operation of the system is validated in two case studies on the experimental platform installed at ESTIA. The experimental results prove the validity of the proposed system as well as the designed control algorithm. The good agreement between experimental and simulation results also validates the simulation model, which can therefore be used to analyse the operation of the system in different case studies.
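The abstract does not spell out the power-sharing algorithm, but a common scheme for a supercapacitor/battery hybrid is a frequency-based split: a first-order low-pass filter routes slow power variations to the Vanadium Redox Battery and the fast residual to the SuperCapacitor. The sketch below assumes this filter-based scheme, with hypothetical time constants and demand profile.

```python
def split_power(demand, dt=0.1, tau=5.0):
    # Frequency-based split (one common approach, not necessarily the
    # paper's exact algorithm): a discrete first-order low-pass sends the
    # slow component to the battery, the fast residual to the supercap.
    battery, supercap, low = [], [], 0.0
    alpha = dt / (tau + dt)
    for p in demand:
        low += alpha * (p - low)
        battery.append(low)
        supercap.append(p - low)
    return battery, supercap

demand = [0.0] * 5 + [10.0] * 20   # step in microgrid net power, kW
vrb, sc = split_power(demand)
```

At the step the supercapacitor absorbs nearly all of the transient, while the battery ramps up slowly, matching each technology's power/energy strengths.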
Zimmerman, C.E.
2005-01-01
Analysis of otolith strontium (Sr) or strontium-to-calcium (Sr:Ca) ratios provides a powerful tool to reconstruct the chronology of migration among salinity environments for diadromous salmonids. Although use of this method has been validated by examination of known individuals and translocation experiments, it has never been validated under controlled experimental conditions. In this study, incorporation of otolith Sr was tested across a range of salinities and resulting levels of ambient Sr and Ca concentrations in juvenile chinook salmon (Oncorhynchus tshawytscha), coho salmon (Oncorhynchus kisutch), sockeye salmon (Oncorhynchus nerka), rainbow trout (Oncorhynchus mykiss), and Arctic char (Salvelinus alpinus). Experimental water was mixed, using stream water and seawater as end members, to create experimental salinities of 0.1, 6.3, 12.7, 18.6, 25.5, and 33.0 psu. Otolith Sr and Sr:Ca ratios were significantly related to salinity for all species (r2 range: 0.80-0.91) but provide only enough predictive resolution to discriminate among fresh water, brackish water, and saltwater residency. These results validate the use of otolith Sr:Ca ratios to broadly discriminate salinity histories encountered by salmonids but highlight the need for further research concerning the influence of osmoregulation and physiological changes associated with smolting on otolith microchemistry.
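The salinity-to-Sr:Ca relationship reported above is linear enough for a simple least-squares calibration, which can then invert a measured ratio into one of the three broad residency classes the authors say the method can resolve. The Sr:Ca values and class boundaries below are hypothetical illustrations, not the study's data.

```python
def fit_line(x, y):
    # Ordinary least-squares fit of y = a + b * x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical otolith Sr:Ca (mmol/mol) at the study's salinities (psu)
salinity = [0.1, 6.3, 12.7, 18.6, 25.5, 33.0]
sr_ca = [0.4, 1.1, 1.9, 2.6, 3.5, 4.3]
a, b = fit_line(salinity, sr_ca)

def salinity_class(ratio):
    # Coarse classification only: fresh / brackish / salt, per the
    # limited predictive resolution the study reports
    s = (ratio - a) / b
    return "fresh" if s < 5 else "brackish" if s < 25 else "salt"
```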
Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram
2014-01-10
Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs' structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., J. Control. Release 160 (2012) 147-157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-Nearest Neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used by us in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. © 2013.
Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram
2014-01-01
Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs' structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., J. Control. Release 160 (2012) 147-157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-nearest neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. PMID:24184343
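Of the two model families named above, kNN is simple enough to sketch directly: a query compound's loading class is taken as the majority label of its k nearest training compounds in descriptor space. The descriptors and training set below are invented for illustration and are not the authors' data or their fitted model.

```python
import math

def knn_predict(train, query, k=3):
    # k-nearest-neighbours majority vote; train is [(descriptor_vec, label)]
    dist = lambda u, v: math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical 2-D descriptors (e.g., lipophilicity, ionizable fraction)
# with high/low remote-loading labels
train = [((2.1, 0.9), "high"), ((1.8, 0.8), "high"), ((0.3, 0.1), "low"),
         ((0.5, 0.2), "low"), ((2.5, 0.7), "high"), ((0.2, 0.3), "low")]
pred = knn_predict(train, (2.0, 0.85))
```

In a real QSPR workflow the descriptors would be computed from structure and the choice of k cross-validated, as in the cited ISE/kNN modelling.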
A Surrogate Approach to the Experimental Optimization of Multielement Airfoils
NASA Technical Reports Server (NTRS)
Otto, John C.; Landman, Drew; Patera, Anthony T.
1996-01-01
The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality, and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing-edge flap position to achieve a design lift coefficient for a three-element airfoil.
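The surrogate idea can be illustrated with the simplest response surface: fit a quadratic through lift coefficients measured at a few flap settings, then solve the surrogate for the setting that meets the design lift coefficient. The measurements below are hypothetical, and a real application would add the framework's validation bounds.

```python
import math

def quad_through(p0, p1, p2):
    # Exact quadratic y = a x^2 + b x + c through three (x, y) points,
    # built from Newton divided differences
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    a = ((y2 - y0) / (x2 - x0) - (y1 - y0) / (x1 - x0)) / (x2 - x1)
    b = (y1 - y0) / (x1 - x0) - a * (x0 + x1)
    c = y0 - a * x0 ** 2 - b * x0
    return a, b, c

# Hypothetical measured lift coefficients at three flap positions (degrees)
a, b, c = quad_through((0.0, 1.8), (5.0, 2.3), (10.0, 2.4))

def flap_for_cl(target):
    # Solve a x^2 + b x + (c - target) = 0; with a < 0 the "+" branch
    # gives the smaller, in-range root
    disc = b * b - 4.0 * a * (c - target)
    return (-b + math.sqrt(disc)) / (2.0 * a)
```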
Looby, Mairead; Ibarra, Neysi; Pierce, James J; Buckley, Kevin; O'Donovan, Eimear; Heenan, Mary; Moran, Enda; Farid, Suzanne S; Baganz, Frank
2011-01-01
This study describes the application of quality by design (QbD) principles to the development and implementation of a major manufacturing process improvement for a commercially distributed therapeutic protein produced in Chinese hamster ovary cell culture. The intent of this article is to focus on QbD concepts, and provide guidance and understanding on how the various components combine together to deliver a robust process in keeping with the principles of QbD. A fed-batch production culture and a virus inactivation step are described as representative examples of upstream and downstream unit operations that were characterized. A systematic approach incorporating QbD principles was applied to both unit operations, involving risk assessment of potential process failure points, small-scale model qualification, design and execution of experiments, definition of operating parameter ranges and process validation acceptance criteria followed by manufacturing-scale implementation and process validation. Statistical experimental designs were applied to the execution of process characterization studies evaluating the impact of operating parameters on product quality attributes and process performance parameters. Data from process characterization experiments were used to define the proven acceptable range and classification of operating parameters for each unit operation. Analysis of variance and Monte Carlo simulation methods were used to assess the appropriateness of process design spaces. Successful implementation and validation of the process in the manufacturing facility and the subsequent manufacture of hundreds of batches of this therapeutic protein verifies the approaches taken as a suitable model for the development, scale-up and operation of any biopharmaceutical manufacturing process. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
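A Monte Carlo design-space check of the kind mentioned above can be sketched by sampling operating parameters uniformly across their proven acceptable ranges and counting how often the process response meets its acceptance criterion. The response model, parameter ranges, and specification below are all hypothetical toys, not the study's process.

```python
import random

random.seed(7)

def simulate_batch(temp, ph):
    # Toy process response (hypothetical): titre falls off linearly
    # with distance from the temperature and pH setpoints
    return 5.0 - 0.8 * abs(temp - 36.5) - 2.0 * abs(ph - 7.0)

def design_space_success_rate(n=10000, temp_range=(36.0, 37.0),
                              ph_range=(6.9, 7.1), titre_spec=4.0):
    # Uniformly sample the proven acceptable ranges and count batches
    # meeting the acceptance criterion
    ok = 0
    for _ in range(n):
        t = random.uniform(*temp_range)
        p = random.uniform(*ph_range)
        if simulate_batch(t, p) >= titre_spec:
            ok += 1
    return ok / n

rate = design_space_success_rate()
```

Here the sampled ranges sit entirely inside the region meeting the specification, so the estimated success rate is 1.0; widening the ranges would expose where the design space boundary lies.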
Evans, Douglas W; Rajagopalan, Padma; Devita, Raffaella; Sparks, Jessica L
2011-01-01
Liver sinusoidal endothelial cells (LSECs) are the primary site of numerous transport and exchange processes essential for liver function. LSECs rest on a sparse extracellular matrix layer housed in the space of Disse, a 0.5-1 μm wide region separating LSECs from hepatocytes. To develop bioengineered liver tissue constructs, it is important to understand the mechanical interactions among LSECs, hepatocytes, and the extracellular matrix in the space of Disse. Currently the mechanical properties of the space of Disse matrix are not well understood. The objective of this study was to develop and validate a device for performing mechanical tests at the meso-scale (100 nm-100 μm), to enable novel matrix characterization within the space of Disse. The device utilizes a glass micro-spherical indentor attached to a cantilever made from a fiber optic cable. A 3-axis translation table is used to bring the specimen into contact with the indentor and deform the cantilever. A position detector monitors the location of a laser passing through the cantilever and allows for the calculation of subsequent tissue deformation. The design allows micro-newton and nano-newton stress-strain tissue behavior to be quantified. To validate the device accuracy, 11 samples of silicone rubber in two formulations were tested to experimentally confirm their Young's moduli. Prior macroscopic unconfined compression tests determined the formulations of EcoFlex030 (n=6) and EcoFlex010 (n=5) to possess Young's moduli of 92.67 ± 6.22 and 43.10 ± 3.29 kPa respectively. Optical measurements taken utilizing CITE's position control and fiber optic cantilever found the moduli to be 106.4 kPa and 47.82 kPa.
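If the sphere-on-sample contact is treated as Hertzian (an assumption not stated in the abstract), the cantilever deflection measured by the position detector converts to force through the cantilever stiffness, and force plus indentation depth give an effective modulus. All numbers below are hypothetical.

```python
def indentation_modulus(force, depth, radius):
    # Hertzian spherical contact: F = (4/3) * E_eff * sqrt(R) * d^1.5,
    # rearranged for the effective modulus E_eff
    return 3.0 * force / (4.0 * radius ** 0.5 * depth ** 1.5)

k = 0.5             # cantilever stiffness, N/m (hypothetical)
deflection = 2e-6   # cantilever deflection from the laser position, m
force = k * deflection  # 1e-6 N: the micro-newton range the device targets
depth = 5e-6            # indentation depth into the sample, m
E_eff = indentation_modulus(force, depth, radius=25e-6)  # Pa
```

The resulting modulus lands in the tens-of-kPa range, consistent with the soft silicone calibration samples described above.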
Characterization and validation of sampling and analytical methods for mycotoxins in workplace air.
Jargot, Danièle; Melin, Sandrine
2013-03-01
Mycotoxins are produced by certain moulds on plants or foodstuffs under growing, transport or storage conditions. They are toxic to humans and animals, and some are carcinogenic. Methods to monitor occupational exposure to seven of the most frequently occurring airborne mycotoxins have been characterized and validated. Experimental aerosols were generated from naturally contaminated particles for sampler evaluation. Air samples were collected on foam pads, using the CIP 10 personal aerosol sampler with its inhalable health-related aerosol fraction selector. The samples were subsequently solvent extracted from the sampling media, cleaned using immunoaffinity (IA) columns and analyzed by liquid chromatography with fluorescence detection. Ochratoxin A (OTA) or fumonisin and aflatoxin derivatives were detected and quantified. The quantification limits were 0.015 ng m(-3) for OTA, 1 ng m(-3) for fumonisins and 0.5 pg m(-3) for aflatoxins, for a minimum dust concentration of 1 mg m(-3) and a 4800 L sampled air volume. The methods were successfully applied to field measurements, which confirmed that workers can be exposed when handling contaminated materials. It was observed that airborne particles may be more contaminated than the bulk material itself. The validated methods have measuring ranges fully adapted to the concentrations found in the workplace. Their performance meets the general requirements laid down for chemical agent measurement procedures, with an expanded uncertainty of less than 50% for most mycotoxins. The analytical uncertainty, between 14 and 24%, was quite satisfactory given the low mycotoxin amounts involved, when compared to the food benchmarks. The methods are now user-friendly enough to be adopted for personal workplace sampling. They will later allow for mycotoxin occupational risk assessment, as only very few quantitative data have been available until now.
Anurag, Meenakshi; Punturi, Nindo; Hoog, Jeremy; Bainbridge, Matthew N; Ellis, Matthew J; Haricharan, Svasti
2018-05-23
This study was undertaken to conduct a comprehensive investigation of the role of DNA damage repair (DDR) defects in poor-outcome ER+ disease. Expression and mutational status of DDR genes in ER+ breast tumors were correlated with proliferative response in neoadjuvant aromatase inhibitor therapy trials (discovery data set), with outcomes in the METABRIC, TCGA and Loi data sets (validation data sets), and in patient-derived xenografts. A causal relationship between candidate DDR genes and endocrine treatment response, and the underlying mechanism, was then tested in ER+ breast cancer cell lines. Loss of expression of three genes, CETN2 (p<0.001) and ERCC1 (p=0.01) from the nucleotide excision repair (NER) pathway and NEIL2 (p=0.04) from the base excision repair (BER) pathway, was associated with endocrine treatment resistance in the discovery data sets and subsequently validated in independent patient cohorts. Complementary mutation analysis supported associations between mutations in NER and BER pathways and reduced endocrine treatment response. A causal role for CETN2, NEIL2 and ERCC1 loss in intrinsic endocrine resistance was experimentally validated in ER+ breast cancer cell lines and in ER+ patient-derived xenograft models. Loss of CETN2, NEIL2 or ERCC1 induced endocrine treatment resistance by dysregulating the G1/S transition and, therefore, increased sensitivity to CDK4/6 inhibitors. A combined DDR signature score was developed that predicted poor outcome in multiple patient cohorts. This report identifies DDR defects as a new class of endocrine treatment resistance drivers and indicates new avenues for predicting efficacy of CDK4/6 inhibition in the adjuvant treatment setting. Copyright ©2018, American Association for Cancer Research.
The Use of Virtual Reality in the Study of People's Responses to Violent Incidents.
Rovira, Aitor; Swapp, David; Spanlang, Bernhard; Slater, Mel
2009-01-01
This paper reviews experimental methods for the study of the responses of people to violence in digital media, and in particular considers the issues of internal validity and ecological validity or generalisability of results to events in the real world. Experimental methods typically involve a significant level of abstraction from reality, with participants required to carry out tasks that are far removed from violence in real life, and hence their ecological validity is questionable. On the other hand studies based on field data, while having ecological validity, cannot control multiple confounding variables that may have an impact on observed results, so that their internal validity is questionable. It is argued that immersive virtual reality may provide a unification of these two approaches. Since people tend to respond realistically to situations and events that occur in virtual reality, and since virtual reality simulations can be completely controlled for experimental purposes, studies of responses to violence within virtual reality are likely to have both ecological and internal validity. This depends on a property that we call 'plausibility' - including the fidelity of the depicted situation with prior knowledge and expectations. We illustrate this with data from a previously published experiment, a virtual reprise of Stanley Milgram's 1960s obedience experiment, and also with pilot data from a new study being developed that looks at bystander responses to violent incidents.
Riedl, Janet; Esslinger, Susanne; Fauhl-Hassek, Carsten
2015-07-23
Food fingerprinting approaches are expected to become a very potent tool in authentication processes aiming at a comprehensive characterization of complex food matrices. By non-targeted spectrometric or spectroscopic chemical analysis with a subsequent (multivariate) statistical evaluation of acquired data, food matrices can be investigated in terms of their geographical origin, species variety or possible adulterations. Although many successful research projects have already demonstrated the feasibility of non-targeted fingerprinting approaches, their uptake and implementation into routine analysis and food surveillance is still limited. In many proof-of-principle studies, the prediction ability of only one data set was explored, measured within a limited period of time using one instrument within one laboratory. Thorough validation strategies that guarantee reliability of the respective data basis and that allow conclusion on the applicability of the respective approaches for its fit-for-purpose have not yet been proposed. Within this review, critical steps of the fingerprinting workflow were explored to develop a generic scheme for multivariate model validation. As a result, a proposed scheme for "good practice" shall guide users through validation and reporting of non-targeted fingerprinting results. Furthermore, food fingerprinting studies were selected by a systematic search approach and reviewed with regard to (a) transparency of data processing and (b) validity of study results. Subsequently, the studies were inspected for measures of statistical model validation, analytical method validation and quality assurance measures. In this context, issues and recommendations were found that might be considered as an actual starting point for developing validation standards of non-targeted metabolomics approaches for food authentication in the future. 
Hence, this review intends to contribute to the harmonization and standardization of food fingerprinting, both required as a prior condition for the authentication of food in routine analysis and official control. Copyright © 2015 Elsevier B.V. All rights reserved.
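As a minimal, concrete instance of the statistical model validation this review calls for, leave-one-out cross-validation of a toy 1-nearest-neighbour classifier can be sketched as below. Real fingerprinting studies use richer multivariate models (e.g. PLS-DA) and independent test sets; the data here are synthetic.

```python
import math

def loo_accuracy(samples, labels):
    """Leave-one-out cross-validation of a 1-nearest-neighbour
    classifier: each sample is classified by its nearest neighbour
    among all the *other* samples, so the prediction is never made
    on a point the 'model' has already seen."""
    correct = 0
    for i, x in enumerate(samples):
        best_j, best_d = None, float("inf")
        for j, y in enumerate(samples):
            if i == j:
                continue
            d = math.dist(x, y)
            if d < best_d:
                best_j, best_d = j, d
        if labels[best_j] == labels[i]:
            correct += 1
    return correct / len(samples)

# Two well-separated synthetic "fingerprint" clusters
samples = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
labels = ["origin_A", "origin_A", "origin_A", "origin_B", "origin_B", "origin_B"]
print(loo_accuracy(samples, labels))  # 1.0 for these separable clusters
```

Reporting such held-out accuracy, rather than the fit on the training data itself, is the minimal "good practice" the review argues for.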
Validation of Magnetic Resonance Thermometry by Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Rydquist, Grant; Owkes, Mark; Verhulst, Claire M.; Benson, Michael J.; Vanpoppel, Bret P.; Burton, Sascha; Eaton, John K.; Elkins, Christopher P.
2016-11-01
Magnetic Resonance Thermometry (MRT) is a new experimental technique that can create fully three-dimensional temperature fields in a noninvasive manner. However, validation is still required to determine the accuracy of measured results. One method of examination is to compare data gathered experimentally to data computed with computational fluid dynamics (CFD). In this study, large-eddy simulations have been performed with the NGA computational platform to generate data for comparison with previously run MRT experiments. The experimental setup consisted of a heated jet inclined at 30° injected into a larger channel. In the simulations, viscosity and density were scaled according to the local temperature to account for differences in buoyant and viscous forces. A mesh-independence study was performed with 5 million-, 15 million-, and 45 million-cell meshes. The program Star-CCM+ was used to simulate the complete experimental geometry, and this was compared to data generated from NGA. Overall, both programs show good agreement with the experimental data gathered with MRT. With these data, the validity of MRT as a diagnostic tool has been shown, and the tool can be used to further our understanding of a range of flows with non-trivial temperature distributions.
The Question of Education Science: "Experiment"ism Versus "Experimental"ism
ERIC Educational Resources Information Center
Howe, Kenneth R.
2005-01-01
The ascendant view in the current debate about education science -- experimentism -- is a reassertion of the randomized experiment as the methodological gold standard. Advocates of this view have ignored, not answered, long-standing criticisms of the randomized experiment: its frequent impracticality, its lack of external validity, its confinement…
Internal Validity: A Must in Research Designs
ERIC Educational Resources Information Center
Cahit, Kaya
2015-01-01
In experimental research, internal validity refers to what extent researchers can conclude that changes in dependent variable (i.e. outcome) are caused by manipulations in independent variable. The causal inference permits researchers to meaningfully interpret research results. This article discusses (a) internal validity threats in social and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freed, Melanie; Miller, Stuart; Tang, Katherine
Purpose: MANTIS is a Monte Carlo code developed for the detailed simulation of columnar CsI scintillator screens in x-ray imaging systems. Validation of this code is needed to provide a reliable and valuable tool for system optimization and accurate reconstructions for a variety of x-ray applications. Whereas previous validation efforts have focused on matching of summary statistics, in this work the authors examine the complete point response function (PRF) of the detector system in addition to relative light output values. Methods: Relative light output values and high-resolution PRFs have been experimentally measured with a custom setup. A corresponding set of simulated light output values and PRFs has also been produced, where detailed knowledge of the experimental setup and CsI:Tl screen structures is accounted for in the simulations. Four different screens were investigated with different thicknesses, column tilt angles, and substrate types. A quantitative comparison between the experimental and simulated PRFs was performed for four different incidence angles (0°, 15°, 30°, and 45°) and two different x-ray spectra (40 and 70 kVp). The figure of merit (FOM) used measures the normalized differences between the simulated and experimental data averaged over a region of interest. Results: Experimental relative light output values ranged from 1.456 to 1.650 and were in approximate agreement for aluminum substrates, but poor agreement for graphite substrates. The FOMs for all screen types, incidence angles, and energies ranged from 0.1929 to 0.4775. To put these FOMs in context, the same FOM was computed for 2D symmetric Gaussians fit to the same experimental data. These FOMs ranged from 0.2068 to 0.8029. Our analysis demonstrates that MANTIS reproduces experimental PRFs with higher accuracy than a symmetric 2D Gaussian fit to the experimental data in the majority of cases. Examination of the spatial distribution of differences between the PRFs shows that the main reason for errors between MANTIS and the experimental data is that MANTIS-generated PRFs are sharper than the experimental PRFs. Conclusions: The experimental validation of MANTIS performed in this study demonstrates that MANTIS is able to reliably predict experimental PRFs, especially for thinner screens, and can reproduce the highly asymmetric shape seen in the experimental data. As a result, optimizations and reconstructions carried out using MANTIS should yield results indicative of actual detector performance. Better characterization of screen properties is necessary to reconcile the simulated light output values with experimental data.
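The abstract describes the figure of merit as normalized differences between simulated and experimental data averaged over a region of interest. One plausible reading of that definition, with invented 1-D data, is:

```python
def prf_fom(simulated, experimental):
    """Sum of absolute differences between simulated and experimental
    point response functions, normalized by the total experimental
    signal in the region of interest. A plausible reading of the
    paper's figure of merit, not its exact published definition."""
    if len(simulated) != len(experimental):
        raise ValueError("PRFs must cover the same region of interest")
    total = sum(experimental)
    return sum(abs(s - e) for s, e in zip(simulated, experimental)) / total

# Toy 1-D "PRFs": the simulated response is slightly sharper,
# echoing the sharpness mismatch the authors report
experimental = [0.0, 1.0, 4.0, 1.0, 0.0]
simulated = [0.0, 0.5, 5.0, 0.5, 0.0]
print(prf_fom(simulated, experimental))
```

A perfect match gives 0, and larger values indicate worse agreement, consistent with the reported ranges (0.19-0.48 for MANTIS versus up to 0.80 for the Gaussian fits).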
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jernigan, Dann A.; Blanchat, Thomas K.
It is necessary to improve understanding and develop temporally- and spatially-resolved integral scale validation data of the heat flux incident to a complex object in addition to measuring the thermal response of said object located within the fire plume for the validation of the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparisonmore » between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.« less
The Effect of Interpolating Success Experiences into Classes for the Retarded. Final Report.
ERIC Educational Resources Information Center
Eaglstein, Solomon A.
The study was conducted to determine the effects of experimentally-arranged success and failure experiences on the subsequent performance of learning tasks by educable mentally retarded (EMR) students. Subjects were 68 EMR intermediate grade children, divided into four groups. Prior to the learning task on each of 5 subsequent days, subjects in…
ERIC Educational Resources Information Center
Larsson, Henrik; Sariaslan, Amir; Långström, Niklas; D'Onofrio, Brian; Lichtenstein, Paul
2014-01-01
Background: Studies have found negative associations between socioeconomic position and attention deficit/hyperactivity disorder (ADHD), but it remains unclear if this association is causal. The aim of this study was to determine the extent to which the association between family income in early childhood and subsequent ADHD depends on measured…
The Impact of Participating in a Peer Assessment Activity on Subsequent Academic Performance
ERIC Educational Resources Information Center
Jhangiani, Rajiv S.
2016-01-01
The present study investigates the impact of participation in a peer assessment activity on subsequent academic performance. Students in two sections of an introductory psychology course completed a practice quiz 1 week prior to each of three course exams. Students in the experimental group participated in a five-step double-blind peer assessment…
Achieving external validity in home advantage research: generalizing crowd noise effects
Myers, Tony D.
2014-01-01
Different factors have been postulated to explain the home advantage phenomenon in sport. One plausible explanation investigated has been the influence of a partisan home crowd on sports officials' decisions. Different types of studies have tested the crowd influence hypothesis, including purposefully designed experiments. However, while experimental studies investigating crowd influences have high levels of internal validity, they suffer from a lack of external validity; decision-making in a laboratory setting bears little resemblance to decision-making in live sports settings. This focused review initially considers threats to external validity in applied and theoretical experimental research, and discusses how such threats can be addressed using representative design, focusing on a recently published study that arguably provides the first experimental evidence of the impact of live crowd noise on officials in sport. The findings of this controlled experiment, conducted in a real tournament setting, offer a level of confirmation of the findings of laboratory studies in the area. Finally, directions for future research and the future conduct of crowd noise studies are discussed. PMID:24917839
Rendering the "Not-So-Simple" Pendulum Experimentally Accessible.
ERIC Educational Resources Information Center
Jackson, David P.
1996-01-01
Presents three methods for obtaining experimental data related to acceleration of a simple pendulum. Two of the methods involve angular position measurements and the subsequent calculation of the acceleration while the third method involves a direct measurement of the acceleration. Compares these results with theoretical calculations and…
Numerical Modelling of Femur Fracture and Experimental Validation Using Bone Simulant.
Marco, Miguel; Giner, Eugenio; Larraínzar-Garijo, Ricardo; Caeiro, José Ramón; Miguélez, María Henar
2017-10-01
Bone fracture pattern prediction is still a challenge and an active field of research. The main goal of this article is to present a combined methodology (experimental and numerical) for femur fracture onset analysis. Experimental work includes the characterization of the mechanical properties and fracture testing on a bone simulant. The numerical work focuses on the development of a model whose material properties are provided by the characterization tests. The fracture location and the early stages of the crack propagation are modelled using the extended finite element method and the model is validated by fracture tests developed in the experimental work. It is shown that the accuracy of the numerical results strongly depends on a proper bone behaviour characterization.
Software aspects of the Geant4 validation repository
NASA Astrophysics Data System (ADS)
Dotti, Andrea; Wenzel, Hans; Elvira, Daniel; Genser, Krzysztof; Yarba, Julia; Carminati, Federico; Folger, Gunter; Konstantinov, Dmitri; Pokorski, Witold; Ribon, Alberto
2017-10-01
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER is easily accessible via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
Software Aspects of the Geant4 Validation Repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dotti, Andrea; Wenzel, Hans; Elvira, Daniel
2016-01-01
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER is easily accessible via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
Distress modeling for DARWin-ME : final report.
DOT National Transportation Integrated Search
2013-12-01
Distress prediction models, or transfer functions, are key components of the Pavement M-E Design and relevant analysis. The accuracy of such models depends on a successful process of calibration and subsequent validation of model coefficients in the ...
A kinetic model of municipal sludge degradation during non-catalytic wet oxidation.
Prince-Pike, Arrian; Wilson, David I; Baroutian, Saeid; Andrews, John; Gapes, Daniel J
2015-12-15
Wet oxidation is a successful process for the treatment of municipal sludge. In addition, the resulting effluent from wet oxidation is a useful carbon source for subsequent biological nutrient removal processes in wastewater treatment. Owing to limitations with current kinetic models, this study produced a kinetic model which predicts the concentrations of key intermediate components during wet oxidation. The model was regressed from lab-scale experiments and then subsequently validated using data from a wet oxidation pilot plant. The model was shown to be accurate in predicting the concentrations of each component, and produced good results when applied to a plant 500 times larger in size. A statistical study was undertaken to investigate the validity of the regressed model parameters. Finally the usefulness of the model was demonstrated by suggesting optimum operating conditions such that volatile fatty acids were maximised. Copyright © 2015 Elsevier Ltd. All rights reserved.
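Regressing a kinetic model from batch data, as described above, can be sketched minimally by fitting a rate constant via least squares. For illustration the sketch assumes a single first-order component rather than the multi-component model of the study, and the concentration data are synthetic (generated from k = 0.05 per minute).

```python
import math

def fit_first_order(times, concs):
    """Fit k in C(t) = C0 * exp(-k t) by minimizing the sum of squared
    errors over a coarse grid search. An illustrative stand-in for the
    study's regression, which tracks several intermediate components."""
    c0 = concs[0]
    best_k, best_sse = None, float("inf")
    for i in range(1, 2001):
        k = i * 1e-3  # candidate rate constants: 0.001 ... 2.000 per minute
        sse = sum((c0 * math.exp(-k * t) - c) ** 2 for t, c in zip(times, concs))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

times = [0, 10, 20, 30, 40]                    # minutes
concs = [100.0, 60.7, 36.8, 22.3, 13.5]        # synthetic data, k ~ 0.05 / min
print(fit_first_order(times, concs))
```

Validating such a fit against data from a pilot plant, rather than against the lab-scale data used for regression, is the step that distinguishes the study's model from a mere curve fit.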
Cross sections for electron impact excitation of the C 1Π and D 1Σ+ electronic states in N2O
NASA Astrophysics Data System (ADS)
Kawahara, H.; Suzuki, D.; Kato, H.; Hoshino, M.; Tanaka, H.; Ingólfsson, O.; Campbell, L.; Brunger, M. J.
2009-09-01
Differential and integral cross sections for electron-impact excitation of the dipole-allowed C 1Π and D 1Σ+ electronic states of nitrous oxide have been measured. The differential cross sections were determined by analysis of normalized energy-loss spectra obtained using a crossed-beam apparatus at six electron energies in the range 15-200 eV. Integral cross sections were subsequently derived from these data. The present work was undertaken in order to check both the validity of the only other comprehensive experimental study into these excitation processes [Marinković et al., J. Phys. B 32, 1949 (1998)] and to extend the energy range of those data. Agreement with the earlier data, particularly at the lower common energies, was typically found to be fair. In addition, the BEf-scaling approach [Kim, J. Chem. Phys. 126, 064305 (2007)] is used to calculate integral cross sections for the C 1Π and D 1Σ+ states, from their respective thresholds to 5000 eV. In general, good agreement is found between the experimental integral cross sections and those calculated within the BEf-scaling paradigm, the only exception being at the lowest energies of this study. Finally, optical oscillator strengths, also determined as a part of the present investigations, were found to be in fair accordance with previous corresponding determinations.
A User-Friendly Model for Spray Drying to Aid Pharmaceutical Product Development
Grasmeijer, Niels; de Waard, Hans; Hinrichs, Wouter L. J.; Frijlink, Henderik W.
2013-01-01
The aim of this study was to develop a user-friendly model for spray drying that can aid in the development of a pharmaceutical product, by shifting from a trial-and-error towards a quality-by-design approach. To achieve this, a spray dryer model was developed in commercial and open source spreadsheet software. The output of the model was first fitted to the experimental output of a Büchi B-290 spray dryer and subsequently validated. The predicted outlet temperatures of the spray dryer model matched the experimental values very well over the entire range of spray dryer settings that were tested. Finally, the model was applied to produce glassy sugars by spray drying, an often used excipient in formulations of biopharmaceuticals. For the production of glassy sugars, the model was extended to predict the relative humidity at the outlet, which is not measured in the spray dryer by default. This extended model was then successfully used to predict whether specific settings were suitable for producing glassy trehalose and inulin by spray drying. In conclusion, a spray dryer model was developed that is able to predict the output parameters of the spray drying process. The model can aid the development of spray dried pharmaceutical products by shifting from a trial-and-error towards a quality-by-design approach. PMID:24040240
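The outlet-temperature prediction at the heart of such a spray dryer model can be sketched with a simplified adiabatic energy balance: heat given up by the drying gas evaporates the water in the feed. The all-water-feed assumption and every coefficient below are illustrative placeholders, not the published model.

```python
def outlet_temperature(t_in_c, feed_rate_g_min, gas_flow_kg_h,
                       latent_heat=2260.0, cp_air=1.005):
    """Simplified adiabatic energy balance for a lab spray dryer.

    t_in_c          inlet gas temperature, degC
    feed_rate_g_min liquid feed rate, g/min (treated as pure water)
    gas_flow_kg_h   drying gas mass flow, kg/h
    latent_heat     kJ/kg, evaporation of water (illustrative constant)
    cp_air          kJ/(kg K), heat capacity of the drying gas
    """
    evap_kw = feed_rate_g_min / 60.0 * latent_heat / 1000.0  # kJ/s to evaporate feed
    gas_kg_s = gas_flow_kg_h / 3600.0
    return t_in_c - evap_kw / (gas_kg_s * cp_air)

# Hypothetical lab-scale settings: 150 degC inlet, 5 g/min feed, 35 kg/h gas
print(round(outlet_temperature(150.0, 5.0, 35.0), 1))  # ~130.7 degC
```

Raising the feed rate lowers the predicted outlet temperature, which is exactly the kind of setting-to-output relationship a quality-by-design workflow exploits, for example to keep the product below its glass transition temperature.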
A new mathematical model of bacterial interactions in two-species oral biofilms
Martin, Bénédicte; Tamanai-Shacoori, Zohreh; Bronsard, Julie; Ginguené, Franck; Meuric, Vincent
2017-01-01
Periodontitis is a bacterial inflammatory disease in which the bacterial biofilms present on the tooth-supporting tissues switch from a healthy state towards a pathogenic state. Among the bacterial species involved in the disease, Porphyromonas gingivalis has been shown to induce dysbiosis and to induce virulence of otherwise healthy bacteria like Streptococcus gordonii. During biofilm development, primary colonizers such as S. gordonii first attach to the surface and allow the subsequent adhesion of periodontal pathogens such as P. gingivalis. Interactions between those two bacteria have been extensively studied during the adhesion step of the biofilm. The aim of the study was to understand the interactions of both species during the growing phase of the biofilm, for which little knowledge is available, using a mathematical model. This two-species biofilm model was based on substrate-dependent growth, implemented with damage parameters, and validated against data obtained from experimental biofilms. Three different hypotheses of interaction were proposed and assessed using this model: independence, competition between the two species, or induction of toxicity by one species towards the other. Adequacy between experimental and simulated biofilms was found with the last hypothesis. This new mathematical model of two-species bacterial biofilms, dependent on different substrates for growth, can be applied to any bacterial species, environmental conditions, or step of biofilm development. It will be of great interest for exploring bacterial interactions in biofilm conditions. PMID:28253369
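A substrate-dependent two-species growth model with a one-way toxicity term, echoing the paper's third hypothesis, can be sketched as a pair of Monod-type ODEs integrated with forward Euler. All rate constants and initial conditions below are invented for illustration; the published model additionally includes damage parameters fitted to the experimental biofilms.

```python
def simulate_two_species(hours=48.0, dt=0.01):
    """Euler integration of substrate-limited (Monod) growth of two
    species, where species 1 exerts a toxic effect on species 2.
    Returns (x1, x2, s): final biomasses and residual substrate, g/L.
    All parameter values are illustrative assumptions."""
    mu1, mu2 = 0.30, 0.25   # max specific growth rates, 1/h (assumed)
    ks, yld = 0.5, 0.5      # half-saturation g/L and biomass yield (assumed)
    tox = 0.02              # toxicity of species 1 on species 2, L/(g h) (assumed)
    x1, x2, s = 0.01, 0.01, 5.0
    for _ in range(int(hours / dt)):
        g1 = mu1 * s / (ks + s)          # Monod growth rates at current substrate
        g2 = mu2 * s / (ks + s)
        dx1 = g1 * x1
        dx2 = g2 * x2 - tox * x1 * x2    # growth minus toxicity-driven death
        ds = -(g1 * x1 + g2 * x2) / yld  # substrate consumed by growth only
        x1 += dx1 * dt
        x2 += dx2 * dt
        s = max(s + ds * dt, 0.0)
    return x1, x2, s

print(simulate_two_species())
```

With these assumed rates the faster-growing, toxin-producing species dominates once the substrate is exhausted, the qualitative behaviour the toxicity hypothesis is meant to capture.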
NASA Astrophysics Data System (ADS)
Montazeri, A.; West, C.; Monk, S. D.; Taylor, C. J.
2017-04-01
This paper concerns the problem of dynamic modelling and parameter estimation for a seven degree of freedom hydraulic manipulator. The laboratory example is a dual-manipulator mobile robotic platform used for research into nuclear decommissioning. In contrast to earlier control model-orientated research using the same machine, the paper develops a nonlinear, mechanistic simulation model that can subsequently be used to investigate physically meaningful disturbances. The second contribution is to optimise the parameters of the new model, i.e. to determine reliable estimates of the physical parameters of a complex robotic arm which are not known in advance. To address the nonlinear and non-convex nature of the problem, the research relies on the multi-objectivisation of an output error single-performance index. The developed algorithm utilises a multi-objective genetic algorithm (GA) in order to find a proper solution. The performance of the model and the GA is evaluated using both simulated (i.e. with a known set of 'true' parameters) and experimental data. Both simulation and experimental results show that multi-objectivisation has improved convergence of the estimated parameters compared to the single-objective output error problem formulation. This is achieved by integrating the validation phase inside the algorithm implicitly and exploiting the inherent structure of the multi-objective GA for this specific system identification problem.
NASA Astrophysics Data System (ADS)
Arain, Salma Aslam; Kazi, Tasneem G.; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal
2014-12-01
An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE) has been developed for the extraction and preconcentration of copper (Cu2+) in serum samples of different viral hepatitis patients prior to couple with flame atomic absorption spectrometry (FAAS). The d-CPE procedure was based on forming complexes of elemental ions with complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN), and subsequent entrapping the complexes in nonionic surfactant (Triton X-114). Then the surfactant rich phase containing the metal complexes was treated with aqueous nitric acid solution, and metal ions were back extracted into the aqueous phase, as second cloud point extraction stage, and finally determined by flame atomic absorption spectrometry using conventional nebulization. The multivariate strategy was applied to estimate the optimum values of experimental variables for the recovery of Cu2+ using d-CPE. In optimum experimental conditions, the limit of detection and the enrichment factor were 0.046 μg L-1 and 78, respectively. The validity and accuracy of proposed method were checked by analysis of Cu2+ in certified sample of serum (CRM) by d-CPE and conventional CPE procedure on same CRM. The proposed method was successfully applied to the determination of Cu2+ in serum samples of different viral hepatitis patients and healthy controls.
The use of miniature supersonic nozzles for microparticle acceleration: a numerical study.
Liu, Y
2007-10-01
By means of a high-speed gas flow generated by a miniature supersonic nozzle, we proposed a unique biolistic method to accelerate microparticle formulations of drugs to sufficient momentum to penetrate the outer layer of human skin or mucosal tissue for the treatment of a range of diseases. One of the main concerns in designing and evaluating this system is ensuring microparticle delivery into human skin with a controllable velocity range and spatial distribution. Initial experimental work suggested that the performance of transdermal delivery strongly depends on the aerodynamics of the supersonic nozzles employed. In this paper, computational fluid dynamics (CFD) is utilized to characterize existing prototype biolistic delivery systems, the device with a converging-diverging supersonic nozzle (CDSN) and the device based on the contoured-shock-tube (CST) design, with the aim of investigating the transient gas and particle dynamics in the supersonic nozzles. Wherever possible, predicted pressure and Mach number histories, 2-D flow structures, and particle velocity distributions are compared with the corresponding experimental measurements to validate the implemented numerical approach. The gas-particle interaction and performance of the two biolistic devices are interrogated and distinguished. Subsequently, the particle impact conditions are presented and discussed. It is demonstrated that the CST can deliver microparticles with a narrower and more controllable velocity range and spatial distribution.
NASA Astrophysics Data System (ADS)
Wang, Zengwei; Zhu, Ping; Liu, Zhao
2018-01-01
A generalized method for predicting the decoupled transfer functions based on in-situ transfer functions is proposed. The method allows predicting the decoupled transfer functions using coupled transfer functions, without disassembling the system. Two ways to derive relationships between the decoupled and coupled transfer functions are presented. Issues related to immeasurability of coupled transfer functions are also discussed. The proposed method is validated by numerical and experimental case studies.
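The relationship between coupled (in-situ) and decoupled transfer functions can be illustrated on a single-degree-of-freedom toy case, where dynamic stiffnesses of joined subsystems simply add. This is a minimal sketch of the general idea only; all parameter values are made up, and the paper's generalized method handles multi-point coupled systems rather than this scalar case.

```python
import numpy as np

w = np.linspace(0.1, 10.0, 200)          # frequency grid, rad/s

def dyn_stiffness(m, c, k, w):
    # SDOF dynamic stiffness Z(w) = k - m*w^2 + j*c*w
    return k - m * w**2 + 1j * c * w

Z_A = dyn_stiffness(1.0, 0.2, 4.0, w)    # subsystem of interest (treated as unknown)
Z_B = dyn_stiffness(0.5, 0.1, 2.0, w)    # known attached subsystem
H_coupled = 1.0 / (Z_A + Z_B)            # in-situ (coupled) receptance, measurable
# "decoupling": subtract the known subsystem's dynamic stiffness from the inverse FRF
H_A_pred = 1.0 / (1.0 / H_coupled - Z_B)
```

Here `H_A_pred` recovers the standalone receptance of subsystem A without ever "disassembling" the coupled model, which is the practical appeal the abstract describes.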
Flexible energy harvesting from hard piezoelectric beams
NASA Astrophysics Data System (ADS)
Delnavaz, Aidin; Voix, Jérémie
2016-11-01
This paper presents the design, multiphysics finite element modeling, and experimental validation of a new miniaturized PZT generator that integrates a bulk piezoelectric ceramic onto a flexible platform for energy harvesting from the human body pressing force. In spite of its flexibility, the mechanical structure of the proposed device is simple to fabricate and efficient for energy conversion. The finite element model involves both the mechanical and piezoelectric parts of the device coupled with the electrical circuit model. The energy harvester prototype was fabricated and tested under a low-frequency periodic pressing force for 10 seconds. The experimental results show that several nanojoules of electrical energy are stored in a capacitor, which is quite significant given the size of the device. The finite element model is validated by the good agreement observed between experimental and simulation results. The validated model could be used for optimizing the device for energy harvesting from earcanal deformations.
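The "several nanojoules in a storage capacitor" figure follows directly from the capacitor energy formula E = ½CV². The capacitance and voltage below are illustrative values chosen for the arithmetic, not numbers reported in the paper.

```python
def stored_energy_nj(capacitance_f, voltage_v):
    # E = (1/2) * C * V^2, converted from joules to nanojoules
    return 0.5 * capacitance_f * voltage_v**2 * 1e9

# illustrative numbers (not from the paper): a 100 nF storage
# capacitor charged to 0.5 V holds 12.5 nJ
e_nj = stored_energy_nj(100e-9, 0.5)
```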
Using entropy measures to characterize human locomotion.
Leverick, Graham; Szturm, Tony; Wu, Christine Q
2014-12-01
Entropy measures have been widely used to quantify the complexity of theoretical and experimental dynamical systems. In this paper, the value of using entropy measures to characterize human locomotion is demonstrated based on their construct validity, predictive validity in a simple model of human walking and convergent validity in an experimental study. Results show that four of the five considered entropy measures increase meaningfully with the increased probability of falling in a simple passive bipedal walker model. The same four entropy measures also experienced statistically significant increases in response to increasing age and gait impairment caused by cognitive interference in an experimental study. Of the considered entropy measures, the proposed quantized dynamical entropy (QDE) and quantization-based approximation of sample entropy (QASE) offered the best combination of sensitivity to changes in gait dynamics and computational efficiency. Based on these results, entropy appears to be a viable candidate for assessing the stability of human locomotion.
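Of the entropy measures considered, sample entropy is the most widely known; a simplified pairwise-template sketch is given below. This is an assumption-laden illustration (tolerance, template length, and test signals are all made up), not the paper's QDE or QASE measures, but it shows the defining property exploited in the study: a regular signal scores lower than an irregular one.

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    # counts pairs of matching templates of length m and m+1 (tolerance r),
    # then returns -ln(A/B); a simplified sketch of standard SampEn
    def matches(mm):
        t = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r)
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

periodic = [0.0, 1.0] * 30                   # highly regular gait-like signal
rng = random.Random(0)
noisy = [rng.random() for _ in range(60)]    # irregular signal
```

For the periodic signal nearly every length-2 match extends to a length-3 match, so A/B is close to 1 and the entropy is near zero; for the noisy signal many matches fail to extend and the entropy rises.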
Experimental validation of calculated atomic charges in ionic liquids
NASA Astrophysics Data System (ADS)
Fogarty, Richard M.; Matthews, Richard P.; Ashworth, Claire R.; Brandt-Talbot, Agnieszka; Palgrave, Robert G.; Bourne, Richard A.; Vander Hoogerstraete, Tom; Hunt, Patricia A.; Lovelock, Kevin R. J.
2018-05-01
A combination of X-ray photoelectron spectroscopy and near edge X-ray absorption fine structure spectroscopy has been used to provide an experimental measure of nitrogen atomic charges in nine ionic liquids (ILs). These experimental results are used to validate charges calculated with three computational methods: charges from electrostatic potentials using a grid-based method (ChelpG), natural bond orbital population analysis, and the atoms in molecules approach. By combining these results with those from a previous study on sulfur, we find that ChelpG charges provide the best description of the charge distribution in ILs. However, we find that ChelpG charges can lead to significant conformational dependence and therefore advise that small differences in ChelpG charges (<0.3 e) should be interpreted with care. We use these validated charges to provide physical insight into nitrogen atomic charges for the ILs probed.
Investigation of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams
NASA Technical Reports Server (NTRS)
Davis, Brian A.
2005-01-01
Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical model. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. Excellent agreement is achieved between the predicted and measured results, thereby quantitatively validating the numerical tool.
NASA Astrophysics Data System (ADS)
Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.
2018-01-01
This work is among the first where the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experiment results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. The transient effects during ramp up of current in the experiment may explain a lower average voltage than model predictions for the power curve.
Lingner, Thomas; Kataya, Amr R. A.; Reumann, Sigrun
2012-01-01
We recently developed the first algorithms specifically for plants to predict proteins carrying peroxisome targeting signals type 1 (PTS1) from genome sequences. As validated experimentally, the prediction methods are able to correctly predict unknown peroxisomal Arabidopsis proteins and to infer novel PTS1 tripeptides. The high prediction performance is primarily determined by the large number and sequence diversity of the underlying positive example sequences, which were mainly derived from EST databases. However, a few constructs remained cytosolic in experimental validation studies, indicating sequencing errors in some ESTs. To identify erroneous sequences, we validated subcellular targeting of additional positive example sequences in the present study. Moreover, we analyzed the distribution of prediction scores separately for each orthologous group of PTS1 proteins, which generally resembled normal distributions with group-specific mean values. The cytosolic sequences commonly represented outliers of low prediction scores and were located at the very tail of a fitted normal distribution. Three statistical methods for identifying outliers were compared in terms of sensitivity and specificity. Their combined application allows elimination of erroneous ESTs from positive example data sets. This new post-validation method will further improve the prediction accuracy of both PTS1 and PTS2 protein prediction models for plants, fungi, and mammals. PMID:22415050
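The abstract's outlier logic (cytosolic constructs sit in the low tail of a fitted normal distribution of prediction scores) can be sketched with a minimal z-score rule. The threshold, function name, and score values below are illustrative assumptions; the study compared three statistical outlier methods, which are not specified here.

```python
import math

def low_tail_outliers(scores, z_threshold=2.5):
    # flags scores in the low tail of a fitted normal distribution
    # (a minimal z-score rule; threshold and data below are illustrative)
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / (n - 1))
    return [s for s in scores if (s - mean) / sd < -z_threshold]

# hypothetical prediction scores for one orthologous group; one construct
# scores far below the rest and is flagged as a likely erroneous EST
flagged = low_tail_outliers([0.9, 0.85, 0.88, 0.92, 0.87, 0.91, 0.86, 0.9, 0.2])
```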
NASA Astrophysics Data System (ADS)
Brown, Alexander; Eviston, Connor
2017-02-01
Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Realistic simulation capability for eddy current inspections is required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations, including varying probe dimensions and orientations along with complex probe geometries. This will also enable creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models were able to correctly model the probe-conductor interactions and accurately calculate the change in impedance in several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library to give experimenters easy access to powerful parameter-based eddy current models for alternate project applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Justin Leigh; Smith, Curtis Lee; Burns, Douglas Edward
This report describes the development plan for a new multi-partner External Hazards Experimental Group (EHEG) coordinated by Idaho National Laboratory (INL) within the Risk-Informed Safety Margin Characterization (RISMC) technical pathway of the Light Water Reactor Sustainability Program. Currently, there is limited data available for development and validation of the tools and methods being developed in the RISMC Toolkit. The EHEG is being developed to obtain high-quality, small- and large-scale experimental data for validation of RISMC tools and methods in a timely and cost-effective way. The group of universities and national laboratories that will eventually form the EHEG (which is ultimately expected to include both the initial participants and other universities and national laboratories that have been identified) have the expertise and experimental capabilities needed to both obtain and compile existing data archives and perform additional seismic and flooding experiments. The data developed by EHEG will be stored in databases for use within RISMC. These databases will be used to validate the advanced external hazard tools and methods.
Pretest information for a test to validate plume simulation procedures (FA-17)
NASA Technical Reports Server (NTRS)
Hair, L. M.
1978-01-01
The results of an effort to plan a final verification wind tunnel test to validate the recommended correlation parameters and application techniques were presented. The test planning effort was complete except for test site finalization and the associated coordination. Two suitable test sites were identified. Desired test conditions were shown. Subsequent sections of this report present the selected model and test site, instrumentation of this model, planned test operations, and some concluding remarks.
Chen, Hongda; Knebel, Phillip; Brenner, Hermann
2016-07-01
The search for biomarkers for the early detection of cancer is a very active area of research, but most studies are done in clinical rather than screening settings. We aimed to empirically evaluate the role of study setting in early detection marker identification and validation. A panel of 92 candidate cancer protein markers was measured in 35 clinically identified colorectal cancer patients and 35 colorectal cancer patients identified at screening colonoscopy. For each case group, we selected 38 controls without colorectal neoplasms at screening colonoscopy. Single-, two- and three-marker combinations discriminating cases and controls were identified in each setting and subsequently validated in the alternative setting. In all scenarios, a higher number of predictive biomarkers was initially detected in the clinical setting, but a substantially lower proportion of the identified biomarkers could subsequently be confirmed in the screening setting. Confirmation rates were 50.0%, 84.5%, and 74.2% for one-, two-, and three-marker algorithms identified in the screening setting, and 42.9%, 18.6%, and 25.7% for algorithms identified in the clinical setting. Validation of early detection markers of cancer in a true screening setting is important to limit the number of false-positive findings. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Lee, Jang Ho
2012-01-01
Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…
ERIC Educational Resources Information Center
Wilcox, Bethany R.; Lewandowski, H. J.
2016-01-01
Student learning in instructional physics labs represents a growing area of research that includes investigations of students' beliefs and expectations about the nature of experimental physics. To directly probe students' epistemologies about experimental physics and support broader lab transformation efforts at the University of Colorado Boulder…
A More Rigorous Quasi-Experimental Alternative to the One-Group Pretest-Posttest Design.
ERIC Educational Resources Information Center
Johnson, Craig W.
1986-01-01
A simple quasi-experimental design is described which may have utility in a variety of applied and laboratory research settings where ordinarily the one-group pretest-posttest pre-experimental design might otherwise be the procedure of choice. The design approaches the internal validity of true experimental designs while optimizing external…
Development of a Conservative Model Validation Approach for Reliable Analysis
2015-01-01
CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA [DRAFT] DETC2015-46982 DEVELOPMENT OF A CONSERVATIVE MODEL VALIDATION APPROACH FOR RELIABLE ... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account
Experimental validation of a new heterogeneous mechanical test design
NASA Astrophysics Data System (ADS)
Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.
2018-05-01
Standard material parameter identification strategies generally use an extensive number of classical tests to collect the required experimental data. However, a great effort has been made recently by the scientific and industrial communities to base this experimental database on heterogeneous tests. These tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, like digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure based on an indicator capable of evaluating the heterogeneity and richness of the strain information. However, no experimental validation had yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. To this end, the DIC technique and a Finite Element Model Updating (FEMU) inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system to provide the strain states on the specimen surface. The identification strategy is carried out with the data obtained from the experimental tests, and the results are compared to a reference numerical solution.
Hoehl, Stefanie; Zettersten, Martin; Schleihauf, Hanna; Grätz, Sabine; Pauen, Sabina
2014-06-01
The tendency to imitate causally irrelevant actions is termed overimitation. Here we investigated (a) whether communication of a model performing irrelevant actions is necessary to elicit overimitation in preschoolers and (b) whether communication of another model performing an efficient action modulates the subsequent reduction of overimitation. In the study, 5-year-olds imitated irrelevant actions both when they were modeled by a communicative and pedagogical experimenter and when they were modeled by a non-communicative and non-pedagogical experimenter. However, children stopped using the previously learned irrelevant actions only when they were subsequently shown the more efficient way to achieve the goal by a pedagogical experimenter. Thus, communication leads preschoolers to adapt their imitative behavior but does not seem to affect overimitation in the first place. Results are discussed with regard to the importance of communication for the transmission of cultural knowledge during development. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Grasza, K.; Palosz, W.; Trivedi, S. B.
1998-01-01
The development of nuclei and subsequent seeding in 'contactless' physical vapor transport are investigated experimentally. Consecutive stages of Low Supersaturation Nucleation in 'contactless' geometry for the growth of CdTe crystals from the vapor are shown. The effects of the temperature field, the geometry of the system, and experimental procedures on the process are presented and discussed. The experimental results are found to be consistent with our earlier numerical modeling results.
Yang, Ya-Hsu; Teng, Hao-Wei; Lai, Yen-Ting; Li, Szu-Yuan; Lin, Chih-Ching; Yang, Albert C; Chan, Hsiang-Lin; Hsieh, Yi-Hsuan; Lin, Chiao-Fan; Hsu, Fu-Ying; Liu, Chih-Kuang; Liu, Wen-Sheng
2015-01-01
Patients with late-onset depression (LOD) have been reported to run a higher risk of subsequent dementia. The present study was conducted to assess whether statins can reduce the risk of dementia in these patients. We used data from the National Health Insurance of Taiwan during 1996-2009. Standardized Incidence Ratios (SIRs) were calculated for LOD and subsequent dementia. The criteria for LOD diagnosis included age ≥65 years, diagnosis of depression after 65 years of age, at least three service claims, and treatment with antidepressants. A time-dependent Cox proportional hazards model was applied for multivariate analyses. Propensity scores with the one-to-one nearest-neighbor matching model were used to select matching patients for validation studies. Kaplan-Meier curve estimates were used to measure survival after diagnosis of LOD in the group of patients with dementia. In total, 45,973 patients aged ≥65 years were enrolled. The prevalence of LOD was 12.9% (5,952/45,973). Patients with LOD were shown to have a higher incidence of subsequent dementia compared with those without LOD (Odds Ratio: 2.785; 95% CI 2.619-2.958). Among patients with LOD, lipid-lowering agent (LLA) users (for at least 3 months) had a lower incidence of subsequent dementia than non-users (Hazard Ratio = 0.781, 95% CI 0.685-0.891). Nevertheless, only statin users showed a reduced risk of dementia (Hazard Ratio = 0.674, 95% CI 0.547-0.832) while users of other LLAs did not, which was further validated by Kaplan-Meier estimates after we used propensity scores with the one-to-one nearest-neighbor matching model to control for confounding factors. Statins may reduce the risk of subsequent dementia in patients with LOD.
Show and tell: disclosure and data sharing in experimental pathology.
Schofield, Paul N; Ward, Jerrold M; Sundberg, John P
2016-06-01
Reproducibility of data from experimental investigations using animal models is increasingly under scrutiny because of the potentially negative impact of poor reproducibility on the translation of basic research. Histopathology is a key tool in biomedical research, in particular for the phenotyping of animal models to provide insights into the pathobiology of diseases. Failure to disclose and share crucial histopathological experimental details compromises the validity of the review process and reliability of the conclusions. We discuss factors that affect the interpretation and validation of histopathology data in publications and the importance of making these data accessible to promote replicability in research. © 2016. Published by The Company of Biologists Ltd.
DoSSiER: Database of scientific simulation and experimental results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Hans; Yarba, Julia; Genser, Krzystof
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in json or xml exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
NASA Astrophysics Data System (ADS)
Kartalov, Emil P.; Scherer, Axel; Quake, Stephen R.; Taylor, Clive R.; Anderson, W. French
2007-03-01
A systematic experimental study and theoretical modeling of the device physics of polydimethylsiloxane "pushdown" microfluidic valves are presented. The phase space is charted by 1587 dimension combinations and encompasses 45-295μm lateral dimensions, 16-39μm membrane thickness, and 1-28psi closing pressure. Three linear models are developed and tested against the empirical data, and then combined into a fourth-power-polynomial superposition. The experimentally validated final model offers a useful quantitative prediction for a valve's properties as a function of its dimensions. Typical valves (80-150μm width) are shown to behave like thin springs.
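The paper's final model is a polynomial superposition fitted to measured valve behavior; the general least-squares fitting step can be sketched as follows. The trend, coefficients, and quadratic degree below are synthetic assumptions purely to demonstrate the fitting mechanics, not the paper's fourth-power model or data.

```python
import numpy as np

# hypothetical trend (not the paper's data): closing pressure grows
# polynomially with membrane thickness t; least squares recovers the law
t = np.linspace(16.0, 39.0, 24)          # membrane thickness, um
p = 0.002 * t**2 + 0.1 * t + 1.0         # synthetic closing pressure, psi
coeffs = np.polyfit(t, p, deg=2)         # least-squares polynomial fit
p_fit = np.polyval(coeffs, t)
```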
Experimental validation of flexible robot arm modeling and control
NASA Technical Reports Server (NTRS)
Ulsoy, A. Galip
1989-01-01
Flexibility is important for high speed, high precision operation of lightweight manipulators. Accurate dynamic modeling of flexible robot arms is needed. Previous work has mostly been based on linear elasticity with prescribed rigid body motions (i.e., no effect of flexible motion on rigid body motion). Little or no experimental validation of dynamic models for flexible arms is available. Experimental results are also limited for flexible arm control. Researchers include the effects of prismatic as well as revolute joints. They investigate the effect of full coupling between the rigid and flexible motions, and of axial shortening, and consider the control of flexible arms using only additional sensors.
NASA Astrophysics Data System (ADS)
Maghareh, Amin; Silva, Christian E.; Dyke, Shirley J.
2018-05-01
Hydraulic actuators play a key role in experimental structural dynamics. In a previous study, a physics-based model for a servo-hydraulic actuator coupled with a nonlinear physical system was developed. Later, this dynamical model was transformed into controllable canonical form for position tracking control purposes. For this study, a nonlinear device is designed and fabricated to exhibit various nonlinear force-displacement profiles depending on the initial condition and the type of materials used as replaceable coupons. Using this nonlinear system, the controllable canonical dynamical model is experimentally validated for a servo-hydraulic actuator coupled with a nonlinear physical system.
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas
2016-06-01
Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps, while a particular emphasis was placed on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km2), which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors differed in appearance, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies of the modelling results while additionally providing evidence for slightly overfitted machine learning-based models.
However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships indicating that the predictive performance of a model might be misleading in the case a predictor systematically relates to a spatially consistent bias of the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performances cannot be equated with a high plausibility and applicability of subsequent landslide susceptibility maps. We suggest that greater emphasis should be placed on identifying confounding factors and biases in landslide inventories. A joint discussion between modelers and decision makers of the spatial pattern of the final susceptibility maps in the field might increase their acceptance and applicability.
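The AUROC used throughout this study has a simple probabilistic reading: the chance that a randomly drawn positive cell receives a higher susceptibility score than a randomly drawn negative cell. A minimal pairwise sketch (toy scores, not the study's data):

```python
def auroc(pos_scores, neg_scores):
    # AUROC as the probability that a randomly chosen positive (landslide)
    # cell outranks a randomly chosen negative one; ties count half
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(pos_scores) * len(neg_scores))

auc = auroc([0.9, 0.8, 0.4], [0.3, 0.2, 0.5])   # toy susceptibility scores
```

A biased inventory shifts which cells count as "positives", which is exactly why a high AUROC can coexist with a geomorphically implausible map, as the abstract argues.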
Genome-based prediction of test cross performance in two subsequent breeding cycles.
Hofheinz, Nina; Borchardt, Dietrich; Weissleder, Knuth; Frisch, Matthias
2012-12-01
Genome-based prediction of genetic values is expected to overcome shortcomings that limit the application of QTL mapping and marker-assisted selection in plant breeding. Our goal was to study the genome-based prediction of test cross performance with genetic effects that were estimated using genotypes from the preceding breeding cycle. In particular, our objectives were to employ a ridge regression approach that approximates best linear unbiased prediction of genetic effects, compare cross validation with validation using genetic material of the subsequent breeding cycle, and investigate the prospects of genome-based prediction in sugar beet breeding. We focused on the traits sugar content and standard molasses loss (ML) and used a set of 310 sugar beet lines to estimate genetic effects at 384 SNP markers. In cross validation, correlations >0.8 between observed and predicted test cross performance were observed for both traits. However, in validation with 56 lines from the next breeding cycle, a correlation of 0.8 could only be observed for sugar content; for standard ML the correlation reduced to 0.4. We found that ridge regression based on preliminary estimates of the heritability provided a very good approximation of best linear unbiased prediction and was not accompanied by a loss in prediction accuracy. We conclude that prediction accuracy assessed with cross validation within one cycle of a breeding program cannot be used as an indicator for the accuracy of predicting lines of the next cycle. Prediction of lines of the next cycle seems promising for traits with high heritabilities.
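The ridge regression approach the abstract describes shrinks per-marker effects toward zero and then predicts unphenotyped lines from genotypes alone. The sketch below uses entirely simulated genotypes and effects (sizes and the penalty value are illustrative, not the study's 310 lines, 384 SNPs, or heritability-derived penalty):

```python
import numpy as np

def ridge_marker_effects(X, y, lam):
    # ridge solution beta = (X'X + lam*I)^{-1} X'y; with lam tied to the
    # heritability this approximates BLUP of marker effects (RR-BLUP)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
X_train = rng.integers(0, 3, size=(80, 40)).astype(float)  # SNP genotypes 0/1/2
beta_true = rng.normal(0.0, 0.3, size=40)                  # simulated marker effects
y_train = X_train @ beta_true + rng.normal(0.0, 0.5, size=80)
beta_hat = ridge_marker_effects(X_train, y_train, lam=10.0)

# "next cycle" lines: predict from markers alone, as in the validation step
X_next = rng.integers(0, 3, size=(30, 40)).astype(float)
r = np.corrcoef(X_next @ beta_true, X_next @ beta_hat)[0, 1]
```

In this idealized simulation the next-cycle correlation stays high because the new lines are drawn from the same distribution; the study's point is precisely that real subsequent cycles violate this assumption.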
1994-08-01
volume II. The report is accompanied by a set of diskettes containing the appropriate data for all the test cases. These diskettes are available ... GERMANY. PURPOSE OF THE TEST: The tests are part of a larger effort to establish a database of experimental measurements for missile configurations
Chen, Lei; Zhong, Hai-ying; Kuang, Jian-fei; Li, Jian-guo; Lu, Wang-jin; Chen, Jian-ye
2011-08-01
Reverse transcription quantitative real-time PCR (RT-qPCR) is a sensitive technique for quantifying gene expression, but its success depends on the stability of the reference gene(s) used for data normalization. Only a few studies on validation of reference genes have been conducted in fruit trees, and none yet in banana. In the present work, 20 candidate reference genes were selected, and their expression stability in 144 banana samples was evaluated and analyzed using two algorithms, geNorm and NormFinder. The samples consisted of eight sample sets collected under different experimental conditions, including various tissues, developmental stages, postharvest ripening, stresses (chilling, high temperature, and pathogen), and hormone treatments. Our results showed that different suitable reference gene(s) or combinations of reference genes for normalization should be selected depending on the experimental conditions. The RPS2 and UBQ2 genes were validated as the most suitable reference genes across all tested samples. More importantly, our data further showed that the widely used reference genes, ACT and GAPDH, were not the most suitable reference genes in many banana sample sets. In addition, the expression of MaEBF1, a gene of interest that plays an important role in regulating fruit ripening, under different experimental conditions was used to further confirm the validated reference genes. Taken together, our results provide guidelines for reference gene(s) selection under different experimental conditions and a foundation for more accurate and widespread use of RT-qPCR in banana.
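The geNorm algorithm mentioned above ranks candidate reference genes by a stability value M: the average standard deviation of the log-transformed expression ratios of a gene against every other candidate, with lower M meaning more stable expression. A minimal sketch of that measure (not the published geNorm software, and with made-up expression values) looks like this:

```python
import math
from statistics import stdev

def genorm_m(expr):
    """geNorm-style stability value M for each candidate reference gene.

    expr: dict mapping gene name -> list of expression values measured in
    the same samples, in the same order. For gene g,
        M(g) = mean over other genes h of stdev(log2(expr_g / expr_h)).
    A gene whose expression covaries with the others gives small ratio
    variation and hence a low (stable) M.
    """
    genes = list(expr)
    m_values = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            ratios = [math.log2(a / b) for a, b in zip(expr[g], expr[h])]
            sds.append(stdev(ratios))
        m_values[g] = sum(sds) / len(sds)
    return m_values
```

In the full algorithm the least stable gene (highest M) is removed and M is recomputed iteratively until the most stable pair remains; the sketch above shows only the core stability measure.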
Morsink, Maarten C; Dukers, Danny F
2009-03-01
Animal models have been widely used for studying the physiology and pharmacology of psychiatric and neurological diseases. The concepts of face, construct, and predictive validity are used as indicators to estimate the extent to which the animal model mimics the disease. Here, we used these three concepts to design a theoretical assignment to integrate the teaching of neurophysiology, neuropharmacology, and experimental design. For this purpose, seven case studies were developed in which animal models for several psychiatric and neurological diseases were described and in which neuroactive drugs used to treat or study these diseases were introduced. Groups of undergraduate students were assigned to one of these case studies and asked to give a classroom presentation in which 1) the disease and underlying pathophysiology are described, 2) face and construct validity of the animal model are discussed, and 3) a pharmacological experiment with the associated neuroactive drug to assess predictive validity is presented. After evaluation of the presentations, we found that the students had gained considerable insight into disease phenomenology, its underlying neurophysiology, and the mechanism of action of the neuroactive drug. Moreover, the assignment was very useful in the teaching of experimental design, allowing an in-depth discussion of experimental control groups and the prediction of outcomes in these groups if the animal model were to display predictive validity. Finally, the highly positive responses in the student evaluation forms indicated that the assignment was of great interest to the students. Hence, the case studies developed here constitute a very useful tool for teaching neurophysiology, neuropharmacology, and experimental design.
ERIC Educational Resources Information Center
St. Louis, Kenneth O.; Reichel, Isabella K.; Yaruss, J. Scott; Lubker, Bobbie Boyd
2009-01-01
Purpose: Construct validity and concurrent validity were investigated in a prototype survey instrument, the "Public Opinion Survey of Human Attributes-Experimental Edition" (POSHA-E). The POSHA-E was designed to measure public attitudes toward stuttering within the context of eight other attributes, or "anchors," assumed to range from negative…
Demonstrating Experimenter "Ineptitude" as a Means of Teaching Internal and External Validity
ERIC Educational Resources Information Center
Treadwell, Kimberli R.H.
2008-01-01
Internal and external validity are key concepts in understanding the scientific method and fostering critical thinking. This article describes a class demonstration of a "botched" experiment to teach validity to undergraduates. Psychology students (N = 75) completed assessments at the beginning of the semester, prior to and immediately following…
NASA Astrophysics Data System (ADS)
Barchiesi, Emilio; Ganzosch, Gregor; Liebold, Christian; Placidi, Luca; Grygoruk, Roman; Müller, Wolfgang H.
2018-01-01
Due to the latest advancements in 3D printing technology and rapid prototyping techniques, the production of materials with complex geometries has become more affordable than ever. Pantographic structures, because of their attractive features, both in dynamics and statics and both in elastic and inelastic deformation regimes, deserve to be thoroughly investigated with experimental and theoretical tools. Herein, experimental results relative to displacement-controlled large deformation shear loading tests of pantographic structures are reported. In particular, five differently sized samples are analyzed up to first rupture. Results show that the deformation behavior is strongly nonlinear, and the structures are capable of undergoing large elastic deformations without reaching complete failure. Finally, a cutting-edge model is validated by means of these experimental results.
Herrero, A; Sanllorente, S; Reguera, C; Ortiz, M C; Sarabia, L A
2016-11-16
A new strategy to approach multiresponse optimization in conjunction with a D-optimal design for simultaneously optimizing a large number of experimental factors is proposed. The procedure is applied to the determination of biogenic amines (histamine, putrescine, cadaverine, tyramine, tryptamine, 2-phenylethylamine, spermine and spermidine) in swordfish by HPLC-FLD after extraction with an acid and subsequent derivatization with dansyl chloride. Firstly, the extraction from a solid matrix and the derivatization of the extract are optimized. Ten experimental factors involved in both stages are studied, seven of them at two levels and the remaining three at three levels; the use of a D-optimal design makes it possible to optimize the ten experimental variables while reducing the experimental effort needed by a factor of 67 and still guaranteeing the quality of the estimates. A model with 19 coefficients, which includes those corresponding to the main effects and two possible interactions, is fitted to the peak area of each amine. Then, the validated models are used to predict the response (peak area) of the 3456 experiments of the complete factorial design. The variability among peak areas ranges from 13.5 for 2-phenylethylamine to 122.5 for spermine, which shows, to a certain extent, the high and differing effect of the pretreatment on the responses. The percentiles are then calculated from the peak areas of each amine. As the experimental conditions are in conflict, the optimal solution for the multiresponse optimization is chosen from among those which have all the responses greater than a certain percentile for all the amines. The developed procedure reaches decision limits down to 2.5 μg L⁻¹ for cadaverine or 497 μg L⁻¹ for histamine in solvent and 0.07 mg kg⁻¹ and 14.81 mg kg⁻¹ in fish (probability of false positive equal to 0.05), respectively.
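The compromise step described above, choosing conditions whose responses all exceed a given percentile for every amine, is a maximin selection over ranked predictions. A small sketch of that selection logic, with a hypothetical data layout (condition -> amine -> predicted peak area) rather than the paper's actual predictions, could be:

```python
def best_conditions(predictions):
    """Pick the condition whose worst-case (lowest-ranked) response is best.

    predictions: dict mapping condition -> dict mapping amine -> predicted
    peak area. For each amine, conditions are ranked by response; a
    condition's score is its lowest percentile rank across the amines,
    and the condition maximizing that score is returned (maximin over
    responses, mirroring the percentile-based compromise in the abstract).
    """
    amines = next(iter(predictions.values())).keys()
    ranks = {}
    for amine in amines:
        # Rank conditions from lowest to highest predicted peak area
        ordered = sorted(predictions, key=lambda c: predictions[c][amine])
        for i, cond in enumerate(ordered):
            ranks.setdefault(cond, []).append(i / (len(ordered) - 1))
    # Maximize the minimum percentile rank over all amines
    return max(ranks, key=lambda c: min(ranks[c]))
```

A condition that is excellent for one amine but poor for another scores low, so the balanced compromise wins, which is exactly why the conflicting single-response optima in the study are resolved through percentiles.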
NASA Astrophysics Data System (ADS)
Hufner, D. R.; Augustine, M. R.
2018-05-01
A novel experimental method was developed to simulate underwater explosion pressure pulses within a laboratory environment. An impact-based experimental apparatus was constructed, capable of generating pressure pulses with a basic character similar to underwater explosions while also allowing the pulse to be tuned to different intensities. Having the capability to vary the shock impulse was considered essential to producing various levels of shock-induced damage without the need to modify the fixture. The experimental apparatus and test method are considered ideal for investigating the shock response of composite material systems and/or experimental validation of new material models. One such test program is presented herein, in which a series of E-glass/Vinylester laminates were subjected to a range of shock pulses that induced varying degrees of damage. Analysis-test correlations were performed using a rate-dependent constitutive model capable of representing anisotropic damage and ultimate yarn failure. Agreement between analytical predictions and experimental results was considered acceptable.
Validation of the Monte Carlo simulator GATE for indium-111 imaging.
Assié, K; Gardin, I; Véra, P; Buvat, I
2005-07-07
Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions.
Dasgupta, Annwesa P.; Anderson, Trevor R.
2014-01-01
It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students’ responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students’ experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658
ERIC Educational Resources Information Center
Harsch, Claudia; Ushioda, Ema; Ladroue, Christophe
2017-01-01
The project examined the predictive validity of the "TOEFL iBT"® test with a focus on the relationship between TOEFL iBT scores and students' subsequent academic success in postgraduate studies in one leading university in the United Kingdom, paying specific attention to the role of linguistic preparedness as perceived by students and…
NASA Technical Reports Server (NTRS)
Seybert, A. F.; Wu, T. W.; Wu, X. F.
1994-01-01
This research report is presented in three parts. In the first part, acoustical analyses were performed on modes of vibration of the housing of a transmission of a gear test rig developed by NASA. The modes of vibration of the transmission housing were measured using experimental modal analysis. The boundary element method (BEM) was used to calculate the sound pressure and sound intensity on the surface of the housing and the radiation efficiency of each mode. The radiation efficiency of each of the transmission housing modes was then compared to theoretical results for a finite baffled plate. In the second part, analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise level radiated from the box. The FEM was used to predict the vibration, while the BEM was used to predict the sound intensity and total radiated sound power using surface vibration as the input data. Vibration predicted by the FEM model was validated by experimental modal analysis; noise predicted by the BEM was validated by measurements of sound intensity. Three types of results are presented for the total radiated sound power: sound power predicted by the BEM model using vibration data measured on the surface of the box; sound power predicted by the FEM/BEM model; and sound power measured by an acoustic intensity scan. In the third part, the structure used in part two was modified. A rib was attached to the top plate of the structure. The FEM and BEM were then used to predict structural vibration and radiated noise respectively. The predicted vibration and radiated noise were then validated through experimentation.
Intrusive images and intrusive thoughts as different phenomena: two experimental studies.
Hagenaars, Muriel A; Brewin, Chris R; van Minnen, Agnes; Holmes, Emily A; Hoogduin, Kees A L
2010-01-01
According to the dual representation theory of PTSD, intrusive trauma images and intrusive verbal thoughts are produced by separate memory systems. In a previous article it was shown that after watching an aversive film, participants in non-movement conditions reported more intrusive images than participants in a free-to-move control condition (Hagenaars, Van Minnen, Holmes, Brewin, & Hoogduin, 2008). The present study investigates whether the experimental conditions of the Hagenaars et al. study had a different effect on intrusive thoughts than on intrusive images. Experiment 2 further investigated the image-thoughts distinction by manipulating stimulus valence (trauma film versus neutral film) and assessing the subsequent development of intrusive images and thoughts. In addition, both experiments studied the impact of peri-traumatic emotions on subsequent intrusive images and thoughts frequency across conditions. Results showed that experimental manipulations (non-movement and trauma film) caused higher levels of intrusive images relative to control conditions (free movement and neutral film) but they did not affect intrusive thoughts. Peri-traumatic anxiety and horror were associated with subsequent higher levels of intrusive images, but not intrusive thoughts. Correlations were inconclusive for anger and sadness. The results suggest intrusive images and thoughts can be manipulated independently and as such can be considered different phenomena.
NASA Astrophysics Data System (ADS)
Alpert, P. A.; Knopf, D. A.
2015-05-01
Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature (T) and relative humidity (RH) at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling rate dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nuclei (IN) all have the same IN surface area (ISA), however the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses physically observable parameters including the total number of droplets (Ntot) and the heterogeneous ice nucleation rate coefficient, Jhet(T). This model is applied to address if (i) a time and ISA dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing. The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods such as: droplets on a cold-stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time dependent isothermal frozen fractions exhibiting non-exponential behavior with time can be readily explained by this model considering varying ISA. 
An apparent cooling rate dependence of Jhet is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling rate dependence of ice nucleation kinetics vanishes as expected from classical nucleation theory. The model simulations allow for a quantitative experimental uncertainty analysis for parameters Ntot, T, RH, and the ISA variability. In an idealized cloud parcel model applying variability in ISAs for each droplet, the model predicts enhanced immersion freezing temperatures and greater ice crystal production compared to a case when ISAs are uniform in each droplet. The implications of our results for experimental analysis and interpretation of the immersion freezing process are discussed.
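The core of the stochastic model above is that, at constant temperature, a droplet with ice-nucleating surface area A remains unfrozen with probability exp(-Jhet·A·t); averaging over a distribution of surface areas produces the non-exponential frozen-fraction curves the abstract describes. A minimal sketch of that averaging, with hypothetical values of Jhet and the area distribution, could be:

```python
import math
import random

def frozen_fraction(j_het, t, areas):
    """Ensemble frozen fraction at time t for droplets with variable ISA.

    Each droplet with ice-nucleating surface area a survives unfrozen with
    probability exp(-j_het * a * t) (classical nucleation theory, isothermal
    conditions); the frozen fraction is one minus the ensemble mean of the
    survival probabilities.
    """
    survive = sum(math.exp(-j_het * a * t) for a in areas) / len(areas)
    return 1.0 - survive

random.seed(1)
# Hypothetical lognormal ISA distribution (cm^2) versus the identical-ISA
# assumption questioned in the abstract
areas_var = [random.lognormvariate(math.log(1e-5), 1.5) for _ in range(5000)]
areas_uni = [1e-5] * 5000
j_het = 1e4          # heterogeneous nucleation rate coefficient, cm^-2 s^-1
f_var = frozen_fraction(j_het, 100.0, areas_var)   # non-exponential decay
f_uni = frozen_fraction(j_het, 100.0, areas_uni)   # pure exponential decay
```

With identical areas the unfrozen fraction decays as a single exponential; with variable areas the small-area droplets persist, flattening the curve at long times, which is the behavior the authors use to explain the apparent cooling-rate dependence of Jhet.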
"He Didn't Want Me to Feel Sad": Children's Reactions to Disappointment and Apology
ERIC Educational Resources Information Center
Smith, Craig E.; Harris, Paul L.
2012-01-01
Experimental studies of children's responses to apologies often present participants with hypothetical scenarios. This article reports on an experimental study of children's reactions to experiencing an actual disappointment and subsequent apology. Participants (ages four to seven) were told that another child was supposed to share some attractive…
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, W.T. III
1985-11-04
We have studied two-photon absorption in solids theoretically and experimentally. We have shown that it is possible to use accurate band structure techniques to compute two-photon absorption spectra within 15% of measured values in a wide band-gap material, ZnS. The empirical pseudopotential technique that we used is significantly more accurate than previous models of two-photon absorption in zinc blende materials, including present tunneling theories (which are essentially parabolic-band results in disguise) and the nonparabolic-band formalism of Pidgeon et al. and Weiler. The agreement between our predictions and previous measurements allowed us to use ZnS as a reference material in order to validate a technique for measuring two-photon absorption that was previously untried in solids, pulsed dual-beam thermal lensing. With the validated technique, we examined nonlinear absorption in one other crystal (rutile) and in several glasses, including silicates, borosilicates, and one phosphate glass. Initially, we believed that the absorption edges of all the materials were comparable; however, subsequent evidence suggested that the effective band-gap energies of the glasses were above the energy of two photons in our measurement. Therefore, we attribute the nonlinear absorption that we observed in glasses to impurities or defects. The measured nonlinear absorption coefficients were of the order of a few cm/TW in the glasses and of the order of 10 cm/GW in the crystals, four orders of magnitude higher than in glasses. 292 refs.
NASA Astrophysics Data System (ADS)
El-Didamony, A. M.; Hafeez, S. M.
2016-01-01
Four simple, sensitive spectrophotometric and spectrofluorimetric methods (A-D) for the determination of the antibacterial drug lomefloxacin (LMFX) in pharmaceutical formulations have been developed. Method A is based on formation of a ternary complex between Pd(II), eosin and LMFX in the presence of methyl cellulose as surfactant and acetate-HCl buffer pH 4.0. Spectrophotometrically, under the optimum conditions, the ternary complex showed an absorption maximum at 530 nm. Methods B and C are based on redox reactions between LMFX and KMnO4 in acid and alkaline media. In indirect spectrophotometric method B, the drug solution is treated with a known excess of KMnO4 in H2SO4 medium, with subsequent determination of the unreacted oxidant by reacting it with safranine O in the same medium at λmax = 520 nm. Direct spectrophotometric method C involves treating the alkaline solution of LMFX with KMnO4 and measuring the bluish-green product at 604 nm. Method D is based on the chelation of LMFX with Zr(IV) to produce a fluorescent chelate. At the optimum reaction conditions, the drug-metal chelate showed an excitation maximum at 280 nm and an emission maximum at 443 nm. The optimum experimental parameters for the reactions have been studied. The validity of the described procedures was assessed. Statistical analysis of the results has been carried out, revealing high accuracy and good precision. The proposed methods were successfully applied to the determination of the selected drug in pharmaceutical preparations with good recoveries.
Simulation and experimental validation of droplet dynamics in microchannels of PEM fuel cells
NASA Astrophysics Data System (ADS)
Ashrafi, Moosa; Shams, Mehrzad; Bozorgnezhad, Ali; Ahmadi, Goodarz
2016-12-01
In this study, the dynamics of droplets in the channels of proton exchange membrane fuel cells with straight and serpentine flow-fields was investigated. Tapered and filleted channels were suggested for the straight and serpentine flow-fields, respectively, in order to improve water removal in the channels. Surface tension and wall adhesion forces were applied by using the volume of fluid method. Hydrophilic walls and a hydrophobic gas diffusion layer were considered. The mechanism of movement of droplets with different diameters was studied using the Weber and capillary numbers in simple and tapered straight channels. Flooding was shown to be reduced in the tapered channel owing to the increased water removal rate, which in turn increased the available reaction sites. In addition, film flow formed more readily in the tapered channel than in the simple channel, so pressure fluctuations were decreased in the tapered channel. Moreover, the water coverage ratio of the hydrophilic tapered surface was higher than that of the simple channel, which enhanced water removal from the channel. The filleted serpentine channel was introduced to improve water removal relative to the simple serpentine channel. Observation of the unsteady and time-averaged two-phase pressure drops showed that in the filleted serpentine channels the two-phase pressure drop was far less than in the simple serpentine channel, and the accumulation of water droplets in the elbows was also smaller, leading to lower pressure fluctuation. The numerical simulation results were validated by experiments.
SAGE III on the International Space Station
NASA Astrophysics Data System (ADS)
McCormick, M. P.; Damadeo, R. P.; Hill, C. A.
2017-12-01
A much-improved Stratospheric Aerosol and Gas Experiment (SAGE III) instrument was launched on February 19, 2017 from NASA's Kennedy Space Center aboard the SpaceX CRS-10 Dragon Spacecraft. It subsequently docked with the International Space Station (ISS), completed commissioning on July 1, 2017, and is now in its Mission Operations phase. SAGE III-ISS will combine the experience and capabilities of its successful predecessor satellite instruments SAM II, SAGE, SAGE II, and SAGE III-Meteor-3M to measure aerosol, cloud, O3, H2O, and NO2 profiles from the upper troposphere through the stratosphere. In addition to solar and lunar occultation with vertical resolutions of about 1.0 km, SAGE III-ISS will make limb scattering measurements on the solar side of each orbit, greatly expanding the measurement coverage per spacecraft orbit and tying the very high resolution and precise solar occultation measurements to the limb scattering measurements. The programmable readout array detector enhances its measurement capability and should allow for experimental data products such as BrO and IO and, along with a single photodiode detector, the measurement of larger aerosols. The wavelengths covered by SAGE III-ISS range from 280 to 1050 nm with 1 to 2 nm spectral resolution using a grating spectrometer; the single photodiode extends measurements to 1550 nm. This talk will describe the measurement capabilities of SAGE III and include early data and validation examples, its additional modes and increased geographical coverage, its calibration and characterization, and its data archival and validation approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alpert, Peter A.; Knopf, Daniel A.
Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature, T, and relative humidity, RH, at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling-rate-dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nucleating particles (INPs) all have the same INP surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. Descriptions of ice active sites and variability of contact angles have been successfully formulated to describe ice nucleation experimental data in previous research; however, we consider the ability of a stochastic freezing model founded on classical nucleation theory to reproduce previous results and to explain experimental uncertainties and data scatter. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses parameters including the total number of droplets, N tot, and the heterogeneous ice nucleation rate coefficient, J het( T). This model is applied to address if (i) a time and ISA-dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture with subsequent consequences for analysis and interpretation of immersion freezing.
The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods such as: droplets on a cold-stage exposed to air or surrounded by an oil matrix, wind and acoustically levitated droplets, droplets in a continuous-flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time-dependent isothermal frozen fractions exhibiting non-exponential behavior can be readily explained by this model considering varying ISA. An apparent cooling-rate dependence of J het is explained by assuming identical ISA in each droplet. When accounting for ISA variability, the cooling-rate dependence of ice nucleation kinetics vanishes as expected from classical nucleation theory. Finally, the model simulations allow for a quantitative experimental uncertainty analysis for parameters N tot, T, RH, and the ISA variability. We discuss the implications of our results for experimental analysis and interpretation of the immersion freezing process.