A Systematic Method for Verification and Validation of Gyrokinetic Microstability Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bravenec, Ronald
My original proposal for the period Feb. 15, 2014 through Feb. 14, 2017 called for an integrated validation and verification effort carried out by myself with collaborators. The validation component would require experimental profile and power-balance analysis. In addition, it would require running the gyrokinetic codes varying the input profiles within experimental uncertainties to seek agreement with experiment before discounting a code as invalidated. Therefore, validation would require a major increase of effort over my previous grant periods, which covered only code verification (code benchmarking). Consequently, I had requested full-time funding. Instead, I am being funded at somewhat less than half time (5 calendar months per year). As a consequence, I decided to forego the validation component and to continue only the verification efforts.
A Comprehensive Validation Methodology for Sparse Experimental Data
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Blattnig, Steve R.
2010-01-01
A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
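The two metrics described above can be sketched as follows. This is a minimal illustration of the general idea (an aggregate relative deviation plus a robust median relative deviation); the function name and exact metric definitions are assumptions for illustration, not the formulas used in the NUCFRG2/QMSFRG study.

```python
import numpy as np

def validation_metrics(model, experiment):
    """Compare model predictions against a sparse experimental database.

    Returns (cumulative, median): a cumulative uncertainty metric
    (aggregate relative deviation over the whole database, a measure of
    overall accuracy) and the median relative uncertainty (robust to
    outliers, useful when analyzing subsets of the parameter space).
    """
    model = np.asarray(model, dtype=float)
    experiment = np.asarray(experiment, dtype=float)
    rel = np.abs(model - experiment) / np.abs(experiment)
    cumulative = np.sum(np.abs(model - experiment)) / np.sum(np.abs(experiment))
    return cumulative, np.median(rel)

# Toy data: three "cross sections" predicted vs. measured
cum, med = validation_metrics([9.0, 105.0, 52.0], [10.0, 100.0, 50.0])
```

The cumulative metric weights large cross sections more heavily, while the median treats every measurement equally, which is why the two can rank models differently.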
A new simple local muscle recovery model and its theoretical and experimental validation.
Ma, Liang; Zhang, Wei; Wu, Su; Zhang, Zhanwu
2015-01-01
This study was conducted to provide theoretical and experimental validation of a local muscle recovery model. Muscle recovery has been modeled in different empirical and theoretical approaches to determine work-rest allowance for musculoskeletal disorder (MSD) prevention. However, time-related parameters and individual attributes have not been sufficiently considered in conventional approaches. A new muscle recovery model was proposed by integrating time-related task parameters and individual attributes. Theoretically, this muscle recovery model was compared to other theoretical models mathematically. Experimentally, a total of 20 subjects participated in the experimental validation. Hand grip force recovery and shoulder joint strength recovery were measured after a fatiguing operation. The recovery profile was fitted by using the recovery model, and individual recovery rates were calculated as well after fitting. Good fitting values (r(2) > .8) were found for all the subjects. Significant differences in recovery rates were found among different muscle groups (p < .05). The theoretical muscle recovery model was primarily validated by characterization of the recovery process after fatiguing operation. The determined recovery rate may be useful to represent individual recovery attribute.
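Fitting a recovery profile and extracting an individual recovery rate can be sketched as below. The exponential form f(t) = 1 - (1 - f0)·exp(-R·t) for normalized strength and all numbers are illustrative assumptions, not the paper's actual model or data.

```python
import numpy as np

def recovery(t, R, f0):
    # Normalized strength t minutes after fatigue: f0 is the fraction
    # remaining immediately after the task, R the individual recovery rate.
    return 1.0 - (1.0 - f0) * np.exp(-R * t)

t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])   # minutes after the fatiguing task
y = recovery(t, R=0.30, f0=0.55)                # synthetic "measured" strength fractions

# Linearize: ln(1 - y) = ln(1 - f0) - R*t, then ordinary least squares.
slope, intercept = np.polyfit(t, np.log(1.0 - y), 1)
R_fit, f0_fit = -slope, 1.0 - np.exp(intercept)

# Goodness of fit (the r^2 > .8 criterion quoted in the abstract)
resid = y - recovery(t, R_fit, f0_fit)
r2 = 1.0 - np.sum(resid**2) / np.sum((y - y.mean()) ** 2)
```

With real measurements the log-linearization amplifies noise near full recovery, so a nonlinear least-squares fit would normally be preferred; the sketch uses the linearized form only to stay dependency-free.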
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
NASA Astrophysics Data System (ADS)
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable power take-off system which can be used to generate or absorb wave energy.
Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time series, state-space radiation, and WEC-Sim compatibility with BEMIO (an open source AQWA/WAMIT/NEMOH coefficient parser).
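The Cummins time-domain formulation named above can be sketched for a single degree of freedom: (M + A∞)·a(t) + ∫K(τ)v(t-τ)dτ + C·x(t) = F_exc(t), where the convolution of the radiation impulse-response kernel K with the velocity history captures fluid memory effects. Every number below (mass, added mass, stiffness, kernel, forcing) is made up for illustration; this is not WEC-Sim's solver.

```python
import numpy as np

dt, n = 0.01, 2000
M, A_inf, C = 100.0, 20.0, 500.0                 # mass, infinite-freq added mass, stiffness
tK = np.arange(0.0, 2.0, dt)
K = 5.0 * np.exp(-tK) * np.cos(3.0 * tK)         # assumed radiation IRF kernel
F = lambda t: 50.0 * np.sin(1.5 * t)             # assumed wave excitation force

x = np.zeros(n)                                  # displacement
v = np.zeros(n)                                  # velocity
for i in range(n - 1):
    m = min(i + 1, len(K))
    # Discrete convolution of the kernel with the stored velocity history
    rad = dt * np.sum(K[:m] * v[i::-1][:m])
    a = (F(i * dt) - rad - C * x[i]) / (M + A_inf)
    v[i + 1] = v[i] + a * dt                     # semi-implicit Euler step
    x[i + 1] = x[i] + v[i + 1] * dt
```

The cost of the history convolution is what motivates the state-space radiation option mentioned in the feature list: replacing the convolution with a small linear state-space system makes the memory term O(1) per step.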
Zhou, Bailing; Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu; Yang, Yuedong; Zhou, Yaoqi; Wang, Jihua
2018-01-04
Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (with a few hundred lncRNAs only) and specific in their focuses (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNlncRbase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species, 2.9 times larger than the current largest database for experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing experimentally validated databases. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
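The inner optimization step described above, extremizing an expected-cross-entropy utility over the design inputs with simulated annealing, can be sketched as below. The utility U here is a stand-in surrogate function of a single design variable, not the paper's information-theoretic integral over model-prediction and observation distributions, and all tuning constants are assumptions.

```python
import math, random

def U(d):
    # Assumed surrogate utility over one design input d; in the paper this
    # would be the expected cross entropy between prediction and observation.
    return (d - 2.0) ** 2 + 0.5 * math.sin(5.0 * d)

def anneal(u, lo, hi, steps=20000, T0=1.0, seed=1):
    """Minimize u on [lo, hi] by simulated annealing with a linear cooling schedule."""
    rng = random.Random(seed)
    d = rng.uniform(lo, hi)
    best = d
    for k in range(steps):
        T = T0 * (1.0 - k / steps) + 1e-9
        cand = min(hi, max(lo, d + rng.gauss(0.0, 0.1)))   # local Gaussian move
        # Accept improvements always; accept uphill moves with Boltzmann probability
        if u(cand) < u(d) or rng.random() < math.exp((u(d) - u(cand)) / T):
            d = cand
        if u(d) < u(best):
            best = d
    return best

d_opt = anneal(U, 0.0, 4.0)
```

In the adaptive loop of the methodology, each run of this optimizer would fix the next test condition; the measured output then updates the observation distribution via Bayes' theorem before the utility is re-evaluated.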
Interactive experimenters' planning procedures and mission control
NASA Technical Reports Server (NTRS)
Desjardins, R. L.
1973-01-01
The computerized mission control and planning system routinely generates a 24-hour schedule in one hour of operator time by including time dimensions into experimental planning procedures. Planning is validated interactively as it is being generated segment by segment in the frame of specific event times. The planner simply points a light pen at the time mark of interest on the time line for entering specific event times into the schedule.
NASA Technical Reports Server (NTRS)
Hazelton, R. C.; Yadlowsky, E. J.; Churchill, R. J.; Parker, L. W.; Sellers, B.
1981-01-01
The effect of differential charging of spacecraft thermal control surfaces is assessed by studying the dynamics of the charging process. A program to experimentally validate a computer model of the charging process was established. Time-resolved measurements of the surface potential were obtained for samples of Kapton and Teflon irradiated with a monoenergetic electron beam. Results indicate that the computer model and experimental measurements agree well and that, for Teflon, secondary emission is the governing factor. Experimental data indicate that bulk conductivities play a significant role in the charging of Kapton.
NASA Astrophysics Data System (ADS)
Mashayekhi, Somayeh; Miles, Paul; Hussaini, M. Yousuff; Oates, William S.
2018-02-01
In this paper, fractional and non-fractional viscoelastic models for elastomeric materials are derived and analyzed in comparison to experimental results. The viscoelastic models are derived by expanding thermodynamic balance equations for both fractal and non-fractal media. The order of the fractional time derivative is shown to strongly affect the accuracy of the viscoelastic constitutive predictions. Model validation uses experimental data describing viscoelasticity of the dielectric elastomer Very High Bond (VHB) 4910. Since these materials are known for their broad applications in smart structures, it is important to characterize and accurately predict their behavior across a large range of time scales. Whereas integer order viscoelastic models can yield reasonable agreement with data, the model parameters often lack robustness in prediction at different deformation rates. Fractional order models of viscoelasticity provide an alternative framework to more accurately quantify complex rate-dependent behavior. Prior research that has considered fractional order viscoelasticity lacks experimental validation and contains limited links between viscoelastic theory and fractional order derivatives. To address these issues, we use fractional order operators to experimentally validate fractional and non-fractional viscoelastic models in elastomeric solids using Bayesian uncertainty quantification. The fractional order model is found to be advantageous, as its predictions are significantly more accurate than those of integer order viscoelastic models for deformation rates spanning four orders of magnitude.
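The fractional time derivative underlying such models can be approximated numerically with the Grünwald-Letnikov scheme, which is what makes fractional ("springpot") constitutive laws computable. The sketch below differentiates f(t) = t to order 1/2 and compares against the known analytic result 2·sqrt(t/π); it is a generic numerical illustration, not the paper's formulation, and no VHB 4910 material constants appear.

```python
import math
import numpy as np

def gl_weights(alpha, n):
    # Grünwald-Letnikov binomial weights w_k = (-1)^k * C(alpha, k),
    # generated by the stable recursion w_k = w_{k-1} * (k - 1 - alpha) / k.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def frac_deriv(y, dt, alpha):
    # D^alpha y(t_i) ~ dt^(-alpha) * sum_k w_k * y(t_{i-k})
    n = len(y)
    w = gl_weights(alpha, n)
    return np.array([np.sum(w[: i + 1] * y[i::-1]) for i in range(n)]) / dt**alpha

t = np.linspace(0.0, 1.0, 2001)
d = frac_deriv(t, t[1] - t[0], 0.5)          # half-derivative of f(t) = t
exact = 2.0 * np.sqrt(t / math.pi)           # analytic Riemann-Liouville result
```

The full history sum in `frac_deriv` is the computational signature of fractional viscoelasticity: the stress depends on the entire strain history, not just the current rate.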
Experimental validation of predicted cancer genes using FRET
NASA Astrophysics Data System (ADS)
Guala, Dimitri; Bernhem, Kristoffer; Ait Blal, Hammou; Jans, Daniel; Lundberg, Emma; Brismar, Hjalmar; Sonnhammer, Erik L. L.
2018-07-01
Huge amounts of data are generated in genome wide experiments, designed to investigate diseases with complex genetic causes. Follow up of all potential leads produced by such experiments is currently cost prohibitive and time consuming. Gene prioritization tools alleviate these constraints by directing further experimental efforts towards the most promising candidate targets. Recently a gene prioritization tool called MaxLink was shown to outperform other widely used state-of-the-art prioritization tools in a large scale in silico benchmark. An experimental validation of predictions made by MaxLink has however been lacking. In this study we used Fluorescence Resonance Energy Transfer, an established experimental technique for detection of protein-protein interactions, to validate potential cancer genes predicted by MaxLink. Our results provide confidence in the use of MaxLink for selection of new targets in the battle with polygenic diseases.
3-D and quasi-2-D discrete element modeling of grain commingling in a bucket elevator boot system
USDA-ARS?s Scientific Manuscript database
Unwanted grain commingling impedes new quality-based grain handling systems and has proven to be an expensive and time consuming issue to study experimentally. Experimentally validated models may reduce the time and expense of studying grain commingling while providing additional insight into detail...
Selecting and Improving Quasi-Experimental Designs in Effectiveness and Implementation Research.
Handley, Margaret A; Lyles, Courtney R; McCulloch, Charles; Cattamanchi, Adithya
2018-04-01
Interventional researchers face many design challenges when assessing intervention implementation in real-world settings. Intervention implementation requires holding fast on internal validity needs while incorporating external validity considerations (such as uptake by diverse subpopulations, acceptability, cost, and sustainability). Quasi-experimental designs (QEDs) are increasingly employed to achieve a balance between internal and external validity. Although these designs are often referred to and summarized in terms of logistical benefits, there is still uncertainty about (a) selecting from among various QEDs and (b) developing strategies to strengthen the internal and external validity of QEDs. We focus here on commonly used QEDs (prepost designs with nonequivalent control groups, interrupted time series, and stepped-wedge designs) and discuss several variants that maximize internal and external validity at the design, execution and implementation, and analysis stages.
Summary: Experimental validation of real-time fault-tolerant systems
NASA Technical Reports Server (NTRS)
Iyer, R. K.; Choi, G. S.
1992-01-01
Testing and validation of real-time systems is always difficult to perform since neither the error generation process nor the fault propagation problem is easy to comprehend. There is no better substitute for results based on actual measurements and experimentation. Such results are essential for developing a rational basis for evaluation and validation of real-time systems. However, with physical experimentation, controllability and observability are limited to the external instrumentation that can be hooked up to the system under test, a difficult, if not impossible, task for a complex system. Also, physical hardware must exist before such measurement experiments can be set up. On the other hand, a simulation approach allows flexibility that is unequaled by any other existing method for system evaluation. A simulation methodology for system evaluation was successfully developed and implemented, and the environment was demonstrated using existing real-time avionic systems. The research was oriented toward evaluating the impact of permanent and transient faults in aircraft control computers. Results were obtained for the Bendix BDX 930 system and the Hamilton Standard EEC131 jet engine controller. The studies showed that simulated fault injection is valuable, in the design stage, for evaluating the susceptibility of computing systems to different types of failures.
Geist, Rebecca E; DuBois, Chase H; Nichols, Timothy C; Caughey, Melissa C; Merricks, Elizabeth P; Raymer, Robin; Gallippi, Caterina M
2016-09-01
Acoustic radiation force impulse (ARFI) Surveillance of Subcutaneous Hemorrhage (ASSH) has been previously demonstrated to differentiate bleeding phenotype and responses to therapy in dogs and humans, but to date, the method has lacked experimental validation. This work explores experimental validation of ASSH in a poroelastic tissue-mimic and in vivo in dogs. The experimental design exploits calibrated flow rates and infusion durations of evaporated milk in tofu or heparinized autologous blood in dogs. The validation approach enables controlled comparisons of ASSH-derived bleeding rate (BR) and time to hemostasis (TTH) metrics. In tissue-mimicking experiments, halving the calibrated flow rate yielded ASSH-derived BRs that decreased by 44% to 48%. Furthermore, for calibrated flow durations of 5.0 minutes and 7.0 minutes, average ASSH-derived TTH was 5.2 minutes and 7.0 minutes, respectively, with ASSH predicting the correct TTH in 78% of trials. In dogs undergoing calibrated autologous blood infusion, ASSH measured a 3-minute increase in TTH, corresponding to the same increase in the calibrated flow duration. For a measured 5% decrease in autologous infusion flow rate, ASSH detected a 7% decrease in BR. These tissue-mimicking and in vivo preclinical experimental validation studies suggest the ASSH BR and TTH measures reflect bleeding dynamics. © The Author(s) 2015.
Efthimiou, George C; Bartzis, John G; Berbekar, Eva; Hertwig, Denise; Harms, Frank; Leitl, Bernd
2015-06-26
The capability to predict short-term maximum individual exposure is very important for several applications including, for example, deliberate/accidental release of hazardous substances, odour fluctuations or material flammability level exceedance. Recently, authors have proposed a simple approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. In the first part of this study (Part I), the methodology was validated against field measurements, which are governed by the natural variability of atmospheric boundary conditions. In Part II of this study, an in-depth validation of the approach is performed using reference data recorded under truly stationary and well documented flow conditions. For this reason, a boundary-layer wind-tunnel experiment was used. The experimental dataset includes 196 time-resolved concentration measurements which detect the dispersion from a continuous point source within an urban model of semi-idealized complexity. The data analysis allowed the improvement of an important model parameter. The model performed very well in predicting the maximum individual exposure, presenting a factor of two of observations equal to 95%. For large time intervals, an exponential correction term has been introduced in the model based on the experimental observations. The new model is capable of predicting all time intervals giving an overall factor of two of observations equal to 100%.
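The "factor of two of observations" score quoted above (95% and 100%) is the standard FAC2 validation metric: the fraction of model predictions falling within a factor of two of the corresponding measured value. A minimal sketch, with made-up numbers:

```python
import numpy as np

def fac2(predicted, observed):
    """Fraction of predictions within a factor of 2 of the observations
    (0.5 <= predicted/observed <= 2.0)."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    ratio = predicted / observed
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

# Toy example: two of four predictions fall within a factor of two
score = fac2([1.0, 3.9, 10.0, 0.2], [2.0, 2.0, 4.0, 1.0])
```

FAC2 is popular in dispersion-model validation precisely because it is insensitive to the occasional extreme outlier that dominates RMS-type metrics.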
Ahmad, Zaki Uddin; Chao, Bing; Konggidinata, Mas Iwan; Lian, Qiyu; Zappi, Mark E; Gang, Daniel Dianchen
2018-04-27
Much research in the adsorption area has relied on experimental approaches. These approaches are based on trial-and-error processes and are extremely time consuming. Molecular simulation is a new tool that can be used to design and predict the performance of an adsorbent. This research proposed a simulation technique that can greatly reduce the time required to design an adsorbent. In this study, a new rhombic ordered mesoporous carbon (OMC) model is proposed and constructed with various pore sizes and oxygen contents using the Materials Visualizer Module to optimize the structure of OMC for resorcinol adsorption. The specific surface area, pore volume, small-angle X-ray diffraction pattern, and resorcinol adsorption capacity were calculated with the Forcite and Sorption modules in the Materials Studio package. The simulation results were validated experimentally by synthesizing OMC with different pore sizes and oxygen contents prepared via a hard-template method employing an SBA-15 silica scaffold. Boric acid was used as the pore-expanding reagent to synthesize OMC with different pore sizes (from 4.6 to 11.3 nm) and varying oxygen contents (from 11.9% to 17.8%). Based on the simulation and experimental validation, the optimal pore size was found to be 6 nm for maximum adsorption of resorcinol. Copyright © 2018 Elsevier B.V. All rights reserved.
TH-CD-207A-08: Simulated Real-Time Image Guidance for Lung SBRT Patients Using Scatter Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redler, G; Cifter, G; Templeton, A
2016-06-15
Purpose: To develop a comprehensive Monte Carlo-based model for the acquisition of scatter images of patient anatomy in real time, during lung SBRT treatment. Methods: During SBRT treatment, images of patient anatomy can be acquired from scattered radiation. To rigorously examine the utility of scatter images for image guidance, a model is developed using the MCNP code to simulate scatter images of phantoms and lung cancer patients. The model is validated by comparing experimental and simulated images of phantoms of different complexity. The differentiation between tissue types is investigated by imaging objects of known compositions (water, lung, and bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is used to investigate image noise properties for various quantities of delivered radiation (monitor units (MU)). Patient scatter images are simulated using the validated simulation model. 4DCT patient data is converted to an MCNP input geometry accounting for different tissue compositions and densities. Lung tumor phantom images acquired with decreasing imaging time (decreasing MU) are used to model the expected noise amplitude in patient scatter images, producing realistic simulated patient scatter images with varying temporal resolution. Results: Image intensity in simulated and experimental scatter images of tissue equivalent objects (water, lung, bone) match within the uncertainty (∼3%). Lung tumor phantom images agree as well. Specifically, tumor-to-lung contrast matches within the uncertainty. The addition of random noise approximating quantum noise in experimental images to simulated patient images shows that scatter imaging of lung tumors can provide images in as fast as 0.5 seconds with CNR∼2.7. Conclusions: A scatter imaging simulation model is developed and validated using experimental phantom scatter images. Following validation, lung cancer patient scatter images are simulated.
These simulated patient images demonstrate the clinical utility of scatter imaging for real-time tumor tracking during lung SBRT.
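The CNR∼2.7 figure quoted above is a contrast-to-noise ratio of the usual form |mean(tumor ROI) - mean(background ROI)| / σ(background). A minimal sketch with synthetic pixel values (the intensities and noise level are invented, not taken from the study):

```python
import numpy as np

def cnr(tumor_roi, background_roi):
    """Contrast-to-noise ratio between a target region and its background."""
    t = np.asarray(tumor_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return abs(t.mean() - b.mean()) / b.std()

# Synthetic ROIs: tumor pixels ~27 counts brighter than lung background,
# both with noise sigma ~10, giving CNR near 2.7
rng = np.random.default_rng(3)
background = rng.normal(100.0, 10.0, 10_000)
tumor = rng.normal(127.0, 10.0, 10_000)
value = cnr(tumor, background)
```

Because quantum noise scales with the square root of delivered MU, halving the imaging time raises σ(background) by √2 and lowers CNR accordingly, which is the trade-off the noise study quantifies.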
Experimental validation of thermo-chemical algorithm for a simulation of pultrusion processes
NASA Astrophysics Data System (ADS)
Barkanov, E.; Akishin, P.; Miazza, N. L.; Galvez, S.; Pantelelis, N.
2018-04-01
To provide better understanding of pultrusion processes with or without temperature control and to support pultrusion tooling design, an algorithm based on a mixed time integration scheme and the nodal control volumes method has been developed. In the present study, its experimental validation is carried out with the developed cure sensors, which measure the electrical resistivity and temperature on the profile surface. Through this verification process, the set of initial data used for simulation of the pultrusion of a rod profile has been corrected and finalized.
Turbine-99 unsteady simulations - Validation
NASA Astrophysics Data System (ADS)
Cervantes, M. J.; Andersson, U.; Lövgren, H. M.
2010-08-01
The Turbine-99 test case, a Kaplan draft tube model, aimed to determine the state of the art within draft tube simulation. Three workshops were organized on the matter, in 1999, 2001 and 2005, where the geometry and experimental data were provided as boundary conditions to the participants. Since the last workshop, computational power and flow modelling have developed, and the available data have been completed with unsteady pressure measurements and phase-resolved velocity measurements in the cone. This new set of data, together with the corresponding phase-resolved velocity boundary conditions, offers new possibilities to validate unsteady numerical simulations of the Kaplan draft tube. The present work presents simulations of the Turbine-99 test case with time-dependent, angularly resolved inlet velocity boundary conditions. Different grids and time steps are investigated. The results are compared to experimental time-dependent pressure and velocity measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tralshawala, Nilesh; Howard, Don; Knight, Bryon
2008-02-28
In conventional infrared thermography, determination of thermal diffusivity requires thickness information. Recently GE has been experimenting with the use of lateral heat flow to determine thermal diffusivity without thickness information. This work builds on previous work at NASA Langley and Wayne State University, but we incorporate thermal time-of-flight (tof) analysis rather than curve fitting to obtain quantitative information. We have developed appropriate theoretical models and a tof-based data analysis framework to experimentally determine all components of thermal diffusivity from the time-temperature measurements. Initial validation was carried out using finite difference simulations. Experimental validation was done using anisotropic carbon fiber reinforced polymer (CFRP) composites. We found that in the CFRP samples used, the in-plane component of diffusivity is about eight times larger than the through-thickness component.
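The anisotropy ratio quoted above can be extracted from time-of-flight data without knowing the model constant: for diffusive heat flow, the time of flight grows linearly with the square of the distance from the source, with a slope inversely proportional to diffusivity, so the ratio of slopes in two directions gives the diffusivity ratio directly. The sketch below uses synthetic, noise-free data and an assumed 8:1 anisotropy; it is an illustration of the scaling argument, not GE's analysis framework.

```python
import numpy as np

# (distance from heat source)^2 in mm^2, and synthetic time-of-flight data:
# tof = x^2 / (const * alpha), so slope ~ 1/alpha and the constant cancels
# when slopes in two directions are compared.
x2 = np.linspace(1.0, 25.0, 10)
tof_inplane = 0.05 * x2        # seconds (larger alpha -> smaller slope)
tof_through = 0.40 * x2        # seconds (smaller alpha -> larger slope)

s_in = np.polyfit(x2, tof_inplane, 1)[0]
s_th = np.polyfit(x2, tof_through, 1)[0]
ratio = s_th / s_in            # alpha_inplane / alpha_through
```

With real pixel-wise tof maps, the same linear fit is applied per direction, and deviations from linearity flag non-diffusive artifacts such as edge reflections.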
Numerical modeling and experimental validation of thermoplastic composites induction welding
NASA Astrophysics Data System (ADS)
Palmieri, Barbara; Nele, Luigi; Galise, Francesco
2018-05-01
In this work, a numerical simulation and experimental tests of the induction welding of continuous fibre-reinforced thermoplastic composites (CFRTPCs) are presented. The thermoplastic polyamide 66 (PA66) with carbon fiber fabric was used. Using dedicated software (JMAG-Designer), the influence of the fundamental process parameters, such as temperature, current and holding time, was investigated. In order to validate the results of the simulations, and therefore the numerical model used, experimental tests were carried out, and the temperature values measured during the tests with an optical pyrometer were compared with those provided by the numerical simulation. The mechanical properties of the welded joints were evaluated by single-lap shear tests.
Servo-hydraulic actuator in controllable canonical form: Identification and experimental validation
NASA Astrophysics Data System (ADS)
Maghareh, Amin; Silva, Christian E.; Dyke, Shirley J.
2018-02-01
Hydraulic actuators have been widely used to experimentally examine structural behavior at multiple scales. Real-time hybrid simulation (RTHS) is one innovative testing method that relies heavily on such servo-hydraulic actuators. In RTHS, interface conditions must be enforced in real time, and controllers are often used to achieve tracking of the desired displacements. Neglecting the dynamics of the hydraulic transfer system may thus result either in system instability or in sub-optimal performance. Herein, we propose a nonlinear dynamical model for a servo-hydraulic actuator (a.k.a. hydraulic transfer system) coupled with a nonlinear physical specimen. The nonlinear dynamical model is transformed into controllable canonical form for further tracking control design purposes. Through a number of experiments, the controllable canonical model is validated.
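The controllable canonical (companion) form referred to above has a fixed structure that is convenient for tracking-controller design: the state matrix carries the characteristic-polynomial coefficients in its last row, and the input enters only the last state. A linear SISO sketch with assumed third-order coefficients (the actuator model in the paper is nonlinear; this only illustrates the canonical structure):

```python
import numpy as np

def controllable_canonical(a, b):
    """Controllable canonical state space for
    G(s) = b(s) / (s^n + a_{n-1} s^{n-1} + ... + a_0).

    a: [a_{n-1}, ..., a_0] denominator coefficients; b: numerator
    coefficients [b_{m}, ..., b_0] with m < n.
    """
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)            # integrator-chain shift structure
    A[-1, :] = -np.asarray(a)[::-1]       # last row: [-a_0, ..., -a_{n-1}]
    B = np.zeros((n, 1)); B[-1, 0] = 1.0  # input drives only the last state
    C = np.zeros((1, n)); C[0, :len(b)] = np.asarray(b)[::-1]
    return A, B, C

# Assumed denominator s^3 + 7s^2 + 14s + 8 = (s+1)(s+2)(s+4)
A, B, C = controllable_canonical([7.0, 14.0, 8.0], [1.0])
```

By construction the pair (A, B) is controllable, which is exactly why this form is the natural starting point for pole placement or feedback-linearizing tracking designs.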
Validation of GC and HPLC systems for residue studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, M.
1995-12-01
For residue studies, GC and HPLC system performance must be validated prior to and during use. One excellent measure of system performance is the standard curve and the associated chromatograms used to construct that curve. The standard curve is a model of system response to an analyte over a specific time period, and is prima facie evidence of system performance beginning at the autosampler and proceeding through the injector, column, detector, electronics, data-capture device, and printer/plotter. This tool measures the performance of the entire chromatographic system; its power negates most of the benefits associated with costly and time-consuming validation of individual system components. Other measures of instrument and method validation will be discussed, including quality control charts and experimental designs for method validation.
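The standard-curve check described above amounts to regressing detector response on analyte concentration and watching the fit quality over time. A minimal sketch with synthetic peak areas; the acceptance threshold of r² ≥ 0.995 is an assumed, lab-specific limit, not a stated rule, and the concentrations and areas are invented.

```python
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])              # standard concentrations (ng/uL)
area = np.array([1020.0, 2005.0, 4080.0, 9950.0, 20100.0])  # synthetic peak areas

# Ordinary least-squares standard curve: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

curve_ok = r2 >= 0.995      # assumed acceptance criterion for this run
```

Tracking slope and intercept run-to-run on a control chart, as the abstract suggests, catches drift anywhere along the autosampler-to-detector chain before individual samples are quantified against a degraded curve.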
On the validity of the Poisson assumption in sampling nanometer-sized aerosols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damit, Brian E; Wu, Dr. Chang-Yu; Cheng, Mengdawn
2014-01-01
A Poisson process is traditionally believed to apply to the sampling of aerosols. For a constant aerosol concentration, it is assumed that a Poisson process describes the fluctuation in the measured concentration because aerosols are stochastically distributed in space. Recent studies, however, have shown that sampling of micrometer-sized aerosols has non-Poissonian behavior with positive correlations. The validity of the Poisson assumption for nanometer-sized aerosols has not been examined and thus was tested in this study. Its validity was tested for four particle sizes - 10 nm, 25 nm, 50 nm and 100 nm - by sampling from indoor air with a DMA-CPC setup to obtain a time series of particle counts. Five metrics were calculated from the data: pair-correlation function (PCF), time-averaged PCF, coefficient of variation, probability of measuring a concentration at least 25% greater than average, and posterior distributions from Bayesian inference. To identify departures from Poissonian behavior, these metrics were also calculated for 1,000 computer-generated Poisson time series with the same mean as the experimental data. For nearly all comparisons, the experimental data fell within the range of 80% of the Poisson-simulation values. Essentially, the metrics for the experimental data were indistinguishable from a simulated Poisson process. The greater influence of Brownian motion for nanometer-sized aerosols may explain the Poissonian behavior observed for smaller aerosols. Although the Poisson assumption was found to be valid in this study, it must be carefully applied as the results here do not definitively prove applicability in all sampling situations.
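One of the listed metrics, the coefficient of variation, can be sketched as follows; the synthetic count series and ensemble size stand in for the DMA-CPC data and are not the study's numbers:

```python
import numpy as np

# For a Poisson process the coefficient of variation of counts is
# 1/sqrt(mean). Compare a "measured" series (here itself simulated) against
# an ensemble of Poisson simulations with the same mean, as in the study.
rng = np.random.default_rng(0)
measured = rng.poisson(lam=40.0, size=600)   # stand-in for CPC count data

cv_measured = measured.std(ddof=1) / measured.mean()
sims = rng.poisson(lam=measured.mean(), size=(1000, 600))
cv_sims = sims.std(axis=1, ddof=1) / sims.mean(axis=1)
lo, hi = np.percentile(cv_sims, [10, 90])    # central 80% Poisson band
print(round(cv_measured, 3))                 # near 1/sqrt(40), about 0.158
```

A measured coefficient of variation falling inside the central 80% band of the simulated ensemble is consistent with the Poisson assumption; falling outside suggests correlated sampling.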
Experimental validation of the DARWIN2.3 package for fuel cycle applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
San-Felice, L.; Eschbach, R.; Bourdot, P.
2012-07-01
The DARWIN package, developed by the CEA and its French partners (AREVA and EDF), provides the required parameters for fuel cycle applications: fuel inventory, decay heat, activity, neutron, γ, α, β sources and spectra, and radiotoxicity. This paper presents the DARWIN2.3 experimental validation for fuel inventory and decay heat calculations on Pressurized Water Reactors (PWR). In order to validate this code system for spent fuel inventory, a large program has been undertaken, based on spent fuel chemical assays. This paper deals with the experimental validation of DARWIN2.3 for PWR Uranium Oxide (UOX) and Mixed Oxide (MOX) fuel inventory calculations, focused on the isotopes involved in Burn-Up Credit (BUC) applications and decay heat computations. The calculation-to-experiment (C/E-1) discrepancies are calculated with the latest European evaluation file, JEFF-3.1.1, associated with the SHEM energy mesh. An overview of the tendencies is obtained on a complete burn-up range from 10 to 85 GWd/t (10 to 60 GWd/t for MOX fuel). The experimental validation of the DARWIN2.3 package for decay heat calculation is performed using calorimetric measurements carried out at the Swedish Interim Spent Fuel Storage Facility for PWR assemblies, covering a large burn-up (20 to 50 GWd/t) and cooling time range (10 to 30 years). (authors)
Seasonal fire danger forecasts for the USA
J. Roads; F. Fujioka; S. Chen; R. Burgan
2005-01-01
The Scripps Experimental Climate Prediction Center has been making experimental, near-real-time, weekly to seasonal fire danger forecasts for the past 5 years. US fire danger forecasts and validations are based on standard indices from the National Fire Danger Rating System (NFDRS), which include the ignition component (IC), energy release component (ER), burning...
The effect of time synchronization of wireless sensors on the modal analysis of structures
NASA Astrophysics Data System (ADS)
Krishnamurthy, V.; Fowler, K.; Sazonov, E.
2008-10-01
Driven by the need to reduce the installation cost and maintenance cost of structural health monitoring (SHM) systems, wireless sensor networks (WSNs) are becoming increasingly popular. Perfect time synchronization amongst the wireless sensors is a key factor enabling the use of low-cost, low-power WSNs for structural health monitoring applications based on output-only modal analysis of structures. In this paper we present a theoretical framework for analysis of the impact created by time delays in the measured system response on the reconstruction of mode shapes using the popular frequency domain decomposition (FDD) technique. This methodology directly estimates the change in mode shape values based on sensor synchronicity. We confirm the proposed theoretical model by experimental validation in modal identification experiments performed on an aluminum beam. The experimental validation was performed using a wireless intelligent sensor and actuator network (WISAN) which allows for close time synchronization between sensors (0.6-10 µs in the tested configuration) and guarantees lossless data delivery under normal conditions. The experimental results closely match theoretical predictions and show that even very small delays in output response impact the mode shapes.
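The core of the synchronization issue can be sketched without the full FDD machinery: a delay tau in one channel rotates the phase of its spectral component, and hence of the corresponding mode-shape entry, by w*tau. The sample rate, mode frequency, and delay below are arbitrary illustration values:

```python
import numpy as np

# A delay tau in one sensor channel multiplies its spectrum at angular
# frequency w by exp(-1j*w*tau), rotating that entry of the identified mode
# shape by w*tau radians. Frequencies and delay are arbitrary examples.
fs, f0, tau = 1000.0, 20.0, 0.002        # sample rate, mode freq (Hz), 2 ms lag
t = np.arange(0, 2.0, 1.0 / fs)          # exactly 40 cycles: no leakage
s1 = np.sin(2 * np.pi * f0 * t)          # reference sensor
s2 = np.sin(2 * np.pi * f0 * (t - tau))  # late sensor, same mode

S1, S2 = np.fft.rfft(s1), np.fft.rfft(s2)
k = int(round(f0 * t.size / fs))         # FFT bin of the mode frequency
phase_err = np.angle(S2[k] / S1[k])      # phase rotation of the shape entry
print(round(phase_err, 4))               # -2*pi*f0*tau, about -0.2513 rad
```

This is why even microsecond-scale offsets matter at higher mode frequencies: the phase error grows linearly with both frequency and delay.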
Robust and real-time rotor control with magnetic bearings
NASA Technical Reports Server (NTRS)
Sinha, A.; Wang, K. W.; Mease, K. L.
1991-01-01
This paper deals with the sliding mode control of a rigid rotor via radial magnetic bearings. The digital control algorithm and the results from numerical simulations are presented for an experimental rig. The experimental system which has been set up to digitally implement and validate the sliding mode control algorithm is described. Two methods for the development of control softwares are presented. Experimental results for individual rotor axis are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Justin Leigh; Smith, Curtis Lee; Burns, Douglas Edward
This report describes the development plan for a new multi-partner External Hazards Experimental Group (EHEG) coordinated by Idaho National Laboratory (INL) within the Risk-Informed Safety Margin Characterization (RISMC) technical pathway of the Light Water Reactor Sustainability Program. Currently, there is limited data available for development and validation of the tools and methods being developed in the RISMC Toolkit. The EHEG is being developed to obtain high-quality, small- and large-scale experimental data for validation of RISMC tools and methods in a timely and cost-effective way. The group of universities and national laboratories that will eventually form the EHEG (which is ultimately expected to include both the initial participants and other universities and national laboratories that have been identified) have the expertise and experimental capabilities needed to both obtain and compile existing data archives and perform additional seismic and flooding experiments. The data developed by EHEG will be stored in databases for use within RISMC. These databases will be used to validate the advanced external hazard tools and methods.
NASA Technical Reports Server (NTRS)
Geng, Tao; Paxson, Daniel E.; Zheng, Fei; Kuznetsov, Andrey V.; Roberts, William L.
2008-01-01
Pulsed combustion is receiving renewed interest as a potential route to higher performance in air breathing propulsion systems. Pulsejets offer a simple experimental device with which to study unsteady combustion phenomena and validate simulations. Previous computational fluid dynamic (CFD) simulation work focused primarily on the pulsejet combustion and exhaust processes. This paper describes a new inlet sub-model which simulates the fluidic and mechanical operation of a valved pulsejet head. The governing equations for this sub-model are described. Sub-model validation is provided through comparisons of simulated and experimentally measured reed valve motion, and time averaged inlet mass flow rate. The updated pulsejet simulation, with the inlet sub-model implemented, is validated through comparison with experimentally measured combustion chamber pressure, inlet mass flow rate, operational frequency, and thrust. Additionally, the simulated pulsejet exhaust flowfield, which is dominated by a starting vortex ring, is compared with particle imaging velocimetry (PIV) measurements on the bases of velocity, vorticity, and vortex location. The results show good agreement between simulated and experimental data. The inlet sub-model is shown to be critical for the successful modeling of pulsejet operation. This sub-model correctly predicts both the inlet mass flow rate and its phase relationship with the combustion chamber pressure. As a result, the predicted pulsejet thrust agrees very well with experimental data.
NASA Astrophysics Data System (ADS)
Ma, Zhisai; Liu, Li; Zhou, Sida; Naets, Frank; Heylen, Ward; Desmet, Wim
2017-03-01
The problem of linear time-varying (LTV) system modal analysis is considered based on time-dependent state space representations, as classical modal analysis of linear time-invariant systems and current LTV system modal analysis under the "frozen-time" assumption are not able to determine the dynamic stability of LTV systems. Time-dependent state space representations of LTV systems are first introduced, and the corresponding modal analysis theories are subsequently presented via a stability-preserving state transformation. The time-varying modes of LTV systems are extended in terms of uniqueness, and are further interpreted to determine the system's stability. An extended modal identification is proposed to estimate the time-varying modes, consisting of the estimation of the state transition matrix via a subspace-based method and the extraction of the time-varying modes by the QR decomposition. The proposed approach is numerically validated by three numerical cases, and is experimentally validated by a coupled moving-mass simply supported beam experimental case. The proposed approach is capable of accurately estimating the time-varying modes, and provides a new way to determine the dynamic stability of LTV systems by using the estimated time-varying modes.
Experimental evaluation of certification trails using abstract data type validation
NASA Technical Reports Server (NTRS)
Wilson, Dwight S.; Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
Certification trails are a recently introduced and promising approach to fault-detection and fault-tolerance. Recent experimental work reveals many cases in which a certification-trail approach allows for significantly faster program execution time than a basic time-redundancy approach. Algorithms for answer-validation of abstract data types allow a certification trail approach to be used for a wide variety of problems. An attempt to assess the performance of algorithms utilizing certification trails on abstract data types is reported. Specifically, this method was applied to the following problems: heapsort, Huffman tree, shortest path, and skyline. Previous results used certification trails specific to a particular problem and implementation. The approach allows certification trails to be localized to 'data structure modules,' making the use of this technique transparent to the user of such modules.
Non-Linear System Identification for Aeroelastic Systems with Application to Experimental Data
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2008-01-01
Representation and identification of a non-linear aeroelastic pitch-plunge system as a model of the NARMAX class is considered. A non-linear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to aeroelastic dynamics and its properties demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (i) the outputs of the NARMAX model closely match those generated using continuous-time methods and (ii) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.
ERIC Educational Resources Information Center
Rizvi, Shireen L.; Nock, Matthew K.
2008-01-01
Single-case experimental designs (SCEDs) provide a time- and cost-effective alternative to randomized clinical trials and offer significant advantages in terms of internal and external validity. A brief history and primer on SCEDs is provided, specifically for use in suicide intervention research. Various SCED methodologies, such as AB, ABAB,…
NASA Technical Reports Server (NTRS)
Cognata, Thomas J.; Leimkuehler, Thomas O.; Sheth, Rubik B.; Le, Hung
2012-01-01
The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the model development and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.
NASA Technical Reports Server (NTRS)
Cognata, Thomas J.; Leimkuehler, Thomas; Sheth, Rubik; Le, Hung
2013-01-01
The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the modeling and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.
Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner
NASA Astrophysics Data System (ADS)
Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.
2015-02-01
Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.
Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation
NASA Astrophysics Data System (ADS)
Maiti, Raman
2016-06-01
The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results obtained in the form of internal external rotations and anterior posterior displacements for a new and experimentally simulated specimen for patella femoral joint under standard gait condition were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and the past studies was observed when the ligament load was removed and the medial lateral displacement was constrained. The model is sensitive to ±5 % change in kinematics, frictional, force and stiffness coefficients and insensitive to time step.
Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation
NASA Astrophysics Data System (ADS)
Maiti, Raman
2018-06-01
The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results obtained in the form of internal external rotations and anterior posterior displacements for a new and experimentally simulated specimen for patella femoral joint under standard gait condition were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and the past studies was observed when the ligament load was removed and the medial lateral displacement was constrained. The model is sensitive to ±5 % change in kinematics, frictional, force and stiffness coefficients and insensitive to time step.
Time series modeling of human operator dynamics in manual control tasks
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.
1984-01-01
A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
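A minimal sketch of time-series identification in this spirit fits a first-order ARX model by least squares; the model order, parameter values, and noise level are invented for illustration and are far simpler than an actual operator model:

```python
import numpy as np

# Fit y[k] = a*y[k-1] + b*u[k-1] by least squares from a short input/output
# record. The "true" (a, b) and noise level are synthetic illustration values.
rng = np.random.default_rng(1)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(500)             # excitation-like input
y = np.zeros(500)
for k in range(1, 500):
    y[k] = a_true * y[k-1] + b_true * u[k-1] + 0.01 * rng.standard_normal()

Phi = np.column_stack([y[:-1], u[:-1]])  # regressor matrix
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print(round(a_hat, 2), round(b_hat, 2))  # close to (0.8, 0.5)
```

Once the difference-equation parameters are estimated, a frequency response follows by evaluating the identified transfer function around the unit circle, which parallels the frequency-response estimates described in the abstract.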
Time Series Modeling of Human Operator Dynamics in Manual Control Tasks
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.
1984-01-01
A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency response of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
NASA Astrophysics Data System (ADS)
Nir, A.; Doughty, C.; Tsang, C. F.
Validation methods developed in the context of deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure.
There is no attempt to validate a specific model; rather, several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26], that different constituencies have different objectives for the validation process and therefore their acceptance criteria differ as well.
Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data
NASA Technical Reports Server (NTRS)
Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.
2004-01-01
A recently proposed analytical (DTA) method for estimating the nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time- and temperature-dependent nucleation rates were predicted using the model and compared with those values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by a comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer-generated DTA data demonstrates the validity of the proposed analytical DTA method.
Time-Delayed Two-Step Selective Laser Photodamage of Dye-Biomolecule Complexes
NASA Astrophysics Data System (ADS)
Andreoni, A.; Cubeddu, R.; de Silvestri, S.; Laporta, P.; Svelto, O.
1980-08-01
A scheme is proposed for laser-selective photodamage of biological molecules, based on time-delayed two-step photoionization of a dye molecule bound to the biomolecule. The validity of the scheme is experimentally demonstrated in the case of the dye Proflavine, bound to synthetic polynucleotides.
Alonso-Torres, Beatriz; Hernández-Pérez, José Alfredo; Sierra-Espinoza, Fernando; Schenker, Stefan; Yeretzian, Chahan
2013-01-01
Heat and mass transfer in individual coffee beans during roasting were simulated using computational fluid dynamics (CFD). Numerical equations for heat and mass transfer inside the coffee bean were solved using the finite volume technique in the commercial CFD code Fluent; the software was complemented with specific user-defined functions (UDFs). To experimentally validate the numerical model, a single coffee bean was placed in a cylindrical glass tube and roasted by a hot air flow, using the identical geometrical 3D configuration and hot air flow conditions as the ones used for numerical simulations. Temperature and humidity calculations obtained with the model were compared with experimental data. The model predicts the actual process quite accurately and represents a useful approach to monitor the coffee roasting process in real time. It provides valuable information on time-resolved process variables that are otherwise difficult to obtain experimentally, but critical to a better understanding of the coffee roasting process at the individual bean level. This includes variables such as time-resolved 3D profiles of bean temperature and moisture content, and temperature profiles of the roasting air in the vicinity of the coffee bean.
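As a loose one-dimensional analogue of the transient heat conduction solved by the authors' CFD model (not their geometry, material properties, or numerical method), an explicit finite-difference sketch with placeholder values:

```python
import numpy as np

# Explicit finite-difference solution of 1D transient conduction into a slab,
# surface held at roasting-air temperature. All values are placeholders.
alpha = 1.5e-7                 # thermal diffusivity, m^2/s (assumed)
L, n = 0.004, 41               # half-thickness 4 mm, grid points
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha     # explicit stability requires coefficient < 0.5

T = np.full(n, 25.0)           # initial bean temperature, degC
T_air = 200.0                  # roasting air temperature, degC (assumed)
for _ in range(2000):          # roughly the first minute of heating
    T[-1] = T_air                                           # surface node
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    T[0] = T[1]                                             # symmetry at centre
print(25.0 < T[0] < T_air)     # centre warms but lags the air temperature
```

The lag between centre and surface temperature is exactly the kind of time-resolved internal variable the abstract notes is hard to obtain experimentally.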
Hall, Damien; Minton, Allen P
2005-10-15
We report here an examination of the validity of the experimental practice of using solution turbidity to study the polymerization kinetics of microtubule formation. The investigative approach proceeds via numerical solution of model rate equations to yield the time dependence of each microtubule species, followed by the calculation of the time- and wavelength-dependent turbidity generated by the calculated distribution of rod lengths. The wavelength dependence of the turbidity along the time course is analyzed to search for generalized kinetic regimes that satisfy a constant proportionality relationship between the observed turbidity and the weight concentration of polymerized tubulin. An empirical analysis, which permits valid interpretation of turbidity data for distributions of microtubules that are not long relative to the wavelength of incident light, is proposed. The basic correctness of the simulation work is shown by the analysis of the experimental time dependence of the turbidity wavelength exponent for microtubule formation in taxol-supplemented 0.1 M Pipes buffer (1 mM GTP, 1 mM EGTA, 1 mM MgSO4, pH 6.4). We believe that the general findings and principles outlined here are applicable to studies of other fibril-forming systems that use turbidity as a marker of polymerization progress.
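The turbidity wavelength exponent analyzed above is the local slope of log-turbidity versus log-wavelength; a sketch with a synthetic power law (the exponent of 3 is assumed for illustration, not a measured value):

```python
import numpy as np

# Estimate n = -d ln(tau)/d ln(lambda) from turbidity at several wavelengths.
# The data follow an assumed lambda^-3 power law (synthetic, not measured).
wavelengths = np.array([350.0, 400.0, 450.0, 500.0])  # nm
turbidity = 2.0e7 * wavelengths ** -3.0

slope = np.polyfit(np.log(wavelengths), np.log(turbidity), 1)[0]
n_exponent = -slope
print(round(n_exponent, 3))  # recovers the assumed exponent, 3.0
```

Tracking how this exponent evolves during polymerization is the empirical handle the abstract uses to decide when turbidity remains proportional to the mass of polymerized tubulin.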
Probing the free energy landscape of the FBP28WW domain using multiple techniques.
Periole, Xavier; Allen, Lucy R; Tamiola, Kamil; Mark, Alan E; Paci, Emanuele
2009-05-01
The free-energy landscape of a small protein, the FBP28 WW domain, has been explored using molecular dynamics (MD) simulations with alternative descriptions of the molecule. The molecular models used range from coarse-grained to all-atom with either an implicit or explicit treatment of the solvent. Sampling of conformation space was performed using both conventional and temperature-replica exchange MD simulations. Experimental chemical shifts and NOEs were used to validate the simulations, and experimental phi values both for validation and as restraints. This combination of different approaches has provided insight into the free energy landscape and barriers encountered by the protein during folding and enabled the characterization of native, denatured and transition states which are compatible with the available experimental data. All the molecular models used stabilize well-defined native and denatured basins; however, the degree of agreement with the available experimental data varies. While the most detailed, explicit solvent model predicts the data reasonably accurately, it does not fold despite a simulation time 10 times that of the experimental folding time. The less detailed models performed poorly relative to the explicit solvent model: an implicit solvent model stabilizes a ground state which differs from the experimental native state, and a structure-based model underestimates the size of the barrier between the two states. The use of experimental phi values both as restraints, and to extract structures from unfolding simulations, results in conformations which, although not necessarily true transition states, appear to share the geometrical characteristics of transition state structures. In addition to characterizing the native, transition and denatured states of this particular system in this work, the advantages and limitations of using varying levels of representation are discussed. 2008 Wiley Periodicals, Inc.
Considering RNAi experimental design in parasitic helminths.
Dalzell, Johnathan J; Warnock, Neil D; McVeigh, Paul; Marks, Nikki J; Mousley, Angela; Atkinson, Louise; Maule, Aaron G
2012-04-01
Almost a decade has passed since the first report of RNA interference (RNAi) in a parasitic helminth. Whilst much progress has been made with RNAi informing gene function studies in disparate nematode and flatworm parasites, substantial and seemingly prohibitive difficulties have been encountered in some species, hindering progress. An appraisal of current practices, trends and ideals of RNAi experimental design in parasitic helminths is both timely and necessary for a number of reasons: firstly, the increasing availability of parasitic helminth genome/transcriptome resources means there is a growing need for gene function tools such as RNAi; secondly, fundamental differences and unique challenges exist for parasite species which do not apply to model organisms; thirdly, the inherent variation in experimental design, and reported difficulties with reproducibility undermine confidence. Ideally, RNAi studies of gene function should adopt standardised experimental design to aid reproducibility, interpretation and comparative analyses. Although the huge variations in parasite biology and experimental endpoints make RNAi experimental design standardization difficult or impractical, we must strive to validate RNAi experimentation in helminth parasites. To aid this process we identify multiple approaches to RNAi experimental validation and highlight those which we deem to be critical for gene function studies in helminth parasites.
Nonlinear System Identification for Aeroelastic Systems with Application to Experimental Data
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2008-01-01
Representation and identification of a nonlinear aeroelastic pitch-plunge system as a model of the Nonlinear AutoRegressive, Moving Average eXogenous (NARMAX) class is considered. A nonlinear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to the aeroelastic dynamics, and their properties are demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (1) the outputs of the NARMAX model closely match those generated using continuous-time methods, and (2) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.
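As a rough illustration of the identification step described above, the sketch below fits a polynomial NARMAX-type difference equation by least squares. The system and its coefficients are hypothetical stand-ins for illustration, not the paper's actual pitch-plunge model.

```python
import numpy as np

# Hypothetical 2nd-order polynomial difference equation of NARMAX type
# (NOT the paper's pitch-plunge model):
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + c1*y[k-1]**3
rng = np.random.default_rng(0)
true_theta = np.array([1.2, -0.5, 0.5, -0.05])
u = rng.standard_normal(2000)
y = np.zeros(2000)
for k in range(2, 2000):
    y[k] = true_theta @ np.array([y[k-1], y[k-2], u[k-1], y[k-1]**3])

# Stack the (nonlinear) regressors and estimate the parameters by least squares.
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], y[1:-1]**3])
theta_hat, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(theta_hat.round(3))
```

Because the simulated data are noise-free and the model structure is known, least squares recovers the generating coefficients essentially exactly; with real data, structure selection and noise modelling are the hard parts.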
Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick; Klein, Vladislav
2011-01-01
Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.
Does the Finger-to-Nose Test measure upper limb coordination in chronic stroke?
Rodrigues, Marcos R M; Slimovitch, Matthew; Chilingaryan, Gevorg; Levin, Mindy F
2017-01-23
We aimed to kinematically validate that the time to perform the Finger-to-Nose Test (FNT) assesses coordination by determining its construct, convergent and discriminant validity. Experimental, criterion standard study. Both clinical and experimental evaluations were done at a research facility in a rehabilitation hospital. Forty individuals (20 individuals with chronic stroke and 20 healthy, age- and gender-matched individuals) participated. Both groups performed two blocks of 10 to-and-fro pointing movements (non-dominant/affected arm) between a sagittal target and the nose (ReachIn, ReachOut) at a self-paced speed. Time to perform the test was the main outcome. Kinematics (Optotrak, 100 Hz) and clinical impairment/activity levels were evaluated. Spatiotemporal coordination was assessed with the slope (IJC) and cross-correlation (LAG) between elbow and shoulder movements. Compared to controls, individuals with stroke (Fugl-Meyer Assessment, FMA-UE: 51.9 ± 13.2; Box & Blocks, BBT: 72.1 ± 26.9%) made more curved endpoint trajectories using less shoulder horizontal-abduction. For construct validity, shoulder range (β = 0.127), LAG (β = 0.855) and IJC (β = -0.191) explained 82% of FNT-time variance for ReachIn, and LAG (β = 0.971) explained 94% for ReachOut in patients with stroke. In contrast, only LAG explained 62% (β = 0.790) and 79% (β = 0.889) of variance for ReachIn and ReachOut respectively in controls. For convergent validity, FNT-time correlated with FMA-UE (r = -0.67, p < 0.01), FMA-Arm (r = -0.60, p = 0.005), biceps spasticity (r = 0.39, p < 0.05) and BBT (r = -0.56, p < 0.01). A cut-off time of 10.6 s discriminated between mild and moderate-to-severe impairment (discriminant validity). Each additional second represented a 42% increase in the odds of greater impairment. For this version of the FNT, the time to perform the test showed construct, convergent and discriminant validity as a measure of upper limb coordination in stroke.
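The LAG measure used in such studies, the lag at which the cross-correlation between two joint-angle signals peaks, can be sketched in a few lines. The signals below are synthetic Gaussian pulses chosen for illustration, not the study's data; only the 100 Hz sampling rate matches the abstract.

```python
import numpy as np

def xcorr_lag(a, b, fs=100.0):
    """Lag (in s) at which the cross-correlation of two zero-meaned
    signals peaks; a positive value means `a` lags behind `b`."""
    a = a - a.mean()
    b = b - b.mean()
    c = np.correlate(a, b, mode="full")
    return (np.argmax(c) - (len(b) - 1)) / fs

# Synthetic "shoulder" and "elbow" traces sampled at 100 Hz, with the
# elbow trailing the shoulder by 50 ms.
t = np.arange(0.0, 2.0, 0.01)
shoulder = np.exp(-((t - 0.80) ** 2) / 0.02)
elbow = np.exp(-((t - 0.85) ** 2) / 0.02)
print(xcorr_lag(elbow, shoulder))  # 0.05
```

A lag of zero indicates perfectly synchronous joint motion; deviations from zero quantify the interjoint decoupling that the FNT time appears to capture.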
NASA Technical Reports Server (NTRS)
Magee, Todd E.; Wilcox, Peter A.; Fugal, Spencer R.; Acheson, Kurt E.; Adamson, Eric E.; Bidwell, Alicia L.; Shaw, Stephen G.
2013-01-01
This report describes the work conducted by The Boeing Company under American Recovery and Reinvestment Act (ARRA) and NASA funding to experimentally validate the conceptual design of a supersonic airliner feasible for entry into service in the 2018 to 2020 timeframe (NASA N+2 generation). The report discusses the design, analysis and development of a low-boom concept that meets aggressive sonic boom and performance goals for a cruise Mach number of 1.8. The design is achieved through integrated multidisciplinary optimization tools. The report also describes the detailed design and fabrication of both sonic boom and performance wind tunnel models of the low-boom concept. Additionally, a description of the detailed validation wind tunnel testing that was performed with the wind tunnel models is provided along with validation comparisons with pretest Computational Fluid Dynamics (CFD). Finally, the report describes the evaluation of existing NASA sonic boom pressure rail measurement instrumentation and a detailed description of new sonic boom measurement instrumentation that was constructed for the validation wind tunnel testing.
Testing and validating environmental models
Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.
1996-01-01
Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. 
We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series against data time series, and plotting predicted versus observed values) have little diagnostic power. We propose that it may be more useful to statistically extract the relationships of primary interest from the time series, and test the model directly against them.
NASA Astrophysics Data System (ADS)
Min, Qi; Su, Maogen; Wang, Bo; Cao, Shiquan; Sun, Duixiong; Dong, Chenzhong
2018-05-01
The radiation and dynamics properties of a laser-produced carbon plasma in vacuum were studied experimentally with the aid of a spatio-temporally resolved emission spectroscopy technique. In addition, a radiation hydrodynamics model based on the fluid dynamic equations and the radiative transfer equation was presented, and calculation of the charge states was performed within a time-dependent collisional radiative model. The detailed temporal and spatial evolution of plasma parameters, such as velocity, electron temperature, charge state distribution, and energy level population, has been analyzed, along with the various atomic processes involved. At the same time, the effects of different atomic processes on the charge state distribution were examined. Finally, the validity of assuming local thermodynamic equilibrium in the carbon plasma expansion was checked, and the results clearly indicate that the assumption was valid only at the initial (<80 ns) stage of plasma expansion. At longer delay times, it was not applicable near the plasma boundary because of a sharp drop in plasma temperature and electron density.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... experimental conditions for the validation study and subsequent use during decontamination. The following experimental conditions apply for any solvent: (a) Temperature and pressure. Conduct the validation study and...
Code of Federal Regulations, 2012 CFR
2012-07-01
... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... experimental conditions for the validation study and subsequent use during decontamination. The following experimental conditions apply for any solvent: (a) Temperature and pressure. Conduct the validation study and...
Code of Federal Regulations, 2013 CFR
2013-07-01
... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... experimental conditions for the validation study and subsequent use during decontamination. The following experimental conditions apply for any solvent: (a) Temperature and pressure. Conduct the validation study and...
Thermal analysis of electron gun for travelling wave tubes
NASA Astrophysics Data System (ADS)
Bhat, K. S.; Sreedevi, K.; Ravi, M.
2006-11-01
Thermal analysis of a Pierce-type electron gun using the FEM software ANSYS and its experimental validation are presented in this paper. Thermal analysis of the electron gun structure has been carried out to determine the effect of heater power on steady-state temperature and warm-up time. The thermal drain of the supporting structure has also been analyzed for different materials. These results were experimentally verified in an electron gun, and the experimental results closely match the ANSYS results.
ERIC Educational Resources Information Center
Barnette, J. Jackson; Wallis, Anne Baber
2005-01-01
We rely a great deal on the schematic descriptions that represent experimental and quasi-experimental design arrangements, as well as the discussions of threats to validity associated with these, provided by Campbell and his associates: Stanley, Cook, and Shadish. Some of these designs include descriptions of treatments removed, removed and then…
Monte Carlo simulation of Ray-Scan 64 PET system and performance evaluation using GATE toolkit
NASA Astrophysics Data System (ADS)
Li, Suying; Zhang, Qiushi; Vuletic, Ivan; Xie, Zhaoheng; Yang, Kun; Ren, Qiushi
2017-02-01
In this study, we aimed to develop a GATE model for the simulation of Ray-Scan 64 PET scanner and model its performance characteristics. A detailed implementation of system geometry and physical process were included in the simulation model. Then we modeled the performance characteristics of Ray-Scan 64 PET system for the first time, based on National Electrical Manufacturers Association (NEMA) NU-2 2007 protocols and validated the model against experimental measurement, including spatial resolution, sensitivity, counting rates and noise equivalent count rate (NECR). Moreover, an accurate dead time module was investigated to simulate the counting rate performance. Overall results showed reasonable agreement between simulation and experimental data. The validation results showed the reliability and feasibility of the GATE model to evaluate major performance of Ray-Scan 64 PET system. It provided a useful tool for a wide range of research applications.
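As a reference point for the counting-rate figure of merit mentioned above, here is a minimal sketch of the noise-equivalent count rate in its simplest NEMA form (published variants weight the randoms term differently); the rates used are made-up illustration values, not Ray-Scan 64 measurements.

```python
def necr(trues, scatters, randoms):
    """Noise-equivalent count rate, NECR = T^2 / (T + S + R),
    in the simplest NEMA NU-2 form (variants scale the randoms term)."""
    return trues ** 2 / (trues + scatters + randoms)

# Illustrative rates: 100 kcps trues, 40 kcps scatter, 25 kcps randoms.
print(necr(100e3, 40e3, 25e3))  # ~60.6 kcps
```

NECR penalizes scatter and randoms because they add noise without adding signal, which is why dead-time and counting-rate modelling matter for the simulation's fidelity.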
Chen, Lei; Zhong, Hai-ying; Kuang, Jian-fei; Li, Jian-guo; Lu, Wang-jin; Chen, Jian-ye
2011-08-01
Reverse transcription quantitative real-time PCR (RT-qPCR) is a sensitive technique for quantifying gene expression, but its success depends on the stability of the reference gene(s) used for data normalization. Only a few studies on validation of reference genes have been conducted in fruit trees, and none yet in banana. In the present work, 20 candidate reference genes were selected, and their expression stability in 144 banana samples was evaluated and analyzed using two algorithms, geNorm and NormFinder. The samples consisted of eight sample sets collected under different experimental conditions, including various tissues, developmental stages, postharvest ripening, stresses (chilling, high temperature, and pathogen), and hormone treatments. Our results showed that different suitable reference gene(s) or combinations of reference genes should be selected for normalization depending on the experimental conditions. The RPS2 and UBQ2 genes were validated as the most suitable reference genes across all tested samples. More importantly, our data further showed that the widely used reference genes ACT and GAPDH were not the most suitable reference genes in many banana sample sets. In addition, the expression of MaEBF1, a gene of interest that plays an important role in regulating fruit ripening, was examined under different experimental conditions to further confirm the validated reference genes. Taken together, our results provide guidelines for reference gene selection under different experimental conditions and a foundation for more accurate and widespread use of RT-qPCR in banana.
NASA Astrophysics Data System (ADS)
Amran, M. A. M.; Idayu, N.; Faizal, K. M.; Sanusi, M.; Izamshah, R.; Shahir, M.
2016-11-01
In this study, the main objective is to determine the percentage difference in part weight between experimental and simulation work. The effect of process parameters on the weight of the plastic part is also investigated. The process parameters involved were mould temperature, melt temperature, injection time and cooling time. Autodesk Simulation Moldflow software was used to run the simulation of the plastic part, and the Taguchi method was selected as the Design of Experiment approach. The simulation result was then validated against the experimental result. It was found that the minimum and maximum percentage differences in part weight between simulation and experimental work are 0.35% and 1.43%, respectively. In addition, the most significant parameter affecting part weight is the mould temperature, followed by melt temperature, injection time and cooling time.
Superstatistical fluctuations in time series: Applications to share-price dynamics and turbulence
NASA Astrophysics Data System (ADS)
van der Straeten, Erik; Beck, Christian
2009-09-01
We report a general technique to study a given experimental time series with superstatistics. Crucial for the applicability of the superstatistics concept is the existence of a parameter β that fluctuates on a large time scale as compared to the other time scales of the complex system under consideration. The proposed method extracts the main superstatistical parameters out of a given data set and examines the validity of the superstatistical model assumptions. We test the method thoroughly with surrogate data sets. Then the applicability of the superstatistical approach is illustrated using real experimental data. We study two examples, velocity time series measured in turbulent Taylor-Couette flows and time series of log returns of the closing prices of some stock market indices.
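A minimal sketch of the kind of parameter extraction described above: estimate the slowly fluctuating β as a windowed inverse variance and check that it varies on a much longer time scale than the signal itself. The surrogate data and window length are illustrative choices, not the paper's method in detail.

```python
import numpy as np

def local_beta(x, window):
    """Superstatistical beta estimated as the inverse variance of the
    signal in non-overlapping windows of the given length."""
    n = len(x) // window
    chunks = x[:n * window].reshape(n, window)
    return 1.0 / chunks.var(axis=1, ddof=1)

# Surrogate series: Gaussian noise whose variance switches slowly between
# two regimes -- a caricature of a superstatistical time series in which
# beta fluctuates on a time scale (5000 samples) much longer than the
# estimation window (500 samples).
rng = np.random.default_rng(1)
sigma = np.repeat([1.0, 3.0, 1.0, 3.0], 5000)
x = rng.standard_normal(20000) * sigma
beta = local_beta(x, 500)
print(beta[:3], beta[10:13])  # near 1.0 in the first regime, near 1/9 in the second
```

The separation of time scales is exactly what makes the windowed estimate meaningful: if β fluctuated as fast as the signal, no window length would isolate it.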
NASA Technical Reports Server (NTRS)
Hong, S. D.; Fedors, R. F.; Schwarzl, F.; Moacanin, J.; Landel, R. F.
1981-01-01
A theoretical analysis of the tensile stress-strain relation of elastomers at constant strain rate is presented which shows that the time and the stress effect are separable if the experimental time scale coincides with a segment of the relaxation modulus that can be described by a single power law. It is also shown that time-strain separability is valid if the strain function is linearly proportional to the Cauchy strain, and that when time-strain separability holds, two strain-dependent quantities can be obtained experimentally. In the case where time and strain effect are not separable, superposition can be achieved only by using temperature and strain-dependent shift factors.
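The separability argument can be made concrete with a short worked example (the notation here is illustrative, not necessarily the paper's): take a single power-law relaxation modulus $E(t) = E_0 t^{-n}$ and a constant strain rate $R$, so that $\varepsilon(t) = Rt$. Then

```latex
\sigma(t) = \int_0^t E(t-s)\,\frac{d\varepsilon}{ds}\,ds
          = E_0 R \int_0^t (t-s)^{-n}\,ds
          = \frac{E_0 R\, t^{1-n}}{1-n}
          = E_0 t^{-n}\cdot\frac{\varepsilon(t)}{1-n},
```

i.e. the stress factors into a purely time-dependent part (the relaxation modulus) and a strain-dependent part, which is the time-stress separability the abstract describes for the linear-strain case.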
Parsons, Thomas D.
2015-01-01
An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target’s internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences. PMID:26696869
NASA Astrophysics Data System (ADS)
Allred, C. Jeff; Churchill, David; Buckner, Gregory D.
2017-07-01
This paper presents a novel approach to monitoring rotor blade flap, lead-lag and pitch using an embedded gyroscope and symmetrically mounted MEMS accelerometers. The central hypothesis is that differential accelerometer measurements are proportional only to blade motion; fuselage acceleration and blade bending are inherently compensated for. The inverse kinematic relationships (from blade position to acceleration and angular rate) are derived and simulated to validate this hypothesis. An algorithm to solve the forward kinematic relationships (from sensor measurement to blade position) is developed using these simulation results. This algorithm is experimentally validated using a prototype device. The experimental results justify continued development of this kinematic estimation approach.
NASA Technical Reports Server (NTRS)
Noll, Thomas E.; Perry, Boyd, III; Tiffany, Sherwood H.; Cole, Stanley R.; Buttrill, Carey S.; Adams, William M., Jr.; Houck, Jacob A.; Srinathkumar, S.; Mukhopadhyay, Vivek; Pototzky, Anthony S.
1989-01-01
The status of the joint NASA/Rockwell Active Flexible Wing Wind-Tunnel Test Program is described. The objectives are to develop and validate the analysis, design, and test methodologies required to apply multifunction active control technology for improving aircraft performance and stability. Major tasks include designing digital multi-input/multi-output flutter-suppression and rolling-maneuver-load alleviation concepts for a flexible full-span wind-tunnel model, obtaining an experimental data base for the basic model and each control concept and providing comparisons between experimental and analytical results to validate the methodologies. The opportunity is provided to improve real-time simulation techniques and to gain practical experience with digital control law implementation procedures.
Orchestrating TRANSP Simulations for Interpretative and Predictive Tokamak Modeling with OMFIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grierson, B. A.; Yuan, X.; Gorelenkova, M.
TRANSP simulations are being used in the OMFIT workflow manager to enable a machine-independent means of experimental analysis, postdictive validation, and predictive time-dependent simulations on the DIII-D, NSTX, JET and C-MOD tokamaks. The procedures for preparing the input data from plasma profile diagnostics and equilibrium reconstruction, as well as processing of the time-dependent heating and current drive sources and assumptions about the neutral recycling, vary across machines, but are streamlined by using a common workflow manager. Settings for TRANSP simulation fidelity are incorporated into the OMFIT framework, contrasting between-shot analysis, power balance, and fast-particle simulations. A previously established series of data consistency metrics are computed, such as comparison of experimental vs. calculated neutron rate, equilibrium stored energy vs. total stored energy from profile and fast-ion pressure, and experimental vs. computed surface loop voltage. Discrepancies between data consistency metrics can indicate errors in input quantities such as electron density profile or Zeff, or indicate anomalous fast-particle transport. Measures to assess the sensitivity of the verification metrics to input quantities are provided by OMFIT, including scans of the input profiles and standardized post-processing visualizations. For predictive simulations, TRANSP uses GLF23 or TGLF to predict core plasma profiles, with user-defined boundary conditions in the outer region of the plasma. ITPA validation metrics are provided in post-processing to assess the transport model validity. By using OMFIT to orchestrate the steps for experimental data preparation, selection of operating mode, submission, post-processing and visualization, we have streamlined and standardized the usage of TRANSP.
Orchestrating TRANSP Simulations for Interpretative and Predictive Tokamak Modeling with OMFIT
Grierson, B. A.; Yuan, X.; Gorelenkova, M.; ...
2018-02-21
Validation of reference genes for quantitative gene expression analysis in experimental epilepsy.
Sadangi, Chinmaya; Rosenow, Felix; Norwood, Braxton A
2017-12-01
To grasp the molecular mechanisms and pathophysiology underlying epilepsy development (epileptogenesis) and epilepsy itself, it is important to understand the gene expression changes that occur during these phases. Quantitative real-time polymerase chain reaction (qPCR) is a technique that rapidly and accurately determines gene expression changes. It is crucial, however, that stable reference genes are selected for each experimental condition to ensure that accurate values are obtained for genes of interest. If reference genes are unstably expressed, this can lead to inaccurate data and erroneous conclusions. To date, epilepsy studies have used mostly single, nonvalidated reference genes. This is the first study to systematically evaluate reference genes in male Sprague-Dawley rat models of epilepsy. We assessed 15 potential reference genes in hippocampal tissue obtained from two different models during epileptogenesis, one model during chronic epilepsy, and a model of noninjurious seizures. Reference gene ranking varied between models and also differed between epileptogenesis and chronic epilepsy time points. There was also some variance between the four mathematical models used to rank reference genes. Notably, we found novel reference genes to be more stably expressed than those most often used in experimental epilepsy studies. The consequence of these findings is that reference genes suitable for one epilepsy model may not be appropriate for others, and that reference gene stability can change over time. It is, therefore, critically important to validate potential reference genes before using them as normalizing factors in expression analysis in order to ensure accurate, valid results.
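One widely used stability criterion behind such rankings is the geNorm-style M value (average pairwise variation). A minimal sketch follows, with toy log-expression data standing in for real qPCR measurements; the gene count and noise levels are invented for illustration.

```python
import numpy as np

def genorm_m(log_expr):
    """geNorm-style stability measure M: for each candidate gene, the mean
    standard deviation of its pairwise log-expression ratios with all other
    candidates across samples. Lower M = more stably expressed."""
    g = log_expr.shape[0]
    M = np.empty(g)
    for j in range(g):
        sds = [np.std(log_expr[j] - log_expr[k], ddof=1)
               for k in range(g) if k != j]
        M[j] = np.mean(sds)
    return M

# Toy data: 3 candidate genes x 12 samples (log2 expression). Genes 0 and 1
# co-vary tightly; gene 2 drifts with the experimental condition.
rng = np.random.default_rng(42)
base = rng.normal(10, 1, 12)                  # shared sample-loading effect
expr = np.vstack([base + rng.normal(0, 0.05, 12),
                  base + rng.normal(0, 0.05, 12),
                  base + np.linspace(0, 3, 12)])
print(genorm_m(expr).round(2))  # gene 2 scores worst (highest M)
```

Pairwise ratios cancel the shared loading effect, so only genuine expression instability inflates M, which is why condition-responsive genes like the third one rank poorly.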
Fast scattering simulation tool for multi-energy x-ray imaging
NASA Astrophysics Data System (ADS)
Sossin, A.; Tabary, J.; Rebuffel, V.; Létang, J. M.; Freud, N.; Verger, L.
2015-12-01
A combination of Monte Carlo (MC) and deterministic approaches was employed as a means of creating a simulation tool capable of providing energy-resolved x-ray primary and scatter images within a reasonable time interval. Libraries of Sindbad, a previously developed x-ray simulation software, were used in the development. The scatter simulation capabilities of the tool were validated both through simulation with the aid of GATE and experimentally using a spectrometric CdTe detector. A simple cylindrical phantom with cavities and an aluminum insert was used. Cross-validation with GATE showed good agreement, with a global spatial error of 1.5% and a maximum scatter spectrum error of around 6%. Experimental validation also supported the accuracy of the simulations obtained from the developed software, with a global spatial error of 1.8% and a maximum error of around 8.5% in the scatter spectra.
Validation of Slosh Modeling Approach Using STAR-CCM+
NASA Technical Reports Server (NTRS)
Benson, David J.; Ng, Wanyi
2018-01-01
Without an adequate understanding of propellant slosh, the spacecraft attitude control system may be inadequate to control the spacecraft, or there may be an unexpected loss of science observation time due to higher slosh settling times. Computational fluid dynamics (CFD) is used to model propellant slosh, and STAR-CCM+ is a commercially available CFD code. This paper seeks to validate the CFD modeling approach via a comparison between STAR-CCM+ liquid slosh modeling results and experimentally, empirically, and analytically derived results. The geometries examined are a bare right-cylinder tank and a right cylinder with a single ring baffle.
Crowdsourcing for Cognitive Science – The Utility of Smartphones
Brown, Harriet R.; Zeidman, Peter; Smittenaar, Peter; Adams, Rick A.; McNab, Fiona; Rutledge, Robb B.; Dolan, Raymond J.
2014-01-01
By 2015, there will be an estimated two billion smartphone users worldwide. This technology presents exciting opportunities for cognitive science as a medium for rapid, large-scale experimentation and data collection. At present, cost and logistics limit most study populations to small samples, restricting the experimental questions that can be addressed. In this study we investigated whether the mass collection of experimental data using smartphone technology is valid, given the variability of data collection outside of a laboratory setting. We presented four classic experimental paradigms as short games, available as a free app and over the first month 20,800 users submitted data. We found that the large sample size vastly outweighed the noise inherent in collecting data outside a controlled laboratory setting, and show that for all four games canonical results were reproduced. For the first time, we provide experimental validation for the use of smartphones for data collection in cognitive science, which can lead to the collection of richer data sets and a significant cost reduction as well as provide an opportunity for efficient phenotypic screening of large populations. PMID:25025865
Liu, Huolong; Galbraith, S C; Ricart, Brendon; Stanton, Courtney; Smith-Goettler, Brandye; Verdi, Luke; O'Connor, Thomas; Lee, Sau; Yoon, Seongkyu
2017-06-15
In this study, the influence of key process variables (screw speed, throughput, and liquid-to-solid (L/S) ratio) of a continuous twin screw wet granulation (TSWG) process was investigated using a central composite face-centered (CCF) experimental design method. Regression models were developed to predict the process responses (motor torque, granule residence time), granule properties (size distribution, volume average diameter, yield, relative width, flowability) and tablet properties (tensile strength). The effects of the three key process variables were analyzed via contour and interaction plots. The experimental results demonstrated that all the process responses, granule properties and tablet properties are influenced by changing the screw speed, throughput and L/S ratio. The TSWG process was optimized to produce granules with a target volume average diameter of 150 μm and a yield of 95% based on the developed regression models. A design space (DS) was built for a volume average granule diameter between 90 and 200 μm and a granule yield larger than 75%, with a failure probability analysis using Monte Carlo simulations. Validation experiments confirmed the robustness and accuracy of the DS generated using the CCF experimental design in optimizing a continuous TSWG process. Copyright © 2017 Elsevier B.V. All rights reserved.
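The failure-probability analysis behind such a design space can be sketched as a Monte Carlo loop that perturbs the process variables and counts how often the regression predictions fall outside the design-space limits (90-200 μm diameter, at least 75% yield). The regression coefficients below are illustrative placeholders, not the paper's fitted values; the noise level is likewise an assumption.

```python
import random

# Hypothetical regression models (illustrative coefficients, not the
# paper's fitted values): predict volume-average diameter (um) and
# yield (%) from screw speed (rpm), throughput (kg/h) and L/S ratio.
def predict_diameter(speed, throughput, ls_ratio):
    return 30.0 + 0.05 * speed + 5.0 * throughput + 200.0 * ls_ratio

def predict_yield(speed, throughput, ls_ratio):
    return 40.0 + 0.02 * speed + 2.0 * throughput + 50.0 * ls_ratio

def failure_probability(speed, throughput, ls_ratio, n=10_000, rel_noise=0.05):
    """Monte Carlo estimate of the chance that an operating point
    violates the design-space limits (90-200 um diameter, >= 75% yield)."""
    random.seed(0)  # reproducible estimate
    failures = 0
    for _ in range(n):
        # Perturb inputs to mimic process variability (assumed Gaussian).
        s = speed * random.gauss(1.0, rel_noise)
        t = throughput * random.gauss(1.0, rel_noise)
        r = ls_ratio * random.gauss(1.0, rel_noise)
        d = predict_diameter(s, t, r)
        y = predict_yield(s, t, r)
        if not (90.0 <= d <= 200.0) or y < 75.0:
            failures += 1
    return failures / n
```

An operating point would be admitted to the design space only if its estimated failure probability falls below a chosen tolerance.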
Demekhin, E A; Kalaidin, E N; Kalliadasis, S; Vlaskin, S Yu
2010-09-01
We validate experimentally the Kapitsa-Shkadov model utilized in the theoretical studies by Demekhin et al. [Phys. Fluids 19, 114103 (2007), doi:10.1063/1.2793148; Phys. Fluids 19, 114104 (2007), doi:10.1063/1.2793149] of surface turbulence on a thin liquid film flowing down a vertical planar wall. For water at 15 °C, surface turbulence typically occurs at an inlet Reynolds number of ≃40. Of particular interest is to assess experimentally the predictions of the model for three-dimensional nonlinear localized coherent structures, which represent elementary processes of surface turbulence. For this purpose we devise simple experiments to investigate the instabilities and transitions leading to such structures. Our experimental results are in good agreement with the theoretical predictions of the model. We also perform time-dependent computations for the formation of coherent structures and their interaction with localized structures of smaller amplitude on the surface of the film.
Physical Justification for Negative Remanent Magnetization in Homogeneous Nanoparticles
Gu, Shuo; He, Weidong; Zhang, Ming; Zhuang, Taisen; Jin, Yi; ElBidweihy, Hatem; Mao, Yiwu; Dickerson, James H.; Wagner, Michael J.; Torre, Edward Della; Bennett, Lawrence H.
2014-01-01
The phenomenon of negative remanent magnetization (NRM) has been observed experimentally in a number of heterogeneous magnetic systems and has been considered anomalous. The existence of NRM in homogeneous magnetic materials is still in debate, mainly due to the lack of compelling support from experimental data and a convincing theoretical explanation for its thermodynamic validation. Here we resolve the long-existing controversy by presenting experimental evidence and physical justification that NRM is real in a prototype homogeneous ferromagnetic nanoparticle, a europium sulfide nanoparticle. We provide novel insights into major and minor hysteresis behavior that illuminate the true nature of the observed inverted hysteresis and validate its thermodynamic permissibility and, for the first time, present counterintuitive magnetic aftereffect behavior that is consistent with the mechanism of magnetization reversal, possessing a unique capability to identify NRM. The origin and conditions of NRM are explained quantitatively via a wasp-waist model, in combination with energy calculations. PMID:25183061
NASA Technical Reports Server (NTRS)
Credeur, Leonard; Houck, Jacob A.; Capron, William R.; Lohr, Gary W.
1990-01-01
A description and results are presented of a study to measure the performance and reaction of airline flight crews, in a full workload DC-9 cockpit, flying in a real-time simulation of an air traffic control (ATC) concept called Traffic Intelligence for the Management of Efficient Runway-scheduling (TIMER). Experimental objectives were to verify earlier fast-time TIMER time-delivery precision results and obtain data for the validation or refinement of existing computer models of pilot/airborne performance. Experimental data indicated a runway threshold, interarrival-time-error standard deviation in the range of 10.4 to 14.1 seconds. Other real-time system performance parameters measured include approach speeds, response time to controller turn instructions, bank angles employed, and ATC controller message delivery-time errors.
Computer aided manual validation of mass spectrometry-based proteomic data.
Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M
2013-06-15
Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.
Validating LES for Jet Aeroacoustics
NASA Technical Reports Server (NTRS)
Bridges, James; Wernet, Mark P.
2011-01-01
Engineers charged with making jet aircraft quieter have long dreamed of being able to see exactly how turbulent eddies produce sound and this dream is now coming true with the advent of large eddy simulation (LES). Two obvious challenges remain: validating the LES codes at the resolution required to see the fluid-acoustic coupling, and the interpretation of the massive datasets that are produced. This paper addresses the former, the use of advanced experimental techniques such as particle image velocimetry (PIV) and Raman and Rayleigh scattering, to validate the computer codes and procedures used to create LES solutions. This paper argues that the issue of accuracy of the experimental measurements be addressed by cross-facility and cross-disciplinary examination of modern datasets along with increased reporting of internal quality checks in PIV analysis. Further, it argues that the appropriate validation metrics for aeroacoustic applications are increasingly complicated statistics that have been shown in aeroacoustic theory to be critical to flow-generated sound, such as two-point space-time velocity correlations. A brief review of data sources available is presented along with examples illustrating cross-facility and internal quality checks required of the data before it should be accepted for validation of LES.
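The two-point space-time velocity correlations named above as a validation metric can be sketched for two fluctuating velocity signals sampled at the same rate; this is a generic pure-Python sketch assuming statistically stationary signals, not the paper's processing chain.

```python
def two_point_correlation(u1, u2, max_lag):
    """Normalized space-time correlation of two fluctuating velocity
    signals u1(t), u2(t) (e.g. at two probe locations):
        R(tau) = <u1'(t) * u2'(t + tau)> / (std(u1) * std(u2))
    Returns R for lags 0..max_lag."""
    n = len(u1)
    m1 = sum(u1) / n
    m2 = sum(u2) / n
    # Fluctuating parts (mean removed).
    f1 = [v - m1 for v in u1]
    f2 = [v - m2 for v in u2]
    s1 = (sum(v * v for v in f1) / n) ** 0.5
    s2 = (sum(v * v for v in f2) / n) ** 0.5
    out = []
    for lag in range(max_lag + 1):
        acc = sum(f1[t] * f2[t + lag] for t in range(n - lag))
        out.append(acc / ((n - lag) * s1 * s2))
    return out
```

With `u1 = u2`, this reduces to the autocorrelation, so `R(0)` is exactly 1; applying it across PIV probe pairs gives the space-time map that aeroacoustic theory relates to flow-generated sound.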
Towards natural language question generation for the validation of ontologies and mappings.
Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos
2016-08-08
The increasing number of open-access ontologies and their key role in several applications such as decision-support systems highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point of view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mappings validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking into account RDFS and OWL entailment helps reduce the number of questions and validation time. The application of our approach to validating mapping evolution also shows the difficulty of adapting mappings over time and highlights the importance of semi-automatic validation.
Abarajith, H S; Dhir, V K; Warrier, G; Son, G
2004-11-01
Numerical simulation and experimental validation of the growth and departure of multiple merging bubbles and associated heat transfer on a horizontal heated surface during pool boiling under variable gravity conditions have been performed. A finite difference scheme is used to solve the equations governing mass, momentum, and energy in the vapor and liquid phases. The vapor-liquid interface is captured by a level set method that is modified to include the influence of phase change at the liquid-vapor interface. Water is used as the test liquid. The effects of reduced gravity conditions and of the orientation of the bubbles on the bubble diameter, interfacial structure, bubble merger time, and departure time, as well as local heat fluxes, are studied. In the experiments, multiple vapor bubbles are produced on artificial cavities in the 2-10 micrometer diameter range, microfabricated on a polished silicon wafer with a given spacing. The wafer was heated electrically from the back with miniature strain-gage-type heating elements in order to control the nucleation superheat. The experiments conducted in normal Earth gravity and in the low-gravity environment of the KC-135 aircraft are used to validate the numerical simulations.
OECD-NEA Expert Group on Multi-Physics Experimental Data, Benchmarks and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valentine, Timothy; Rohatgi, Upendra S.
High-fidelity, multi-physics modeling and simulation (M&S) tools are being developed and utilized for a variety of applications in nuclear science and technology and show great promise in their abilities to reproduce observed phenomena for many applications. Even with the increasing fidelity and sophistication of coupled multi-physics M&S tools, the underpinning models and data still need to be validated against experiments that may require a more complex array of validation data because of the great breadth of the time, energy and spatial domains of the physical phenomena that are being simulated. The Expert Group on Multi-Physics Experimental Data, Benchmarks and Validation (MPEBV) of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) was formed to address the challenges with the validation of such tools. The work of the MPEBV expert group is shared among three task forces to fulfill its mandate, and specific exercises are being developed to demonstrate validation principles for common industrial challenges. This paper describes the overall mission of the group, the specific objectives of the task forces, the linkages among the task forces, and the development of a validation exercise that focuses on a specific reactor challenge problem.
Shock compression response of cold-rolled Ni/Al multilayer composites
Specht, Paul E.; Weihs, Timothy P.; Thadhani, Naresh N.
2017-01-06
Uniaxial strain, plate-on-plate impact experiments were performed on cold-rolled Ni/Al multilayer composites and the resulting Hugoniot was determined through time-resolved measurements combined with impedance matching. The experimental Hugoniot agreed with that previously predicted by two dimensional (2D) meso-scale calculations. Additional 2D meso-scale simulations were performed using the same computational method as the prior study to reproduce the experimentally measured free surface velocities and stress profiles. Finally, these simulations accurately replicated the experimental profiles, providing additional validation for the previous computational work.
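The Hugoniot reduction via impedance matching rests on the Rankine-Hugoniot jump conditions; a minimal sketch of the single-shock state computation follows (the unit convention is a common one, chosen so that g/cm³ × km/s × km/s yields pressure directly in GPa; the sample values in the test are illustrative, not the composite's data).

```python
def hugoniot_state(rho0, us, up):
    """Rankine-Hugoniot jump conditions for a single steady shock.

    rho0 : initial density (g/cm^3)
    us   : shock velocity (km/s)
    up   : particle velocity (km/s)
    Returns (pressure in GPa, compressed density in g/cm^3).
    """
    pressure = rho0 * us * up        # momentum conservation: P = rho0 * Us * up
    rho = rho0 * us / (us - up)      # mass conservation across the shock
    return pressure, rho
```

Pairing the measured particle velocity with the impactor's known Hugoniot (impedance matching) fixes `us` and `up` at the interface, from which each experimental Hugoniot point follows.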
ROLE OF TIMING IN ASSESSMENT OF NERVE REGENERATION
BRENNER, MICHAEL J.; MORADZADEH, ARASH; MYCKATYN, TERENCE M.; TUNG, THOMAS H. H.; MENDEZ, ALLEN B.; HUNTER, DANIEL A.; MACKINNON, SUSAN E.
2014-01-01
Small animal models are indispensable for research on nerve injury and reconstruction, but their superlative regenerative potential may confound experimental interpretation. This study investigated time-dependent neuroregenerative phenomena in rodents. Forty-six Lewis rats were randomized to three nerve allograft groups treated with 2 mg/(kg day) tacrolimus; 5 mg/(kg day) Cyclosporine A; or placebo injection. Nerves were subjected to histomorphometric and walking track analysis at serial time points. Tacrolimus increased fiber density, percent neural tissue, and nerve fiber count and accelerated functional recovery at 40 days, but these differences were undetectable by 70 days. Serial walking track analysis showed a similar pattern of recovery. A ‘blow-through’ effect is observed in rodents whereby an advancing nerve front overcomes an experimental defect given sufficient time, rendering experimental groups indistinguishable at late time points. Selection of validated time points and corroboration in higher animal models are essential prerequisites for the clinical application of basic research on nerve regeneration. PMID:18381659
NASA Technical Reports Server (NTRS)
Sellers, William L., III; Dwoyer, Douglas L.
1992-01-01
The design of a hypersonic aircraft poses unique challenges to the engineering community. Problems with duplicating flight conditions in ground based facilities have made performance predictions risky. Computational fluid dynamics (CFD) has been proposed as an additional means of providing design data. At the present time, CFD codes are being validated based on sparse experimental data and then used to predict performance at flight conditions with generally unknown levels of uncertainty. This paper will discuss the facility and measurement techniques that are required to support CFD development for the design of hypersonic aircraft. Illustrations are given of recent success in combining experimental and direct numerical simulation in CFD model development and validation for hypersonic perfect gas flows.
Experimental Characterization of the Jet Wiping Process
NASA Astrophysics Data System (ADS)
Mendez, Miguel Alfonso; Enache, Adriana; Gosset, Anne; Buchlin, Jean-Marie
2018-06-01
This paper presents an experimental characterization of the jet wiping process, used in continuous coating applications to control the thickness of a liquid coat with an impinging gas jet. Time Resolved Particle Image Velocimetry (TR-PIV) is used to characterize the impinging gas flow, while an automatic interface detection algorithm is developed to track the liquid interface at the impact. The study of the flow interaction is combined with time-resolved 3D thickness measurements of the liquid film remaining after wiping, via Time Resolved Light Absorption (TR-LAbs). The simultaneous frequency analysis of the liquid and gas flows makes it possible to correlate their respective instabilities, provides an experimental data set for the validation of numerical studies, and allows a working hypothesis to be formulated on the origin of the coat non-uniformity encountered in many jet wiping processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewsuk, K.G.; Cochran, R.J.; Blackwell, B.F.
The properties and performance of a ceramic component are determined by a combination of the materials from which it was fabricated and how it was processed. Most ceramic components are manufactured by dry pressing a powder/binder system in which the organic binder provides formability and green compact strength. A key step in this manufacturing process is the removal of the binder from the powder compact after pressing. The organic binder is typically removed by a thermal decomposition process in which heating rate, temperature, and time are the key process parameters. Empirical approaches are generally used to design the burnout time-temperature cycle, often resulting in excessive processing times and energy usage, and higher overall manufacturing costs. Ideally, binder burnout should be completed as quickly as possible without damaging the compact, while using a minimum of energy. Process and computational modeling offer one means to achieve this end. The objective of this study is to develop an experimentally validated computer model that can be used to better understand, control, and optimize binder burnout from green ceramic compacts.
NASA Astrophysics Data System (ADS)
Sun, A. Y.; Islam, A.; Lu, J.
2017-12-01
Time-lapse oscillatory pumping test (OPT) has been introduced recently as a pressure-based monitoring technique for detecting potential leakage in geologic repositories. By routinely conducting OPT at a number of pulsing frequencies, a site operator may identify potential anomalies in the frequency domain, alleviating the ambiguity caused by reservoir noise and improving the signal-to-noise ratio. Building on previous theoretical and field studies, this work performed a series of laboratory experiments to validate the concept of time-lapse OPT using a custom-made, stainless steel tank under relatively high pressures (about 120 psi). The experimental configuration simulates a miniature geologic storage repository consisting of three layers (i.e., injection zone, caprock, and above-zone aquifer). Results show that leakage in the injection zone led to deviations in the power spectrum of the observed pressure data, whose amplitude increases with decreasing pulsing frequency. The experimental results were further analyzed by developing a 3D flow model, with which the model parameters were estimated through frequency-domain inversion.
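The frequency-domain screening described above can be sketched as a single-frequency DFT of the pressure record evaluated at the pulsing frequency, followed by a comparison against a baseline survey. The detection threshold below is an assumption for illustration, not a value from the study.

```python
import math

def spectral_amplitude(signal, fs, freq):
    """Amplitude of the `freq` (Hz) component of a pressure record
    sampled at `fs` Hz, via a single-frequency DFT."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

def leak_indicator(baseline_amp, current_amp, threshold=0.2):
    """Hypothetical rule: flag a leak when the amplitude at the pulsing
    frequency deviates from the baseline survey by more than `threshold`
    (relative). The 0.2 value is illustrative only."""
    return abs(baseline_amp - current_amp) / baseline_amp > threshold
```

Repeating this at several pulsing frequencies gives the frequency-domain "fingerprint" whose time-lapse deviations signal leakage.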
Caldeira, A Teresa; Arteiro, José M; Roseiro, José C; Neves, José; Vicente, H
2011-01-01
The combined effect of incubation time (IT) and aspartic acid concentration (AA) on the predicted biomass concentration (BC), Bacillus sporulation (BS) and anti-fungal activity of compounds (AFA) produced by Bacillus amyloliquefaciens CCMI 1051, was studied using Artificial Neural Networks (ANNs). The values predicted by ANN were in good agreement with experimental results, and were better than those obtained when using Response Surface Methodology. The database used to train and validate ANNs contains experimental data of B. amyloliquefaciens cultures (AFA, BS and BC) with different incubation times (1-9 days) using aspartic acid (3-42 mM) as nitrogen source. After the training and validation stages, the 2-7-6-3 neural network results showed that maximum AFA can be achieved with 19.5 mM AA on day 9; however, maximum AFA can also be obtained with an incubation time as short as 6 days with 36.6 mM AA. Furthermore, the model results showed two distinct behaviors for AFA, depending on IT. Copyright © 2010 Elsevier Ltd. All rights reserved.
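A feedforward network with the reported 2-7-6-3 topology can be sketched as below. The weights are random placeholders (the trained values are not given in the abstract), and tanh hidden activations with linear outputs are an assumption for illustration.

```python
import math
import random

def make_layer(n_in, n_out, rng):
    """Random weights for one fully connected layer (last column = bias)."""
    return [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
            for _ in range(n_out)]

def forward(layers, x):
    """Forward pass: tanh hidden units, linear output layer."""
    a = list(x)
    for i, layer in enumerate(layers):
        z = [sum(w * v for w, v in zip(row[:-1], a)) + row[-1] for row in layer]
        a = z if i == len(layers) - 1 else [math.tanh(v) for v in z]
    return a

rng = random.Random(42)
# 2 inputs (incubation time, aspartic acid concentration) -> 7 -> 6 ->
# 3 outputs (anti-fungal activity, sporulation, biomass), mirroring the
# 2-7-6-3 topology reported; weights here are untrained placeholders.
net = [make_layer(2, 7, rng), make_layer(7, 6, rng), make_layer(6, 3, rng)]
```

In practice the inputs would be scaled to a common range and the weights fitted against the culture database before the network is used for prediction.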
CFD Modeling Needs and What Makes a Good Supersonic Combustion Validation Experiment
NASA Technical Reports Server (NTRS)
Gaffney, Richard L., Jr.; Cutler, Andrew D.
2005-01-01
If a CFD code/model developer is asked what experimental data he wants to validate his code or numerical model, his answer will be: "Everything, everywhere, at all times." Since this is not possible, practical, or even reasonable, the developer must understand what can be measured within the limits imposed by the test article, the test location, the test environment and the available diagnostic equipment. At the same time, it is important for the experimentalist/diagnostician to understand what the CFD developer needs (as opposed to wants) in order to conduct a useful CFD validation experiment. If these needs are not known, it is possible to neglect easily measured quantities at locations needed by the developer, rendering the data set useless for validation purposes. It is also important for the experimentalist/diagnostician to understand what the developer is trying to validate so that the experiment can be designed to isolate (as much as possible) the effects of the particular physical phenomenon that is associated with the model to be validated. The probability of a successful validation experiment can be greatly increased if the two groups work together, each understanding the needs and limitations of the other.
Development and Validation of a Constitutive Model for Dental Composites during the Curing Process
NASA Astrophysics Data System (ADS)
Wickham Kolstad, Lauren
Debonding is a critical failure mode of dental composites used for dental restorations. Debonding of a dental composite can be assessed by comparing its shrinkage stress to the debonding strength of the adhesive that bonds it to the tooth surface. It is difficult to measure shrinkage stress experimentally. In this study, finite element analysis is used to predict the stress in the composite during cure. A new constitutive law is presented that will allow composite developers to evaluate composite shrinkage stress at early stages in material development. Shrinkage stress and shrinkage strain experimental data were gathered for three dental resins: Z250, Z350, and P90. The experimental data were used to develop a constitutive model for the Young's modulus of the dental composite as a function of time during cure. A Maxwell model, a spring and dashpot in series, was used to simulate the composite. The compliance of the shrinkage stress device was also taken into account by including a spring in series with the Maxwell model. A coefficient of thermal expansion was also determined for internal loading of the composite by dividing shrinkage strain by time. Three FEA models are presented. A spring-disk model validates that the constitutive law is self-consistent. A quarter cuspal deflection model uses separate experimental data to verify that the constitutive law is valid. Finally, an axisymmetric tooth model is used to predict interfacial stresses in the composite. These stresses are compared to the debonding strength to check whether the composite debonds. The new constitutive model accurately predicted cuspal deflection data. Predictions for interfacial bond stress in the tooth model compare favorably with debonding characteristics observed in practice for dental resins.
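A Maxwell element (spring and dashpot in series) of the kind used in the constitutive law can be time-stepped explicitly; this is a generic sketch of such a model, not the paper's implementation: the time-dependent modulus stands in for the evolving stiffness of a curing composite, and all parameter values in the test are illustrative.

```python
def maxwell_stress(times, strains, modulus, viscosity):
    """Explicit Euler integration of a Maxwell element (spring E in
    series with dashpot eta):

        d(sigma)/dt = E * d(eps)/dt - (E / eta) * sigma

    `modulus` may be a constant or a function of time, to mimic the
    Young's modulus of a curing composite evolving during cure.
    Returns the stress history at the given time points."""
    sigma = 0.0
    history = [0.0]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        deps = strains[i] - strains[i - 1]
        e = modulus(times[i]) if callable(modulus) else modulus
        sigma += e * deps - (e / viscosity) * sigma * dt  # elastic load minus relaxation
        history.append(sigma)
    return history
```

Feeding the measured shrinkage strain in as `strains` and the fitted modulus-versus-time curve as `modulus` yields a shrinkage stress estimate of the kind the FEA models compare against debonding strength.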
Mass transfer coefficient in ginger oil extraction by microwave hydrotropic solution
NASA Astrophysics Data System (ADS)
Handayani, Dwi; Ikhsan, Diyono; Yulianto, Mohamad Endy; Dwisukma, Mandy Ayulia
2015-12-01
This research aims to obtain mass transfer coefficient data for the extraction of ginger oil using a microwave-heated hydrotropic solvent as an alternative means to increase zingiberene content. The innovation of this study is extraction with a microwave heater and hydrotropic solvent, which is able to shift the phase equilibrium, increase the rate of the extraction process, and improve the zingiberene content of the ginger oil. The experiment was conducted at the Laboratory of Separation Techniques at the Chemical Engineering Department of Diponegoro University. The research activities were carried out in two stages, namely experimental and modeling work. A model was postulated and then reduced to equations that were tested and validated using data obtained from the experiments. Measurement of experimental data was performed using microwave power (300 W), an extraction temperature of 90 °C, and the independent variables, i.e. type of hydrotrope, volume of solvent, and concentration, in order to obtain zingiberene levels as a function of time. The measured data were used to validate the postulated model, yielding validated models and empirical equations. The results showed that the mass transfer coefficient (Kla) in the zingiberene mass transfer model of ginger oil extraction with the various hydrotropic solutions was about 14 ± 2 times larger than that reported for extraction with electric heating. The larger the value of Kla, the faster the rate of mass transfer in the extraction process. To obtain the same yields, the microwave-assisted extraction required only one-twelfth of the time.
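A common way to estimate Kla from concentration-time data assumes the usual first-order mass-transfer model C(t) = C_eq(1 - exp(-Kla·t)); the abstract does not state the exact model form, so the sketch below is illustrative rather than the authors' procedure.

```python
import math

def fit_kla(times, conc, c_eq):
    """Estimate the volumetric mass-transfer coefficient Kla assuming
    first-order transfer toward equilibrium:

        C(t) = C_eq * (1 - exp(-Kla * t))

    Linearize as ln(1 - C/C_eq) = -Kla * t and fit the slope through
    the origin by least squares."""
    xs, ys = [], []
    for t, c in zip(times, conc):
        if t > 0 and c < c_eq:            # keep points where the log is defined
            xs.append(t)
            ys.append(math.log(1.0 - c / c_eq))
    # Minimize sum (y + Kla*x)^2  ->  Kla = -sum(x*y) / sum(x*x)
    return -sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

Comparing the fitted Kla for each solvent/heating combination against the electrically heated baseline gives the ratio quoted in the abstract.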
Kobayashi, T.; Itoh, K.; Ido, T.; Kamiya, K.; Itoh, S.-I.; Miura, Y.; Nagashima, Y.; Fujisawa, A.; Inagaki, S.; Ida, K.; Hoshino, K.
2016-01-01
Self-regulation between structure and turbulence, which is a fundamental process in complex systems, has been widely regarded as one of the central issues in modern physics. A typical example in magnetically confined plasmas is the Low confinement mode to High confinement mode (L-H) transition, which has been intensely studied for more than thirty years since it provides a confinement improvement necessary for the realization of a fusion reactor. An essential issue in L-H transition physics is the mechanism of the abrupt "radial" electric field generation in toroidal plasmas. To date, several models for the L-H transition have been proposed, but systematic experimental validation is still challenging. Here we report systematic and quantitative validations of the radial electric field excitation mechanism for the first time, using a data set of the turbulence and the radial electric field with high spatiotemporal resolution. Examining the time derivative of Poisson's equation, the sum of the loss-cone loss current and the neoclassical bulk viscosity current is found to behave as the experimentally observed radial current that excites the radial electric field, within a few factors of magnitude. PMID:27489128
Prediction and validation of blowout limits of co-flowing jet diffusion flames -- effect of dilution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karbasi, M.; Wierzba, I.
1996-10-01
The blowout limits of a co-flowing turbulent methane jet diffusion flame with addition of diluent to either the jet fuel or the surrounding air stream were studied both analytically and experimentally. Helium, nitrogen and carbon dioxide were employed as the diluents. Experiments indicated that an addition of diluents to the jet fuel or surrounding air stream decreased the stability limit of the jet diffusion flames. The strongest effect was observed with carbon dioxide as the diluent, followed by nitrogen and then by helium. A model of extinction based on the recognized criterion of the ratio of mixing time scale to characteristic combustion time scale, using experimentally derived correlations, is proposed. It is capable of predicting the large reduction of the jet blowout velocity due to a relatively small increase in the co-flow stream velocity, along with an increase in the concentration of diluent in either the jet fuel or the surrounding air stream. Experiments were carried out to validate the model. The predicted blowout velocities of turbulent jet diffusion flames obtained using this model are in good agreement with the corresponding experimental data.
Experimental evaluation of the certification-trail method
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.; Itoh, Mamoru; Smith, Warren W.; Kay, Jonathan S.
1993-01-01
Certification trails are a recently introduced and promising approach to fault-detection and fault-tolerance. A comprehensive attempt to assess experimentally the performance and overall value of the method is reported. The method is applied to algorithms for the following problems: huffman tree, shortest path, minimum spanning tree, sorting, and convex hull. Our results reveal many cases in which an approach using certification-trails allows for significantly faster overall program execution time than a basic time redundancy-approach. Algorithms for the answer-validation problem for abstract data types were also examined. This kind of problem provides a basis for applying the certification-trail method to wide classes of algorithms. Answer-validation solutions for two types of priority queues were implemented and analyzed. In both cases, the algorithm which performs answer-validation is substantially faster than the original algorithm for computing the answer. Next, a probabilistic model and analysis which enables comparison between the certification-trail method and the time-redundancy approach were presented. The analysis reveals some substantial and sometimes surprising advantages for ther certification-trail method. Finally, the work our group performed on the design and implementation of fault injection testbeds for experimental analysis of the certification trail technique is discussed. This work employs two distinct methodologies, software fault injection (modification of instruction, data, and stack segments of programs on a Sun Sparcstation ELC and on an IBM 386 PC) and hardware fault injection (control, address, and data lines of a Motorola MC68000-based target system pulsed at logical zero/one values). Our results indicate the viability of the certification trail technique. It is also believed that the tools developed provide a solid base for additional exploration.
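The sorting case illustrates the certification-trail idea: the first execution emits a trail (here, the sorting permutation) that lets a second, much simpler checker validate the answer in linear time instead of re-sorting. A minimal sketch, not the paper's implementation:

```python
def sort_with_trail(data):
    """First execution: sort and emit a certification trail, namely the
    permutation of original indices that orders the data."""
    trail = sorted(range(len(data)), key=lambda i: data[i])
    return [data[i] for i in trail], trail

def validate_with_trail(data, result, trail):
    """Second execution: use the trail to confirm the answer in O(n).
    Checks that the trail is a permutation of 0..n-1, that applying it
    reproduces the claimed result, and that the result is ordered."""
    n = len(data)
    if len(result) != n or len(trail) != n:
        return False
    seen = [False] * n
    for i in trail:
        if not 0 <= i < n or seen[i]:
            return False              # trail is not a permutation
        seen[i] = True
    permuted = [data[i] for i in trail]
    if permuted != result:
        return False                  # trail does not produce the answer
    return all(permuted[k] <= permuted[k + 1] for k in range(n - 1))
```

A fault in either execution (or in the trail itself) makes the check fail, which is what gives the method its fault-detection power at a cost well below full time redundancy.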
Testability of evolutionary game dynamics based on experimental economics data
NASA Astrophysics Data System (ADS)
Wang, Yijia; Chen, Xiaojie; Wang, Zhijian
In order to better understand the dynamic processes of a real game system, we need an appropriate dynamics model, and evaluating the validity of a model is not a trivial task. Here, we demonstrate an approach, considering the macroscopic dynamical patterns of angular momentum and speed as the measurement variables, to evaluate the validity of various dynamics models. Using data from real-time Rock-Paper-Scissors (RPS) game experiments, we obtain the experimental dynamic patterns, and then derive the related theoretical dynamic patterns from a series of typical dynamics models respectively. By testing the goodness-of-fit between the experimental and theoretical patterns, the validity of the models can be evaluated. One of the results in our study case is that, among all the nonparametric models tested, the best-known Replicator dynamics model performs almost worst, while the Projection dynamics model performs best. Besides providing new empirical macroscopic patterns of social dynamics, we demonstrate that the approach can be an effective and rigorous tool for testing game dynamics models. Fundamental Research Funds for the Central Universities (SSEYI2014Z) and the National Natural Science Foundation of China (Grants No. 61503062).
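Replicator dynamics for RPS, with the accumulated angular momentum about the mixed equilibrium as the macroscopic observable, can be sketched as follows. Euler integration and the unit payoff scale are illustrative choices, not the paper's exact setup.

```python
# Standard zero-sum RPS payoff matrix (row strategy vs column strategy).
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def replicator_step(x, dt=0.01):
    """One Euler step of replicator dynamics: x_i' = x_i * (f_i - fbar),
    where f_i is strategy i's payoff against the current mix."""
    f = [sum(PAYOFF[i][j] * x[j] for j in range(3)) for i in range(3)]
    fbar = sum(x[i] * f[i] for i in range(3))
    return [x[i] + dt * x[i] * (f[i] - fbar) for i in range(3)]

def angular_momentum(traj, center=(1/3, 1/3)):
    """Accumulated angular momentum of the (x_rock, x_paper) trajectory
    about the mixed equilibrium -- a macroscopic pattern of the kind
    compared between experiment and model."""
    total = 0.0
    for (a, b), (c, d) in zip(traj, traj[1:]):
        ra, rb = a - center[0], b - center[1]
        total += ra * (d - b) - rb * (c - a)  # cross product r x dr
    return total
```

Running the same observable on experimental state sequences and on each candidate model's trajectories gives the patterns whose goodness-of-fit ranks the models.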
Chen, Weixin; Chen, Jianye; Lu, Wangjin; Chen, Lei; Fu, Danwen
2012-01-01
Real-time reverse transcription PCR (RT-qPCR) is a preferred method for rapid and accurate quantification of gene expression. Appropriate application of RT-qPCR requires accurate normalization through the use of reference genes. As no single reference gene is universally suitable for all experiments, validation of reference gene(s) under different experimental conditions is crucial for RT-qPCR analysis. To date, only a few studies on reference genes have been done in other plants and none in papaya. In the present work, we selected 21 candidate reference genes and evaluated their expression stability in 246 papaya fruit samples using three algorithms: geNorm, NormFinder and RefFinder. The samples consisted of 13 sets collected under different experimental conditions, including various tissues, different storage temperatures, different cultivars, developmental stages, postharvest ripening, modified atmosphere packaging, 1-methylcyclopropene (1-MCP) treatment, hot water treatment, biotic stress and hormone treatment. Our results demonstrated that expression stability varied greatly between reference genes and that suitable reference gene(s) or combinations of reference genes for normalization should be validated according to the experimental conditions. In general, the internal reference genes EIF (eukaryotic initiation factor 4A), TBP1 (TATA binding protein 1) and TBP2 (TATA binding protein 2) performed well under most experimental conditions, whereas the most widely used reference genes, ACTIN (Actin 2), 18S rRNA (18S ribosomal RNA) and GAPDH (glyceraldehyde-3-phosphate dehydrogenase), were not suitable under many experimental conditions. In addition, two commonly used programs, geNorm and NormFinder, proved sufficient for the validation. This work provides the first systematic analysis for the selection of superior reference genes for accurate transcript normalization in papaya under different experimental conditions.
PMID:22952972
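The geNorm stability measure referred to above can be sketched as a simplified reimplementation (not the published tool): each candidate gene's M value is the average standard deviation of its log2 expression ratios against every other candidate across samples, and a lower M indicates more stable expression.

```python
import numpy as np

def genorm_m(expr):
    """geNorm-style stability measure M per candidate reference gene.
    expr: (genes x samples) array of relative expression quantities.
    M_j is the mean, over all other genes k, of the sample standard
    deviation of log2(expr_j / expr_k); lower M means more stable."""
    logs = np.log2(expr)
    g = expr.shape[0]
    M = np.empty(g)
    for j in range(g):
        sds = [np.std(logs[j] - logs[k], ddof=1) for k in range(g) if k != j]
        M[j] = np.mean(sds)
    return M
```

geNorm itself proceeds by iteratively dropping the gene with the highest M and recomputing, until the most stable pair remains; the sketch above shows only the core measure.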
Predictive searching algorithm for Fourier ptychography
NASA Astrophysics Data System (ADS)
Li, Shunkai; Wang, Yifan; Wu, Weichen; Liang, Yanmei
2017-12-01
By capturing a set of low-resolution images under different illumination angles and stitching them together in the Fourier domain, Fourier ptychography (FP) is capable of providing a high-resolution image with a large field of view. Despite its validity, its long acquisition time limits real-time application. In this paper, we propose an incomplete sampling scheme, termed the predictive searching algorithm, to shorten the acquisition and recovery time. Informative sub-regions of the sample's spectrum are searched, and the corresponding images of the most informative directions are captured for spectrum expansion. Its effectiveness is validated by both simulated and experimental results: the data requirement is reduced by ~64% to ~90% without sacrificing image reconstruction quality compared with the conventional FP method.
A model of fluid and solute exchange in the human: validation and implications.
Bert, J L; Gyenge, C C; Bowen, B D; Reed, R K; Lund, T
2000-11-01
To better understand the complex, dynamic behaviour of the redistribution and exchange of fluid and solutes administered to normal individuals or to those with acute hypovolemia, mathematical models are used in addition to direct experimental investigation. Initial validation of a model developed by our group involved data from animal experiments (Gyenge, C.C., Bowen, B.D., Reed, R.K. & Bert, J.L. 1999b. Am J Physiol 277 (Heart Circ Physiol 46), H1228-H1240). For a first validation involving humans, we compare the results of simulations with a wide range of different types of data from two experimental studies. These studies involved administration of normal saline or hypertonic saline with Dextran to both normal and 10% haemorrhaged subjects. We compared simulations with data including the dynamic changes in plasma and interstitial fluid volumes (V_PL and V_IT), plasma and interstitial colloid osmotic pressures (Pi_PL and Pi_IT), haematocrit (Hct), plasma solute concentrations and transcapillary flow rates. The model predictions were overall in very good agreement with the wide range of experimental results considered. Based on the conditions investigated, the model was also validated for humans. We used the model both to investigate mechanisms associated with the redistribution and transport of fluid and solutes administered following a mild haemorrhage, and to speculate on the relationship between the timing and amount of fluid infusions and subsequent blood volume expansion.
Hartwig, Jason; Mittal, Gaurav; Kumar, Kamal; Sung, Chih-Jen
2018-04-01
This paper presents a set of system validation experiments that can be used to qualify either static or flow experimental systems for gathering tracer photophysical data or conducting laser diagnostics at high pressure and temperature, in order to establish design and operation limits and reduce uncertainty in data interpretation. The tests demonstrated here quantify the effects of tracer absorption at the test cell walls, stratification, photolysis, pyrolysis, adequacy of mixing and seeding, and reabsorption of laser light, using acetone as the tracer and 282 nm excitation. Results show that acetone exhibits a 10% decrease in fluorescence signal over 36,000 shots at 127.4 mJ/cm², and photolysis is negligible below 1000 shots collected. Meanwhile, appropriately chosen gas residence times can mitigate risks due to pyrolysis and inadequate mixing and seeding; for the current work, a 100 ms residence time ensured <0.5% alteration of tracer number density due to thermal destruction. Experimental results are compared to theoretical values from the literature.
Soto, Marcelo A; Lu, Xin; Martins, Hugo F; Gonzalez-Herraez, Miguel; Thévenaz, Luc
2015-09-21
In this paper a technique to measure the distributed birefringence profile along optical fibers is proposed and experimentally validated. The method is based on the spectral correlation between two sets of orthogonally-polarized measurements acquired using a phase-sensitive optical time-domain reflectometer (ϕOTDR). The correlation between the two measured spectra gives a resonance (correlation) peak at a frequency detuning that is proportional to the local refractive index difference between the two orthogonal polarization axes of the fiber. In this way the method enables local phase birefringence measurements at any position along optical fibers, so that any longitudinal fluctuation can be precisely evaluated with metric spatial resolution. The method has been experimentally validated by measuring fibers with low and high birefringence, such as standard single-mode fibers as well as conventional polarization-maintaining fibers. The technique has potential applications in the characterization of optical fibers for telecommunications as well as in distributed optical fiber sensing.
Soto, Marcelo A; Ricchiuti, Amelia Lavinia; Zhang, Liang; Barrera, David; Sales, Salvador; Thévenaz, Luc
2014-11-17
A technique to enhance the response and performance of Brillouin distributed fiber sensors is proposed and experimentally validated. The method consists of creating a multi-frequency pump pulse interacting with a matching multi-frequency continuous-wave probe. To avoid nonlinear cross-interaction between spectral lines, the method requires that the distinct pump pulse components and the temporal traces reaching the photo-detector be subject to wavelength-selective delaying. This way, the total pump and probe powers launched into the fiber can be incrementally boosted beyond the thresholds imposed by nonlinear effects. As a consequence of the multiplied pump-probe Brillouin interactions occurring along the fiber, the sensor response can be enhanced in exact proportion to the number of spectral components. The method is experimentally validated in a 50 km-long distributed optical fiber sensor augmented to 3 pump-probe spectral pairs, demonstrating a signal-to-noise ratio enhancement of 4.8 dB.
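If the response indeed grows in exact proportion to the number of spectral pairs, the expected SNR enhancement on a decibel scale is 10·log10(N), and for N = 3 this back-of-the-envelope check (assuming the gain enters on a power-dB scale) reproduces the reported 4.8 dB figure:

```python
import math

def snr_gain_db(n_pairs):
    """Expected SNR enhancement (dB) if the sensor response grows in
    exact proportion to the number of pump-probe spectral pairs."""
    return 10.0 * math.log10(n_pairs)

print(round(snr_gain_db(3), 1))  # 4.8
```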
Experimental validation of structural optimization methods
NASA Technical Reports Server (NTRS)
Adelman, Howard M.
1992-01-01
The topic of validating structural optimization methods by use of experimental results is addressed. The need to validate the methods, as a way of effecting a greater and accelerated acceptance of formal optimization methods by practicing engineering designers, is described. The range of validation strategies is defined, including comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described, including: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum-weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum-weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.
Validating internal controls for quantitative plant gene expression studies.
Brunner, Amy M; Yakovlev, Igor A; Strauss, Steven H
2004-08-18
Real-time reverse transcription PCR (RT-PCR) has greatly improved the ease and sensitivity of quantitative gene expression studies. However, accurate measurement of gene expression with this method relies on the choice of a valid reference for data normalization. Studies rarely verify that gene expression levels for reference genes are adequately consistent among the samples used, nor compare alternative genes to assess which are most reliable for the experimental conditions analyzed. Using real-time RT-PCR to study the expression of 10 poplar (genus Populus) housekeeping genes, we demonstrate a simple method for determining the degree of stability of gene expression over a set of experimental conditions. Based on a traditional method for analyzing the stability of varieties in plant breeding, it defines measures of gene expression stability from analysis of variance (ANOVA) and linear regression. We found that the potential internal control genes differed widely in their expression stability over the different tissues, developmental stages and environmental conditions studied. Our results support that quantitative comparisons of candidate reference genes are an important part of real-time RT-PCR studies that seek to precisely evaluate variation in gene expression. The method we demonstrated facilitates statistical and graphical evaluation of gene expression stability. Selection of the best reference gene for a given set of experimental conditions should enable detection of biologically significant changes in gene expression that are too small to be revealed by less precise methods, or when highly variable reference genes are unknowingly used in real-time RT-PCR experiments.
Sahoo, Debasis; Deck, Caroline; Yoganandan, Narayan; Willinger, Rémy
2013-12-01
A composite material model for the skull, taking damage into account, is implemented in the Strasbourg University finite element head model (SUFEHM) in order to enhance the existing skull mechanical constitutive law. The skull behavior is validated in terms of fracture patterns and contact forces by reconstructing 15 experimental cases. The new SUFEHM skull model is capable of reproducing skull fracture precisely. The composite skull model is validated not only for maximum forces but also, for the first time, for lateral impact against actual force-time curves from PMHS. Skull strain energy is found to be a pertinent parameter for predicting skull fracture; based on statistical (binary logistic regression) analysis, a 50% risk of skull fracture occurred at a skull strain energy of 544.0 mJ. © 2013 Elsevier Ltd. All rights reserved.
Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin
2011-01-01
Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
Creep of plain weave polymer matrix composites
NASA Astrophysics Data System (ADS)
Gupta, Abhishek
Polymer matrix composites are increasingly used in various industrial sectors to reduce structural weight and improve performance. Woven (also known as textile) composites are one class of polymer matrix composites with increasing market share, mostly due to their light weight, their flexibility to form into desired shapes, their mechanical properties and their toughness. Due to the viscoelasticity of the polymer matrix, time-dependent degradation in modulus (creep) and in strength (creep rupture) are two of the major mechanical properties engineers require to design a structure reliably with these materials. Unfortunately, creep and creep rupture of woven composites have received little attention from the research community, and thus there is a dire need to generate additional knowledge and prediction models, given the increasing market share of woven composites in load-bearing structural applications. Currently available creep models are limited in scope and have not been validated for arbitrary loading orientations or for time periods beyond the experimental time window. In this thesis, an analytical creep model, namely the Modified Equivalent Laminate Model (MELM), was developed to predict tensile creep of plain weave composites for any orientation of the load with respect to the orientation of the fill and warp fibers, using the creep of unidirectional composites. The ability of the model to predict creep for any orientation of the load is a "first" in this area. The model was validated using an extensive experimental program involving the tensile creep of plain weave composites under varying loading orientations and service conditions. A plain weave epoxy (F263)/carbon fiber (T300) composite, currently used in aerospace applications, was procured as fabric from Hexcel Corporation. Creep tests were conducted under two loading conditions: on-axis loading (0°) and off-axis loading (45°).
Constant-load creep, in the temperature range of 80-240°C and the stress range of 1-70% of the ultimate tensile strength (UTS) of the composite, was experimentally evaluated for time periods ranging from 1 to 120 hours under both loading conditions. The composite showed an increase in creep with increasing temperature and stress. Creep also increased with the angle of loading, from 1% under on-axis loading to 31% under off-axis loading, within the tested time window. The experimental creep data for the plain weave composites were superposed using the Time-Temperature Superposition Principle (TTSP) to obtain a master curve extending to several years, which was compared with model predictions to validate the model. The experimental and model results were found to be in good agreement, within an error range of +/-1-3%, under both loading conditions. A parametric study was also conducted to understand the effect of the microstructure of plain weave composites on their on-axis and off-axis creep. The generation of knowledge in this area is also a "first". Additionally, this thesis generated knowledge on time-dependent damage in woven composites and its effect on creep and tensile properties and their prediction.
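The TTSP master-curve construction mentioned above can be sketched generically (an Arrhenius shift law is assumed here purely for illustration; the thesis may use a different shift-factor form): each short-term creep curve measured at temperature T is shifted along the log-time axis by its shift factor so all curves collapse onto one master curve at the reference temperature.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius_shift(T, T_ref, Ea):
    """log10 of the horizontal shift factor a_T (assumed Arrhenius form);
    T, T_ref in kelvin, activation energy Ea in J/mol."""
    return (Ea / (2.303 * R)) * (1.0 / T - 1.0 / T_ref)

def build_master_curve(curves, T_ref, Ea):
    """curves: list of (T_kelvin, times, compliance) short-term tests.
    Returns sorted (reduced log10 time, compliance) master-curve points;
    data taken hotter than T_ref shifts to longer reduced times."""
    pts = []
    for T, t, J in curves:
        log_t_reduced = np.log10(t) - arrhenius_shift(T, T_ref, Ea)
        pts.extend(zip(log_t_reduced, J))
    pts.sort()
    return np.array(pts)
```

The collapsed curve is what gets compared against model predictions over time windows far longer than any single test.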
A joint-space numerical model of metabolic energy expenditure for human multibody dynamic system.
Kim, Joo H; Roberts, Dustyn
2015-09-01
Metabolic energy expenditure (MEE) is a critical performance measure of human motion. In this study, a general joint-space numerical model of MEE is derived by integrating the laws of thermodynamics and principles of multibody system dynamics, which can evaluate MEE without the limitations inherent in experimental measurements (phase delays, steady state and task restrictions, and limited range of motion) or muscle-space models (complexities and indeterminacies from excessive DOFs, contacts and wrapping interactions, and reliance on in vitro parameters). Muscle energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained multi-objective optimization algorithm is established to estimate the model parameters from experimental walking data also used for initial validation. The joint-space parameters estimated directly from active subjects provide reliable MEE estimates with a mean absolute error of 3.6 ± 3.6% relative to validation values, which can be used to evaluate MEE for complex non-periodic tasks that may not be experimentally verifiable. This model also enables real-time calculations of instantaneous MEE rate as a function of time for transient evaluations. Although experimental measurements may not be completely replaced by model evaluations, predicted quantities can be used as strong complements to increase reliability of the results and yield unique insights for various applications. Copyright © 2015 John Wiley & Sons, Ltd.
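A common joint-space approximation of instantaneous metabolic rate can be sketched as follows (illustrative only: the paper's thermodynamics-based formulation is more detailed, and the efficiency constants and basal rate here are placeholder values, not the estimated model parameters):

```python
def mee_rate(torques, velocities, eta_pos=0.25, eta_neg=1.2, basal=80.0):
    """Illustrative instantaneous metabolic rate (W) from joint torques
    (N m) and joint angular velocities (rad/s): concentric (positive)
    mechanical power is divided by a positive-work efficiency, eccentric
    (negative) power is charged at a higher efficiency, and a basal
    term is added. All coefficients are placeholders."""
    rate = basal
    for tau, omega in zip(torques, velocities):
        p = tau * omega                      # joint mechanical power
        rate += p / eta_pos if p >= 0 else -p / eta_neg
    return rate
```

Because the rate is a direct function of the current joint state, it supports the kind of real-time, instantaneous MEE evaluation described in the abstract.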
Jayaswal, Vivek; Lutherborrow, Mark; Ma, David D F; Hwa Yang, Yee
2009-05-01
Over the past decade, a class of small RNA molecules called microRNAs (miRNAs) has been shown to regulate gene expression at the post-transcriptional stage. While early work focused on the identification of miRNAs using a combination of experimental and computational techniques, subsequent studies have focused on the identification of miRNA-target mRNA pairs, as each miRNA can have hundreds of mRNA targets. The experimental validation of some miRNAs as oncogenic has provided further motivation for research in this area. In this article, we propose an odds-ratio (OR) statistic for the identification of regulatory miRNAs. It is based on integrative analysis of matched miRNA and mRNA time-course microarray data. The OR-statistic was used for (i) identification of miRNAs with regulatory potential, (ii) identification of miRNA-target mRNA pairs and (iii) identification of time lags between changes in miRNA expression and those of its target mRNAs. We applied the OR-statistic to a cancer data set and identified a small set of miRNAs that were negatively correlated with mRNAs. A literature survey revealed that some of the miRNAs predicted to be regulatory were indeed oncogenes or tumor suppressors. Finally, some of the predicted miRNA targets have been validated experimentally.
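One way to read the OR-statistic idea (a simplified illustration; the authors' statistic may differ in detail) is as an odds ratio over a 2x2 table of change directions across matched time points, with an optional lag between the miRNA change and the target mRNA change:

```python
def direction_odds_ratio(mirna, mrna, lag=0, eps=0.5):
    """Illustrative odds ratio for negative miRNA->mRNA regulation from
    matched time courses: tabulate whether the miRNA goes up/down at
    step t against whether the candidate target goes down/up at step
    t+lag, then compute OR = ad/bc with a Haldane-Anscombe correction
    eps to avoid division by zero. OR >> 1 suggests repression."""
    a = b = c = d = 0
    for t in range(len(mirna) - 1 - lag):
        mi_up = mirna[t + 1] > mirna[t]
        m_down = mrna[t + 1 + lag] < mrna[t + lag]
        if mi_up and m_down:
            a += 1
        elif mi_up:
            b += 1
        elif m_down:
            c += 1
        else:
            d += 1
    return ((a + eps) * (d + eps)) / ((b + eps) * (c + eps))
```

Scanning `lag` over a small range gives a crude estimate of the delay between a miRNA expression change and the response of its target, item (iii) in the abstract.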
NASA Astrophysics Data System (ADS)
Bevilacqua, R.; Lehmann, T.; Romano, M.
2011-04-01
This work introduces a novel control algorithm for close-proximity multiple-spacecraft autonomous maneuvers, based on a hybrid linear quadratic regulator/artificial potential function (LQR/APF) approach, for applications including autonomous docking, on-orbit assembly and spacecraft servicing. Both theoretical developments and experimental validation of the proposed approach are presented. Fuel consumption is sub-optimized in real time through re-computation of the LQR at each sample time, while collision avoidance is performed through the APF and a high-level decisional logic. The underlying LQR/APF controller is integrated with a customized wall-following technique and a decisional logic, overcoming problems such as local minima. The algorithm is experimentally tested on a test bed of four spacecraft simulators at the Spacecraft Robotics Laboratory of the Naval Postgraduate School. The metrics used to evaluate the control algorithm are: autonomy of the system in making decisions, successful completion of the maneuver, required time, and propellant consumption.
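The hybrid LQR/APF command can be sketched on a one-axis double integrator (a simple stand-in for the relative spacecraft dynamics; the weights, potential parameters and obstacle model are illustrative, not the paper's):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator relative dynamics: state x = [position, velocity],
# control input = commanded acceleration along one axis.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 1.0])   # state weight
R = np.array([[10.0]])    # control (fuel) weight

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P   # LQR gain; recomputable at each sample time

def apf_accel(pos, obstacle, k_rep=1.0, rho0=2.0):
    """Repulsive artificial-potential acceleration inside the obstacle's
    influence radius rho0; zero outside it."""
    d = abs(pos - obstacle)
    if d >= rho0 or d == 0.0:
        return 0.0
    return k_rep * (1.0 / d - 1.0 / rho0) / d**2 * np.sign(pos - obstacle)

def control(x, obstacle):
    """Hybrid command: LQR drive toward the origin plus APF avoidance."""
    return float((-K @ x).item() + apf_accel(x[0], obstacle))
```

With the obstacle outside its influence radius the command reduces to pure LQR, which drives the relative state to the origin; near the obstacle the repulsive term dominates, which is the collision-avoidance behavior the decisional logic arbitrates.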
Colour cyclic code for Brillouin distributed sensors
NASA Astrophysics Data System (ADS)
Le Floch, Sébastien; Sauser, Florian; Llera, Miguel; Rochat, Etienne
2015-09-01
For the first time, a colour cyclic coding (CCC) is theoretically and experimentally demonstrated for Brillouin optical time-domain analysis (BOTDA) distributed sensors. Compared to traditional intensity-modulated cyclic codes, the code presents an additional gain of √2 while keeping the same number of sequences as for a colour coding. A comparison with a standard BOTDA sensor is realized and validates the theoretical coding gain.
Shock compression response of cold-rolled Ni/Al multilayer composites
NASA Astrophysics Data System (ADS)
Specht, Paul E.; Weihs, Timothy P.; Thadhani, Naresh N.
2017-01-01
Uniaxial-strain, plate-on-plate impact experiments were performed on cold-rolled Ni/Al multilayer composites, and the resulting Hugoniot was determined through time-resolved measurements combined with impedance matching. The experimental Hugoniot agreed with that previously predicted by two-dimensional (2D) meso-scale calculations [Specht et al., J. Appl. Phys. 111, 073527 (2012)]. Additional 2D meso-scale simulations were performed using the same computational method as the prior study to reproduce the experimentally measured free-surface velocities and stress profiles. These simulations accurately replicated the experimental profiles, providing additional validation for the previous computational work.
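The impedance-matching step used to place Hugoniot points can be sketched as a root find for the interface particle velocity (a generic sketch of the standard graphical construction; the flyer Hugoniot parameters c0 and s are assumed known inputs, and material values below are arbitrary):

```python
def impedance_match(rho0_s, Us_s, rho0_f, c0_f, s_f, v_impact):
    """Find the interface particle velocity u where the sample shock
    pressure rho0_s*Us_s*u (Us_s is the measured shock velocity) equals
    the flyer pressure rho0_f*(c0_f + s_f*(v_impact - u))*(v_impact - u)
    on its reflected Hugoniot. Solved by bisection on 0 < u < v_impact;
    returns (u, matched pressure)."""
    lo, hi = 0.0, v_impact
    for _ in range(100):
        u = 0.5 * (lo + hi)
        P_sample = rho0_s * Us_s * u
        w = v_impact - u
        P_flyer = rho0_f * (c0_f + s_f * w) * w
        if P_sample < P_flyer:
            lo = u   # sample pressure too low: interface velocity is higher
        else:
            hi = u
    return u, rho0_s * Us_s * u
```

Since the sample branch is increasing in u and the flyer branch decreases to zero at u = v_impact, the intersection exists and bisection converges reliably.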
Flow optimization study of a batch microfluidics PET tracer synthesizing device
Elizarov, Arkadij M.; Meinhart, Carl; van Dam, R. Michael; Huang, Jiang; Daridon, Antoine; Heath, James R.; Kolb, Hartmuth C.
2010-01-01
We present numerical modeling and experimental studies of flow optimization inside a batch microfluidic micro-reactor used for synthesis of human-scale doses of Positron Emission Tomography (PET) tracers. Novel techniques are used for mixing within, and eluting liquid out of, the coin-shaped reaction chamber. Numerical solutions of the general incompressible Navier-Stokes equations, along with a time-dependent elution scalar field equation for the three-dimensional coin-shaped geometry, were obtained and validated using fluorescence imaging analysis techniques. Utilizing the approach presented in this work, we were able to identify optimized geometrical and operational conditions for the micro-reactor in the absence of the radioactive material commonly used in PET tracer production platforms, as well as to evaluate the designed and fabricated micro-reactor using numerical and experimental validations. PMID:21072595
Cuesta-Gragera, Ana; Navarro-Fontestad, Carmen; Mangas-Sanjuan, Victor; González-Álvarez, Isabel; García-Arieta, Alfredo; Trocóniz, Iñaki F; Casabó, Vicente G; Bermejo, Marival
2015-07-10
The objective of this paper is to apply a previously developed semi-physiologic pharmacokinetic model implemented in NONMEM to simulate bioequivalence (BE) trials of acetylsalicylic acid (ASA) in order to validate the model performance against ASA human experimental data. ASA is a drug with first-pass hepatic and intestinal metabolism following Michaelis-Menten kinetics that leads to the formation of two main metabolites in two generations (first- and second-generation metabolites). The first aim was to adapt the semi-physiological model for ASA in NONMEM, using ASA pharmacokinetic parameters from the literature and reflecting its sequential metabolism. The second aim was to validate this model by comparing the results of NONMEM simulations with published experimental data at a dose of 1000 mg. The validated model was used to simulate bioequivalence trials at three dose schemes (100, 1000 and 3000 mg) and with six test formulations with decreasing in vivo dissolution rate constants versus the reference formulation (kD 8-0.25 h(-1)). Finally, the third aim was to determine which analyte (parent drug, first-generation or second-generation metabolite) was more sensitive to changes in formulation performance. The validation results showed that the concentration-time curves obtained with the simulations closely reproduced the published experimental data, confirming model performance. The parent drug (ASA) was the analyte most sensitive to the decrease in pharmaceutical quality, with the largest decrease in the Cmax and AUC ratios between test and reference formulations. Copyright © 2015 Elsevier B.V. All rights reserved.
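The saturable first-pass scheme described above can be sketched as a minimal ODE system (a toy stand-in for the NONMEM model: only one metabolite generation is shown, and every rate constant, volume and dose below is illustrative rather than a literature value):

```python
from scipy.integrate import solve_ivp

def mm_model(t, y, ka, Vmax, Km, V, kmet):
    """Gut depot -> plasma parent drug with Michaelis-Menten conversion
    to a first-generation metabolite, itself eliminated first-order.
    y = [amount in gut, parent concentration, metabolite concentration]."""
    A_gut, C_p, C_m = y
    mm = Vmax * C_p / (Km + C_p)   # saturable metabolism rate
    return [-ka * A_gut,
            ka * A_gut / V - mm,
            mm - kmet * C_m]

# Illustrative run: 1000 (amount units) oral dose over 12 time units.
sol = solve_ivp(mm_model, (0.0, 12.0), [1000.0, 0.0, 0.0],
                args=(1.0, 50.0, 10.0, 40.0, 0.3), dense_output=True)
```

Because the metabolism term saturates at Vmax, the parent's Cmax and AUC respond nonlinearly to dose and dissolution rate, which is the mechanism behind the sensitivity comparison among analytes in the abstract.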
Funding for the 2ND IAEA technical meeting on fusion data processing, validation and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenwald, Martin
The International Atomic Energy Agency (IAEA) will organize the second Technical Meeting on Fusion Data Processing, Validation and Analysis from 30 May to 02 June, 2017, in Cambridge, MA, USA. The meeting will be hosted by the MIT Plasma Science and Fusion Center (PSFC). The objective of the meeting is to provide a platform where a set of topics relevant to fusion data processing, validation and analysis are discussed with a view to extrapolating needs to next-step fusion devices such as ITER. The validation and analysis of experimental data obtained from diagnostics used to characterize fusion plasmas are crucial for a knowledge-based understanding of the physical processes governing the dynamics of these plasmas. The meeting will aim at fostering, in particular, discussions of research and development results that set out or underline trends observed in the current major fusion confinement devices. General information on the IAEA, including its mission and organization, can be found at the IAEA website. Topics include: uncertainty quantification (UQ); model selection, validation, and verification (V&V); probability theory and statistical analysis; inverse problems and equilibrium reconstruction; integrated data analysis; real-time data analysis; machine learning; signal/image processing and pattern recognition; experimental design and synthetic diagnostics; and data management.
Expansion of transient operating data
NASA Astrophysics Data System (ADS)
Chipman, Christopher; Avitabile, Peter
2012-08-01
Real-time operating data are very important for understanding actual system response. Unfortunately, the number of physical data points typically collected is very small, and interpretation of the data is often difficult. Expansion techniques have been developed that use traditional experimental modal data to augment this limited set of data. This expansion process allows for a much improved description of the real-time operating response. This paper presents results from several different structures to show the robustness of the technique. Comparisons are made to a more complete set of measured data to validate the approach. Both analytical simulations and actual experimental data are used to illustrate the usefulness of the technique.
Improvements and validation of the erythropoiesis control model for bed rest simulation
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
The most significant improvement in the model is the explicit formulation of separate elements representing erythropoietin production and red cell production. Other modifications include bone marrow time-delays, capability to shift oxyhemoglobin affinity and an algorithm for entering experimental data as time-varying driving functions. An area of model development is suggested by applying the model to simulating onset, diagnosis and treatment of a hematologic disorder. Recommendations for further improvements in the model and suggestions for experimental application are also discussed. A detailed analysis of the hematologic response to bed rest including simulation of the recent Baylor Medical College bed rest studies is also presented.
NASA Astrophysics Data System (ADS)
Mucchi, E.; Dalpiaz, G.
2015-01-01
This work concerns external gear pumps for automotive applications, which operate at high speed and low pressure. In previous works of the authors (Part I and II, [1,2]), a non-linear lumped-parameter kineto-elastodynamic model for the prediction of the dynamic behaviour of external gear pumps was presented. It takes into account the most important phenomena involved in the operation of this kind of machine. The two main sources of noise and vibration are considered: pressure pulsation and gear meshing. The model has been used in order to foresee the influence of working conditions and design modifications on vibration generation. The model's experimental validation is a difficult task. Thus, Part III proposes a novel methodology for the validation carried out by the comparison of simulations and experimental results concerning forces and moments: it deals with the external and inertial components acting on the gears, estimated by the model, and the reactions and inertial components on the pump casing and the test plate, obtained by measurements. The validation is carried out comparing the level of the time synchronous average in the time domain and the waterfall maps in the frequency domain, with particular attention to identify system resonances. The validation results are satisfactory globally, but discrepancies are still present. Moreover, the assessed model has been properly modified for the application to a new virtual pump prototype with helical gears in order to foresee gear accelerations and dynamic forces. Part IV is focused on improvements in the modelling and analysis of the phenomena bound to the pressure evolution around the gears in order to achieve results closer to the measured values. As a matter of fact, the simulation results have shown that a variable meshing stiffness has a notable contribution on the dynamic behaviour of the pump but this is not as important as the pressure phenomena. 
As a consequence, the original model was modified with the aim of improving the calculation of pressure forces and torques. The improved pressure formulation includes several phenomena not considered in the previous one, such as the variable pressure evolution at the input and output ports, as well as an accurate description of the trapped volume and its connections with the high- and low-pressure chambers. The importance of these improvements is highlighted by comparison with experimental results, showing satisfactory matching.
Tignon, Marylène; Gallardo, Carmina; Iscaro, Carmen; Hutet, Evelyne; Van der Stede, Yves; Kolbasov, Denis; De Mia, Gian Mario; Le Potier, Marie-Frédérique; Bishop, Richard P; Arias, Marisa; Koenen, Frank
2011-12-01
A real-time polymerase chain reaction (PCR) assay for the rapid detection of African swine fever virus (ASFV), multiplexed for simultaneous detection of swine beta-actin as an endogenous control, has been developed and validated by four National Reference Laboratories of the European Union for African swine fever (ASF), including the European Union Reference Laboratory. Primers and a TaqMan(®) probe specific for ASFV were selected from conserved regions of the p72 gene. The limit of detection of the new real-time PCR assay is 5.7-57 copies of the ASFV genome. High accuracy, reproducibility and robustness of the PCR assay (CV ranging from 0.7 to 5.4%) were demonstrated both within and between laboratories using different real-time PCR instruments. The specificity of virus detection was validated using a panel of 44 isolates collected over many years in various geographical locations in Europe, Africa and America, including recent isolates from the Caucasus region, Sardinia, and East and West Africa. Compared to the OIE-prescribed conventional and real-time PCR assays, the sensitivity of the new assay with internal control was improved, as demonstrated by testing 281 field samples collected in recent outbreaks and surveillance areas in Europe and Africa (170 samples) together with samples obtained through experimental infections (111 samples). This is particularly evident in the early days following experimental infection and during the course of the disease in pigs sub-clinically infected with strains of low virulence (from 35 up to 70 days post-infection). The specificity of the assay was also confirmed on 150 samples from uninfected pigs and wild boar from ASF-free areas. Measured on the total of 431 tested samples, the positive deviation of the new assay reaches 21% or 26% compared to the PCR and real-time PCR methods recommended by the OIE.
This improved and rigorously validated real-time PCR assay with internal control will provide a rapid, sensitive and reliable molecular tool for ASFV detection in pigs in newly infected areas, control in endemic areas and surveillance in ASF-free areas. Copyright © 2011 Elsevier B.V. All rights reserved.
Experimental Evaluation of Processing Time for the Synchronization of XML-Based Business Objects
NASA Astrophysics Data System (ADS)
Ameling, Michael; Wolf, Bernhard; Springer, Thomas; Schill, Alexander
Business objects (BOs) are data containers for complex data structures used in business applications such as Supply Chain Management and Customer Relationship Management. Due to the replication of application logic, multiple copies of BOs are created which have to be synchronized and updated. This is a complex and time-consuming task because BOs vary considerably in their structure according to the distribution, number and size of elements. Since BOs are internally represented as XML documents, the parsing of XML is one major cost factor which has to be considered when minimizing the processing time during synchronization. The prediction of the parsing time for BOs is a significant property for the selection of an efficient synchronization mechanism. In this paper, we present a method to evaluate the influence of the structure of BOs on their parsing time. The results of our experimental evaluation, incorporating four different XML parsers, reveal the dependencies between the distribution of elements and the parsing time. Finally, a general cost model is validated and simplified according to the results of the experimental setup.
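The structural dependence described above can be probed with a small timing harness; a minimal sketch using Python's standard-library `xml.etree` parser (not one of the four parsers evaluated in the paper), with a hypothetical flat BO layout:

```python
import time
import xml.etree.ElementTree as ET

def make_bo_xml(n_elements, payload="x" * 32):
    """Generate a flat business-object-like XML document with n_elements children."""
    items = "".join(f"<item id='{i}'>{payload}</item>" for i in range(n_elements))
    return f"<bo>{items}</bo>"

def parse_time(xml_text, repeats=5):
    """Best-of-N wall-clock time to parse xml_text."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        ET.fromstring(xml_text)
        best = min(best, time.perf_counter() - t0)
    return best

# Parsing time grows with the number and size of elements in the BO.
small = parse_time(make_bo_xml(100))
large = parse_time(make_bo_xml(10_000))
print(f"100 elements: {small:.6f}s, 10000 elements: {large:.6f}s")
```

Repeating such measurements over varying element counts, depths, and payload sizes yields the kind of data points from which a parsing-time cost model can be fitted.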
Miller, Andrew; Villegas, Arturo; Diez, F Javier
2015-03-01
The solution to the startup transient EOF in an arbitrary rectangular microchannel is derived analytically and validated experimentally. This full 2D transient solution describes the evolution of the flow through five distinct periods until reaching a final steady state. The derived analytical velocity solution is validated experimentally for different channel sizes and aspect ratios under time-varying pressure gradients. The experiments used a time resolved micro particle image velocimetry technique to calculate the startup transient velocity profiles. The measurements captured the effect of time-varying pressure gradient fields derived in the analytical solutions. This is tested by using small reservoirs at both ends of the channel which allowed a time-varying pressure gradient to develop with a time scale on the order of the transient EOF. Results showed that under these common conditions, the effect of the pressure build up in the reservoirs on the temporal development of the transient startup EOF in the channels cannot be neglected. The measurements also captured the analytical predictions for channel walls made of different materials (i.e., zeta potentials). This was tested in channels that had three PDMS and one quartz wall, resulting in a flow with an asymmetric velocity profile due to variations in the zeta potential between the walls. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Motion control of 7-DOF arms - The configuration control approach
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Long, Mark K.; Lee, Thomas S.
1993-01-01
Graphics simulation and real-time implementation of configuration control schemes for a redundant 7-DOF Robotics Research arm are described. The arm kinematics and motion control schemes are described briefly. This is followed by a description of a graphics simulation environment for 7-DOF arm control on the Silicon Graphics IRIS Workstation. Computer simulation results are presented to demonstrate elbow control, collision avoidance, and optimal joint movement as redundancy resolution goals. The laboratory setup for experimental validation of motion control of the 7-DOF Robotics Research arm is then described. The configuration control approach is implemented on a Motorola-68020/VME-bus-based real-time controller, with elbow positioning for redundancy resolution. Experimental results demonstrate the efficacy of configuration control for real-time control.
Asymmetric band flipping for time-of-flight neutron diffraction data
Whitfield, Pamela S.; Coelho, Alan A.
2016-08-24
Charge flipping with powder diffraction data is known to produce a result more reliably with high-resolution data, i.e. visible reflections at small d spacings. Such data are readily accessible with the neutron time-of-flight technique, but the assumption that negative scattering density is non-physical is no longer valid where elements with negative scattering lengths are present. The concept of band flipping, where a negative threshold is used in addition to a positive threshold during the flipping, was introduced in the literature but was not tested with experimental data at the time. Band flipping has now been implemented in TOPAS together with the band modification of low-density elimination and tested with experimental powder and Laue single-crystal neutron data.
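The band-flipping idea can be sketched in a few lines with NumPy; here `delta_pos` and `delta_neg` are the hypothetical positive and negative thresholds (ordinary charge flipping is recovered with an infinite negative threshold). This is a generic illustration, not the TOPAS implementation:

```python
import numpy as np

def band_flip_step(rho, delta_pos, delta_neg):
    """One real-space band-flipping update: density inside the band
    (-delta_neg, delta_pos) is sign-flipped; density outside the band
    (strongly positive atoms, or strongly negative ones such as sites
    with negative scattering length) is kept.  delta_neg = np.inf
    recovers ordinary charge flipping (everything below delta_pos flips)."""
    rho = rho.copy()
    band = (rho < delta_pos) & (rho > -delta_neg)
    rho[band] = -rho[band]
    return rho

def flipping_cycle(rho, f_obs, delta_pos, delta_neg):
    """One full cycle: flip in real space, then impose the observed
    structure-factor amplitudes |F_obs| in reciprocal space while
    keeping the current phases."""
    g = np.fft.fftn(band_flip_step(rho, delta_pos, delta_neg))
    phases = np.exp(1j * np.angle(g))
    return np.fft.ifftn(f_obs * phases).real
```

Iterating `flipping_cycle` from a random starting density is the essence of the method; convergence diagnostics and the low-density-elimination modification are omitted here.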
Crea, Simona; Cipriani, Christian; Donati, Marco; Carrozza, Maria Chiara; Vitiello, Nicola
2015-03-01
Here we describe a novel wearable feedback apparatus for lower-limb amputees. The system is based on three modules: a pressure-sensitive insole for the measurement of the plantar pressure distribution under the prosthetic foot during gait, a computing unit for data processing and gait segmentation, and a set of vibrating elements placed on the thigh skin. The feedback strategy relies on the detection of specific gait-phase transitions of the amputated leg. Vibrating elements are activated in a time-discrete manner, simultaneously with the occurrence of the detected gait-phase transitions. Usability and effectiveness of the apparatus were successfully assessed through an experimental validation involving ten healthy volunteers.
NASA Astrophysics Data System (ADS)
Thomas, L.; Tremblais, B.; David, L.
2014-03-01
Optimization of the multiplicative algebraic reconstruction technique (MART), simultaneous MART and block-iterative MART reconstruction techniques was carried out on synthetic and experimental data. Different criteria were defined to improve the preprocessing of the initial images. Knowledge of how each reconstruction parameter influences the quality of the particle volume reconstruction and the computing time is key in Tomo-PIV. These criteria were applied to a real case, a jet in cross-flow, and were validated.
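The multiplicative update at the heart of MART can be sketched as follows; a hedged illustration using a dense ray-voxel weight matrix, not the optimized sparse traversal used in Tomo-PIV practice:

```python
import numpy as np

def mart_update(w, A, p, mu=1.0, n_iter=10, eps=1e-12):
    """Multiplicative ART.  w: voxel intensity vector, A: ray-voxel
    weight matrix (one row per camera pixel/ray), p: measured pixel
    intensities, mu: relaxation factor in (0, 1].  Each ray applies a
    multiplicative correction exponentiated by its voxel weights, so
    voxels the ray does not touch (weight 0) are left unchanged."""
    w = w.astype(float).copy()
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            proj = A[i] @ w
            if proj > eps and p[i] > eps:
                w *= (p[i] / proj) ** (mu * A[i])
    return w
```

Because the correction is multiplicative, a zero-initialized voxel stays zero, which is why the preprocessing of the initial images (the starting volume) matters so much for reconstruction quality.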
Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare.
Flannelly, Kevin J; Flannelly, Laura T; Jankowski, Katherine R B
2018-01-01
The article defines, describes, and discusses the seven threats to the internal validity of experiments discussed by Donald T. Campbell in his classic 1957 article: history, maturation, testing, instrument decay, statistical regression, selection, and mortality. These concepts are said to be threats to the internal validity of experiments because they pose alternate explanations for the apparent causal relationship between the independent variable and dependent variable of an experiment if they are not adequately controlled. A series of simple diagrams illustrate three pre-experimental designs and three true experimental designs discussed by Campbell in 1957 and several quasi-experimental designs described in his book written with Julian C. Stanley in 1966. The current article explains why each design controls for or fails to control for these seven threats to internal validity.
NASA Astrophysics Data System (ADS)
Almiron Bonnin, Rubens Eduardo
The development of an experimental synchrophasor network and the application of synchrophasors for real-time transmission line parameter monitoring are presented in this thesis. In the laboratory setup, a power system is simulated in an RTDS real-time digital simulator, and the simulated voltages and currents are input to hardware phasor measurement units (PMUs) through the analog outputs of the simulator. Time-synchronizing signals for the PMU devices are supplied from a common GPS clock. The real-time data collected from the PMUs are sent to a phasor data concentrator (PDC) through Ethernet using the TCP/IP protocol. A real-time transmission line parameter monitoring application program that uses the synchrophasor data provided by the PDC is implemented and validated. The experimental synchrophasor network developed in this thesis is expected to be used in research on synchrophasor applications as well as in graduate and undergraduate teaching.
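Conceptually, a PMU reduces one cycle of sampled waveform to a synchrophasor via a DFT correlation at the fundamental frequency; a minimal sketch of that reduction (not the estimator mandated by the IEEE C37.118 standard that hardware PMUs implement):

```python
import numpy as np

def estimate_phasor(samples, samples_per_cycle):
    """One-cycle DFT phasor estimate: returns the RMS magnitude and the
    phase angle (radians) of the fundamental component computed from the
    most recent cycle of waveform samples."""
    n = samples_per_cycle
    x = np.asarray(samples[-n:], float)
    k = np.arange(n)
    # Correlate the samples with the fundamental-frequency exponential.
    ph = (2.0 / n) * np.sum(x * np.exp(-2j * np.pi * k / n))
    return np.abs(ph) / np.sqrt(2.0), np.angle(ph)

# 60 Hz cosine, amplitude 100, phase +30 degrees, 16 samples per cycle.
n = 16
t = np.arange(n) / n
v = 100 * np.cos(2 * np.pi * t + np.radians(30))
mag, ang = estimate_phasor(v, n)
print(mag, np.degrees(ang))
```

Time-stamping these estimates against a common GPS clock is what makes phasors from different PMUs directly comparable at the PDC.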
A three-stage experimental strategy to evaluate and validate an interplate IC50 format.
Rodrigues, Daniel J; Lyons, Richard; Laflin, Philip; Pointon, Wayne; Kammonen, Juha
2007-12-01
The serial dilution of compounds to establish potency against target enzymes or receptors can at times be a rate-limiting step in project progression. We have investigated the possibility of running 50% inhibitory concentration experiments in an interplate format, with dose ranges constructed across plates. The advantages associated with this format include a faster reformatting time for the compounds while also increasing the number of doses that can be potentially generated. These two factors, in particular, would lend themselves to a higher-throughput and more timely testing of compounds, while also maximizing chances to capture fully developed dose-response curves. The key objective from this work was to establish a strategy to assess the feasibility of an interplate format to ensure that the quality of data generated would be equivalent to historical formats used. A three-stage approach was adopted to assess and validate running an assay in an interplate format, compared to an intraplate format. Although the three-stage strategy was tested with two different assay formats, it would be necessary to investigate the feasibility for other assay types. The recommendation is that the three-stage experimental strategy defined here is used to assess feasibility of other assay formats used.
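Whether the doses come from one plate (intraplate) or are constructed across plates (interplate), the pooled points are fit to the same four-parameter logistic model at analysis time; a minimal sketch with SciPy on synthetic data (parameter names and values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

def fit_ic50(conc, response):
    """Fit the 4PL model and return the estimated IC50.  Initial guesses:
    response extremes for the asymptotes, mid-range dose for IC50, unit
    Hill slope."""
    p0 = [response.min(), response.max(), np.median(conc), 1.0]
    popt, _ = curve_fit(four_pl, conc, response, p0=p0, maxfev=10000)
    return popt[2]

# A 12-point dose range such as an interplate format might assemble.
conc = np.logspace(-3, 2, 12)                # e.g. uM
resp = four_pl(conc, 0.0, 100.0, 0.5, 1.2)   # noiseless synthetic data
print(f"recovered IC50 = {fit_ic50(conc, resp):.3f}")
```

The extra doses an interplate format affords mainly help anchor the two asymptotes and the slope, which is where sparse intraplate curves most often fail.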
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Lu, Jiemin; Islam, Akand
2017-05-01
Geologic repositories are extensively used for disposing of byproducts in the mineral and energy industries. The safety and reliability of these repositories are a primary concern to environmental regulators and the public. The time-lapse oscillatory pumping test (OPT) has been introduced recently as a pressure-based technique for detecting potential leakage in geologic repositories. By routinely conducting OPT at a number of pulsing frequencies, an operator may identify potential repository anomalies in the frequency domain, alleviating the ambiguity caused by reservoir noise and improving the signal-to-noise ratio. Building on previous theoretical and field studies, this work performed a series of laboratory experiments to validate the concept of time-lapse OPT using a custom-made stainless steel tank under relatively high pressures. The experimental configuration simulates a miniature geologic storage repository consisting of three layers (i.e., injection zone, caprock, and above-zone aquifer). Results show that leakage in the injection zone led to deviations in the power spectrum of the observed pressure data, the amplitude of which increases with decreasing pulsing frequency. The experimental results are further analyzed by developing a 3D flow model, with which the model parameters are estimated through frequency-domain inversion.
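The frequency-domain detection idea reduces to comparing the spectral power at the pulsing frequency between time-lapse surveys; a minimal illustration on synthetic pressure data, not the full 3D inversion used in the study:

```python
import numpy as np

def pressure_power_spectrum(p, dt):
    """One-sided power spectrum of a detrended pressure record."""
    p = np.asarray(p, float) - np.mean(p)
    spec = np.abs(np.fft.rfft(p)) ** 2
    freqs = np.fft.rfftfreq(len(p), d=dt)
    return freqs, spec

def peak_amplitude(p, dt, f_pulse):
    """Spectral power at the pulsing frequency.  In a time-lapse OPT
    campaign, a leak shows up as a deviation of this value from the
    baseline survey."""
    freqs, spec = pressure_power_spectrum(p, dt)
    return spec[np.argmin(np.abs(freqs - f_pulse))]

# Synthetic OPT record: oscillatory pumping at 0.05 Hz plus reservoir noise.
rng = np.random.default_rng(0)
t = np.arange(0, 2000, 1.0)                       # 1 s sampling
p = 3.0 * np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 0.3, t.size)
print(peak_amplitude(p, 1.0, 0.05))
```

Concentrating the injected signal at a known frequency is precisely what lets the pumped response stand far above the broadband reservoir noise floor.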
Modeling ventilation time in forage tower silos.
Bahloul, A; Chavez, M; Reggio, M; Roberge, B; Goyer, N
2012-10-01
The fermentation process in forage tower silos produces a significant amount of gases, which can easily reach dangerous concentrations and constitute a hazard for silo operators. To maintain a non-toxic environment, silo ventilation is applied. Literature reviews show that the fermentation gases reach high concentrations in the headspace of a silo and flow down the silo from the chute door to the feed room. In this article, a detailed parametric analysis of forced ventilation scenarios built via numerical simulation was performed. The methodology is based on the solution of the Navier-Stokes equations, coupled with transport equations for the gas concentrations. Validation was achieved by comparing the numerical results with experimental data obtained from a scale model silo using the tracer gas testing method for O2 and CO2 concentrations. Good agreement was found between the experimental and numerical results. The set of numerical simulations made it possible to establish a simple analytical model to predict the minimum time required to ventilate a silo to make it safe to enter. This ventilation time takes into account the headspace above the forage, the airflow rate, and the initial concentrations of O2 and CO2. The final analytical model was validated with available results from the literature.
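For intuition, a well-mixed dilution model gives the classic ventilation-time expression; this is a deliberate simplification of the article's analytical model (which also tracks O2 recovery and headspace geometry), and the numbers below are purely illustrative:

```python
import math

def ventilation_time(headspace_m3, airflow_m3_per_min, c0_ppm, c_safe_ppm):
    """Minimum ventilation time under a well-mixed (perfect dilution)
    assumption: C(t) = C0 * exp(-Q t / V), so t = (V/Q) * ln(C0 / C_safe).
    headspace_m3: gas volume above the forage; airflow: fan rate;
    c0/c_safe: initial and target contaminant concentrations."""
    if c0_ppm <= c_safe_ppm:
        return 0.0
    return (headspace_m3 / airflow_m3_per_min) * math.log(c0_ppm / c_safe_ppm)

# Example: 50 m3 headspace, 10 m3/min fan, CO2 from 30000 ppm down to 5000 ppm.
t_min = ventilation_time(50.0, 10.0, 30000.0, 5000.0)
print(f"ventilation time = {t_min:.1f} min")
```

The logarithmic dependence on the concentration ratio explains why ventilation time grows only slowly with the initial gas load but linearly with headspace volume and inversely with airflow rate.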
DeFelice, Thomas P.; Lloyd, D.; Meyer, D.J.; Baltzer, T. T.; Piraina, P.
2003-01-01
An atmospheric correction algorithm developed for the 1 km Advanced Very High Resolution Radiometer (AVHRR) global land dataset was modified to include a near real-time total column water vapour data input field to account for the natural variability of atmospheric water vapour. The real-time data input field used for this study is the Television and Infrared Observational Satellite (TIROS) Operational Vertical Sounder (TOVS) Pathfinder A global total column water vapour dataset. It was validated prior to its use in the AVHRR atmospheric correction process using two North American AVHRR scenes, namely 13 June and 28 November 1996. The validation results are consistent with those reported by others and entail a comparison between TOVS, radiosonde, experimental sounding, microwave radiometer, and data from a hand-held sunphotometer. The use of this data layer as input to the AVHRR atmospheric correction process is discussed.
Evaluation and Validation of the Messinger Freezing Fraction
NASA Technical Reports Server (NTRS)
Anderson, David N.; Tsao, Jen-Ching
2005-01-01
One of the most important non-dimensional parameters used in ice-accretion modeling and scaling studies is the freezing fraction defined by the heat-balance analysis of Messinger. For fifty years this parameter has been used to indicate how rapidly freezing takes place when super-cooled water strikes a solid body. The value ranges from 0 (no freezing) to 1 (water freezes immediately on impact), and the magnitude has been shown to play a major role in determining the physical appearance of the accreted ice. Because of its importance to ice shape, this parameter and the physics underlying the expressions used to calculate it have been questioned from time to time. Until now, there has been no strong evidence either validating or casting doubt on the current expressions. This paper presents experimental measurements of the leading-edge thickness of a number of ice shapes for a variety of test conditions with nominal freezing fractions from 0.3 to 1.0. From these thickness measurements, experimental freezing fractions were calculated and compared with values found from the Messinger analysis as applied by Ruff. Within the experimental uncertainty of measuring the leading-edge thickness, agreement of the experimental and analytical freezing fraction was very good. It is also shown that values of analytical freezing fraction were entirely consistent with observed ice shapes at and near rime conditions: At an analytical freezing fraction of unity, experimental ice shapes displayed the classic rime shape, while for conditions producing analytical freezing fractions slightly lower than unity, glaze features started to appear.
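The experimental freezing fraction can be inferred from the measured leading-edge thickness by comparing the mass of ice actually frozen to the mass of water impinging at the stagnation line; a hedged sketch of the standard relation (symbol definitions and correction factors may differ from the paper's exact analysis):

```python
def experimental_freezing_fraction(delta_m, rho_ice, beta0, lwc, v_inf, tau_s):
    """Freezing fraction inferred from measured leading-edge ice thickness
    delta_m (m) over spray time tau_s (s).  beta0: stagnation collection
    efficiency; lwc: liquid water content (kg/m^3); v_inf: airspeed (m/s);
    rho_ice: ice density (kg/m^3).  Implements the relation
    n = delta * rho_ice / (beta0 * v_inf * lwc * tau), an assumed
    simplified form of the thickness-based definition."""
    impinged_per_area = beta0 * v_inf * lwc * tau_s   # kg/m^2 of water caught
    frozen_per_area = rho_ice * delta_m               # kg/m^2 of ice measured
    return frozen_per_area / impinged_per_area
```

A value of 1 means every impinging droplet freezes on impact (rime); values below 1 indicate runback water and the onset of glaze features, consistent with the shapes reported above.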
2013-03-21
10 2.3 Time Series Response Data ................................................................................. 12 2.4 Comparison of Response...to 12 evaluating the efficiency of the parameter estimates. In the past, the most popular form of response surface design used the D-optimality...as well. A model can refer to almost anything in math , statistics, or computer science. It can be any “physical, mathematical, or logical
Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Conboy, Barbara (Technical Monitor)
1999-01-01
This separation has been logical thus far; however, as launch of AM-1 approaches, it must be recognized that many of these activities will shift emphasis from algorithm development to validation. For example, the second, third, and fifth bullets will become almost totally validation-focussed activities in the post-launch era, providing the core of our experimental validation effort. Work under the first bullet will continue into the post-launch time frame, driven in part by algorithm deficiencies revealed as a result of validation activities. Prior to the start of the 1999 fiscal year (FY99) we were requested to prepare a brief plan for our FY99 activities. This plan is included as Appendix 1. The present report describes the progress made on our planned activities.
Fang, Jiansong; Wu, Zengrui; Cai, Chuipu; Wang, Qi; Tang, Yun; Cheng, Feixiong
2017-11-27
Natural products with diverse chemical scaffolds have been recognized as an invaluable source of compounds in drug discovery and development. However, systematic identification of drug targets for natural products at the human proteome level via various experimental assays is highly expensive and time-consuming. In this study, we proposed a systems pharmacology infrastructure to predict new drug targets and anticancer indications of natural products. Specifically, we reconstructed a global drug-target network with 7,314 interactions connecting 751 targets and 2,388 natural products and built predictive network models via a balanced substructure-drug-target network-based inference approach. A high area under the receiver operating characteristic curve of 0.96 was achieved for predicting new targets of natural products during cross-validation. The newly predicted targets of natural products (e.g., resveratrol, genistein, and kaempferol) with high scores were validated by various literature studies. We further built statistical network models for the identification of new anticancer indications of natural products through integration of both experimentally validated and computationally predicted drug-target interactions of natural products with known cancer proteins. We showed that the significantly predicted anticancer indications of multiple natural products (e.g., naringenin, disulfiram, and metformin) with new mechanisms of action were validated by various published experimental evidence. In summary, this study offers powerful computational systems pharmacology approaches and tools for the development of novel targeted cancer therapies by exploiting the polypharmacology of natural products.
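The network-based inference step underlying such models can be sketched as a two-step resource diffusion on the bipartite drug-target graph; a minimal plain-NBI version, without the paper's balanced substructure weighting:

```python
import numpy as np

def nbi_scores(A):
    """Bipartite network-based inference (two-step resource diffusion).
    A: drug x target adjacency matrix of known interactions.  Resource
    spreads from each drug's targets to neighboring drugs and back to
    their targets; the resulting score matrix ranks unobserved
    drug-target pairs for prediction."""
    A = A.astype(float)
    kd = A.sum(axis=1, keepdims=True)   # drug degrees
    kt = A.sum(axis=0, keepdims=True)   # target degrees
    kd[kd == 0] = np.inf                # isolated nodes contribute nothing
    kt[kt == 0] = np.inf
    # Target-to-target transfer matrix, degree-normalized at each hop.
    W = (A / kd).T @ (A / kt)
    return A @ W

A = np.array([[1, 1], [1, 0]])   # drug0 hits t0 and t1; drug1 hits t0 only
print(nbi_scores(A))
```

Here drug1 receives a nonzero score for the unobserved target t1 because it shares a target with drug0; substructure-augmented variants add chemical-similarity edges to the same diffusion scheme.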
Vlachos, Ioannis S; Paraskevopoulou, Maria D; Karagkouni, Dimitra; Georgakilas, Georgios; Vergoulis, Thanasis; Kanellos, Ilias; Anastasopoulos, Ioannis-Laertis; Maniou, Sofia; Karathanou, Konstantina; Kalfakakou, Despina; Fevgas, Athanasios; Dalamagas, Theodore; Hatzigeorgiou, Artemis G
2015-01-01
microRNAs (miRNAs) are short non-coding RNA species, which act as potent gene expression regulators. Accurate identification of miRNA targets is crucial to understanding their function. Currently, hundreds of thousands of miRNA:gene interactions have been experimentally identified. However, this wealth of information is fragmented and hidden in thousands of manuscripts and raw next-generation sequencing data sets. DIANA-TarBase was initially released in 2006 and it was the first database aiming to catalog published experimentally validated miRNA:gene interactions. DIANA-TarBase v7.0 (http://www.microrna.gr/tarbase) aims to provide for the first time hundreds of thousands of high-quality manually curated experimentally validated miRNA:gene interactions, enhanced with detailed meta-data. DIANA-TarBase v7.0 enables users to easily identify positive or negative experimental results, the utilized experimental methodology, experimental conditions including cell/tissue type and treatment. The new interface provides also advanced information ranging from the binding site location, as identified experimentally as well as in silico, to the primer sequences used for cloning experiments. More than half a million miRNA:gene interactions have been curated from published experiments on 356 different cell types from 24 species, corresponding to 9- to 250-fold more entries than any other relevant database. DIANA-TarBase v7.0 is freely available. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
2012-08-01
Experimental Characterization and Validation of Simultaneous Gust Alleviation and Energy Harvesting for Multifunctional Wing Spars
AFOSR report. Only briefing-chart fragments of this record survive; the recoverable highlights are: gust simulation using a Dryden PSD under cloud-wind and clear-sky conditions; an energy control law based on limited-energy constraints; and experimentally validated simultaneous energy harvesting and vibration control.
Modelling of polymer photodegradation for solar cell modules
NASA Technical Reports Server (NTRS)
Somersall, A. C.; Guillet, J. E.
1981-01-01
A computer program was developed to model and calculate, by numerical integration, the varying concentrations of chemical species formed during photooxidation of a polymeric material over time, taking as input a chosen set of elementary reactions, the corresponding rate constants, and a convenient set of starting conditions. Attempts were made to validate the proposed mechanism by experimentally monitoring the photooxidation products of small liquid alkanes, which are useful starting models for the ethylene segments of polymers such as EVA. The model system proved inappropriate for the intended purposes. Another validation model is recommended.
Validating internal controls for quantitative plant gene expression studies
Brunner, Amy M; Yakovlev, Igor A; Strauss, Steven H
2004-01-01
Background Real-time reverse transcription PCR (RT-PCR) has greatly improved the ease and sensitivity of quantitative gene expression studies. However, accurate measurement of gene expression with this method relies on the choice of a valid reference for data normalization. Studies rarely verify that gene expression levels for reference genes are adequately consistent among the samples used, nor compare alternative genes to assess which are most reliable for the experimental conditions analyzed. Results Using real-time RT-PCR to study the expression of 10 poplar (genus Populus) housekeeping genes, we demonstrate a simple method for determining the degree of stability of gene expression over a set of experimental conditions. Based on a traditional method for analyzing the stability of varieties in plant breeding, it defines measures of gene expression stability from analysis of variance (ANOVA) and linear regression. We found that the potential internal control genes differed widely in their expression stability over the different tissues, developmental stages and environmental conditions studied. Conclusion Our results support that quantitative comparisons of candidate reference genes are an important part of real-time RT-PCR studies that seek to precisely evaluate variation in gene expression. The method we demonstrated facilitates statistical and graphical evaluation of gene expression stability. Selection of the best reference gene for a given set of experimental conditions should enable detection of biologically significant changes in gene expression that are too small to be revealed by less precise methods, or when highly variable reference genes are unknowingly used in real-time RT-PCR experiments. PMID:15317655
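A crude first pass at the stability comparison described above is to rank candidate reference genes by their coefficient of variation across conditions; a minimal sketch with invented Ct values (the paper's ANOVA- and regression-based stability measures are more discriminating than a plain CV):

```python
import numpy as np

def expression_stability(ct_values):
    """Rank candidate reference genes by expression consistency across
    samples/conditions.  ct_values: dict mapping gene name to an array of
    measurements (e.g. normalized Ct) over the experimental conditions.
    A lower coefficient of variation indicates a more stable reference."""
    cv = {g: np.std(v, ddof=1) / np.mean(v) for g, v in ct_values.items()}
    return sorted(cv.items(), key=lambda kv: kv[1])

# Hypothetical candidates: one stable, one strongly condition-dependent.
genes = {
    "UBQ": np.array([20.1, 20.3, 20.0, 20.2]),
    "ACT": np.array([19.0, 22.5, 18.2, 24.0]),
}
print(expression_stability(genes))
```

Whatever the metric, the point of the paper stands: the comparison must be run on the actual tissues and conditions of the study, since a gene stable in one condition set can vary widely in another.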
Zhang, Yunlong; Li, Ruoming; Shi, Yuechun; Zhang, Jintao; Chen, Xiangfei; Liu, Shengchun
2015-06-01
A novel fiber Bragg grating aided fiber loop ringdown (FLRD) sensor array and a wavelength-time multiplexing based interrogation technique for the FLRD sensor array are proposed. The interrogation frequency of the system is formulated and the interrelationships among the system parameters are analyzed. To validate the performance of the proposed system, a five-element array is experimentally demonstrated, and the system shows the capability of monitoring every FLRD element in real time with an interrogation frequency of 125.5 Hz.
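Each FLRD element reports a single-exponential decay whose time constant carries the sensed quantity; a minimal sketch of extracting the ringdown time by curve fitting, on synthetic noiseless data (the interrogation hardware and multiplexing logic are omitted):

```python
import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, i0, tau):
    """Single-exponential ringdown: the round-trip loss in the fiber
    loop (and hence the measurand at the grating-selected wavelength)
    sets the decay time tau."""
    return i0 * np.exp(-t / tau)

def fit_ringdown_time(t, intensity):
    """Fit the decay and return the ringdown time tau."""
    p0 = [intensity[0], t[-1] / 3.0]
    popt, _ = curve_fit(ringdown, t, intensity, p0=p0)
    return popt[1]

t = np.linspace(0, 50e-6, 200)        # 50 us record
signal = ringdown(t, 1.0, 8e-6)       # synthetic tau = 8 us
print(f"tau = {fit_ringdown_time(t, signal) * 1e6:.2f} us")
```

The quoted 125.5 Hz interrogation frequency bounds how often such a fit (or an equivalent threshold-based estimate) can be refreshed for every element in the array.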
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Fleming, Gary A.
2007-01-01
Virtual Diagnostics Interface technology, or ViDI, is a suite of techniques utilizing image processing, data handling and three-dimensional computer graphics. These techniques aid in the design, implementation, and analysis of complex aerospace experiments. LiveView3D is a software application component of ViDI used to display experimental wind tunnel data in real-time within an interactive, three-dimensional virtual environment. The LiveView3D software application was under development at NASA Langley Research Center (LaRC) for nearly three years. LiveView3D recently was upgraded to perform real-time (as well as post-test) comparisons of experimental data with pre-computed Computational Fluid Dynamics (CFD) predictions. This capability was utilized to compare experimental measurements with CFD predictions of the surface pressure distribution of the NASA Ares I Crew Launch Vehicle (CLV) - like vehicle when tested in the NASA LaRC Unitary Plan Wind Tunnel (UPWT) in December 2006 - January 2007 timeframe. The wind tunnel tests were conducted to develop a database of experimentally-measured aerodynamic performance of the CLV-like configuration for validation of CFD predictive codes.
Spatial Variation of Pressure in the Lyophilization Product Chamber Part 1: Computational Modeling.
Ganguly, Arnab; Varma, Nikhil; Sane, Pooja; Bogner, Robin; Pikal, Michael; Alexeenko, Alina
2017-04-01
The flow physics in the product chamber of a freeze dryer involves coupled heat and mass transfer at different length and time scales. The low-pressure environment and the relatively small flow velocities make it difficult to quantify the flow structure experimentally. The current work presents the three-dimensional computational fluid dynamics (CFD) modeling for vapor flow in a laboratory-scale freeze dryer validated with experimental data and theory. The model accounts for the presence of a non-condensable gas such as nitrogen or air using a continuum multi-species model. The flow structure at different sublimation rates, chamber pressures, and shelf-gaps are systematically investigated. Emphasis has been placed on accurately predicting the pressure variation across the subliming front. At a chamber set pressure of 115 mtorr and a sublimation rate of 1.3 kg/h/m², the pressure variation reaches about 9 mtorr. The pressure variation increased linearly with sublimation rate in the range of 0.5 to 1.3 kg/h/m². The dependence of pressure variation on the shelf-gap was also studied both computationally and experimentally. The CFD modeling results are found to agree within 10% with the experimental measurements. The computational model was also compared to an analytical solution valid for small shelf-gaps. Thus, the current work presents a validation study motivating broader use of CFD in optimizing freeze-drying process and equipment design.
2017-09-01
Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions
Thesis by Matthew D. Bouwense; distribution unlimited. Only report-documentation-page fragments of this record survive.
Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; McCrea, Andrew C.
2009-01-01
The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.
Experimental evaluation and thermodynamic system modeling of thermoelectric heat pump clothes dryer
Patel, Viral K.; Gluesenkamp, Kyle R.; Goodman, Dakota; ...
2018-02-28
Electric clothes dryers consume about 6% of US residential electricity consumption. Using a solid-state technology without refrigerant, thermoelectric (TE) heat pump dryers have the potential to be more efficient than units based on electric resistance and less expensive than units based on vapor compression. This study presents a steady state TE dryer model, and validates the model against results from an experimental prototype. The system model is composed of a TE heat pump element model coupled with a psychrometric dryer sub-model. Experimental results had energy factors (EFs) of up to 2.95 kg of dry cloth per kWh (6.51 lb of cloth per kWh), with a dry time of 159 min. A faster dry time of 96 min was also achieved at an EF of 2.54 kg of cloth per kWh (5.60 lb of cloth per kWh). The model was able to replicate the experimental results within 5% of EF and 5% of dry time values. Finally, the results are used to identify important parameters that affect dryer performance, such as relative humidity of air leaving the drum.
Al-Dhabi, Naif Abdullah; Ponmurugan, Karuppiah; Maran Jeganathan, Prakash
2017-01-01
In this work, a Box-Behnken statistical experimental design (BBD) was adopted to evaluate and optimize ultrasound-assisted solid-liquid extraction (USLE) of phytochemicals from spent coffee grounds (SCG). The factors studied were ultrasonic power, temperature, time, and solid-liquid (SL) ratio. The individual and interactive effects of the independent variables on extraction yield were captured by mathematical models generated from the experimental data. The optimum process conditions were determined to be an ultrasonic power of 244 W, a temperature of 40°C, a time of 34 min, and an SL ratio of 1:17 g/ml. Under these conditions, the predicted values agreed with the experimental values at the 95% confidence level, indicating the suitability of the selected method for USLE of phytochemicals from SCG. Copyright © 2016 Elsevier B.V. All rights reserved.
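The quadratic response-surface model underlying a Box-Behnken analysis can be sketched with ordinary least squares. The snippet below uses a synthetic three-level design and made-up coefficients, not the paper's coffee-ground data; a real Box-Behnken design would use a 27-run subset of these points.

```python
import itertools
import numpy as np

# Sketch of response-surface fitting as used with Box-Behnken-type designs:
# fit a full quadratic model y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj)
# by least squares.  Design and coefficients are synthetic.
rng = np.random.default_rng(0)

# Three-level full factorial in 4 coded factors (power, temp, time, S/L ratio).
X = np.array(list(itertools.product([-1.0, 0.0, 1.0], repeat=4)))

def quadratic_design(X):
    """Columns: intercept, linear, squared, and pairwise interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

true_beta = rng.normal(size=15)   # 1 + 4 + 4 + 6 coefficients
A = quadratic_design(X)
y = A @ true_beta                 # noise-free synthetic yields

beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With noise-free data the fit recovers the generating coefficients exactly; the optimum operating point is then found by maximizing the fitted surface over the factor ranges.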
Quasi-experimental study designs series-paper 7: assessing the assumptions.
Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian
2017-09-01
Quasi-experimental designs are gaining popularity in epidemiology and health systems research, in particular for the evaluation of health care practice, programs, and policy, because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each design to ensure valid causal inference and discuss the tests available to examine those assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.
Masonry structures built with fictile tubules: Experimental and numerical analyses
NASA Astrophysics Data System (ADS)
Tiberti, Simone; Scuro, Carmelo; Codispoti, Rosamaria; Olivito, Renato S.; Milani, Gabriele
2017-11-01
Masonry built with fictile tubules is a distinctive construction technique of the Mediterranean area. Dating back to Roman and early Christian times, it was used to build vaulted constructions and domes of various geometrical forms by virtue of its modular structure. In the present work, experimental tests were carried out to identify the mechanical properties of hollow clay fictile tubules and a possible reinforcing technique for existing buildings that employ such elements. The experimental results were then used to validate numerical models developed in the FE software Abaqus, which were also aimed at investigating the structural behavior of an arch via linear and nonlinear static analyses.
Experimental analysis of large capacity MR dampers with short- and long-stroke
NASA Astrophysics Data System (ADS)
Zemp, René; de la Llera, Juan Carlos; Weber, Felix
2014-12-01
The purpose of this article is to study and experimentally characterize two magneto-rheological dampers with short and long strokes, denoted hereafter as MRD-S and MRD-L. The latter was designed to improve the earthquake performance of a 21-story reinforced concrete building equipped with two 160-ton tuned pendular masses. The MRD-L has a nominal force capacity of 300 kN and a stroke of ±1 m; the MRD-S has a nominal force capacity of 150 kN and a stroke of ±0.1 m. The MRD-S was tested with two different magneto-rheological fluids and one viscous fluid. Due to the presence of eddy currents, both dampers show a time lag between current intensity and damper force as the magnetization of the damper changes in time. Experimental results from the MRD-L show a force drop-off behavior. A decrease in active-mode forces due to temperature increase is also analyzed for the MRD-S and the different fluids. Moreover, the observed increase in internal damper pressure due to energy dissipation is evaluated for the different fluids in both dampers. An analytical model to predict the internal pressure increase in the damper is proposed that includes as a parameter the concentration of magnetic particles in the fluid. Analytical dynamic pressure results are validated against the experimental tests. Finally, an extended Bingham fluid model, which considers compressibility of the fluid, is also proposed and validated using damper tests.
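The Bingham-type behavior referenced in the abstract is often summarized by a force model of the form F = c·v + F_y(I)·sgn(v). A smoothed sketch follows; all parameter values are illustrative, not quantities identified from the MRD-S/MRD-L tests.

```python
import numpy as np

# Smoothed Bingham-type force model commonly used for MR dampers:
# viscous term plus a current-dependent yield force, with tanh standing
# in for sgn to avoid the discontinuity at v = 0.  Parameters are
# illustrative only.
def mr_damper_force(v, current, c=50e3, v_ref=0.01, fy_per_amp=60e3):
    """Damper force [N] for piston velocity v [m/s] and coil current [A]."""
    f_yield = fy_per_amp * current           # current-dependent yield force
    return c * v + f_yield * np.tanh(v / v_ref)
```

Increasing the coil current raises the yield plateau, which is the controllable part of the damper force; the viscous term is fixed by the fluid and geometry.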
Zhuang, Jinda; Ju, Y Sungtaek
2015-09-22
The deformation and rupture of axisymmetric liquid bridges stretched between two fully wetted coaxial disks are studied experimentally and theoretically. We numerically solve the time-dependent Navier-Stokes equations while tracking the deformation of the liquid-air interface using the arbitrary Lagrangian-Eulerian (ALE) moving mesh method to fully account for the effects of inertia and viscous forces on bridge dynamics. The effects of stretching velocity, liquid properties, and liquid volume on the dynamics of liquid bridges are systematically investigated to provide direct experimental validation of our numerical model for stretching velocities as high as 3 m/s. The Ohnesorge number (Oh) of a liquid bridge is a primary factor governing the dynamics of bridge rupture, especially the dependence of the rupture distance on the stretching velocity. The rupture distance generally increases with the stretching velocity, far in excess of the static stability limit. For bridges with low Ohnesorge numbers, however, the rupture distance stays nearly constant or decreases with the stretching velocity within certain velocity windows, due to switching of the relative rupture position and changes in thread shape. Our work provides an experimentally validated modeling approach and experimental data that help establish a foundation for systematic further studies and applications of liquid bridges.
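The Ohnesorge number the abstract singles out is Oh = μ/√(ρσL), the ratio of viscous to inertio-capillary effects. A minimal computation, using water properties at room temperature and an assumed millimetre length scale (not the disk geometry of the paper):

```python
import math

# Ohnesorge number Oh = mu / sqrt(rho * sigma * L): small Oh means an
# inertia-dominated (low-viscosity) bridge.  Properties are for water at
# ~20 C; the length scale is an illustrative 1 mm.
def ohnesorge(mu, rho, sigma, length):
    return mu / math.sqrt(rho * sigma * length)

oh_water = ohnesorge(mu=1.0e-3, rho=1000.0, sigma=0.072, length=1.0e-3)
# Millimetre-scale water bridges sit firmly in the low-Oh regime.
```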
Closed-loop control of renal perfusion pressure in physiological experiments.
Campos-Delgado, D U; Bonilla, I; Rodríguez-Martínez, M; Sánchez-Briones, M E; Ruiz-Hernández, E
2013-07-01
This paper presents the design, experimental modeling, and control of a pump-driven renal perfusion pressure (RPP) regulatory system to implement precise and relatively fast RPP regulation in rats. The mechatronic system is a simple, low-cost, and reliable device that automates the RPP regulation process based on flow-mediated occlusion. The regulated signal is the RPP measured in the left femoral artery of the rat, and the manipulated variable is the voltage applied to a dc motor that controls the occlusion of the aorta. The control system is implemented on a PC through LabView software and an NI USB-6210 data acquisition board. A simple first-order linear system is proposed to approximate the dynamics of the experiment, with model parameters chosen to minimize the error between the predicted and experimental output averaged over eight input/output datasets at different RPP operating conditions. A closed-loop servo control system based on a pole-placement PD controller plus dead-zone compensation is proposed. The feedback structure was first validated in simulation, considering parameter uncertainty and both constant and time-varying references. Several experimental tests were then conducted to validate the closed-loop performance in real time for stepwise and fast-switching references, and the results show the effectiveness of the proposed automatic system in regulating the RPP in the rat precisely, accurately (mean error less than 2 mmHg), and relatively quickly (10-15 s response time).
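The control structure described (first-order plant, PD controller, dead-zone compensation) can be sketched in discrete time. All gains and plant constants below are illustrative, not the values identified from the rat experiments.

```python
# Hedged sketch: a first-order plant tau*dy/dt = -y + gain*u driven by a PD
# controller with dead-zone compensation, integrated with forward Euler.
# Gains and plant constants are illustrative only.
dt, tau, gain = 0.001, 1.0, 2.0        # plant constants
kp, kd, dead_zone = 20.0, 0.05, 0.1    # controller parameters

def simulate(setpoint, steps=10000):
    y = 0.0
    prev_err = setpoint                # so the first derivative term is zero
    for _ in range(steps):
        err = setpoint - y
        u = kp * err + kd * (err - prev_err) / dt
        # dead-zone compensation: bias the command so small corrections
        # still overcome the actuator's dead band
        if u > 0:
            u += dead_zone
        elif u < 0:
            u -= dead_zone
        y += dt * (-y + gain * u) / tau    # forward-Euler plant update
        prev_err = err
    return y

final_rpp = simulate(100.0)   # e.g. a 100 mmHg reference
```

Without an integral term a small steady-state offset remains, which is why the plant gain and controller gains have to be sized together; the paper's pole-placement design addresses exactly this trade-off.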
Spectral reconstruction analysis for enhancing signal-to-noise in time-resolved spectroscopies
NASA Astrophysics Data System (ADS)
Wilhelm, Michael J.; Smith, Jonathan M.; Dai, Hai-Lung
2015-09-01
We demonstrate a new spectral analysis for the enhancement of the signal-to-noise ratio (SNR) in time-resolved spectroscopies. Unlike a simple linear average, which produces a single representative spectrum with enhanced SNR, this Spectral Reconstruction analysis (SRa) improves the SNR (by a factor of ca. 0.6√n) for all n experimentally recorded time-resolved spectra. SRa operates by eliminating noise in the temporal domain, thereby attenuating noise in the spectral domain, as follows: temporal profiles at each measured frequency are fit to a generic mathematical function that best represents the temporal evolution; spectra at each time are then reconstructed from data points on the fitted profiles. The SRa method is validated with simulated control spectral data sets. Finally, we apply SRa to two distinct experimentally measured sets of time-resolved IR emission spectra: (1) UV photolysis of carbonyl cyanide and (2) UV photolysis of vinyl cyanide.
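The SRa procedure (fit each frequency channel's temporal profile, then rebuild the spectra from the fits) can be sketched on synthetic data. A low-order polynomial stands in here for the paper's generic fitting function, and the decay constants and noise level are made up.

```python
import numpy as np

# Sketch of Spectral Reconstruction analysis: fit the temporal profile at each
# frequency to a smooth model, then rebuild each time-resolved spectrum from
# the fitted profiles, suppressing temporal-domain noise.
rng = np.random.default_rng(1)
n_times, n_freqs = 200, 64
t = np.linspace(0.0, 1.0, n_times)

# Synthetic data: exponentially decaying bands plus white noise.
clean = np.exp(-3.0 * t)[:, None] * rng.uniform(0.5, 1.0, n_freqs)[None, :]
noisy = clean + rng.normal(0.0, 0.05, (n_times, n_freqs))

# Fit each frequency channel's temporal profile and evaluate the fit.
reconstructed = np.empty_like(noisy)
for j in range(n_freqs):
    coeffs = np.polyfit(t, noisy[:, j], deg=4)
    reconstructed[:, j] = np.polyval(coeffs, t)

rms_before = np.sqrt(np.mean((noisy - clean) ** 2))
rms_after = np.sqrt(np.mean((reconstructed - clean) ** 2))
```

Because each fitted profile condenses ~200 noisy samples into a handful of coefficients, the per-spectrum noise drops sharply while every time slice is retained, which is the advantage SRa claims over a single linear average.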
The Validation and Qualification Sciences Experimental Complex (VQSEC) at Sandia…
Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2
NASA Technical Reports Server (NTRS)
Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)
1998-01-01
The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.
Experimental Design and Some Threats to Experimental Validity: A Primer
ERIC Educational Resources Information Center
Skidmore, Susan
2008-01-01
Experimental designs are regarded as the best method for responding to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…
False Dichotomies and Health Policy Research Designs: Randomized Trials Are Not Always the Answer.
Soumerai, Stephen B; Ceccarelli, Rachel; Koppel, Ross
2017-02-01
Some medical scientists argue that only data from randomized controlled trials (RCTs) are trustworthy. They claim data from natural experiments and administrative data sets are always spurious and cannot be used to evaluate health policies and other population-wide phenomena in the real world. While many acknowledge biases caused by poor study designs, in this article we argue that several valid designs using administrative data can produce strong findings, particularly the interrupted time series (ITS) design. Many policy studies neither permit nor require an RCT for cause-and-effect inference. Framing our arguments using Campbell and Stanley's classic research design monograph, we show that several "quasi-experimental" designs, especially ITS, can estimate valid effects (or non-effects) of health interventions and policies as diverse as public insurance coverage, speed limits, hospital safety programs, drug abuse regulation, and withdrawal of drugs from the market. We further note the recent rapid uptake of ITS and argue for expanded training in quasi-experimental designs in medical and graduate schools and in post-doctoral curricula.
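The ITS design the authors advocate is typically estimated by segmented regression: y_t = b0 + b1·t + b2·post_t + b3·(t - t0)·post_t, where post_t flags observations after the intervention at t0. A minimal sketch on synthetic data (not any of the cited policy examples):

```python
import numpy as np

# Segmented regression for an interrupted time series: estimate the immediate
# level change and the change in trend at a known interruption time t0.
# All data below are synthetic.
rng = np.random.default_rng(2)
n, t0 = 120, 60
t = np.arange(n, dtype=float)
post = (t >= t0).astype(float)

# Generating model: baseline 10, pre-trend +0.20/step, level drop -4.0 at t0,
# and a trend change of -0.05/step afterwards, plus noise.
y = (10.0 + 0.20 * t - 4.0 * post - 0.05 * (t - t0) * post
     + rng.normal(0.0, 0.5, n))

X = np.column_stack([np.ones(n), t, post, (t - t0) * post])
b0, b1, level_change, trend_change = np.linalg.lstsq(X, y, rcond=None)[0]
# level_change estimates the jump at the interruption,
# trend_change the post-interruption change in slope.
```

The key identifying assumption, as the article stresses, is that the pre-interruption trend would have continued unchanged absent the intervention.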
Uncertainty Analysis in 3D Equilibrium Reconstruction
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
2018-02-21
Reconstruction is an inverse process in which a parameter space is searched to locate the set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will carry some associated uncertainty, which in turn leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole-shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole-shot reconstruction results over a time interval are used to validate the propagated uncertainty from a single time slice.
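Generic first-order uncertainty propagation, the mechanism behind carrying signal uncertainties into reconstructed parameters, can be sketched as C_p = J C_s J^T with J the sensitivity (Jacobian) of parameters with respect to signals. V3FIT's actual machinery is more involved; the matrices below are illustrative only.

```python
import numpy as np

# Linearised (first-order) covariance propagation: parameter covariance
# C_p = J @ C_s @ J.T, where J maps signal perturbations to parameter
# perturbations.  Numbers are illustrative, not V3FIT sensitivities.
def propagate_covariance(jacobian, signal_cov):
    return jacobian @ signal_cov @ jacobian.T

J = np.array([[1.0, 0.5],
              [0.0, 2.0]])           # illustrative sensitivities
C_s = np.diag([0.01, 0.04])         # independent signal variances
C_p = propagate_covariance(J, C_s)
sigmas = np.sqrt(np.diag(C_p))      # 1-sigma parameter uncertainties
```

Comparing these propagated sigmas to the scatter of reconstructed parameters over many time slices is exactly the kind of consistency check the abstract describes.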
Lebon, G S Bruno; Tzanakis, I; Djambazov, G; Pericleous, K; Eskin, D G
2017-07-01
To address difficulties in treating large volumes of liquid metal with ultrasound, a fundamental study of acoustic cavitation in liquid aluminium, expressed in an experimentally validated numerical model, is presented in this paper. To improve the understanding of the cavitation process, a non-linear acoustic model is validated against reference water pressure measurements from acoustic waves produced by an immersed horn. A high-order method is used to discretize the wave equation in both space and time. These discretized equations are coupled to the Rayleigh-Plesset equation using two different time scales to couple the bubble and flow scales, resulting in a stable, fast, and reasonably accurate method for the prediction of acoustic pressures in cavitating liquids. This method is then applied to the context of treatment of liquid aluminium, where it predicts that the most intense cavitation activity is localised below the vibrating horn and estimates the acoustic decay below the sonotrode with reasonable qualitative agreement with experimental data. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
The Effects of Magnetic Nozzle Configurations on Plasma Thrusters
NASA Technical Reports Server (NTRS)
Turchi, P. J.
1997-01-01
Over the course of eight years, the Ohio State University performed research in support of electric propulsion development efforts at the NASA Lewis Research Center, Cleveland, OH. This research was largely devoted to plasma propulsion systems, including magnetoplasmadynamic (MPD) thrusters with externally applied solenoidal magnetic fields, hollow cathodes, and pulsed plasma microthrusters (PPTs). Both experimental and theoretical work was performed, as documented in four master's theses, two doctoral dissertations, and numerous technical papers. The present document is the final report for the grant period 5 December 1987 to 31 December 1995 and summarizes all activities. Detailed discussions of each area of activity are provided in appendices: Appendix 1, experimental studies of magnetic nozzle effects on plasma thrusters; Appendix 2, numerical modeling of applied-field MPD thrusters; Appendix 3, theoretical and experimental studies of hollow cathodes; and Appendix 4, theoretical, numerical, and experimental studies of pulsed plasma thrusters. Especially notable results include the efficacy of using a solenoidal magnetic field downstream of a plasma thruster to collimate the exhaust flow; a new understanding of applied-field MPD thrusters, based on experimentally validated results from state-of-the-art numerical simulation, leading to predictions of improved performance; an experimentally validated, first-principles model for orificed hollow-cathode behavior; and the first time-dependent, two-dimensional calculations of ablation-fed pulsed plasma thrusters.
Experimental validation of pulsed column inventory estimators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyerlein, A.L.; Geldard, J.F.; Weh, R.
Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. Because the inventory of the solvent contactors is difficult to measure, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and applied computer codes developed at Clemson University to analyze these data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs.
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
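Comparing fine-grained statistics instead of a single total runtime can be sketched with a two-sample Kolmogorov-Smirnov distance between per-event duration distributions. The traces below are synthetic placeholders, not output from the paper's toolset, and the KS statistic is computed directly to stay dependency-free.

```python
import numpy as np

# Two-sample Kolmogorov-Smirnov statistic: the maximum vertical distance
# between the empirical CDFs of two samples.  A smaller value means the
# simulated trace's event-duration distribution matches the real one better.
def ks_statistic(a, b):
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(3)
real_trace = rng.gamma(2.0, 1.0, 2000)    # e.g. measured message latencies
good_model = rng.gamma(2.0, 1.0, 2000)    # simulator with the right behavior
bad_model = rng.gamma(2.0, 1.5, 2000)     # simulator with a miscalibrated scale

d_good = ks_statistic(real_trace, good_model)
d_bad = ks_statistic(real_trace, bad_model)
```

Note that both models could easily produce the same *total* runtime while only the distribution-level comparison exposes the miscalibration, which is the paper's central point.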
NASA Astrophysics Data System (ADS)
Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo
2018-05-01
Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of that principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data into molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
Note: Tesla based pulse generator for electrical breakdown study of liquid dielectrics
NASA Astrophysics Data System (ADS)
Veda Prakash, G.; Kumar, R.; Patel, J.; Saurabh, K.; Shyam, A.
2013-12-01
In the process of studying charge-holding capability and breakdown delay time in liquids on nanosecond (ns) time scales, a Tesla-based pulse generator has been developed. The pulse generator is a combination of a Tesla transformer, a pulse forming line, a fast closing switch, and a test chamber. Use of a Tesla transformer rather than a conventional Marx generator makes the pulse generator very compact, cost effective, and low-maintenance. The system has been designed and developed to deliver a maximum output voltage of 300 kV with a rise time of the order of tens of nanoseconds. The paper presents the system design parameters, the breakdown test procedure, and various experimental results. To validate the pulse generator's performance, experimental results have been compared with PSPICE simulations and are in good agreement.
Experimental validation of solid rocket motor damping models
NASA Astrophysics Data System (ADS)
Riso, Cristina; Fransen, Sebastiaan; Mastroddi, Franco; Coppotelli, Giuliano; Trequattrini, Francesco; De Vivo, Alessio
2017-12-01
In design and certification of spacecraft, payload/launcher coupled load analyses are performed to simulate the satellite dynamic environment. To obtain accurate predictions, the system damping properties must be properly taken into account in the finite element model used for coupled load analysis. This is typically done using a structural damping characterization in the frequency domain, which is not applicable in the time domain. Therefore, the structural damping matrix of the system must be converted into an equivalent viscous damping matrix when a transient coupled load analysis is performed. This paper focuses on the validation of equivalent viscous damping methods for dynamically condensed finite element models via correlation with experimental data for a realistic structure representative of a slender launch vehicle with solid rocket motors. A second aim of the paper is to investigate how to conveniently choose a single combination of Young's modulus and structural damping coefficient—complex Young's modulus—to approximate the viscoelastic behavior of a solid propellant material in the frequency band of interest for coupled load analysis. A scaled-down test article inspired to the Z9-ignition Vega launcher configuration is designed, manufactured, and experimentally tested to obtain data for validation of the equivalent viscous damping methods. The Z9-like component of the test article is filled with a viscoelastic material representative of the Z9 solid propellant that is also preliminarily tested to investigate the dependency of the complex Young's modulus on the excitation frequency and provide data for the test article finite element model.
Experimental results from seismic and shock tests performed on the test configuration are correlated with numerical results from frequency and time domain analyses carried out on its dynamically condensed finite element model to assess the applicability of different equivalent viscous damping methods to describe damping properties of slender launch vehicles in payload/launcher coupled load analysis.
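The structural-to-viscous damping conversion at the heart of such analyses equates energy dissipated per cycle: c_eq = η·k/ω, which yields a modal damping ratio ζ = η/2 when evaluated at resonance. A minimal sketch with illustrative modal properties (not the test-article values):

```python
import math

# Equivalent viscous damping: a hysteretic (structural) damping coefficient
# eta dissipates the same energy per cycle as a viscous dashpot
# c_eq = eta * k / omega.  Evaluated at the natural frequency, this gives
# a viscous damping ratio zeta = eta / 2.  All values are illustrative.
def equivalent_viscous(eta, k, omega):
    """Viscous coefficient dissipating the same energy per cycle as eta."""
    return eta * k / omega

k = 4.0e6                      # modal stiffness [N/m]
m = 100.0                      # modal mass [kg]
omega_n = math.sqrt(k / m)     # natural frequency [rad/s]
eta = 0.04                     # structural damping coefficient

c_eq = equivalent_viscous(eta, k, omega_n)
zeta = c_eq / (2.0 * math.sqrt(k * m))   # resulting viscous damping ratio
```

Because the conversion is exact only at one frequency, the choice of the matching frequency per mode is precisely what distinguishes the equivalent viscous damping methods the paper compares.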
Dynamic measurement of speed of sound in n-Heptane by ultrasonics during fuel injections.
Minnetti, Elisa; Pandarese, Giuseppe; Evangelisti, Piersavio; Verdugo, Francisco Rodriguez; Ungaro, Carmine; Bastari, Alessandro; Paone, Nicola
2017-11-01
The paper presents a technique to measure the speed of sound in fuels based on pulse-echo ultrasound. The method is applied inside the test chamber of a Zeuch-type instrument used for indirect measurement of the injection rate (Mexus). The paper outlines the pulse-echo method, considering probe installation, ultrasound beam propagation inside the test chamber, typical signals obtained, and different processing algorithms. The method is validated in static conditions by comparing the experimental results to the NIST database for both water and n-Heptane. The ultrasonic system is synchronized with the injector so that time-resolved samples of the speed of sound can be acquired during a series of injections. Results at different operating conditions in n-Heptane are shown. An uncertainty analysis supports the interpretation of the results and the validation of the method. Experimental results show that the speed-of-sound variation during an injection event is less than 1%, so the Mexus model's assumption that it remains constant during the injection is valid. Copyright © 2017 Elsevier B.V. All rights reserved.
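The pulse-echo principle reduces to c = 2d/Δt for a known probe-to-reflector distance d and a measured round-trip time Δt. A minimal sketch with an assumed geometry (the Mexus chamber dimensions are not given here, and the n-Heptane value is only an order-of-magnitude reference):

```python
# Pulse-echo speed of sound: the pulse travels to the reflector and back,
# so c = 2 * d / t_round_trip.  Geometry and fluid value are illustrative;
# n-Heptane near room temperature has c on the order of 1130 m/s.
def speed_of_sound(path_length_m, round_trip_s):
    return 2.0 * path_length_m / round_trip_s

d = 0.025                        # assumed probe-to-reflector distance [m]
t_echo = 2.0 * d / 1130.0        # synthetic round-trip time for c = 1130 m/s
c = speed_of_sound(d, t_echo)
```

In practice the accuracy hinges on the echo arrival-time estimate, which is why the paper compares several processing algorithms and carries out a dedicated uncertainty analysis.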
Code of Federal Regulations, 2011 CFR
2011-07-01
... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Comparison Study for... experimental conditions for the validation study and subsequent use during decontamination. The following...
Code of Federal Regulations, 2010 CFR
2010-07-01
... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Comparison Study for... experimental conditions for the validation study and subsequent use during decontamination. The following...
Methodological convergence of program evaluation designs.
Chacón-Moscoso, Salvador; Anguera, M Teresa; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa
2014-01-01
The dichotomous opposition between experimental/quasi-experimental and non-experimental/ethnographic studies persists; yet despite the extensive use of non-experimental/ethnographic studies, the most systematic work on methodological quality has been developed for experimental and quasi-experimental studies. This hinders the practice of empirical program evaluation by evaluators and planners, a sphere in which the distinction between types of study is continually changing and increasingly unclear. Based on the classical validity framework of experimental/quasi-experimental studies, we review the literature to analyze the convergence of design elements bearing on methodological quality in primary studies in systematic reviews and in ethnographic research. We specify the relevant design elements that should be taken into account to improve validity and generalization in program evaluation practice across different methodologies, from a practical and complementary methodological view, and we recommend ways to improve these design elements so as to enhance validity and generalization in program evaluation practice.
Validation of the mean radiant temperature simulated by the RayMan software in urban environments.
Lee, Hyunjung; Mayer, Helmut
2016-11-01
The RayMan software is applied worldwide in investigations of different issues in human-biometeorology. However, only the simulated mean radiant temperature (Tmrt) has been validated so far, in a few case studies. These are based on Tmrt values determined experimentally in urban environments using a globe thermometer or the six-directional method. This study analyses previous Tmrt validations in a comparative manner. Their results are extended by a recent validation of Tmrt in an urban micro-environment in Freiburg (southwest Germany), which can be regarded as relatively heterogeneous due to different shading intensities by tree crowns. In addition, a validation of the physiologically equivalent temperature (PET) simulated by RayMan is conducted for the first time. The validations are based on experimentally determined Tmrt and PET values calculated from meteorological variables measured in the daytime of a clear-sky summer day. In total, the validation results show that RayMan is capable of simulating Tmrt satisfactorily under relatively homogeneous site conditions. However, the inaccuracy of simulated Tmrt increases with lower sun elevation and growing heterogeneity of the simulation site. As Tmrt is the meteorological variable that most strongly governs PET in the daytime of clear-sky summer days, the accuracy of simulated Tmrt is mainly responsible for the accuracy of simulated PET. The Tmrt validations result in some recommendations concerning an update of the physical principles applied in the RayMan software to simulate the short- and long-wave radiant flux densities, especially from vertical building walls and tree crowns.
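The six-directional method mentioned above can be sketched in a few lines. This is a minimal illustration of the commonly cited formulation (after Höppe and VDI 3787 Part 2), with typical literature values for the absorption coefficients and direction weights; none of the numbers are taken from this paper.

```python
# Hedged sketch of the six-directional method for Tmrt. The coefficients are
# typical literature defaults (VDI 3787-2 style), not the paper's values.
SIGMA = 5.67e-8       # Stefan-Boltzmann constant [W m-2 K-4]
A_K, A_L = 0.7, 0.97  # short-/long-wave absorption coefficients of the body
# direction weights: four lateral directions (E, W, N, S), then up and down
WEIGHTS = [0.22, 0.22, 0.22, 0.22, 0.06, 0.06]

def mean_radiant_temperature(K, L):
    """K, L: six-directional short- and long-wave flux densities [W m-2]."""
    s_str = sum(w * (A_K * k + A_L * l) for w, k, l in zip(WEIGHTS, K, L))
    return (s_str / (A_L * SIGMA)) ** 0.25 - 273.15  # degrees Celsius
```

With no short-wave radiation and an isotropic long-wave field, Tmrt reduces to the blackbody temperature of that field, which makes a convenient sanity check.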
Experimental Validation Techniques for the Heleeos Off-Axis Laser Propagation Model
2010-03-01
Master's thesis by John Haiducek, 1st Lt, USAF (AFIT/GAP/ENP/10-M07), Department of Engineering Physics, Air Force Institute of Technology, March 2010; approved for public release, distribution unlimited. The thesis presents experimental validation techniques for the High Energy Laser End-to-End Operational Simulation (HELEEOS) off-axis laser propagation model.
Tritium environmental transport studies at TFTR
NASA Astrophysics Data System (ADS)
Ritter, P. D.; Dolan, T. J.; Longhurst, G. R.
1993-06-01
Environmental tritium concentrations will be measured near the Tokamak Fusion Test Reactor (TFTR) to help validate dynamic models of tritium transport in the environment. For model validation the database must contain sequential measurements of tritium concentrations in key environmental compartments. Since complete containment of tritium is an operational goal, the supplementary monitoring program should be able to glean useful data from an unscheduled acute release. Portable air samplers will be used to take samples automatically every 4 hours for a week after an acute release, thus obtaining the time resolution needed for code validation. Samples of soil, vegetation, and foodstuffs will be gathered daily at the same locations as the active air monitors. The database may help validate the plant/soil/air part of tritium transport models and enhance environmental tritium transport understanding for the International Thermonuclear Experimental Reactor (ITER).
Wimberley, Catriona J; Fischer, Kristina; Reilhac, Anthonin; Pichler, Bernd J; Gregoire, Marie Claude
2014-10-01
The partial saturation approach (PSA) is a simple, single-injection experimental protocol that estimates both B(avail) and appK(D) without blood sampling, making it ideal for longitudinal studies of neurodegenerative diseases in the rodent. The aim of this study was to increase the range and applicability of the PSA by developing a data-driven strategy for determining reliable regional estimates of receptor density (B(avail)) and in vivo affinity (1/appK(D)), and to validate the strategy using a simulation model. The data-driven method uses a time window guided by the dynamic equilibrium state of the system, as opposed to a static time window. To test the method, simulations of partial saturation experiments were generated and validated against experimental data. The simulated experimental conditions included a range of receptor occupancy levels and three different B(avail) and appK(D) values to mimic disease states. The effect of using a reference region, and of typical PET noise, on the stability and accuracy of the estimates was also investigated. The investigations showed that the parameter estimates in a simulated healthy mouse using the data-driven method were within 10-30% of the simulated input over the range of occupancy levels simulated. Across all simulated experimental conditions, the accuracy and robustness of the estimates obtained with the data-driven method were much better than with the typical static-time-window method, especially at low receptor occupancy levels. Introducing a reference region caused a bias of approximately 10% over the range of occupancy levels. Based on extensive simulated experimental conditions, the data-driven method was shown to provide accurate and precise estimates of B(avail) and appK(D) for a broader range of conditions than the original method. Copyright © 2014 Elsevier Inc. All rights reserved.
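For readers unfamiliar with the quantities involved, the classical way to recover B(avail) and appK(D) from bound/free ligand pairs is a Scatchard linearisation. The sketch below illustrates only that textbook estimator, not the paper's data-driven equilibrium-window method; the synthetic numbers are invented for the example.

```python
# Hedged illustration: recovering receptor density (B_avail) and apparent
# dissociation constant (appK_D) from bound/free pairs via the Scatchard
# line B/F = B_avail/K_D - B/K_D (slope = -1/K_D, intercept = B_avail/K_D).

def scatchard_fit(bound, free):
    x = bound
    y = [b / f for b, f in zip(bound, free)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    k_d = -1.0 / slope           # apparent dissociation constant
    b_avail = intercept * k_d    # available receptor density
    return b_avail, k_d

# synthetic, noise-free check with B_avail = 5.0 and K_D = 2.0
free = [0.5, 1.0, 2.0, 4.0, 8.0]
bound = [5.0 * f / (2.0 + f) for f in free]
```

With noise-free data the fit recovers the generating parameters exactly; the paper's contribution is precisely about keeping such estimates stable under realistic PET noise and non-equilibrium conditions.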
A reconfigurable visual-programming library for real-time closed-loop cellular electrophysiology
Biró, István; Giugliano, Michele
2015-01-01
Most software platforms for cellular electrophysiology are limited in terms of flexibility, hardware support, ease of use, or re-configuration and adaptation for non-expert users. Moreover, advanced experimental protocols requiring real-time closed-loop operation to investigate excitability, plasticity, or dynamics are largely inaccessible to users without moderate to substantial computer proficiency. Here we present an approach based on MATLAB/Simulink that exploits the benefits of LEGO-like visual programming and configuration, combined with a small but easily extendable library of functional software components. We provide and validate several examples implementing conventional and more sophisticated experimental protocols, such as dynamic clamp or the combined use of intracellular and extracellular methods involving closed-loop real-time control. The functionality of each of these examples is demonstrated with relevant experiments. These can be used as a starting point to create and support a larger variety of electrophysiological tools and methods, hopefully extending the range of default techniques and protocols currently employed in experimental labs across the world. PMID:26157385
Experimental identification of closely spaced modes using NExT-ERA
NASA Astrophysics Data System (ADS)
Hosseini Kordkheili, S. A.; Momeni Massouleh, S. H.; Hajirezayi, S.; Bahai, H.
2018-01-01
This article presents a study of the capability of the time-domain OMA method NExT-ERA to identify closely spaced structural dynamic modes. A survey of the literature reveals that few experimental studies have been conducted on the effectiveness of the NExT-ERA methodology specifically for closely spaced modes. In this paper we present the formulation of NExT-ERA. This formulation is then implemented in an algorithm and an in-house code to identify the modal parameters of different systems from their generated time-history data. Numerical models are first investigated to validate the code. Two case studies are presented: a plate with closely spaced modes and a pulley ring with a greater degree of closeness in repeated modes. Both structures are excited by random impulses under laboratory conditions. The resulting time-response acceleration data are then used as input to the developed code to extract the modal parameters of the structures. The accuracy of the results is checked against those obtained from experimental tests.
NASA Astrophysics Data System (ADS)
Märk, Julia; Theiss, Christoph; Schmitt, Franz-Josef; Laufer, Jan
2015-03-01
Fluorophores, such as exogenous dyes and genetically expressed proteins, exhibit radiative relaxation with long excited state lifetimes. This can be exploited for PA detection based on dual wavelength excitation using pump and probe wavelengths that coincide with the absorption and emission spectra, respectively. While the pump pulse raises the fluorophore to a long-lived excited state, simultaneous illumination with the probe pulse reduces the excited state lifetime due to stimulated emission (SE). This leads to a change in thermalized energy, and hence PA signal amplitude, compared to single wavelength illumination. By introducing a time delay between pump and probe pulses, the change in PA amplitude can be modulated. Since the effect is not observed in endogenous chromophores, it provides a contrast mechanism for the detection of fluorophores via PA difference imaging. In this study, a theoretical model of the PA signal generation in fluorophores was developed and experimentally validated. The model is based on a system of coupled rate equations, which describe the spatial and temporal changes in the population of the molecular energy levels of a fluorophore as a function of pump-probe energy and concentration. This allows the prediction of the thermalized energy distribution, and hence the time-resolved PA signal amplitude. The model was validated by comparing its predictions to PA signals measured in solutions of rhodamine 6G, a well-known laser dye, and Atto680, a NIR fluorophore.
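The contrast mechanism described above can be illustrated with a drastically simplified two-level rate equation. The sketch below uses hypothetical rate constants (not the paper's coupled multi-level model): the excited-state population decays radiatively (k_r), non-radiatively (k_nr, the thermalised channel that generates the PA signal) and, when the probe is on, by stimulated emission (k_se).

```python
# Minimal rate-equation sketch with hypothetical rate constants. Stimulated
# emission diverts energy into photons, lowering the thermalised fraction and
# hence the PA amplitude -- the pump-probe contrast mechanism.

def thermalised_fraction(k_r, k_nr, k_se, dt=1e-12, t_end=50e-9):
    n1, heat, t = 1.0, 0.0, 0.0
    while t < t_end and n1 > 1e-9:
        heat += k_nr * n1 * dt                 # non-radiative share -> heat
        n1 -= (k_r + k_nr + k_se) * n1 * dt    # total excited-state decay
        t += dt
    return heat                                # = thermalised fraction (N0=1)

f_pump_only = thermalised_fraction(k_r=2.5e8, k_nr=2.5e8, k_se=0.0)
f_with_probe = thermalised_fraction(k_r=2.5e8, k_nr=2.5e8, k_se=1.0e9)
# f_with_probe < f_pump_only: the probe pulse suppresses the PA amplitude
```

Analytically the thermalised fraction is k_nr/(k_r + k_nr + k_se), so switching the probe on drops it from 1/2 to 1/6 with these illustrative rates.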
Numerical and experimental study of Lamb wave propagation in a two-dimensional acoustic black hole
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Shiling; Shen, Zhonghua, E-mail: shenzh@njust.edu.cn; Lomonosov, Alexey M.
2016-06-07
The propagation of laser-generated Lamb waves in a two-dimensional acoustic black-hole structure was studied numerically and experimentally. The geometrical acoustic theory was applied to calculate the beam trajectories in the region of the acoustic black hole, and the finite element method was used to study the time evolution of the propagating waves. An optical system based on the laser-Doppler vibration method was assembled. The wave-focusing effect and the wave-speed reduction of the acoustic black hole were validated.
Elementary Particles and Weak Interactions
DOE R&D Accomplishments Database
Lee, T. D.; Yang, C. N.
1957-01-01
Some general patterns of interactions between various elementary particles are reviewed and some general questions concerning the symmetry properties of these particles are studied. Topics are included on the theta-tau puzzle, experimental limits on the validity of parity conservation, some general discussions on the consequences due to possible non-invariance under P, C, and T, various possible experimental tests on invariance under P, C, and T, a two-component theory of the neutrino, a possible law of conservation of leptons and the universal Fermi interactions, and time reversal invariance and Mach's principle. (M.H.R.)
Schütte, Judith; Wang, Huange; Antoniou, Stella; Jarratt, Andrew; Wilson, Nicola K; Riepsaame, Joey; Calero-Nieto, Fernando J; Moignard, Victoria; Basilico, Silvia; Kinston, Sarah J; Hannah, Rebecca L; Chan, Mun Chiang; Nürnberg, Sylvia T; Ouwehand, Willem H; Bonzanni, Nicola; de Bruijn, Marella FTR; Göttgens, Berthold
2016-01-01
Transcription factor (TF) networks determine cell-type identity by establishing and maintaining lineage-specific expression profiles, yet reconstruction of mammalian regulatory network models has been hampered by a lack of comprehensive functional validation of regulatory interactions. Here, we report comprehensive ChIP-Seq, transgenic and reporter gene experimental data that have allowed us to construct an experimentally validated regulatory network model for haematopoietic stem/progenitor cells (HSPCs). Model simulation coupled with subsequent experimental validation using single cell expression profiling revealed potential mechanisms for cell state stabilisation, and also how a leukaemogenic TF fusion protein perturbs key HSPC regulators. The approach presented here should help to improve our understanding of both normal physiological and disease processes. DOI: http://dx.doi.org/10.7554/eLife.11469.001 PMID:26901438
IFMIF: overview of the validation activities
NASA Astrophysics Data System (ADS)
Knaster, J.; Arbeiter, F.; Cara, P.; Favuzza, P.; Furukawa, T.; Groeschel, F.; Heidinger, R.; Ibarra, A.; Matsumoto, H.; Mosnier, A.; Serizawa, H.; Sugimoto, M.; Suzuki, H.; Wakai, E.
2013-11-01
The Engineering Validation and Engineering Design Activities (EVEDA) for the International Fusion Materials Irradiation Facility (IFMIF), an international collaboration under the Broader Approach Agreement between Japan Government and EURATOM, aims at allowing a rapid construction phase of IFMIF in due time with an understanding of the cost involved. The three main facilities of IFMIF (1) the Accelerator Facility, (2) the Target Facility and (3) the Test Facility are the subject of validation activities that include the construction of either full scale prototypes or smartly devised scaled down facilities that will allow a straightforward extrapolation to IFMIF needs. By July 2013, the engineering design activities of IFMIF matured with the delivery of an Intermediate IFMIF Engineering Design Report (IIEDR) supported by experimental results. The installation of a Linac of 1.125 MW (125 mA and 9 MeV) of deuterons started in March 2013 in Rokkasho (Japan). The world's largest liquid Li test loop is running in Oarai (Japan) with an ambitious experimental programme for the years ahead. A full scale high flux test module that will house ∼1000 small specimens developed jointly in Europe and Japan for the Fusion programme has been constructed by KIT (Karlsruhe) together with its He gas cooling loop. A full scale medium flux test module to carry out on-line creep measurement has been validated by CRPP (Villigen).
Three-dimensional numerical and experimental studies on transient ignition of hybrid rocket motor
NASA Astrophysics Data System (ADS)
Tian, Hui; Yu, Ruipeng; Zhu, Hao; Wu, Junfeng; Cai, Guobiao
2017-11-01
This paper presents transient simulations and experimental studies of the ignition process of hybrid rocket motors (HRMs) using 90% hydrogen peroxide (HP) as the oxidizer and polymethyl methacrylate (PMMA) and polyethylene (PE) as fuels. A fluid-solid coupled numerical method is established based on the conserved form of the three-dimensional unsteady Navier-Stokes (N-S) equations, accounting for gas-phase chemical reactions and heat transfer between the fluid and solid regions. Experiments are subsequently conducted using a high-speed camera to record the ignition process. The flame propagation, chamber pressurizing process, and average fuel regression rate in the numerical results show good agreement with the experimental ones, which demonstrates the validity of the simulations in this study. The results also indicate that the flame propagation time is mainly governed by the fluid dynamics and increases with increasing grain port area. The chamber pressurizing process begins once flame propagation in the grain port is complete. Furthermore, the chamber pressurizing time is about four times longer than the flame propagation time.
Howerton, Christopher L; Garner, Joseph P; Mench, Joy A
2012-07-30
Pre-clinical investigation of human CNS disorders relies heavily on mouse models. However, these show low predictive validity for translational success to humans, partly due to the extensive use of rapid, high-throughput behavioral assays. Improved assays are needed to monitor rodent behavior over longer time scales in a variety of contexts while still maintaining the efficiency of data collection associated with high-throughput assays. We developed an apparatus that uses radio frequency identification device (RFID) technology to facilitate long-term automated monitoring of the behavior of mice in socially or structurally complex cage environments. Mice that were individually marked and implanted with transponders were placed in pairs in the apparatus, and their locations continuously tracked for 24 h. Video observation was used to validate the RFID readings. The apparatus and its associated software accurately tracked the locations of all mice, yielding information about each mouse's location over time, its diel activity patterns, and the amount of time it was in the same location as the other mouse in the pair. The information that can be efficiently collected in this apparatus has a variety of applications for pre-clinical research on human CNS disorders, for example major depressive disorder and autism spectrum disorder, in that it can be used to quantify validated endophenotypes or biomarkers of these disorders using rodent models. While the specific configuration of the apparatus described here was designed to answer particular experimental questions, it can be modified in various ways to accommodate different experimental designs. Copyright © 2012 Elsevier B.V. All rights reserved.
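The co-location measure mentioned above amounts to straightforward post-processing of timestamped reads. The sketch below assumes a hypothetical data layout of (time, mouse, location) tuples (the paper's actual file format is not specified here): each mouse is held at its last-read location and the time both animals share a location is accumulated.

```python
# Hedged sketch: accumulate the time two mice of a pair spend at the same
# location, given RFID reads (t, mouse_id, location) sorted by time.

def colocation_time(reads, t_end):
    """reads: list of (t, mouse, loc) sorted by t; returns shared seconds."""
    loc = {}                    # current (last-read) location per mouse
    shared, t_prev = 0.0, None
    for t, mouse, where in reads + [(t_end, None, None)]:
        if t_prev is not None and len(loc) == 2:
            a, b = loc.values()
            if a == b:          # both mice at the same location
                shared += t - t_prev
        if mouse is not None:
            loc[mouse] = where
        t_prev = t
    return shared

reads = [(0, "m1", "nest"), (0, "m2", "feeder"),
         (10, "m2", "nest"), (30, "m1", "feeder")]
# m1 and m2 share "nest" from t=10 to t=30 -> 20 s of co-location
```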
NASA Astrophysics Data System (ADS)
Underwood, Thomas; Loebner, Keith; Cappelli, Mark
2015-11-01
Detailed measurements of the thermodynamic and electrodynamic plasma state variables within the plume of a pulsed plasma accelerator are presented. A quadruple Langmuir probe operating in current-saturation mode is used to obtain time resolved measurements of the plasma density, temperature, potential, and velocity along the central axis of the accelerator. This data is used in conjunction with a fast-framing, intensified CCD camera to develop and validate a model predicting the existence of two distinct types of ionization waves corresponding to the upper and lower solution branches of the Hugoniot curve. A deviation of less than 8% is observed between the quasi-steady, one-dimensional theoretical model and the experimentally measured plume velocity. This work is supported by the U.S. Department of Energy Stewardship Science Academic Program in addition to the National Defense Science Engineering Graduate Fellowship.
Optimal coordination and control of posture and movements.
Johansson, Rolf; Fransson, Per-Anders; Magnusson, Måns
2009-01-01
This paper presents a theoretical model of stability and coordination of posture and locomotion, together with algorithms for continuous-time quadratic optimization of motion control. Explicit solutions to the Hamilton-Jacobi equation for optimal control of rigid-body motion are obtained by solving an algebraic matrix equation. The stability is investigated with Lyapunov function theory and it is shown that global asymptotic stability holds. It is also shown how optimal control and adaptive control may act in concert in the case of unknown or uncertain system parameters. The solution describes motion strategies of minimum effort and variance. The proposed optimal control is formulated to be suitable as a posture and movement model for experimental validation and verification. The combination of adaptive and optimal control makes this algorithm a candidate for coordination and control of functional neuromuscular stimulation as well as of prostheses. Validation examples with experimental data are provided.
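The "algebraic matrix equation" route mentioned above has a familiar closed form in the linear-quadratic special case; the sketch below shows only that standard instance, not the paper's full rigid-body formulation.

```latex
% Standard LQR instance: dynamics \dot{x} = Ax + Bu, cost
% J = \int_0^\infty \bigl(x^\top Q x + u^\top R u\bigr)\,dt.
% The Hamilton--Jacobi(--Bellman) equation with the quadratic ansatz
% V(x) = x^\top P x reduces to the algebraic Riccati equation
A^\top P + P A - P B R^{-1} B^\top P + Q = 0,
% with the optimal feedback
u^*(x) = -R^{-1} B^\top P x .
```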
Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian
2014-01-01
A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines achieve high torque density by introducing flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristics of PMV machines and provides a design method that not only meets the fault-tolerant requirements but also retains the high torque density. The operating principle of the proposed machine is analyzed. The design process and optimization are presented in detail, including the combination of slots and poles, the winding distribution, and the dimensions of the PMs and teeth. The machine performance is evaluated using the time-stepping finite element method (TS-FEM). Finally, the FT-PMV machine is manufactured, and experimental results are presented to validate the theoretical analysis.
Experimental study of an adaptive elastic metamaterial controlled by electric circuits
NASA Astrophysics Data System (ADS)
Zhu, R.; Chen, Y. Y.; Barnhart, M. V.; Hu, G. K.; Sun, C. T.; Huang, G. L.
2016-01-01
The ability to control elastic wave propagation at a deep-subwavelength scale makes locally resonant elastic metamaterials highly attractive. A number of capabilities have been demonstrated, such as frequency filtering, wave guiding, and negative refraction. Unfortunately, few metamaterials develop into practical devices because they lack tunability at specific frequencies. With the help of multi-physics numerical modeling, experimental validation of an adaptive elastic metamaterial integrated with shunted piezoelectric patches has been performed at a deep-subwavelength scale. A tunable bandgap capacity as high as 45% is physically realized by using both hardening and softening shunt circuits. It is also demonstrated that the effective mass density of the metamaterial can be fully tailored by adjusting parameters of the shunted electric circuits. Finally, to illustrate a practical application, transient wave propagation tests of the adaptive metamaterial subjected to impact loads are conducted to validate its tunable wave-mitigation abilities in real time.
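The effective-mass tailoring mentioned above is usually explained with the textbook mass-in-mass model of a locally resonant unit cell; the sketch below shows only that generic model with invented numbers, not the paper's piezoelectric shunt circuit (which, in effect, retunes the local resonance frequency).

```python
# Hedged sketch of the standard mass-in-mass model behind locally resonant
# metamaterials: the inner resonator (m2, tuned to omega_0) makes the
# effective mass negative just above resonance, producing the bandgap.
# All numbers are hypothetical, not taken from the paper.
import math

def effective_mass(m1, m2, omega_0, omega):
    return m1 + m2 * omega_0**2 / (omega_0**2 - omega**2)

omega_0 = 2 * math.pi * 1000.0                             # 1 kHz resonance
below = effective_mass(1.0, 0.5, omega_0, 0.5 * omega_0)   # positive
above = effective_mass(1.0, 0.5, omega_0, 1.05 * omega_0)  # negative: bandgap
```

A hardening shunt raises omega_0 and a softening shunt lowers it, which is how the negative-mass (bandgap) band can be shifted electrically.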
Fiber Optic Thermo-Hygrometers for Soil Moisture Monitoring.
Leone, Marco; Principe, Sofia; Consales, Marco; Parente, Roberto; Laudati, Armando; Caliro, Stefano; Cutolo, Antonello; Cusano, Andrea
2017-06-20
This work deals with the fabrication, prototyping, and experimental validation of a fiber optic thermo-hygrometer-based soil moisture sensor, useful for rainfall-induced landslide prevention applications. In particular, we recently proposed a new generation of fiber Bragg grating (FBGs)-based soil moisture sensors for irrigation purposes. This device was realized by integrating, inside a customized aluminum protection package, a FBG thermo-hygrometer with a polymer micro-porous membrane. Here, we first verify the limitations, in terms of the volumetric water content (VWC) measuring range, of this first version of the soil moisture sensor for its exploitation in landslide prevention applications. Successively, we present the development, prototyping, and experimental validation of a novel, optimized version of a soil VWC sensor, still based on a FBG thermo-hygrometer, but able to reliably monitor, continuously and in real-time, VWC values up to 37% when buried in the soil.
Development and Validation of a Mathematical Model for Olive Oil Oxidation
NASA Astrophysics Data System (ADS)
Rahmouni, K.; Bouhafa, H.; Hamdi, S.
2009-03-01
A mathematical model describing the stability, or susceptibility to oxidation, of extra virgin olive oil (EVOO) has been developed. The model was solved iteratively using a finite difference method and validated against experimental data on EVOO oxidation. EVOO stability was tested using a Rancimat at four temperatures (60, 70, 80, and 90 °C) until peroxide accumulation reached 20 meq/kg. Peroxide formation is relatively slow and fits a zero-order reaction, with linear regression coefficients varying from 0.98 to 0.99. The mathematical model was used to predict the shelf life of bulk-conditioned olive oil: it describes peroxide accumulation inside a container in excess of oxygen as a function of time at various positions from the air/oil interface. Good correlations were obtained between theoretical and experimental values.
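The zero-order kinetics described above make shelf-life prediction a one-line calculation once the rate constant at storage temperature is known; extrapolating from accelerated Rancimat temperatures is commonly done with an Arrhenius law. The sketch below uses invented rate constants and activation energy, not the paper's fitted values.

```python
# Hedged numerical sketch of the zero-order kinetics described above:
# PV(t) = PV0 + k*t, with k extrapolated from Rancimat temperatures (60-90 C)
# to storage temperature via an Arrhenius law. All numbers are illustrative.
import math

def arrhenius_k(k_ref, T_ref_C, T_C, Ea=80e3, R=8.314):
    T_ref, T = T_ref_C + 273.15, T_C + 273.15
    return k_ref * math.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

def shelf_life_days(pv0, k_per_day, pv_limit=20.0):
    return (pv_limit - pv0) / k_per_day   # zero-order: linear accumulation

k60 = 0.8                                 # meq kg-1 day-1 at 60 C (assumed)
k25 = arrhenius_k(k60, 60.0, 25.0)        # extrapolated to 25 C storage
life = shelf_life_days(pv0=8.0, k_per_day=k25)
```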
Investigating Mechanisms of Chronic Kidney Disease in Mouse Models
Eddy, Allison A.; Okamura, Daryl M.; Yamaguchi, Ikuyo; López-Guisa, Jesús M.
2011-01-01
Animal models of chronic kidney disease (CKD) are important experimental tools that are used to investigate novel mechanistic pathways and to validate potential new therapeutic interventions prior to pre-clinical testing in humans. Over the past several years, mouse CKD models have been extensively used for these purposes. Despite significant limitations, the model of unilateral ureteral obstruction (UUO) has essentially become the high throughput in vivo model, as it recapitulates the fundamental pathogenetic mechanisms that typify all forms of CKD in a relatively short time span. In addition, several alternative mouse models are available that can be used to validate new mechanistic paradigms and/or novel therapies. Several models are reviewed – both genetic and experimentally induced – that provide investigators with an opportunity to include renal functional study end-points together with quantitative measures of fibrosis severity, something that is not possible with the UUO model. PMID:21695449
Adaptive neural network motion control of manipulators with experimental evaluations.
Puga-Guzmán, S; Moreno-Valenzuela, J; Santibáñez, V
2014-01-01
A nonlinear proportional-derivative controller plus adaptive neural-network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used, and adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out on two different mechanical systems: a horizontal two-degrees-of-freedom robot and a vertical one-degree-of-freedom arm affected by the gravitational force. In each of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller.
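The flavour of such a scheme can be sketched in simulation: a PD law on a one-degree-of-freedom arm with gravity, plus a two-layer network (fixed random input layer, adapted output weights) that learns the unmodelled gravity torque. Gains, network size, adaptation law, and plant parameters below are illustrative choices, not the paper's design or proof.

```python
# Hedged sketch: PD control of a unit-inertia arm with unknown gravity torque,
# plus a two-layer NN whose output weights adapt to cancel it. All numbers
# are invented for illustration.
import math, random

random.seed(0)
H = 10                                          # hidden neurons
V = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # fixed layer
W = [0.0] * H                                   # adapted output weights

def features(q, dq):
    x = (q, dq, 1.0)
    return [math.tanh(sum(v * xi for v, xi in zip(row, x))) for row in V]

def simulate(adapt=True, qd=1.0, dt=1e-3, T=20.0,
             kp=20.0, kd=8.0, lam=1.0, gamma=5.0):
    global W
    W = [0.0] * H
    q = dq = 0.0
    for _ in range(int(T / dt)):
        e, de = qd - q, -dq                     # setpoint tracking errors
        phi = features(q, dq)
        u = kp * e + kd * de + (sum(w * p for w, p in zip(W, phi)) if adapt else 0.0)
        if adapt:                               # gradient-type adaptation law
            r = de + lam * e
            W = [w + gamma * p * r * dt for w, p in zip(W, phi)]
        ddq = u - 5.0 * math.cos(q) - 0.5 * dq  # arm: gravity + viscous friction
        dq += ddq * dt
        q += dq * dt
    return abs(qd - q)                          # final position error

err_pd = simulate(adapt=False)   # PD alone: steady-state gravity droop
err_nn = simulate(adapt=True)    # with NN compensation: droop is learned away
```

PD alone leaves a steady-state error of roughly G(q)/kp because gravity is uncompensated; the adapted network output converges toward the gravity torque at the setpoint and drives that error toward zero, mirroring the qualitative comparison reported in the abstract.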
Asati, Ankita; Satyanarayana, G N V; Patel, Devendra K
2017-04-01
An efficient and inexpensive method using vortex-assisted surfactant-enhanced emulsification microextraction (VASEME) based on solidification of a floating organic droplet, coupled with ultraperformance liquid chromatography-tandem mass spectrometry, is proposed for the analysis of glucocorticoids in water samples (river water and hospital wastewater). The VASEME conditions were optimized using a Plackett-Burman design followed by a central composite design, with the statistical models validated experimentally. The Plackett-Burman design showed that vortex time, surfactant concentration, and pH significantly affect the extraction efficiency of the method. Method validation gave an acceptable calibration range of 1-1000 ng L⁻¹, and the limits of detection for the glucocorticoids ranged from 2.20 to 8.12 ng L⁻¹. The proposed method was applied to determine glucocorticoids in river water and hospital wastewater in Lucknow, India. It is reliable and rapid and has potential application for the analysis of glucocorticoids in environmental aqueous samples. Graphical abstract: low-density-solvent-based extraction of glucocorticoids using design of experiments.
NASA Astrophysics Data System (ADS)
Islam, Md Mahbubul; Strachan, Alejandro
A detailed atomistic-level understanding of the ultrafast chemistry of detonation processes in high energy materials is crucial to understanding their performance and safety. Recent advances in laser shocks and ultrafast spectroscopy are yielding the first direct experimental evidence of chemistry at extreme conditions. At the same time, reactive molecular dynamics (MD) on current high-performance computing platforms enables an atomic description of shock-induced chemistry with length and time scales approaching those of experiments. We use MD simulations with the reactive force field ReaxFF to investigate the shock-induced chemical decomposition mechanisms of polyvinyl nitrate (PVN) and nitromethane (NM). The effect of shock pressure on the chemical reaction mechanisms and kinetics of both materials is investigated. For direct comparison of our simulation results with experimentally derived IR absorption data, we performed spectral analysis using atomistic velocities at various shock conditions. The combination of reactive MD simulations and ultrafast spectroscopy enables both the validation of ReaxFF at extreme conditions and the interpretation of experimental data relating changes in spectral features to atomic processes. Supported by the Office of Naval Research MURI program.
Numerical Simulation and Experimental Validation of Failure Caused by Vibration of a Fan
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Han, Wu; Feng, Jianmei; Jia, Xiaohan; Peng, Xueyuan
2017-08-01
This paper presents the root-cause analysis of an unexpected fracture that occurred on the blades of a motor fan used in a natural gas reciprocating compressor unit. A finite element model was established to investigate the natural frequencies and modal shapes of the fan, and a modal test was performed to verify the numerical results, which agreed well with the experimental data. The third-order natural frequency was close to six times the excitation frequency, and the corresponding modal shape was a combination of bending and torsional vibration, which consequently produced low-order resonance and fracture failure of the fan. The torsional moment obtained from a torsional vibration analysis of the compressor shaft system was applied to the numerical model of the fan to evaluate its dynamic stress response. The results showed that the stress concentration regions in the numerical model were consistent with the locations of the fractures on the fan. Based on the numerical simulation and experimental validation, recommendations were given to improve the reliability of the motor fan.
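The failure mode above (a natural frequency sitting near an integer multiple of the running frequency) motivates a simple design-stage screen. The sketch below is a generic resonance-margin check with invented frequencies and margin, not the paper's analysis procedure.

```python
# Hedged sketch: flag any natural frequency within a separation margin of an
# integer multiple (excitation order) of the running frequency. The 10%
# margin and the frequencies are illustrative, not the paper's values.

def resonance_risks(natural_hz, excitation_hz, max_order=12, margin=0.10):
    risks = []
    for mode, fn in enumerate(natural_hz, start=1):
        for order in range(1, max_order + 1):
            f_exc = order * excitation_hz
            if abs(fn - f_exc) <= margin * f_exc:
                risks.append((mode, order, fn))
    return risks

# e.g. a 25 Hz running speed with a third mode at 148 Hz trips the 6x order:
risks = resonance_risks([40.0, 85.0, 148.0], 25.0)
```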
Chen, Jin; Venugopal, Vivek; Intes, Xavier
2011-01-01
Time-resolved fluorescence optical tomography allows 3-dimensional localization of multiple fluorophores based on lifetime contrast while providing a unique data set for improved resolution. However, to employ the full fluorescence time measurements, a light propagation model that accurately simulates weakly diffused and multiply scattered photons is required. In this article, we derive a computationally efficient Monte Carlo-based method to compute time-gated fluorescence Jacobians for the simultaneous imaging of two fluorophores with lifetime contrast. The Monte Carlo-based formulation is validated on a synthetic murine model simulating the uptake in the kidneys of two distinct fluorophores with lifetime contrast. Experimentally, the method is validated using capillaries filled with 2.5 nmol of ICG and IRDye™ 800CW, respectively, embedded in a diffusive medium mimicking the average optical properties of mice. Combining multiple time gates in one inverse problem allows the simultaneous reconstruction of multiple fluorophores with increased resolution and minimal crosstalk using the proposed formulation. PMID:21483610
A PCR primer bank for quantitative gene expression analysis.
Wang, Xiaowei; Seed, Brian
2003-12-15
Although gene expression profiling by microarray analysis is a useful tool for assessing global levels of transcriptional activity, variability associated with the data sets usually requires that observed differences be validated by some other method, such as real-time quantitative polymerase chain reaction (real-time PCR). However, non-specific amplification of non-target genes is frequently observed in the latter, confounding the analysis in approximately 40% of real-time PCR attempts when primer-specific labels are not used. Here we present an experimentally validated algorithm for the identification of transcript-specific PCR primers on a genomic scale that can be applied to real-time PCR with sequence-independent detection methods. An online database, PrimerBank, has been created for researchers to retrieve primer information for their genes of interest. PrimerBank currently contains 147 404 primers encompassing most known human and mouse genes. The primer design algorithm has been tested by conventional and real-time PCR for a subset of 112 primer pairs with a success rate of 98.2%.
NASA Astrophysics Data System (ADS)
Aspden, Reuben S.; Tasca, Daniel S.; Forbes, Andrew; Boyd, Robert W.; Padgett, Miles J.
2014-04-01
The Klyshko advanced-wave picture is a well-known tool for conceptualising spontaneous parametric down-conversion (SPDC) experiments. Although the picture is well known and understood, there have been few experimental demonstrations of its validity. Here, we present such a demonstration using a time-gated camera in an image-based coincidence measurement. We show excellent agreement between the spatial distributions predicted by the Klyshko picture and those obtained using the SPDC photon pairs. An interesting speckle feature is present in the Klyshko predictive images due to the spatial coherence of the back-propagated beam in the multi-mode fibre. This effect can be removed by mechanically twisting the fibre, which degrades the spatial coherence of the beam and time-averages the speckle pattern, giving an accurate correspondence between the predictive and SPDC images.
Keromnes, Alan; Metcalfe, Wayne K.; Heufer, Karl A.; ...
2013-03-12
The oxidation of syngas mixtures was investigated experimentally and simulated with an updated chemical kinetic model. Ignition delay times for H2/CO/O2/N2/Ar mixtures have been measured using two rapid compression machines and shock tubes at pressures from 1 to 70 bar, over a temperature range of 914–2220 K, and at equivalence ratios from 0.1 to 4.0. Results show a strong dependence of ignition times on temperature and pressure at the end of compression; ignition delays decrease with increasing temperature, pressure, and equivalence ratio. The reactivity of the syngas mixtures was found to be governed by hydrogen chemistry for CO concentrations lower than 50% in the fuel mixture. For higher CO concentrations, an inhibiting effect of CO was observed. Flame speeds were measured in helium for syngas mixtures with a high CO content at elevated pressures of 5 and 10 atm using the spherically expanding flame method. A detailed chemical kinetic mechanism for hydrogen and H2/CO (syngas) mixtures has been updated; rate constants have been adjusted to reflect new experimental information obtained at high pressures and new rate constant values recently published in the literature. Experimental results for ignition delay times and flame speeds have been compared with predictions using the newly revised chemical kinetic mechanism, and good agreement was observed. In the mechanism validation, particular emphasis is placed on predicting experimental data at high pressures (up to 70 bar) and intermediate- to high-temperature conditions, which are particularly important for applications in internal combustion engines and gas turbines. The reaction sequence H2 + HȮ2 ↔ Ḣ + H2O2, followed by H2O2(+M) ↔ ȮH + ȮH(+M), was found to play a key role in hydrogen ignition under high-pressure, intermediate-temperature conditions. The rate constant for H2 + HȮ2 showed strong sensitivity to high-pressure ignition times and has considerable uncertainty based on literature values. As a result, a rate constant for this reaction is recommended based on available literature values and on our mechanism validation.
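The strong temperature sensitivity of ignition described in the abstract above is ultimately encoded in modified Arrhenius rate constants of the form k(T) = A·T^n·exp(−Ea/RT). A minimal sketch of evaluating such a rate follows; the parameter values are placeholders for illustration, not the rate constants recommended in the study:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(A, n, Ea, T):
    """Modified Arrhenius rate constant k(T) = A * T**n * exp(-Ea / (R*T))."""
    return A * T**n * math.exp(-Ea / (R * T))

# Illustrative only: a 100 kJ/mol barrier raises the rate more than tenfold
# between 900 K and 1100 K, showing why ignition delay is so T-sensitive.
ratio = arrhenius(1.0, 0.0, 100e3, 1100.0) / arrhenius(1.0, 0.0, 100e3, 900.0)
```

Fitting A, n, and Ea for a reaction such as H2 + HȮ2 against high-pressure ignition data is the kind of adjustment the mechanism update describes.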
Experimental Validation of the Half-Length Force Concept Inventory
ERIC Educational Resources Information Center
Han, Jing; Koenig, Kathleen; Cui, Lili; Fritchman, Joseph; Li, Dan; Sun, Wanyi; Fu, Zhao; Bao, Lei
2016-01-01
In a recent study, the 30-question Force Concept Inventory (FCI) was theoretically split into two 14-question "half-length" tests (HFCIs) covering the same set of concepts and producing mean scores that can be equated to those of the original FCI. The HFCIs require less administration time and reduce test-retest issues when different…
The Factorial Survey: Design Selection and its Impact on Reliability and Internal Validity
ERIC Educational Resources Information Center
Dülmer, Hermann
2016-01-01
The factorial survey is an experimental design consisting of varying situations (vignettes) that have to be judged by respondents. For more complex research questions, it quickly becomes impossible for an individual respondent to judge all vignettes. To overcome this problem, random designs are recommended most of the time, whereas quota designs…
Reading in Examination-Type Situations: The Effects of Text Layout on Performance
ERIC Educational Resources Information Center
Lonsdale, Maria dos Santos; Dyson, Mary C.; Reynolds, Linda
2006-01-01
Examinations are conventionally used to measure candidates' achievement in a limited time period. However, the influence of text layout on performance may compromise the construct validity of the examination. An experimental study looked at the effects of the text layout on the speed and accuracy of a reading task in an examination-type situation.…
Reduced and Validated Kinetic Mechanisms for Hydrogen-CO-Air Combustion in Gas Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yiguang Ju; Frederick Dryer
2009-02-07
Rigorous experimental, theoretical, and numerical investigation of various issues relevant to the development of reduced, validated kinetic mechanisms for synthetic gas combustion in gas turbines was carried out, including the construction of new radiation models for combusting flows, improvement of flame speed measurement techniques, measurements and chemical kinetic analysis of H2/CO/CO2/O2/diluent mixtures, revision of the H2/O2 kinetic model to improve flame speed prediction capabilities, and development of a multi-time-scale algorithm to improve computational efficiency in reacting flow simulations.
Validation of hydrogen gas stratification and mixing models
Wu, Hsingtzu; Zhao, Haihua
2015-05-26
Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling-based one-dimensional method to achieve a large reduction in computational effort compared to 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed: one for a single buoyant jet in an open space and another for a large sealed enclosure with both a jet source and a vent near the floor. Both have been validated by comparison with experimental data, and excellent agreement is observed. Entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results for the average helium concentration in an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. The computing time for each BMIX++ model on a normal desktop computer is less than 5 min.
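The entrainment-coefficient fits reported above can be illustrated with a classical top-hat jet model, in which the volume flux grows along a momentum-conserving jet as dQ/dz = 2α√(πM). This is a generic sketch of the entrainment concept under that textbook assumption, not the BMIX++ formulation itself:

```python
import math

def jet_volume_flux(Q0, M0, alpha, z, dz=1e-3):
    """Euler-integrate dQ/dz = 2*alpha*sqrt(pi*M) along a top-hat jet.
    Kinematic momentum flux M is conserved for a pure (non-buoyant) jet,
    so entrainment makes the volume flux Q grow linearly with height z."""
    Q, M = Q0, M0
    for _ in range(int(z / dz)):
        Q += 2.0 * alpha * math.sqrt(math.pi * M) * dz
    return Q

# alpha = 0.09, the entrainment coefficient reported above for Fr = 99;
# source fluxes here are arbitrary illustrative values
print(jet_volume_flux(1e-4, 1e-4, 0.09, 1.0))
```

Because M is constant for a pure jet, the integral has the closed form Q(z) = Q0 + 2α√(πM)·z, which the numerical result should reproduce.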
Validation of Cross Sections for Monte Carlo Simulation of the Photoelectric Effect
NASA Astrophysics Data System (ADS)
Han, Min Cheol; Kim, Han Sung; Pia, Maria Grazia; Basaglia, Tullio; Batič, Matej; Hoff, Gabriela; Kim, Chan Hyeong; Saracco, Paolo
2016-04-01
Several total and partial photoionization cross section calculations, based on both theoretical and empirical approaches, are quantitatively evaluated with statistical analyses using a large collection of experimental data retrieved from the literature to identify the state of the art for modeling the photoelectric effect in Monte Carlo particle transport. Some of the examined cross section models are available in general purpose Monte Carlo systems, while others have been implemented and subjected to validation tests for the first time to estimate whether they could improve the accuracy of particle transport codes. The validation process identifies Scofield's 1973 non-relativistic calculations, tabulated in the Evaluated Photon Data Library (EPDL), as those best reproducing experimental measurements of total cross sections. Specialized total cross section models, some of which derive from more recent calculations, do not provide significant improvements. Scofield's non-relativistic calculations are not surpassed regarding compatibility with experiment for K and L shell photoionization cross sections either, although in a few test cases Ebel's parameterization produces more accurate results close to absorption edges. Modifications to Biggs and Lighthill's parameterization implemented in Geant4 significantly reduce the accuracy of total cross sections at low energies with respect to the original formulation. The scarcity of suitable experimental data hinders a similarly extensive analysis for the simulation of the photoelectron angular distribution, which is limited to a qualitative appraisal.
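A statistical evaluation of cross section models against experiment can be as simple as counting how often a model falls within the measurement uncertainty. The metric below is a generic illustration of that idea, not the specific statistic used in the study:

```python
def within_uncertainty_fraction(model, experiment, sigma):
    """Fraction of experimental points reproduced within one standard
    uncertainty -- a crude validation statistic for comparing models."""
    hits = sum(1 for m, e, s in zip(model, experiment, sigma) if abs(m - e) <= s)
    return hits / len(experiment)

# Toy data: the model matches 2 of 3 measurements within 1 sigma
print(within_uncertainty_fraction([1.0, 2.0, 3.0], [1.1, 2.5, 3.0], [0.2, 0.2, 0.1]))
```

Ranking several models by such a fraction over a large experimental collection is one way to identify a best-performing tabulation, as the study does for EPDL.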
Shin, Sangmun; Choi, Du Hyung; Truong, Nguyen Khoa Viet; Kim, Nam Ah; Chu, Kyung Rok; Jeong, Seong Hoon
2011-04-04
A new experimental design methodology was developed by integrating response surface methodology and time series modeling. The major purposes were to identify significant factors in determining swelling and release rate from matrix tablets, and their relative factor levels, for optimizing the experimental responses. Properties of tablet swelling and drug release were assessed with ten factors and two default factors, a hydrophilic model drug (terazosin) and magnesium stearate, and compared with target values. The selected input control factors were arranged in a mixture simplex lattice design with 21 experimental runs. The obtained optimal settings for gelation were PEO, LH-11, Syloid, and Pharmacoat with weight ratios of 215.33 (88.50%), 5.68 (2.33%), 19.27 (7.92%), and 3.04 (1.25%), respectively. The optimal settings for drug release were PEO and citric acid with weight ratios of 191.99 (78.91%) and 51.32 (21.09%), respectively. Based on the results of matrix swelling and drug release, the optimal solutions, target values, and validation experiment results over time were similar and showed consistent patterns with very small biases. The methodology could be very promising for obtaining maximum information with limited time and resources, and could be useful in formulation studies by providing a systematic and reliable screening method to characterize significant factors in sustained-release matrix tablets.
CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave/Boundary-Layer Interaction
NASA Technical Reports Server (NTRS)
Davis, David O.
2015-01-01
Experimental investigations of specific flow phenomena, e.g., shock-wave/boundary-layer interactions (SWBLI), provide great insight into flow behavior but often lack the details necessary to be useful as CFD validation experiments. Reasons include undefined boundary conditions, inconsistent results, undocumented 3D effects (centerline-only measurements), and a lack of uncertainty analysis. While there are a number of good subsonic experimental investigations that are sufficiently documented to be considered test cases for CFD and turbulence model validation, the number of supersonic and hypersonic cases is much smaller. This was highlighted by Settles and Dodson's [1] comprehensive review of available supersonic and hypersonic experimental studies. In all, several hundred studies were considered for their database. Of these, over a hundred were subjected to rigorous acceptance criteria. Based on those criteria, only 19 (12 supersonic, 7 hypersonic) were considered of sufficient quality to be used for validation purposes. Aeschliman and Oberkampf [2] recognized the need to develop a specific methodology for experimental studies intended specifically for validation purposes.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-31
... factors as the approved models, are validated by experimental test data, and receive the Administrator's... stage of the MEP involves applying the model against a database of experimental test cases including..., particularly the requirement for validation by experimental test data. That guidance is based on the MEP's...
NASA Astrophysics Data System (ADS)
Pandey, Saurabh; Majhi, Somanath; Ghorai, Prasenjit
2017-07-01
In this paper, the conventional relay feedback test has been modified for modelling and identification of a class of real-time dynamical systems in terms of linear transfer function models with time-delay. An ideal relay and unknown systems are connected through a negative feedback loop to bring the sustained oscillatory output around the non-zero setpoint. Thereafter, the obtained limit cycle information is substituted in the derived mathematical equations for accurate identification of unknown plants in terms of overdamped, underdamped, critically damped second-order plus dead time and stable first-order plus dead time transfer function models. Typical examples from the literature are included for the validation of the proposed identification scheme through computer simulations. Subsequently, the comparisons between estimated model and true system are drawn through integral absolute error criterion and frequency response plots. Finally, the obtained output responses through simulations are verified experimentally on real-time liquid level control system using Yokogawa Distributed Control System CENTUM CS3000 set up.
Optimal Control for Aperiodic Dual-Rate Systems With Time-Varying Delays
Aranda-Escolástico, Ernesto; Salt, Julián; Guinaldo, María; Chacón, Jesús; Dormido, Sebastián
2018-01-01
In this work, we consider a dual-rate scenario with slow input and fast output. Our objective is the maximization of the decay rate of the system through the suitable choice of the n-input signals between two measures (periodic sampling) and their times of application. The optimization algorithm is extended for time-varying delays in order to make possible its implementation in networked control systems. We provide experimental results in an air levitation system to verify the validity of the algorithm in a real plant. PMID:29747441
Development and validation of a low-cost mobile robotics testbed
NASA Astrophysics Data System (ADS)
Johnson, Michael; Hayes, Martin J.
2012-03-01
This paper considers the design, construction and validation of a low-cost experimental robotic testbed, which allows for the localisation and tracking of multiple robotic agents in real time. The testbed system is suitable for research and education in a range of different mobile robotic applications, for validating theoretical as well as practical research work in the field of digital control, mobile robotics, graphical programming and video tracking systems. It provides a reconfigurable floor space for mobile robotic agents to operate within, while tracking the position of multiple agents in real-time using the overhead vision system. The overall system provides a highly cost-effective solution to the topical problem of providing students with practical robotics experience within severe budget constraints. Several problems encountered in the design and development of the mobile robotic testbed and associated tracking system, such as radial lens distortion and the selection of robot identifier templates are clearly addressed. The testbed performance is quantified and several experiments involving LEGO Mindstorm NXT and Merlin System MiaBot robots are discussed.
TIMES-SS: recent refinements resulting from an industrial skin sensitisation consortium.
Patlewicz, G; Kuseva, C; Mehmed, A; Popova, Y; Dimitrova, G; Ellis, G; Hunziker, R; Kern, P; Low, L; Ringeissen, S; Roberts, D W; Mekenyan, O
2014-01-01
The TImes MEtabolism Simulator platform for predicting Skin Sensitisation (TIMES-SS) is a hybrid expert system, first developed at Bourgas University using funding and data from a consortium of industry and regulators. TIMES-SS encodes structure-toxicity and structure-skin metabolism relationships through a number of transformations, some of which are underpinned by mechanistic 3D QSARs. The model estimates semi-quantitative skin sensitisation potency classes and has been developed with the aim of minimising animal testing and of being scientifically valid in accordance with the OECD principles for (Q)SAR validation. In 2007 an external validation exercise was undertaken to fully address these principles. In 2010, a new industry consortium was established to coordinate research efforts in three specific areas: refinement of abiotic reactions in the skin (namely autoxidation), refinement of the manner in which chemical reactivity is captured in structure-toxicity rules (inclusion of alert reliability parameters), and definition of the applicability domain based on the underlying experimental data (a study of discrepancies between the Local Lymph Node Assay (LLNA) and the Guinea Pig Maximisation Test (GPMT)). The present paper summarises the progress of these activities and explains how the insights derived have been translated into refinements, resulting in increased confidence and transparency in the robustness of the TIMES-SS predictions.
Short- and long-time diffusion and dynamic scaling in suspensions of charged colloidal particles
NASA Astrophysics Data System (ADS)
Banchio, Adolfo J.; Heinen, Marco; Holmqvist, Peter; Nägele, Gerhard
2018-04-01
We report on a comprehensive theory-simulation-experimental study of collective and self-diffusion in concentrated suspensions of charge-stabilized colloidal spheres. In theory and simulation, the spheres are assumed to interact directly by a hard-core plus screened Coulomb effective pair potential. The intermediate scattering function, fc(q, t), is calculated by elaborate accelerated Stokesian dynamics (ASD) simulations for Brownian systems where many-particle hydrodynamic interactions (HIs) are fully accounted for, using a novel extrapolation scheme to a macroscopically large system size valid for all correlation times. The study spans the correlation time range from the colloidal short-time to the long-time regime. Additionally, Brownian Dynamics (BD) simulation and mode-coupling theory (MCT) results of fc(q, t) are generated where HIs are neglected. Using these results, the influence of HIs on collective and self-diffusion and the accuracy of the MCT method are quantified. It is shown that HIs enhance collective and self-diffusion at intermediate and long times. At short times self-diffusion, and for wavenumbers outside the structure factor peak region also collective diffusion, are slowed down by HIs. MCT significantly overestimates the slowing influence of dynamic particle caging. The dynamic scattering functions obtained in the ASD simulations are in overall good agreement with our dynamic light scattering (DLS) results for a concentration series of charged silica spheres in an organic solvent mixture, in the experimental time window and wavenumber range. From the simulation data for the time derivative of the width function associated with fc(q, t), there is indication of long-time exponential decay of fc(q, t), for wavenumbers around the location of the static structure factor principal peak. 
The experimental scattering functions in the probed time range are consistent with a time-wavenumber factorization scaling behavior of fc(q, t) that was first reported by Segrè and Pusey [Phys. Rev. Lett. 77, 771 (1996)] for suspensions of hard spheres. Our BD simulation and MCT results predict a significant violation of exact factorization scaling which, however, is approximately restored according to the ASD results when HIs are accounted for, consistent with the experimental findings for fc(q, t). Our study of collective diffusion is supplemented by simulation and theoretical results for the self-intermediate scattering function, fs(q, t), and its non-Gaussian parameter α2(t), and for the particle mean squared displacement W(t) and its time derivative. Since self-diffusion properties are not assessed in standard DLS measurements, a method to deduce W(t) approximately from fc(q, t) is theoretically validated.
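The non-Gaussian parameter α2(t) mentioned above compares the fourth and second moments of particle displacements; in 3D, α2 = 3⟨Δr⁴⟩/(5⟨Δr²⟩²) − 1, which vanishes for Gaussian displacements. A minimal sketch of estimating it from a set of displacement vectors (the function name and test data are illustrative):

```python
import random

def msd_and_alpha2(displacements):
    """Mean squared displacement and the 3D non-Gaussian parameter
    alpha2 = 3<dr^4> / (5 <dr^2>^2) - 1 (zero for Gaussian displacements)."""
    r2 = [dx * dx + dy * dy + dz * dz for dx, dy, dz in displacements]
    m2 = sum(r2) / len(r2)
    m4 = sum(v * v for v in r2) / len(r2)
    return m2, 3.0 * m4 / (5.0 * m2 * m2) - 1.0

# Gaussian test data: unit variance per axis gives m2 ~ 3 and alpha2 ~ 0
random.seed(0)
steps = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
         for _ in range(50000)]
m2, a2 = msd_and_alpha2(steps)
```

Deviations of α2(t) from zero at intermediate times are a standard signature of the dynamic heterogeneity and caging discussed in the abstract.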
Experimental investigation of an RNA sequence space
NASA Technical Reports Server (NTRS)
Lee, Youn-Hyung; Dsouza, Lisa; Fox, George E.
1993-01-01
Modern rRNAs are the historic consequence of an ongoing evolutionary exploration of a sequence space. These extant sequences belong to a special subset of the sequence space that is comprised only of those primary sequences that can validly perform the biological function(s) required of the particular RNA. If it were possible to readily identify all such valid sequences, stochastic predictions could be made about the relative likelihood of various evolutionary pathways available to an RNA. Herein an experimental system which can assess whether a particular sequence is likely to have validity as a eubacterial 5S rRNA is described. A total of ten naturally occurring, and hence known to be valid, sequences and two point mutants of unknown validity were used to test the usefulness of the approach. Nine of the ten valid sequences tested positive whereas both mutants tested as clearly defective. The tenth valid sequence gave results that would be interpreted as reflecting a borderline status were the answer not known. These results demonstrate that it is possible to experimentally determine which sequences in local regions of the sequence space are potentially valid 5S rRNAs.
Detection of tunnel excavation using fiber optic reflectometry: experimental validation
NASA Astrophysics Data System (ADS)
Linker, Raphael; Klar, Assaf
2013-06-01
Cross-border smuggling tunnels enable unmonitored movement of people and goods, and pose a severe threat to homeland security. In recent years, we have been working on the development of a system based on fiber-optic Brillouin optical time-domain reflectometry (BOTDR) for detecting tunnel excavation. In two previous SPIE publications we reported the initial development of the system as well as its validation in small-scale experiments. This paper reports, for the first time, results of full-scale experiments and discusses the system performance. The results confirm that distributed measurement of strain profiles in fiber cables buried at shallow depth enables detection of tunnel excavation and, with proper data processing, precise localization of the tunnel as well as a reasonable estimate of its depth.
Experimental Validation and Combustion Modeling of a JP-8 Surrogate in a Single Cylinder Diesel Engine
Amit Shrestha, Umashankar Joshi, Ziliang Zheng, Tamer Badawy, Naeim A. Henein, Wayne State University, Detroit, MI, USA
2014-04-15
A two-component JP-8 surrogate is validated in a single cylinder diesel engine; validation parameters include ignition delay.
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement, which can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion, and many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (1) a statistical dependence between increments, modelled as an order-k Markov process, that reduces to order 1 (this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start); (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); and (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
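A first, crude screen for dependence between successive increments is the lag correlation; the copula analysis described above is needed precisely because such linear measures miss higher-order dependence. A sketch with an illustrative AR(1) test series (names and data are ours, not the study's):

```python
import random

def lag_correlation(increments, lag):
    """Pearson correlation between increments `lag` steps apart; a purely
    linear first screen for the dependence that copula analysis targets."""
    n = len(increments) - lag
    x, y = increments[:n], increments[lag:]
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

# AR(1) increments with coefficient 0.8: clearly not independent at lag 1
random.seed(1)
series = [0.0]
for _ in range(5000):
    series.append(0.8 * series[-1] + random.gauss(0, 1))
```

A vanishing lag correlation at some transport distance is necessary but not sufficient for the classical PTRW/CTRW independence assumption, which is why the study works with copulas instead.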
Quick, Virginia; Martin-Biggers, Jennifer; Povis, Gayle Alleman; Worobey, John; Hongu, Nobuko; Byrd-Bredbenner, Carol
2018-05-01
This study examined long-term follow-up effects of participation in the HomeStyles RCT, using Social Cognitive Theory constructs, on physical activity cognitions, the home environment, and lifestyle behavioral practices of families with preschool children (ages 2 to 5 years). Parents were systematically randomized to an experimental or attention control group at baseline. Those completing all surveys, which comprised valid, reliable measures, formed the analytic sample (n = 61 experimental, n = 63 control; mean age 32.8 ± 5.9 SD years). Repeated measures ANCOVA, controlling for prognostic variables (e.g., parent sex), revealed that variables assessing modeling of physical activity for children increased significantly (P ≤ .01) in both groups, with no significant time-by-group effects. Paired t-tests indicated that the experimental group's self-efficacy for keeping children's weight healthy and performing health-promoting behaviors increased significantly over time whereas the control group's did not, with no significant time-by-group effects. Self-regulation findings from paired t-tests indicated that the total screen time the experimental group allowed children decreased significantly over time, with no significant time-by-group effect. The value parents placed on physical activity for children increased over time in both groups, with a significant time effect. The experimental group had significantly greater increases over time in the availability of physical activity space and supports inside the home than the control group. The improvements noted have the potential to help protect children and parents from excess weight gain, yet the findings indicate considerable opportunity for continued improvement, as well as a need to elucidate the factors affecting concomitant changes in both study groups.
Health Auctions: a Valuation Experiment (HAVE) study protocol.
Kularatna, Sanjeewa; Petrie, Dennis; Scuffham, Paul A; Byrnes, Joshua
2016-04-07
Quality-adjusted life years are derived using health state utility weights, which adjust for the relative value of living in each health state compared with living in perfect health. Various techniques are used to estimate health state utility weights, including time trade-off and standard gamble. These methods have exhibited limitations in terms of complexity, validity and reliability. A new composite approach using experimental auctions to value health states is introduced in this protocol. A pilot study will test the feasibility and validity of using experimental auctions to value health states in monetary terms. A convenience sample (n=150) from a population of university staff and students will be invited to participate in 30 auction sets with a group of 5 people in each set. The 9 health states auctioned in each auction set will come from the commonly used EQ-5D-3L instrument. Each participant may purchase at most 2 health states, and the participant who acquires the 2 'best' health states on average will keep the amount of money they do not spend in acquiring those health states. The value (highest bid and average bid) of each of the 24 health states will be compared across auctions to test for reliability across auction groups and across auctioneers. A test-retest will be conducted for 10% of the sample to assess the reliability of responses in the health state auctions. The feasibility of conducting experimental auctions to value health states will also be examined. The validity of estimated health state values will be compared with published utility estimates from other methods. This pilot study will explore the feasibility, reliability and validity of using experimental auctions for valuing health states. Ethical clearance was obtained from the Griffith University ethics committee. The results will be disseminated in peer-reviewed journals and at major international conferences. Published by the BMJ Publishing Group Limited.
Modeling and Experimental Validation for 3D mm-wave Radar Imaging
NASA Astrophysics Data System (ADS)
Ghazi, Galia
As the problem of identifying suicide bombers wearing explosives concealed under clothing becomes increasingly important, it becomes essential to detect suspicious individuals at a distance. Systems which employ multiple sensors to determine the presence of explosives on people are being developed. Their functions include observing and following individuals with intelligent video, identifying explosives residues or heat signatures on the outer surface of their clothing, and characterizing explosives using penetrating X-rays, terahertz waves, neutron analysis, or nuclear quadrupole resonance. At present, mm-wave radar is the only modality that can both penetrate and sense beneath clothing at a distance of 2 to 50 meters without causing physical harm. Unfortunately, current mm-wave radar systems capable of performing high-resolution, real-time imaging require arrays with a large number of transmitting and receiving modules; as a result, these systems suffer from large size, weight and power consumption, as well as extremely complex hardware architecture. The overarching goal of this thesis is the development and experimental validation of a next-generation, inexpensive, high-resolution radar system that can distinguish security threats hidden on individuals located at 2-10 meters range.
In pursuit of this goal, this thesis proposes the following contributions: (1) Development and experimental validation of a new current-based, high-frequency computational method to model large scattering problems (hundreds of wavelengths) involving lossy, penetrable and multi-layered dielectric and conductive structures, which is needed for an accurate characterization of the wave-matter interaction and EM scattering in the target region; (2) Development of combined Norm-1, Norm-2 regularized imaging algorithms, which are needed for enhancing the resolution of the images while using a minimum number of transmitting and receiving antennas; (3) Implementation and experimental validation of new calibration techniques, which are needed for coherent imaging with multistatic configurations; and (4) Investigation of novel compressive antennas, which spatially modulate the wavefield in order to enhance the information transfer efficiency between sampling and imaging regions and use of Compressive Sensing algorithms.
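The combined Norm-1/Norm-2 regularization in contribution (2) can be illustrated with a generic elastic-net reconstruction solved by proximal gradient descent (ISTA). The sensing matrix and sparse scene below are synthetic stand-ins, not the thesis's actual radar model.

```python
import numpy as np

def elastic_net_ista(A, y, lam1=0.05, lam2=0.01, n_iter=2000):
    """Solve min_x 0.5||Ax - y||^2 + 0.5*lam2*||x||^2 + lam1*||x||_1
    by proximal gradient descent (ISTA with soft-thresholding)."""
    L = np.linalg.norm(A, 2) ** 2 + lam2   # Lipschitz constant of smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam2 * x
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # prox of L1
    return x

# synthetic sparse scene: a few bright scatterers, under-determined system
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))        # 40 measurements, 80 pixels
x_true = np.zeros(80)
x_true[[3, 17, 60]] = [2.0, -1.5, 1.0]
x_hat = elastic_net_ista(A, A @ x_true)
```

The Norm-1 term promotes sparse scenes (few scatterers) so that fewer transmit/receive channels suffice, while the Norm-2 term stabilizes the inversion.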
NASA Astrophysics Data System (ADS)
Khouli, F.
An aeroelastic phenomenon, known as blade sailing, encountered during maritime operation of helicopters is identified as a factor that limits the tactical flexibility of helicopter operation in some sea conditions. The hazards associated with this phenomenon and its complexity, owing to the number of factors contributing to its occurrence, led previous investigators to conclude that advanced and validated simulation tools are best suited to investigate it. A research gap is identified in terms of scaled experimental investigation of this phenomenon and practical engineering solutions to alleviate its negative impact on maritime helicopter operation. Assessing the feasibility of a proposed alleviation strategy required addressing a gap in the modelling of thin-walled composite active beams/rotor blades. The modelling is performed by extending a mathematically-consistent and asymptotic reduction strategy of the 3-D elastic problem to account for embedded active materials. The derived active cross-sectional theory is validated using 2-D finite element results for closed and open cross-sections. The geometrically-exact intrinsic formulation of active maritime rotor systems is demonstrated to yield compact and symbolic governing equations. The intrinsic feature is shown to allow a classical and proven solution scheme to be successfully applied to obtain time history solutions. A Froude-scaled experimental rotor was designed, built, and tested in a scaled ship airwake environment and representative ship motion. Based on experimental and simulation data, conclusions are drawn regarding the influence of the maritime operation environment and the rotor operation parameters on the blade sailing phenomenon. The experimental data is also used to successfully validate the developed simulation tools. The feasibility of an open-loop control strategy based on the integral active twist concept to counter blade sailing is established in a Mach-scaled maritime operation environment.
Recommendations are proposed to improve the strategy and further establish its validity in a full-scale maritime operation environment.
Utilizing Metalized Fabrics for Liquid and Rip Detection and Localization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, Stephen; Mahan, Cody; Kuhn, Michael J
2013-01-01
This paper proposes a novel technique for utilizing conductive textiles as a distributed sensor for detecting and localizing liquids (e.g., blood), rips (e.g., bullet holes), and potentially biosignals. The proposed technique is verified through both simulation and experimental measurements. Circuit theory is utilized to depict conductive fabric as a bounded, near-infinite grid of resistors. Solutions to the well-known infinite resistance grid problem are used to confirm the accuracy and validity of this modeling approach. Simulations allow for discontinuities to be placed within the resistor matrix to illustrate the effects of bullet holes within the fabric. A real-time experimental system was developed that uses a multiplexed Wheatstone bridge approach to reconstruct the resistor grid across the conductive fabric and detect liquids and rips. The resistor grid model is validated through a comparison of simulated and experimental results. Results suggest accuracy proportional to the electrode spacing in determining the presence and location of discontinuities in conductive fabric samples. Future work is focused on refining the experimental system to provide more accuracy in detecting and localizing events as well as developing a complete prototype that can be deployed for field testing. Potential applications include intelligent clothing, flexible, lightweight sensing systems, and combat wound detection.
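The bounded resistor-grid model can be sketched by solving Kirchhoff's equations on a grid graph. The grid size, electrode positions, and hole pattern below are illustrative, not the paper's experimental configuration.

```python
import numpy as np

def grid_resistance(n, src, dst, holes=()):
    """Effective resistance between nodes src and dst of an n x n grid of
    1-ohm resistors; 'holes' are removed nodes modelling rips. Assumes the
    remaining grid stays connected."""
    idx = lambda r, c: r * n + c
    hole_idx = {idx(r, c) for r, c in holes}
    N = n * n
    G = np.zeros((N, N))                  # conductance (graph Laplacian)
    for r in range(n):
        for c in range(n):
            for dr, dc in ((0, 1), (1, 0)):
                r2, c2 = r + dr, c + dc
                if r2 < n and c2 < n:
                    a, b = idx(r, c), idx(r2, c2)
                    if a in hole_idx or b in hole_idx:
                        continue          # rip: resistor removed
                    G[a, a] += 1.0; G[b, b] += 1.0
                    G[a, b] -= 1.0; G[b, a] -= 1.0
    # inject 1 A at src, extract at dst; ground dst to fix the potential
    i = np.zeros(N)
    i[idx(*src)], i[idx(*dst)] = 1.0, -1.0
    keep = [k for k in range(N) if k != idx(*dst) and k not in hole_idx]
    v = np.zeros(N)
    v[keep] = np.linalg.solve(G[np.ix_(keep, keep)], i[keep])
    return v[idx(*src)]                   # V = R_eff * 1 A

r_intact = grid_resistance(8, (0, 0), (7, 7))
r_ripped = grid_resistance(8, (0, 0), (7, 7),
                           holes=[(3, 3), (3, 4), (4, 3), (4, 4)])
# current must route around the rip, so the measured resistance rises
```

Comparing resistances measured between many electrode pairs, as the multiplexed Wheatstone bridge does, is what makes localization of the discontinuity possible.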
ERIC Educational Resources Information Center
Rossi, Robert Joseph
Methods drawn from four logical theories associated with studies of inductive processes are applied to the assessment and evaluation of experimental episode construct validity. It is shown that this application provides for estimates of episode informativeness with respect to the person examined in terms of the construct and to the construct…
Nonlocal character of quantum theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stapp, H.P.
1997-04-01
According to a common conception of causality, the truth of a statement that refers only to phenomena confined to an earlier time cannot depend upon which measurement an experimenter will freely choose to perform at a later time. According to a common idea of the theory of relativity this causality condition should be valid in all Lorentz frames. It is shown here that this concept of relativistic causality is incompatible with some simple predictions of quantum theory. © 1997 American Association of Physics Teachers.
NASA Technical Reports Server (NTRS)
Bannister, T. C.
1977-01-01
Advantages of using TV on board satellites as the primary data-recording system in a manned space laboratory, when certain types of experiments are flown, are indicated. Real-time or near-real-time validation, elimination of film weight, improved depth of field and low-light sensitivity, and better adaptability to computer and electronic processing of data are spelled out as advantages of TV over photographic techniques in, say, fluid dynamics experiments and weightlessness studies.
Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans
2015-02-01
The Department of Energy (DOE) has made significant progress in developing simulation tools to predict the behavior of nuclear systems with greater accuracy and in increasing our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user).
To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Nuclear Energy Knowledge and Validation Center (NEKVaC or the 'Center') will be a resource for industry, DOE programs, and academic validation efforts.
Multi-body Dynamic Contact Analysis Tool for Transmission Design
2003-04-01
frequencies were computed in COSMIC NASTRAN, and were validated against the published experimental modal analysis [17]. • Using assumed time domain... modal superposition. • Results from the structural analysis (mode shapes or forced response) were converted into IDEAS universal format (dataset 55...).
Shock tube and chemical kinetic modeling study of the oxidation of 2,5-dimethylfuran.
Sirjean, Baptiste; Fournet, René; Glaude, Pierre-Alexandre; Battin-Leclerc, Frédérique; Wang, Weijing; Oehlschlaeger, Matthew A
2013-02-21
A detailed kinetic model describing the oxidation of 2,5-dimethylfuran (DMF), a potential second-generation biofuel, is proposed. The kinetic model is based upon quantum chemical calculations for the initial DMF consumption reactions and important reactions of intermediates. The model is validated by comparison to new DMF shock tube ignition delay time measurements (over the temperature range 1300-1831 K and at nominal pressures of 1 and 4 bar) and the DMF pyrolysis speciation measurements of Lifshitz et al. [J. Phys. Chem. A 1998, 102(52), 10655-10670]. Globally, modeling predictions are in good agreement with the considered experimental targets. In particular, ignition delay times are predicted well by the new model, with model-experiment deviations of at most a factor of 2, and DMF pyrolysis conversion is predicted well, to within the experimental scatter of the Lifshitz et al. data. Additionally, comparisons of measured and model-predicted pyrolysis speciation provide validation of theoretically calculated channels for the oxidation of DMF. Sensitivity and reaction flux analyses highlight important reactions as well as the primary reaction pathways responsible for the decomposition of DMF and formation and destruction of key intermediate and product species.
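The "within a factor of 2" agreement criterion used above can be quantified with a symmetric deviation factor. The ignition delay values below are hypothetical, not the paper's measurements.

```python
# Quantifying model-experiment agreement for ignition delay times:
# a minimal sketch with hypothetical (made-up) data points in seconds.
measured  = [1250e-6, 430e-6, 95e-6]   # hypothetical shock-tube values
predicted = [1600e-6, 390e-6, 150e-6]  # hypothetical model values

def deviation_factor(tau_model, tau_exp):
    """Symmetric factor-of-N deviation: 1.0 means perfect agreement."""
    return max(tau_model / tau_exp, tau_exp / tau_model)

factors = [deviation_factor(p, m) for p, m in zip(predicted, measured)]
assert max(factors) <= 2.0  # the 'within a factor of 2' criterion
```

Taking the maximum of the two ratios makes the metric symmetric, so over- and under-prediction are penalized equally, which a simple ratio would not do.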
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallow, Anne; Abdelaziz, Omar; Graham, Jr., Samuel
The thermal charging performance of paraffin wax combined with compressed expanded natural graphite foam was studied for different graphite bulk densities. Constant heat fluxes between 0.39 W/cm 2 and 1.55 W/cm 2 were applied, as well as a constant boundary temperature of 60 °C. Thermal charging experiments indicate that, in the design of thermal batteries, thermal conductivity of the composite alone is an insufficient metric to determine the influence of the graphite foam on the thermal energy storage. By dividing the latent heat of the composite by the time to end of melt for each applied boundary condition, the energy storage performance was calculated to show the effects of composite thermal conductivity, graphite bulk density, and latent heat capacity. For the experimental volume, the addition of graphite beyond a graphite bulk density of 100 kg/m 3 showed limited benefit on the energy storage performance due to the decrease in latent heat storage capacity. These experimental results are used to validate a numerical model that predicts the time to melt, for future use in the design of heat exchangers with graphite-foam-based phase change material composites. As a result, size-scale effects are explored parametrically with the validated model.
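The energy storage performance metric described above, latent heat of the composite divided by time to end of melt, reduces to a one-line calculation. The numbers below are hypothetical, not the paper's measurements.

```python
# Energy storage performance: latent heat stored / time to end of melt.
# All values are hypothetical placeholders for illustration.
latent_heat_j_per_g = 180.0   # composite latent heat (hypothetical)
mass_g = 50.0                 # sample mass (hypothetical)
time_to_melt_s = 1200.0       # time to end of melt (hypothetical)

performance_w = latent_heat_j_per_g * mass_g / time_to_melt_s
print(f"energy storage performance: {performance_w:.1f} W")
```

The metric captures the trade-off in the abstract: adding graphite raises conductivity (shorter melt time) but displaces wax (lower latent heat), so the ratio, not either quantity alone, indicates the benefit.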
Dekker, Job; Belmont, Andrew S; Guttman, Mitchell; Leshyk, Victor O; Lis, John T; Lomvardas, Stavros; Mirny, Leonid A; O'Shea, Clodagh C; Park, Peter J; Ren, Bing; Politz, Joan C Ritland; Shendure, Jay; Zhong, Sheng
2017-09-13
The 4D Nucleome Network aims to develop and apply approaches to map the structure and dynamics of the human and mouse genomes in space and time with the goal of gaining deeper mechanistic insights into how the nucleus is organized and functions. The project will develop and benchmark experimental and computational approaches for measuring genome conformation and nuclear organization, and investigate how these contribute to gene regulation and other genome functions. Validated experimental technologies will be combined with biophysical approaches to generate quantitative models of spatial genome organization in different biological states, both in cell populations and in single cells.
Dekker, Job; Belmont, Andrew S.; Guttman, Mitchell; Leshyk, Victor O.; Lis, John T.; Lomvardas, Stavros; Mirny, Leonid A.; O’Shea, Clodagh C.; Park, Peter J.; Ren, Bing; Ritland Politz, Joan C.; Shendure, Jay; Zhong, Sheng
2017-01-01
The 4D Nucleome Network aims to develop and apply approaches to map the structure and dynamics of the human and mouse genomes in space and time with the goal of gaining deeper mechanistic understanding of how the nucleus is organized and functions. The project will develop and benchmark experimental and computational approaches for measuring genome conformation and nuclear organization, and investigate how these contribute to gene regulation and other genome functions. Validated experimental approaches will be combined with biophysical modeling to generate quantitative models of spatial genome organization in different biological states, both in cell populations and in single cells. PMID:28905911
NASA Technical Reports Server (NTRS)
Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.
1986-01-01
The validity of the room equation of Crocker and Price (1982) for predicting the cabin interior sound pressure level was experimentally tested using a specially constructed setup for simultaneous measurements of transmitted sound intensity and interior sound pressure levels. Using measured values of the reverberation time and transmitted intensities, the equation was used to predict the space-averaged interior sound pressure level for three different fuselage conditions. The general agreement between the room equation and experimental test data is considered good enough for this equation to be used for preliminary design studies.
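A generic diffuse-field room calculation in the same spirit (transmitted sound power plus reverberation time giving a space-averaged interior level) can be sketched as follows. This uses the textbook Sabine relations as a stand-in, not necessarily Crocker and Price's exact formulation, and the input values are hypothetical.

```python
import math

def interior_spl(transmitted_power_w, volume_m3, t60_s):
    """Space-averaged diffuse-field sound pressure level from transmitted
    sound power, using Sabine absorption A = 0.161*V/T60 and the textbook
    room equation Lp = Lw + 10*log10(4/A). A generic stand-in, not
    necessarily the Crocker-Price formulation."""
    A = 0.161 * volume_m3 / t60_s                       # absorption, m^2
    lw = 10.0 * math.log10(transmitted_power_w / 1e-12)  # power level re 1 pW
    return lw + 10.0 * math.log10(4.0 / A)

# hypothetical cabin: 1 microwatt transmitted, 20 m^3, T60 = 0.4 s
spl = interior_spl(1e-6, 20.0, 0.4)
```

Measuring the reverberation time fixes the absorption term, so the interior level follows directly from the measured transmitted intensity, which is the logic of the validation described above.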
From Single-Cell Dynamics to Scaling Laws in Oncology
NASA Astrophysics Data System (ADS)
Chignola, Roberto; Sega, Michela; Stella, Sabrina; Vyshemirsky, Vladislav; Milotti, Edoardo
We are developing a biophysical model of tumor biology. We follow a strictly quantitative approach where each step of model development is validated by comparing simulation outputs with experimental data. While this strategy may slow our progress, it provides an invaluable reward: we can trust the simulation outputs and use the model to explore territories of cancer biology where current experimental techniques fail. Here, we review our multi-scale biophysical modeling approach and show how a description of cancer at the cellular level has led us to general laws obeyed by both in vitro and in vivo tumors.
NASA Technical Reports Server (NTRS)
Neal, G.
1988-01-01
Flexible-walled wind tunnels have for some time been used to reduce wall interference effects at the model. A necessary part of the 3-D wall adjustment strategy being developed for the Transonic Self-Streamlining Wind Tunnel (TSWT) of Southampton University is the use of influence coefficients. The influence of a wall bump on the centerline flow in TSWT has been calculated theoretically using a streamline curvature program. This report details the experimental verification of these influence coefficients and concludes that it is valid to use the theoretically determined values in 3-D model testing.
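Influence coefficients are used through linear superposition: the centerline perturbation is the influence matrix applied to the vector of wall bump amplitudes. The matrix values below are hypothetical; in TSWT they would come from the streamline curvature program (or, after this report, from the verified theoretical values).

```python
import numpy as np

# Influence-coefficient superposition: a minimal sketch with a hypothetical
# 3-station influence matrix (rows: centerline stations, cols: wall bumps).
C = np.array([[0.30, 0.10, 0.02],
              [0.10, 0.30, 0.10],
              [0.02, 0.10, 0.30]])
bumps = np.array([1.0, -0.5, 0.0])  # wall bump amplitudes, arbitrary units

# Linearity assumption: individual bump responses superpose.
centerline = C @ bumps
```

Wall adjustment then amounts to inverting this linear map: choosing bump amplitudes so that the predicted centerline perturbation cancels the measured interference.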
Concurrent systems and time synchronization
NASA Astrophysics Data System (ADS)
Burgin, Mark; Grathoff, Annette
2018-05-01
In the majority of scientific fields, system dynamics is described assuming existence of a unique time for the whole system. However, it is established theoretically, for example, in relativity theory or in the system theory of time, and validated experimentally that there are different times and time scales in a variety of real systems - physical, chemical, biological, social, etc. In spite of this, there are no wide-ranging scientific approaches to the exploration of such systems. Therefore, the goal of this paper is to study systems with this property. We call them concurrent systems because processes in them can run, events can occur, and actions can be performed on different time scales. The problem of time synchronization is specifically explored.
Results of Microgravity Fluid Dynamics Captured With the Spheres-Slosh Experiment
NASA Technical Reports Server (NTRS)
Lapilli, Gabriel; Kirk, Daniel; Gutierrez, Hector; Schallhorn, Paul; Marsell, Brandon; Roth, Jacob; Moder, Jeffrey
2015-01-01
This paper provides an overview of the SPHERES-Slosh Experiment (SSE) aboard the International Space Station (ISS) and presents on-orbit results with data analysis. In order to predict the location of the liquid propellant during all times of a spacecraft mission, engineers and mission analysts utilize Computational Fluid Dynamics (CFD). These state-of-the-art computer programs numerically solve the fluid flow equations to predict the location of the fluid at any point in time during different spacecraft maneuvers. The models and equations used by these programs have been extensively validated on the ground, but long duration data has never been acquired in a microgravity environment. The SSE aboard the ISS is designed to acquire this type of data, used by engineers on Earth to validate and improve the CFD prediction models, improving the design of the next generation of space vehicles as well as the safety of current missions. The experiment makes use of two Synchronized Position Hold, Engage, Reorient Experimental Satellites (SPHERES) connected by a frame. In the center of the frame there is a plastic, pill-shaped tank that is partially filled with green-colored water. A pair of high-resolution cameras records the movement of the liquid inside the tank as the experiment maneuvers within the Japanese Experimental Module test volume. Inertial measurement units record the accelerations and rotations of the tank, making the combination of stereo imaging and inertial data the inputs for CFD model validation.
Rectification of General Relativity, Experimental Verifications, and Errors of the Wheeler School
NASA Astrophysics Data System (ADS)
Lo, C. Y.
2013-09-01
General relativity is not yet consistent. Pauli has misinterpreted Einstein's 1916 equivalence principle that can derive a valid field equation. The Wheeler School has distorted Einstein's 1916 principle to be his 1911 assumption of equivalence, and created new errors. Moreover, errors on dynamic solutions have allowed the implicit assumption of a unique coupling sign that violates the principle of causality. This leads to the space-time singularity theorems of Hawking and Penrose, who "refute" applications for microscopic phenomena and obstruct efforts to obtain a valid equation for the dynamic case. These errors also explain the mistakes in the press release of the 1993 Nobel Committee, which was unaware of the non-existence of dynamic solutions. To illustrate the damages to education, the MIT Open Course Phys. 8.033 is chosen. Rectification of errors confirms that E = mc2 is only conditionally valid, and leads to the discovery of the charge-mass interaction that is experimentally confirmed and subsequently the unification of gravitation and electromagnetism. The charge-mass interaction together with the unification predicts the weight reduction (instead of increment) of charged capacitors and heated metals, and helps to explain NASA's Pioneer anomaly and potentially other anomalies as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashcraft, C. Chace; Niederhaus, John Henry; Robinson, Allen C.
We present a verification and validation analysis of a coordinate-transformation-based numerical solution method for the two-dimensional axisymmetric magnetic diffusion equation, implemented in the finite-element simulation code ALEGRA. The transformation, suggested by Melissen and Simkin, yields an equation set perfectly suited for linear finite elements and for problems with large jumps in material conductivity near the axis. The verification analysis examines transient magnetic diffusion in a rod or wire in a very low conductivity background by first deriving an approximate analytic solution using perturbation theory. This approach for generating a reference solution is shown to be not fully satisfactory. A specialized approach for manufacturing an exact solution is then used to demonstrate second-order convergence under spatial refinement and temporal refinement. For this new implementation, a significant improvement relative to previously available formulations is observed. Benefits in accuracy for computed current density and Joule heating are also demonstrated. The validation analysis examines the circuit-driven explosion of a copper wire using resistive magnetohydrodynamics modeling, in comparison to experimental tests. The new implementation matches the accuracy of the existing formulation, with both formulations capturing the experimental burst time and action to within approximately 2%.
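The second-order convergence claim above is typically checked by computing the observed order from error norms on successive grid refinements. The error values below are hypothetical, not ALEGRA results.

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed order of convergence from error norms on two grids:
    p = log(e_coarse / e_fine) / log(r)."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# hypothetical error norms from a manufactured-solution study: halving the
# mesh spacing reduces the error by 4x, i.e. second-order convergence
p = observed_order(4.0e-3, 1.0e-3)
```

With a manufactured exact solution, the error norm on each grid is known exactly, which is what makes this observed-order test rigorous.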
NASA Astrophysics Data System (ADS)
Ferreira, E.; Alves, E.; Ferreira, R. M. L.
2012-04-01
Sediment deposition by continuous turbidity currents may affect eco-environmental river dynamics in natural reservoirs and hinder the maneuverability of bottom discharge gates in dam reservoirs. In recent years, innovative techniques have been proposed to enforce the deposition of turbidity further upstream in the reservoir (and away from the dam), namely, the use of solid and permeable obstacles such as water jet screens, geotextile screens, etc. The main objective of this study is to validate a computational fluid dynamics (CFD) code applied to the simulation of the interaction between a turbidity current and a passive retention system, designed to induce sediment deposition. To accomplish the proposed objective, laboratory tests were conducted where a simple obstacle configuration was subjected to the passage of currents with different initial sediment concentrations. The experimental data was used to build benchmark cases to validate the 3D CFD software ANSYS-CFX. Sensitivity tests of mesh design, turbulence models and discretization requirements were performed. The validation consisted of comparing experimental and numerical results, involving instantaneous and time-averaged sediment concentrations and velocities. In general, a good agreement between the numerical and the experimental values is achieved when: i) realistic outlet conditions are specified, ii) channel roughness is properly calibrated, iii) two-equation k-ɛ models are employed, iv) a fine mesh is employed near the bottom boundary. Acknowledgements: This study was funded by the Portuguese Foundation for Science and Technology through the project PTDC/ECM/099485/2008. The first author thanks Professor Moitinho de Almeida of ICIST for his assistance, and all members of the project and of the Fluvial Hydraulics group of CEHIDRO.
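The comparison of experimental and numerical time-averaged quantities can be summarized with a relative error norm. The concentration profiles below are hypothetical, not the study's measurements.

```python
import numpy as np

# Comparing a time-averaged numerical profile against experiment:
# a minimal sketch with hypothetical concentration profiles (g/L)
# sampled at the same heights above the bed.
c_exp = np.array([12.0, 9.5, 7.0, 4.8, 2.9, 1.1])  # measured (hypothetical)
c_num = np.array([11.4, 9.9, 7.3, 4.4, 3.1, 1.0])  # simulated (hypothetical)

# RMSE normalized by the mean measured concentration
rel_rmse = np.sqrt(np.mean((c_num - c_exp) ** 2)) / np.mean(c_exp)
```

Applying the same metric across mesh densities and turbulence models is one way to make the sensitivity tests mentioned above quantitative.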
McAuliff, Bradley D; Kovera, Margaret Bull; Nunez, Gabriel
2009-06-01
This study examined the ability of jury-eligible community members (N = 248) to detect internal validity threats in psychological science presented during a trial. Participants read a case summary in which an expert testified about a study that varied in internal validity (valid, missing control group, confound, and experimenter bias) and ecological validity (high, low). Ratings of expert evidence quality and expert credibility were higher for the valid versus missing control group versions only. Internal validity did not influence verdict or ratings of plaintiff credibility and no differences emerged as a function of ecological validity. Expert evidence quality, expert credibility, and plaintiff credibility were positively correlated with verdict. Implications for the scientific reasoning literature and for trials containing psychological science are discussed.
Transition path time distribution and the transition path free energy barrier.
Pollak, Eli
2016-10-19
The recent experimental measurement of the transition path time distributions of proteins presents several challenges to theory. Firstly, why do the fits of the experimental data to a theoretical expression lead to barrier heights which are much lower than the free energies of activation of the observed transitions? Secondly, there is the theoretical question of determining the transition path time distribution, without invoking the Smoluchowski limit. In this paper, we derive an exact expression for a transition path time distribution which is valid for arbitrary memory friction using the normal mode transformation which underlies Kramers' rate theory. We then recall that for low barriers, there is a noticeable difference between the transition path time distribution obtained with absorbing boundary conditions and free boundary conditions. For the former, the transition times are shorter, since recrossings of the boundaries are disallowed. As a result, if one uses the distribution based on absorbing boundary conditions to fit the experimental data, one will find that the transition path barrier will be larger than the values found based on a theory with free boundary conditions. We then introduce the paradigm of a transition path barrier height, and show that one should always expect it to be much smaller than the activation energy.
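The operational definition of a transition path time, the time between the last exit from one boundary and the first arrival at the other, can be illustrated with a simple memoryless Langevin simulation. This is a minimal sketch in a double-well potential with the parameters chosen for speed, not the arbitrary-memory-friction theory derived in the paper.

```python
import numpy as np

def transition_path_times(n_steps=400_000, dt=1e-3, h=2.0, a=0.8,
                          kT=1.0, gamma=1.0, seed=3):
    """Extract transition path times from a long overdamped Langevin
    trajectory in the double well U(x) = h*(x^2 - 1)^2: the time between
    the last visit to one boundary (+/- a) and the first arrival at the
    other. Memoryless friction only; a sketch, not the paper's theory."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal(n_steps)
    x, side, t_exit, times = -1.0, "left", 0.0, []
    for i in range(n_steps):
        force = -4.0 * h * x * (x * x - 1.0)   # -dU/dx
        x += force / gamma * dt + noise[i]
        t = (i + 1) * dt
        if x <= -a:
            if side == "right":
                times.append(t - t_exit)       # completed right -> left path
            side, t_exit = "left", t           # last touch of left boundary
        elif x >= a:
            if side == "left":
                times.append(t - t_exit)       # completed left -> right path
            side, t_exit = "right", t

    return times

times = transition_path_times()
```

Note the contrast the paper emphasizes: this free-boundary bookkeeping allows recrossings within the barrier region, whereas an absorbing-boundary calculation would terminate paths at the first boundary contact and yield shorter times.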
Manku, H K; Dhanoa, J K; Kaur, S; Arora, J S; Mukhopadhyay, C S
2017-10-01
MicroRNAs (miRNAs) are small (19-25 bases long), non-coding RNAs that regulate post-transcriptional gene expression by cleaving targeted mRNAs in several eukaryotes. The miRNAs play vital roles in multiple biological and metabolic processes, including developmental timing, signal transduction, cell maintenance and differentiation, diseases and cancers. Experimental identification of microRNAs is expensive and lab-intensive. Alternatively, computational approaches for predicting putative miRNAs from genomic or exomic sequences rely on features of miRNAs viz. secondary structures, sequence conservation, minimum free energy index (MFEI), etc. To date, not a single miRNA has been identified in the water buffalo (Bubalus bubalis), an economically important livestock species. The present study aims at predicting the putative miRNAs of buffalo using a comparative computational approach from buffalo whole genome shotgun sequencing data (INSDC: AWWX00000000.1). The sequences were blasted against the known mammalian miRNAs. The obtained miRNAs were then passed through a series of filtration criteria to obtain the set of predicted (putative and novel) bubaline miRNAs. Eight miRNAs were selected based on the lowest E-value and validated by real-time PCR (SYBR green chemistry) using RNU6 as the endogenous control. The results from different trials of real-time PCR show that, of the 8 selected miRNAs, only 2 (hsa-miR-1277-5p; bta-miR-2285b) are not expressed in bubaline PBMCs. Potential target genes were then predicted using miRanda based on sequence complementarity. This work is the first report on the prediction of bubaline miRNAs from whole genome sequencing data followed by experimental validation. The findings could pave the way for future studies of economically important traits in buffalo. Copyright © 2017 Elsevier Ltd. All rights reserved.
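Of the filtering features named above, the minimum free energy index (MFEI) is a simple derived quantity: the adjusted minimum folding energy (AMFE = MFE per 100 nucleotides) divided by the GC percentage. The hairpin sequence and folding energy below are hypothetical; in practice the MFE comes from a folding tool such as RNAfold.

```python
# Minimum free energy index (MFEI) used in miRNA candidate filtering:
# AMFE = (MFE / length) * 100, MFEI = AMFE / GC%.
def mfei(mfe_kcal_mol, sequence):
    seq = sequence.upper()
    gc_percent = 100.0 * (seq.count("G") + seq.count("C")) / len(seq)
    amfe = 100.0 * mfe_kcal_mol / len(seq)
    return amfe / gc_percent

# hypothetical pre-miRNA hairpin sequence and folding energy
value = mfei(-45.0, "UGAGGUAGUAGGUUGUAUAGUU" * 4)
```

Candidates with strongly negative MFEI are commonly retained as miRNA-like, since genuine pre-miRNA hairpins fold more stably per nucleotide than random sequence of the same GC content.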
NASA Astrophysics Data System (ADS)
Miki, Nobuhiko; Atarashi, Hiroyuki; Higuchi, Kenichi; Sawahashi, Mamoru; Nakagawa, Masao
This paper presents experimental evaluations of the effect of time diversity obtained by hybrid automatic repeat request (HARQ) with soft combining in space and path diversity schemes on orthogonal frequency division multiplexing (OFDM)-based packet radio access in a downlink broadband multipath fading channel. The effect of HARQ is analyzed through laboratory experiments employing fading simulators and field experiments conducted in downtown Yokosuka near Tokyo. After confirming the validity of experimental results based on numerical analysis of the time diversity gain in HARQ, we show by the experimental results that, for a fixed modulation and channel coding scheme (MCS), time diversity obtained by HARQ is effective in reducing the required received signal-to-interference plus noise power ratio (SINR) according to an increase in the number of transmissions, K, up to 10, even when the diversity effects are obtained through two-branch antenna diversity reception and path diversity using a number of multipaths greater than 12 observed in a real fading channel. Meanwhile, in combined use with the adaptive modulation and channel coding (AMC) scheme associated with space and path diversity, we clarify that the gain obtained by time diversity is almost saturated at the maximum number of transmissions in HARQ, K' = 4 in Chase combining and K' = 2 in Incremental redundancy, since the improvement in the residual packet error rate (PER) obtained through time diversity becomes small owing to the low PER in the initial packet transmission arising from appropriately selecting the optimum MCS in AMC. However, the experimental results elucidate that the time diversity in HARQ with soft combining associated with antenna diversity reception is effective in improving the throughput even in a broadband multipath channel with sufficient path diversity.
Validation of WIND for a Series of Inlet Flows
NASA Technical Reports Server (NTRS)
Slater, John W.; Abbott, John M.; Cavicchi, Richard H.
2002-01-01
Validation assessments compare WIND CFD simulations to experimental data for a series of inlet flows ranging in Mach number from low subsonic to hypersonic. The validation procedures follow the guidelines of the AIAA. The WIND code performs well in matching the available experimental data. The assessments demonstrate the use of WIND and provide confidence in its use for the analysis of aircraft inlets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clemens, Noel
This project was a combined computational and experimental effort to improve predictive capability for boundary layer flashback of premixed swirl flames relevant to gas-turbine power plants operating with high-hydrogen-content fuels. During the course of this project, significant progress in modeling was made on four major fronts: 1) use of direct numerical simulation of turbulent flames to understand the coupling between the flame and the turbulent boundary layer; 2) improved modeling capability for flame propagation in stratified pre-mixtures; 3) improved portability of computer codes using the OpenFOAM platform to facilitate transfer to industry and other researchers; and 4) application of LES to flashback in swirl combustors, and a detailed assessment of its capabilities and limitations for predictive purposes. A major component of the project was an experimental program that focused on developing a rich experimental database of boundary layer flashback in swirl flames. Both methane and high-hydrogen fuels, including effects of elevated pressure (1 to 5 atm), were explored. For this project, a new model swirl combustor was developed. Kilohertz-rate stereoscopic PIV and chemiluminescence imaging were used to investigate the flame propagation dynamics. In addition to the planar measurements, a technique capable of detecting the instantaneous, time-resolved 3D flame front topography was developed and applied successfully to investigate the flow-flame interaction. The UT measurements and legacy data were used in a hierarchical validation approach where flows with increasingly complex physics were used for validation. First, component models were validated with DNS and literature data in simplified configurations, and this was followed by validation with the UT 1-atm flashback cases, and then the UT high-pressure flashback cases. The new models and portable code represent a major improvement over what was available before this project was initiated.
A Validated Open-Source Multisolver Fourth-Generation Composite Femur Model.
MacLeod, Alisdair R; Rose, Hannah; Gill, Harinderjit S
2016-12-01
Synthetic biomechanical test specimens are frequently used for preclinical evaluation of implant performance, often in combination with numerical modeling, such as finite-element (FE) analysis. Commercial and freely available FE packages are widely used with three FE packages in particular gaining popularity: abaqus (Dassault Systèmes, Johnston, RI), ansys (ANSYS, Inc., Canonsburg, PA), and febio (University of Utah, Salt Lake City, UT). To the best of our knowledge, no study has yet made a comparison of these three commonly used solvers. Additionally, despite the femur being the most extensively studied bone in the body, no freely available validated model exists. The primary aim of the study was to compare mesh convergence and strain prediction between the three solvers (abaqus, ansys, and febio) and to provide validated open-source models of a fourth-generation composite femur for use with all three FE packages. Second, we evaluated the geometric variability around the femoral neck region of the composite femurs. Experimental testing was conducted using fourth-generation Sawbones® composite femurs instrumented with strain gauges at four locations. A generic FE model and four specimen-specific FE models were created from CT scans. The study found that the three solvers produced excellent agreement, with strain predictions being within an average of 3.0% for all solvers (r2 > 0.99) and 1.4% for the two commercial codes. The average of the root mean squared error against the experimental results was 134.5% (r2 = 0.29) for the generic model and 13.8% (r2 = 0.96) for the specimen-specific models. It was found that composite femurs had variations in cortical thickness around the neck of the femur of up to 48.4%. For the first time, an experimentally validated, finite-element model of the femur is presented for use in three solvers. This model is freely available online along with all the supporting validation data.
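The agreement metrics quoted above (average percentage error, RMSE against experiment, r²) are straightforward to reproduce. A short sketch follows; the microstrain values are hypothetical stand-ins, not the study's gauge readings:

```python
import math

def rmse(pred, meas):
    """Root mean squared error between predicted and measured values."""
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(pred, meas)) / len(pred))

def r_squared(pred, meas):
    """Coefficient of determination of predictions against measurements."""
    mean_m = sum(meas) / len(meas)
    ss_res = sum((m - p) ** 2 for p, m in zip(pred, meas))
    ss_tot = sum((m - mean_m) ** 2 for m in meas)
    return 1.0 - ss_res / ss_tot

# Hypothetical microstrain readings at the four gauge locations:
measured = [512.0, -388.0, 240.0, -150.0]
predicted = [498.0, -401.0, 251.0, -143.0]

model_rmse = rmse(predicted, measured)
model_r2 = r_squared(predicted, measured)
```

The same two numbers, computed per solver and per specimen, support the kind of generic-versus-specimen-specific comparison the abstract reports.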
Experimental Validation of a Thermoelastic Model for SMA Hybrid Composites
NASA Technical Reports Server (NTRS)
Turner, Travis L.
2001-01-01
This study presents results from experimental validation of a recently developed model for predicting the thermomechanical behavior of shape memory alloy hybrid composite (SMAHC) structures, composite structures with an embedded SMA constituent. The model captures the material nonlinearity of the material system with temperature and is capable of modeling constrained, restrained, or free recovery behavior from experimental measurement of fundamental engineering properties. A brief description of the model and analysis procedures is given, followed by an overview of a parallel effort to fabricate and characterize the material system of SMAHC specimens. Static and dynamic experimental configurations for the SMAHC specimens are described and experimental results for thermal post-buckling and random response are presented. Excellent agreement is achieved between the measured and predicted results, fully validating the theoretical model for constrained recovery behavior of SMAHC structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sileghem, L.; Wallner, T.; Verhelst, S.
As knock is one of the main factors limiting the efficiency of spark-ignition engines, the introduction of alcohol blends could help to mitigate knock concerns due to the elevated knock resistance of these blends. A model that can accurately predict their autoignition behavior would be of great value to engine designers. The current work aims to develop such a model for alcohol–gasoline blends. First, a mixing rule for the autoignition delay time of alcohol–gasoline blends is proposed. Subsequently, this mixing rule is used together with an autoignition delay time correlation of gasoline and an autoignition delay time correlation of methanol in a knock integral model that is implemented in a two-zone engine code. The predictive performance of the resulting model is validated through comparison against experimental measurements on a CFR engine for a range of gasoline–methanol blends. The knock limited spark advance, the knock intensity, the knock onset crank angle and the value of the knock integral at the experimental knock onset have been simulated and compared to the experimental values derived from in-cylinder pressure measurements.
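The knock-integral procedure described above can be sketched in a few lines. This is a hedged illustration: the Livengood-Wu integral and a mole-fraction-weighted logarithmic mixing rule are standard forms, but the correlation coefficients, the mixing-rule weighting, and the in-cylinder trace below are invented placeholders, not the paper's fitted correlations:

```python
import math

def tau_arrhenius(A, B, p, T, n=-1.7):
    """Generic ignition-delay correlation tau = A * p**n * exp(B/T), in ms.
    A, B, n are placeholder coefficients, not fitted values."""
    return A * p ** n * math.exp(B / T)

def tau_blend(x_meoh, tau_gasoline, tau_methanol):
    """Assumed mixing rule: mole-fraction-weighted logarithm of the delays."""
    ln_tau = x_meoh * math.log(tau_methanol) + (1.0 - x_meoh) * math.log(tau_gasoline)
    return math.exp(ln_tau)

def knock_integral(times_ms, pressures_bar, temps_K, x_meoh):
    """Livengood-Wu integral: knock is predicted when the running sum reaches 1."""
    ki = 0.0
    for i in range(1, len(times_ms)):
        dt = times_ms[i] - times_ms[i - 1]
        tg = tau_arrhenius(0.02, 3800.0, pressures_bar[i], temps_K[i])
        tm = tau_arrhenius(0.05, 4500.0, pressures_bar[i], temps_K[i])
        ki += dt / tau_blend(x_meoh, tg, tm)
        if ki >= 1.0:
            return ki, times_ms[i]     # knock onset time (ms)
    return ki, None                     # no knock within the trace

# Hypothetical end-gas trace sampled every 0.5 ms:
times = [0.0, 0.5, 1.0, 1.5, 2.0]
press = [20.0, 25.0, 31.0, 38.0, 35.0]        # bar
temps = [750.0, 800.0, 860.0, 920.0, 900.0]   # K
ki_gasoline, onset_gasoline = knock_integral(times, press, temps, x_meoh=0.0)
ki_blend, onset_blend = knock_integral(times, press, temps, x_meoh=0.5)
```

With the longer methanol delay times, the blend accumulates reactivity more slowly, mirroring the elevated knock resistance the abstract discusses.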
Mun, So Youn; Lee, Byoung Sook
2015-04-01
The purpose of this study was to develop an integrated internet addiction prevention program and test its effects on the self-regulation and internet addiction of elementary students who are at risk for internet addiction. A quasi-experimental study with a nonequivalent control group pretest-posttest design was used. Participants were assigned to the experimental group (n=28) or control group (n=28). Contents of the program developed in this study included provision of information about internet addiction, interventions for empowerment and methods of behavioral modification. A pre-test and two post-tests were done to identify the effects of the program and their continuity. Effects were tested using repeated-measures ANOVA, simple-effect analysis, and time contrasts. The self-regulation of the experimental group after the program was significantly higher than that of the control group. The score for internet addiction self-diagnosis and the internet use time in the experimental group were significantly lower than in the control group. The effects of the integrated internet addiction prevention program for preventing internet addiction in elementary students at risk for internet addiction were validated.
Lu, Xin; Soto, Marcelo A; Thévenaz, Luc
2017-07-10
A method based on coherent Rayleigh scattering that distinctly evaluates temperature and strain is proposed and experimentally demonstrated for distributed optical fiber sensing. Combining conventional phase-sensitive optical time-domain reflectometry (ϕOTDR) and ϕOTDR-based birefringence measurements, independent distributed temperature and strain profiles are obtained along a polarization-maintaining fiber. A theoretical analysis, supported by experimental data, indicates that the proposed system for temperature-strain discrimination is intrinsically better conditioned than an equivalent existing approach that combines classical Brillouin sensing with Brillouin dynamic gratings. This is due to the higher sensitivity of coherent Rayleigh scattering compared to Brillouin scattering, thus offering better performance and lower temperature-strain uncertainties in the discrimination. Compared to the Brillouin-based approach, the proposed ϕOTDR-based system requires access to only one fiber end and a much simpler experimental layout. Experimental results validate the full discrimination of temperature and strain along a 100 m-long elliptical-core polarization-maintaining fiber with measurement uncertainties of ~40 mK and ~0.5 με, respectively. These values agree very well with the theoretically expected measurand resolutions.
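The discrimination described above amounts to inverting a 2x2 sensitivity matrix that maps (temperature, strain) to the two measured spectral shifts; how well conditioned that matrix is governs the resulting uncertainties. A sketch with hypothetical coefficients (not the calibrated sensitivities from the paper):

```python
# 2x2 sensitivity matrix: rows = (phase-OTDR spectral shift, birefringence
# shift); columns = (temperature, strain). The coefficients are illustrative
# placeholders, not measured sensitivities.
M = [[-1.25, -0.15],    # GHz per K, GHz per microstrain (hypothetical)
     [0.05, 0.80]]

def solve_2x2(M, y):
    """Invert the measurement model y = M x for x = (dT, d_eps)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return ((M[1][1] * y[0] - M[0][1] * y[1]) / det,
            (-M[1][0] * y[0] + M[0][0] * y[1]) / det)

# Forward-simulate the shifts caused by dT = 0.5 K and d_eps = 2.0 microstrain,
# then recover the two measurands by inversion.
dT_true, de_true = 0.5, 2.0
y = (M[0][0] * dT_true + M[0][1] * de_true,
     M[1][0] * dT_true + M[1][1] * de_true)
dT, de = solve_2x2(M, y)
```

In practice measurement noise enters y, and a matrix with a small determinant (poor conditioning) amplifies that noise into the recovered temperature and strain, which is why the better-conditioned Rayleigh-based matrix yields lower uncertainties.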
NASA Astrophysics Data System (ADS)
Wang, Huijun; Qu, Zheng; Tang, Shaofei; Pang, Mingqi; Zhang, Mingju
2017-08-01
In this paper, the electromagnetic design and permanent magnet shape optimization of a permanent magnet synchronous generator with hybrid excitation are investigated. Based on the generator structure and operating principle, a design outline is presented for obtaining high efficiency and low voltage fluctuation. In order to realize rapid design, equivalent magnetic circuits for the permanent magnet and iron poles are developed. At the same time, finite element analysis is employed. Furthermore, by means of the design of experiments (DOE) method, the permanent magnet is optimized to reduce voltage waveform distortion. Finally, the proposed design methods are validated by analytical and experimental results.
Universal Effectiveness of Inducing Magnetic Moments in Graphene by Amino-Type sp3-Defects
Wu, Liting; Gao, Shengqing; Li, Ming; Wen, Jianfeng; Li, Xinyu; Liu, Fuchi
2018-01-01
Inducing magnetic moments in graphene is very important for its potential application in spintronics. Introducing sp3-defects on the graphene basal plane is deemed the most promising approach to produce magnetic graphene. However, its universal validity has not been well verified experimentally. By functionalization of approximately pure amino groups on the graphene basal plane, a spin-generation efficiency of ~1 μB/100 NH2 was obtained for the first time, thus providing substantial evidence for the validity of inducing magnetic moments by sp3-defects. Amino groups also provide another potential sp3-type candidate for preparing magnetic graphene. PMID:29673185
Fast Whole-Engine Stirling Analysis
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako
2005-01-01
An experimentally validated approach is described for fast axisymmetric Stirling engine simulations. These simulations include the entire displacer interior and demonstrate it is possible to model a complete engine cycle in less than an hour. The focus of this effort was to demonstrate it is possible to produce useful Stirling engine performance results in a time-frame short enough to impact design decisions. The combination of utilizing the latest 64-bit Opteron computer processors, fiber-optical Myrinet communications, dynamic meshing, and across zone partitioning has enabled solution times at least 240 times faster than previous attempts at simulating the axisymmetric Stirling engine. A comparison of the multidimensional results, calibrated one-dimensional results, and known experimental results is shown. This preliminary comparison demonstrates that axisymmetric simulations can be very accurate, but more work remains to improve the simulations through such means as modifying the thermal equilibrium regenerator models, adding fluid-structure interactions, including radiation effects, and incorporating mechanodynamics.
The assessment of health policy changes using the time-reversed crossover design.
Sollecito, W A; Gillings, D B
1986-01-01
The time-reversed crossover design is a quasi-experimental design which can be applied to evaluate the impact of a change in health policy on a large population. This design makes use of separate sampling and analysis strategies to improve the validity of conclusions drawn from such an evaluation. The properties of the time-reversed crossover design are presented including the use of stratification on outcome in the sampling stage, which is intended to improve external validity. It is demonstrated that, although this feature of the design introduces internal validity threats due to regression toward the mean in extreme-outcome strata, these effects can be measured and eliminated from the test of significance of treatment effects. Methods for within- and across-stratum estimation and hypothesis-testing are presented which are similar to those which have been developed for the traditional two-period crossover design widely used in clinical trials. The procedures are illustrated using data derived from a study conducted by the United Mine Workers of America Health and Retirement Funds to measure the impact of cost-sharing on health care utilization among members of its health plan. PMID:3081465
NASA Astrophysics Data System (ADS)
Sipkens, Timothy A.; Hadwin, Paul J.; Grauer, Samuel J.; Daun, Kyle J.
2018-03-01
Competing theories have been proposed to account for how the latent heat of vaporization of liquid iron varies with temperature, but experimental confirmation remains elusive, particularly at high temperatures. We propose time-resolved laser-induced incandescence measurements on iron nanoparticles combined with Bayesian model plausibility, as a novel method for evaluating these relationships. Our approach scores the explanatory power of candidate models, accounting for parameter uncertainty, model complexity, measurement noise, and goodness-of-fit. The approach is first validated with simulated data and then applied to experimental data for iron nanoparticles in argon. Our results justify the use of Román's equation to account for the temperature dependence of the latent heat of vaporization of liquid iron.
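The model-plausibility idea described above can be illustrated with a simpler proxy. The paper scores full Bayesian plausibility (accounting for parameter uncertainty and noise); the Bayesian information criterion below captures the same trade-off between goodness-of-fit and model complexity, and the residual values are invented for illustration:

```python
import math

def bic(rss, n_data, k_params):
    """Bayesian information criterion for Gaussian residuals:
    BIC = n * ln(RSS / n) + k * ln(n). Lower means more plausible."""
    return n_data * math.log(rss / n_data) + k_params * math.log(n_data)

# Two hypothetical latent-heat models fitted to the same simulated
# incandescence trace: model B fits marginally better but spends an
# extra free parameter doing so.
n = 200
bic_A = bic(rss=4.1, n_data=n, k_params=2)
bic_B = bic(rss=4.0, n_data=n, k_params=3)
preferred = "A" if bic_A < bic_B else "B"
```

Here the extra parameter's complexity penalty outweighs the small gain in fit, so the simpler model is preferred; this is the same trade-off the full Bayesian plausibility analysis formalizes.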
Protocols for efficient simulations of long-time protein dynamics using coarse-grained CABS model.
Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian
2014-01-01
Coarse-grained (CG) modeling is a well-acknowledged simulation approach for getting insight into long-time scale protein folding events at reasonable computational cost. Depending on the design of a CG model, the simulation protocols vary from highly case-specific (requiring user-defined assumptions about the folding scenario) to more sophisticated blind prediction methods for which only a protein sequence is required. Here we describe the framework protocol for the simulations of long-term dynamics of globular proteins, with the use of the CABS CG protein model and sequence data. The simulations can start from a random or a selected (e.g., native) structure. The described protocol has been validated using experimental data for protein folding model systems; the prediction results agreed well with the experimental results.
Gitifar, Vahid; Eslamloueyan, Reza; Sarshar, Mohammad
2013-11-01
In this study, pretreatment of sugarcane bagasse and subsequent enzymatic hydrolysis is investigated using two categories of pretreatment methods: dilute acid (DA) pretreatment and the combined DA-ozonolysis (DAO) method. Both methods are accomplished at different solid ratios, sulfuric acid concentrations, autoclave residence times, bagasse moisture content, and ozonolysis time. The results show that the DAO pretreatment can significantly increase the production of glucose compared to the DA method. Applying the k-fold cross-validation method, two optimal artificial neural networks (ANNs) are trained to estimate glucose concentrations for the DA and DAO pretreatment methods. Comparing the modeling results with experimental data indicates that the proposed ANNs have good estimation abilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
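The k-fold cross-validation procedure used above to select the ANNs can be sketched generically. The toy "model" below is a least-squares line rather than a neural network, and the data are invented, so this shows only the fold-splitting and held-out scoring logic, not the paper's models:

```python
def kfold_indices(n_samples, k):
    """Split sample indices into k contiguous folds (shuffling beforehand
    is usual; omitted here for brevity)."""
    fold_size, rem = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        stop = start + fold_size + (1 if i < rem else 0)
        folds.append(list(range(start, stop)))
        start = stop
    return folds

def cross_validate(x, y, fit, predict, k=5):
    """Mean squared validation error averaged over the k held-out folds."""
    errors = []
    for held_out in kfold_indices(len(x), k):
        train = [i for i in range(len(x)) if i not in held_out]
        model = fit([x[i] for i in train], [y[i] for i in train])
        for i in held_out:
            errors.append((predict(model, x[i]) - y[i]) ** 2)
    return sum(errors) / len(errors)

# Toy stand-in for the ANN: a least-squares line, e.g. acid conc. -> glucose.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((a - mx) * (c - my) for a, c in zip(xs, ys)) / \
        sum((a - mx) ** 2 for a in xs)
    return (my - b * mx, b)

def predict_line(model, xv):
    a, b = model
    return a + b * xv

x = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]   # e.g. acid conc. (%)
y = [2.0 * v + 1.0 for v in x]                            # noiseless toy response
cv_mse = cross_validate(x, y, fit_line, predict_line, k=5)
```

Comparing this held-out error across candidate architectures is what lets the study call the chosen ANNs "optimal" without overfitting to the training runs.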
Grassi, Angela; Di Camillo, Barbara; Ciccarese, Francesco; Agnusdei, Valentina; Zanovello, Paola; Amadori, Alberto; Finesso, Lorenzo; Indraccolo, Stefano; Toffolo, Gianna Maria
2016-03-12
Inference of gene regulation from expression data may help to unravel regulatory mechanisms involved in complex diseases or in the action of specific drugs. A challenging task for many researchers working in the field of systems biology is to build up an experiment with a limited budget and produce a dataset suitable to reconstruct putative regulatory modules worthy of biological validation. Here, we focus on small-scale gene expression screens and we introduce a novel experimental set-up and a customized method of analysis to infer regulatory modules from genetic perturbation data, e.g. knockdown and overexpression data. To illustrate the utility of our strategy, it was applied to produce and analyze a dataset of quantitative real-time RT-PCR data, in which interferon-α (IFN-α) transcriptional response in endothelial cells is investigated by RNA silencing of two candidate IFN-α modulators, STAT1 and IFIH1. A putative regulatory module was reconstructed by our method, revealing an intriguing feed-forward loop, in which STAT1 regulates IFIH1 and they both negatively regulate IFNAR1. STAT1 regulation of IFNAR1 was the object of experimental validation at the protein level. Detailed description of the experimental set-up and of the analysis procedure is reported, with the intent to be of inspiration for other scientists who want to realize similar experiments to reconstruct gene regulatory modules starting from perturbations of possible regulators. Application of our approach to the study of IFN-α transcriptional response modulators in endothelial cells has led to many interesting novel findings and new biological hypotheses worthy of validation.
MO-AB-BRA-02: A Novel Scatter Imaging Modality for Real-Time Image Guidance During Lung SBRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redler, G; Bernard, D; Templeton, A
2015-06-15
Purpose: A novel scatter imaging modality is developed and its feasibility for image-guided radiation therapy (IGRT) during stereotactic body radiation therapy (SBRT) for lung cancer patients is assessed using analytic and Monte Carlo models as well as experimental testing. Methods: During treatment, incident radiation interacts and scatters from within the patient. The presented methodology forms an image of patient anatomy from the scattered radiation for real-time localization of the treatment target. A radiographic flat panel-based pinhole camera provides spatial information regarding the origin of detected scattered radiation. An analytical model is developed, which provides a mathematical formalism for describing the scatter imaging system. Experimental scatter images are acquired by irradiating an object using a Varian TrueBeam accelerator. The differentiation between tissue types is investigated by imaging simple objects of known compositions (water, lung, and cortical bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is fabricated and imaged to investigate image quality for various quantities of delivered radiation. Monte Carlo N-Particle (MCNP) code is used for validation and testing by simulating scatter image formation using the experimental pinhole camera setup. Results: Analytical calculations, MCNP simulations, and experimental results when imaging the water, lung, and cortical bone equivalent objects show close agreement, thus validating the proposed models and demonstrating that scatter imaging differentiates these materials well. Lung tumor phantom images have sufficient contrast-to-noise ratio (CNR) to clearly distinguish tumor from surrounding lung tissue. CNR=4.1 and CNR=29.1 for 10MU and 5000MU images (equivalent to 0.5 and 250 second images), respectively. Conclusion: Lung SBRT provides favorable treatment outcomes, but depends on accurate target localization. A comprehensive approach, employing multiple simulation techniques and experiments, is taken to demonstrate the feasibility of a novel scatter imaging modality for the necessary real-time image guidance.
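The CNR figures quoted in the abstract follow from a standard contrast-to-noise definition. A sketch with hypothetical pixel values; the particular CNR form and the ROI values below are assumptions, since the abstract does not state its exact definition:

```python
import math

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio, here |mean_s - mean_b| / sigma_b.
    CNR definitions vary; this common form is an assumption."""
    ms = sum(signal_roi) / len(signal_roi)
    mb = sum(background_roi) / len(background_roi)
    var_b = sum((v - mb) ** 2 for v in background_roi) / len(background_roi)
    return abs(ms - mb) / math.sqrt(var_b)

# Hypothetical pixel values from a tumor ROI and the surrounding lung region:
tumor = [105.0, 98.0, 110.0, 101.0]
lung = [52.0, 49.0, 55.0, 44.0]
tumor_cnr = cnr(tumor, lung)
```

Longer exposures (more MU) reduce the noise term in the denominator, which is why the 5000 MU image reaches a much higher CNR than the 10 MU image.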
Åsberg, Dennis; Chutkowski, Marcin; Leśko, Marek; Samuelsson, Jörgen; Kaczmarski, Krzysztof; Fornstedt, Torgny
2017-01-06
Large pressure gradients are generated in ultra-high-pressure liquid chromatography (UHPLC) using sub-2 μm particles, causing significant temperature gradients over the column due to viscous heating. These pressure and temperature gradients affect retention and ultimately result in important selectivity shifts. In this study, we developed an approach for predicting the retention time shifts due to these gradients. The approach is presented as a step-by-step procedure and it is based on empirical linear relationships describing how retention varies as a function of temperature and pressure and how the average column temperature increases with the flow rate. It requires only four experiments on standard equipment, is based on straightforward calculations, and is therefore easy to use in method development. The approach was rigorously validated against experimental data obtained with a quality control method for the active pharmaceutical ingredient omeprazole. The accuracy of retention time predictions was very good with relative errors always less than 1% and in many cases around 0.5% (n=32). Selectivity shifts observed between omeprazole and the related impurities when changing the flow rate could also be accurately predicted, resulting in good estimates of the resolution between critical peak pairs. The approximations on which the presented approach is based were all justified. The retention factor as a function of pressure and temperature was studied in an experimental design while the temperature distribution in the column was obtained by solving the fundamental heat and mass balance equations for the different experimental conditions. We strongly believe that this approach is sufficiently accurate and experimentally feasible for this separation to be a valuable tool when developing a UHPLC method. After further validation with other separation systems, it could become a useful approach in UHPLC method development, especially in the pharmaceutical industry where demands are high for robustness and regulatory oversight. Copyright © 2016 Elsevier B.V. All rights reserved.
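The core of the prediction scheme described above, empirical linear dependence of ln k on pressure and temperature offsets, can be sketched compactly. The slope values, offsets, and hold-up times below are illustrative placeholders, not the coefficients fitted in the study:

```python
import math

def retention_factor(lnk_ref, dP_bar, dT_K, beta_P=0.0015, beta_T=-0.02):
    """ln k assumed linear in the pressure and temperature offsets from a
    reference condition; the slopes here are illustrative, not fitted."""
    return math.exp(lnk_ref + beta_P * dP_bar + beta_T * dT_K)

def retention_time(k, t0_min):
    """Retention time from the retention factor and column hold-up time."""
    return t0_min * (1.0 + k)

# Hypothetical scenario: doubling the flow rate raises the average column
# pressure by 400 bar and, through viscous heating, the average column
# temperature by 6 K, while halving the hold-up time.
k_ref = retention_factor(1.0, 0.0, 0.0)
k_new = retention_factor(1.0, 400.0, 6.0)
t_ref = retention_time(k_ref, t0_min=0.50)
t_new = retention_time(k_new, t0_min=0.25)
```

Because pressure and temperature push retention in opposite directions and with compound-specific slopes, predicting both terms per analyte is what allows the approach to anticipate the selectivity shifts between omeprazole and its impurities.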
Cardoso, Joana; Mesquita, Marta; Dias Pereira, António; Bettencourt-Dias, Mónica; Chaves, Paula; Pereira-Leal, José B
2016-01-01
Barrett's esophagus is the major risk factor for esophageal adenocarcinoma. It has a low but non-neglectable risk, high surveillance costs and no reliable risk stratification markers. We sought to identify early biomarkers, predictive of Barrett's malignant progression, using a meta-analysis approach on gene expression data. This in silico strategy was followed by experimental validation in a cohort of patients with extended follow up from the Instituto Português de Oncologia de Lisboa de Francisco Gentil EPE (Portugal). Bioinformatics and systems biology approaches singled out two candidate predictive markers for Barrett's progression, CYR61 and TAZ. Although previously implicated in other malignancies and in epithelial-to-mesenchymal transition phenotypes, our experimental validation shows for the first time that CYR61 and TAZ have the potential to be predictive biomarkers for cancer progression. Experimental validation by reverse transcriptase quantitative PCR and immunohistochemistry confirmed the up-regulation of both genes in Barrett's samples associated with high-grade dysplasia/adenocarcinoma. In our cohort CYR61 and TAZ up-regulation ranged from one to ten years prior to progression to adenocarcinoma in Barrett's esophagus index samples. Finally, we found that CYR61 and TAZ over-expression is correlated with early focal signs of epithelial to mesenchymal transition. Our results highlight both CYR61 and TAZ genes as potential predictive biomarkers for stratification of the risk for development of adenocarcinoma and suggest a potential mechanistic route for Barrett's esophagus neoplastic progression.
NASA Technical Reports Server (NTRS)
Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David
2015-01-01
The mass properties of an aerospace vehicle are required by multiple disciplines in the analysis and prediction of flight behavior. Pendulum oscillation methods have been developed and employed for almost a century as a means to measure mass properties. However, these oscillation methods are costly, time consuming, and risky. The NASA Armstrong Flight Research Center has been investigating the Dynamic Inertia Measurement (DIM) method as a possible alternative to oscillation methods. The DIM method uses ground test techniques that are already applied to aerospace vehicles when conducting modal surveys. Ground vibration tests would require minimal additional instrumentation and time to apply the DIM method. The DIM method has been validated on smaller test articles, but has not yet been fully proven on large aerospace vehicles.
Are atmospheric surface layer flows ergodic?
NASA Astrophysics Data System (ADS)
Higgins, Chad W.; Katul, Gabriel G.; Froidevaux, Martin; Simeonov, Valentin; Parlange, Marc B.
2013-06-01
The transposition of atmospheric turbulence statistics from the time domain, as conventionally sampled in field experiments, to the ensemble domain is explained by the so-called ergodic hypothesis. In micrometeorology, this hypothesis assumes that the time average of a measured flow variable represents an ensemble of independent realizations from similar meteorological states and boundary conditions. That is, the averaging duration must be sufficiently long to include a large number of independent realizations of the sampled flow variable so as to represent the ensemble. While the validity of the ergodic hypothesis for turbulence has been confirmed in laboratory experiments and numerical simulations for idealized conditions, evidence for its validity in the atmospheric surface layer (ASL), especially for nonideal conditions, continues to defy experimental efforts. There is some urgency to make progress on this problem given the proliferation of tall-tower scalar concentration networks that aim to constrain climate models yet are impacted by nonideal conditions at the land surface. Recent advancements in water vapor concentration lidar measurements that simultaneously sample spatial and temporal series in the ASL are used to investigate the validity of the ergodic hypothesis for the first time. It is shown that ergodicity is valid in a strict sense above uniform surfaces away from abrupt surface transitions. Surprisingly, ergodicity may be used to infer the ensemble concentration statistics of a composite grass-lake system using only water vapor concentration measurements collected above the sharp transition delineating the lake from the grass surface.
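The ergodicity test described above boils down to comparing a long time average at one point with an instantaneous average across many points. A synthetic sketch follows, with Gaussian noise standing in for a stationary water vapor concentration field; the lidar geometry and real ASL statistics are not modeled:

```python
import random

def time_average(series):
    """Mean over a long record from a single point (one 'sensor')."""
    return sum(series) / len(series)

def ensemble_average(realizations, t):
    """Mean across independent realizations at one fixed time index."""
    return sum(r[t] for r in realizations) / len(realizations)

# Synthetic stationary "concentration" records standing in for the lidar
# transect: 20 statistically identical positions, 5000 samples each.
random.seed(1)
mean_c = 10.0
realizations = [[mean_c + random.gauss(0.0, 1.0) for _ in range(5000)]
                for _ in range(20)]

t_avg = time_average(realizations[0])        # one position, long record
e_avg = ensemble_average(realizations, t=0)  # many positions, one instant
# Under the ergodic hypothesis both estimates converge on the same mean.
```

Above a heterogeneous surface (e.g., the grass-lake transition), the spatial records would no longer be statistically identical, and the two averages would diverge, which is exactly the diagnostic the lidar measurements enable.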
Composition of web services using Markov decision processes and dynamic programming.
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services and where a valid composition requiring the selection of 1,000 services from the available set can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one or two orders of magnitude and more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
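The value iteration algorithm named above is standard dynamic programming. A compact sketch on a toy composition chain follows; the 3-state chain, the two candidate services per stage, and the QoS rewards are illustrative assumptions, not the paper's MDP formulation:

```python
def value_iteration(n_states, actions, transition, reward, gamma=0.95, tol=1e-8):
    """Generic value iteration; transition(s, a) -> list of (prob, next_state).
    Returns the converged state-value function V."""
    V = [0.0] * n_states
    while True:
        delta = 0.0
        for s in range(n_states):
            best = max(
                reward(s, a) + gamma * sum(p * V[s2] for p, s2 in transition(s, a))
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Tiny hypothetical composition chain: at each of two stages, pick one of
# two candidate services; "quality" plays the role of the QoS reward.
def actions(s):
    return [0, 1]

def transition(s, a):
    return [(1.0, min(s + 1, 2))]   # deterministic progress; state 2 absorbing

def reward(s, a):
    if s == 2:
        return 0.0                   # terminal: composition complete
    return [0.7, 0.9][a]             # QoS of the chosen candidate service

V = value_iteration(3, actions, transition, reward)
```

The optimal policy falls out by taking, in each state, the action that attains the maximum; policy iteration reaches the same fixed point, typically in fewer sweeps, consistent with the comparison reported in the abstract.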
Overview of HIT-SI3 experiment: Simulations, Diagnostics, and Summary of Current Results
NASA Astrophysics Data System (ADS)
Penna, James; Jarboe, Thomas; Nelson, Brian; Hossack, Aaron; Sutherland, Derek; Morgan, Kyle; Hansen, Chris; Benedett, Thomas; Everson, Chris; Victor, Brian
2016-10-01
The Helicity Injected Torus - Steady Inductive 3 (HIT-SI3) experiment forms and maintains spheromaks via Steady Inductive Helicity Injection (SIHI). Three injector units allow for continuous injection of helicity into a copper flux conserver in order to sustain a spheromak. Firing the injectors with a phase difference imparts a finite rotation to the plasma, providing a stabilizing effect. Simulations in the MHD code NIMROD and the fluid-model code PSI-TET provide validation and a basis for interpretation of the observed experimental data. Thomson Scattering (TS) and Far Infrared (FIR) interferometer systems allow temperature and line-averaged density measurements to be taken. An Ion Doppler Spectroscopy (IDS) system allows measurement of the plasma rotation and velocity. HIT-SI3 data have been used for validation of IDCD predictions, in particular the projected impedance of the helicity injectors according to the theory. The experimental impedances have been calculated here for the first time for different HIT-SI3 regimes. Such experimental evidence will contribute to the design of future experiments employing IDCD as a current-drive mechanism. Work supported by the D.O.E., Office of Science, Office of Fusion Science.
Sensitivity analysis of cool-down strategies for a transonic cryogenic tunnel
NASA Technical Reports Server (NTRS)
Thibodeaux, J. J.
1982-01-01
Guidelines and suggestions substantiated by real-time simulation data to ensure optimum time and energy use of injected liquid nitrogen for cooling the Langley 0.3-Meter Transonic Cryogenic Tunnel (TCT) are presented. It is directed toward enabling operators and researchers to become cognizant of criteria for using the 0.3-m TCT in an energy- or time-efficient manner. Operational recommendations were developed based on information collected from a validated simulator of the 0.3-m TCT and experimental data from the tunnel. Results and trends, however, can be extrapolated to other similarly constructed cryogenic wind tunnels.
Calculation of the time resolution of the J-PET tomograph using kernel density estimation
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
2017-06-01
In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate a signal-processing method based on the Tikhonov regularization framework and the kernel density estimation method, and we obtain simple, closed-form analytical formulae for the time resolution. The proposed method is validated using signals registered by means of a single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
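The kernel-density-estimation step can be sketched on synthetic timing residuals (toy Gaussian data, not the J-PET detector signals; Silverman's rule is used for the bandwidth):

```python
import numpy as np

# Gaussian KDE applied to synthetic registration-time residuals, in the
# spirit of estimating a detector's time resolution from data.
rng = np.random.default_rng(2)
residuals = rng.normal(0.0, 0.1, size=2_000)   # ns, synthetic

def kde(x_grid, samples):
    """Gaussian kernel density estimate with Silverman's bandwidth."""
    n = samples.size
    h = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)
    u = (x_grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

grid = np.linspace(-0.5, 0.5, 1001)
density = kde(grid, residuals)

sigma_est = residuals.std(ddof=1)
fwhm = 2.355 * sigma_est        # Gaussian FWHM as a resolution figure
print(round(fwhm, 3))           # time-resolution estimate in ns
```

The smooth KDE curve, rather than a histogram, is what allows closed-form manipulation of the estimated density.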
NASA Astrophysics Data System (ADS)
Atobe, Satoshi; Nonami, Shunsuke; Hu, Ning; Fukunaga, Hisao
2017-09-01
Foreign object impact events are serious threats to composite laminates because impact damage leads to significant degradation of the mechanical properties of the structure. Identification of the location and force history of the impact that was applied to the structure can provide useful information for assessing the structural integrity. This study proposes a method for identifying impact forces acting on CFRP (carbon fiber reinforced plastic) laminated plates on the basis of the sound radiated from the impacted structure. Identification of the impact location and force history is performed using the sound pressure measured with microphones. To devise a method for identifying the impact location from the difference in the arrival times of the sound wave detected with the microphones, the propagation path of the sound wave from the impacted point to the sensor is examined. For the identification of the force history, an experimentally constructed transfer matrix is employed to relate the force history to the corresponding sound pressure. To verify the validity of the proposed method, impact tests are conducted by using a CFRP cross-ply laminate as the specimen, and an impulse hammer as the impactor. The experimental results confirm the validity of the present method for identifying the impact location from the arrival time of the sound wave detected with the microphones. Moreover, the results of force history identification show the feasibility of identifying the force history accurately from the measured sound pressure using the experimental transfer matrix.
Geometric Limitations Of Ultrasonic Measurements
NASA Astrophysics Data System (ADS)
von Nicolai, C.; Schilling, F.
2006-12-01
Laboratory experiments are a key for interpreting seismic field observations. Owing to their applicability in many experimental set-ups, ultrasonic measurements are commonly used in the geosciences to determine the elastic properties of minerals and rocks. The quality, and thus the usefulness, of ultrasonic data strongly depends on the sample geometry and the wavelength of the sound wave. Two factors, the diameter-to-wavelength ratio and the diameter-to-length ratio, are believed to be the essential parameters affecting ultrasonic signal quality. In this study, we determined the restricting dimensional parameters under well-defined conditions to test the validity of published assumptions. Using commercial ultrasonic transducers, a number of experiments were conducted on aluminium, alumina, and acrylic glass rods of constant length and varying diameter (30 to 10 mm). At each diameter, compressional wave travel times were measured by the pulse-transmission method, and ultrasonic wave velocities were calculated from the observed travel times. One additional experiment was performed with a series of square-shaped aluminium blocks in order to investigate the effect of the geometry of the sample's cross-sectional area. The experimental results show that the simple diameter-to-wavelength ratios are not valid even under idealized experimental conditions, and a more complex relation has to be taken into account. As the diameter decreases, the direct P-wave phase is increasingly interfered with and weakened by sidewall reflections. At very small diameters, compressional waves are replaced by bar waves and P-wave signals become unresolvable. Considering the suppression of both effects, a critical D/λ ratio was determined and compared to experimental set-ups from various publications. These tests indicate that some published and cited data derived from small-diameter set-ups are outside the range of physical possibility.
A Linear Electromagnetic Piston Pump
NASA Astrophysics Data System (ADS)
Hogan, Paul H.
Advancements in mobile hydraulics for human-scale applications have increased demand for a compact hydraulic power supply. Conventional designs couple a rotating electric motor to a hydraulic pump, which increases the package volume and requires several energy conversions. This thesis investigates the use of a free piston as the moving element in a linear motor to eliminate multiple energy conversions and decrease the overall package volume. A coupled model used a quasi-static magnetic equivalent circuit to calculate the motor inductance and the electromagnetic force acting on the piston. The force was an input to a time-domain model to evaluate the mechanical and pressure dynamics. The magnetic circuit model was validated with finite element analysis and an experimental prototype linear motor. The coupled model was optimized using a multi-objective genetic algorithm to explore the parameter space and maximize power density and efficiency. An experimental prototype linear pump coupled pistons to an off-the-shelf linear motor to validate the mechanical and pressure dynamics models. The magnetic circuit force calculation agreed within 3% of finite element analysis, and within 8% of experimental data from the unoptimized prototype linear motor. The optimized motor geometry also showed good agreement with FEA; at zero piston displacement, the magnetic circuit calculates the optimized motor force within 10% of FEA in less than 1/1000 the computational time, which makes it well suited to genetic optimization algorithms. The mechanical model agrees very well with the experimental piston pump position data when tuned for additional unmodeled mechanical friction. Optimized results suggest that a 400% improvement over state-of-the-art power density is attainable with up to 85% net efficiency. This demonstrates that a linear electromagnetic piston pump has the potential to serve as a more compact and efficient supply of fluid power at the human scale.
Prevention of Blast-Related Injuries
2013-07-01
A Hybrid Model for Multiscale Laser Plasma Simulations with Detailed Collisional Physics
2017-06-15
Validation against experimental data •Nonequilibrium radiation transport: coupling with a collisional-radiative model •Inelastic collisions in a MF...for Public Release; Distribution is Unlimited. PA# 17383 Collisional Radiative (CR) Overview Updates • Investigated Quasi-Steady-State • Investigated...Techniques Quasi-Steady-State (QSS) • Assumes fast kinetics between states within an ion distribution • Assumes longer diffusion/decay times than
Performance Characteristics of Compact Mobile LIFS (Laser-Induced Fluorescence Spectrum) Lidar
NASA Astrophysics Data System (ADS)
Tomida, Takayuki; Nishizawa, Naoto; Sakurai, Kosuke; Suganumata, Hikaru; Tsukada, Shodai; Song, Sung-Moo; Park, Ho-Dong; Saito, Yasunori
2016-06-01
We developed a compact but versatile laser-induced fluorescence spectrum (LIFS) lidar that has potential use for material or aerosol identification outside experimental rooms. The compactness and mobility of the LIFS lidar mean that observations can be conducted more freely at any place and any time. Its performance characteristics were validated by three-dimensional fluorescence imaging of targets and remote detection of quasi bio/organic aerosols.
Psychophysiology of Delayed Extinction and Reconsolidation in Humans
2013-02-01
to modify or block it. The aim of this project is to create an experimental assay in the form of an optimal Pavlovian differential fear-conditioning ...group. Data from the pharmacological group demonstrate that participants show differential conditioning learning on Day 1, supporting the validity of...our modified fear-conditioning paradigm. Results suggest that propranolol administration at the time of memory reactivation does not decrease the fear
Development of Additional Hazard Assessment Models
1977-03-01
globules, their trajectory (the distance from the spill point to the impact point on the river bed), and the time required for sinking. Established theories ...chemicals, the dissolution rate is estimated by using eddy diffusivity surface renewal theories . The validity of predictions of these theories has been... theories and experimental data on aeration of rivers. * Describe dispersion in rivers with stationary area source and sources moving with the stream
Design and Experimental Validation for Direct-Drive Fault-Tolerant Permanent-Magnet Vernier Machines
Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian
2014-01-01
A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines can achieve high torque density by introducing flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristics of PMV machines and provides a design method that is able not only to meet the fault-tolerant requirements but also to retain the ability of high torque density. The operation principle of the proposed machine is analyzed. The design process and optimization are presented in detail, including the combination of slots and poles, the winding distribution, and the dimensions of the PMs and teeth. The machine performance is evaluated using the time-stepping finite element method (TS-FEM). Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis. PMID:25045729
NASA Astrophysics Data System (ADS)
Guillemaut, C.; Metzger, C.; Moulton, D.; Heinola, K.; O’Mullane, M.; Balboa, I.; Boom, J.; Matthews, G. F.; Silburn, S.; Solano, E. R.; contributors, JET
2018-06-01
The design and operation of future fusion devices relying on H-mode plasmas requires reliable modelling of edge-localized modes (ELMs) for precise prediction of divertor target conditions. An extensive experimental validation of simple analytical predictions of the time evolution of target plasma loads during ELMs has been carried out here in more than 70 JET-ITER-like wall H-mode experiments with a wide range of conditions. Comparisons of these analytical predictions with diagnostic measurements of target ion flux density, power density, impact energy and electron temperature during ELMs are presented in this paper and show excellent agreement. The analytical predictions tested here are made with the ‘free-streaming’ kinetic model (FSM) which describes ELMs as a quasi-neutral plasma bunch expanding along the magnetic field lines into the Scrape-Off Layer without collisions. Consequences of the FSM on energy reflection and deposition on divertor targets during ELMs are also discussed.
Quasi-experimental study designs series-paper 6: risk of bias assessment.
Waddington, Hugh; Aloe, Ariel M; Becker, Betsy Jane; Djimeu, Eric W; Hombrados, Jorge Garcia; Tugwell, Peter; Wells, George; Reeves, Barney
2017-09-01
Rigorous and transparent bias assessment is a core component of high-quality systematic reviews. We assess modifications to existing risk of bias approaches to incorporate rigorous quasi-experimental approaches with selection on unobservables. These are nonrandomized studies using design-based approaches to control for unobservable sources of confounding such as difference studies, instrumental variables, interrupted time series, natural experiments, and regression-discontinuity designs. We review existing risk of bias tools. Drawing on these tools, we present domains of bias and suggest directions for evaluation questions. The review suggests that existing risk of bias tools provide, to different degrees, incomplete transparent criteria to assess the validity of these designs. The paper then presents an approach to evaluating the internal validity of quasi-experiments with selection on unobservables. We conclude that tools for nonrandomized studies of interventions need to be further developed to incorporate evaluation questions for quasi-experiments with selection on unobservables. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Spring, Samuel D.
2006-01-01
This report documents the results of an experimental program conducted on two advanced metallic alloy systems (Rene' 142 directionally solidified (DS) alloy and Rene' N6 single crystal alloy) and the characterization of two distinct internal state variable inelastic constitutive models. The long-term objective of the study was to develop a computational life prediction methodology that can integrate the obtained material data. A specialized test matrix for characterizing advanced unified viscoplastic models was specified and conducted. This matrix included strain-controlled tensile tests with intermittent relaxation tests with 2 hr hold times, constant stress creep tests, stepped creep tests, mixed creep and plasticity tests, cyclic temperature creep tests, and tests in which temperature overloads were present to simulate actual operating conditions for validation of the models. The selected internal state variable models were shown to be capable of representing the material behavior exhibited by the experimental results; however, the program ended prior to final validation of the models.
A Computational Investigation of Gear Windage
NASA Technical Reports Server (NTRS)
Hill, Matthew J.; Kunz, Robert F.
2012-01-01
A CFD method has been developed for application to gear windage aerodynamics. The goals of this research are to develop and validate numerical and modeling approaches for these systems, to develop physical understanding of the aerodynamics of gear windage loss, including the physics of loss mitigation strategies, and to propose and evaluate new approaches for minimizing loss. Absolute and relative frame CFD simulation, overset gridding, multiphase flow analysis, and sub-layer resolved turbulence modeling were brought to bear in achieving these goals. Several spur gear geometries were studied for which experimental data are available. Various shrouding configurations and free-spinning (no shroud) cases were studied. Comparisons are made with experimental data from the open literature and with data recently obtained in the NASA Glenn Research Center Gear Windage Test Facility. The results show good agreement with experiment. Interrogation of the validative and exploratory CFD results has led, for the first time, to a detailed understanding of the physical mechanisms of gear windage loss, and has led to newly proposed mitigation strategies whose effectiveness is computationally explored.
NASA Astrophysics Data System (ADS)
Zhu, C.; Rimstidt, J. D.; Liu, Z.; Yuan, H.
2016-12-01
The principle of detailed balance (PDB) has long been a cornerstone of irreversible thermodynamics and chemical kinetics, and its wide application in geochemistry has mostly been implicit and without experimental testing of its applicability. Nevertheless, many extrapolations based on the PDB without experimental validation have far-reaching impacts on society's large-scale environmental enterprises. Here we report an isotope doping method that independently measures simultaneous dissolution and precipitation rates and can test this principle. The technique reacts a solution enriched in a rare isotope of an element with a solid having natural isotopic abundances (Beck et al., 1992; Gaillardet, 2008; Gruber et al., 2013). Dissolution and precipitation rates are found from the changing isotopic ratios. Our quartz experiment doped with 29Si showed that the equilibrium dissolution rate remains unchanged at all degrees of undersaturation. We recommend this approach to test the validity of using the detailed balance relationship in rate equations for other substances.
Predicting cancerlectins by the optimal g-gap dipeptides
NASA Astrophysics Data System (ADS)
Lin, Hao; Liu, Wei-Xin; He, Jiao; Liu, Xin-Hui; Ding, Hui; Chen, Wei
2015-12-01
The cancerlectin plays a key role in the process of tumor cell differentiation. Thus, fully understanding the function of cancerlectins is significant because it sheds light on the future direction of cancer therapy. However, traditional wet-experimental methods are money- and time-consuming, so it is highly desirable to develop an effective and efficient computational tool to identify cancerlectins. In this study, we developed a sequence-based method to discriminate between cancerlectins and non-cancerlectins. The analysis of variance (ANOVA) was used to choose the optimal feature set derived from the g-gap dipeptide composition. The jackknife cross-validated results showed that the proposed method achieved an accuracy of 75.19%, which is superior to other published methods. For the convenience of other researchers, an online web server, CaLecPred, was established and can be freely accessed at http://lin.uestc.edu.cn/server/CalecPred. We believe that CaLecPred is a powerful tool to study cancerlectins and to guide related experimental validations.
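The ANOVA-based feature selection the abstract describes can be sketched on a synthetic two-class feature matrix (invented data, not the cancerlectin dataset): compute a one-way F statistic per feature and rank features by it.

```python
import numpy as np

# ANOVA-style feature ranking over a synthetic feature matrix standing
# in for g-gap dipeptide compositions: F = between-class mean square /
# within-class mean square, computed per feature for two classes.
rng = np.random.default_rng(3)
n_pos, n_neg, n_feat = 40, 40, 20
X_pos = rng.normal(0.0, 1.0, (n_pos, n_feat))
X_pos[:, 0] += 2.0                      # make feature 0 informative
X_neg = rng.normal(0.0, 1.0, (n_neg, n_feat))

def anova_f(a, b):
    """One-way ANOVA F statistic per feature for two classes."""
    na, nb = len(a), len(b)
    grand = np.vstack([a, b]).mean(axis=0)
    ss_between = na * (a.mean(0) - grand) ** 2 + nb * (b.mean(0) - grand) ** 2
    ss_within = ((a - a.mean(0)) ** 2).sum(0) + ((b - b.mean(0)) ** 2).sum(0)
    return (ss_between / 1) / (ss_within / (na + nb - 2))  # df_between = 1

f_scores = anova_f(X_pos, X_neg)
ranking = np.argsort(f_scores)[::-1]    # feature indices, best first
print(ranking[:5])
```

In the published pipeline the top-ranked subset would then be chosen by cross-validated classifier accuracy rather than a fixed cutoff.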
NASA Astrophysics Data System (ADS)
Vriend, Nathalie; Tsang, Jonny; Arran, Matthew; Jin, Binbin; Johnsen, Alexander
2017-11-01
When a mixture of small, smooth particles and larger, coarse particles is released on a rough inclined plane, the initially uniform front may break up into distinct fingers which elongate over time. This fingering instability is sensitive to the unique arrangement of individual particles and is driven by granular segregation (Pouliquen et al., 1997). Variability in initial conditions creates significant limitations for consistent experimental and numerical validation of newly developed theoretical models (Baker et al., 2016) for finger formation. We present an experimental study using a novel tool that sets the initial finger width of the instability. By changing this trigger width between experiments, we explore the response of the avalanche breakup to perturbations of different widths. Discrete particle simulations (using MercuryDPM, Thornton et al., 2012) are conducted under a similar setting, reproducing the variable finger width and allowing validation between experiments and numerical simulations. A good agreement between simulations and experiments is obtained, and ongoing theoretical work is briefly introduced. NMV acknowledges the Royal Society Dorothy Hodgkin Research Fellowship.
NASA Astrophysics Data System (ADS)
Ishihara, Koichi; Asai, Yusuke; Kudo, Riichi; Ichikawa, Takeo; Takatori, Yasushi; Mizoguchi, Masato
2013-12-01
Multiuser multiple-input multiple-output (MU-MIMO) has been proposed as a means to improve spectrum efficiency for various future wireless communication systems. This paper reports indoor experimental results obtained for a newly developed and implemented downlink (DL) MU-MIMO orthogonal frequency division multiplexing (OFDM) transceiver for gigabit wireless local area network systems in the microwave band. In the transceiver, the channel state information (CSI) is estimated at each user and fed back to an access point (AP) on a real-time basis. At the AP, the estimated CSI is used to calculate the transmit beamforming weight for DL MU-MIMO transmission. This paper also proposes a recursive inverse matrix computation scheme for computing the transmit weight in real time. Experiments with the developed transceiver demonstrate its feasibility in a number of indoor scenarios. The experimental results clarify that DL MU-MIMO-OFDM transmission can achieve a 972-Mbit/s transmission data rate with simple digital signal processing of single-antenna users in an indoor environment.
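The transmit-weight computation at the heart of DL MU-MIMO can be sketched with the textbook zero-forcing construction (a generic example on a random channel, not the paper's recursive inverse-matrix scheme): weights W = Hᴴ(HHᴴ)⁻¹ cancel inter-user interference so each user receives only its own stream.

```python
import numpy as np

# Zero-forcing transmit beamforming for downlink MU-MIMO on a random
# complex channel: 4 transmit antennas at the AP, 3 single-antenna users.
rng = np.random.default_rng(6)
n_tx, n_users = 4, 3
H = (rng.normal(size=(n_users, n_tx))
     + 1j * rng.normal(size=(n_users, n_tx))) / np.sqrt(2)

# Right pseudo-inverse of the channel matrix
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# The effective channel H @ W is the identity: no inter-user interference.
effective = H @ W
print(np.round(np.abs(effective), 6))
```

A recursive update of the inverse, as the paper proposes, avoids recomputing `np.linalg.inv` from scratch when the CSI changes slightly between feedback intervals.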
Mühlberger, A; Jekel, K; Probst, T; Schecklmann, M; Conzelmann, A; Andreatta, M; Rizzo, A A; Pauli, P; Romanos, M
2016-05-13
This study compares the performance in a continuous performance test within a virtual reality classroom (CPT-VRC) between medicated children with ADHD, unmedicated children with ADHD, and healthy children. N = 94 children with ADHD (n = 26 of them received methylphenidate and n = 68 were unmedicated) and n = 34 healthy children performed the CPT-VRC. Omission errors, reaction time/variability, commission errors, and body movements were assessed. Furthermore, ADHD questionnaires were administered and compared with the CPT-VRC measures. The unmedicated ADHD group exhibited more omission errors and showed slower reaction times than the healthy group. Reaction time variability was higher in the unmedicated ADHD group compared with both the healthy and the medicated ADHD group. Omission errors and reaction time variability were associated with inattentiveness ratings of experimenters. Head movements were correlated with hyperactivity ratings of parents and experimenters. Virtual reality is a promising technology to assess ADHD symptoms in an ecologically valid environment. © The Author(s) 2016.
Validation of the thermal challenge problem using Bayesian Belief Networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McFarland, John; Swiler, Laura Painton
The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, in physical quantities, and in the models themselves. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for "solving" the BBN to develop the posterior distribution of model output through Markov chain Monte Carlo sampling are discussed in detail. The use of the BBN to compute a Bayes factor is demonstrated.
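The Bayes-factor idea can be sketched with a toy Monte Carlo calculation (all models, priors, and numbers here are invented for illustration, not the Sandia BBN): propagate parameter uncertainty through each candidate model and compare the resulting marginal likelihoods of an observed datum.

```python
import numpy as np

# Toy Bayes factor: which of two models better explains one observation,
# after averaging over a Gaussian parameter prior?
rng = np.random.default_rng(4)
observed = 1.2
noise_sd = 0.1                      # assumed measurement uncertainty

def marginal_likelihood(model, n=100_000):
    """Monte Carlo estimate of E_theta[ p(observed | model(theta)) ]."""
    theta = rng.normal(1.0, 0.2, n)           # parameter prior
    resid = observed - model(theta)
    like = (np.exp(-0.5 * (resid / noise_sd) ** 2)
            / (noise_sd * np.sqrt(2 * np.pi)))
    return like.mean()

model_a = lambda th: th + 0.2       # predicts ~1.2: matches the datum
model_b = lambda th: th - 0.5       # predicts ~0.5: poor fit

bayes_factor = marginal_likelihood(model_a) / marginal_likelihood(model_b)
print(bayes_factor)                 # >> 1 favours model A
```

The full BBN approach conditions these distributions on many experiments at once, but the validation metric reduces to the same ratio of marginal likelihoods.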
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasten, C. P., E-mail: ckasten@alum.mit.edu; White, A. E.; Irby, J. H.
2014-04-15
Accurately predicting the turbulent transport properties of magnetically confined plasmas is a major challenge of fusion energy research. Validation of transport models is typically done by applying so-called “synthetic diagnostics” to the output of nonlinear gyrokinetic simulations, and the results are compared to experimental data. As part of the validation process, comparing two independent turbulence measurements to each other provides the opportunity to test the synthetic diagnostics themselves; a step which is rarely possible due to the limited availability of redundant fluctuation measurements on magnetic confinement experiments. At Alcator C-Mod, phase-contrast imaging (PCI) is a commonly used turbulence diagnostic. PCI measures line-integrated electron density fluctuations with high sensitivity and wavenumber resolution (1.6 cm⁻¹ ≲ |k_R| ≲ 11 cm⁻¹). A new fast two-color interferometry (FTCI) diagnostic on the Alcator C-Mod tokamak measures long-wavelength (|k_R| ≲ 3.0 cm⁻¹) line-integrated electron density fluctuations. Measurements of coherent and broadband fluctuations made by PCI and FTCI are compared here for the first time. Good quantitative agreement is found between the two measurements. This provides experimental validation of the low-wavenumber region of the PCI calibration, and also helps validate the low-wavenumber portions of the synthetic PCI diagnostic that has been used in gyrokinetic model validation work in the past. We discuss possibilities to upgrade FTCI so that a similar comparison could be done at higher wavenumbers in the future.
NASA Astrophysics Data System (ADS)
Jaber, Khalid Mohammad; Alia, Osama Moh'd.; Shuaib, Mohammed Mahmod
2018-03-01
Finding the optimal parameters that can reproduce experimental data (such as the velocity-density relation and the specific flow rate) is a very important component of the validation and calibration of microscopic crowd dynamic models. Heavy computational demand during parameter search is a known limitation of a previously developed model known as the Harmony Search-Based Social Force Model (HS-SFM). In this paper, a parallel-based mechanism is proposed to reduce the computational time and memory resource utilisation required to find these parameters. More specifically, two MATLAB-based multicore techniques (parfor and create independent jobs) using shared memory are developed by taking advantage of the multithreading capabilities of parallel computing, resulting in a new framework called the Parallel Harmony Search-Based Social Force Model (P-HS-SFM). The experimental results show that the parfor-based P-HS-SFM achieved a better computational time of about 26 h, an efficiency improvement of about 54%, and a speedup factor of 2.196 times in comparison with the sequential HS-SFM. The performance of the P-HS-SFM using the create independent jobs approach is comparable to parfor, with a computational time of 26.8 h, an efficiency improvement of about 30%, and a speedup of 2.137 times.
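The parfor-style pattern behind the speedup is an embarrassingly parallel parameter sweep. A minimal Python sketch (threads standing in for MATLAB workers; the error function is an invented stand-in, not the HS-SFM model):

```python
from concurrent.futures import ThreadPoolExecutor

# Embarrassingly parallel parameter sweep: score every candidate
# parameter pair independently, then keep the best.
def model_error(params):
    a, b = params
    # Stand-in for running the crowd model and scoring it against data;
    # the true optimum of this toy error surface is (1.3, 0.7).
    return (a - 1.3) ** 2 + (b - 0.7) ** 2

candidates = [(a / 10, b / 10) for a in range(0, 30) for b in range(0, 20)]

with ThreadPoolExecutor(max_workers=4) as pool:
    errors = list(pool.map(model_error, candidates))

best = candidates[min(range(len(errors)), key=errors.__getitem__)]
print(best)
```

Because each evaluation is independent, the ideal speedup equals the worker count; the sub-linear factors reported in the paper (about 2.2 on more cores) reflect shared-memory overheads.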
Taffarel, Marilda Onghero; Luna, Stelio Pacca Loureiro; de Oliveira, Flavia Augusta; Cardoso, Guilherme Schiess; Alonso, Juliana de Moura; Pantoja, Jose Carlos; Brondani, Juliana Tabarelli; Love, Emma; Taylor, Polly; White, Kate; Murrell, Joanna C
2015-04-01
Quantification of pain plays a vital role in the diagnosis and management of pain in animals. In order to refine and validate an acute pain scale for horses, a prospective, randomized, blinded study was conducted. Twenty-four client-owned adult horses were recruited and allocated to one of the four following groups: anaesthesia only (GA); pre-emptive analgesia and anaesthesia (GAA); anaesthesia, castration and postoperative analgesia (GC); or pre-emptive analgesia, anaesthesia and castration (GCA). One investigator, unaware of the treatment group, assessed all horses at time-points before and after intervention and completed the pain scale. Videos were also obtained at these time-points and were evaluated by a further four blinded evaluators who also completed the scale. The data were used to investigate the relevance, specificity, criterion validity and inter- and intra-observer reliability of each item on the pain scale, and to evaluate the construct validity and responsiveness of the scale. Construct validity was demonstrated by the observed differences in scores between the groups, four hours after anaesthetic recovery and before administration of systemic analgesia in the GC group. Inter- and intra-observer reliability for the items was only satisfactory. Subsequently the pain scale was refined, based on the results for relevance, specificity and total item correlation. Scale refinement and exclusion of items that did not meet predefined requirements generated a selection of relevant pain behaviours in horses. After further validation for reliability, these may be used to evaluate pain under clinical and experimental conditions.
Modeling Specular Exchange Between Concentric Cylinders in a Radiative Shielded Furnace
NASA Technical Reports Server (NTRS)
Schunk, Richard Gregory; Wessling, Francis C.
2000-01-01
The objective of this research is to develop and validate mathematical models to characterize the thermal performance of a radiative shielded furnace, the University of Alabama in Huntsville (UAH) Isothermal Diffusion Oven. The mathematical models are validated against experimental data obtained from testing the breadboard oven in a terrestrial laboratory environment. It is anticipated that the validation will produce math models capable of predicting the thermal performance of the furnace over a wide range of operating conditions, including those for which no experimental data is available. Of particular interest are the furnace core temperature versus heater power characteristic and the transient thermal response of the furnace. Application to a microgravity environment is not considered, although it is conjectured that the removal of any gravity-dependent terms from the math models developed for the terrestrial application should yield adequate results in a microgravity environment. The UAH Isothermal Diffusion Oven is designed to provide a thermal environment that is conducive to measuring the diffusion of high temperature liquid metals. In addition to achieving the temperatures required to melt a sample placed within the furnace, reducing or eliminating convective motions within the melt is an important design consideration [1]. Both of these influences are reflected in the design of the furnace. Reducing unwanted heat losses from the furnace is achieved through the use of low conductivity materials and reflective shielding. As evidenced by the highly conductive copper core used to house the sample within the furnace, convective motions can be greatly suppressed by providing an essentially uniform thermal environment. An oven of this design could ultimately be utilized in a microgravity environment, presumably as an experiment payload.
Such an application precipitates other design requirements that limit the resources available to the furnace such as power, mass, volume, and possibly even time. Through the experimental and numerical results obtained, the power requirements and thermal response time of the breadboard furnace are quantified.
Seo, Hyun-Ju; Kim, Soo Young; Lee, Yoon Jae; Jang, Bo-Hyoung; Park, Ji-Eun; Sheen, Seung-Soo; Hahn, Seo Kyung
2016-02-01
To develop a study Design Algorithm for Medical Literature on Intervention (DAMI) and test its interrater reliability, construct validity, and ease of use. We developed and then revised the DAMI to include detailed instructions. To test the DAMI's reliability, we used a purposive sample of 134 primary, mainly nonrandomized studies. We then compared the study designs as classified by the original authors and through the DAMI. Unweighted kappa statistics were computed to test interrater reliability and construct validity based on the level of agreement between the original and DAMI classifications. Assessment time was also recorded to evaluate ease of use. The DAMI includes 13 study designs, including experimental and observational studies of interventions and exposure. Both the interrater reliability (unweighted kappa = 0.67; 95% CI [0.64-0.75]) and construct validity (unweighted kappa = 0.63, 95% CI [0.52-0.67]) were substantial. Mean classification time using the DAMI was 4.08 ± 2.44 minutes (range, 0.51-10.92). The DAMI showed substantial interrater reliability and construct validity. Furthermore, given its ease of use, it could be used to accurately classify medical literature for systematic reviews of interventions while minimizing disagreement between authors of such reviews. Copyright © 2016 Elsevier Inc. All rights reserved.
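The unweighted kappa statistic quoted above can be computed directly from two lists of categorical classifications. A minimal sketch, using hypothetical ratings rather than the study's data; the confidence-interval estimation reported in the abstract is omitted:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa between two raters' categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from the two raters' marginal frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1.0 - p_e)
```

By this scale, values of 0.61-0.80 (such as the 0.67 and 0.63 reported) are conventionally labeled "substantial" agreement.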
Short- and long-time diffusion and dynamic scaling in suspensions of charged colloidal particles.
Banchio, Adolfo J; Heinen, Marco; Holmqvist, Peter; Nägele, Gerhard
2018-04-07
We report on a comprehensive theory-simulation-experimental study of collective and self-diffusion in concentrated suspensions of charge-stabilized colloidal spheres. In theory and simulation, the spheres are assumed to interact directly by a hard-core plus screened Coulomb effective pair potential. The intermediate scattering function, f_c(q, t), is calculated by elaborate accelerated Stokesian dynamics (ASD) simulations for Brownian systems where many-particle hydrodynamic interactions (HIs) are fully accounted for, using a novel extrapolation scheme to a macroscopically large system size valid for all correlation times. The study spans the correlation time range from the colloidal short-time to the long-time regime. Additionally, Brownian Dynamics (BD) simulation and mode-coupling theory (MCT) results of f_c(q, t) are generated where HIs are neglected. Using these results, the influence of HIs on collective and self-diffusion and the accuracy of the MCT method are quantified. It is shown that HIs enhance collective and self-diffusion at intermediate and long times. At short times self-diffusion, and for wavenumbers outside the structure factor peak region also collective diffusion, are slowed down by HIs. MCT significantly overestimates the slowing influence of dynamic particle caging. The dynamic scattering functions obtained in the ASD simulations are in overall good agreement with our dynamic light scattering (DLS) results for a concentration series of charged silica spheres in an organic solvent mixture, in the experimental time window and wavenumber range. From the simulation data for the time derivative of the width function associated with f_c(q, t), there is indication of long-time exponential decay of f_c(q, t), for wavenumbers around the location of the static structure factor principal peak.
The experimental scattering functions in the probed time range are consistent with a time-wavenumber factorization scaling behavior of f_c(q, t) that was first reported by Segrè and Pusey [Phys. Rev. Lett. 77, 771 (1996)] for suspensions of hard spheres. Our BD simulation and MCT results predict a significant violation of exact factorization scaling which, however, is approximately restored according to the ASD results when HIs are accounted for, consistent with the experimental findings for f_c(q, t). Our study of collective diffusion is complemented by simulation and theoretical results for the self-intermediate scattering function, f_s(q, t), and its non-Gaussian parameter α_2(t), and for the particle mean squared displacement W(t) and its time derivative. Since self-diffusion properties are not assessed in standard DLS measurements, a method to deduce W(t) approximately from f_c(q, t) is theoretically validated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomar, Vikas
2017-03-06
DoE-NETL partnered with Purdue University to predict the creep and associated microstructure evolution of tungsten-based refractory alloys. Researchers use grain boundary (GB) diagrams, a new concept, to establish time-dependent creep resistance and associated microstructure evolution of grain-boundary/intergranular-film (GB/IGF) controlled creep as a function of load, environment, and temperature. The goal was to conduct a systematic study that includes the development of a theoretical framework, multiscale modeling, and experimental validation using W-based body-centered-cubic alloys, doped/alloyed with one or two of the following elements: nickel, palladium, cobalt, iron, and copper, which are typical refractory alloys. Prior work has already established and validated a basic theory for W-based binary and ternary alloys; the study conducted under this project extended this proven work. Based on interface diagrams, phase field models were developed to predict long-term microstructural evolution. In order to validate the models, nanoindentation creep data was used to elucidate the role played by the interface properties in predicting long-term creep strength and microstructure evolution.
Validation of the Soil Moisture Active Passive mission using USDA-ARS experimental watersheds
USDA-ARS?s Scientific Manuscript database
The calibration and validation program of the Soil Moisture Active Passive mission (SMAP) relies upon an international cooperative of in situ networks to provide ground truth references across a variety of landscapes. The USDA Agricultural Research Service operates several experimental watersheds wh...
Rowe, Rachel K.; Harrison, Jordan L.; Thomas, Theresa C.; Pauly, James R.; Adelson, P. David; Lifshitz, Jonathan
2013-01-01
The use of animal modeling in traumatic brain injury (TBI) research is justified by the lack of sufficiently comprehensive in vitro and computer modeling that incorporates all components of the neurovascular unit. Valid animal modeling of TBI requires accurate replication of both the mechanical forces and secondary injury conditions observed in human patients. Regulatory requirements for animal modeling emphasize the administration of appropriate anesthetics and analgesics unless withholding these drugs is scientifically justified. The objective of this review is to present scientific justification for standardizing the use of anesthetics and analgesics, within a study, when modeling TBI in order to preserve study validity. Evidence for the interference of anesthetics and analgesics in the natural course of brain injury calls for consistent consideration of pain management regimens when conducting TBI research. Anesthetics administered at the time of or shortly after induction of brain injury can alter cognitive, motor, and histological outcomes following TBI. A consistent anesthesia protocol based on experimental objectives within each individual study is imperative when conducting TBI studies to control for the confounding effects of anesthesia on outcome parameters. Experimental studies that replicate the clinical condition are essential to gain further understanding and evaluate possible treatments for TBI. However, with animal models of TBI it is essential that investigators assure a uniform drug delivery protocol that minimizes confounding variables, while minimizing pain and suffering. PMID:23877609
SU-E-T-664: Radiobiological Modeling of Prophylactic Cranial Irradiation in Mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D; Debeb, B; Woodward, W
Purpose: Prophylactic cranial irradiation (PCI) is a clinical technique used to reduce the incidence of brain metastasis and improve overall survival in select patients with ALL and SCLC, and we have shown the potential of PCI in select breast cancer patients through a mouse model (manuscript in preparation). We developed a computational model using our experimental results to demonstrate the advantage of treating brain micro-metastases early. Methods: MATLAB was used to develop the computational model of brain metastasis and PCI in mice. The number of metastases per mouse and the volume of metastases from four- and eight-week endpoints were fit to normal and log-normal distributions, respectively. Model input parameters were optimized so that model output would match the experimental number of metastases per mouse. A limiting dilution assay was performed to validate the model. The effect of radiation at different time points was computationally evaluated through the endpoints of incidence, number of metastases, and tumor burden. Results: The correlation between experimental number of metastases per mouse and the Gaussian fit was 87% and 66% at the two endpoints. The experimental volumes and the log-normal fit had correlations of 99% and 97%. In the optimized model, the correlation between number of metastases per mouse and the Gaussian fit was 96% and 98%. The log-normal volume fit and the model agree 100%. The model was validated by a limiting dilution assay, where the correlation was 100%. The model demonstrates that cells are very sensitive to radiation at early time points, and delaying treatment introduces a threshold dose at which point the incidence and number of metastases decline. Conclusion: We have developed a computational model of brain metastasis and PCI in mice that is highly correlated to our experimental data. The model shows that early treatment of subclinical disease is highly advantageous.
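The two distribution fits described, a normal for per-mouse metastasis counts and a log-normal for metastasis volumes, can be sketched as follows. The data values here are illustrative placeholders, not the study's measurements, and the correlation scoring against the empirical histograms (the 87-99% figures) is not reproduced:

```python
import math
import statistics as stats

# Placeholder data: metastasis counts per mouse and metastasis volumes
# (arbitrary units) -- NOT the experimental values from the abstract.
counts = [3, 5, 4, 6, 2, 5, 4, 3, 7, 4]
volumes = [0.12, 0.35, 0.08, 0.51, 0.22, 0.18, 0.95, 0.30]

# Normal (Gaussian) fit for counts: sample mean and standard deviation.
mu_c, sigma_c = stats.mean(counts), stats.stdev(counts)

# Log-normal fit for volumes: fit a normal distribution to the log-volumes.
log_v = [math.log(v) for v in volumes]
mu_v, sigma_v = stats.mean(log_v), stats.stdev(log_v)

# Median of the fitted log-normal -- a quick sanity check against the data.
median_volume = math.exp(mu_v)
```

Sampling from these fitted distributions (e.g., with `random.gauss` and `random.lognormvariate`) is then enough to drive a Monte Carlo simulation of metastasis number and burden per mouse.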
Fischer, Kenneth J; Johnson, Joshua E; Waller, Alexander J; McIff, Terence E; Toby, E Bruce; Bilgen, Mehmet
2011-10-01
The objective of this study was to validate the MRI-based joint contact modeling methodology in the radiocarpal joints by comparison of model results with invasive specimen-specific radiocarpal contact measurements from four cadaver experiments. We used a single validation criterion for multiple outcome measures to characterize the utility and overall validity of the modeling approach. For each experiment, a Pressurex film and a Tekscan sensor were sequentially placed into the radiocarpal joints during simulated grasp. Computer models were constructed based on MRI visualization of the cadaver specimens without load. Images were also acquired during the loaded configuration used with the direct experimental measurements. Geometric surface models of the radius, scaphoid and lunate (including cartilage) were constructed from the images acquired without the load. The carpal bone motions from the unloaded state to the loaded state were determined using a series of 3D image registrations. Cartilage thickness was assumed uniform at 1.0 mm with an effective compressive modulus of 4 MPa. Validation was based on experimental versus model contact area, contact force, average contact pressure and peak contact pressure for the radioscaphoid and radiolunate articulations. Contact area was also measured directly from images acquired under load and compared to the experimental and model data. Qualitatively, there was good correspondence between the MRI-based model data and experimental data, with consistent relative size, shape and location of radioscaphoid and radiolunate contact regions. Quantitative data from the model generally compared well with the experimental data for all specimens. Contact area from the MRI-based model was very similar to the contact area measured directly from the images. For all outcome measures except average and peak pressures, at least two specimen models met the validation criteria with respect to experimental measurements for both articulations. 
Only the model for one specimen met the validation criteria for average and peak pressure of both articulations; however, the experimental measures for peak pressure also exhibited high variability. MRI-based modeling can reliably be used for evaluating the contact area and contact force with similar confidence as in currently available experimental techniques. Average and peak contact pressures were more variable across all measurement techniques, and these measures from MRI-based modeling should be used with some caution.
Inductive detection of the free surface of liquid metals
NASA Astrophysics Data System (ADS)
Zürner, Till; Ratajczak, Matthias; Wondrak, Thomas; Eckert, Sven
2017-11-01
A novel measurement system to determine the surface position and topology of liquid metals is presented. It is based on the induction of eddy currents by a time-harmonic magnetic field and the subsequent measurement of the resulting secondary magnetic field using gradiometric induction coils. The system is validated experimentally for static and dynamic surfaces of the low-melting liquid metal alloy gallium-indium-tin in a narrow vessel. It is shown that a precision below 1 mm and a time resolution of at least 20 Hz can be achieved.
Zarit, Steven H.; Liu, Yin; Bangerter, Lauren R.; Rovine, Michael J.
2017-01-01
Objectives There is growing emphasis on empirical validation of the efficacy of community-based services for older people and their families, but research on services such as respite care faces methodological challenges that have limited the growth of outcome studies. We identify problems associated with the usual research approaches for studying respite care, with the goal of stimulating use of novel and more appropriate research designs that can lead to improved studies of community-based services. Method Using the concept of research validity, we evaluate the methodological approaches in the current literature on respite services, including adult day services, in-home respite and overnight respite. Results Although randomized control trials (RCTs) are possible in community settings, validity is compromised by practical limitations of randomization and other problems. Quasi-experimental and interrupted time series designs offer comparable validity to RCTs and can be implemented effectively in community settings. Conclusion An emphasis on RCTs by funders and researchers is not supported by scientific evidence. Alternative designs can lead to development of a valid body of research on community services such as respite. PMID:26729467
Zarit, Steven H; Bangerter, Lauren R; Liu, Yin; Rovine, Michael J
2017-03-01
There is growing emphasis on empirical validation of the efficacy of community-based services for older people and their families, but research on services such as respite care faces methodological challenges that have limited the growth of outcome studies. We identify problems associated with the usual research approaches for studying respite care, with the goal of stimulating use of novel and more appropriate research designs that can lead to improved studies of community-based services. Using the concept of research validity, we evaluate the methodological approaches in the current literature on respite services, including adult day services, in-home respite and overnight respite. Although randomized control trials (RCTs) are possible in community settings, validity is compromised by practical limitations of randomization and other problems. Quasi-experimental and interrupted time series designs offer comparable validity to RCTs and can be implemented effectively in community settings. An emphasis on RCTs by funders and researchers is not supported by scientific evidence. Alternative designs can lead to development of a valid body of research on community services such as respite.
Time-Dependent Material Properties of Shotcrete: Experimental and Numerical Study.
Neuner, Matthias; Cordes, Tobias; Drexel, Martin; Hofstetter, Günter
2017-09-11
A new experimental program, focusing on the evolution of the Young's modulus, uniaxial compressive strength, shrinkage and creep of shotcrete, is presented. The laboratory tests, starting at very young material ages, are conducted on two different types of specimens sampled at the site of the Brenner Base Tunnel. The experimental results are evaluated and compared to other experiments from the literature. In addition, three advanced constitutive models for shotcrete, i.e., the model by Meschke, the model by Schädlich and Schweiger, and the model by Neuner et al., are validated on the basis of the test data, and the capabilities of the models to represent the observed shotcrete behavior are assessed. Hence, the gap between the outdated experimental data on shotcrete available in the literature, on the one hand, and the advanced shotcrete models available today, on the other hand, is closed.
Strong Unitary and Overlap Uncertainty Relations: Theory and Experiment
NASA Astrophysics Data System (ADS)
Bong, Kok-Wei; Tischler, Nora; Patel, Raj B.; Wollmann, Sabine; Pryde, Geoff J.; Hall, Michael J. W.
2018-06-01
We derive and experimentally investigate a strong uncertainty relation valid for any n unitary operators, which implies the standard uncertainty relation and others as special cases, and which can be written in terms of geometric phases. It is saturated by every pure state of any n-dimensional quantum system, generates a tight overlap uncertainty relation for the transition probabilities of any n+1 pure states, and gives an upper bound for the out-of-time-order correlation function. We test these uncertainty relations experimentally for photonic polarization qubits, including the minimum uncertainty states of the overlap uncertainty relation, via interferometric measurements of generalized geometric phases.
Segmental Dynamics of Forward Fall Arrests: System Identification Approach
Kim, Kyu-Jung; Ashton-Miller, James A.
2009-01-01
Background Fall-related injuries are multifaceted problems, necessitating thorough biodynamic simulation to identify critical biomechanical factors. Methods A 2-degree-of-freedom discrete impact model was constructed through system identification and validation processes using the experimental data to understand dynamic interactions of various biomechanical parameters in bimanual forward fall arrests. Findings The bimodal reaction force response from the identified models had small identification errors, less than 3.5% for the first and second force peaks, and high coherence between the measured and identified model responses (R²=0.95). Model validation with separate experimental data also demonstrated excellent validation accuracy and coherence, with less than 7% error and R²=0.87, respectively. The first force peak was usually greater than the second force peak and strongly correlated with the impact velocity of the upper extremity, while the second force peak was associated with the impact velocity of the body. The impact velocity of the upper extremity relative to the body could be a major risk factor for fall-related injuries, as model simulations showed that an arm movement 75% faster than the falling speed of the body alone could double the first force peak from a soft landing, thereby readily exceeding the fracture strength of the distal radius. Interpretation Considering that the time-critical nature of falling often calls for a fast arm movement, the use of the upper extremity in forward fall arrests is not biomechanically justified unless sufficient reaction time and coordinated protective motion of the upper extremity are available. PMID:19250726
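The coherence values quoted above (R²=0.95 for identification, R²=0.87 for validation) are coefficients of determination between the measured and model-predicted force responses, which can be computed as a simple goodness-of-fit sketch:

```python
def r_squared(measured, modeled):
    """Coefficient of determination (R^2) between a measured signal and a
    model's prediction of it: 1 - SS_residual / SS_total."""
    mean_m = sum(measured) / len(measured)
    ss_tot = sum((y - mean_m) ** 2 for y in measured)
    ss_res = sum((y - f) ** 2 for y, f in zip(measured, modeled))
    return 1.0 - ss_res / ss_tot
```

A perfect model reproduction of the measured reaction force trace gives R² = 1.0; values near 0.9, as reported, indicate the identified 2-DOF model captures most of the measured variance.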
Further Validation of a CFD Code for Calculating the Performance of Two-Stage Light Gas Guns
NASA Technical Reports Server (NTRS)
Bogdanoff, David W.
2017-01-01
Earlier validations of a higher-order Godunov code for modeling the performance of two-stage light gas guns are reviewed. These validation comparisons were made between code predictions and experimental data from the NASA Ames 1.5" and 0.28" guns and covered muzzle velocities of 6.5 to 7.2 km/s. In the present report, five more series of code validation comparisons involving experimental data from the Ames 0.22" (1.28" pump tube diameter), 0.28", 0.50", 1.00" and 1.50" guns are presented. The total muzzle velocity range of the validation data presented herein is 3 to 11.3 km/s. The agreement between the experimental data and CFD results is judged to be very good. Muzzle velocities were predicted within 0.35 km/s for 74% of the cases studied with maximum differences being 0.5 km/s and for 4 out of 50 cases, 0.5 - 0.7 km/s.
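The summary statistic used above, the fraction of shots whose predicted muzzle velocity falls within a given tolerance of the measurement, is straightforward to compute. The velocity pairs below are hypothetical stand-ins, not the report's 50-case data set:

```python
# Hypothetical (predicted, measured) muzzle velocities in km/s -- placeholders
# for the report's actual validation cases.
pairs = [(6.5, 6.7), (7.2, 7.0), (3.1, 3.0), (11.3, 11.1), (9.0, 9.6)]

def within_tolerance(pairs, tol_km_s):
    """Fraction of cases whose |predicted - measured| difference is within
    the given tolerance (km/s)."""
    hits = sum(abs(pred - meas) <= tol_km_s for pred, meas in pairs)
    return hits / len(pairs)
```

Applying this at the two tolerances quoted in the abstract (0.35 and 0.7 km/s) reproduces the kind of tiered agreement summary the report gives.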
NASA Astrophysics Data System (ADS)
Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi
2017-01-01
Workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. Workflow hides the low-level implementation details of the Grid and hence enables users to focus on higher levels of application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale complicated applications of remote sensing science. The validation of workflow is important in order to support the large-scale sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To research the semantic correctness of user-defined workflows, in this paper, we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameters matching error validation.
2003-03-01
Different?," Jour. of Experimental & Theoretical Artificial Intelligence, Special Issue on AI for Systems Validation and Verification, 12(4), 2000, pp...Hamilton, D., "Experiences in Improving the State of Practice in Verification and Validation of Knowledge-Based Systems," Workshop Notes of the AAAI...Unsuspected Power of the Standard Turing Test," Jour. of Experimental & Theoretical Artificial Intelligence, 12, 2000, pp. 331-340. [30] Gaschnig
Chen, J; Ding, W X; Brower, D L; Finkenthal, D; Muscatello, C; Taussig, D; Boivin, R
2016-11-01
Motivated by the need to measure fast equilibrium temporal dynamics, non-axisymmetric structures, and core magnetic fluctuations (coherent and broadband), a three-chord Faraday-effect polarimeter-interferometer system with fast time response and high phase resolution has recently been installed on the DIII-D tokamak. A novel detection scheme utilizing two probe beams and two detectors for each chord results in reduced phase noise and increased time response [δb ∼ 1 G with up to 3 MHz bandwidth]. First measurement results were obtained during the recent DIII-D experimental campaign. Simultaneous Faraday and density measurements have been successfully demonstrated and high-frequency, up to 100 kHz, Faraday-effect perturbations have been observed. Preliminary comparisons with EFIT are used to validate diagnostic performance. The principle of the diagnostic and the first experimental results are presented.
Small-signal model for the series resonant converter
NASA Technical Reports Server (NTRS)
King, R. J.; Stuart, T. A.
1985-01-01
The results of a previous discrete-time model of the series resonant dc-dc converter are reviewed, and from these a small-signal dynamic model is derived. This model is valid for low frequencies and is based on the modulation of the diode conduction angle for control. The basic converter is modeled separately from its output filter to facilitate the use of these results for design purposes. Experimental results are presented.
Frame-Transfer Gating Raman Spectroscopy for Time-Resolved Multiscalar Combustion Diagnostics
NASA Technical Reports Server (NTRS)
Nguyen, Quang-Viet; Fischer, David G.; Kojima, Jun
2011-01-01
Accurate experimental measurement of spatially and temporally resolved variations in chemical composition (species concentrations) and temperature in turbulent flames is vital for characterizing the complex phenomena occurring in most practical combustion systems. These diagnostic measurements are called multiscalar because they are capable of acquiring multiple scalar quantities simultaneously. Multiscalar diagnostics also play a critical role in the area of computational code validation. In order to improve the design of combustion devices, computational codes for modeling turbulent combustion are often used to speed up and optimize the development process. The experimental validation of these codes is a critical step in accepting their predictions for engine performance in the absence of cost-prohibitive testing. One of the most critical aspects of setting up a time-resolved stimulated Raman scattering (SRS) diagnostic system is the temporal optical gating scheme. A short optical gate is necessary in order for weak SRS signals to be detected with a good signal-to-noise ratio (SNR) in the presence of strong background optical emissions. This time-synchronized optical gating is a classical problem common to other spectroscopic techniques such as laser-induced fluorescence (LIF) or laser-induced breakdown spectroscopy (LIBS). Traditionally, experimenters have had basically two options for gating: (1) an electronic means of gating using an image intensifier before the charge-coupled-device (CCD), or (2) a mechanical optical shutter (a rotary chopper/mechanical shutter combination). A new diagnostic technology has been developed at the NASA Glenn Research Center that utilizes a frame-transfer CCD sensor, in conjunction with a pulsed laser and multiplex optical fiber collection, to realize time-resolved Raman spectroscopy of turbulent flames that is free from optical background noise (interference).
The technology permits not only shorter temporal optical gating (down to <1 μs, in principle), but also higher optical throughput, thus resulting in a substantial increase in measurement SNR.
NASA Astrophysics Data System (ADS)
Goyal, Rahul; Trivedi, Chirag; Kumar Gandhi, Bhupendra; Cervantes, Michel J.
2017-07-01
Hydraulic turbines are operated over an extended operating range to meet the real-time electricity demand. Turbines operated at part load have flow parameters not matching the designed ones. This results in unstable flow conditions in the runner and draft tube, which develop low-frequency, high-amplitude pressure pulsations. The unsteady pressure pulsations affect the dynamic stability of the turbine and cause additional fatigue. The work presented in this paper discusses the flow field investigation of a high head model Francis turbine at part load: 50% of the rated load. Numerical simulation of the complete turbine has been performed. Unsteady pressure pulsations in the vaneless space, runner, and draft tube are investigated and validated with available experimental data. Detailed analysis of the rotor stator interaction and draft tube flow field are performed and discussed. The analysis shows the presence of a rotating vortex rope in the draft tube at a frequency of 0.3 times the runner rotational frequency. The frequency of the vortex rope precession, which causes severe fluctuations and vibrations in the draft tube, is predicted within 3.9% of the experimentally measured value. The vortex rope produces pressure pulsations that propagate through the system, whose frequency is also perceived in the runner and upstream of the runner.
Computation of viscous blast wave flowfields
NASA Technical Reports Server (NTRS)
Atwood, Christopher A.
1991-01-01
A method to determine unsteady solutions of the Navier-Stokes equations was developed and applied. The structured finite-volume, approximately factored implicit scheme uses Newton subiterations to obtain the spatially and temporally second-order accurate time history of the interaction of blast waves with stationary targets. The inviscid flux is evaluated using MacCormack's modified Steger-Warming flux or Roe flux difference splittings with total variation diminishing limiters, while the viscous flux is computed using central differences. The use of implicit boundary conditions in conjunction with a telescoping in time and space method permitted solutions to this strongly unsteady class of problems. Comparisons of numerical, analytical, and experimental results were made in two and three dimensions. These comparisons revealed accurate wave speed resolution with nonoscillatory discontinuity capturing. The purpose of this effort was to address the three-dimensional, viscous blast-wave problem. Test cases were undertaken to reveal these methods' weaknesses in three regimes: (1) viscous-dominated flow; (2) complex unsteady flow; and (3) three-dimensional flow. Comparisons of these computations to analytic and experimental results provided initial validation of the resultant code. Additional details on the numerical method and on the validation can be found in the appendix. Presently, the code is capable of single zone computations with selection of any permutation of solid wall or flow-through boundaries.
Thermal charging study of compressed expanded natural graphite/phase change material composites
Mallow, Anne; Abdelaziz, Omar; Graham, Jr., Samuel
2016-08-12
The thermal charging performance of paraffin wax combined with compressed expanded natural graphite foam was studied for different graphite bulk densities. Constant heat fluxes between 0.39 W/cm 2 and 1.55 W/cm 2 were applied, as well as a constant boundary temperature of 60 °C. Thermal charging experiments indicate that, in the design of thermal batteries, thermal conductivity of the composite alone is an insufficient metric to determine the influence of the graphite foam on the thermal energy storage. By dividing the latent heat of the composite by the time to end of melt for each applied boundary condition, the energy storage performance was calculated to show the effects of composite thermal conductivity, graphite bulk density, and latent heat capacity. For the experimental volume, the addition of graphite beyond a graphite bulk density of 100 kg/m 3 showed limited benefit on the energy storage performance due to the decrease in latent heat storage capacity. These experimental results are used to validate a numerical model to predict the time to melt and for future use in the design of heat exchangers with graphite-foam based phase change material composites. As a result, size scale effects are explored parametrically with the validated model.
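The proposed figure of merit, latent heat of the composite divided by time to end of melt, can be sketched as follows. The property values here are placeholders (not the paper's data) chosen only to mirror the reported trend of limited benefit beyond 100 kg/m³:

```python
# Hypothetical composite properties per graphite bulk density (kg/m^3):
# latent heat of the composite (J) and measured time to end of melt (s).
composites = {
    50:  {"latent_heat_J": 9500.0, "melt_time_s": 820.0},
    100: {"latent_heat_J": 8800.0, "melt_time_s": 610.0},
    150: {"latent_heat_J": 7900.0, "melt_time_s": 560.0},
}

def storage_performance(latent_heat_J, melt_time_s):
    """Energy storage performance: latent heat absorbed per unit charging
    time (W), the metric described in the abstract."""
    return latent_heat_J / melt_time_s

rates = {rho: storage_performance(c["latent_heat_J"], c["melt_time_s"])
         for rho, c in composites.items()}
```

With these illustrative numbers, adding graphite raises the charging rate up to 100 kg/m³, after which the loss of latent heat capacity outweighs the conductivity gain, which is the qualitative behavior the experiments report.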
Van Liefferinge, Dagmar; Sonuga-Barke, Edmund; Danckaerts, Marina; Fayn, Kirill; Van Broeck, Nady; van der Oord, Saskia
2018-05-30
Emotional lability (EL) is an important trans-diagnostic concept that is associated with significant functional impairment in childhood and adolescence. EL is typically measured with questionnaires, although little is known about the ecological validity of these ratings. In this paper, we undertook 2 studies addressing this issue by examining the relationship between rating-based measures of EL and directly measured emotional expressions and experiences. Furthermore, the associations between directly measured emotional expressions and experiences and attention-deficit/hyperactivity disorder (ADHD) symptomatology were also examined, given the clear association of EL with ADHD in former research. In Study 1, we examined the relationship between parental report of children's EL and ADHD, and children's emotional expressions in an experimental context (N = 67). In Study 2, we examined the relationship between parental ratings and real-time measures of emotional experiences in daily life in adolescents (N = 65). EL ratings were associated with different elements of real-time emotional experiences and expressions. Elements of emotional expressions but not emotional experiences were also associated with ADHD symptom reports. These studies provide evidence for the ecological validity of EL ratings. Furthermore, they add evidence for the associations between EL and ADHD. Copyright © 2018 John Wiley & Sons, Ltd.
Ramo, Nicole L.; Puttlitz, Christian M.
2018-01-01
Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
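The single-history-variable update described above can be illustrated with a minimal one-term recursive scheme; the constitutive form and parameters below are a generic linear-viscoelastic stand-in, not the paper's non-linear model.

```python
import math

def update_stress(eps_new, eps_old, h_old, dt, E_inf, E1, tau):
    """One-step recursive update: the entire strain history is folded into
    a single internal variable h, so only the previous step is stored."""
    decay = math.exp(-dt / tau)
    h_new = decay * h_old + E1 * math.exp(-dt / (2.0 * tau)) * (eps_new - eps_old)
    sigma = E_inf * eps_new + h_new        # relaxation modulus E_inf + E1*exp(-t/tau)
    return sigma, h_new

# Simulated stress-relaxation: a step strain is applied and held, and the
# stress decays from roughly (E_inf + E1)*eps0 toward E_inf*eps0.
E_inf, E1, tau, eps0, dt = 1.0, 2.0, 0.5, 0.1, 0.01
h, eps = 0.0, 0.0
history = []
for n in range(500):
    eps_new = eps0                          # strain applied at step 1, then held
    sigma, h = update_stress(eps_new, eps, h, dt, E_inf, E1, tau)
    eps = eps_new
    history.append(sigma)

print(f"initial stress {history[0]:.4f}, relaxed stress {history[-1]:.4f}")
```

The key point mirrored from the abstract is that the loop never revisits old time steps: each stress evaluation needs only the state carried from the preceding step.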
Vivaldi: visualization and validation of biomacromolecular NMR structures from the PDB.
Hendrickx, Pieter M S; Gutmanas, Aleksandras; Kleywegt, Gerard J
2013-04-01
We describe Vivaldi (VIsualization and VALidation DIsplay; http://pdbe.org/vivaldi), a web-based service for the analysis, visualization, and validation of NMR structures in the Protein Data Bank (PDB). Vivaldi provides access to model coordinates and several types of experimental NMR data using interactive visualization tools, augmented with structural annotations and model-validation information. The service presents information about the modeled NMR ensemble, validation of experimental chemical shifts, residual dipolar couplings, distance and dihedral angle constraints, as well as validation scores based on empirical knowledge and databases. Vivaldi was designed for both expert NMR spectroscopists and casual non-expert users who wish to obtain a better grasp of the information content and quality of NMR structures in the public archive. Copyright © 2013 Wiley Periodicals, Inc.
Denis-Alpizar, Otoniel; Bemish, Raymond J; Meuwly, Markus
2017-03-21
Vibrational energy relaxation (VER) of diatomics following collisions with the surrounding medium is an important elementary process for modeling high-temperature gas flow. VER is characterized by two parameters: the vibrational relaxation time τ_vib and the state relaxation rates. Here the vibrational relaxation of CO(ν=0←ν=1) in Ar is considered for validating a computational approach to determine the vibrational relaxation time parameter (pτ_vib) using an accurate, fully dimensional potential energy surface. For lower temperatures, comparison with experimental data shows very good agreement, whereas at higher temperatures (up to 25 000 K), comparisons with an empirically modified model due to Park confirm its validity for CO in Ar. Additionally, the calculations provide insight into the importance of Δν>1 transitions that are ignored in typical applications of the Landau-Teller framework.
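As a rough illustration of how τ_vib and pτ_vib enter a flow model, a Landau-Teller-type relaxation can be sketched as follows; the values are arbitrary, and the paper's approach computes pτ_vib from a full potential energy surface rather than from this closed form.

```python
import math

def relax(E0, E_eq, tau_vib, t):
    """Landau-Teller form: vibrational energy relaxes exponentially
    toward its equilibrium value with time constant tau_vib."""
    return E_eq + (E0 - E_eq) * math.exp(-t / tau_vib)

def tau_at_pressure(p_tau_vib, p):
    """p*tau_vib is approximately pressure-independent at fixed T,
    so tau_vib scales inversely with pressure."""
    return p_tau_vib / p

# Illustrative values only (atm*s and atm), not the paper's results.
p_tau = 1.0e-6
tau = tau_at_pressure(p_tau, 2.0)
print(f"tau_vib at 2 atm: {tau:.1e} s")
print(f"excess vibrational energy left after one tau: {relax(1.0, 0.0, tau, tau):.3f}")
```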
Rapid hybridization of nucleic acids using isotachophoresis
Bercovici, Moran; Han, Crystal M.; Liao, Joseph C.; Santiago, Juan G.
2012-01-01
We use isotachophoresis (ITP) to control and increase the rate of nucleic acid hybridization reactions in free solution. We present a new physical model, validation experiments, and demonstrations of this assay. We studied the coupled physicochemical processes of preconcentration, mixing, and chemical reaction kinetics under ITP. Our experimentally validated model enables a closed form solution for ITP-aided reaction kinetics, and reveals a new characteristic time scale which correctly predicts order 10,000-fold speed-up of chemical reaction rate for order 100 pM reactants, and greater enhancement at lower concentrations. At 500 pM concentration, we measured a reaction time which is 14,000-fold lower than that predicted for standard second-order hybridization. The model and method are generally applicable to acceleration of reactions involving nucleic acids, and may be applicable to a wide range of reactions involving ionic reactants. PMID:22733732
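The characteristic-time argument above can be sketched numerically; the rate constant and the assumption that reaction time scales inversely with the ITP preconcentration factor are illustrative simplifications of the paper's model.

```python
def second_order_halftime(k_on, c0):
    """Characteristic time (s) of a standard second-order hybridization
    with equal initial concentrations c0 (M) and on-rate k_on (1/M/s)."""
    return 1.0 / (k_on * c0)

# Illustrative values: k_on is a typical order of magnitude for DNA
# hybridization kinetics, not a value from the paper.
k_on = 1.0e6        # 1/M/s
c0 = 100e-12        # 100 pM
t_std = second_order_halftime(k_on, c0)
speedup = 1.0e4     # order-of-magnitude ITP focusing factor from the abstract
print(f"standard hybridization time ~ {t_std:.0f} s ({t_std / 3600:.1f} h)")
print(f"with ~10,000x ITP speed-up   ~ {t_std / speedup:.1f} s")
```

This also shows why the enhancement grows at lower concentrations: t_std scales as 1/c0, so dilute reactants benefit most from focusing.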
A three-dimensional, time-dependent model of Mobile Bay
NASA Technical Reports Server (NTRS)
Pitts, F. H.; Farmer, R. C.
1976-01-01
A three-dimensional, time-variant mathematical model for momentum and mass transport in estuaries was developed and its solution implemented on a digital computer. The mathematical model is based on state and conservation equations applied to turbulent flow of a two-component, incompressible fluid having a free surface. Thus, buoyancy effects caused by density differences between the fresh and salt water, inertia from the river and tidal currents, and differences in hydrostatic head are taken into account. The conservation equations, which are partial differential equations, are solved numerically by an explicit, one-step finite difference scheme and the solutions displayed numerically and graphically. To test the validity of the model, a specific estuary for which scaled model and experimental field data are available, Mobile Bay, was simulated. Comparisons of velocity, salinity and water level data show that the model is valid and a viable means of simulating the hydrodynamics and mass transport in non-idealized estuaries.
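A minimal one-dimensional analogue of the explicit finite-difference approach described above (the actual model is three-dimensional and two-component) might look like this; the grid, coefficients, and boundary values are invented.

```python
def step(c, u, D, dx, dt):
    """One explicit (forward-time, centred-space) step of
    dc/dt = -u*dc/dx + D*d2c/dx2, with fixed end values."""
    n = len(c)
    new = c[:]
    for i in range(1, n - 1):
        adv = -u * (c[i + 1] - c[i - 1]) / (2.0 * dx)
        dif = D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx**2
        new[i] = c[i] + dt * (adv + dif)
    return new

dx, dt, u, D = 0.1, 0.001, 1.0, 0.1
assert D * dt / dx**2 <= 0.5, "explicit scheme stability limit violated"

# Salt front: salinity 1.0 at the seaward end, 0.0 upstream.
c = [1.0] + [0.0] * 49
for _ in range(1000):
    c = step(c, u, D, dx, dt)
print(f"salinity mid-domain after 1 s: {c[25]:.3f}")
```

Explicit one-step schemes like this trade simplicity for a stability constraint on dt, which is checked before time-stepping begins.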
DOE Office of Scientific and Technical Information (OSTI.GOV)
English, Shawn Allen; Nelson, Stacy Michelle; Briggs, Timothy
Presented is a model verification and validation effort using low-velocity impact (LVI) of carbon fiber reinforced polymer laminate experiments. A flat cylindrical indenter impacts the laminate with enough energy to produce delamination, matrix cracks and fiber breaks. Included in the experimental efforts are ultrasonic scans of the damage for qualitative validation of the models. However, the primary quantitative metrics of validation are the force time history measured through the instrumented indenter and initial and final velocities. The simulations, which are run on Sandia's Sierra finite element codes, consist of all physics and material parameters of importance as determined by a sensitivity analysis conducted on the LVI simulation. A novel orthotropic damage and failure constitutive model that is capable of predicting progressive composite damage and failure is described in detail and material properties are measured, estimated from micromechanics or optimized through calibration. A thorough verification and calibration to the accompanying experiments are presented. Special emphasis is given to the four-point bend experiment. For all simulations of interest, the mesh and material behavior is verified through extensive convergence studies. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution which is then compared to experimental output. The result is a quantifiable confidence in material characterization and model physics when simulating this phenomenon in structures of interest.
Lima, Gustavo F; Freitas, Victor C G; Araújo, Renan P; Maitelli, André L; Salazar, Andrés O
2017-09-15
The pipeline inspection using a device called Pipeline Inspection Gauge (PIG) is safe and reliable when the PIG is at low speeds during inspection. We built a Testing Laboratory, containing a testing loop and supervisory system, to study speed control techniques for PIGs. The objective of this work is to present and validate the Testing Laboratory, which will allow development of a speed controller for PIGs and solve an existing problem in the oil industry. The experimental methodology used throughout the project is also presented. We installed pressure transducers on pipeline outer walls to detect the PIG's movement and, with data from the supervisory system, calculated an average speed of 0.43 m/s. At the same time, the electronic board inside the PIG received data from the odometer and calculated an average speed of 0.45 m/s. We found an error of 4.44%, which is experimentally acceptable. The results showed that it is possible to successfully build a Testing Laboratory to detect the PIG's passage and estimate its speed. The validation of the Testing Laboratory using data from the odometer and its auxiliary electronics was very successful. Lastly, we hope to develop more research in the oil industry area using this Testing Laboratory.
Haley, David W
2011-09-01
The current study examined whether the psychological stress of the still-face (SF) task (i.e. stress resulting from a parent's unresponsiveness) is a valid laboratory stress paradigm for evaluating infant cortisol reactivity. Given that factors external to the experimental paradigm, such as arriving at a new place, may cause an elevation in cortisol secretion, we tested the hypothesis that infants would show a cortisol response to the SF task but not to a normal face-to-face (FF) task (control). Saliva was collected for cortisol measurement from 6-month-old infants (n = 31) randomly assigned to either a repeated SF task or to a continuous FF task. Parent-infant dyads were videotaped. Salivary cortisol concentration was measured at baseline, 20, and 30 min after the start of the procedure. Infant salivary cortisol concentrations showed a significant increase over time for the SF task but not for the FF task. The results provide new evidence that the repeated SF task provides a psychological challenge that is due to the SF condition rather than to some non-task-related factor; these results provide internal validity for the paradigm. The study offers new insight into the role of parent-infant interactions in the activation of the infant stress response system.
Solar-Diesel Hybrid Power System Optimization and Experimental Validation
NASA Astrophysics Data System (ADS)
Jacobus, Headley Stewart
As of 2008, 1.46 billion people, or 22 percent of the world's population, were without electricity. Many of these people live in remote areas where decentralized generation is the only method of electrification. Most mini-grids are powered by diesel generators, but new hybrid power systems are becoming a reliable method to incorporate renewable energy while also reducing total system cost. This thesis quantifies the measurable operational costs for an experimental hybrid power system in Sierra Leone. Two software programs, Hybrid2 and HOMER, are used during the system design and subsequent analysis. Experimental data from the installed system are used to validate the two programs and to quantify the savings created by each component within the hybrid system. This thesis bridges the gap between design optimization studies that frequently lack subsequent validation and experimental hybrid system performance studies.
Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.
2017-01-01
A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides a well-defined acceptance criterion, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criteria developed following the threshold approach are not only a function of the Comparison Error, E (which is the difference between experiments and simulations) but also take into account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood-contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test.
However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in the experiments, simulations, and the threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and for evaluating medical devices. PMID:28594889
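A hedged sketch of the threshold-based comparison on synthetic data (not the FDA nozzle measurements; the threshold value and the scatter are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
threshold = 600.0                        # assumed hemolysis shear-stress limit (Pa)
S = rng.normal(300.0, 20.0, size=30)     # simulated shear stresses ("S")
D = S + rng.normal(0.0, 15.0, size=30)   # experiments scatter about simulation ("D")

error = np.abs(S - D)                    # comparison error |S - D|
margin = np.abs(threshold - S)           # distance of simulation to the threshold

# One-sided Welch t-test: is |S-D| small relative to |Threshold-S|?
t, p = stats.ttest_ind(error, margin, equal_var=False)
print(f"mean |S-D| = {error.mean():.1f} Pa, mean |Threshold-S| = {margin.mean():.1f} Pa")
print("sufficiently validated for the COU" if t < 0 and p / 2 < 0.05
      else "not sufficiently validated")
```

The acceptance decision thus depends on how far the predictions sit from the safety threshold, not only on how closely simulation and experiment agree.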
Aliev, Abil E; Kulke, Martin; Khaneja, Harmeet S; Chudasama, Vijay; Sheppard, Tom D; Lanigan, Rachel M
2014-02-01
We propose a new approach for force field optimizations which aims at reproducing dynamics characteristics using biomolecular MD simulations, in addition to improved prediction of motionally averaged structural properties available from experiment. As the source of experimental data for dynamics fittings, we use 13C NMR spin-lattice relaxation times T1 of backbone and sidechain carbons, which allow determination of correlation times of both overall molecular and intramolecular motions. For structural fittings, we use motionally averaged experimental values of NMR J couplings. The proline residue and its derivative 4-hydroxyproline, with relatively simple cyclic structure and sidechain dynamics, were chosen for the assessment of the new approach in this work. Initially, grid search and simplexed MD simulations identified a large number of parameter sets which fit the experimental J couplings equally well. Using the Arrhenius-type relationship between the force constant and the correlation time, the available MD data for a series of parameter sets were analyzed to predict the value of the force constant that best reproduces the experimental timescale of the sidechain dynamics. Verification of the new force field (termed AMBER99SB-ILDNP) against NMR J couplings and correlation times showed consistent and significant improvements compared to the original force field in reproducing both structural and dynamics properties. The results suggest that matching experimental timescales of motions together with motionally averaged characteristics is a valid approach for force field parameter optimization. Such a comprehensive approach is not restricted to cyclic residues and can be extended to other amino acid residues, as well as to the backbone. Copyright © 2013 Wiley Periodicals, Inc.
Ravikumar, Balaguru; Parri, Elina; Timonen, Sanna; Airola, Antti; Wennerberg, Krister
2017-01-01
Due to relatively high costs and labor required for experimental profiling of the full target space of chemical compounds, various machine learning models have been proposed as cost-effective means to advance this process in terms of predicting the most potent compound-target interactions for subsequent verification. However, most of the model predictions lack direct experimental validation in the laboratory, making their practical benefits for drug discovery or repurposing applications largely unknown. Here, we therefore introduce and carefully test a systematic computational-experimental framework for the prediction and pre-clinical verification of drug-target interactions using a well-established kernel-based regression algorithm as the prediction model. To evaluate its performance, we first predicted unmeasured binding affinities in a large-scale kinase inhibitor profiling study, and then experimentally tested 100 compound-kinase pairs. The relatively high correlation of 0.77 (p < 0.0001) between the predicted and measured bioactivities supports the potential of the model for filling the experimental gaps in existing compound-target interaction maps. Further, we subjected the model to a more challenging task of predicting target interactions for a new candidate drug compound that lacks prior binding profile information. As a specific case study, we used tivozanib, an investigational VEGF receptor inhibitor with currently unknown off-target profile. Among 7 kinases with high predicted affinity, we experimentally validated 4 new off-targets of tivozanib, namely the Src-family kinases FRK and FYN A, the non-receptor tyrosine kinase ABL1, and the serine/threonine kinase SLK. Our subsequent experimental validation protocol effectively avoids any possible information leakage between the training and validation data, and therefore enables rigorous model validation for practical applications.
These results demonstrate that the kernel-based modeling approach offers practical benefits for probing novel insights into the mode of action of investigational compounds, and for the identification of new target selectivities for drug repurposing applications. PMID:28787438
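The kernel-based regression at the core of such a framework can be sketched in its dual (kernel ridge) form on toy data; the features, kernel, and affinities below are synthetic stand-ins, not the kinase profiling dataset or the paper's exact algorithm.

```python
import numpy as np

def kernel_ridge_fit(K, y, lam=1e-6):
    """Solve (K + lam*I) alpha = y for the dual coefficients."""
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))       # toy compound-target pair features
w = rng.normal(size=5)
y = X @ w                          # toy "binding affinities" (noiseless)

Xtr, Xte, ytr, yte = X[:30], X[30:], y[:30], y[30:]
K = Xtr @ Xtr.T                    # linear kernel between training pairs
alpha = kernel_ridge_fit(K, ytr)
pred = (Xte @ Xtr.T) @ alpha       # kernel between held-out and training pairs
r = np.corrcoef(pred, yte)[0, 1]
print(f"held-out correlation: {r:.3f}")
```

Holding out compound-kinase pairs before fitting, as in this toy split, is the simplest form of the leakage avoidance the abstract emphasizes.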
Modeling and characterization of multipath in global navigation satellite system ranging signals
NASA Astrophysics Data System (ADS)
Weiss, Jan Peter
The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere on or near the Earth, in real time and regardless of weather conditions. Since the system became operational, improvements in many areas have reduced systematic errors affecting GPS measurements such that multipath, defined as any signal taking a path other than the direct one, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per-satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates.
The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics to multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.
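Two of the ingredients above, the geometric delay of a ground reflection and a least-squares estimate of a reflection coefficient, can be sketched as follows; the grid-search estimator is a crude stand-in for the dissertation's algorithm, run on synthetic data.

```python
import math

def ground_multipath_delay(h, elev_deg):
    """Extra path length (m) of a ground-reflected signal for an antenna
    at height h (m) above a horizontal reflector, satellite at elev_deg."""
    return 2.0 * h * math.sin(math.radians(elev_deg))

def estimate_reflection_coeff(direct_amp, phases, measured):
    """Grid-search least-squares fit of a real reflection coefficient in
    [0, 1] to composite amplitudes direct_amp * |1 + a*exp(j*phi)|."""
    best_a, best_err = 0.0, float("inf")
    for k in range(101):
        a = k / 100.0
        err = sum((m - direct_amp * math.sqrt(1.0 + a * a + 2.0 * a * math.cos(p))) ** 2
                  for p, m in zip(phases, measured))
        if err < best_err:
            best_a, best_err = a, err
    return best_a

# Synthetic check: generate composite amplitudes with a = 0.30, recover it.
true_a = 0.30
phases = [0.2 * i for i in range(60)]
measured = [math.sqrt(1.0 + true_a**2 + 2.0 * true_a * math.cos(p)) for p in phases]
est = estimate_reflection_coeff(1.0, phases, measured)
print(f"2 m antenna, satellite at 30 deg: extra path {ground_multipath_delay(2.0, 30.0):.2f} m")
print(f"recovered reflection coefficient: {est:.2f}")
```

The delay formula also illustrates the airborne result quoted above: a small reflector-to-antenna separation bounds the geometric delay, while changing elevation (or aircraft attitude) modulates it rapidly.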
Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview
NASA Technical Reports Server (NTRS)
Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard
1996-01-01
The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal de-icing and anti-icing) and ANTICE (hot-gas and electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis icing program, and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions, for comparison with analytical predictions. This paper will present an overview of the test, including a detailed description of: (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results will be presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort at this point will be summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2 - The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3 - The Validation of ANTICE'.
From military to civil loadings: Preliminary numerical-based thorax injury criteria investigations.
Goumtcha, Aristide Awoukeng; Bodo, Michèle; Taddei, Lorenzo; Roth, Sébastien
2016-03-01
Effects of the impact of a mechanical structure on the human body are of great interest in the understanding of body trauma. Experimental tests on PMHS (Post Mortem Human Subjects) have led to first conclusions about the dangerousness of an impact by observing impact forces or displacement time histories. They have provided valuable data for the development and validation of numerical biomechanical models. These models, widely used in the framework of automotive crashworthiness, have led to the development of numerical-based injury criteria and tolerance thresholds. The aim of this process is to improve the safety of mechanical structures in interaction with the body. In a military context, investigations at both the experimental and numerical level are less advanced. For both military and civil frameworks, the literature lists a number of numerical analyses trying to propose injury mechanisms and tolerance thresholds based on biofidelic Finite Element (FE) models of different parts of the human body. However, the link between the two frameworks is not obvious, since many parameters differ: large-mass impacts at relatively low velocity for civil loadings (falls, automotive crashworthiness) and low-mass impacts at very high velocity for military loadings (ballistic, blast). In this study, different accident cases were investigated and replicated with a previously developed and validated FE model of the human thorax named the Hermaphrodite Universal Biomechanical YX model (HUBYX model). These previous validations included replications of standard experimental tests often used to validate models in the context of the automotive industry, experimental ballistic tests in high-speed dynamic impact, and numerical replication of blast loading tests ensuring its biofidelity. In order to extend the use of this model to other frameworks, some real-world accidents were reconstructed, and the consequences of these loadings on the FE model were explored.
These various numerical replications of accidents coming from different contexts raise the question of the ability of an FE model to correctly predict several kinds of trauma, from blast or ballistic impacts to falls, sports or automotive ones, in the context of investigations of numerical injury mechanisms and tolerance limits. Copyright © 2015 John Wiley & Sons, Ltd.
Kim, Min Kyung; Lane, Anatoliy; Kelley, James J; Lun, Desmond S
2016-01-01
Several methods have been developed to predict system-wide and condition-specific intracellular metabolic fluxes by integrating transcriptomic data with genome-scale metabolic models. While powerful in many settings, existing methods have several shortcomings, and it is unclear which method has the best accuracy in general because of limited validation against experimentally measured intracellular fluxes. We present a general optimization strategy for inferring intracellular metabolic flux distributions from transcriptomic data coupled with genome-scale metabolic reconstructions. It consists of two different template models called DC (determined carbon source model) and AC (all possible carbon sources model) and two different new methods called E-Flux2 (E-Flux method combined with minimization of the l2 norm) and SPOT (Simplified Pearson cOrrelation with Transcriptomic data), which can be chosen and combined depending on the availability of knowledge on the carbon source or objective function. This enables us to simulate a broad range of experimental conditions. We examined E. coli and S. cerevisiae as representative prokaryotic and eukaryotic microorganisms, respectively. The predictive accuracy of our algorithm was validated by calculating the uncentered Pearson correlation between predicted fluxes and measured fluxes. To this end, we compiled 20 experimental conditions (11 in E. coli and 9 in S. cerevisiae) of transcriptome measurements coupled with corresponding central carbon metabolism intracellular flux measurements determined by 13C metabolic flux analysis (13C-MFA), which is the largest dataset assembled to date for the purpose of validating inference methods for predicting intracellular fluxes. In both organisms, our method achieves an average correlation coefficient ranging from 0.59 to 0.87, outperforming a representative sample of competing methods.
Easy-to-use implementations of E-Flux2 and SPOT are available as part of the open-source package MOST (http://most.ccib.rutgers.edu/). Our method represents a significant advance over existing methods for inferring intracellular metabolic flux from transcriptomic data. It not only achieves higher accuracy, but it also combines into a single method a number of other desirable characteristics including applicability to a wide range of experimental conditions, production of a unique solution, fast running time, and the availability of a user-friendly implementation.
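The E-Flux idea of capping reaction bounds by expression before solving the flux-balance LP can be sketched on a toy four-reaction network; the network, expression values, and bound scaling are invented for illustration, not taken from the MOST implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Columns: uptake (-> A), R1 (A -> B), R2 (A -> B), biomass (B ->)
S = np.array([[1, -1, -1,  0],    # metabolite A balance
              [0,  1,  1, -1]])   # metabolite B balance

expression = {"R1": 0.8, "R2": 0.2}              # normalized transcript levels
ub = [10.0, 10.0 * expression["R1"], 10.0 * expression["R2"], 1000.0]
bounds = [(0.0, u) for u in ub]                  # expression-capped upper bounds

# Maximize biomass flux v[3]  <=>  minimize -v[3], subject to S v = 0.
res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=np.zeros(2),
              bounds=bounds, method="highs")
v = res.x
print(f"biomass flux: {v[3]:.2f}  (R1 = {v[1]:.2f}, R2 = {v[2]:.2f})")
```

Scaling the bounds by expression forces flux through the highly expressed branch (R1) while the lowly expressed branch (R2) is throttled, which is the qualitative behavior expression-constrained methods aim for.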
Scott, C E H; Eaton, M J; Nutton, R W; Wade, F A; Evans, S L; Pankaj, P
2017-01-01
Up to 40% of unicompartmental knee arthroplasty (UKA) revisions are performed for unexplained pain, which may be caused by elevated proximal tibial bone strain. This study investigates the effect of tibial component metal backing and polyethylene thickness on bone strain in a cemented fixed-bearing medial UKA using a finite element model (FEM) validated experimentally by digital image correlation (DIC) and acoustic emission (AE). A total of ten composite tibias implanted with all-polyethylene (AP) and metal-backed (MB) tibial components were loaded to 2500 N. Cortical strain was measured using DIC and cancellous microdamage using AE. FEMs were created and validated, and polyethylene thickness was varied from 6 mm to 10 mm. The volume of cancellous bone exposed to < -3000 µε (pathological loading) and < -7000 µε (yield point) minimum principal (compressive) microstrain and to > 3000 µε and > 7000 µε maximum principal (tensile) microstrain was computed. Experimental AE data and the FEM volume of cancellous bone with compressive strain < -3000 µε correlated strongly: R = 0.947, R² = 0.847, percentage error 12.5% (p < 0.001). DIC and FEM data also correlated: R = 0.838, R² = 0.702, percentage error 4.5% (p < 0.001). FEM strain patterns included MB lateral edge concentrations and AP concentrations at the keel, peg and region of load application. Cancellous strains were higher in AP implants at all loads: 2.2 times (10 mm) to 3.2 times (6 mm) the volume of cancellous bone compressively strained < -7000 µε. AP tibial components display greater volumes of pathologically overstrained cancellous bone than MB implants of the same geometry. Increasing AP thickness does not overcome these pathological forces and comes at the cost of greater bone resection. Cite this article: C. E. H. Scott, M. J. Eaton, R. W. Nutton, F. A. Wade, S. L. Evans, P. Pankaj.
Metal-backed versus all-polyethylene unicompartmental knee arthroplasty: Proximal tibial strain in an experimentally validated finite element model. Bone Joint Res 2017;6:22-30. DOI:10.1302/2046-3758.61.BJR-2016-0142.R1. © 2017 Scott et al.
Experimental design data for the biosynthesis of citric acid using Central Composite Design method.
Kola, Anand Kishore; Mekala, Mallaiah; Goli, Venkat Reddy
2017-06-01
In the present investigation, we report the statistical design and optimization of significant variables for the microbial production of citric acid from sucrose in the presence of the filamentous fungus A. niger NCIM 705. Various combinations of experiments were designed with the Central Composite Design (CCD) of Response Surface Methodology (RSM) for the production of citric acid as a function of six variables: initial sucrose concentration, initial pH of the medium, fermentation temperature, incubation time, stirrer rotational speed, and oxygen flow rate. From the experimental data, a statistical model for this process has been developed. The optimum conditions reported in the present article are an initial sucrose concentration of 163.6 g/L, initial medium pH of 5.26, stirrer rotational speed of 247.78 rpm, incubation time of 8.18 days, fermentation temperature of 30.06 °C and oxygen flow rate of 1.35 lpm. Under optimum conditions the predicted maximum citric acid concentration is 86.42 g/L. Experimental validation carried out under the optimal values gave a citric acid concentration of 82.0 g/L. The model is able to represent the experimental data, and the agreement between the model and the experiments is good.
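A central composite design in coded units combines 2^k factorial corners, 2k axial (star) points, and replicated center points. A minimal sketch of how such a design matrix is built (this is a generic CCD generator for illustration, not the authors' actual run table; shown for k = 3 rather than the six factors of the study):

```python
from itertools import product

def central_composite_design(k, alpha=None, n_center=1):
    """Build a central composite design in coded units for k factors:
    2^k factorial corners at +/-1, 2k axial points at +/-alpha,
    and n_center replicated center points."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25  # standard choice for a rotatable design
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for sign in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = sign
            axial.append(pt)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centers

design = central_composite_design(3, n_center=6)
print(len(design))  # 2^3 + 2*3 + 6 = 20 runs
```

Coded levels are then mapped linearly onto the physical ranges of each factor (sucrose concentration, pH, temperature, and so on), and a second-order polynomial is fitted to the measured responses to locate the optimum.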
NASA Astrophysics Data System (ADS)
Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki
2016-04-01
The smartphone application newly developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU), in addition to an already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable with that of a conventional laser displacement sensor.
NASA Astrophysics Data System (ADS)
Pennacchi, Paolo
2008-04-01
The modelling of the unbalanced magnetic pull (UMP) in generators and the experimental validation of the proposed method are presented in this paper. The UMP is one of the most remarkable effects of electromechanical interaction in rotating machinery. As a consequence of rotor eccentricity, the imbalance of the electromagnetic forces acting between rotor and stator generates a net radial force. This phenomenon can be avoided by means of careful assembly and manufacture in small, stiff machines such as electrical motors. On the contrary, eccentricity of the active part of the rotor with respect to the stator is unavoidable in the large generators of power plants, because they operate above their first critical speed and are supported by oil-film bearings. In the first part of the paper, a method for calculating the UMP force is described. This model is more general than those available in the literature, which are limited to circular orbits. The model is based on the actual position of the rotor inside the stator, and therefore on the actual air-gap distribution, regardless of the orbit type. The closed form of the nonlinear UMP force components is presented. In the second part, the experimental validation of the proposed model is presented. The time-domain dynamical behaviour of a steam turbo-generator of a power plant is considered, and it is shown that the model is able to reproduce the dynamical effects due to the excitation of the magnetic field in the generator.
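The physical origin of the UMP can be sketched numerically: for an eccentric rotor the air gap varies around the circumference, the flux density (and hence the Maxwell stress) is larger where the gap is smaller, and integrating the stress yields a net pull toward the minimum gap. This is a deliberately simplified constant-MMF sketch, not the paper's closed-form nonlinear model:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def ump_force(mmf, g0, ex, ey, radius, length, n=3600):
    """Net radial force from Maxwell stress over a non-uniform air gap.
    Air gap g(theta) = g0 - ex*cos(theta) - ey*sin(theta); flux density
    B = mu0 * mmf / g(theta), assuming a constant magnetomotive force."""
    fx = fy = 0.0
    dtheta = 2.0 * math.pi / n
    for i in range(n):
        theta = i * dtheta
        g = g0 - ex * math.cos(theta) - ey * math.sin(theta)
        b = MU0 * mmf / g
        p = b * b / (2.0 * MU0)  # radial Maxwell stress, N/m^2
        fx += p * math.cos(theta) * radius * length * dtheta
        fy += p * math.sin(theta) * radius * length * dtheta
    return fx, fy

# Illustrative numbers only (10 kA-turns MMF, 50 mm gap, 5 mm eccentricity)
fx, fy = ump_force(mmf=1e4, g0=0.05, ex=0.005, ey=0.0, radius=0.6, length=2.0)
# The net pull points toward the minimum air gap (positive x here)
```

Because this sketch works from the instantaneous rotor position rather than an assumed orbit shape, it mirrors the paper's key generalization: the force follows the actual air-gap distribution whatever the orbit type.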
Translating Fatigue to Human Performance.
Enoka, Roger M; Duchateau, Jacques
2016-11-01
Despite flourishing interest in the topic of fatigue-as indicated by the many presentations on fatigue at the 2015 Annual Meeting of the American College of Sports Medicine-surprisingly little is known about its effect on human performance. There are two main reasons for this dilemma: 1) the inability of current terminology to accommodate the scope of the conditions ascribed to fatigue, and 2) a paucity of validated experimental models. In contrast to current practice, a case is made for a unified definition of fatigue to facilitate its management in health and disease. On the basis of the classic two-domain concept of Mosso, fatigue is defined as a disabling symptom in which physical and cognitive function is limited by interactions between performance fatigability and perceived fatigability. As a symptom, fatigue can only be measured by self-report, quantified as either a trait characteristic or a state variable. One consequence of such a definition is that the word fatigue should not be preceded by an adjective (e.g., central, mental, muscle, peripheral, and supraspinal) to suggest the locus of the changes responsible for an observed level of fatigue. Rather, mechanistic studies should be performed with validated experimental models to identify the changes responsible for the reported fatigue. As indicated by three examples (walking endurance in old adults, time trials by endurance athletes, and fatigue in persons with multiple sclerosis) discussed in the review, however, it has proven challenging to develop valid experimental models of fatigue. The proposed framework provides a foundation to address the many gaps in knowledge of how laboratory measures of fatigue and fatigability affect real-world performance.
Mesquita, Marta; Dias Pereira, António; Bettencourt-Dias, Mónica; Chaves, Paula; Pereira-Leal, José B.
2016-01-01
Barrett’s esophagus is the major risk factor for esophageal adenocarcinoma. It has a low but non-neglectable risk, high surveillance costs and no reliable risk stratification markers. We sought to identify early biomarkers, predictive of Barrett’s malignant progression, using a meta-analysis approach on gene expression data. This in silico strategy was followed by experimental validation in a cohort of patients with extended follow up from the Instituto Português de Oncologia de Lisboa de Francisco Gentil EPE (Portugal). Bioinformatics and systems biology approaches singled out two candidate predictive markers for Barrett’s progression, CYR61 and TAZ. Although previously implicated in other malignancies and in epithelial-to-mesenchymal transition phenotypes, our experimental validation shows for the first time that CYR61 and TAZ have the potential to be predictive biomarkers for cancer progression. Experimental validation by reverse transcriptase quantitative PCR and immunohistochemistry confirmed the up-regulation of both genes in Barrett’s samples associated with high-grade dysplasia/adenocarcinoma. In our cohort CYR61 and TAZ up-regulation ranged from one to ten years prior to progression to adenocarcinoma in Barrett’s esophagus index samples. Finally, we found that CYR61 and TAZ over-expression is correlated with early focal signs of epithelial to mesenchymal transition. Our results highlight both CYR61 and TAZ genes as potential predictive biomarkers for stratification of the risk for development of adenocarcinoma and suggest a potential mechanistic route for Barrett’s esophagus neoplastic progression. PMID:27583562
2016-06-02
Retrieval of droplet-size density distribution from multiple-field-of-view cross-polarized lidar signals: theory and experimental validation...theoretical and experimental studies of multiple scattering and multiple-field-of-view (MFOV) lidar detection have made possible the retrieval of cloud...droplet cloud are typical of Rayleigh scattering, with a signature close to a dipole (phase function quasi-flat and a zero-depolarization ratio
2001-08-30
Body with Thermo-Chemical Distribution of Heat-Protected System. In: Physical and Gasdynamic Phenomena in Supersonic Flows Over Bodies. Edit. By...Final Report on ISTC Contract # 1809p, Parametric Study of Advanced Mixing of Fuel/Oxidant System in High Speed Gaseous Flows and Experimental Validation Planning
Experimental validation of ultrasonic guided modes in electrical cables by optical interferometry.
Mateo, Carlos; de Espinosa, Francisco Montero; Gómez-Ullate, Yago; Talavera, Juan A
2008-03-01
In this work, the dispersion curves of elastic waves propagating in electrical cables and in bare copper wires are obtained theoretically and validated experimentally. The theoretical model, based on the Gazis equations formulated according to the global matrix methodology, is solved numerically. Viscoelasticity and attenuation are modeled using the Kelvin-Voigt model. Experimental tests are carried out using interferometry. There is good agreement between the simulations and the experiments despite the peculiarities of electrical cables.
NASA Astrophysics Data System (ADS)
Wagner, David R.; Holmgren, Per; Skoglund, Nils; Broström, Markus
2018-06-01
The design and validation of a newly commissioned entrained flow reactor is described in the present paper. The reactor was designed for advanced studies of fuel conversion and ash formation in powder flames, and the capabilities of the reactor were experimentally validated using two different solid biomass fuels. The drop tube geometry was equipped with a flat flame burner to heat and support the powder flame, optical access ports, a particle image velocimetry (PIV) system for in situ conversion monitoring, and probes for extraction of gases and particulate matter. A detailed description of the system is provided based on simulations and measurements, establishing the detailed temperature distribution and gas flow profiles. Mass balance closures of approximately 98% were achieved by combining gas analysis and particle extraction. Biomass fuel particles were successfully tracked using shadow-imaging PIV, and the resulting data were used to determine the size, shape, velocity, and residence time of converting particles. Successful extractive sampling of coarse and fine particles during combustion while retaining their morphology was demonstrated, opening the way for detailed time-resolved studies of rapid ash transformation reactions; in the validation experiments, clear and systematic fractionation trends for K, Cl, S, and Si were observed for the two fuels tested. The combination of in situ access, accurate residence time estimation, and precise particle sampling for subsequent chemical analysis allows for a wide range of future studies, with implications and possibilities discussed in the paper.
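Residence time from tracked particle data can be estimated by summing segment traversal times ds/v along the trajectory. A minimal sketch under the assumption of straight-line motion between PIV stations (the variable names and station data are illustrative, not from the paper):

```python
def residence_time(positions, velocities):
    """Estimate particle residence time by summing segment traversal
    times ds / v_avg along a tracked trajectory (PIV-style data).
    positions: distances along the reactor axis (m), monotonically
    increasing; velocities: speed measured at each station (m/s)."""
    t = 0.0
    for i in range(1, len(positions)):
        ds = positions[i] - positions[i - 1]
        v_avg = 0.5 * (velocities[i] + velocities[i - 1])
        t += ds / v_avg
    return t

z = [0.0, 0.1, 0.2, 0.3]   # m along the reactor axis (illustrative)
v = [1.0, 1.0, 2.0, 2.0]   # m/s at each station (illustrative)
print(round(residence_time(z, v), 4))  # 0.2167 s for this toy track
```

Finer station spacing tightens the estimate, which is why in situ tracking at high frame rates matters for the rapid ash transformation studies mentioned above.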
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-05
... modeling needs and experimental validation techniques for complex flow phenomena in and around offshore... experimental validation. Ultimately, research in this area may lead to significant improvements in wind plant... meeting will consist of an initial plenary session in which invited speakers will survey available...
Validation of a Monte Carlo simulation of the Inveon PET scanner using GATE
NASA Astrophysics Data System (ADS)
Lu, Lijun; Zhang, Houjin; Bian, Zhaoying; Ma, Jianhua; Feng, Qiangjin; Chen, Wufan
2016-08-01
The purpose of this study is to validate the application of the GATE (Geant4 Application for Tomographic Emission) Monte Carlo simulation toolkit for modelling the performance characteristics of the Siemens Inveon small animal PET system. The simulation results were validated against experimental/published data in accordance with the NEMA NU-4 2008 protocol for standardized evaluation of spatial resolution, sensitivity, scatter fraction (SF) and noise equivalent counting rate (NECR) of a preclinical PET system. The radial, tangential and axial spatial resolutions of the simulated and experimental results agreed to within 18%. The simulated peak NECR of the mouse-size phantom agreed with the experimental result, while for the rat-size phantom the simulated value was higher than the experimental one. The simulated and experimental SFs of the mouse- and rat-size phantoms agreed to within 2%. These results demonstrate the feasibility of our GATE model to accurately simulate, within certain limits, all major performance characteristics of the Inveon PET system.
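The SF and NECR figures compared above follow the standard counting-rate definitions (SF as the scattered share of unscattered-plus-scattered coincidences, NECR as T²/(T+S+R)). A minimal sketch with illustrative count rates (not values from the study):

```python
def scatter_fraction(trues, scatters):
    """Scatter fraction: scattered / (true + scattered) coincidences."""
    return scatters / (trues + scatters)

def necr(trues, scatters, randoms):
    """Noise equivalent count rate: T^2 / (T + S + R)."""
    return trues ** 2 / (trues + scatters + randoms)

# e.g. 100 kcps trues, 20 kcps scatter, 30 kcps randoms (illustrative)
print(scatter_fraction(100.0, 20.0))  # ≈ 0.167
print(necr(100.0, 20.0, 30.0))        # ≈ 66.7 kcps
```

NECR penalizes scatter and randoms quadratically relative to trues, which is why a simulation that slightly underestimates randoms for the larger rat-size phantom can noticeably overestimate peak NECR while still matching the SF.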
Revisit the faster-is-slower effect for an exit at a corner
NASA Astrophysics Data System (ADS)
Chen, Jun Min; Lin, Peng; Wu, Fan Yu; Li Gao, Dong; Wang, Guo Yuan
2018-02-01
The faster-is-slower (FIS) effect, whereby a crowd moving at a high enough velocity can significantly increase the time needed to escape through an exit, is an interesting phenomenon in pedestrian dynamics. The phenomenon has been studied widely and has been experimentally verified in different systems of discrete particles flowing through a centre exit. Experimentally validating this phenomenon with people under high pressure is difficult due to ethical issues. A mouse, like a human, is a self-driven, soft-bodied creature that behaves competitively under stress. Therefore, mice were used to escape through an exit at a corner. A number of repeated tests were conducted, and the average escape time per mouse at different levels of stimulus was analysed. The escape times do not increase obviously with the level of stimulus for the corner exit, contrary to experiments with a centre exit. The experimental results show that the FIS effect is not necessarily a universal law for every discrete system. This observation could inform building design: relocating exits to room corners may avoid the formation of the FIS effect.
A Possible Tool for Checking Errors in the INAA Results, Based on Neutron Data and Method Validation
NASA Astrophysics Data System (ADS)
Cincu, Em.; Grigore, Ioana Manea; Barbos, D.; Cazan, I. L.; Manu, V.
2008-08-01
This work presents preliminary results of a new type of possible application in INAA experiments of elemental analysis, useful for checking errors that occur during the investigation of unknown samples; it relies on INAA method validation experiments and the accuracy of neutron data from the literature. The paper comprises two sections. The first briefly presents the steps of the experimental tests carried out for INAA method validation and for establishing the 'ACTIVA-N' laboratory performance, which also illustrates the laboratory's evolution toward achieving that performance. Section 2 presents our recent INAA results on CRMs, whose interpretation opens a discussion about the usefulness of a tool for checking possible errors that differs from the usual statistical procedures. The questionable aspects and the requirements for developing a practical checking tool are discussed.
NASA Astrophysics Data System (ADS)
Kim, Hyun-Tae; Romanelli, M.; Yuan, X.; Kaye, S.; Sips, A. C. C.; Frassinetti, L.; Buchanan, J.; Contributors, JET
2017-06-01
This paper presents for the first time a statistical validation of predictive TRANSP simulations of plasma temperature using two transport models, GLF23 and TGLF, over a database of 80 baseline H-mode discharges in JET-ILW. While the accuracy of the predicted Te with TRANSP-GLF23 is affected by plasma collisionality, the dependency of the predictions on collisionality is less significant when using TRANSP-TGLF, indicating that the latter model has a broader applicability across plasma regimes. TRANSP-TGLF also shows good agreement of the predicted Ti with experimental measurements, allowing for a more accurate prediction of the neutron yields. The impact of the input data and assumptions prescribed in the simulations is also investigated in this paper. The statistical validation and the assessment of the uncertainty level in predictive TRANSP simulations for JET-ILW-DD will constitute the basis for the extrapolation to JET-ILW-DT experiments.
Experimental validation of docking and capture using space robotics testbeds
NASA Technical Reports Server (NTRS)
Spofford, John; Schmitz, Eric; Hoff, William
1991-01-01
This presentation describes the application of robotic and computer vision systems to validate docking and capture operations for space cargo transfer vehicles. Three applications are discussed: (1) air bearing systems in two dimensions that yield high quality free-flying, flexible, and contact dynamics; (2) validation of docking mechanisms with misalignment and target dynamics; and (3) computer vision technology for target location and real-time tracking. All the testbeds are supported by a network of engineering workstations for dynamics and controls analyses. Dynamic simulation of multibody rigid and elastic systems is performed with the TREETOPS code. MATRIXx/System-Build and PRO-MATLAB/Simulab are the tools for control design and analysis using classical and modern techniques such as H-infinity and LQG/LTR. SANDY is a general design tool for numerically optimizing a multivariable robust compensator with a user-defined structure. Mathematica and Macsyma are used to derive dynamic and kinematic equations symbolically.
NASA Astrophysics Data System (ADS)
Marquet, F.; Pernot, M.; Aubry, J.-F.; Montaldo, G.; Marsac, L.; Tanter, M.; Fink, M.
2009-05-01
A non-invasive protocol for transcranial brain tissue ablation with ultrasound is studied and validated in vitro. The skull induces strong aberrations in both phase and amplitude, resulting in a severe degradation of the beam shape. Adaptive corrections of the distortions induced by the skull bone are performed using a prior 3D computed tomography (CT) scan of the skull bone structure. These CT scan data are used as entry parameters in a finite-difference time-domain (FDTD) simulation of the full wave propagation equation. A numerical computation is used to deduce the impulse response relating the targeted location to the ultrasound therapeutic array, thus providing a virtual time-reversal mirror. This impulse response is then time-reversed and transmitted experimentally by a therapeutic array positioned in exactly the same reference frame as the one used during the CT scan acquisition. In vitro experiments are conducted on monkey and human skull specimens using an array of 300 transmit elements working at a central frequency of 1 MHz. These experiments show precise refocusing of the ultrasonic beam at the targeted location, with a positioning error lower than 0.7 mm. The complete validation of this transcranial adaptive focusing procedure paves the way to in vivo animal and human transcranial HIFU investigations.
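The focusing principle can be illustrated in a delay-only simplification: if the simulation gives the travel time from the target to each array element, firing the farthest element first makes all wavefronts arrive at the target simultaneously. A minimal sketch (illustrative numbers; the actual method time-reverses the full simulated impulse responses, which also corrects amplitude aberration):

```python
def time_reversal_delays(travel_times):
    """Convert simulated target-to-element travel times into transmit
    delays: the element with the longest path fires first, so that all
    wavefronts arrive at the target at the same instant."""
    t_max = max(travel_times)
    return [t_max - t for t in travel_times]

travel = [30.0e-6, 31.5e-6, 33.0e-6]   # s, per element (illustrative)
delays = time_reversal_delays(travel)
arrivals = [d + t for d, t in zip(delays, travel)]
# All arrival times coincide: a coherent focus at the target
assert all(abs(a - arrivals[0]) < 1e-12 for a in arrivals)
```

Time reversal exploits the reciprocity of the wave equation: any propagation path from target to element is also a valid path from element to target, so replaying the (simulated) received field backwards in time refocuses it through the aberrating skull.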
NASA Astrophysics Data System (ADS)
Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank
2016-10-01
Thermoforming of continuously fiber-reinforced thermoplastics (CFRTP) is ideally suited to thin-walled and complex-shaped products. By means of forming simulation, an initial validation of the producibility of a specific geometry, an optimization of the forming process, and the prediction of fiber reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore, a method is presented that enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. As a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus/Explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by the proposed validation method.
Validation of an automated mite counter for Dermanyssus gallinae in experimental laying hen cages.
Mul, Monique F; van Riel, Johan W; Meerburg, Bastiaan G; Dicke, Marcel; George, David R; Groot Koerkamp, Peter W G
2015-08-01
For integrated pest management (IPM) programs to be maximally effective, monitoring of the growth and decline of pest populations is essential. Here, we present the validation results of a new automated monitoring device for the poultry red mite (Dermanyssus gallinae), a serious pest in laying hen facilities worldwide. This monitoring device (called an "automated mite counter") was validated in experimental laying hen cages with live birds and a growing population of D. gallinae. The validation study yielded 17 data points pairing the 'number of mites counted' by the automated mite counter with the 'number of mites present' in the experimental laying hen cages. The study demonstrated that the automated mite counter was able to track the D. gallinae population effectively. A wider evaluation showed that this automated mite counter can become a useful tool in the IPM of D. gallinae in laying hen facilities.
Kerry, Matthew J; Embretson, Susan E
2017-01-01
Future time perspective (FTP) is defined as "perceptions of the future as being limited or open-ended" (Lang and Carstensen, 2002; p. 125). The construct figures prominently in both workplace and retirement domains, but the age predictions compete: workplace research predicts decreasing FTP with age, whereas retirement scholars predict increasing FTP with age. For the first time, these competing predictions are pitted against each other in an experimental manipulation of subjective life expectancy (SLE). A sample of N = 207 older adults (age 45-60) working full-time (>30 h/week) were randomly assigned to SLE questions framed as either 'Live-to' or 'Die-by' to evaluate the competing predictions for FTP. Results indicate general support for decreasing FTP with age, indicated by independent-samples t-tests showing lower FTP in the 'Die-by' framing condition. Further general linear model analyses were conducted to test for interaction effects of retirement planning with the experimental framings on FTP and intended retirement. While retirement planning buffered the decrease in FTP, simple effects also revealed that retirement planning increased intentions for sooner retirement, whereas lack of planning increased intentions for later retirement. Discussion centers on the practical implications of our findings and the consequences for validity evidence in future empirical research on FTP in both workplace and retirement domains.
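The framing comparison above rests on an independent-samples t-test. A minimal sketch of the Welch (unequal-variances) form of that statistic, with invented toy scores rather than the study's data:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples with
    possibly unequal variances: (mean_a - mean_b) / sqrt(va/na + vb/nb)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

live_to = [4.1, 3.8, 4.5, 4.0, 4.2]   # illustrative FTP scores
die_by = [3.2, 3.5, 3.0, 3.4, 3.1]
t = welch_t(live_to, die_by)
# Positive t: higher FTP under 'Live-to' framing, matching the reported
# direction of the effect (lower FTP in the 'Die-by' condition)
```

The p-value then comes from the t-distribution with Welch-Satterthwaite degrees of freedom; libraries such as scipy expose this directly (`ttest_ind(..., equal_var=False)`).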
Using Numerical Modeling to Simulate Space Capsule Ground Landings
NASA Technical Reports Server (NTRS)
Heymsfield, Ernie; Fasanella, Edwin L.
2009-01-01
Experimental work is being conducted at the National Aeronautics and Space Administration's (NASA) Langley Research Center (LaRC) to investigate the ground landing capabilities of the Orion crew exploration vehicle (CEV). The Orion capsule is NASA's replacement for the Space Shuttle. The Orion capsule will service the International Space Station and be used for future space missions to the Moon and to Mars. To evaluate the feasibility of Orion ground landings, a series of capsule impact tests is being performed at the NASA Langley Landing and Impact Research Facility (LandIR). The experimental results derived at LandIR provide a means to validate and calibrate the nonlinear dynamic finite element models that are also being developed during this study. Because of the high cost and time involvement intrinsic to full-scale testing, numerical simulations are favored over experimental work. Once a numerical model is validated against actual test responses, impact simulations will be conducted to study multiple impact scenarios not practical to test. Twenty-one swing tests using the LandIR gantry were conducted during the June 07 through October 07 time period to evaluate the Orion's impact response. Results for two capsule initial pitch angles, 0° and -15°, along with their computer simulations using LS-DYNA are presented in this article. A soil-vehicle friction coefficient of 0.45 was determined by comparing the test stopping distance with computer simulations. In addition, soil modeling accuracy is assessed by comparing vertical penetrometer impact tests with computer simulations for the soil model used during the swing tests.
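A first-order sanity check on a friction coefficient inferred from stopping distance follows from constant-deceleration kinematics, d = v0²/(2µg). A minimal sketch (the velocity and distance below are invented for illustration; the article's µ = 0.45 was obtained by matching LS-DYNA runs to test data, not from this formula):

```python
G = 9.81  # gravitational acceleration, m/s^2

def friction_from_stopping(v0, distance):
    """Back out an effective sliding friction coefficient from a
    horizontal impact velocity and measured stopping distance,
    assuming constant frictional deceleration: mu = v0^2 / (2 * g * d)."""
    return v0 ** 2 / (2.0 * G * distance)

# Illustrative numbers chosen so the toy estimate lands near 0.45
mu = friction_from_stopping(v0=8.0, distance=7.25)
print(round(mu, 2))  # 0.45
```

The full simulation is needed because the capsule also penetrates and plows the soil, so the effective deceleration is not constant; the closed-form value only brackets the expected magnitude.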
Sahoo, Debasis; Robbe, Cyril; Deck, Caroline; Meyer, Frank; Papy, Alexandre; Willinger, Remy
2016-11-01
The main objective of this study is to develop a methodology to assess this risk based on experimental tests versus numerical predictive head injury simulations. A total of 16 non-lethal projectile (NLP) impacts were conducted against a rigid force plate at three different impact velocities (120, 72 and 55 m/s), and the force/deformation-time data were used for the validation of a finite element (FE) NLP model. Good accordance between experimental and simulation data was obtained during validation of the FE NLP, with a high correlation value (>0.98) and a peak force discrepancy of less than 3%. A state-of-the-art finite element head model with enhanced brain and skull material laws and specific head injury criteria was used for numerical computation of NLP impacts. Frontal and lateral FE NLP impacts to the head model at different velocities were performed under LS-DYNA. It is the very first time that the lethality of NLP is assessed by axonal strain computation to predict diffuse axonal injury (DAI) in NLP impacts to the head. In case of temporo-parietal impact the min-max risk of DAI is 0-86%. At velocities above 99.2 m/s there is a greater than 50% risk of DAI for temporo-parietal impacts. All the medium- and high-velocity impacts are liable to cause skull fracture, with a percentage risk higher than 90%. This study provides a tool for realistic injury (DAI and skull fracture) assessment during NLP impacts to the human head. Copyright © 2016 Elsevier Ltd. All rights reserved.
van de Streek, Jacco; Neumann, Marcus A
2010-10-01
This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry-breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect.
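The indicator used above is a straightforward root-mean-square over matched atom positions. A minimal sketch (the function name and toy coordinates are ours; real use would first map the minimized structure back onto the experimental setting and exclude H atoms):

```python
import math

def rms_displacement(coords_a, coords_b):
    """R.m.s. Cartesian displacement between matched atom positions,
    in the same units as the input coordinates (e.g. Angstrom)."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

expt = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]   # toy experimental positions
mini = [(0.1, 0.0, 0.0), (1.5, 0.1, 0.0)]   # toy minimized positions
print(round(rms_displacement(expt, mini), 6))  # 0.1 for this toy pair
```

Against the paper's statistics, a toy value of 0.1 Å would sit near the 0.095 Å average for correct structures, well below the 0.25 Å threshold flagged as suspicious.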
Lance, Blake W.; Smith, Barton L.
2016-06-23
Transient convection has been investigated experimentally for the purpose of providing Computational Fluid Dynamics (CFD) validation benchmark data. A specialized facility for validation benchmark experiments called the Rotatable Buoyancy Tunnel was used to acquire thermal and velocity measurements of flow over a smooth, vertical heated plate. The initial condition was forced convection downward, with subsequent transition to mixed convection, ending with natural convection upward after a flow reversal. Data acquisition through the transient was repeated for ensemble-averaged results. With simple flow geometry, validation data were acquired at the benchmark level. All boundary conditions (BCs) were measured and their uncertainties quantified. Temperature profiles on all four walls and the inlet were measured, as well as the as-built test section geometry. Inlet velocity profiles and turbulence levels were quantified using Particle Image Velocimetry. System Response Quantities (SRQs) were measured for comparison with CFD outputs and include velocity profiles, wall heat flux, and wall shear stress. Extra effort was invested in documenting and preserving the validation data. Details about the experimental facility, instrumentation, experimental procedure, materials, BCs, and SRQs are made available through this paper; the latter two are available for download and the other details are included in this work.
Computational fluid dynamics modeling of laboratory flames and an industrial flare.
Singh, Kanwar Devesh; Gangadharan, Preeti; Chen, Daniel H; Lou, Helen H; Li, Xianchang; Richmond, Peyton
2014-11-01
A computational fluid dynamics (CFD) methodology for simulating the combustion process has been validated with experimental results. Three different types of experimental setups were used to validate the CFD model. These setups include an industrial-scale flare setup and two lab-scale flames. The CFD study also involved three different fuels: C3H6/CH/Air/N2, C2H4/O2/Ar and CH4/Air. In the first setup, flare efficiency data from the Texas Commission on Environmental Quality (TCEQ) 2010 field tests were used to validate the CFD model. In the second setup, a McKenna burner with flat flames was simulated. Temperature and mass fractions of important species were compared with the experimental data. Finally, results of an experimental study done at Sandia National Laboratories to generate a lifted jet flame were used for the purpose of validation. The reduced 50-species mechanism, LU 1.1, the realizable k-epsilon turbulence model, and the EDC turbulence-chemistry interaction model were used for this work. Flare efficiency, axial profiles of temperature, and mass fractions of various intermediate species obtained in the simulation were compared with experimental data, and a good agreement between the profiles was clearly observed. In particular, the simulation match with the TCEQ 2010 flare tests has been significantly improved (within 5% of the data) compared to the results reported by Singh et al. in 2012. Validation of the speciated flat flame data supports the view that flares can be a primary source of formaldehyde emission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mangina, R. S.; Enloe, C. L.; Font, G. I.
2015-11-15
We present an experimental case study of time-resolved force production by an aerodynamic plasma actuator immersed in various mixtures of electropositive (N2) and electronegative gases (O2 and SF6) at atmospheric pressure using a fixed AC high-voltage input of 16 kV peak amplitude at 200 Hz frequency. We have observed distinct changes in the discharge structures during both negative- and positive-going voltage half-cycles, with corresponding variations in the actuator's force production: a ratio of 4:1 in the impulse produced by the negative-going half-cycle of the discharge among the various gas mixtures we explored, 2:1 in the impulse produced by the positive-going half-cycle, and cases in which the negative-going half-cycle dominates force production (by a ratio of 1.5:1), where the half-cycles produce identical force levels, and where the positive-going half-cycle dominates (by a ratio of 1:5). We also present time-resolved experimental evidence for the first time that shows electrons do play a significant role in the momentum coupling to surrounding neutrals during the negative-going voltage half-cycle of the N2 discharge. We show that there is sufficient macroscopic variation in the plasma that the predictions of numerical models at the microscopic level can be validated even though the plasma itself cannot be measured directly on those spatial and temporal scales.
Design and Implementation of a Smart Home System Using Multisensor Data Fusion Technology.
Hsu, Yu-Liang; Chou, Po-Huan; Chang, Hsing-Cheng; Lin, Shyan-Lung; Yang, Shih-Chin; Su, Heng-Yi; Chang, Chih-Chien; Cheng, Yuan-Sheng; Kuo, Yu-Chen
2017-07-15
This paper aims to develop a multisensor data fusion technology-based smart home system by integrating wearable intelligent technology, artificial intelligence, and sensor fusion technology. We have developed the following three systems to create an intelligent smart home environment: (1) a wearable motion sensing device to be placed on residents' wrists and its corresponding 3D gesture recognition algorithm to implement a convenient automated household appliance control system; (2) a wearable motion sensing device mounted on a resident's feet and its indoor positioning algorithm to realize an effective indoor pedestrian navigation system for smart energy management; (3) a multisensor circuit module and an intelligent fire detection and alarm algorithm to realize a home safety and fire detection system. In addition, an intelligent monitoring interface is developed to provide real-time information about the smart home system, such as environmental temperatures, CO concentrations, communicative environmental alarms, household appliance status, human motion signals, and the results of gesture recognition and indoor positioning. Furthermore, an experimental testbed for validating the effectiveness and feasibility of the smart home system was built and verified experimentally. The results showed that the 3D gesture recognition algorithm could achieve recognition rates for automated household appliance control of 92.0%, 94.8%, 95.3%, and 87.7% under the 2-fold cross-validation, 5-fold cross-validation, 10-fold cross-validation, and leave-one-subject-out cross-validation strategies. For indoor positioning and smart energy management, the distance accuracy and positioning accuracy were around 0.22% and 3.36% of the total traveled distance in the indoor environment. For home safety and fire detection, the classification rate achieved 98.81% accuracy for determining the conditions of the indoor living environment.
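The recognition rates above come from standard cross-validation: partition the data into k folds, train on k-1 of them, and average the held-out accuracy. A minimal sketch, assuming a toy nearest-centroid stand-in for the gesture classifier (the data, classifier, and numbers below are hypothetical, not the paper's system):

```python
import numpy as np

def kfold_accuracy(X, y, k, train_and_score):
    """Mean held-out recognition rate over k folds (the 2-, 5- and 10-fold
    strategies in the abstract differ only in k)."""
    folds = np.array_split(np.arange(len(X)), k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(train_and_score(X[train], y[train], X[test], y[test]))
    return float(np.mean(scores))

def nearest_centroid(Xtr, ytr, Xte, yte):
    """Hypothetical stand-in for the 3D gesture classifier."""
    cents = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
    pred = [min(cents, key=lambda c: np.linalg.norm(x - cents[c])) for x in Xte]
    return float(np.mean(np.array(pred) == yte))

# Two well-separated toy "gesture" classes in a 3-axis feature space
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 3)), rng.normal(1, 0.1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
print(kfold_accuracy(X, y, 5, nearest_centroid))
```

Leave-one-subject-out is the same loop with one fold per subject rather than a random partition, which is why it usually scores lowest.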
Cloud computing and validation of expandable in silico livers.
Ropella, Glen E P; Hunt, C Anthony
2010-12-03
In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validation experiments that required scaling to more simulated lobules than previously used, more than could be achieved with the local cluster technology. Rather than dramatically increasing the size of our local cluster, we undertook the re-validation experiments using the Amazon EC2 cloud platform. Doing so required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed using the cloud platform. The local cluster technology was duplicated in the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem using both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstration of results equivalency from two different wet-labs. The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters.
The availability of cloud technology coupled with the evidence of scientific equivalency has lowered the barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware.
FY2017 Pilot Project Plan for the Nuclear Energy Knowledge and Validation Center Initiative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Weiju
To prepare for technical development of computational code validation under the Nuclear Energy Knowledge and Validation Center (NEKVAC) initiative, several meetings were held by a group of experts of the Idaho National Laboratory (INL) and the Oak Ridge National Laboratory (ORNL) to develop requirements of, and formulate a structure for, a transient fuel database through leveraging existing resources. It was concluded in discussions of these meetings that a pilot project is needed to address the most fundamental issues that can generate immediate stimulus to near-future validation developments as well as long-lasting benefits to NEKVAC operation. The present project is proposed based on the consensus of these discussions. Analysis of common scenarios in code validation indicates that the incapability of acquiring satisfactory validation data is often a showstopper that must first be tackled before any confident validation developments can be carried out. Validation data are usually found scattered in different places, often with interrelationships among the data not well documented; the data may be incomplete, with information for some parameters missing, nonexistent, or unrealistic to generate experimentally. Furthermore, with very different technical backgrounds, the modeler, the experimentalist, and the knowledgebase developer that must be involved in validation data development often cannot communicate effectively without a data package template that is representative of the data structure for the information domain of interest to the desired code validation. This pilot project is proposed to use the legacy TREAT Experiments Database to provide core elements for creating an ideal validation data package. Data gaps and missing data interrelationships will be identified from these core elements. All the identified missing elements will then be filled in with experimental data if available from other existing sources or with dummy data if nonexistent.
The resulting hybrid validation data package (composed of experimental and dummy data) will provide a clear and complete instance delineating the structure of the desired validation data and enabling effective communication among the modeler, the experimentalist, and the knowledgebase developer. With a good common understanding of the desired data structure by the three parties of subject matter experts, further existing data hunting will be effectively conducted, new experimental data generation will be realistically pursued, knowledgebase schemas will be practically designed, and code validation will be confidently planned.
Progress Towards a Microgravity CFD Validation Study Using the ISS SPHERES-SLOSH Experiment
NASA Technical Reports Server (NTRS)
Storey, Jedediah M.; Kirk, Daniel; Marsell, Brandon (Editor); Schallhorn, Paul (Editor)
2017-01-01
Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and improving performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many CFD programs have been validated by slosh experiments using various fluids in earth gravity, but prior to the ISS SPHERES-Slosh experiment, little experimental data for long-duration, zero-gravity slosh existed. This paper presents the current status of an ongoing CFD validation study using the ISS SPHERES-Slosh experimental data.
Baskan, O; Speetjens, M F M; Metcalfe, G; Clercx, H J H
2015-10-01
Countless theoretical/numerical studies on transport and mixing in two-dimensional (2D) unsteady flows lean on the assumption that Hamiltonian mechanisms govern the Lagrangian dynamics of passive tracers. However, experimental studies specifically investigating said mechanisms are rare. Moreover, they typically concern local behavior in specific states (usually far away from the integrable state) and generally expose this indirectly by dye visualization. Laboratory experiments explicitly addressing the global Hamiltonian progression of the Lagrangian flow topology entirely from integrable to chaotic state, i.e., the fundamental route to efficient transport by chaotic advection, appear non-existent. This motivates our study on experimental visualization of this progression by direct measurement of Poincaré sections of passive tracer particles in a representative 2D time-periodic flow. This admits (i) accurate replication of the experimental initial conditions, facilitating true one-to-one comparison of simulated and measured behavior, and (ii) direct experimental investigation of the ensuing Lagrangian dynamics. The analysis reveals a close agreement between computations and observations and thus experimentally validates the full global Hamiltonian progression at a great level of detail.
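The Poincaré sections measured in such experiments are stroboscopic maps: a tracer is advected through the time-periodic flow and its position is recorded once per forcing period. A sketch using the textbook double-gyre model flow as a stand-in (an assumption made here for illustration; it is not the experimental flow of this study):

```python
import numpy as np

def double_gyre_velocity(x, y, t, A=0.1, eps=0.25, om=2 * np.pi):
    """Time-periodic double-gyre flow on [0,2]x[0,1], a standard 2D model
    for chaotic advection (period T = 2*pi/om = 1 here)."""
    s = eps * np.sin(om * t)
    f = s * x**2 + (1 - 2 * s) * x
    dfdx = 2 * s * x + (1 - 2 * s)
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def poincare_section(x0, y0, periods=50, steps_per_period=200):
    """Stroboscopic map: RK4-advect one passive tracer and record its
    position once per forcing period."""
    dt = 1.0 / steps_per_period
    x, y, t = x0, y0, 0.0
    pts = []
    for _ in range(periods):
        for _ in range(steps_per_period):
            k1 = double_gyre_velocity(x, y, t)
            k2 = double_gyre_velocity(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1], t + 0.5*dt)
            k3 = double_gyre_velocity(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1], t + 0.5*dt)
            k4 = double_gyre_velocity(x + dt*k3[0], y + dt*k3[1], t + dt)
            x += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
            y += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
            t += dt
        pts.append((x, y))
    return np.array(pts)

section = poincare_section(0.3, 0.4)
print(section.shape)  # one stroboscopic point per forcing period
```

Seeding many initial conditions and overlaying their sections reveals the island/chaotic-sea structure whose progression the paper tracks from the integrable to the chaotic state.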
Composition of Web Services Using Markov Decision Processes and Dynamic Programming
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experimental work shows that the solution of a WSC problem involving a set of 100,000 individual Web services, in which a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247
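Policy iteration alternates exact policy evaluation with greedy improvement until the policy stops changing. A minimal sketch on a toy "service composition" MDP (the states, rewards, and QoS numbers are hypothetical, not the paper's benchmark):

```python
import numpy as np

# Toy composition MDP (hypothetical numbers): each state is a composition
# stage; each action picks one of two candidate services with different
# QoS rewards, deterministically advancing to the next stage.
n_states, n_actions, gamma = 4, 2, 0.9
P = np.zeros((n_states, n_actions, n_states))  # transition probabilities
R = np.zeros((n_states, n_actions))            # QoS rewards
for s in range(n_states - 1):
    P[s, 0, s + 1] = 1.0
    R[s, 0] = 1.0                              # cheap service
    P[s, 1, s + 1] = 1.0
    R[s, 1] = 2.0                              # better-QoS service
P[-1, :, -1] = 1.0                             # final stage is absorbing

def policy_iteration():
    policy = np.zeros(n_states, dtype=int)
    for it in range(1, 100):
        # policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly
        Ppi = P[np.arange(n_states), policy]
        Rpi = R[np.arange(n_states), policy]
        V = np.linalg.solve(np.eye(n_states) - gamma * Ppi, Rpi)
        # greedy policy improvement
        Q = R + gamma * (P @ V)
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V, it
        policy = new_policy

best_policy, V, iters = policy_iteration()
print(best_policy, iters)  # selects the better-QoS service at every stage
```

Iterative policy evaluation would replace the `np.linalg.solve` line with repeated Bellman backups; value iteration folds evaluation and improvement into a single backup per sweep.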
Determination of the core temperature of a Li-ion cell during thermal runaway
NASA Astrophysics Data System (ADS)
Parhizi, M.; Ahmed, M. B.; Jain, A.
2017-12-01
Safety and performance of Li-ion cells are severely affected by thermal runaway, where exothermic processes within the cell cause uncontrolled temperature rise, eventually leading to catastrophic failure. Most past experimental papers on thermal runaway report only surface temperature measurements, while the core temperature of the cell remains largely unknown. This paper presents an experimentally validated method based on thermal conduction analysis to determine the core temperature of a Li-ion cell during thermal runaway using surface temperature and chemical kinetics data. Experiments conducted on a thermal test cell show that the core temperature computed using this method is in good agreement with independent thermocouple-based measurements in a wide range of experimental conditions. The validated method is used to predict core temperature as a function of time for several previously reported thermal runaway tests. In each case, the predicted peak core temperature is found to be several hundreds of degrees Celsius higher than the measured surface temperature. This shows that surface temperature alone is not sufficient for thermally characterizing the cell during thermal runaway. Besides providing key insights into the fundamental nature of thermal runaway, the ability to determine the core temperature shown here may lead to practical tools for characterizing and mitigating thermal runaway.
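The size of the core-to-surface gap can be appreciated from the textbook steady-state conduction relation for a cylinder with uniform volumetric heat generation. The paper's actual method is a transient conduction analysis driven by measured surface temperature and chemical kinetics, so the sketch below (with hypothetical cell numbers) is only an order-of-magnitude illustration:

```python
def core_temperature(T_surface, q_gen, radius, k):
    """Steady-state centerline temperature of a cylinder with uniform
    volumetric heat generation q_gen (W/m^3) and surface at T_surface:
        T_core = T_surface + q_gen * R^2 / (4 * k)
    A textbook conduction relation, used here only to illustrate the
    core-to-surface gap; it is not the paper's transient method."""
    return T_surface + q_gen * radius**2 / (4.0 * k)

# Hypothetical 18650-size cell during runaway (all numbers illustrative:
# 5 MW/m^3 heat generation, 9 mm radius, 0.2 W/m-K radial conductivity)
print(core_temperature(T_surface=400.0, q_gen=5e6, radius=0.009, k=0.2))
# core runs ~500 degrees C hotter than the measured surface
```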
NASA Astrophysics Data System (ADS)
Ramotar, Lokendra; Rohrauer, Greg L.; Filion, Ryan; MacDonald, Kathryn
2017-03-01
The development of a dynamic thermal battery model for hybrid and electric vehicles is realized. A thermal equivalent circuit model is created which aims to capture and understand the heat propagation from the cells through the entire pack and to the environment using a production vehicle battery pack for model validation. The inclusion of production hardware and the liquid battery thermal management system components into the model considers physical and geometric properties to calculate thermal resistances of components (conduction, convection and radiation) along with their associated heat capacity. Various heat sources/sinks comprise the remaining model elements. Analog equivalent circuit simulations using PSpice are compared to experimental results to validate internal temperature nodes and heat rates measured through various elements, which are then employed to refine the model further. Agreement with experimental results indicates the proposed method allows for a comprehensive real-time battery pack analysis at little computational expense when compared to other types of computer based simulations. Elevated road and ambient conditions in Mesa, Arizona are simulated on a parked vehicle with varying quiescent cooling rates to examine the effect on the diurnal battery temperature for longer term static exposure. A typical daily driving schedule is also simulated and examined.
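A thermal equivalent circuit of this kind maps heat capacities to capacitors and conduction/convection paths to conductances, and can be integrated directly. A minimal two-node sketch in Python (the paper uses PSpice on a full production pack; all component values here are hypothetical):

```python
import numpy as np

# Hypothetical two-node thermal network: cell core -> case -> ambient.
C = np.array([800.0, 300.0])      # node heat capacities, J/K
G = np.array([[0.0, 2.0],         # node-to-node conductances, W/K
              [2.0, 0.0]])
G_amb = np.array([0.0, 1.5])      # node-to-ambient conductances, W/K
q = np.array([5.0, 0.0])          # internal heat generation at the core, W

def step(T, T_amb, dt):
    """One explicit-Euler step of the energy balance
    C_i dT_i/dt = q_i + sum_j G_ij (T_j - T_i) + G_amb_i (T_amb - T_i)."""
    dT = (q + G @ T - G.sum(axis=1) * T + G_amb * (T_amb - T)) / C
    return T + dt * dT

T = np.array([25.0, 25.0])        # start in equilibrium with 25 C ambient
for _ in range(36000):            # 10 h at dt = 1 s, well past the transients
    T = step(T, 25.0, 1.0)
print(T)  # steady state: case at 25 + 5/1.5 C, core a further 5/2 C above it
```

The diurnal-exposure simulations in the paper amount to driving such a network with a time-varying `T_amb` and quiescent-cooling conductances.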
Capelli, Claudio; Biglino, Giovanni; Petrini, Lorenza; Migliavacca, Francesco; Cosentino, Daria; Bonhoeffer, Philipp; Taylor, Andrew M; Schievano, Silvia
2012-12-01
Finite element (FE) modelling can be a very powerful tool in the field of cardiovascular devices. To ensure result reliability, FE models must be validated experimentally against physical data. Their clinical application (e.g., patients' suitability, morphological evaluation) also requires a fast simulation process and access to results, while engineering applications need highly accurate results. This study shows how FE models with different mesh discretisations can suit clinical and engineering requirements for studying a novel device designed for percutaneous valve implantation. Following sensitivity analysis and experimental characterisation of the materials, the stent-graft was first studied in a simplified geometry (i.e., compliant cylinder) and validated against in vitro data, and then in a patient-specific implantation site (i.e., distensible right ventricular outflow tract). Different meshing strategies using solid, beam and shell elements were tested. Results showed excellent agreement between computational and experimental data in the simplified implantation site. Beam elements were found to be convenient for clinical applications, providing reliable results in less than one hour in a patient-specific anatomical model. Solid elements remain the FE choice for engineering applications, albeit more computationally expensive (>100 times). This work also showed how information on device mechanical behaviour differs when acquired in a simplified model as opposed to a patient-specific model.
Wideband THz Time Domain Spectroscopy based on Optical Rectification and Electro-Optic Sampling
Tomasino, A.; Parisi, A.; Stivala, S.; Livreri, P.; Cino, A. C.; Busacca, A. C.; Peccianti, M.; Morandotti, R.
2013-01-01
We present an analytical model describing the full electromagnetic propagation in a THz time-domain spectroscopy (THz-TDS) system, from the generation of THz pulses via Optical Rectification to their detection via Electro-Optic Sampling. While several investigations deal singularly with the many elements that constitute a THz-TDS system, in our work we pay particular attention to modelling the time-frequency behaviour of all the stages that compose the experimental set-up. Therefore, our model considers the following main aspects: (i) pump beam focusing into the generation crystal; (ii) phase-matching inside both the generation and detection crystals; (iii) chromatic dispersion and absorption inside the crystals; (iv) the Fabry-Perot effect; (v) diffraction along the propagation path outside the crystals; (vi) focusing and overlap of the THz and probe beams; (vii) electro-optic sampling. In order to validate our model, we report on the comparison between the simulations and the experimental data obtained from the same set-up, showing their good agreement. PMID:24173583
NASA Astrophysics Data System (ADS)
McKeown, Joseph T.; Zweiacker, Kai; Liu, Can; Coughlin, Daniel R.; Clarke, Amy J.; Baldwin, J. Kevin; Gibbs, John W.; Roehling, John D.; Imhoff, Seth D.; Gibbs, Paul J.; Tourret, Damien; Wiezorek, Jörg M. K.; Campbell, Geoffrey H.
2016-03-01
Additive manufacturing (AM) of metals and alloys is becoming a pervasive technology in both research and industrial environments, though significant challenges remain before widespread implementation of AM can be realized. In situ investigations of rapid alloy solidification with high spatial and temporal resolutions can provide unique experimental insight into microstructure evolution and kinetics that are relevant for AM processing. Hypoeutectic thin-film Al-Cu and Al-Si alloys were investigated using dynamic transmission electron microscopy to monitor pulsed-laser-induced rapid solidification across microsecond timescales. Solid-liquid interface velocities measured from time-resolved images revealed accelerating solidification fronts in both alloys. The observed microstructure evolution, solidification product, and presence of a morphological instability at the solid-liquid interface in the Al-4 at.%Cu alloy are related to the measured interface velocities and small differences in composition that affect the thermophysical properties of the alloys. These time-resolved in situ measurements can inform and validate predictive modeling efforts for AM.
Ignition and combustion characteristics of metallized propellants
NASA Technical Reports Server (NTRS)
Mueller, D. C.; Turns, Stephen R.
1991-01-01
Over the past six months, experimental investigations were continued and theoretical work on the secondary atomization process was begun. Final shakedown of the sizing/velocity measuring system was completed and the aluminum combustion detection system was modified and tested. Atomizer operation was improved to allow steady state operation over long periods of time for several slurries. To validate the theoretical modeling, work involving carbon slurry atomization and combustion was begun and qualitative observations were made. Simultaneous measurements of aluminum slurry droplet size distributions and detection of burning aluminum particles were performed at several axial locations above the burner. The principal theoretical effort was the application of a rigid shell formation model to aluminum slurries and an investigation of the effects of various parameters on the shell formation process. This shell formation model was extended to include the process leading up to droplet disruption, and previously developed analytical models were applied to yield theoretical aluminum agglomerate ignition and combustion times. These theoretical times were compared with the experimental results.
Development and parameter identification of a visco-hyperelastic model for the periodontal ligament.
Huang, Huixiang; Tang, Wencheng; Tan, Qiyan; Yan, Bin
2017-04-01
The present study developed and implemented a new visco-hyperelastic model that is capable of predicting the time-dependent biomechanical behavior of the periodontal ligament (PDL). The constitutive model has been implemented into the finite element package ABAQUS by means of a user-defined material subroutine (UMAT). The stress response is decomposed into two constitutive parts in parallel: a hyperelastic and a time-dependent viscoelastic stress response. In order to identify the model parameters, an indentation equation based on the V-W hyperelastic model and an indentation creep model are developed. The parameters are then determined by fitting to the corresponding nanoindentation experimental data of the PDL. The nanoindentation experiment was simulated by finite element analysis to validate the visco-hyperelastic model. The simulated results are in good agreement with the experimental data, which demonstrates that the visco-hyperelastic model developed is able to accurately predict the time-dependent mechanical behavior of the PDL. Copyright © 2017 Elsevier Ltd. All rights reserved.
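The time-dependent part of such a model is conventionally written as a Prony series that scales the hyperelastic stress over time. A minimal sketch of that relaxation function (the two-term coefficients below are hypothetical, not the fitted PDL parameters):

```python
import numpy as np

def prony_relaxation(t, g, tau):
    """Normalized relaxation function g(t) = 1 - sum_i g_i (1 - exp(-t/tau_i)):
    the standard Prony-series form used to give a hyperelastic stress a
    time-dependent viscoelastic part, sigma(t) = g(t) * sigma_hyperelastic."""
    t = np.asarray(t, dtype=float)
    return 1.0 - sum(gi * (1.0 - np.exp(-t / ti)) for gi, ti in zip(g, tau))

# Hypothetical two-term series (not the paper's fitted parameters)
g, tau = [0.3, 0.2], [1.0, 50.0]
print(prony_relaxation([0.0, 1e6], g, tau))  # relaxes from 1.0 toward 0.5
```

Fitting `g` and `tau` to indentation creep data is the analogue of the parameter-identification step described in the abstract.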
Elasto-dynamic analysis of a gear pump-Part IV: Improvement in the pressure distribution modelling
NASA Astrophysics Data System (ADS)
Mucchi, E.; Dalpiaz, G.; Fernàndez del Rincòn, A.
2015-01-01
This work concerns external gear pumps for automotive applications, which operate at high speed and low pressure. In previous works of the authors (Part I and II, [1,2]), a non-linear lumped-parameter kineto-elastodynamic model for the prediction of the dynamic behaviour of external gear pumps was presented. It takes into account the most important phenomena involved in the operation of this kind of machine. The two main sources of noise and vibration are considered: pressure pulsation and gear meshing. The model has been used in order to foresee the influence of working conditions and design modifications on vibration generation. The model's experimental validation is a difficult task. Thus, Part III proposes a novel methodology for the validation carried out by the comparison of simulations and experimental results concerning forces and moments: it deals with the external and inertial components acting on the gears, estimated by the model, and the reactions and inertial components on the pump casing and the test plate, obtained by measurements. The validation is carried out by comparing the level of the time synchronous average in the time domain and the waterfall maps in the frequency domain, with particular attention to identifying system resonances. The validation results are globally satisfactory, but discrepancies are still present. Moreover, the assessed model has been properly modified for the application to a new virtual pump prototype with helical gears in order to foresee gear accelerations and dynamic forces. Part IV is focused on improvements in the modelling and analysis of the phenomena bound to the pressure distribution around the gears in order to achieve results closer to the measured values. As a matter of fact, the simulation results have shown that a variable meshing stiffness has a notable contribution on the dynamic behaviour of the pump but this is not as important as the pressure phenomena.
As a consequence, the original model was modified with the aim of improving the calculation of pressure forces and torques. The improved pressure formulation includes several phenomena not considered in the previous one, such as the variable pressure evolution at the input and output ports, as well as an accurate description of the trapped volume and its connections with the high- and low-pressure chambers. The importance of these improvements is highlighted by comparison with experimental results, showing satisfactory matching.
Marvel Analysis of the Measured High-resolution Rovibronic Spectra of TiO
NASA Astrophysics Data System (ADS)
McKemmish, Laura K.; Masseron, Thomas; Sheppard, Samuel; Sandeman, Elizabeth; Schofield, Zak; Furtenbacher, Tibor; Császár, Attila G.; Tennyson, Jonathan; Sousa-Silva, Clara
2017-02-01
Accurate, experimental rovibronic energy levels, with associated labels and uncertainties, are reported for 11 low-lying electronic states of the diatomic 48Ti16O molecule, determined using the Marvel (Measured Active Rotational-Vibrational Energy Levels) algorithm. All levels are based on lines corresponding to critically reviewed and validated high-resolution experimental spectra taken from 24 literature sources. The transition data are in the 2-22,160 cm-1 region. Out of the 49,679 measured transitions, 43,885 are triplet-triplet, 5710 are singlet-singlet, and 84 are triplet-singlet transitions. A careful analysis of the resulting experimental spectroscopic network (SN) allows 48,590 transitions to be validated. The transitions determine 93 vibrational band origins of 48Ti16O, including 71 triplet and 22 singlet ones. There are 276 (73) triplet-triplet (singlet-singlet) band-heads derived from Marvel experimental energies, 123 (38) of which have never been assigned in low- or high-resolution experiments. The highest J value, where J stands for the total angular momentum, for which an energy level is validated is 163. The number of experimentally derived triplet and singlet 48Ti16O rovibrational energy levels is 8682 and 1882, respectively. The lists of validated lines and levels for 48Ti16O are deposited in the supporting information to this paper.
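At its core, the Marvel procedure treats each measured transition as a linear constraint E_upper - E_lower = wavenumber on the unknown level energies and solves the resulting overdetermined system over the spectroscopic network. A toy sketch in Python (three hypothetical levels; real Marvel also weights equations by measurement uncertainty and partitions the network into connected components):

```python
import numpy as np

def levels_from_transitions(n_levels, transitions):
    """Toy version of the Marvel idea: every measured transition supplies one
    linear equation E_up - E_lo = wavenumber; solve the overdetermined system
    by least squares with the ground level pinned to zero."""
    rows, rhs = [], []
    for lo, up, wn in transitions:
        row = np.zeros(n_levels)
        row[up], row[lo] = 1.0, -1.0
        rows.append(row)
        rhs.append(wn)
    pin = np.zeros(n_levels)
    pin[0] = 1.0                      # gauge fixing: E_0 = 0
    rows.append(pin)
    rhs.append(0.0)
    E, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return E

# Three hypothetical levels, three slightly inconsistent measurements (cm-1)
obs = [(0, 1, 100.0), (1, 2, 150.0), (0, 2, 250.2)]
E = levels_from_transitions(3, obs)
print(E)  # near [0, 100.07, 250.13]: the 0.2 cm-1 closure error is shared out
```

Validating a transition, in this picture, amounts to checking that it is consistent with the energies implied by the rest of the network.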
Houdek, Petr
2017-01-01
The aim of this perspective article is to show that current experimental evidence on factors influencing dishonesty has limited external validity. Most experimental studies are built on random assignment, in which control/experimental groups of subjects face varied sizes of the expected reward for behaving dishonestly, opportunities for cheating, means of rationalizing dishonest behavior, etc., and mean group reactions are observed. The studies have internal validity in assessing the causal influence of these and other factors, but they lack external validity in organizational, market, and other environments. If people can opt into or out of diverse real-world environments, an experiment aimed at studying factors influencing the real-life degree of dishonesty should permit such an option. The behavior of such self-selected groups of marginal subjects would probably contain a larger level of (non)deception than the behavior of average people. The article warns that there are few studies that enable self-selection or sorting of participants into varying environments, and that this limits current knowledge of the extent and dynamics of dishonest and fraudulent behavior. The article focuses on suggestions on how to improve dishonesty research, especially how to avoid experimenter demand bias.
Experimental Validation of a Coupled Fluid-Multibody Dynamics Model for Tanker Trucks
2007-11-08
order to accurately predict the dynamic response of tanker trucks, the model must accurately account for the following effects: • Incompressible... computational code which uses a time-accurate explicit solution procedure is used to solve both the solid and fluid equations of motion. Many commercial... position vector, τ is the deviatoric stress tensor, D is the rate of deformation tensor, f is the body force vector, r is the artificial
Thermal Mechanisms for High Amplitude Aerodynamic Flow Control (YIP 2012)
2016-04-15
...boundary layer ahead of the plasma. Since the ns-DBD flow control mechanism is primarily thermal, or at least symmetric if associated with a quasi ...conditions with minimal experimental effort. The validity of probing a single location on the low speed side of the mixing layer to test for control
Jet Measurements for Development of Jet Noise Prediction Tools
NASA Technical Reports Server (NTRS)
Bridges, James E.
2006-01-01
The primary focus of my presentation is the development of the jet noise prediction code JeNo, with most examples coming from the experimental work that drove the theoretical development and validation. JeNo is a statistical jet noise prediction code based upon the Lilley acoustic analogy. Our approach uses time-averaged 2-D or 3-D mean and turbulent statistics of the flow as input. The output is source distributions and spectral directivity.
Reliability, Validity, and Usability of Data Extraction Programs for Single-Case Research Designs.
Moeyaert, Mariola; Maggin, Daniel; Verkuilen, Jay
2016-11-01
Single-case experimental designs (SCEDs) have been increasingly used in recent years to inform the development and validation of effective interventions in the behavioral sciences. An important aspect of this work has been the extension of meta-analytic and other statistical innovations to SCED data. Standard practice within SCED methods is to display data graphically, which requires subsequent users to extract the data, either manually or using data extraction programs. Previous research has examined the reliability and validity of data extraction programs, but typically at an aggregate level. Little is known, however, about the coding of individual data points. We focused on four different software programs that can be used for this purpose (i.e., Ungraph, DataThief, WebPlotDigitizer, and XYit), and examined the reliability of numeric coding, the validity compared with real data, and overall program usability. This study indicates that the reliability and validity of the retrieved data are independent of the specific software program, but are dependent on the individual single-case study graphs. Differences were found in program usability in terms of user friendliness, data retrieval time, and license costs. Ungraph and WebPlotDigitizer received the highest usability scores. DataThief was perceived as unacceptable and the time needed to retrieve the data was double that of the other three programs. WebPlotDigitizer was the only program free to use. As a consequence, WebPlotDigitizer turned out to be the best option in terms of usability, time to retrieve the data, and costs, although the usability scores of Ungraph were also strong. © The Author(s) 2016.
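The coordinate step shared by these digitizing programs is simple: the user marks two calibration points per axis, and each clicked pixel is then mapped linearly to data units. A minimal sketch of that mapping for linear axes (all pixel coordinates below are made up):

```python
def make_axis_map(px0, val0, px1, val1):
    """Linear pixel -> data mapping from two calibration points,
    as graph digitizers do for linear axes (log axes need a log transform)."""
    scale = (val1 - val0) / (px1 - px0)
    return lambda px: val0 + (px - px0) * scale

# Calibration: two known points per axis (y pixel coordinates grow downward,
# so the y calibration points are given in reverse order).
x_map = make_axis_map(100, 0.0, 500, 10.0)   # pixel 100 -> x=0, pixel 500 -> x=10
y_map = make_axis_map(400, 0.0, 50, 1.0)     # pixel 400 -> y=0, pixel 50 -> y=1

# Pixels clicked on plotted data points, converted to data coordinates
clicked = [(180, 330), (300, 225), (420, 120)]
data = [(x_map(px), y_map(py)) for px, py in clicked]
```

Extraction error in such tools therefore comes almost entirely from how precisely the calibration points and data markers are clicked, which is consistent with the study's finding that accuracy depends on the individual graph rather than the program.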
Fatigue Failure of Space Shuttle Main Engine Turbine Blades
NASA Technical Reports Server (NTRS)
Swanson, Gregory R.; Arakere, Nagaraj K.
2000-01-01
Experimental validation of finite element modeling of single-crystal turbine blades is presented. Experimental results from uniaxial high cycle fatigue (HCF) test specimens and full-scale Space Shuttle Main Engine test firings with the High Pressure Fuel Turbopump Alternate Turbopump (HPFTP/AT) provide the data used for the validation. The conclusions show that the crystal orientation within the blade contributes significantly to the resulting life of the component, that the analysis can predict this variation, and that experimental testing demonstrates it.
NASA Astrophysics Data System (ADS)
Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin
2018-04-01
This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three-phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results were used for the validation. The results showed that the correlation coefficients for the simulated and experimental water contents at different soil depths were between 0.83 and 0.92. The correlation coefficients for the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. With these high accuracies, the developed model can be used with confidence to predict the water contents at different soil depths and temperatures.
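The validation statistic used here, the correlation coefficient between simulated and measured profiles, is straightforward to compute. A minimal sketch with illustrative water-content values (not the paper's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative water contents at several soil depths (not the study's data)
simulated    = [0.21, 0.24, 0.28, 0.31, 0.35]
experimental = [0.20, 0.25, 0.27, 0.33, 0.34]
r = pearson_r(simulated, experimental)
```

A correlation near 1 indicates the model tracks the measured trend; note that a high r alone does not rule out a systematic offset, which is why validation studies usually also inspect residuals.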
Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.
Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan
2013-01-01
In this chapter, an overview of experimental designs to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, often two phases can be distinguished, i.e., a screening and an optimization phase. In method validation, the method is evaluated on its fit for purpose. A validation item, also applying experimental designs, is robustness testing. In the screening phase and in robustness testing, screening designs are applied. During the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated by examples of chiral CE and CEC methods.
Verification and Validation of Residual Stresses in Bi-Material Composite Rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Stacy Michelle; Hanson, Alexander Anthony; Briggs, Timothy
Process-induced residual stresses commonly occur in composite structures composed of dissimilar materials. These residual stresses form due to differences in the composite materials' coefficients of thermal expansion and the shrinkage upon cure exhibited by polymer matrix materials. Depending upon the specific geometric details of the composite structure and the materials' curing parameters, it is possible that these residual stresses could result in interlaminar delamination or fracture within the composite. Therefore, the consideration of potential residual stresses is important when designing composite parts and their manufacturing processes. However, the experimental determination of residual stresses in prototype parts can be time- and cost-prohibitive. As an alternative to physical measurement, computational tools can be used to quantify potential residual stresses in composite prototype parts. Therefore, the objectives of the presented work are to demonstrate a simple method for simulating residual stresses in composite parts, as well as the potential value of sensitivity and uncertainty quantification techniques during analyses for which material property parameters are unknown. Specifically, a simplified residual stress modeling approach, which accounts for coefficient of thermal expansion mismatch and polymer shrinkage, is implemented within the SIERRA/SolidMechanics code developed by Sandia National Laboratories. Concurrent with the model development, two simple bi-material structures composed of a carbon fiber/epoxy composite and aluminum, a flat plate and a cylinder, are fabricated and the residual stresses are quantified through the measurement of deformation.
Then, in the process of validating the developed modeling approach with the experimental residual stress data, manufacturing process simulations of the two simple structures are developed and undergo a formal verification and validation process, including a mesh convergence study, sensitivity analysis, and uncertainty quantification. The simulations' final results show adequate agreement with the experimental measurements, indicating the validity of the simple modeling approach, as well as the necessity of including material parameter uncertainty in the final residual stress predictions.
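The driving quantity behind such residual stresses can be estimated on the back of an envelope: the CTE-mismatch strain accumulated over the cool-down from cure, scaled by an effective modulus. The property values below are illustrative, not those used in the study:

```python
# Back-of-envelope residual-stress driver for a bi-material (composite/aluminum)
# structure. All values are illustrative assumptions, not the study's inputs.
alpha_composite = 2e-6    # 1/K, assumed carbon/epoxy in-plane CTE
alpha_aluminum = 23e-6    # 1/K, typical aluminum CTE
delta_T = -150.0          # K, assumed cool-down from cure to room temperature
E_eff = 70e9              # Pa, illustrative effective modulus

# Strain mismatch the interface must accommodate, and the resulting
# order-of-magnitude stress scale (a full analysis needs the joint stiffness,
# cure shrinkage, and geometry, which is what the FE model provides).
mismatch_strain = (alpha_aluminum - alpha_composite) * delta_T
stress_scale = abs(mismatch_strain) * E_eff
```

Even this crude estimate lands in the hundreds of MPa, which is why CTE mismatch alone can threaten interlaminar integrity and why the full simulation, with uncertainty on the material parameters, is worth the effort.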
Olondo, C; Legarda, F; Herranz, M; Idoeta, R
2017-04-01
This paper shows the procedure performed to validate the migration equation and the migration parameter values presented in a previous paper (Legarda et al., 2011) regarding the migration of 137Cs in Spanish mainland soils. This model validation has been carried out by checking experimentally obtained activity concentration values against those predicted by the model. These experimental data come from the measured vertical activity profiles of 8 new sampling points located in northern Spain. Before testing the predicted values of the model, their uncertainty was assessed with an appropriate uncertainty analysis. Once the uncertainty of the model was established, the experimental and model-predicted activity concentration values were compared. Model validation has been performed by analyzing the model's accuracy, both as a whole and at different depth intervals. As a result, this model has been validated as a tool to predict 137Cs behaviour in a Mediterranean environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rohanova, Miroslava; Balikova, Marie
2009-04-01
para-Methoxymethamphetamine (PMMA) is an abused psychedelic compound with reports of several intoxications and deaths after ingestion. However, its pharmacokinetics based on a controlled study is unknown and only partial information on its biotransformation is available. Our experimental study was designed to determine the time-disposition profiles of PMMA and its metabolites para-methoxyamphetamine (PMA), para-hydroxymethamphetamine (OH-MAM) and para-hydroxyamphetamine (OH-AM) in blood and biological tissues in rats after a subcutaneous bolus dose of 40 mg/kg, using a validated GC-MS method. The experimental results could be useful for subsequent evaluation of PMMA psychotropic or neurotoxic effects and for the diagnosis of intoxication.
Experimental Demonstration of X-Ray Drive Enhancement with Rugby-Shaped Hohlraums
NASA Astrophysics Data System (ADS)
Philippe, F.; Casner, A.; Caillaud, T.; Landoas, O.; Monteil, M. C.; Liberatore, S.; Park, H. S.; Amendt, P.; Robey, H.; Sorce, C.; Li, C. K.; Seguin, F.; Rosenberg, M.; Petrasso, R.; Glebov, V.; Stoeckl, C.
2010-01-01
Rugby-shaped hohlraums have been suggested as a way to enhance x-ray drive in the indirect-drive approach to inertial confinement fusion. This Letter presents an experimental comparison of rugby-shaped and cylinder hohlraums used for implosions of D2- and D3He-filled capsules on the Omega laser facility, demonstrating an increase of x-ray flux by 18% in rugby-shaped hohlraums. The highest yields to date for deuterium gas implosions in indirect drive on Omega (1.5×10^10 neutrons) were obtained, allowing for the first time the measurement of a DD burn history. Proton spectra measurements provide additional validation of the higher drive in rugby-shaped hohlraums.
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
An open source platform for multi-scale spatially distributed simulations of microbial ecosystems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segre, Daniel
2014-08-14
The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.
Aerodynamic and aeroacoustic analysis for a wind turbine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohamed, Maizi; Rabah, Dizene
2015-03-10
This paper describes a hybrid approach for predicting noise radiated from rotating horizontal-axis wind turbine (HAWT) blades, where the sources are extracted from an unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation. ANSYS CFX 11.0 was used to calculate the near-field flow parameters around the blade surface that are necessary for FW-H codes. Comparisons with NREL Phase II experimental results are presented with respect to the pressure distributions, validating the capacity of the solver to calculate the near-field flow on and around the wind turbine blades; the results show that the numerical data are in good agreement with experiment. The acoustic pressure, presented as a sum of thickness and loading noise components, is analyzed by means of a discrete fast Fourier transform for the presentation of the acoustic time histories in the frequency domain. The results convincingly show that dipole source noise is the dominant noise source for this wind turbine.
Prediction of Enzyme Mutant Activity Using Computational Mutagenesis and Incremental Transduction
Basit, Nada; Wechsler, Harry
2011-01-01
Wet laboratory mutagenesis to determine enzyme activity changes is expensive and time consuming. This paper expands on standard one-shot learning by proposing an incremental transductive method (T2bRF) for the prediction of enzyme mutant activity during mutagenesis using Delaunay tessellation and 4-body statistical potentials for representation. Incremental learning is in tune with both eScience and actual experimentation, as it accounts for cumulative annotation effects of enzyme mutant activity over time. The experimental results reported, using cross-validation, show that overall the incremental transductive method proposed, using random forest as base classifier, yields better results compared to one-shot learning methods. T2bRF is shown to yield 90% on T4 and LAC (and 86% on HIV-1). This is significantly better than state-of-the-art competing methods, whose performance yield is at 80% or less using the same datasets. PMID:22007208
Faraday-effect polarimeter diagnostic for internal magnetic field fluctuation measurements in DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, J., E-mail: chenjie@ucla.edu; State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, Wuhan 430074; Ding, W. X.
2016-11-15
Motivated by the need to measure fast equilibrium temporal dynamics, non-axisymmetric structures, and core magnetic fluctuations (coherent and broadband), a three-chord Faraday-effect polarimeter-interferometer system with fast time response and high phase resolution has recently been installed on the DIII-D tokamak. A novel detection scheme utilizing two probe beams and two detectors for each chord results in reduced phase noise and increased time response [δb ∼ 1 G with up to 3 MHz bandwidth]. First measurement results were obtained during the recent DIII-D experimental campaign. Simultaneous Faraday and density measurements have been successfully demonstrated, and high-frequency Faraday-effect perturbations, up to 100 kHz, have been observed. Preliminary comparisons with EFIT are used to validate diagnostic performance. The principle of the diagnostic and first experimental results are presented.
NASA Astrophysics Data System (ADS)
Pignon, Baptiste; Sobotka, Vincent; Boyard, Nicolas; Delaunay, Didier
2017-10-01
Two different analytical models are presented to determine cycle parameters of the thermoplastic injection process. The aim of these models is to quickly provide a first set of data for mold temperature and cooling time. The first model is specific to amorphous polymers and the second is dedicated to semi-crystalline polymers, taking crystallization into account. In both cases, the contact between the polymer and the mold can be considered perfect or imperfect (a thermal contact resistance is included). Results from the models are compared with experimental data obtained with an instrumented mold for an acrylonitrile butadiene styrene (ABS) and a polypropylene (PP). Good agreement was obtained for the mold temperature variation and for the heat flux. In the case of the PP, the analytical crystallization times were compared with those given by a coupled model of heat transfer and crystallization kinetics.
Lakghomi, B; Lawryshyn, Y; Hofmann, R
2015-01-01
Computational fluid dynamics (CFD) models of dissolved air flotation (DAF) have shown formation of stratified flow (back and forth horizontal flow layers at the top of the separation zone) and its impact on improved DAF efficiency. However, there has been a lack of experimental validation of CFD predictions, especially in the presence of solid particles. In this work, for the first time, both two-phase (air-water) and three-phase (air-water-solid particles) CFD models were evaluated at pilot scale using measurements of residence time distribution, bubble layer position and bubble-particle contact efficiency. The pilot-scale results confirmed the accuracy of the CFD model for both two-phase and three-phase flows, but showed that the accuracy of the three-phase CFD model would partly depend on the estimation of bubble-particle attachment efficiency.
Palazoğlu, T K; Gökmen, V
2008-04-01
In this study, a numerical model was developed to simulate frying of potato strips and estimate acrylamide levels in French fries. Heat and mass transfer parameters determined during frying of potato strips and the formation and degradation kinetic parameters of acrylamide obtained with a sugar-asparagine model system were incorporated within the model. The effect of reducing sugar content (0.3 to 2.15 g/100 g dry matter), strip thickness (8.5 x 8.5 mm and 10 x 10 mm), and frying time (3, 4, 5, and 6 min) and temperature (150, 170, and 190 degrees C) on resultant acrylamide level in French fries was investigated both numerically and experimentally. The model appeared to closely estimate the acrylamide contents, and thereby may potentially save considerable time, money, and effort during the stages of process design and optimization.
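The formation/degradation kinetics incorporated in such a model can be sketched as two coupled first-order rate equations integrated in time. The rate constants and the explicit Euler step below are illustrative assumptions, not the paper's fitted parameters:

```python
# Sketch of coupled acrylamide kinetics: acrylamide forms first-order from a
# precursor pool (sugar-asparagine) and degrades first-order in itself.
# k_form, k_deg (1/min) and precursor0 (arbitrary units) are illustrative.
def simulate_acrylamide(k_form, k_deg, precursor0, dt=0.1, t_end=6.0):
    precursor, acrylamide, t = precursor0, 0.0, 0.0
    history = []
    while t <= t_end:
        history.append((t, acrylamide))
        d_pre = -k_form * precursor                      # precursor consumption
        d_acr = k_form * precursor - k_deg * acrylamide  # net acrylamide rate
        precursor += d_pre * dt                          # explicit Euler step
        acrylamide += d_acr * dt
        t += dt
    return history

history = simulate_acrylamide(k_form=0.8, k_deg=0.3, precursor0=1.0)
peak_t, peak_c = max(history, key=lambda h: h[1])
```

The competing formation and degradation terms produce the characteristic rise-then-fall of acrylamide with frying time, which is why the optimal frying time and temperature trade off color and flavor against acrylamide level.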
Characterisation of vibration input to flywheel used on urban bus
NASA Astrophysics Data System (ADS)
Wang, L.; Kanarachos, S.; Christensen, J.
2016-09-01
Vibration induced by the road surface has an impact on the durability and reliability of electrical and mechanical components attached to the vehicle. Little research has been published on the durability assessment of a flywheel energy recovery system installed on city and district buses. Relevant international standards and legislation were reviewed and large discrepancies were found among them; in addition, there are no standards exclusively developed for kinetic energy recovery systems on vehicles. This paper describes the experimental assessment of road-surface vibration input to the flywheel on a bus, as obtained at the MIRA Proving Ground. Power spectral densities have been developed based on the raw data obtained during the experiments. Validation of this model will be carried out using accelerated lifetime tests on a shaker rig, using an accumulated profile based on the theory of fatigue damage equivalence in the time and frequency domains, aligned with the model predictions.
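A power spectral density of the kind developed here can be estimated from a raw acceleration record with a periodogram. A minimal sketch on a synthetic signal (the sampling rate and tone frequencies are assumed; a production analysis would use Welch averaging over segments):

```python
import numpy as np

# Synthetic 2-second acceleration record: a strong 30 Hz tone plus a weaker
# 120 Hz tone, standing in for road-induced vibration at the flywheel mount.
fs = 1000.0                       # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1 / fs)
accel = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# One-sided periodogram estimate of the PSD
n = len(accel)
spectrum = np.fft.rfft(accel)
psd = (np.abs(spectrum) ** 2) / (fs * n)
freqs = np.fft.rfftfreq(n, 1 / fs)

peak = freqs[np.argmax(psd)]      # dominant vibration frequency
```

Once the PSD is in hand, an accumulated shaker profile can be synthesized from it so that the accelerated rig test deposits fatigue damage equivalent to the measured road input.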
Effect of an environmental science curriculum on students' leisure time activities
NASA Astrophysics Data System (ADS)
Blum, Abraham
Cooley and Reed's active interest measurement approach was combined with Guttman's Facet Design to construct a systematic instrument for assessing the impact of an environmental science course on students' behavior outside school. A quasi-matched design of teacher allocation to the experimental and control groups according to their preferred teaching style was used. A kind of dummy control curriculum was devised to enable valid comparative evaluation of a new course which differs from the traditional one in both content and goal. This made it possible to control most of the differing factors inherent in the old and new curricula. The research instrument was given to 1000 students who were taught by 28 teachers. Students who learned according to the experimental curriculum significantly increased their leisure-time activities related to the environmental science curriculum. There were no significant differences between boys and girls or between students with different achievement levels.
Maximization of fructose esters synthesis by response surface methodology.
Neta, Nair Sampaio; Peres, António M; Teixeira, José A; Rodrigues, Ligia R
2011-07-01
Enzymatic synthesis of fructose fatty acid esters was performed in organic solvent media, using a purified lipase from Candida antarctica B immobilized on acrylic resin. Response surface methodology with a five-level central composite rotatable design was implemented to optimize three experimental operating conditions (temperature, agitation and reaction time). A statistically significant cubic model was established. Temperature and reaction time were found to be the most significant parameters. The optimum operational conditions for maximizing the synthesis of fructose esters were 57.1°C, 100 rpm and 37.8 h. The model was validated at the identified optimal conditions to check its adequacy and accuracy, and an experimental esterification percentage of 88.4% (±0.3%) was obtained. These results showed that an improvement of the enzymatic synthesis of fructose esters was obtained under the optimized conditions. Copyright © 2011 Elsevier B.V. All rights reserved.
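Locating an optimum from a response-surface fit reduces, along a single factor, to fitting a polynomial and taking its stationary point. The sketch below uses a quadratic along temperature with illustrative design levels and yields, not the study's measured responses or its cubic model:

```python
import numpy as np

# Illustrative one-factor slice of a response surface: esterification yield
# (%) at five assumed temperature levels of a central composite design.
temps = np.array([40.0, 47.5, 55.0, 62.5, 70.0])
yields = np.array([70.0, 82.0, 88.0, 87.0, 78.0])

# Least-squares quadratic fit; the vertex of the parabola is the
# single-factor optimum (valid only if the curvature c2 is negative).
c2, c1, c0 = np.polyfit(temps, yields, 2)
t_opt = -c1 / (2 * c2)
```

The full three-factor optimization works the same way: fit the design-matrix model to all runs, then solve for the stationary point of the fitted surface (or maximize it numerically when, as here, the model is cubic).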
Aeroheating Characteristics for a Two-Stage-To-Orbit Concept During Separation at Mach 6
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2005-01-01
An experimental study was conducted in the NASA Langley 20-Inch Mach 6 Air Tunnel to determine the proximity aeroheating characteristics for a two-stage-to-orbit concept in close proximity. A new hybrid discrete thin-film resistance gauge technique was evaluated in this study and used to measure experimental interference heating levels between the booster and the orbiter at a constant freestream Reynolds number of 8.25 × 10^6/m and a variety of separation and axial offset distances. It was found that, as the orbiter separates from the booster and the booster falls away, the windward centerline heating increased on the orbiter by as much as 13 times over the baseline single-model heating distribution, and on the booster by as much as 6 times. The aeroheating database developed can be used for computational fluid dynamics code validation.
2014-01-01
Background: Identification of ligand-protein binding interactions is a critical step in drug discovery. Experimental screening of large chemical libraries, in spite of its specific role and importance in drug discovery, suffers from the disadvantages of being random, time-consuming and expensive. To accelerate the process, traditional structure- or ligand-based VLS approaches are combined with experimental high-throughput screening (HTS). Often a single protein or, at most, a protein family is considered. Large-scale VLS benchmarking across diverse protein families is rarely done, and the reported success rate is very low. Here, we demonstrate the experimental HTS validation of a novel VLS approach, FINDSITEcomb, across a diverse set of medically relevant proteins. Results: For eight different proteins belonging to different fold classes and from diverse organisms, the top 1% of FINDSITEcomb's VLS predictions were tested, and depending on the protein target, 4%-47% of the predicted ligands were shown to bind with μM or better affinities. In total, 47 small-molecule binders were identified. Low nanomolar (nM) binders for dihydrofolate reductase and protein tyrosine phosphatases (PTPs) and micromolar binders for the other proteins were identified. Six novel molecules had cytotoxic activity (<10 μg/ml) against the HCT-116 colon carcinoma cell line and one novel molecule had potent antibacterial activity. Conclusions: We show that FINDSITEcomb is a promising new VLS approach that can assist drug discovery. PMID:24936211
Grootswagers, Tijl; Wardle, Susan G; Carlson, Thomas A
2017-04-01
Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial-style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to "decode" different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both the preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
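The core of time-resolved decoding is to train and test a classifier independently at every time point of a trials × channels × time array. A minimal sketch on synthetic data (a nearest-class-mean classifier with leave-one-out cross-validation; all sizes and the injected effect are invented, and real pipelines add the preprocessing choices discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic MEG-like data: condition information is injected only after a
# simulated "stimulus onset" at time index 20.
n_trials, n_channels, n_times, onset = 40, 8, 50, 20
labels = np.repeat([0, 1], n_trials // 2)
data = rng.standard_normal((n_trials, n_channels, n_times))
data[labels == 1, :, onset:] += 1.0       # condition effect after onset

def nearest_mean_accuracy(X, y):
    """Leave-one-out CV accuracy of a nearest-class-mean classifier."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i      # hold out trial i
        m0 = X[mask & (y == 0)].mean(axis=0)
        m1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - m1) < np.linalg.norm(X[i] - m0))
        hits += pred == y[i]
    return hits / len(y)

# Decode at each time point separately -> an accuracy time course
accuracy = [nearest_mean_accuracy(data[:, :, t], labels) for t in range(n_times)]
```

Accuracy hovers at chance before the injected onset and rises after it, which is the basic signature such decoding time courses are read for; temporal generalization extends this by training at one time point and testing at all others.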
Arujuna, Aruna V; Housden, R James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'Neill, Mark D; Razavi, Reza; Rhode, Kawal S
2014-01-01
Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and X-ray fluoroscopy. The system was validated in two stages: 1) preclinical, to determine function and validate accuracy; and 2) clinical, to assess workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed in both phantom and porcine experimental studies, with median 2-D projection errors of 4.5 and 3.3 mm, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied, nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases, with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures.
Reconstruction of phonon relaxation times from systems featuring interfaces with unknown properties
NASA Astrophysics Data System (ADS)
Forghani, Mojtaba; Hadjiconstantinou, Nicolas G.
2018-05-01
We present a method for reconstructing the phonon relaxation-time function τω=τ (ω ) (including polarization) and associated phonon free-path distribution from thermal spectroscopy data for systems featuring interfaces with unknown properties. Our method does not rely on the effective thermal-conductivity approximation or a particular physical model of the interface behavior. The reconstruction is formulated as an optimization problem in which the relaxation times are determined as functions of frequency by minimizing the discrepancy between the experimentally measured temperature profiles and solutions of the Boltzmann transport equation for the same system. Interface properties such as transmissivities are included as unknowns in the optimization; however, because for the thermal spectroscopy problems considered here the reconstruction is not very sensitive to the interface properties, the transmissivities are only approximately reconstructed and can be considered as byproducts of the calculation whose primary objective is the accurate determination of the relaxation times. The proposed method is validated using synthetic experimental data obtained from Monte Carlo solutions of the Boltzmann transport equation. The method is shown to remain robust in the presence of uncertainty (noise) in the measurement.
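The reconstruction idea, stated as an optimization, can be sketched with a toy forward model standing in for the Boltzmann transport solver. The power-law form tau(w) = A·w^(-n), the mode-averaged exponential decay, and all numerical values below are illustrative assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the BTE forward model: a surface-temperature decay whose
# rate depends on the relaxation-time parameters (A, n) via tau(w) = A * w**-n.
omegas = np.linspace(1.0, 5.0, 20)          # phonon frequencies (arb. units)
t_grid = np.linspace(0.0, 2.0, 40)          # measurement times

def forward(params):
    A, n = params
    tau = A * omegas ** (-n)                # relaxation-time function
    # each mode relaxes exponentially; the detector sees the mode-averaged signal
    return np.exp(-t_grid[:, None] / tau[None, :]).mean(axis=1)

true = (2.0, 1.5)
rng = np.random.default_rng(1)
measured = forward(true) + rng.normal(scale=1e-3, size=t_grid.size)  # synthetic "experiment"

# Reconstruction: minimize the discrepancy between model output and measurement.
fit = least_squares(lambda p: forward(p) - measured, x0=[1.0, 1.0],
                    bounds=([0.1, 0.1], [10.0, 5.0]))
print(fit.x)  # recovers roughly (2.0, 1.5)
```

In the paper the unknowns additionally include interface transmissivities; here the same least-squares structure applies with those parameters appended to the optimization vector.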
NASA Astrophysics Data System (ADS)
Choi, Byung Sang
Compared to the overwhelming amount of technical data available in other advanced technologies, knowledge of particle technology, especially particle synthesis from solution, is still poor due to the lack of equipment available to study crystallization phenomena inside a crystallizer. Recent technical advances in particle-size measurement, such as the Coulter counter and laser light scattering, have made in/ex situ study of some aspects of particle synthesis (growth, attrition, and aggregation) possible in simple systems. Even with these advances in measurement technology, fully grasping crystallization phenomena requires further theoretical and technical progress in understanding the underlying mechanisms. The motivation of this work is therefore to establish general processing parameters and to produce rigorous experimental data, with reliable performance and characterization, that account for the crystallization phenomena of nucleation, growth, aggregation, and breakage, including their variation with time and space, in a controlled continuous mixed-suspension mixed-product removal (CMSMPR) crystallizer.
This dissertation reports results and achievements in the following areas: (1) experimental programs to support the development and validation of phenomenological models and the generation of laboratory data for testing, refining, and validating the crystallization process; (2) development of a laboratory well-mixed crystallizer system and experimental protocols to generate crystal size distribution (CSD) data; (3) the effects of feed solution concentration, crystallization temperature, feed flow rate, and mixing speed, as well as of different mixer types, on the evolution of CSDs with time from a concentrated brine solution; and (4) quantification, through statistically designed experiments, of the effects of processing variables on the resultant particle structure and steady-state CSD, related to each operating condition by studying the detailed crystallization processes of nucleation, growth, and breakage, as well as agglomeration. The purification of CaCl2 solution involving the crystallization of NaCl from a solution mixture of CaCl2, KCl, and NaCl, as shipped from Dow Chemical, Ludington, was studied in a CMSMPR crystallizer as our model system because of its nucleation and crystal-growth tendencies with little agglomeration. The project also generated a significant body of experimental data, available at http://www.che.utah.edu/~ring/CrystallizationWeb.
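As context for the CSD data such a crystallizer produces, the idealized steady-state MSMPR population balance (size-independent growth, no agglomeration or breakage) predicts an exponential population density, n(L) = n0·exp(-L/(G·tau)). The sketch below uses illustrative parameter values, not data from this work, and shows how G and n0 are recovered from a semilog plot of the CSD:

```python
import numpy as np

# Idealized steady-state MSMPR population balance (size-independent growth,
# no agglomeration or breakage): n(L) = n0 * exp(-L / (G * tau)).
G = 1.0e-8        # growth rate, m/s (illustrative value)
tau = 3600.0      # mean residence time, s
n0 = 1.0e12       # nuclei population density, #/m^4

L = np.linspace(0.0, 200e-6, 100)           # crystal size, m
n = n0 * np.exp(-L / (G * tau))

# Experimentalists recover G and n0 from the slope/intercept of ln(n) vs L.
slope, intercept = np.polyfit(L, np.log(n), 1)
print(-1.0 / (slope * tau))  # recovered growth rate, ~1e-8 m/s
```

Deviations of measured CSDs from this straight semilog line are exactly what signal the agglomeration and breakage effects the dissertation quantifies.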
A Validated Multiscale In-Silico Model for Mechano-sensitive Tumour Angiogenesis and Growth
Loizidou, Marilena; Stylianopoulos, Triantafyllos; Hawkes, David J.
2017-01-01
Vascularisation is a key feature of cancer growth, invasion and metastasis. To better understand the governing biophysical processes and their relative importance, it is instructive to develop physiologically representative mathematical models with which to compare to experimental data. Previous studies have successfully applied this approach to test the effect of various biochemical factors on tumour growth and angiogenesis. However, these models do not account for the experimentally observed dependency of angiogenic network evolution on growth-induced solid stresses. This work introduces two novel features: the effects of hapto- and mechanotaxis on vessel sprouting, and mechano-sensitive dynamic vascular remodelling. The proposed three-dimensional, multiscale, in-silico model of dynamically coupled angiogenic tumour growth is specified to in-vivo and in-vitro data, chosen, where possible, to provide a physiologically consistent description. The model is then validated against in-vivo data from murine mammary carcinomas, with particular focus placed on identifying the influence of mechanical factors. Crucially, we find that it is necessary to include hapto- and mechanotaxis to recapitulate observed time-varying spatial distributions of angiogenic vasculature. PMID:28125582
Experimental and modeling uncertainties in the validation of lower hybrid current drive
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poli, F. M.; Bonoli, P. T.; Chilenski, M.
Our work discusses sources of uncertainty in the validation of lower hybrid wave current drive simulations against experiments, by evolving self-consistently the magnetic equilibrium and the heating and current drive profiles, calculated with a combined toroidal ray tracing code and 3D Fokker–Planck solver. The simulations indicate a complex interplay of elements, where uncertainties in the input plasma parameters, in the models and in the transport solver combine and compensate each other, at times. It is concluded that ray-tracing calculations should include a realistic representation of the density and temperature in the region between the confined plasma and the wall, which is especially important in regimes where the LH waves are weakly damped and undergo multiple reflections from the plasma boundary. Uncertainties introduced in the processing of diagnostic data as well as uncertainties introduced by model approximations are assessed. We show that, by comparing the evolution of the plasma parameters in self-consistent simulations with available data, inconsistencies can be identified and limitations in the models or in the experimental data assessed.
Numerical and Experimental Study of Wake Redirection Techniques in a Boundary Layer Wind Tunnel
NASA Astrophysics Data System (ADS)
Wang, J.; Foley, S.; Nanos, E. M.; Yu, T.; Campagnolo, F.; Bottasso, C. L.; Zanotti, A.; Croce, A.
2017-05-01
The aim of the present paper is to validate a wind farm LES framework in the context of two distinct wake redirection techniques: yaw misalignment and individual cyclic pitch control. A test campaign was conducted using scaled wind turbine models in a boundary layer wind tunnel, where both particle image velocimetry and hot-wire thermo anemometers were used to obtain high quality measurements of the downstream flow. A LiDAR system was also employed to determine the non-uniformity of the inflow velocity field. A high-fidelity large-eddy simulation lifting-line model was used to simulate the aerodynamic behavior of the system, including the geometry of the wind turbine nacelle and tower. A tuning-free Lagrangian scale-dependent dynamic approach was adopted to improve the sub-grid scale modeling. Comparisons with experimental measurements are used to systematically validate the simulations. The LES results are in good agreement with the PIV and hot-wire data in terms of time-averaged wake profiles, turbulence intensity and Reynolds shear stresses. Discrepancies are also highlighted, to guide future improvements.
A numerical model of acoustic wave caused by a single positive corona source
NASA Astrophysics Data System (ADS)
Zhang, Bo; Li, Zhen; He, Jinliang
2017-10-01
Audible noise accompanies corona discharge and is one of the most important electromagnetic-environment issues for high-voltage transmission lines. Most studies of the audible noise generated by corona discharge have focused on statistical analysis of experimental results, and a series of empirical formulas has been derived to predict the noise; few have interpreted its generating mechanism. Sound in air is a fluctuation of the air, which leads to the hypothesis that the sound wave is generated by the interaction of charged particles with air molecules during the discharge process. To test this hypothesis, experiments were carried out in this paper to study the relationship between the audible noise and the corona current, including the correlation in both the time and frequency domains. Based on the experimental results, fluid equations for the particles in air were introduced to study the interactions among electrons, ions, and neutral molecules in the discharge, and a numerical model for the amplitude of corona acoustic emission was developed and validated.
NASA Astrophysics Data System (ADS)
Zhu, Ning; Sun, Shou-Guang; Li, Qiang; Zou, Hua
2014-12-01
One of the major problems in structural fatigue-life analysis is establishing structural load spectra under actual operating conditions. This study conducts theoretical research on, and experimental validation of, quasi-static load spectra for the bogie frame structures of high-speed trains. The quasi-static load series corresponding to the quasi-static deformation modes are identified according to the structural form and bearing conditions of high-speed-train bogie frames. A force-measuring frame is designed and manufactured based on the quasi-static load series, and the load-decoupling model of the series is established via calibration tests. Quasi-static load-time histories are then obtained from online tests and decoupling analysis for the intermediate range of the Beijing-Shanghai dedicated passenger line. Damage-consistency calibration of the quasi-static discrete load spectra is performed according to a damage-consistency criterion and a genetic algorithm. The calibrated damage corresponding to the quasi-static discrete load spectra satisfies the safety requirements of bogie frames.
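The damage-consistency idea compares the fatigue damage of a discretized spectrum against that of the measured history. A minimal sketch using Palmgren-Miner linear damage with a Basquin S-N curve is below; the material constants and the 8-level spectrum are illustrative assumptions, not bogie-frame data, and the paper's calibration additionally drives the spectrum levels with a genetic algorithm:

```python
import numpy as np

# Palmgren-Miner linear damage with a Basquin S-N curve N(S) = C * S**(-m).
# Illustrative constants, not bogie-frame material data.
C, m = 1.0e12, 3.0

def cycles_to_failure(S):
    return C * S ** (-m)

def miner_damage(amplitudes, counts):
    # Damage is the sum of cycle ratios n_i / N_i over the spectrum levels.
    return float(np.sum(counts / cycles_to_failure(amplitudes)))

# An 8-level discrete load spectrum (stress amplitude in MPa, cycle counts).
levels = np.array([40.0, 60.0, 80.0, 100.0, 120.0, 140.0, 160.0, 180.0])
counts = np.array([1e6, 5e5, 2e5, 1e5, 5e4, 2e4, 1e4, 5e3])

D = miner_damage(levels, counts)
print(round(D, 3))  # the calibration adjusts the spectrum until this matches the measured history
```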
Study of indoor radon distribution using measurements and CFD modeling.
Chauhan, Neetika; Chauhan, R P; Joshi, M; Agarwal, T K; Aggarwal, Praveen; Sahoo, B K
2014-10-01
Measurement and/or prediction of indoor radon (²²²Rn) concentration is important due to the impact of radon on indoor air quality and the consequent inhalation hazard. In recent times, computational fluid dynamics (CFD) based modeling has become a cost-effective replacement for experimental methods in the prediction and visualization of indoor pollutant distribution. The aim of this study is to implement CFD-based modeling of indoor radon gas distribution, focusing on a comparison of experimentally measured and CFD-predicted spatial distributions of radon concentration for a model test room. The key inputs for simulation, viz. radon exhalation rate and ventilation rate, were measured as part of this study. Validation experiments were performed by measuring radon concentration at different locations of the test room using active (continuous radon monitor) and passive (pin-hole dosimeter) techniques. The modeling predictions were found to match the measurement results reasonably well. The validated model can be used to understand and study the factors affecting indoor radon distribution in more realistic indoor environments.
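A zero-dimensional (well-mixed) mass balance is a useful sanity check alongside such CFD predictions: at steady state the indoor concentration is C = E·A / (V·(λ_v + λ_Rn)). All input values below are illustrative assumptions, not the measured exhalation and ventilation rates from the study:

```python
# Zero-dimensional (well-mixed) check often run alongside CFD: steady-state
# indoor radon concentration from exhalation, ventilation and decay.
E = 0.02          # radon exhalation rate from walls/floor, Bq m^-2 h^-1 (illustrative)
A = 54.0          # exhaling surface area, m^2
V = 30.0          # room volume, m^3
lam_v = 0.5       # ventilation rate (air changes per hour), h^-1
lam_rn = 7.56e-3  # 222Rn decay constant, h^-1

C = E * A / (V * (lam_v + lam_rn))   # steady-state concentration, Bq m^-3
print(round(C, 3))
```

The CFD model refines this single number into a spatial distribution; large departures of the CFD volume average from the box-model value would flag an inconsistency in the inputs.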
Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches
NASA Astrophysics Data System (ADS)
Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia
2017-10-01
With the increasing level of complexity and automation in automotive engineering, the simulation of safety-relevant Advanced Driver Assistance Systems (ADAS) places growing accuracy demands on the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to account for tyre temperature and the resulting changes in tyre characteristics has risen significantly. Therefore, experimental validation of three different temperature-model approaches is carried out, discussed and compared in this article. To evaluate the range of application of the presented approaches, with a view to their further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, attention is paid to computational time and the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements of a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 from an industrial flat test bench are used. Finally, the simulation results are compared with the measurement data.
NASA Astrophysics Data System (ADS)
Zhou, Wanmeng; Wang, Hua; Tang, Guojin; Guo, Shuai
2016-09-01
The time-consuming experimental method of handling-qualities assessment cannot meet the increasingly fast design requirements of manned space flight. As a tool for aircraft handling-qualities research, model-predictive-control structured inverse simulation (MPC-IS) has potential applications in the aerospace field to guide astronauts' operations and evaluate handling qualities more effectively. This paper therefore establishes MPC-IS for manually controlled rendezvous and docking (RVD) and proposes a novel artificial neural network inverse simulation system (ANN-IS) to further decrease the computational cost. The novel system was obtained by replacing the inverse model of MPC-IS with an artificial neural network. The optimal neural network was trained by the genetic Levenberg-Marquardt algorithm and finally determined by the Levenberg-Marquardt algorithm. To validate MPC-IS and ANN-IS, manually controlled RVD experiments were carried out on the simulator. Comparisons between simulation results and experimental data demonstrated the validity of both systems and the high computational efficiency of ANN-IS.
Tang, Hua; Chen, Wei; Lin, Hao
2016-04-01
Immunoglobulins, also called antibodies, are a group of cell surface proteins produced by the immune system in response to the presence of a foreign substance (called an antigen). They play key roles in many medical, diagnostic and biotechnological applications. Correct identification of immunoglobulins is crucial to the comprehension of humoral immune function. With the avalanche of protein sequences identified in the postgenomic age, it is highly desirable to develop computational methods to identify immunoglobulins in a timely manner. In view of this, we designed a predictor called "IGPred" by formulating protein sequences with the pseudo amino acid composition, into which nine physicochemical properties of amino acids were incorporated. Jackknife cross-validated results showed that 96.3% of immunoglobulins and 97.5% of non-immunoglobulins can be correctly predicted, indicating that IGPred holds very high potential to become a useful tool for antibody analysis. For the convenience of most experimental scientists, a web server for IGPred was established at http://lin.uestc.edu.cn/server/IGPred. We believe that the web server will become a powerful tool to study immunoglobulins and to guide related experimental validations.
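The jackknife evaluation reported above is leave-one-out cross-validation. A minimal sketch on synthetic feature vectors is below; the logistic-regression classifier and the toy features are illustrative assumptions, not the IGPred predictor or its actual pseudo amino acid composition encoding:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Toy stand-in for pseudo amino acid composition vectors: 20 composition
# features plus a few physicochemical terms (IGPred incorporates nine properties).
n_pos, n_neg, n_feat = 60, 60, 26
X = rng.normal(size=(n_pos + n_neg, n_feat))
X[:n_pos, :5] += 2.0                     # separable signal for the positive class
y = np.array([1] * n_pos + [0] * n_neg)

# Jackknife (leave-one-out) cross-validation: each sequence is held out once.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=LeaveOneOut()).mean()
print(round(acc, 3))
```

The jackknife is preferred for small benchmark datasets because it yields a unique, deterministic accuracy estimate, unlike randomly partitioned k-fold schemes.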
Brannock, M; Wang, Y; Leslie, G
2010-05-01
Membrane bioreactors (MBRs) have been successfully used in aerobic biological wastewater treatment to solve the perennial problem of effective solids-liquid separation. The optimisation of MBRs requires knowledge of membrane fouling, biokinetics and mixing; however, research has mainly concentrated on the fouling and biokinetics (Ng and Kim, 2007). Current methods of designing for a desired flow regime within MBRs are largely based on assumptions (e.g. complete mixing of tanks) and empirical techniques (e.g. specific mixing energy), and it is difficult to predict how sludge rheology and vessel design in full-scale installations affect hydrodynamics, and hence overall performance. Computational fluid dynamics (CFD) provides a method for predicting how vessel features and mixing energy usage affect the hydrodynamics. In this study, a CFD model was developed which accounts for aeration, sludge rheology and geometry (i.e. bioreactor and membrane module). This MBR CFD model was then applied to two full-scale MBRs and successfully validated against experimental results. The effect of sludge settling and rheology was found to have minimal impact on the bulk mixing (i.e. the residence time distribution).
Heisenberg's error-disturbance relations: A joint measurement-based experimental test
NASA Astrophysics Data System (ADS)
Zhao, Yuan-Yuan; Kurzyński, Paweł; Xiang, Guo-Yong; Li, Chuan-Feng; Guo, Guang-Can
2017-04-01
The original Heisenberg error-disturbance relation was recently shown to be not universally valid and two different approaches to reformulate it were proposed. The first one focuses on how the error and disturbance of two observables A and B depend on a particular quantum state. The second one asks how a joint measurement of A and B affects their eigenstates. Previous experiments focused on the first approach. Here we focus on the second one. First, we propose and implement an extendible method of quantum-walk-based joint measurements of noisy Pauli operators to test the error-disturbance relation for qubits introduced in the work of Busch et al. [Phys. Rev. A 89, 012129 (2014), 10.1103/PhysRevA.89.012129], where the polarization of the single photon, corresponding to a walker's auxiliary degree of freedom that is commonly known as a coin, undergoes a position- and time-dependent evolution. Then we formulate and experimentally test a universally valid state-dependent relation for three mutually unbiased observables. We therefore establish a method of testing error-disturbance relations.
Nonlinear system identification of smart structures under high impact loads
NASA Astrophysics Data System (ADS)
Sarp Arsava, Kemal; Kim, Yeesock; El-Korchi, Tahar; Park, Hyo Seon
2013-05-01
The main purpose of this paper is to develop numerical models for the prediction and analysis of the highly nonlinear behavior of integrated structure control systems subjected to high impact loading. A time-delayed adaptive neuro-fuzzy inference system (TANFIS) is proposed for modeling of the complex nonlinear behavior of smart structures equipped with magnetorheological (MR) dampers under high impact forces. Experimental studies are performed to generate sets of input and output data for training and validation of the TANFIS models. The high impact load and current signals are used as the input disturbance and control signals while the displacement and acceleration responses from the structure-MR damper system are used as the output signals. The benchmark adaptive neuro-fuzzy inference system (ANFIS) is used as a baseline. Comparisons of the trained TANFIS models with experimental results demonstrate that the TANFIS modeling framework is an effective way to capture nonlinear behavior of integrated structure-MR damper systems under high impact loading. In addition, the performance of the TANFIS model is much better than that of ANFIS in both the training and the validation processes.
An Automated, Experimenter-Free Method for the Standardised, Operant Cognitive Testing of Rats
Rivalan, Marion; Munawar, Humaira; Fuchs, Anna; Winter, York
2017-01-01
Animal models of human pathology are essential for biomedical research. However, a recurring issue in the use of animal models is the poor reproducibility of behavioural and physiological findings within and between laboratories, and the most critical factor influencing this issue remains the experimenter. One solution is the use of procedures devoid of human intervention. We present a novel approach to experimenter-free testing of cognitive abilities in rats, combining undisturbed group housing with automated, standardized, individual operant testing. The experimenter-free system consisted of an automated operant system (Bussey-Saksida rat touch screen) connected via an automated animal sorter (PhenoSys) to a home cage containing group-living rats. The automated animal sorter, based on radio-frequency identification (RFID) technology, functioned as a mechanical replacement for the experimenter. Rats learnt to enter the operant chamber regularly and individually, and remained there only for the duration of the experimental session. Self-motivated rats acquired the complex touch-screen task of trial-unique non-matching to location (TUNL) in half the time reported for animals that were manually placed into the operant chamber. Rat performance was similar between the two groups within our laboratory, and comparable to previously published results obtained elsewhere. This reproducibility, both within and between laboratories, confirms the validity of the approach. In addition, automation reduced daily experimental time by 80%, eliminated animal handling, and reduced equipment cost. This automated, experimenter-free setup is a promising tool for testing a large variety of functions with full automation in future studies. PMID:28060883
Numerical Analysis of a Pulse Detonation Cross Flow Heat Load Experiment
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Naples, Andrew; Hoke, John L.; Schauer, Fred
2011-01-01
A comparison between experimentally measured and numerically simulated, time-averaged, point heat transfer rates in a pulse detonation engine (PDE) is presented. The comparison includes measurements and calculations for heat transfer to a cylinder in crossflow and to the tube wall itself using a novel spool design. Measurements are obtained at several locations and under several operating conditions. The measured and computed results are shown to be in substantial agreement, thereby validating the modeling approach. The model, which is based on computational fluid dynamics (CFD), is then used to interpret the results. A preheating of the incoming fuel charge is predicted, which results in increased volumetric flow and subsequent overfilling. The effect is validated with additional measurements.
Validation of the activity expansion method with ultrahigh pressure shock equations of state
NASA Astrophysics Data System (ADS)
Rogers, Forrest J.; Young, David A.
1997-11-01
Laser shock experiments have recently been used to measure the equation of state (EOS) of matter in the ultrahigh pressure region between condensed matter and a weakly coupled plasma. Some ultrahigh pressure data from nuclear-generated shocks are also available. Matter at these conditions has proven very difficult to treat theoretically. The many-body activity expansion method (ACTEX) has been used for some time to calculate EOS and opacity data in this region, for use in modeling inertial confinement fusion and stellar interior plasmas. In the present work, we carry out a detailed comparison with the available experimental data in order to validate the method. The agreement is good, showing that ACTEX adequately describes strongly shocked matter.
Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reboud, C.; Premel, D.; Lesselier, D.
2007-03-21
Eddy current testing (ECT) is widely used in iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. The achievement of experimental validations led us to the integration of these models into the CIVA platform. Modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.
Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations
NASA Astrophysics Data System (ADS)
Reboud, C.; Prémel, D.; Lesselier, D.; Bisiaux, B.
2007-03-01
Eddy current testing (ECT) is widely used in iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. The achievement of experimental validations led us to the integration of these models into the CIVA platform. Modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.
Integrated tokamak modeling: when physics informs engineering and research planning
NASA Astrophysics Data System (ADS)
Poli, Francesca
2017-10-01
Simulations that integrate virtually all the relevant engineering and physics aspects of a real tokamak experiment are a powerful tool for experimental interpretation, model validation and planning for both present and future devices. This tutorial will guide the audience through the building blocks of an ``integrated'' tokamak simulation, such as magnetic flux diffusion; thermal, momentum and particle transport; external heating and current drive sources; and wall particle sources and sinks. Emphasis is given to the connection and interplay between external actuators and plasma response, between the slow time scales of current diffusion and the fast time scales of transport, and to how reduced and high-fidelity models can contribute to simulating a whole device. To illustrate the potential and limitations of integrated tokamak modeling for discharge prediction, a helium plasma scenario for the ITER pre-nuclear phase is taken as an example. This scenario presents challenges because it requires core-edge integration and advanced models for the interaction between waves and fast ions, which are subject to a limited experimental database for validation and guidance. Starting from a scenario obtained by re-scaling parameters from the demonstration inductive ``ITER baseline'', it is shown how self-consistent simulations that encompass both core and edge plasma regions, as well as high-fidelity heating and current drive source models, are needed to set constraints on the density, magnetic field and heating scheme. This tutorial aims to demonstrate how integrated modeling, when used with an adequate level of criticism, can not only support the design of operational scenarios, but also help to assess the limitations and gaps in the available models, thus indicating where improved modeling tools are required and how present experiments can help validate them and inform research planning. Work supported by DOE under DE-AC02-09CH1146.
Challenges of NDE simulation tool validation, optimization, and utilization for composites
NASA Astrophysics Data System (ADS)
Leckey, Cara A. C.; Seebo, Jeffrey P.; Juarez, Peter
2016-02-01
Rapid, realistic nondestructive evaluation (NDE) simulation tools can aid in inspection optimization and prediction of inspectability for advanced aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of aerospace components; potentially shortening the time from material development to implementation by industry and government. Furthermore, ultrasound modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided wave based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation is still far from the goal of rapidly simulating damage detection techniques for large scale, complex geometry composite components/vehicles containing realistic damage types. Ongoing work at NASA Langley Research Center is focused on advanced ultrasonic simulation tool development. This paper discusses challenges of simulation tool validation, optimization, and utilization for composites. Ongoing simulation tool development work is described along with examples of simulation validation and optimization challenges that are more broadly applicable to all NDE simulation tools. The paper will also discuss examples of simulation tool utilization at NASA to develop new damage characterization methods for composites, and associated challenges in experimentally validating those methods.
Huchet, V; Pavan, S; Lochardet, A; Divanac'h, M L; Postollec, F; Thuault, D
2013-12-01
Molds are responsible for spoilage of bakery products during storage. A modeling approach to predict the effect of water activity (aw) and temperature on the appearance time of Aspergillus candidus was developed and validated on cakes. The gamma concept of Zwietering was adapted to model fungal growth, taking into account the impact of temperature and aw. We hypothesized that the same model could be used to calculate the time for mycelium to become visible (tv), by substituting the matrix parameter with tv. Cardinal values of A. candidus were determined on potato dextrose agar, and predicted tv were further validated by challenge tests run on 51 pastries. Taking into account the aw dynamics recorded in pastries under reasonable storage conditions, a high correlation was observed between predicted and observed tv when the aw at equilibrium (after 14 days of storage) was used for modeling (Af = 1.072, Bf = 0.979). Validation studies on industrial cakes confirmed the experimental results and demonstrated the suitability of the model to predict tv in food as a function of aw and temperature. Copyright © 2013 Elsevier Ltd. All rights reserved.
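The gamma-style cardinal model referred to above can be sketched as follows. This is an illustrative implementation, not the authors' calibrated model: the Rosso cardinal parameter form is a common choice for the gamma terms, and the cardinal values and reference tv in the test below are invented placeholders, not the values determined for A. candidus.

```python
def gamma_cpm(x, xmin, xopt, xmax):
    """Rosso cardinal parameter model: equals 1 at x = xopt and 0 at the
    growth limits xmin and xmax."""
    if x <= xmin or x >= xmax:
        return 0.0
    num = (x - xmax) * (x - xmin) ** 2
    den = (xopt - xmin) * ((xopt - xmin) * (x - xopt)
                           - (xopt - xmax) * (xopt + xmin - 2.0 * x))
    return num / den

def time_to_visibility(T, aw, tv_opt, cards_T, cards_aw):
    """Predicted time for mycelium to become visible: tv at the optimum,
    divided by the product of the gamma factors, so tv grows as conditions
    move away from the optimum."""
    g = gamma_cpm(T, *cards_T) * gamma_cpm(aw, *cards_aw)
    return float('inf') if g == 0.0 else tv_opt / g
```

The division by the gamma product mirrors the substitution described in the abstract: a growth-rate-type parameter is replaced by a time, so the environmental correction acts inversely.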
Gómez-Carracedo, M P; Andrade, J M; Rutledge, D N; Faber, N M
2007-03-07
Selecting the correct dimensionality is critical for obtaining partial least squares (PLS) regression models with good predictive ability. Although calibration and validation sets are best established using experimental designs, industrial laboratories cannot afford such an approach. Typically, samples are collected in a (formally) undesigned way, spread over time, and their measurements are included in routine measurement processes. This makes it hard to evaluate PLS model dimensionality. In this paper, classical criteria (leave-one-out cross-validation and adjusted Wold's criterion) are compared to recently proposed alternatives (smoothed PLS-PoLiSh and a randomization test) to seek out the optimum dimensionality of PLS models. Kerosene (jet fuel) samples were measured by attenuated total reflectance mid-IR spectrometry, and their spectra were used to predict eight important properties determined using reference methods that are time-consuming and prone to analytical errors. The alternative methods were shown to give reliable dimensionality predictions when compared to external validation. By contrast, the simpler methods seemed to be largely affected by the largest changes in the modeling capabilities of the first components.
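The dimensionality-selection logic of an adjusted Wold's criterion, one of the classical criteria named above, can be sketched in a few lines. This is a schematic: it takes a precomputed PRESS (prediction error sum of squares) curve from cross-validation, and the 0.95 threshold is an assumed value, chosen here only for illustration.

```python
def wold_dimensionality(press, threshold=0.95):
    """Adjusted Wold's criterion (sketch): keep adding PLS components while
    the next component still reduces PRESS by a useful margin, i.e. while
    PRESS[a+1] / PRESS[a] < threshold; stop at the first ratio above it."""
    for a in range(len(press) - 1):
        if press[a + 1] / press[a] >= threshold:
            return a + 1  # optimum number of components (1-based)
    return len(press)
```

A flat tail in the PRESS curve (ratios near 1) therefore truncates the model, which is exactly the over-fitting guard the abstract contrasts with external validation.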
Huntington Disease: Linking Pathogenesis to the Development of Experimental Therapeutics.
Mestre, Tiago A; Sampaio, Cristina
2017-02-01
Huntington disease (HD) is an autosomal dominant neurodegenerative condition caused by a CAG trinucleotide expansion in the huntingtin gene. At present, the HD field is experiencing exciting times with the assessment, for the first time in human subjects, of interventions aimed at core disease mechanisms. Out of a portfolio of interventions that claim a potential disease-modifying effect in HD, the target huntingtin has the most robust validation. In this review, we discuss the spectrum of huntingtin-lowering therapies that are currently being considered. We provide a critical appraisal of the validation of huntingtin as a drug target, describing the advantages, challenges, and limitations of the proposed therapeutic interventions. The development of these new therapies relies strongly on the knowledge of HD pathogenesis and the ability to translate this knowledge into validated pharmacodynamic biomarkers. Altogether, the goal is to support a rational drug development that is ethical and cost-effective. Among the pharmacodynamic biomarkers under development, the quantification of mutant huntingtin in the cerebrospinal fluid and PET imaging targeting huntingtin or phosphodiesterase 10A deserve special attention. Huntingtin-lowering therapeutics are eagerly awaited as the first interventions that may be able to change the course of HD in a meaningful way.
Azevedo de Brito, Wanessa; Gomes Dantas, Monique; Andrade Nogueira, Fernando Henrique; Ferreira da Silva-Júnior, Edeildo; Xavier de Araújo-Júnior, João; Aquino, Thiago Mendonça de; Adélia Nogueira Ribeiro, Êurica; da Silva Solon, Lilian Grace; Soares Aragão, Cícero Flávio; Barreto Gomes, Ana Paula
2017-08-30
Guanylhydrazones are molecules with great pharmacological potential in various therapeutic areas, including antitumoral activity. Factorial design is an excellent tool in the optimization of a chromatographic method, because it allows factors such as temperature, mobile phase composition, mobile phase pH, and column length, among others, to be changed quickly to establish the optimal conditions of analysis. The aim of the present work was to develop and validate HPLC and UHPLC methods for the simultaneous determination of guanylhydrazones with anticancer activity employing experimental design. Precise, accurate, linear and robust HPLC and UHPLC methods were developed and validated for the simultaneous quantification of the guanylhydrazones LQM10, LQM14, and LQM17. The UHPLC method was more economical, with four times lower solvent consumption and a 20 times smaller injection volume, which allowed better column performance. Comparing the empirical approach employed in the HPLC method development to the DoE approach employed in the UHPLC method development, we can conclude that the factorial design made the method development faster, more practical and more rational. This resulted in methods that can be employed in the analysis, evaluation and quality control of these new synthetic guanylhydrazones.
NASA Astrophysics Data System (ADS)
Joiner, N.; Esser, B.; Fertig, M.; Gülhan, A.; Herdrich, G.; Massuti-Ballester, B.
2016-12-01
This paper summarises the final synthesis of an ESA technology research programme entitled "Development of an Innovative Validation Strategy of Gas Surface Interaction Modelling for Re-entry Applications". The focus of the project was to demonstrate the correct pressure dependency of catalytic surface recombination, with an emphasis on Low Earth Orbit (LEO) re-entry conditions and thermal protection system materials. A physics-based model describing the prevalent recombination mechanisms was proposed for implementation into two CFD codes, TINA and TAU. A dedicated experimental campaign was performed to calibrate and validate the CFD model on TPS materials pertinent to the EXPERT space vehicle at a wide range of temperatures and pressures relevant to LEO. A new set of catalytic recombination data was produced that was able to improve the chosen model calibration for CVD-SiC and provide the first model calibration for the Nickel-Chromium super-alloy PM1000. The experimentally observed pressure dependency of catalytic recombination can only be reproduced by the Langmuir-Hinshelwood recombination mechanism. Due to decreasing degrees of (enthalpy and hence) dissociation with facility stagnation pressure, it was not possible to obtain catalytic recombination coefficients from the measurements at high experimental stagnation pressures. Therefore, the CFD model calibration has been improved by this activity based on the low pressure results. The results of the model calibration were applied to the existing EXPERT mission profile to examine the impact of the experimentally calibrated model at flight relevant conditions. 
The heat flux overshoot at the CVD-SiC/PM1000 junction on EXPERT is confirmed to produce radiative equilibrium temperatures in close proximity to the PM1000 melt temperature. This was anticipated within the margins of the vehicle design; however, owing to the measurements made here for the first time at temperatures relevant to the junction, increased confidence is now placed in the computations.
Aliev, Abil E; Kulke, Martin; Khaneja, Harmeet S; Chudasama, Vijay; Sheppard, Tom D; Lanigan, Rachel M
2014-01-01
We propose a new approach for force field optimization which aims at reproducing dynamics characteristics using biomolecular MD simulations, in addition to improved prediction of motionally averaged structural properties available from experiment. As the source of experimental data for dynamics fittings, we use 13C NMR spin-lattice relaxation times T1 of backbone and sidechain carbons, which allow the determination of correlation times of both overall molecular and intramolecular motions. For structural fittings, we use motionally averaged experimental values of NMR J couplings. The proline residue and its derivative 4-hydroxyproline, with relatively simple cyclic structure and sidechain dynamics, were chosen for the assessment of the new approach in this work. Initially, grid search and simplexed MD simulations identified a large number of parameter sets that fit the experimental J couplings equally well. Using the Arrhenius-type relationship between the force constant and the correlation time, the available MD data for a series of parameter sets were analyzed to predict the value of the force constant that best reproduces the experimental timescale of the sidechain dynamics. Verification of the new force field (termed AMBER99SB-ILDNP) against NMR J couplings and correlation times showed consistent and significant improvements compared to the original force field in reproducing both structural and dynamics properties. The results suggest that matching experimental timescales of motions together with motionally averaged characteristics is a valid approach for force field parameter optimization. Such a comprehensive approach is not restricted to cyclic residues and can be extended to other amino acid residues, as well as to the backbone. Proteins 2014; 82:195–215. © 2013 Wiley Periodicals, Inc. PMID:23818175
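The Arrhenius-type fitting step described above can be sketched as a one-variable inversion. The assumption made here, for illustration only, is that ln(tau) is linear in the force constant k (barrier height scaling with the force constant); the fit below is an ordinary least-squares line through the MD data points, inverted at the experimental correlation time. Function and variable names are invented.

```python
import math

def fit_force_constant(ks, taus, tau_target):
    """Assume ln(tau) ~ a + b*k across the tried parameter sets; fit a and b
    by least squares, then invert to find the k that would reproduce the
    target (experimental) correlation time."""
    n = len(ks)
    y = [math.log(t) for t in taus]
    kbar = sum(ks) / n
    ybar = sum(y) / n
    b = (sum((k - kbar) * (yi - ybar) for k, yi in zip(ks, y))
         / sum((k - kbar) ** 2 for k in ks))
    a = ybar - b * kbar
    return (math.log(tau_target) - a) / b
```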
Improving the seismic small-scale modelling by comparison with numerical methods
NASA Astrophysics Data System (ADS)
Pageot, Damien; Leparoux, Donatienne; Le Feuvre, Mathieu; Durand, Olivier; Côte, Philippe; Capdeville, Yann
2017-10-01
The potential of experimental seismic modelling at reduced scale provides an intermediate step between numerical tests and geophysical campaigns on field sites. Recent technologies such as laser interferometers offer the opportunity to get data without any coupling effects. This kind of device is used in the Mesures Ultrasonores Sans Contact (MUSC) measurement bench, for which an automated support system makes it possible to generate multisource and multireceiver seismic data at laboratory scale. Experimental seismic modelling would become a great tool, providing a value-added stage in the imaging process validation, if (1) the experimental measurement chain is perfectly mastered, so that the experimental data are perfectly reproducible with a numerical tool, and if (2) the effective source is reproducible along the measurement setup. These aspects of a quantitative validation, for devices with piezoelectric sources and a laser interferometer, have not yet been quantitatively studied in the published literature. Thus, as a new stage for the experimental modelling approach, these two key issues are tackled in the present paper in order to precisely define the quality of the experimental small-scale data provided by the MUSC bench, which are available to the scientific community. These two steps of quantitative validation are dealt with apart from any imaging technique, so that geophysicists who want to use such data (delivered as free data) can know their quality precisely before testing any imaging technique. First, in order to overcome the 2-D-3-D correction usually done in seismic processing when comparing 2-D numerical data with 3-D experimental measurements, we quantitatively refined the comparison between numerical and experimental data by generating accurate experimental line sources, avoiding the need for a geometrical spreading correction of 3-D point-source data.
The comparison with 2-D and 3-D numerical modelling is based on the spectral element method. The approach shows the relevance of building a line source by sampling several source points, except for boundary effects at later arrival times. Indeed, the experimental results highlight the amplitude behaviour and the π/4 phase delay produced by a line source, in the same manner as the numerical data. In contrast, the 2-D corrections applied to 3-D data showed discrepancies that are higher for experimental data than for numerical ones, due to the source wavelet shape and interference between different arrivals. The experimental results from the approach proposed here show that these discrepancies are avoided, especially for the reflected echoes. Concerning the second point, which aims to assess the experimental reproducibility of the source, correlation coefficients of recordings from repeated source impacts on a homogeneous model are calculated. The quality of the results, with coefficients higher than 0.98, allows a mean source wavelet to be calculated by inversion of a mean data set. Results obtained on a more realistic model, simulating clays over limestones, confirm the reproducibility of the source impact.
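The source-repeatability check described above (pairwise correlation of repeated recordings, then a mean trace for wavelet estimation) can be sketched as follows. The function names are invented, and reducing the shots to their mean is only the simple averaging step; the wavelet inversion itself is not reproduced here.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two recorded traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def source_repeatability(shots):
    """Worst-case (minimum) pairwise correlation across repeated source
    impacts, plus the sample-by-sample mean trace used to estimate an
    average source wavelet."""
    cmin = min(pearson(shots[i], shots[j])
               for i in range(len(shots))
               for j in range(i + 1, len(shots)))
    mean_trace = [sum(s[k] for s in shots) / len(shots)
                  for k in range(len(shots[0]))]
    return cmin, mean_trace
```

A minimum coefficient above 0.98, as reported in the abstract, would justify averaging the shots before inverting for the wavelet.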
Viability of Cross-Flow Fan with Helical Blades for Vertical Take-off and Landing Aircraft
2012-09-01
Using computational fluid dynamics (CFD) software, ANSYS CFX, a three-dimensional (3-D) straight-bladed model was validated against a previous study's experimental results…
2011-09-01
…a quality evaluation with limited data, a model-based assessment must be… factors that affect system performance, a multistage approach to system validation, and a modeling and experimental methodology for efficiently addressing a wide range…
NASA Astrophysics Data System (ADS)
Rizzo, Axel; Vaglio-Gaudard, Claire; Martin, Julie-Fiona; Noguère, Gilles; Eschbach, Romain
2017-09-01
DARWIN2.3 is the reference package used for fuel cycle applications in France. It solves the Boltzmann and Bateman equations in a coupled manner, with the European JEFF-3.1.1 nuclear data library, to compute the fuel cycle values of interest. It includes both deterministic transport codes, APOLLO2 (for light water reactors) and ERANOS2 (for fast reactors), and the DARWIN/PEPIN2 depletion code, each of them being developed by CEA/DEN with the support of its industrial partners. The DARWIN2.3 package has been experimentally validated for pressurized and boiling water reactors, as well as for sodium fast reactors; this experimental validation relies on the analysis of post-irradiation experiments (PIE). The DARWIN2.3 experimental validation work points out some isotopes for which the calculation of the depleted concentration can be improved. Some other nuclides have no available experimental validation, and their concentration calculation uncertainty is provided by propagating a priori nuclear data uncertainties. This paper describes the work plan of studies initiated this year to improve the accuracy of the DARWIN2.3 depleted material balance calculation for some nuclides of interest for the fuel cycle.
NASA Astrophysics Data System (ADS)
Channumsin, Sittiporn; Ceriotti, Matteo; Radice, Gianmarco; Watson, Ian
2017-09-01
Multilayer insulation (MLI) is a recently discovered type of debris originating from delamination of aging spacecraft; it is mostly detected near the geosynchronous orbit (GEO). Observation data indicate that these objects are characterised by high reflectivity, high area-to-mass ratio (HAMR), fast rotation, high sensitivity to perturbations (especially solar radiation pressure) and change of area-to-mass ratio (AMR) over time. As a result, traditional models (e.g. cannonball) are unsuitable to represent and predict this debris' orbital evolution. Previous work by the authors effectively modelled the flexible debris by means of multibody dynamics to improve the prediction accuracy. The orbit evolution with the flexible model proved significantly different from that obtained with the rigid model. This paper presents a methodology to determine the dynamic properties of thin membranes, with the purpose of validating the deformation characteristics of the flexible model. The experimental setup is a high-vacuum chamber (10^-4 mbar), which significantly decreases air friction, inside which a thin membrane is hinged at one end but free at the other. A free motion test is used to determine the damping characteristics and natural frequency of the thin membrane via logarithmic decrement and frequency response. The membrane can swing freely in the chamber; the motion is tracked by a static optical camera, and a Kalman filter is implemented in the tracking algorithm to reduce noise and increase the tracking accuracy of the oscillating motion. Then, the effect of solar radiation pressure on the thin membrane is investigated: a high-power spotlight (500-2000 W) is used to illuminate the sample, and any displacement of the membrane is measured by means of a high-resolution laser sensor.
Analytic methods based on the natural frequency response and finite element analysis (FEA), including multibody simulations of both experimental setups, are used to validate the flexible model by comparing the experimental results for amplitude decay, natural frequencies and deformation. The experimental results show good agreement with both the analytical results and the finite element methods.
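The free-decay analysis named above (logarithmic decrement plus frequency response) reduces to two standard formulas, sketched here with invented function names. The inputs are successive positive peak amplitudes of the decaying oscillation and the measured damped period.

```python
import math

def damping_from_log_decrement(peaks):
    """Estimate the logarithmic decrement delta and the damping ratio zeta
    of a lightly damped free oscillation from successive positive peak
    amplitudes: delta = ln(A0/An)/n, zeta = delta / sqrt(4*pi^2 + delta^2)."""
    n = len(peaks) - 1
    delta = math.log(peaks[0] / peaks[-1]) / n
    zeta = delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)
    return delta, zeta

def natural_frequency(damped_period, zeta):
    """Undamped natural frequency (Hz) recovered from the measured damped
    period: f_n = f_d / sqrt(1 - zeta^2)."""
    return 1.0 / (damped_period * math.sqrt(1.0 - zeta ** 2))
```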
Rapidly-administered short forms of the Wechsler Adult Intelligence Scale-3rd edition.
Donnell, Alison J; Pliskin, Neil; Holdnack, James; Axelrod, Bradley; Randolph, Christopher
2007-11-01
Although the Wechsler Full Scale IQ (FSIQ) is a common component of most neuropsychological evaluations, there are many clinical situations where the complete administration of this battery is precluded by various constraints, including limitations of time and patient compliance. These constraints are particularly true for dementia evaluations involving elderly patients. The present study reports data on two short forms particularly suited to dementia evaluations, each requiring less than 20 min of administration time. One of the short forms was previously validated in dementia for the WAIS-R [Randolph, C., Mohr, E., & Chase, T. N. (1993). Assessment of intellectual function in dementing disorders: Validity of WAIS-R short forms for patients with Alzheimer's, Huntington's, and Parkinson's disease. Journal of Clinical and Experimental Neuropsychology, 15, 743-753]; the second was developed specifically for patients with motor disabilities. These short forms were validated using the WAIS-III normative standardization sample (N=2450), neurologic sample (N=63), and matched controls (N=49), and a separate mixed clinical sample (N=70). The results suggest that each short form provides an accurate and reliable estimate of WAIS-III FSIQ, validating their use in appropriate clinical contexts. The present data support the use of these short forms for dementia evaluations, and suggest that they may be applicable for the evaluation of other neurological and neuropsychiatric disorders that involve acquired neurocognitive impairment.
Improved patch-based learning for image deblurring
NASA Astrophysics Data System (ADS)
Dong, Bo; Jiang, Zhiguo; Zhang, Haopeng
2015-05-01
Most recent image deblurring methods use only the valid information found in the input image as the clue to fill the deblurred region. These methods usually suffer from insufficient prior information and relatively poor adaptiveness. A patch-based method not only uses the valid information of the input image itself, but also utilizes the prior information of sample images to improve adaptiveness. However, the cost function of this method is quite time-consuming to evaluate, and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learning patch likelihoods. On one hand, we consider the effect of Gaussian mixture model components with different weights and normalize the weight values, which optimizes the cost function and reduces running time. On the other hand, a post-processing method is proposed to suppress the ringing artifacts produced by the traditional patch-based method. Extensive experiments were performed. Experimental results verify that our method can effectively reduce execution time, suppress ringing artifacts, and preserve the quality of the deblurred image.
Time pressure and attention allocation effect on upper limb motion steadiness.
Liu, Sicong; Eklund, Robert C; Tenenbaum, Gershon
2015-01-01
Following ironic process theory (IPT), the authors aimed at investigating how attentional allocation affects participants' upper limb motion steadiness under low and high levels of mental load. A secondary purpose was to examine the validity of skin conductance level in measuring perception of pressure. The study consisted of 1 within-participant factor (i.e., phase: baseline, test) and 4 between-participant factors (i.e., gender: male, female; mental load: fake time constraints, no time constraints; attention: positive, suppressive; order: baseline → test, test → baseline). Eighty college students (40 men and 40 women, M(age) = 20.20 years, SD(age) = 1.52 years) participated in the study. Gender-stratified random assignment was employed in a 2 × 2 × 2 × 2 × 2 mixed experimental design. The findings generally support IPT but its predictions on motor performance under mental load may not be entirely accurate. Unlike men, women's performance was not susceptible to manipulations of mental load and attention allocation. The validity of skin conductance readings as an index of pressure perception was called into question.
Real-Time Performance of Mechatronic PZT Module Using Active Vibration Feedback Control.
Aggogeri, Francesco; Borboni, Alberto; Merlo, Angelo; Pellegrini, Nicola; Ricatto, Raffaele
2016-09-25
This paper proposes an innovative mechatronic piezo-actuated module to control vibrations in modern machine tools. Vibrations represent one of the main issues that seriously compromise the quality of the workpiece. The active vibration control (AVC) device is composed of a host part integrated with sensors and actuators synchronized by a regulator; it is able to make a self-assessment and adjust to alterations in the environment. In particular, an innovative smart actuator has been designed and developed to satisfy machining requirements during active vibration control. This study presents the mechatronic model based on the kinematic and dynamic analysis of the AVC device. To ensure real-time performance, an H2-LQG controller has been developed and validated by simulations involving a machine tool, PZT actuator and controller models. The Hardware in the Loop (HIL) architecture is adopted to control and attenuate the vibrations. A set of experimental tests has been performed to validate the AVC module on a commercial machine tool. The feasibility of the real-time vibration damping is demonstrated and the simulation accuracy is evaluated.
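The paper's controller is an H2-LQG design, which is beyond an abstract-sized sketch; the fragment below shows only the closely related LQR state-feedback computation for a scalar discrete-time plant, obtained by iterating the Riccati recursion to its fixed point. The plant and weight values in the test are arbitrary illustrative numbers, not taken from the paper.

```python
def scalar_dlqr(a, b, q, r, iters=500):
    """Steady-state LQR gain for a scalar discrete-time plant
    x[k+1] = a*x[k] + b*u[k] with stage cost q*x^2 + r*u^2, found by
    iterating the discrete Riccati recursion until it converges."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)   # gain for current cost-to-go
        p = q + a * p * (a - b * k)         # Riccati update
    return (b * p * a) / (r + b * p * b)

def closed_loop_pole(a, b, k):
    """Pole of the closed loop x[k+1] = (a - b*k) * x[k]."""
    return a - b * k
```

For an open-loop unstable plant (|a| > 1), the computed gain places the closed-loop pole inside the unit circle, which is the stabilizing property an AVC regulator relies on.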
ColE1-Plasmid Production in Escherichia coli: Mathematical Simulation and Experimental Validation.
Freudenau, Inga; Lutter, Petra; Baier, Ruth; Schleef, Martin; Bednarz, Hanna; Lara, Alvaro R; Niehaus, Karsten
2015-01-01
Plasmids have become very important as pharmaceutical gene vectors in the fields of gene therapy and genetic vaccination in the past years. In this study, we present a dynamic model to simulate the ColE1-like plasmid replication control, once for a DH5α-strain carrying a low copy plasmid (DH5α-pSUP 201-3) and once for a DH5α-strain carrying a high copy plasmid (DH5α-pCMV-lacZ) by using ordinary differential equations and the MATLAB software. The model includes the plasmid replication control by two regulatory RNA molecules (RNAI and RNAII) as well as the replication control by uncharged tRNA molecules. To validate the model, experimental data like RNAI- and RNAII concentration, plasmid copy number (PCN), and growth rate for three different time points in the exponential phase were determined. Depending on the sampled time point, the measured RNAI- and RNAII concentrations for DH5α-pSUP 201-3 reside between 6 ± 0.7 and 34 ± 7 RNAI molecules per cell and 0.44 ± 0.1 and 3 ± 0.9 RNAII molecules per cell. The determined PCNs averaged between 46 ± 26 and 48 ± 30 plasmids per cell. The experimentally determined data for DH5α-pCMV-lacZ reside between 345 ± 203 and 1086 ± 298 RNAI molecules per cell and 22 ± 2 and 75 ± 10 RNAII molecules per cell with an averaged PCN of 1514 ± 1301 and 5806 ± 4828 depending on the measured time point. As the model was shown to be consistent with the experimentally determined data, measured at three different time points within the growth of the same strain, we performed predictive simulations concerning the effect of uncharged tRNA molecules on the ColE1-like plasmid replication control. The hypothesis is that these tRNA molecules would have an enhancing effect on the plasmid production. The in silico analysis predicts that uncharged tRNA molecules would indeed increase the plasmid DNA production.
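The ordinary-differential-equation approach described above can be illustrated with a deliberately simplified negative-feedback loop: each plasmid transcribes the inhibitor RNAI, and replication initiation is repressed by RNAI, so the copy number self-limits. This toy system and all its rate constants are invented for illustration; it is not the authors' MATLAB model, and the tRNA branch is omitted.

```python
def simulate_plasmid(copies0=10.0, hours=10.0, dt=0.001,
                     k_rna1=20.0, d_rna1=2.0, k_rep=1.0, ki=5.0, mu=0.02):
    """Forward-Euler integration of a toy ColE1-style control loop:
    n      - plasmid copy number (replication repressed by RNAI, diluted at mu)
    rna1   - inhibitor RNAI (made from each plasmid, degraded at d_rna1)."""
    n, rna1 = copies0, 0.0
    for _ in range(int(hours / dt)):
        dn = k_rep * n / (1.0 + rna1 / ki) - mu * n
        drna1 = k_rna1 * n - d_rna1 * rna1
        n += dn * dt
        rna1 += drna1 * dt
    return n, rna1
```

Weakening the inhibition (larger ki) lets the copy number climb higher, which is the qualitative behaviour the feedback loop is meant to capture.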
Nerurkar, Nandan L; Mauck, Robert L; Elliott, Dawn M
2008-12-01
The objectives were to integrate theoretical and experimental approaches for annulus fibrosus (AF) functional tissue engineering, to apply a hyperelastic constitutive model that characterizes the evolution of engineered AF via scalar model parameters, and to validate the model and predict the response of engineered constructs to physiologic loading scenarios. There is a need for a tissue-engineered replacement for the degenerate AF. When evaluating engineered replacements for load-bearing tissues, it is necessary to evaluate mechanical function with respect to the native tissue, including nonlinearity and anisotropy. Aligned nanofibrous poly-epsilon-caprolactone scaffolds with prescribed fiber angles were seeded with bovine AF cells and analyzed over 8 weeks, using experimental (mechanical testing, biochemistry, histology) and theoretical methods (a hyperelastic fiber-reinforced constitutive model). The linear-region modulus for phi = 0 degrees constructs increased by approximately 25 MPa, and for phi = 90 degrees by approximately 2 MPa, from 1 day to 8 weeks in culture. Infiltration and proliferation of AF cells into the scaffold and abundant deposition of s-GAG and aligned collagen were observed. The constitutive model fit the experimental data well, yielding matrix and fiber parameters that increased with time in culture. Correlations were observed between biochemical measures and model parameters. The model was successfully validated and used to simulate time-varying responses of engineered AF under shear and biaxial loading. AF cells seeded on nanofibrous scaffolds elaborated an organized, anisotropic AF-like extracellular matrix, resulting in improved mechanical properties. A hyperelastic fiber-reinforced constitutive model characterized the functional evolution of engineered AF constructs, and was used to simulate physiologically relevant loading configurations.
Model predictions demonstrated that fibers resist shear even when the shearing direction does not coincide with the fiber direction. Further, the model suggested that the native AF fiber architecture is uniquely designed to support shear stresses encountered under multiple loading configurations.
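A hyperelastic fiber-reinforced strain energy of the general kind referred to above can be written, as a sketch, in a Holzapfel-type form; this is a common choice for fiber-reinforced soft tissues and nanofibrous constructs, not necessarily the exact function used by the authors:

```latex
W = \frac{\mu}{2}\left(I_1 - 3\right)
  + \frac{k_1}{2 k_2}\left[e^{\,k_2\left(I_4 - 1\right)^2} - 1\right],
\qquad
I_4 = \mathbf{a}_0 \cdot \mathbf{C}\,\mathbf{a}_0 ,
```

where the first term is the isotropic matrix contribution, the second is the fiber reinforcement, C is the right Cauchy-Green deformation tensor, a0 is the fiber direction (set by the scaffold angle phi), and mu, k1, k2 are the scalar matrix and fiber parameters of the sort tracked over culture time.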
Test system stability and natural variability of a Lemna gibba L. bioassay.
Scherr, Claudia; Simon, Meinhard; Spranger, Jörg; Baumgartner, Stephan
2008-09-04
In ecotoxicological and environmental studies Lemna spp. are used as test organisms due to their small size, rapid predominantly vegetative reproduction, easy handling and high sensitivity to various chemicals. However, there is not much information available concerning spatial and temporal stability of experimental set-ups used for Lemna bioassays, though this is essential for interpretation and reliability of results. We therefore investigated stability and natural variability of a Lemna gibba bioassay assessing area-related and frond number-related growth rates under controlled laboratory conditions over about one year. Lemna gibba L. was grown in beakers with Steinberg medium for one week. Area-related and frond number-related growth rates (r(area) and r(num)) were determined with a non-destructive image processing system. To assess inter-experimental stability, 35 independent experiments were performed with 10 beakers each in the course of one year. We observed changes in growth rates by a factor of two over time. These did not correlate well with temperature or relative humidity in the growth chamber. In order to assess intra-experimental stability, we analysed six systematic negative control experiments (nontoxicant tests) with 96 replicate beakers each. Evaluation showed that the chosen experimental set-up was stable and did not produce false positive results. The coefficient of variation was lower for r(area) (2.99%) than for r(num) (4.27%). It is hypothesised that the variations in growth rates over time under controlled conditions are partly due to endogenic periodicities in Lemna gibba. The relevance of these variations for toxicity investigations should be investigated more closely. Area-related growth rate seems to be a more precise non-destructive calculation parameter than number-related growth rate.
Furthermore, we propose two new validity criteria for Lemna gibba bioassays: variability of average specific and section-by-section segmented growth rate, complementary to average specific growth rate as the only validity criterion existing in guidelines for duckweed bioassays.
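The two summary quantities the study turns on, the specific growth rate and its coefficient of variation across replicate beakers, reduce to textbook formulas; the sketch below uses invented function names and the exponential-growth assumption A(t) = A0 * exp(r t) that underlies specific growth rates in duckweed assays.

```python
import math

def specific_growth_rate(a0, at, days):
    """Specific growth rate r such that A(t) = A0 * exp(r * t); works for
    either frond area (r_area) or frond number (r_num)."""
    return math.log(at / a0) / days

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation over the mean, the measure
    used to compare the precision of r_area and r_num across replicates."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return 100.0 * sd / mean
```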
Test System Stability and Natural Variability of a Lemna Gibba L. Bioassay
Scherr, Claudia; Simon, Meinhard; Spranger, Jörg; Baumgartner, Stephan
2008-01-01
Background In ecotoxicological and environmental studies Lemna spp. are used as test organisms due to their small size, rapid predominantly vegetative reproduction, easy handling and high sensitivity to various chemicals. However, there is not much information available concerning spatial and temporal stability of experimental set-ups used for Lemna bioassays, though this is essential for interpretation and reliability of results. We therefore investigated stability and natural variability of a Lemna gibba bioassay assessing area-related and frond number-related growth rates under controlled laboratory conditions over about one year. Methology/Principal Findings Lemna gibba L. was grown in beakers with Steinberg medium for one week. Area-related and frond number-related growth rates (r(area) and r(num)) were determined with a non-destructive image processing system. To assess inter-experimental stability, 35 independent experiments were performed with 10 beakers each in the course of one year. We observed changes in growth rates by a factor of two over time. These did not correlate well with temperature or relative humidity in the growth chamber. In order to assess intra-experimental stability, we analysed six systematic negative control experiments (nontoxicant tests) with 96 replicate beakers each. Evaluation showed that the chosen experimental set-up was stable and did not produce false positive results. The coefficient of variation was lower for r(area) (2.99%) than for r(num) (4.27%). Conclusions/Significance It is hypothesised that the variations in growth rates over time under controlled conditions are partly due to endogenic periodicities in Lemna gibba. The relevance of these variations for toxicity investigations should be investigated more closely. Area-related growth rate seems to be more precise as non-destructive calculation parameter than number-related growth rate. 
Furthermore, we propose two new validity criteria for Lemna gibba bioassays, the variability of the average specific growth rate and the variability of the section-by-section segmented growth rate, complementary to the average specific growth rate itself, which is the only validity criterion in existing guidelines for duckweed bioassays. PMID:18769541
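The endpoints in this abstract follow directly from standard duckweed bioassay arithmetic. Below is a minimal sketch, not the authors' code, showing how an average specific growth rate (for frond area or frond number) and a coefficient of variation across replicate beakers might be computed; the replicate values are hypothetical.

```python
import math

def specific_growth_rate(x0, xt, days):
    """Average specific growth rate r = (ln(xt) - ln(x0)) / t, the
    standard duckweed bioassay endpoint; x may be frond area (r_area)
    or frond number (r_num)."""
    return (math.log(xt) - math.log(x0)) / days

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation over the mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return 100.0 * math.sqrt(var) / mean

# Hypothetical 7-day replicates: (initial, final) frond area in mm^2
areas = [(120.0, 510.0), (118.0, 495.0), (125.0, 540.0)]
rates = [specific_growth_rate(a0, a7, 7.0) for a0, a7 in areas]
cv = coefficient_of_variation(rates)
```

With this formulation, a CV of ~3% for r(area) versus ~4% for r(num), as reported, quantifies why the area-based endpoint is considered the more precise one.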
Studies on the Parametric Effects of Plasma Arc Welding of 2205 Duplex Stainless Steel
NASA Astrophysics Data System (ADS)
Selva Bharathi, R.; Siva Shanmugam, N.; Murali Kannan, R.; Arungalai Vendan, S.
2018-03-01
This research study attempts to create an optimized parametric window by employing the Taguchi algorithm for Plasma Arc Welding (PAW) of 2 mm thick 2205 duplex stainless steel. The parameters considered for experimentation and optimization are the welding current, welding speed and pilot arc length. The experiments involve varying these parameters and recording the resulting depth of penetration and bead width. The parameters are varied over a welding current of 60-70 A, a welding speed of 250-300 mm/min and a pilot arc length of 1-2 mm. Design of experiments is used for the experimental trials. Back-propagation neural network, genetic algorithm and Taguchi techniques are used to predict the bead width and depth of penetration, and the predictions are in good agreement with the experimentally obtained results. Additionally, micro-structural characterizations are carried out to examine the weld quality. Extrapolation of the optimized parametric values yields enhanced weld strength with reduced cost and time.
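The Taguchi step in studies like this ranks parameter combinations by a signal-to-noise (S/N) ratio. A minimal sketch of the two standard S/N forms is given below; the weld-response values are hypothetical, and the abstract does not state which form the authors used for each response.

```python
import math

def sn_larger_the_better(ys):
    """Taguchi S/N ratio, larger-the-better form, used when a response
    such as depth of penetration should be maximised:
    S/N = -10 * log10( mean(1/y^2) )."""
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

def sn_smaller_the_better(ys):
    """Smaller-the-better form, e.g. when minimising bead width:
    S/N = -10 * log10( mean(y^2) )."""
    return -10.0 * math.log10(sum(y**2 for y in ys) / len(ys))

# Hypothetical repeat measurements for one parameter combination
sn_dop = sn_larger_the_better([1.9, 2.0, 2.1])   # penetration depth, mm
sn_bw = sn_smaller_the_better([6.1, 6.3, 6.0])   # bead width, mm
```

In a Taguchi analysis, the level of each factor (current, speed, pilot arc length) that maximises the mean S/N ratio defines the optimized parametric window.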
An acoustic experimental and theoretical investigation of single disc propellers
NASA Technical Reports Server (NTRS)
Bumann, Elizabeth A.; Korkan, Kenneth D.
1989-01-01
An experimental study of the acoustic field associated with two, three, and four blade propeller configurations with a blade root angle of 50 deg was performed in the Texas A&M University 5 ft. x 6 ft. acoustically-insulated subsonic wind tunnel. A waveform analysis package was utilized to obtain experimental acoustic time histories, frequency spectra, and overall sound pressure level (OASPL) and served as a basis for comparison to the theoretical acoustic compact source theory of Succi (1979). Valid for subsonic tip speeds, the acoustic analysis replaced each blade by an array of spiraling point sources which exhibited a unique force vector and volume. The computer analysis of Succi was modified to include a propeller performance strip analysis which used a NACA 4-digit series airfoil data bank to calculate lift and drag for each blade segment given the geometry and motion of the propeller. Theoretical OASPL predictions were found to moderately overpredict experimental values for all operating conditions and propeller configurations studied.
Ahmad, Ajaz; Alkharfy, Khalid M; Wani, Tanveer A; Raish, Mohammad
2015-01-01
The objective of the present work was to study the ultrasonic-assisted extraction and optimization of polysaccharides from Paeonia emodi and to evaluate their anti-inflammatory response. Specifically, the optimization of polysaccharides was carried out using a Box-Behnken statistical experimental design. Response surface methodology (RSM) with three factors (extraction temperature, extraction time and liquid-solid ratio) was employed to optimize the percentage yield of the polysaccharides. The experimental data were fitted to quadratic response surface models using multiple regression analysis, with a high coefficient of determination (R²) of 0.9906. The highest polysaccharide yield (8.69%) as per Derringer's desirability prediction tool was obtained under the optimal extraction conditions (extraction temperature 47.03 °C, extraction time 15.68 min, and liquid-solid ratio 1.29 ml/g) with a desirability value of 0.98. These optimized values of the tested parameters were validated under similar conditions (n = 6); an average polysaccharide yield of 8.13 ± 2.08% was obtained under the optimized extraction conditions, corresponding to 93.55% validity. The anti-inflammatory effect of the polysaccharides of P. emodi was studied on carrageenan-induced paw edema. In vivo results showed that 200 mg/kg of P. emodi polysaccharide extract exhibited strong potential against the inflammatory response induced by a 1% suspension of carrageenan in normal saline. Copyright © 2014 Elsevier B.V. All rights reserved.
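The core of the RSM step is fitting a quadratic model to the design points and locating its stationary point. As a one-factor illustration only (the study fits a full three-factor quadratic surface), the sketch below fits an exact quadratic through three hypothetical yield-versus-temperature points and finds the temperature at which the fitted curve peaks.

```python
def fit_quadratic(p1, p2, p3):
    """Fit y = a*x^2 + b*x + c exactly through three points (Lagrange
    form), the one-factor analogue of the quadratic response surface
    used in Box-Behnken designs."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / denom
    return a, b, c

# Hypothetical yield (%) versus extraction temperature (deg C)
a, b, c = fit_quadratic((30.0, 6.1), (45.0, 8.6), (60.0, 7.2))
t_opt = -b / (2.0 * a)   # stationary point of the fitted curve
```

The full three-factor model adds linear, interaction and squared terms for all factors, and desirability functions (as in Derringer's approach) then combine several responses into one optimum.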
Baigent, Susan J.; Nair, Venugopal K.; Le Galludec, Hervé
2016-01-01
CVI988/Rispens vaccine, the ‘gold standard’ vaccine against Marek’s disease in poultry, is not easily distinguishable from virulent strains of Marek’s disease herpesvirus (MDV). Accurate differential measurement of CVI988 and virulent MDV is commercially important to confirm successful vaccination, to diagnose Marek’s disease, and to investigate causes of vaccine failure. A real-time quantitative PCR assay to distinguish CVI988 and virulent MDV, based on a consistent single nucleotide polymorphism in the pp38 gene, was developed, optimised and validated, using common primers to amplify both viruses but differential detection of PCR products with two short probes specific for either CVI988 or virulent MDV. Both probes showed perfect specificity for three commercial preparations of CVI988 and 12 virulent MDV strains. Validation against BAC-sequence-specific and US2-sequence-specific q-PCR, on spleen samples from experimental chickens co-infected with BAC-cloned pCVI988 and wild-type virulent MDV, demonstrated that CVI988 and virulent MDV could be quantified very accurately. The assay was then used to follow the kinetics of replication of commercial CVI988 and virulent MDV in feather tips and blood of vaccinated and challenged experimental chickens. The assay is a great improvement in enabling accurate differential quantification of CVI988 and virulent MDV over a biologically relevant range of virus levels. PMID:26973285
Austin, Caitlin M.; Stoy, William; Su, Peter; Harber, Marie C.; Bardill, J. Patrick; Hammer, Brian K.; Forest, Craig R.
2014-01-01
Biosensors exploiting communication within genetically engineered bacteria are becoming increasingly important for monitoring environmental changes. Currently, there are a variety of mathematical models for understanding and predicting how genetically engineered bacteria respond to molecular stimuli in these environments, but as sensors have miniaturized towards microfluidics and are subjected to complex time-varying inputs, the shortcomings of these models have become apparent. The effects of microfluidic environments such as low oxygen concentration, increased biofilm encapsulation, diffusion-limited molecular distribution, and higher population densities strongly affect rate constants for gene expression not accounted for in previous models. We report a mathematical model that accurately predicts the biological response of autoinducer N-acyl homoserine lactone-mediated green fluorescent protein expression in reporter bacteria in microfluidic environments by accommodating these rate constants. This generalized mass action model considers a chain of biomolecular events from input autoinducer chemical to fluorescent protein expression through a series of six chemical species. We have validated this model against experimental data from our own apparatus as well as prior published experimental results. Results indicate accurate prediction of dynamics (e.g., 14% peak time error from a pulse input) and reduced mean-squared error with pulse or step inputs for a range of concentrations (10 μM-30 μM). This model can help advance the design of genetically engineered bacteria sensors and molecular communication devices. PMID:25379076
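The abstract's "chain of biomolecular events ... through a series of six chemical species" can be caricatured as a linear first-order cascade. The sketch below is a toy stand-in, not the authors' model: all species, rate constants and the step input are hypothetical, and the real generalized mass action model will have nonlinear terms.

```python
def simulate_chain(u, ks, t_end, dt=0.001):
    """Forward-Euler integration of a linear first-order chain
    x1' = k0*u - k1*x1,  xi' = k_{i-1}*x_{i-1} - k_i*xi,
    a toy stand-in for a six-species cascade from autoinducer input u
    to fluorescent-protein output x[-1].  ks = [k0, k1, ..., kn]."""
    n = len(ks) - 1
    x = [0.0] * n
    for _ in range(int(t_end / dt)):
        dx = [0.0] * n
        dx[0] = ks[0] * u - ks[1] * x[0]
        for i in range(1, n):
            dx[i] = ks[i] * x[i - 1] - ks[i + 1] * x[i]
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return x

# Hypothetical rate constants for a 6-species chain, step input u = 10 uM:
# each species settles towards its steady state k0*u/k = 5.0
state = simulate_chain(10.0, [1.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0], 20.0)
```

Fitting such a cascade's rate constants to measured fluorescence traces is what lets the model absorb microfluidic effects (oxygen, biofilm, diffusion limits) into the ks.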
The role of numerical simulation for the development of an advanced HIFU system
NASA Astrophysics Data System (ADS)
Okita, Kohei; Narumi, Ryuta; Azuma, Takashi; Takagi, Shu; Matumoto, Yoichiro
2014-10-01
High-intensity focused ultrasound (HIFU) has been used clinically and is under clinical trials to treat various diseases. An advanced HIFU system employs ultrasound techniques for guidance during HIFU treatment instead of the magnetic resonance imaging used in current HIFU systems. HIFU beam imaging for monitoring the HIFU beam and localized motion imaging for validating the treatment of tissue are introduced briefly as real-time ultrasound monitoring techniques. Numerical simulations have a great impact on the development of real-time ultrasound monitoring, as well as on the improvement of the safety and efficacy of treatment, in advanced HIFU systems. A HIFU simulator was developed to reproduce ultrasound propagation through the body, taking the elasticity of tissue into account, and was validated by comparison with in vitro experiments in which the ultrasound emitted from a phased-array transducer propagates through an acrylic plate acting as a bone phantom. As a result, the defocus and distortion of the ultrasound propagating through the acrylic plate in the simulation quantitatively agree with the experimental results. The HIFU simulator therefore accurately reproduces ultrasound propagation through a medium whose shape and physical properties are well known. In addition, it is experimentally confirmed that simulation-assisted focus control of the phased-array transducer enables efficient assignment of the focus to the target. Simulation-assisted focus control can contribute to transducer design and treatment planning.
NASA Astrophysics Data System (ADS)
De Lorenzo, Danilo; De Momi, Elena; Beretta, Elisa; Cerveri, Pietro; Perona, Franco; Ferrigno, Giancarlo
2009-02-01
Computer Assisted Orthopaedic Surgery (CAOS) systems improve the results and the standardization of surgical interventions. Detection of anatomical landmarks and bone surfaces is essential both to register the surgical space with the pre-operative imaging space and to compute biomechanical parameters for prosthesis alignment. Surface point acquisition increases the invasiveness of the intervention and can be influenced by interposition of the soft-tissue layer (7-15 mm localization errors). This study is aimed at evaluating the accuracy of a custom-made A-mode ultrasound (US) system for non-invasive detection of anatomical landmarks and surfaces. A-mode solutions eliminate the need for US image segmentation, offer real-time signal processing and require less invasive equipment. The system consists of a single-transducer US probe that is optically tracked, a pulser/receiver, an FPGA-based board responsible for logic control command generation and real-time signal processing, and three custom-made boards (signal acquisition, blanking and synchronization). We propose a new calibration method for the US system. The experimental validation was then performed by measuring the length of known-shape polymethylmethacrylate boxes filled with pure water and by acquiring bone surface points on a bovine bone phantom covered with soft-tissue-mimicking materials. Measurement errors were computed through MR and CT image acquisitions of the phantom. Point acquisition on the bone surface with the US system demonstrated lower errors (1.2 mm) than standard pointer acquisition (4.2 mm).
Ràfols, Clara; Bosch, Elisabeth; Barbas, Rafael; Prohens, Rafel
2016-07-01
A study of the suitability of the chelation reaction of Ca(2+) with ethylenediaminetetraacetic acid (EDTA) as a validation standard for isothermal titration calorimeter measurements has been performed, exploring the common experimental variables (buffer, pH, ionic strength and temperature). Results obtained under a variety of experimental conditions have been corrected for the side reactions involved in the main process and for the experimental ionic strength and, finally, validated by comparison with the potentiometric reference values. It is demonstrated that the chelation reaction performed in 0.1 M acetate buffer at 25°C gives accurate and precise results and is robust enough to be adopted as a standard calibration process. Copyright © 2016 Elsevier B.V. All rights reserved.
Experimental validation of an ultrasonic flowmeter for unsteady flows
NASA Astrophysics Data System (ADS)
Leontidis, V.; Cuvier, C.; Caignaert, G.; Dupont, P.; Roussette, O.; Fammery, S.; Nivet, P.; Dazin, A.
2018-04-01
An ultrasonic flowmeter was developed for further applications in cryogenic conditions and for measuring flow rate fluctuations in the range of 0 to 70 Hz. The prototype was installed in a flow test rig, and was validated experimentally both in steady and unsteady water flow conditions. A Coriolis flowmeter was used for the calibration under steady state conditions, whereas in the unsteady case the validation was done simultaneously against two methods: particle image velocimetry (PIV), and with pressure transducers installed flush on the wall of the pipe. The results show that the developed flowmeter and the proposed methodology can accurately measure the frequency and amplitude of unsteady fluctuations in the experimental range of 0-9 l s-1 of the mean main flow rate and 0-70 Hz of the imposed disturbances.
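Validating "frequency and amplitude of unsteady fluctuations" reduces to recovering the imposed disturbance from a sampled flow-rate signal. A minimal sketch follows, not the authors' processing chain: it projects a synthetic signal (hypothetical 5 l/s mean, 0.4 l/s fluctuation at 30 Hz) onto a single Fourier bin to recover the fluctuation amplitude.

```python
import math

def fluctuation_amplitude(samples, fs, f_target):
    """Single-bin discrete Fourier projection: amplitude of a known
    disturbance frequency f_target (Hz) in a signal sampled at fs (Hz).
    Assumes f_target falls on an integer number of cycles per record."""
    n = len(samples)
    re = sum(s * math.cos(2.0 * math.pi * f_target * i / fs)
             for i, s in enumerate(samples))
    im = sum(s * math.sin(2.0 * math.pi * f_target * i / fs)
             for i, s in enumerate(samples))
    return 2.0 * math.sqrt(re * re + im * im) / n

# Hypothetical signal: 5 l/s mean with a 0.4 l/s fluctuation at 30 Hz,
# sampled at 1 kHz for 1 s
fs = 1000.0
sig = [5.0 + 0.4 * math.sin(2.0 * math.pi * 30.0 * i / fs)
       for i in range(1000)]
amp = fluctuation_amplitude(sig, fs, 30.0)
```

Comparing such recovered amplitudes and frequencies against PIV and wall-pressure references is the essence of the unsteady validation described above.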
Flight Research and Validation Formerly Experimental Capabilities Supersonic Project
NASA Technical Reports Server (NTRS)
Banks, Daniel
2009-01-01
This slide presentation reviews the work of the Experimental Capabilities Supersonic project, which is being reorganized into Flight Research and Validation. The work of the Experimental Capabilities Project in FY '09 is reviewed, and the specific centers assigned to the work are identified. The portfolio of the newly formed Flight Research and Validation (FRV) group is also reviewed. The various FY '10 projects for the FRV are detailed. These projects include: Eagle Probe, Channeled Centerbody Inlet Experiment (CCIE), Supersonic Boundary Layer Transition test (SBLT), Aero-elastic Test Wing-2 (ATW-2), G-V External Vision Systems (G5 XVS), Air-to-Air Schlieren (A2A), In-Flight Background Oriented Schlieren (BOS), Dynamic Inertia Measurement Technique (DIM), and Advanced In-Flight IR Thermography (AIR-T).
Aerodynamic Database Development for Mars Smart Lander Vehicle Configurations
NASA Technical Reports Server (NTRS)
Bobskill, Glenn J.; Parikh, Paresh C.; Prabhu, Ramadas K.; Tyler, Erik D.
2002-01-01
An aerodynamic database has been generated for the Mars Smart Lander Shelf-All configuration using computational fluid dynamics (CFD) simulations. Three different CFD codes were used: USM3D and FELISA, based on unstructured grid technology, and LAURA, an established and validated structured CFD code. As part of this database development, the results for the Mars continuum were validated with experimental data and comparisons made where applicable. The validation of USM3D and LAURA with the Unitary experimental data, the use of intermediate LAURA check analyses, and the validation of FELISA with the Mach 6 CF4 experimental data provided higher confidence in the ability of CFD to provide aerodynamic data for determining the static trim characteristics for longitudinal stability. The analyses of the noncontinuum regime showed the existence of multiple trim angles of attack that can be unstable or stable trim points. This information is needed to design the guidance controller throughout the trajectory.
Issues and approach to develop validated analysis tools for hypersonic flows: One perspective
NASA Technical Reports Server (NTRS)
Deiwert, George S.
1993-01-01
Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools, and the activity in the NASA Ames Research Center's Aerothermodynamics Branch is described. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flowfield simulation codes are discussed. These models were partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse, and reliance must be placed on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.
NASA Technical Reports Server (NTRS)
Baumeister, Joseph F.
1994-01-01
A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.
Issues and approach to develop validated analysis tools for hypersonic flows: One perspective
NASA Technical Reports Server (NTRS)
Deiwert, George S.
1992-01-01
Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools. A description of the activity in the Ames Research Center's Aerothermodynamics Branch is also given. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flow-field simulation codes are discussed. These models have been partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse; reliance must be placed on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.
Integration of system identification and finite element modelling of nonlinear vibrating structures
NASA Astrophysics Data System (ADS)
Cooper, Samson B.; DiMaio, Dario; Ewins, David J.
2018-03-01
The Finite Element Method (FEM), Experimental Modal Analysis (EMA) and other linear analysis techniques have been established as reliable tools for the dynamic analysis of engineering structures. They are often used to provide solutions for small and large structures and a variety of other cases in structural dynamics, even those exhibiting a certain degree of nonlinearity. Unfortunately, when the nonlinear effects are substantial or the accuracy of the predicted response is of vital importance, a linear finite element model will generally prove unsatisfactory. As a result, the validated linear FE model requires further enhancement so that it can represent and predict the nonlinear behaviour exhibited by the structure. In this paper, a pragmatic approach to integrating test-based system identification and FE modelling of a nonlinear structure is presented. This integration is based on three phases: the first phase involves the derivation of an Underlying Linear Model (ULM) of the structure, the second phase includes experiment-based nonlinear identification using measured time series and the third phase covers augmenting the linear FE model and experimental validation of the nonlinear FE model. The approach is demonstrated on a case study: a twin cantilever beam assembly coupled with a flexible arch-shaped beam. In this case, polynomial-type nonlinearities are identified and validated with force-controlled stepped-sine test data at several excitation levels.
NASA Astrophysics Data System (ADS)
Alpert, Peter A.; Knopf, Daniel A.
2016-02-01
Immersion freezing is an important ice nucleation pathway involved in the formation of cirrus and mixed-phase clouds. Laboratory immersion freezing experiments are necessary to determine the range in temperature, T, and relative humidity, RH, at which ice nucleation occurs and to quantify the associated nucleation kinetics. Typically, isothermal (applying a constant temperature) and cooling-rate-dependent immersion freezing experiments are conducted. In these experiments it is usually assumed that the droplets containing ice nucleating particles (INPs) all have the same INP surface area (ISA); however, the validity of this assumption or the impact it may have on analysis and interpretation of the experimental data is rarely questioned. Descriptions of ice active sites and variability of contact angles have been successfully formulated to describe ice nucleation experimental data in previous research; however, we consider the ability of a stochastic freezing model founded on classical nucleation theory to reproduce previous results and to explain experimental uncertainties and data scatter. A stochastic immersion freezing model based on first principles of statistics is presented, which accounts for variable ISA per droplet and uses parameters including the total number of droplets, Ntot, and the heterogeneous ice nucleation rate coefficient, Jhet(T). This model is applied to address whether (i) a time- and ISA-dependent stochastic immersion freezing process can explain laboratory immersion freezing data for different experimental methods and (ii) the assumption that all droplets contain identical ISA is a valid conjecture, with subsequent consequences for analysis and interpretation of immersion freezing.
The simple stochastic model can reproduce the observed time and surface area dependence in immersion freezing experiments for a variety of methods, such as droplets on a cold stage exposed to air or surrounded by an oil matrix, wind- and acoustically levitated droplets, droplets in a continuous-flow diffusion chamber (CFDC), the Leipzig aerosol cloud interaction simulator (LACIS), and the aerosol interaction and dynamics in the atmosphere (AIDA) cloud chamber. Observed time-dependent isothermal frozen fractions exhibiting non-exponential behavior can be readily explained by this model when varying ISA is considered. An apparent cooling-rate dependence of Jhet arises when identical ISA in each droplet is assumed. When accounting for ISA variability, the cooling-rate dependence of the ice nucleation kinetics vanishes, as expected from classical nucleation theory. The model simulations allow for a quantitative experimental uncertainty analysis for the parameters Ntot, T, RH, and the ISA variability. The implications of our results for experimental analysis and interpretation of the immersion freezing process are discussed.
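The stochastic picture described above has a compact mathematical core: each droplet freezes with probability 1 - exp(-Jhet * A * t), so ISA variability makes the ensemble-average frozen fraction non-exponential in time. A minimal sketch of that expectation follows; Jhet and the surface areas are hypothetical illustrative values, not the paper's data.

```python
import math

def frozen_fraction(jhet, areas, t):
    """Expected frozen fraction for a CNT-based stochastic immersion
    freezing model: average over droplets of 1 - exp(-Jhet * A * t),
    with Jhet in cm^-2 s^-1, areas in cm^2 and t in s."""
    return 1.0 - sum(math.exp(-jhet * a * t) for a in areas) / len(areas)

# Identical ISA in every droplet: purely exponential decay of the
# unfrozen fraction
f_uniform = frozen_fraction(1e4, [1e-5] * 4, 10.0)
# Variable ISA with the same mean area: slower apparent freezing,
# i.e. non-exponential ensemble behavior
f_variable = frozen_fraction(1e4, [5e-6, 5e-6, 1e-5, 2e-5], 10.0)
```

Because the average of exponentials exceeds the exponential of the average (Jensen's inequality), the variable-ISA ensemble always lags the uniform one, which is exactly the mechanism the paper invokes to explain non-exponential isothermal frozen fractions.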
NASA Astrophysics Data System (ADS)
Percoco, Gianluca; Sánchez Salmerón, Antonio J.
2015-09-01
The measurement of millimetre- and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro-parts of large objects. Unfortunately, the behaviour of photogrammetry when applied to micro-features is not well characterised. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) at the micro-scale, taking into account that there are research papers in the literature stating that an angle of view (AOV) of around 10° is the lower limit for the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. First, a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of approximately 2×, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation of the laser printing technology used to produce the two-dimensional pattern on common paper has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with those of existing and more expensive commercial techniques.
Rating of Perceived Exertion During Circuit Weight Training: A Concurrent Validation Study.
Aniceto, Rodrigo R; Ritti-Dias, Raphael M; Dos Prazeres, Thaliane M P; Farah, Breno Q; de Lima, Fábio F M; do Prado, Wagner L
2015-12-01
The aim of this study was to determine whether rating of perceived exertion (RPE) is a valid method to control effort during circuit weight training (CWT) in trained men. Ten men (21.3 ± 3.3 years) with previous experience in resistance training (13.1 ± 6.3 months) performed 3 sessions: 1 orientation session and 2 experimental sessions. The subjects were randomly counterbalanced across the 2 experimental sessions: CWT or multiple-set resistance training (control). In both sessions, 8 exercises (bench press, leg press 45°, seated row, leg curl, triceps pulley, leg extension, biceps curl, and adductor chair) were performed with the same work: 60% of 1 repetition maximum, 24 stations (3 circuits) or 24 sets (3 sets/exercise), 10 repetitions, 1 second in the concentric and eccentric phases, and rest intervals between sets and exercises of 60 seconds. Active muscle RPEs were measured after every 3 stations/sets using the OMNI-Resistance Exercise Scale (OMNI-RES). At the same time, blood lactate was collected. Compared with baseline, both blood lactate levels and RPE increased during the whole workout in both sessions; the RPE at the 3rd, 23rd, and 27th minutes and the blood lactate at the 3rd, 7th, 11th, 15th, 27th, and 31st minutes were higher in the multiple-set session than in CWT. A positive correlation between blood lactate and RPE was observed in both experimental sessions. The results indicate that RPE is a valid method to control effort during CWT in trained men and can be used to manipulate intensity without the need for invasive assessments.
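The concurrent-validity claim rests on correlating RPE scores with blood lactate across time points. A minimal sketch of that computation is shown below with entirely hypothetical paired readings (the study's actual data are not reproduced here).

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient, the usual
    statistic for relating RPE scores to blood lactate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired readings taken every 3 stations during a session
rpe = [3.0, 4.0, 5.0, 6.0, 6.0, 7.0, 8.0, 8.0]          # OMNI-RES, 0-10
lactate = [2.1, 3.0, 3.8, 4.5, 5.0, 5.6, 6.4, 6.9]      # mmol/L
r = pearson_r(rpe, lactate)
```

A strong positive r between the perceptual and the physiological marker is what supports using RPE in place of invasive lactate sampling.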
NASA Astrophysics Data System (ADS)
Pompili, R.; Anania, M. P.; Bellaveglia, M.; Biagioni, A.; Castorina, G.; Chiadroni, E.; Cianchi, A.; Croia, M.; Di Giovenale, D.; Ferrario, M.; Filippi, F.; Gallo, A.; Gatti, G.; Giorgianni, F.; Giribono, A.; Li, W.; Lupi, S.; Mostacci, A.; Petrarca, M.; Piersanti, L.; Di Pirro, G.; Romeo, S.; Scifo, J.; Shpakov, V.; Vaccarezza, C.; Villa, F.
2016-08-01
The generation of ultra-short electron bunches with ultra-low timing-jitter relative to the photo-cathode (PC) laser has been experimentally proved for the first time at the SPARC_LAB test-facility (INFN-LNF, Frascati) exploiting a two-stage hybrid compression scheme. The first stage employs RF-based compression (velocity-bunching), which shortens the bunch and imprints an energy chirp on it. The second stage is performed in a non-isochronous dogleg line, where the compression is completed resulting in a final bunch duration below 90 fs (rms). At the same time, the beam arrival timing-jitter with respect to the PC laser has been measured to be lower than 20 fs (rms). The reported results have been validated with numerical simulations.
Parametric, nonparametric and parametric modelling of a chaotic circuit time series
NASA Astrophysics Data System (ADS)
Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.
2000-09-01
The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model, by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and, if they are rejected, to suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit, where we obtain an extremely accurate reconstruction of the observed attractor.
Zubair, Abdulrazaq; Pappoe, Michael; James, Lesley A; Hawboldt, Kelly
2015-12-18
This paper presents an important new approach to improving the timeliness of Total Petroleum Hydrocarbon (TPH) analysis in soil by Gas Chromatography - Flame Ionization Detector (GC-FID) using the CCME Canada-Wide Standard reference method. The Canada-Wide Standard (CWS) method is used for the analysis of petroleum hydrocarbon compounds across Canada. However, inter-laboratory application of this method for the analysis of TPH in soil has often shown considerable variability in the results. This could be due, in part, to different gas chromatography (GC) conditions, other steps involved in the method, and the soil properties. In addition, there are differences in the interpretation of the GC results, which affects the determination of the effectiveness of remediation at hydrocarbon-contaminated sites. In this work, a multivariate experimental design approach was used to develop and validate the analytical method for a faster quantitative analysis of TPH in (contaminated) soil. A fractional factorial design (fFD) was used to screen six factors and identify those with the most significant impact on the analysis. These factors included: injection volume (μL), injection temperature (°C), oven program (°C/min), detector temperature (°C), carrier gas flow rate (mL/min) and solvent ratio (v/v hexane/dichloromethane). The most important factors (carrier gas flow rate and oven program) were then optimized using a central composite response surface design. Robustness testing and validation of the model compared favourably with the experimental results, with a percentage difference of 2.78% for the analysis time. This research successfully reduced the method's standard analytical time from 20 to 8 min with all the carbon fractions eluting. The method was successfully applied for fast TPH analysis of Bunker C oil contaminated soil.
A reduced analytical time offers many benefits, including improved laboratory reporting times and overall improved clean-up efficiency. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
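The screening step described above, a fractional factorial design over six factors, can be generated mechanically. The sketch below builds one common resolution-IV 2^(6-2) layout in coded (-1/+1) units; the choice of generators E = ABC and F = BCD is a standard textbook option, not necessarily the one the authors used.

```python
from itertools import product

def fractional_factorial_2_6_2():
    """16-run 2^(6-2) screening design: four base factors A-D take all
    +/-1 combinations; the generators E = ABC and F = BCD define the
    aliased fifth and sixth factors (a common resolution-IV choice)."""
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        runs.append((a, b, c, d, a * b * c, b * c * d))
    return runs

# Columns map to the six screened factors, e.g. injection volume,
# injection temperature, oven program, detector temperature,
# carrier gas flow rate, solvent ratio (assignment is illustrative)
design = fractional_factorial_2_6_2()
```

Such a design estimates all six main effects in 16 runs instead of the 64 a full factorial would need, which is why it suits a first screening pass before the central composite optimization.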
Validation Results for LEWICE 2.0
NASA Technical Reports Server (NTRS)
Wright, William B.; Rutkowski, Adam
1999-01-01
A research project is underway at NASA Lewis to produce a computer code which can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report presents results from version 2.0 of this code, which is called LEWICE. This version differs from previous releases in its robustness and its ability to reproduce results accurately for different spacing and time step criteria across computing platforms. It also differs in the extensive effort undertaken to compare the results in a quantified manner against the database of ice shapes generated in the NASA Lewis Icing Research Tunnel (IRT). The results of the shape comparisons are analyzed to determine the range of meteorological conditions under which LEWICE 2.0 is within the experimental repeatability. This comparison shows that the average variation of LEWICE 2.0 from the experimental data is 7.2%, while the overall variability of the experimental data is 2.5%.
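Comparing predicted and measured ice shapes "in a quantified manner" implies collapsing each comparison to a numeric deviation. The sketch below shows one simple such score, mean absolute percent deviation over shape parameters; this is an illustration only, as the LEWICE 2.0 report defines its own, more detailed shape-comparison measures, and the parameter values here are hypothetical.

```python
def mean_percent_variation(predicted, measured):
    """Average absolute percent deviation of predicted shape
    parameters (e.g. horn angle, ice thickness, impingement limit)
    from their measured counterparts."""
    return 100.0 * sum(abs(p - m) / abs(m)
                       for p, m in zip(predicted, measured)) / len(predicted)

# Hypothetical predicted vs. measured shape parameters for one run
score = mean_percent_variation([1.05, 2.2, 0.95], [1.0, 2.0, 1.0])
```

Averaging such scores over the whole IRT database is what yields single summary numbers like the 7.2% code-to-experiment variation quoted above.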
Experimental and computational fluid dynamics studies of mixing of complex oral health products
NASA Astrophysics Data System (ADS)
Cortada-Garcia, Marti; Migliozzi, Simona; Weheliye, Weheliye Hashi; Dore, Valentina; Mazzei, Luca; Angeli, Panagiota; ThAMes Multiphase Team
2017-11-01
Highly viscous non-Newtonian fluids are widely used in the manufacturing of specialized oral care products. Mixing often takes place in mechanically stirred vessels, where the flow fields and mixing times depend on the geometric configuration and the fluid physical properties. In this research, we study the mixing performance of complex non-Newtonian fluids using Computational Fluid Dynamics models and validate them against measurements from laser-based optical techniques. To this end, we developed a scaled-down version of an industrial mixer. As test fluids, we used mixtures of glycerol and a Carbomer gel. The viscosities of the mixtures against shear rate at different temperatures and phase ratios were measured and found to be well described by the Carreau model. The numerical results were compared against experimental measurements of velocity fields from Particle Image Velocimetry (PIV) and concentration profiles from Planar Laser Induced Fluorescence (PLIF).
An integrated approach to model strain localization bands in magnesium alloys
NASA Astrophysics Data System (ADS)
Baxevanakis, K. P.; Mo, C.; Cabal, M.; Kontsos, A.
2018-02-01
Strain localization bands (SLBs) that appear at early stages of deformation of magnesium alloys have been recently associated with heterogeneous activation of deformation twinning. Experimental evidence has demonstrated that such "Lüders-type" band formations dominate the overall mechanical behavior of these alloys resulting in sigmoidal type stress-strain curves with a distinct plateau followed by pronounced anisotropic hardening. To evaluate the role of SLB formation on the local and global mechanical behavior of magnesium alloys, an integrated experimental/computational approach is presented. The computational part is developed based on custom subroutines implemented in a finite element method that combine a plasticity model with a stiffness degradation approach. Specific inputs from the characterization and testing measurements to the computational approach are discussed while the numerical results are validated against such available experimental information, confirming the existence of load drops and the intensification of strain accumulation at the time of SLB initiation.
Experimental and modelling of Arthrospira platensis cultivation in open raceway ponds.
Ranganathan, Panneerselvam; Amal, J C; Savithri, S; Haridas, Ajith
2017-10-01
In this study, the growth of Arthrospira platensis was studied in an open raceway pond. Furthermore, a dynamic model for algal growth and a CFD model of the hydrodynamics of the open raceway pond were developed. The dynamic behaviour of the algal system was modelled by solving mass balance equations for the various components, considering light intensity and gas-liquid mass transfer. The hydrodynamics of the open raceway pond were modelled by solving mass and momentum balance equations for the liquid medium. The prediction of algae concentration from the dynamic model was compared with the experimental data, and the hydrodynamic behaviour of the open raceway pond was compared with literature data for model validation. The model predictions match the experimental findings. Furthermore, the hydrodynamic behaviour and residence time distribution in our small raceway pond were predicted. These models can serve as a tool to assess pond performance criteria. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kerry, Matthew J.; Embretson, Susan E.
2018-01-01
Future time perspective (FTP) is defined as “perceptions of the future as being limited or open-ended” (Lang and Carstensen, 2002; p. 125). The construct figures prominently in both workplace and retirement domains, but the age-predictions are competing: workplace research predicts decreasing FTP age-change; in contrast, retirement scholars predict increasing FTP age-change. For the first time, these competing predictions are pitted against each other in an experimental manipulation of subjective life expectancy (SLE). A sample of N = 207 older adults (age 45–60) working full-time (>30 h/week) were randomly assigned to SLE questions framed as either ‘Live-to’ or ‘Die-by’ to evaluate the competing predictions for FTP. Results indicate general support for decreasing age-change in FTP, with independent-sample t-tests showing lower FTP in the ‘Die-by’ framing condition. Further general-linear model analyses were conducted to test for interaction effects of retirement planning with the experimental framings on FTP and intended retirement. While retirement planning buffered FTP’s decrease, simple-effects analyses also revealed that retirement planning increased intentions for sooner retirement, whereas lack of planning increased intentions for later retirement. Discussion centers on practical implications of our findings and consequential validity evidence for future empirical research of FTP in both workplace and retirement domains. PMID:29375435
Extension of local front reconstruction method with controlled coalescence model
NASA Astrophysics Data System (ADS)
Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.
2018-02-01
The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulate such flows with standard fixed grid methods due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated using (near) head-on collisions of two equal tetradecane droplets with experimental film drainage times. When the experimental film drainage times are used, LFRM predicts the droplet collisions better, especially at high velocity, than other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method). When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that LFRM coupled with film drainage models is much better at predicting the collision dynamics than the traditional methods.
A new method for calculation of the chlorine demand of natural and treated waters.
Warton, Ben; Heitz, Anna; Joll, Cynthia; Kagi, Robert
2006-08-01
Conventional methods of calculating chlorine demand are dose dependent, making intercomparison of samples difficult, especially in cases where the samples contain substantially different concentrations of dissolved organic carbon (DOC), or other chlorine-consuming species. Using the method presented here, the values obtained for chlorine demand are normalised, allowing valid comparison of chlorine demand between samples, independent of the chlorine dose. Since the method is not dose dependent, samples with substantially differing water quality characteristics can be reliably compared. In our method, we dosed separate aliquots of a water sample with different chlorine concentrations, and periodically measured the residual chlorine concentrations in these subsamples. The chlorine decay data obtained in this way were then fitted to first-order exponential decay functions, corresponding to short-term demand (0-4 h) and long-term demand (4-168 h). From the derived decay functions, the residual concentrations at a given time within the experimental time window were calculated and plotted against the corresponding initial chlorine concentrations, giving a linear relationship. From this linear function, it was then possible to determine the residual chlorine concentration for any initial concentration (i.e. dose). Thus, using this method, the initial chlorine dose required to give any residual chlorine concentration can be calculated for any time within the experimental time window, from a single set of experimental data.
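The fit-then-invert procedure described in this abstract can be sketched numerically. The decay constant, doses, evaluation time and target residual below are synthetic values chosen only to illustrate the steps, assuming a single first-order decay regime.

```python
import numpy as np

# Sketch of the dose-independent chlorine-demand calculation:
# 1) for each initial dose, fit residuals to first-order decay
#    C(t) = C0 * exp(-k t);
# 2) evaluate each fitted curve at a chosen time;
# 3) regress those residuals linearly against the initial dose;
# 4) invert the line to find the dose giving any target residual.
times = np.array([0.5, 1.0, 2.0, 4.0])    # hours
doses = np.array([2.0, 4.0, 6.0, 8.0])    # mg/L initial chlorine
k_true = 0.15                             # per hour (synthetic)

def fit_decay(t, c):
    # log-linear least squares for C(t) = C0 * exp(-k t)
    slope, intercept = np.polyfit(t, np.log(c), 1)
    return np.exp(intercept), -slope

t_eval = 4.0
residuals = []
for d in doses:
    c = d * np.exp(-k_true * times)       # synthetic measurements
    c0, k = fit_decay(times, c)
    residuals.append(c0 * np.exp(-k * t_eval))

# Linear relation residual = m * dose + b, inverted for a target residual
m, b = np.polyfit(doses, residuals, 1)
target_residual = 1.0                     # mg/L
required_dose = (target_residual - b) / m
print(round(required_dose, 2))            # dose needed for 1 mg/L at 4 h
```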
NASA Astrophysics Data System (ADS)
Gimenez, Juan M.; González, Leo M.
2015-03-01
In this paper, a new generation of the particle method known as the Particle Finite Element Method (PFEM), which combines convective particle movement and a fixed mesh resolution, is applied to free surface flows. This interesting variant, previously described in the literature as PFEM-2, is able to use larger time steps than other similar numerical tools, which implies shorter computational times while maintaining the accuracy of the computation. PFEM-2 has already been extended to free surface problems; the main topic of this paper is a deeper validation of this methodology for a wider range of flows. To accomplish this task, different improved versions of discontinuous and continuous enriched basis functions for the pressure field have been developed to capture the free surface dynamics without artificial diffusion or undesired numerical effects when different density ratios are involved. A collection of problems has been carefully selected such that a wide variety of Froude numbers, density ratios and dominant dissipative cases are reported, with the intention of presenting a general methodology, not restricted to a particular range of parameters, and capable of using large time steps. The results of the different free-surface problems solved, which include the Rayleigh-Taylor instability, sloshing problems, viscous standing waves and the dam break problem, are compared to well-validated numerical alternatives or experimental measurements, obtaining accurate approximations for such complex flows.
Computer Simulations of Coronary Blood Flow Through a Constriction
2014-03-01
interventional procedures (e.g., stent deployment). Building off previous models that have been partially validated with experimental data, this thesis continues to develop the...the artery and increase blood flow. Generally a stent, or a mesh wire tube, is permanently inserted in order to scaffold open the artery wall
Perceptions vs Reality: A Longitudinal Experiment in Influenced Judgement Performance
2003-03-25
validity were manifested equally between treatment and control groups, thereby lending further validity to the experimental research design. External...Stanley (1975) identify this as a True Experimental Design: Pretest-Posttest Control Group Design. However, due to the longitudinal aspect required to...1975:43). Nonequivalence will be ruled out as pretest equivalence is shown between treatment and control groups (1975:47). For quasi
Analysis of Xrage and Flag High Explosive Burn Models with PBX 9404 Cylinder Tests
NASA Astrophysics Data System (ADS)
Harrier, Danielle; Fessenden, Julianna; Ramsey, Scott
2016-11-01
High explosives are energetic materials that release their chemical energy in a short interval of time. They are able to generate extreme heat and pressure by a shock driven chemical decomposition reaction, which makes them valuable tools that must be understood. This study investigated the accuracy and performance of two Los Alamos National Laboratory hydrodynamic codes, which are used to determine the behavior of explosives within a variety of systems: xRAGE, which utilizes an Eulerian mesh, and FLAG, which utilizes a Lagrangian mesh. Various programmed and reactive burn models within both codes were tested using a copper cylinder expansion test. The test was based on a recent experimental setup which contained the plastic bonded explosive PBX 9404. Detonation velocity versus time curves for this explosive were obtained from the experimental velocity data collected using Photon Doppler Velocimetry (PDV). The modeled results from each of the burn models tested were then compared to one another and to the experimental results using the Jones-Wilkins-Lee (JWL) equation of state parameters that were determined and adjusted from the experimental tests. This study is important to validate the accuracy of our high explosive burn models and the calibrated EOS parameters, which are important for many research topics in the physical sciences.
Thermal Management Using Pulsating Jet Cooling Technology
NASA Astrophysics Data System (ADS)
Alimohammadi, S.; Dinneen, P.; Persoons, T.; Murray, D. B.
2014-07-01
The existing methods of heat removal from compact electronic devices are known to be deficient as the evolving technology demands more power density and accordingly better cooling techniques. Impinging jets can be used as a satisfactory method for thermal management of electronic devices with limited space and volume. Pulsating flows can produce an additional enhancement in heat transfer rate compared to steady flows. This article is part of a comprehensive experimental and numerical study performed on pulsating jet cooling technology. The experimental approach explores the heat transfer performance of a pulsating air jet impinging onto a flat surface for nozzle-to-surface distances 1 <= H/D <= 6, Reynolds numbers 1,300 <= Re <= 2,800, pulsation frequencies 2 Hz <= f <= 65 Hz, and Strouhal numbers 0.0012 <= Sr = fD/Um <= 0.084. The time-resolved velocity at the nozzle exit is measured to quantify the turbulence intensity profile. The numerical methodology is first validated using the experimental local Nusselt number distribution for the steady jet with the same geometry and boundary conditions. For a time-averaged Reynolds number of 6,000, the heat transfer enhancement using the pulsating jet is calculated for 9 Hz <= f <= 55 Hz, 0.017 <= Sr <= 0.102 and 1 <= H/D <= 6. For the same range of Sr number, the numerical and experimental methods show consistent results.
Assessing the stability of human locomotion: a review of current measures
Bruijn, S. M.; Meijer, O. G.; Beek, P. J.; van Dieën, J. H.
2013-01-01
Falling poses a major threat to the steadily growing population of the elderly in modern-day society. A major challenge in the prevention of falls is the identification of individuals who are at risk of falling owing to an unstable gait. At present, several methods are available for estimating gait stability, each with its own advantages and disadvantages. In this paper, we review the currently available measures: the maximum Lyapunov exponent (λS and λL), the maximum Floquet multiplier, variability measures, long-range correlations, extrapolated centre of mass, stabilizing and destabilizing forces, foot placement estimator, gait sensitivity norm and maximum allowable perturbation. We explain what these measures represent and how they are calculated, and we assess their validity, divided up into construct validity, predictive validity in simple models, convergent validity in experimental studies, and predictive validity in observational studies. We conclude that (i) the validity of variability measures and λS is best supported across all levels, (ii) the maximum Floquet multiplier and λL have good construct validity, but negative predictive validity in models, negative convergent validity and (for λL) negative predictive validity in observational studies, (iii) long-range correlations lack construct validity and predictive validity in models and have negative convergent validity, and (iv) measures derived from perturbation experiments have good construct validity, but data are lacking on convergent validity in experimental studies and predictive validity in observational studies. In closing, directions for future research on dynamic gait stability are discussed. PMID:23516062
Gaytan, Francisco; Morales, Concepción; Leon, Silvia; Heras, Violeta; Barroso, Alexia; Avendaño, Maria S.; Vazquez, Maria J.; Castellano, Juan M.; Roa, Juan; Tena-Sempere, Manuel
2017-01-01
Puberty is a key developmental event whose primary regulatory mechanisms remain poorly understood. Precise dating of puberty is crucial for experimental (preclinical) studies on its complex neuroendocrine controlling networks. In female laboratory rodents, external signs of puberty, such as vaginal opening (VO) and epithelial cell cornification (i.e., first vaginal estrus, FE), are indirectly related to the maturational state of the ovary and first ovulation, which is the unequivocal marker of puberty. Whereas in rats, VO and FE are almost simultaneous with the first ovulation, these events are not so closely associated in mice. Moreover, external signs of puberty can be uncoupled from first ovulation in both species under certain experimental conditions. We propose herein the Pubertal Ovarian Maturation Score (Pub-score) as a novel, reliable method to assess peripubertal ovarian maturation in rats and mice. This method is founded on histological evaluation of pre-pubertal ovarian maturation, based on antral follicle development, and the precise timing of first ovulation, by retrospective dating of maturational and regressive changes in corpora lutea. This approach allows exact timing of puberty within a time-window of at least two weeks after VO in both species, thus facilitating the identification and precise dating of advanced or delayed puberty under various experimental conditions. PMID:28401948
Zarabadi, Atefeh S; Pawliszyn, Janusz
2015-02-17
Analysis in the frequency domain is considered a powerful tool to elicit precise information from spectroscopic signals. In this study, the Fourier transformation technique is employed to determine the diffusion coefficients (D) of a number of proteins in the frequency domain. Analytical approaches are investigated for the determination of D from both experimental and data-treatment viewpoints. The diffusion process is modeled to calculate diffusion coefficients based on the Fourier transformation solution to Fick's law, and its results are compared to time-domain results. The simulations characterize optimum spatial and temporal conditions and demonstrate the noise tolerance of the method. The proposed model is validated by its application to the electropherograms from the diffusion path of a set of proteins. Real-time dynamic scanning is conducted to monitor dispersion by employing whole-column imaging detection technology in combination with capillary isoelectric focusing (CIEF) and the imaging plug flow (iPF) experiment. These experimental techniques provide different peak shapes, which are used to demonstrate the ability of the Fourier transformation to extract diffusion coefficients from irregularly shaped signals. Experimental results confirmed that the Fourier transformation procedure substantially enhanced the accuracy of the determined values compared to those obtained in the time domain.
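The core frequency-domain idea, that each spatial Fourier mode of a diffusing profile decays as exp(-D k² t), can be shown in a minimal sketch; the Gaussian profiles and parameter values below are synthetic stand-ins for the experimental electropherograms.

```python
import numpy as np

# A free-diffusion Gaussian is sampled at two times; the amplitude
# ratio of a single spatial Fourier mode then yields D via
# D = ln(|A(k, t1)| / |A(k, t2)|) / (k^2 * (t2 - t1)).
L, N, D_true = 100.0, 1024, 2.0
x = np.linspace(0.0, L, N, endpoint=False)

def profile(t):
    # unit-area Gaussian centred in the domain, variance 2*D*t
    return np.exp(-(x - L / 2) ** 2 / (4 * D_true * t)) / np.sqrt(4 * np.pi * D_true * t)

t1, t2 = 1.0, 2.0
A1 = np.abs(np.fft.rfft(profile(t1)))
A2 = np.abs(np.fft.rfft(profile(t2)))

m = 3                          # a low, well-resolved mode
k = 2 * np.pi * m / L
D_est = np.log(A1[m] / A2[m]) / (k ** 2 * (t2 - t1))
print(round(D_est, 3))         # recovers D_true = 2.0
```

Because the ratio of mode amplitudes is used, any irregular initial peak shape cancels out, which is the property the abstract exploits for irregularly shaped signals.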
Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David
2012-10-01
The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopolar sources that are either monochromatic or have a narrow- or wide-band frequency content are considered first. The source position is well estimated, with an error smaller than the wavelength. An application to a dipolar sound source shows that this type of source is also very satisfactorily characterized.
NASA Astrophysics Data System (ADS)
Szymanski, Marek Z.; Kulszewicz-Bajer, Irena; Faure-Vincent, Jérôme; Djurado, David
2012-05-01
Space-charge-limited current transient measurement (also referred to as time-resolved dark injection) is an attractive technique for mobility measurements in low-mobility materials, particularly organic semiconductors. Transients are generally analyzed in terms of the Many-Rakavy theory, which is an approximate analytical solution of the time-dependent drift-diffusion problem after application of a voltage step. In this contribution, we perform a full time-dependent drift-diffusion simulation and compare simulated and experimental transients measured on a sample of a triaryl-amine based electroactive dendrimer (experimental conditions: μ ≈ 10^-5 cm^2/(V s), L = 300 nm, E < 10^5 V/cm). We have found that the Many-Rakavy theory is indeed valid for estimating the mobility value, but it fails to predict the time-dependent current response quantitatively. To obtain good agreement between simulation and experiment, trapping and quasi-ohmic contact models needed to be taken into account. In the case of the studied electroactive dendrimer, the experimental results were apparently consistent with the constant-mobility Many-Rakavy theory, but with this model a large uncertainty of 20% was found for the mobility value. We show that this uncertainty can be significantly reduced to 10% if a field-dependent mobility is taken into account in the framework of the extended Gaussian disorder model. Finally, we demonstrate that this fitting procedure between simulated and experimental transient responses also permits unambiguous determination of the contact barrier, the trap concentration and the trap depth, in addition to the mobility of the carriers.
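For readers unfamiliar with the Many-Rakavy analysis referred to above: the theory predicts that the dark-injection current peaks at t_peak ≈ 0.786 t_transit with t_transit = L²/(μV), so a mobility estimate follows directly from the measured peak time. In the sketch below the film thickness matches the abstract, while the voltage and peak time are illustrative assumptions.

```python
# Mobility estimate from the peak time of a dark-injection transient,
# using the Many-Rakavy result t_peak ~ 0.786 * L^2 / (mu * V).
T_PEAK_FACTOR = 0.786          # Many-Rakavy space-charge constant

def mobility_from_peak(t_peak_s, thickness_m, voltage_v):
    """mu = 0.786 * L^2 / (V * t_peak), in m^2/(V s)."""
    return T_PEAK_FACTOR * thickness_m ** 2 / (voltage_v * t_peak_s)

L = 300e-9                     # 300 nm film, as in the experiment
V = 3.0                        # applied voltage step (illustrative)
# synthetic peak time consistent with mu = 1e-9 m^2/(V s) = 1e-5 cm^2/(V s)
t_peak = T_PEAK_FACTOR * L ** 2 / (1e-9 * V)
mu = mobility_from_peak(t_peak, L, V)
print(f"{mu * 1e4:.1e} cm^2/(V s)")   # ~1e-5 cm^2/(V s), the regime quoted above
```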
Time constraints in the alcohol purchase task.
Kaplan, Brent A; Reed, Derek D; Murphy, James G; Henley, Amy J; Reed, Florence D DiGennaro; Roma, Peter G; Hursh, Steven R
2017-06-01
Hypothetical purchase tasks have advanced behavioral economic evaluations of demand by circumventing practical and ethical restrictions associated with delivering drug reinforcers to participants. Numerous studies examining the reliability and validity of purchase task methodology suggest that it is a valuable method for assessing demand that warrants continued use and evaluation. Within the literature examining purchase tasks, the alcohol purchase task (APT) has received the most investigation, and currently represents the most experimentally validated variant. However, inconsistencies in purchase task methodology between studies exist, even within APT studies, and, to date, none have assessed the influence of experimental economic constraints on responding. This study examined changes in Q0 (reported consumption when drinks are free), breakpoint (price that suppresses consumption), and α (rate of change in demand elasticity) in the presence of different hypothetical durations of access to alcohol in an APT. One hundred seventy-nine participants (94 males, 85 females) from Amazon Mechanical Turk completed 3 APTs that varied in the duration of time at a party (i.e., access to alcoholic beverages) as described in the APT instructions (i.e., vignette). The 3 durations included 5-hr (used by Murphy et al., 2013), 1-hr, and 9-hr time frames. We found that hypothetical duration of access was significantly related to Q0 and breakpoint at the individual level. Additionally, group-level mean α decreased significantly with increases in duration of access, thus indicating relatively higher demand for alcohol with longer durations of access. We discuss implications for conducting hypothetical purchase task research and alcohol misuse prevention efforts. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
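The α parameter discussed above is conventionally obtained by fitting the Hursh-Silberberg exponential demand equation, log10 Q = log10 Q0 + k(exp(-α·Q0·C) - 1); a minimal sketch of that model, with illustrative parameter values not taken from the study:

```python
import math

# Hursh-Silberberg exponential demand: consumption Q as a function of
# price C, free-price consumption Q0, elasticity-rate alpha, and a
# range constant k. All numbers below are illustrative assumptions.
def demand(price, q0, alpha, k=2.0):
    return q0 * 10 ** (k * (math.exp(-alpha * q0 * price) - 1))

q0, alpha = 8.0, 0.005          # e.g. 8 drinks when free
prices = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]
curve = [demand(p, q0, alpha) for p in prices]

# A larger alpha means demand is more elastic: consumption falls
# faster with price, which is why lower alpha with longer access
# durations indicates relatively higher demand.
assert demand(4.0, q0, alpha=0.02) < demand(4.0, q0, alpha=0.005)
print([round(q, 2) for q in curve])
```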
Assessment of vertical excursions and open-sea psychological performance at depths to 250 fsw.
Miller, J W; Bachrach, A J; Walsh, J M
1976-12-01
A series of 10 two-man descending vertical excursion dives was carried out in the open sea from an ocean-floor habitat off the coast of Puerto Rico by four aquanauts saturated on a normoxic-nitrogen breathing mixture at a depth of 106 fsw. The purpose of these dives was two-fold: to validate laboratory findings with respect to decompression schedules and to determine whether such excursions would produce evidence of adaptation to nitrogen narcosis. For the latter, tests designed to measure time estimation, short-term memory, and auditory vigilance were used. The validation of experimental excursion tables was carried out without incidence of decompression sickness. Although no signs of nitrogen narcosis were noted during testing, all subjects made significantly longer time estimates in the habitat and during the excursions than on the surface. Variability and incomplete data prevented a statistical analysis of the short-term memory results, and the auditory vigilance proved unusable in the water.
NASA Technical Reports Server (NTRS)
Mark, W. D.
1977-01-01
Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationary assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationary assumption were developed.