Sample records for extensive experimental validation

  1. Experimental measurement of flexion-extension movement in normal and corpse prosthetic elbow joint.

    PubMed

Tarniță, Daniela; Tarniță, Dănuț Nicolae

    2016-01-01

This paper presents a comparative experimental study of flexion-extension movement in the healthy elbow and in a prosthetic elbow joint fixed on an original experimental bench. Measurements were carried out in order to validate the functional morphology and a new ball-head type elbow prosthesis. The three-dimensional (3D) model and the physical prototype of our experimental bench, used to test elbow endoprostheses in flexion-extension and pronation-supination movements, are presented. The measurements were carried out on a group of nine healthy subjects and on the prosthetic corpse elbow, with experimental data obtained for flexion-extension movement cycles. Experimental data for the two different flexion-extension tests, for the nine subjects and for the corpse prosthetic elbow, were acquired using the SimiMotion video system. The experimental data were processed statistically, and the corresponding graphs were obtained for all subjects in the experimental group and for the corpse prosthetic elbow for both flexion-extension tests. The statistical analysis showed that the flexion angles of the healthy elbows were close to the values measured on the prosthetic elbow fixed on the experimental bench. The studied elbow prosthesis thus re-establishes elbow joint mobility close to that of the normal joint.

  2. Methodological convergence of program evaluation designs.

    PubMed

    Chacón-Moscoso, Salvador; Anguera, M Teresa; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa

    2014-01-01

Nowadays, the confrontational dichotomy between experimental/quasi-experimental and non-experimental/ethnographic studies still exists and, despite the extensive use of non-experimental/ethnographic studies, the most systematic work on methodological quality has been developed based on experimental and quasi-experimental studies. This hinders evaluators' and planners' practice of empirical program evaluation, a sphere in which the distinction between types of study is changing continually and is less clear. Based on the classical validity framework of experimental/quasi-experimental studies, we carry out a review of the literature in order to analyze the convergence of design elements of methodological quality in primary studies in systematic reviews and in ethnographic research. We specify the relevant design elements that should be taken into account to improve validity and generalization in program evaluation practice across different methodologies, from a practical, methodological, and complementary view. We recommend ways to improve design elements so as to enhance validity and generalization in program evaluation practice.

  3. A validation of LTRAN2 with high frequency extensions by comparisons with experimental measurements of unsteady transonic flows

    NASA Technical Reports Server (NTRS)

    Hessenius, K. A.; Goorjian, P. M.

    1981-01-01

A high frequency extension of the unsteady, transonic code LTRAN2 was created and is evaluated by comparisons with experimental results. The experimental test case is a NACA 64A010 airfoil in pitching motion at a Mach number of 0.8 over a range of reduced frequencies. Comparisons indicate that the modified code is an improvement over the original LTRAN2 and provides closer agreement with experimental lift and moment coefficients. A discussion of the code modifications, which involve the addition of high frequency terms to the boundary conditions of the numerical algorithm, is included.

  4. Progress Towards a Microgravity CFD Validation Study Using the ISS SPHERES-SLOSH Experiment

    NASA Technical Reports Server (NTRS)

    Storey, Jedediah M.; Kirk, Daniel; Marsell, Brandon (Editor); Schallhorn, Paul (Editor)

    2017-01-01

Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and to improving the performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many CFD programs have been validated by slosh experiments using various fluids in earth gravity, but prior to the ISS SPHERES-Slosh experiment, little experimental data for long-duration, zero-gravity slosh existed. This paper presents the current status of an ongoing CFD validation study using the ISS SPHERES-Slosh experimental data.

  5. Progress Towards a Microgravity CFD Validation Study Using the ISS SPHERES-SLOSH Experiment

    NASA Technical Reports Server (NTRS)

    Storey, Jed; Kirk, Daniel (Editor); Marsell, Brandon (Editor); Schallhorn, Paul (Editor)

    2017-01-01

Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and to improving the performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many CFD programs have been validated by slosh experiments using various fluids in earth gravity, but prior to the ISS SPHERES-Slosh experiment, little experimental data for long-duration, zero-gravity slosh existed. This paper presents the current status of an ongoing CFD validation study using the ISS SPHERES-Slosh experimental data.

  6. Code Validation Studies of High-Enthalpy Flows

    DTIC Science & Technology

    2006-12-01

    stage of future hypersonic vehicles. The development and design of such vehicles is aided by the use of experimentation and numerical simulation... numerical predictions and experimental measurements. 3. Summary of Previous Work We have studied extensively hypersonic double-cone flows with and in...the experimental measurements and the numerical predictions. When we accounted for that effect in numerical simulations, and also augmented the

  7. Theory and simulations of covariance mapping in multiple dimensions for data analysis in high-event-rate experiments

    NASA Astrophysics Data System (ADS)

    Zhaunerchyk, V.; Frasinski, L. J.; Eland, J. H. D.; Feifel, R.

    2014-05-01

    Multidimensional covariance analysis and its validity for correlation of processes leading to multiple products are investigated from a theoretical point of view. The need to correct for false correlations induced by experimental parameters which fluctuate from shot to shot, such as the intensity of self-amplified spontaneous emission x-ray free-electron laser pulses, is emphasized. Threefold covariance analysis based on simple extension of the two-variable formulation is shown to be valid for variables exhibiting Poisson statistics. In this case, false correlations arising from fluctuations in an unstable experimental parameter that scale linearly with signals can be eliminated by threefold partial covariance analysis, as defined here. Fourfold covariance based on the same simple extension is found to be invalid in general. Where fluctuations in an unstable parameter induce nonlinear signal variations, a technique of contingent covariance analysis is proposed here to suppress false correlations. In this paper we also show a method to eliminate false correlations associated with fluctuations of several unstable experimental parameters.
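The partial covariance correction described in this record follows the standard two-variable construction: subtract from cov(X, Y) the false correlation channelled through the fluctuating parameter I, i.e. cov(X, I)·cov(I, Y)/var(I). A minimal numerical sketch with synthetic Poisson signals and an invented intensity distribution (not the paper's data or its threefold extension):

```python
import numpy as np

def covariance_map(X, Y):
    """Plain covariance map cov(x_i, y_j) estimated over shots (rows)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    return Xc.T @ Yc / X.shape[0]

def partial_covariance_map(X, Y, I):
    """Partial covariance: remove false correlations driven by a fluctuating
    parameter I (e.g. FEL pulse intensity) that scales the signals linearly."""
    cXY = covariance_map(X, Y)
    cXI = covariance_map(X, I[:, None])      # shape (nx, 1)
    cIY = covariance_map(I[:, None], Y)      # shape (1, ny)
    return cXY - cXI @ cIY / I.var()

rng = np.random.default_rng(0)
n_shots = 20000
I = rng.uniform(0.5, 1.5, n_shots)           # shot-to-shot intensity
# Two channels that are independent given I, but whose rates both scale with I:
X = rng.poisson(5.0 * I)[:, None].astype(float)
Y = rng.poisson(3.0 * I)[:, None].astype(float)

plain = covariance_map(X, Y)[0, 0]               # spurious positive correlation
partial = partial_covariance_map(X, Y, I)[0, 0]  # near zero after correction
print(plain, partial)
```

The plain map shows a strong false correlation (both rates ride on I), while the partial map suppresses it, which is the effect the record's linear-fluctuation analysis relies on.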

  8. Kinematics of the thoracic T10-T11 motion segment: locus of instantaneous axes of rotation in flexion and extension.

    PubMed

    Qiu, Tian-Xia; Teo, Ee-Chon; Lee, Kim-Kheng; Ng, Hong-Wan; Yang, Kai

    2004-04-01

    The purpose of this study was to determine the locations and loci of instantaneous axes of rotation (IARs) of the T10-T11 motion segment in flexion and extension. An anatomically accurate three-dimensional model of thoracic T10-T11 functional spinal unit (FSU) was developed and validated against published experimental data under flexion, extension, lateral bending, and axial rotation loading configurations. The validated model was exercised under six load configurations that produced motions only in the sagittal plane to characterize the loci of IARs for flexion and extension. The IARs for both flexion and extension under these six load types were directly below the geometric center of the moving vertebra, and all the loci of IARs were tracked superoanteriorly for flexion and inferoposteriorly for extension with rotation. These findings may offer an insight to better understanding of the kinematics of the human thoracic spine and provide clinically relevant information for the evaluation of spinal stability and implant device functionality.
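For a planar (sagittal-plane) rigid-body displacement x → Rx + t, the instantaneous axis of rotation is the fixed point of the transformation, which is the quantity the loci above are built from. A small sketch of that textbook computation with hypothetical numbers, not the paper's FE results:

```python
import numpy as np

def planar_iar(theta, t):
    """IAR of a planar rigid-body displacement x -> R x + t:
    the fixed point c solving (I - R) c = t (valid for theta != 0)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return np.linalg.solve(np.eye(2) - R, t)

# Hypothetical example: a vertebra rotates 5 deg (flexion) about a center
# located 30 mm below its geometric center.
theta = np.deg2rad(5.0)
c_true = np.array([0.0, -30.0])
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = c_true - R @ c_true       # translation consistent with that center
c_est = planar_iar(theta, t)
print(c_est)                  # recovers the assumed center
```

Tracking c_est over successive load increments yields a locus of IARs like the one characterized in the study.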

  9. Development and validation of a 10-year-old child ligamentous cervical spine finite element model.

    PubMed

    Dong, Liqiang; Li, Guangyao; Mao, Haojie; Marek, Stanley; Yang, King H

    2013-12-01

Although a number of finite element (FE) adult cervical spine models have been developed to understand the injury mechanisms of the neck in automotive-related crash scenarios, there have been fewer efforts to develop a child neck model. In this study, a 10-year-old ligamentous cervical spine FE model was developed for application in the improvement of pediatric safety related to motor vehicle crashes. The model geometry was obtained from medical scans and meshed using a multi-block approach. Appropriate properties, based on a review of the literature in conjunction with scaling, were assigned to different parts of the model. Child tensile force-deformation data in three segments, Occipital-C2 (C0-C2), C4-C5 and C6-C7, were used to validate the cervical spine model and predict failure forces and displacements. Design of computer experiments was performed to determine the failure properties for intervertebral discs and ligaments needed to set up the FE model. The model-predicted ultimate displacements and forces were within the experimental range. The cervical spine FE model was validated in flexion and extension against the child experimental data in three segments, C0-C2, C4-C5 and C6-C7. Other model predictions were found to be consistent with experimental responses scaled from adult data. The whole cervical spine model was also validated in tension, flexion, and extension against the child experimental data. This study provided methods for developing a child ligamentous cervical spine FE model and for predicting soft tissue failures in tension.

  10. Cutting the Wires: Modularization of Cellular Networks for Experimental Design

    PubMed Central

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-01

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264

  11. Three-dimensional computational fluid dynamics modelling and experimental validation of the Jülich Mark-F solid oxide fuel cell stack

    NASA Astrophysics Data System (ADS)

    Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.

    2018-01-01

This work is among the first in which the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F 18-cell stack operating in a test furnace. Good agreement is obtained between the model and the experimental results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. Transient effects during the current ramp-up in the experiment may explain why the average voltage measured for the power curve is lower than the model predicts.

  12. Experimental Study: High Altitude Forced Convective Cooling of Electromechanical Actuation Systems

    DTIC Science & Technology

    2016-01-01

    experimental validation at altitudes above 16,000 feet, relevant to commercial and military aircraft. The convective heat transfer coefficient at altitudes...and natural occurring phenomena. Figure 1.3 also shows that a typical flight ceiling for commercial and military air breathing aircraft is about...However, they have not been extensively vetted in atmospheric conditions experienced by commercial and tactical military aircraft. 1.3 Purpose

  13. Design and experimental validation of a flutter suppression controller for the active flexible wing

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1992-01-01

The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and extensive simulation-based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure that meets stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneously suppressing two flutter modes, significantly increasing the flutter dynamic pressure despite modeling errors in the predicted flutter dynamic pressure and flutter frequency. The flutter suppression controller was also successfully operated in combination with another controller to perform flutter suppression during rapid rolling maneuvers.

  14. Cutting the wires: modularization of cellular networks for experimental design.

    PubMed

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-07

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  15. Investigation of the effect of the ejector on the performance of the pulse detonation engine nozzle extension

    NASA Astrophysics Data System (ADS)

    Korobov, A. E.; Golovastov, S. V.

    2015-11-01

The influence of an ejector nozzle extension on gas flow in a pulse detonation engine was investigated numerically and experimentally. Detonation was initiated in a stoichiometric hydrogen-oxygen mixture in a cylindrical detonation tube. A cylindrical ejector was constructed and mounted at the open end of the tube. Thrust, air consumption, and detonation parameters were measured in single-shot and multiple-shot regimes of operation. An axisymmetric model was used in the numerical investigation. The Navier-Stokes equations were solved using a second-order-accurate Roe finite-difference scheme. Initial conditions were estimated on the basis of experimental data. Numerical results were validated against the experimental data.

  16. Fail Safe, High Temperature Magnetic Bearings

    NASA Technical Reports Server (NTRS)

Minihan, Thomas; Palazzolo, Alan; Kim, Yeonkyu; Lei, Shu-Liang; Kenny, Andrew; Na, Uhn Joo; Tucker, Randy; Preuss, Jason; Hunt, Andrew; Carter, Bart

    2002-01-01

This paper contributes to the magnetic bearing literature in two distinct areas: high temperature operation and redundant actuation. First, design considerations and test results are given for the first published high-speed rotating test of a magnetic bearing at 538 C (1000 F). Secondly, a significant extension of the flux-isolation-based redundant actuator control algorithm is proposed to eliminate the prior deficiency of changing position stiffness after failure. The benefit of the novel extension was not experimentally demonstrated due to a high active stiffness requirement. In addition, test results are given for actuator failure tests at 399 C (750 F) and 12,500 rpm. Finally, simulation results are presented confirming the experimental data and validating the redundant control algorithm.

  17. Flutter suppression for the Active Flexible Wing - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Srinathkumar, S.

    1992-01-01

The synthesis and experimental validation of a control law for an active flutter suppression system for the Active Flexible Wing wind-tunnel model is presented. The design was accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and extensive simulation-based analysis. The design approach relied on a fundamental understanding of the flutter mechanism to formulate a simple control law structure. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in the design model. The flutter suppression controller was also successfully operated in combination with a rolling maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  18. Motivation Interventions in Education: A Meta-Analytic Review

    ERIC Educational Resources Information Center

    Lazowski, Rory A.; Hulleman, Chris S.

    2016-01-01

    This meta-analysis provides an extensive and organized summary of intervention studies in education that are grounded in motivation theory. We identified 74 published and unpublished papers that experimentally manipulated an independent variable and measured an authentic educational outcome within an ecologically valid educational context. Our…

  19. A Phenomenological Model and Validation of Shortening Induced Force Depression during Muscle Contractions

    PubMed Central

    McGowan, C.P.; Neptune, R.R.; Herzog, W.

    2009-01-01

History-dependent effects on muscle force development following active changes in length have been measured in a number of experimental studies. However, few muscle models have included these properties or examined their impact on force and power output in dynamic cyclic movements. The goal of this study was to develop and validate a modified Hill-type muscle model that includes shortening-induced force depression and assess its influence on locomotor performance. The magnitude of force depression was defined by empirical relationships based on muscle mechanical work. To validate the model, simulations incorporating force depression were developed to emulate single-muscle in situ and whole muscle group leg extension experiments. There was excellent agreement between simulation and experimental values, with in situ force patterns closely matching the experimental data (average RMS error < 1.5 N) and force depression in the simulated leg extension exercise being similar in magnitude to experimental values (6.0% vs 6.5%, respectively). To examine the influence of force depression on locomotor performance, simulations of maximum power pedaling with and without force depression were generated. Force depression decreased maximum crank power by 20%-40%, depending on the relationship between force depression and muscle work used. These results indicate that force depression has the potential to substantially influence muscle power output in dynamic cyclic movements. However, to fully understand the impact of this phenomenon on human movement, more research is needed to characterize the relationship between force depression and mechanical work in large muscles with different morphologies. PMID:19879585
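The work-based depression rule this record describes (depression magnitude defined by empirical relationships on muscle mechanical work) can be caricatured in a few lines. The force-velocity factor, the gain k_fd, and all numbers below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def simulate_shortening(F_iso, k_fd, velocities, dt):
    """Toy Hill-type contraction with shortening-induced force depression.
    Force = isometric force * crude linear force-velocity factor, then
    depressed in proportion to accumulated shortening work (dF ~ k_fd * W).
    All parameters are illustrative, not fitted to any experiment."""
    W = 0.0                                   # accumulated shortening work (J)
    forces = []
    for v in velocities:                      # v > 0 means shortening (m/s)
        fv = max(0.0, 1.0 - v / 0.5)          # linear force-velocity factor
        F = max(0.0, F_iso * fv * (1.0 - k_fd * W))
        W += F * v * dt                       # work done during this step
        forces.append(F)
    return np.array(forces), W

forces, work = simulate_shortening(F_iso=100.0, k_fd=0.02,
                                   velocities=[0.1] * 50, dt=0.01)
depression = 1.0 - forces[-1] / forces[0]
print(f"force depression after shortening: {depression:.1%}")
```

With these invented numbers the depression comes out in the single-digit-percent range, the same order as the 6.0% vs 6.5% agreement quoted above.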

  20. Measuring landscape esthetics: the scenic beauty estimation method

    Treesearch

    Terry C. Daniel; Ron S. Boster

    1976-01-01

    The Scenic Beauty Estimation Method (SBE) provides quantitative measures of esthetic preferences for alternative wildland management systems. Extensive experimentation and testing with user, interest, and professional groups validated the method. SBE shows promise as an efficient and objective means for assessing the scenic beauty of public forests and wildlands, and...

  1. Comparative Bacterial Proteomics: Analysis of the Core Genome Concept

    PubMed Central

    Callister, Stephen J.; McCue, Lee Ann; Turse, Joshua E.; Monroe, Matthew E.; Auberry, Kenneth J.; Smith, Richard D.; Adkins, Joshua N.; Lipton, Mary S.

    2008-01-01

    While comparative bacterial genomic studies commonly predict a set of genes indicative of common ancestry, experimental validation of the existence of this core genome requires extensive measurement and is typically not undertaken. Enabled by an extensive proteome database developed over six years, we have experimentally verified the expression of proteins predicted from genomic ortholog comparisons among 17 environmental and pathogenic bacteria. More exclusive relationships were observed among the expressed protein content of phenotypically related bacteria, which is indicative of the specific lifestyles associated with these organisms. Although genomic studies can establish relative orthologous relationships among a set of bacteria and propose a set of ancestral genes, our proteomics study establishes expressed lifestyle differences among conserved genes and proposes a set of expressed ancestral traits. PMID:18253490

  2. Comparison of Aircraft Icing Growth Assessment Software

    NASA Technical Reports Server (NTRS)

    Wright, William; Potapczuk, Mark G.; Levinson, Laurie H.

    2011-01-01

A research project is underway to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. An extensive, quantitative comparison of the results against the database of ice shapes generated in the NASA Glenn Icing Research Tunnel (IRT) has been performed, including additional data taken to extend the database into the Supercooled Large Drop (SLD) regime. The project shows the differences in ice shape between LEWICE 3.2.2, GlennICE, and experimental data. The project addresses the validation of the software against a recent set of ice-shape data in the SLD regime. This validation effort mirrors a similar effort undertaken for previous validations of LEWICE; those reports quantified the ice accretion prediction capabilities of the LEWICE software. Several ice geometry features were proposed for comparing ice shapes in a quantitative manner. The resulting analysis showed that LEWICE compared well to the available experimental data.

  3. CPV cells cooling system based on submerged jet impingement: CFD modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Montorfano, Davide; Gaetano, Antonio; Barbato, Maurizio C.; Ambrosetti, Gianluca; Pedretti, Andrea

    2014-09-01

Concentrating photovoltaic (CPV) cells offer higher efficiencies than standard PV cells and allow the overall solar cell area to be strongly reduced. However, to operate correctly and exploit these advantages, their temperature has to be kept low and as uniform as possible, and the cooling circuit pressure drops need to be limited. In this work, an impingement water jet cooling system specifically designed for an industrial HCPV receiver is studied. Based on the literature and by means of accurate computational fluid dynamics (CFD) simulations, the nozzle-to-plate distance, the number of jets, and the nozzle pitch, i.e. the distance between adjacent jets, were optimized. Afterwards, extensive experimental tests were performed to validate the pressure drop and cooling power simulation results.

  4. Experimental, Numerical, and Analytical Slosh Dynamics of Water and Liquid Nitrogen in a Spherical Tank

    NASA Technical Reports Server (NTRS)

    Storey, Jedediah Morse

    2016-01-01

Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and to improving the performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many experimental and numerical studies of water slosh have been conducted; however, slosh data for cryogenic liquids is lacking. Water and cryogenic liquid nitrogen are used in various ground-based tests with a spherical tank to characterize damping, slosh mode frequencies, and slosh forces. A single ring baffle is installed in the tank for some of the tests. Analytical models for slosh modes, slosh forces, and baffle damping are constructed based on prior work. Select experiments are simulated using commercial CFD software, and the numerical results are compared to the analytical and experimental results for the purposes of validation and methodology improvement.
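For context on the analytical slosh models mentioned here: linear slosh theory gives a closed-form first-mode frequency for an upright cylindrical tank, while the spherical tank used in these experiments requires fill-level-dependent tabulated coefficients instead. The sketch below is therefore only the textbook cylindrical baseline, with hypothetical tank dimensions:

```python
import math

def lateral_slosh_freq(radius, fill_height, g=9.81, lam=1.8412):
    """First lateral slosh natural frequency (Hz) for an upright cylindrical
    tank from standard linear theory: omega^2 = (g*lam/R) * tanh(lam*h/R),
    where lam ~ 1.8412 is the first zero of J1'. Spherical tanks use
    fill-level-dependent tabulated coefficients instead."""
    omega_sq = (g * lam / radius) * math.tanh(lam * fill_height / radius)
    return math.sqrt(omega_sq) / (2.0 * math.pi)

# Hypothetical tank: 0.15 m radius, filled to 0.15 m depth, 1-g ground test
f1 = lateral_slosh_freq(0.15, 0.15)
print(f"first slosh mode ~ {f1:.2f} Hz")
```

Note the frequency rises with fill level (through the tanh term) and would drop toward zero as g goes to zero, which is why the long-duration microgravity data above is so hard to obtain on the ground.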

  5. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.

    PubMed

    Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2017-06-30

    Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
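The core experiment in this record (inject a controlled ratio of simulated experimental errors into the modeling set and watch cross-validated performance deteriorate) is easy to reproduce on synthetic data. A sketch with an ordinary least-squares stand-in for the QSAR model; all set sizes, descriptor counts, and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def cv_r2(X, y, k=5):
    """Fivefold cross-validated R^2 of an ordinary least-squares model."""
    n = len(y)
    idx = rng.permutation(n)
    ss_res, ss_tot = 0.0, 0.0
    for fold in range(k):
        test = idx[fold::k]
        train = np.setdiff1d(idx, test)
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        pred = X[test] @ w
        ss_res += np.sum((y[test] - pred) ** 2)
        ss_tot += np.sum((y[test] - y[test].mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic "modeling set": 300 compounds, 10 descriptors, linear activity
n, d = 300, 10
X = rng.normal(size=(n, d))
y_clean = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

scores = {}
for ratio in (0.0, 0.2, 0.4):
    y = y_clean.copy()
    bad = rng.choice(n, size=int(ratio * n), replace=False)
    y[bad] = rng.permutation(y[bad])   # simulated experimental errors
    scores[ratio] = cv_r2(X, y)
print(scores)  # CV performance deteriorates as the error ratio grows
```

As in the study, the compounds with randomized activities dominate the cross-validation residuals, which is what makes them identifiable, even though removing them does not repair the external predictivity.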

  6. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do

    PubMed Central

    2017-01-01

    Numerous chemical data sets have become available for quantitative structure–activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting. PMID:28691113

  7. The Social Psychology of Perception Experiments: Hills, Backpacks, Glucose, and the Problem of Generalizability

    ERIC Educational Resources Information Center

    Durgin, Frank H.; Klein, Brennan; Spiegel, Ariana; Strawser, Cassandra J.; Williams, Morgan

    2012-01-01

    Experiments take place in a physical environment but also a social environment. Generalizability from experimental manipulations to more typical contexts may be limited by violations of ecological validity with respect to either the physical or the social environment. A replication and extension of a recent study (a blood glucose manipulation) was…

  8. Development of Officer Selection Battery Forms 3 and 4. Technical Report 603.

    ERIC Educational Resources Information Center

    Fischl, M. A.; And Others

    This report describes the development, standardization, and validation of two parallel forms of the Officer Selection Battery, a 2-hour, group administrable, paper and pencil test for assessing men and women applying for the Reserve Officers Training Corps (ROTC). Based on an extensive job analysis, 1,400 experimental items in 12 job areas were…

  9. Experimental validation of a new heterogeneous mechanical test design

    NASA Astrophysics Data System (ADS)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    Standard material parameter identification strategies generally use an extensive number of classical tests to collect the required experimental data. However, a great effort has been made recently by the scientific and industrial communities to base this experimental database on heterogeneous tests. These tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, like digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure that maximizes an indicator evaluating the heterogeneity and richness of the strain information. However, no experimental validation had yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. To this end, the DIC technique and a Finite Element Model Updating inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system to provide the strain states on the specimen surface. The identification strategy is applied to the data obtained from the experimental tests, and the results are compared to a reference numerical solution.

  10. Three-dimensional deformation response of a NiTi shape memory helical-coil actuator during thermomechanical cycling: experimentally validated numerical model

    NASA Astrophysics Data System (ADS)

    Dhakal, B.; Nicholson, D. E.; Saleeb, A. F.; Padula, S. A., II; Vaidyanathan, R.

    2016-09-01

    Shape memory alloy (SMA) actuators often operate under a complex state of stress for an extended number of thermomechanical cycles in many aerospace and engineering applications. Hence, it becomes important to account for multi-axial stress states and deformation characteristics (which evolve with thermomechanical cycling) when calibrating any SMA model for implementation in large-scale simulation of actuators. To this end, the present work is focused on the experimental validation of an SMA model calibrated for the transient and cyclic evolutionary behavior of shape memory Ni49.9Ti50.1, for the actuation of axially loaded helical-coil springs. The approach requires both experimental and computational aspects to appropriately assess the thermomechanical response of these multi-dimensional structures. As such, an instrumented and controlled experimental setup was assembled to obtain temperature, torque, degree of twist and extension, while controlling end constraints during heating and cooling of an SMA spring under a constant externally applied axial load. The computational component assesses the capabilities of a general, multi-axial, SMA material-modeling framework, calibrated for Ni49.9Ti50.1 with regard to its usefulness in the simulation of SMA helical-coil spring actuators. Axial extension, being the primary response, was examined on an axially-loaded spring with multiple active coils. Two different conditions of end boundary constraint were investigated in both the numerical simulations as well as the validation experiments: Case (1) where the loading end is restrained against twist (and the resulting torque measured as the secondary response) and Case (2) where the loading end is free to twist (and the degree of twist measured as the secondary response). The present study focuses on the transient and evolutionary response associated with the initial isothermal loading and the subsequent thermal cycles under applied constant axial load. 
The experimental results for the helical-coil actuator under two different boundary conditions are found to be within experimental error of their counterparts in the numerical simulations. The numerical simulation and the experimental validation demonstrate similar transient and evolutionary behavior in the deformation response under the complex, inhomogeneous, multi-axial stress state and large deformations of the helical-coil actuator. This response, although substantially different in magnitude, exhibited evolutionary characteristics similar to those of the simple, uniaxial, homogeneous stress state of the isobaric tensile test results used for model calibration. There was no significant difference in the axial displacement (primary response) magnitudes observed between Cases (1) and (2) for the number of cycles investigated here. The simulated secondary responses of the two cases evolved in a similar manner when compared to the experimental validation of the respective cases.

  11. Titanium Honeycomb Panel Testing

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance; Thompson, Randolph C.

    1996-01-01

    Thermal-mechanical tests were performed on a titanium honeycomb sandwich panel to experimentally validate the hypersonic wing panel concept and compare test data with analysis. Details of the test article, test fixture development, instrumentation, and test results are presented. After extensive testing to 900 deg. F, non-destructive evaluation of the panel has not detected any significant structural degradation caused by the applied thermal-mechanical loads.

  12. Experimental investigation of hypersonic aerodynamics

    NASA Technical Reports Server (NTRS)

    Heinemann, K.; Intrieri, Peter F.

    1987-01-01

    An extensive series of ballistic range tests is currently being conducted at the Ames Research Center. These tests investigate the hypersonic aerodynamic characteristics of two basic configurations: the blunt-cone Galileo probe, which is scheduled to be launched in late 1989 and will enter the atmosphere of Jupiter in 1994, and a generic slender-cone configuration intended to provide experimental aerodynamic data, including good flow-field definition, that computational aerodynamicists can use to validate their computer codes. Some of the results obtained thus far are presented, and work planned for the near future is discussed.

  13. Coke formation in the thermal cracking of hydrocarbons. 4: Modeling of coke formation in naphtha cracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyniers, G.C.; Froment, G.F.; Kopinke, F.D.

    1994-11-01

    An extensive experimental program has been carried out in a pilot unit for the thermal cracking of hydrocarbons. On the basis of the experimental information and the insight into the mechanisms of coke formation in pyrolysis reactors, a mathematical model describing coke formation has been derived. This model has been incorporated into the existing simulation tools at the Laboratorium voor Petrochemische Techniek, and the run length of an industrial naphtha cracking furnace has been accurately simulated. In this way the coking model has been validated.

  14. Experimental Study of Supercooled Large Droplet Impingement Effects

    NASA Technical Reports Server (NTRS)

    Papadakis, M.; Rachman, A.; Wong, S. C.; Hung, K. E.; Vu, G. T.

    2003-01-01

    Typically, ice accretion results from small supercooled droplets (droplets cooled below freezing), usually 5 to 50 microns in diameter, which can freeze upon impact with an aircraft surface. Recently, ice accretions resulting from supercooled large droplet (SLD) conditions have become a safety concern. Current ice accretion codes have been extensively tested for Title 14 Code of Federal Regulations Part 25, Appendix C icing conditions but have not been validated for SLD icing conditions. This report presents experimental methods for investigating large droplet impingement dynamics and for obtaining small and large water droplet impingement data.

  15. Photons Revisited

    NASA Astrophysics Data System (ADS)

    Batic, Matej; Begalli, Marcia; Han, Min Cheol; Hauf, Steffen; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Han Sung; Grazia Pia, Maria; Saracco, Paolo; Weidenspointner, Georg

    2014-06-01

    A systematic review of methods and data for the Monte Carlo simulation of photon interactions is in progress: it concerns a wide set of theoretical modeling approaches and data libraries available for this purpose. Models and data libraries are assessed quantitatively with respect to an extensive collection of experimental measurements documented in the literature to determine their accuracy; this evaluation exploits rigorous statistical analysis methods. The computational performance of the associated modeling algorithms is evaluated as well. An overview of the assessment of photon interaction models and results of the experimental validation are presented.

  16. Systematic Validation of Protein Force Fields against Experimental Data

    PubMed Central

    Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.

    2012-01-01

    Molecular dynamics simulations provide a vehicle for capturing the structures, motions, and interactions of biological macromolecules in full atomic detail. The accuracy of such simulations, however, is critically dependent on the force field—the mathematical model used to approximate the atomic-level forces acting on the simulated molecular system. Here we present a systematic and extensive evaluation of eight different protein force fields based on comparisons of experimental data with molecular dynamics simulations that reach a previously inaccessible timescale. First, through extensive comparisons with experimental NMR data, we examined the force fields' abilities to describe the structure and fluctuations of folded proteins. Second, we quantified potential biases towards different secondary structure types by comparing experimental and simulation data for small peptides that preferentially populate either helical or sheet-like structures. Third, we tested the force fields' abilities to fold two small proteins—one α-helical, the other with β-sheet structure. The results suggest that force fields have improved over time, and that the most recent versions, while not perfect, provide an accurate description of many structural and dynamical properties of proteins. PMID:22384157

  17. Family Implementation of Positive Behavior Support for a Child with Autism: Longitudinal, Single-Case, Experimental, and Descriptive Replication and Extension

    ERIC Educational Resources Information Center

    Lucyshyn, Joseph M.; Albin, Richard W.; Horner, Robert H.; Mann, Jane C.; Mann, James A.; Wadsworth, Gina

    2007-01-01

    This study examined the efficacy, social validity, and durability of a positive behavior support (PBS) approach with the family of a girl with autism and severe problem behavior. The study was conducted across a 10-year period beginning when the child was 5 years old. A multiple baseline across family routines design evaluated the functional…

  18. Computational identification of structural factors affecting the mutagenic potential of aromatic amines: study design and experimental validation.

    PubMed

    Slavov, Svetoslav H; Stoyanova-Slavova, Iva; Mattes, William; Beger, Richard D; Brüschweiler, Beat J

    2018-07-01

    A grid-based, alignment-independent 3D-SDAR (three-dimensional spectral data-activity relationship) approach based on simulated ¹³C and ¹⁵N NMR chemical shifts augmented with through-space interatomic distances was used to model the mutagenicity of 554 primary and 419 secondary aromatic amines. A robust modeling strategy supported by extensive validation including randomized training/hold-out test set pairs, validation sets, "blind" external test sets as well as experimental validation was applied to avoid over-parameterization and build Organization for Economic Cooperation and Development (OECD 2004) compliant models. Based on an experimental validation set of 23 chemicals tested in a two-strain Salmonella typhimurium Ames assay, 3D-SDAR was able to achieve performance comparable to 5-strain (Ames) predictions by Lhasa Limited's Derek and Sarah Nexus for the same set. Furthermore, mapping of the most frequently occurring bins on the primary and secondary aromatic amine structures allowed the identification of molecular features that were associated either positively or negatively with mutagenicity. Prominent structural features found to enhance the mutagenic potential included: nitrobenzene moieties, conjugated π-systems, nitrothiophene groups, and aromatic hydroxylamine moieties. 3D-SDAR was also able to capture "true" negative contributions that are particularly difficult to detect through alternative methods. These include sulphonamide, acetamide, and other functional groups, which not only lack contributions to the overall mutagenic potential, but are known to actively lower it, if present in the chemical structures of what otherwise would be potential mutagens.

  19. Mathematical, numerical and experimental analysis of the swirling flow at a Kaplan runner outlet

    NASA Astrophysics Data System (ADS)

    Muntean, S.; Ciocan, T.; Susan-Resiga, R. F.; Cervantes, M.; Nilsson, H.

    2012-11-01

    The paper presents a novel mathematical model for a priori computation of the swirling flow at the Kaplan runner outlet. The model is an extension of the initial version developed by Susan-Resiga et al [1] to include the contributions of a non-negligible radial velocity and of variable rothalpy. Simple analytical expressions are derived for these additional data from three-dimensional numerical simulations of the Kaplan turbine. The final results, i.e. the velocity component profiles, are validated against experimental data at two operating points with the same Kaplan runner blade opening but variable discharge.

  20. Hierarchical atom type definitions and extensible all-atom force fields.

    PubMed

    Jin, Zhao; Yang, Chunwei; Cao, Fenglei; Li, Feng; Jing, Zhifeng; Chen, Long; Shen, Zhe; Xin, Liang; Tong, Sijia; Sun, Huai

    2016-03-15

    The extensibility of a force field is key to solving the missing-parameter problem commonly encountered in force field applications. The extensibility of conventional force fields is traditionally managed in the parameterization procedure, which becomes impractical once the coverage of the force field grows beyond a threshold. A hierarchical atom-type definition (HAD) scheme is proposed to make atom type definitions extensible, which ensures that force fields developed on the basis of those definitions are extensible. To demonstrate how HAD works and to prepare a foundation for future developments, two general force fields based on the AMBER and DFF functional forms were parameterized for common organic molecules. The force field parameters were derived from the same set of quantum mechanical data and experimental liquid data using an automated parameterization tool, and validated by calculating molecular and liquid properties. The hydration free energies were calculated successfully by introducing a polarization scaling factor to the dispersion term between solvent and solute molecules. © 2015 Wiley Periodicals, Inc.

  1. LBflow: An extensible lattice Boltzmann framework for the simulation of geophysical flows. Part II: usage and validation

    NASA Astrophysics Data System (ADS)

    Llewellin, E. W.

    2010-02-01

    LBflow is a flexible, extensible implementation of the lattice Boltzmann method, developed with geophysical applications in mind. The theoretical basis for LBflow, and its implementation, are presented in the companion paper, 'Part I'. This article covers the practical usage of LBflow and presents guidelines for obtaining optimal results from available computing power. The relationships among simulation resolution, accuracy, runtime and memory requirements are investigated in detail. Particular attention is paid to the origin, quantification and minimization of errors. LBflow is validated against analytical, numerical and experimental results for a range of three-dimensional flow geometries. The fluid conductance of prismatic pipes with various cross sections is calculated with LBflow and found to be in excellent agreement with published results. Simulated flow along sinusoidally constricted pipes gives good agreement with experimental data for a wide range of Reynolds number. The permeability of packs of spheres is determined and shown to be in excellent agreement with analytical results. The accuracy of internal flow patterns within the investigated geometries is also in excellent quantitative agreement with published data. The development of vortices within a sinusoidally constricted pipe with increasing Reynolds number is shown, demonstrating the insight that LBflow can offer as a 'virtual laboratory' for fluid flow.

  2. Investigating Mechanisms of Chronic Kidney Disease in Mouse Models

    PubMed Central

    Eddy, Allison A.; Okamura, Daryl M.; Yamaguchi, Ikuyo; López-Guisa, Jesús M.

    2011-01-01

    Animal models of chronic kidney disease (CKD) are important experimental tools that are used to investigate novel mechanistic pathways and to validate potential new therapeutic interventions prior to pre-clinical testing in humans. Over the past several years, mouse CKD models have been extensively used for these purposes. Despite significant limitations, the model of unilateral ureteral obstruction (UUO) has essentially become the high throughput in vivo model, as it recapitulates the fundamental pathogenetic mechanisms that typify all forms of CKD in a relatively short time span. In addition, several alternative mouse models are available that can be used to validate new mechanistic paradigms and/or novel therapies. Several models are reviewed – both genetic and experimentally induced – that provide investigators with an opportunity to include renal functional study end-points together with quantitative measures of fibrosis severity, something that is not possible with the UUO model. PMID:21695449

  3. Experimental validation of the intrinsic spatial efficiency method over a wide range of sizes for cylindrical sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Larroquette, Philippe; Camilla, S.

    The intrinsic spatial efficiency method is a new absolute method to determine the efficiency of a gamma spectroscopy system for any extended source. In the original work the method was experimentally demonstrated and validated for homogeneous cylindrical sources containing ¹³⁷Cs, whose sizes varied over a small range (29.5 mm radius and 15.0 to 25.9 mm height). In this work we present an extension of the validation over a wide range of sizes. The dimensions of the cylindrical sources range from 10 to 40 mm in height and from 8 to 30 mm in radius. The cylindrical sources were prepared using the reference material IAEA-372, which had a specific activity of 11320 Bq/kg as of July 2006. The results were better for the sources with 29 mm radius, showing relative bias of less than 5%, and for the sources with 10 mm height, showing relative bias of less than 6%. Compared with the results obtained in the work in which the method was first presented, the majority of these results show excellent agreement.

  4. Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick; Klein, Vladislav

    2011-01-01

    Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.

  5. Toward reliable biomarker signatures in the age of liquid biopsies - how to standardize the small RNA-Seq workflow

    PubMed Central

    Buschmann, Dominik; Haberberger, Anna; Kirchner, Benedikt; Spornraft, Melanie; Riedmaier, Irmgard; Schelling, Gustav; Pfaffl, Michael W.

    2016-01-01

    Small RNA-Seq has emerged as a powerful tool in transcriptomics, gene expression profiling and biomarker discovery. Sequencing cell-free nucleic acids, particularly microRNA (miRNA), from liquid biopsies additionally provides exciting possibilities for molecular diagnostics, and might help establish disease-specific biomarker signatures. The complexity of the small RNA-Seq workflow, however, bears challenges and biases that researchers need to be aware of in order to generate high-quality data. Rigorous standardization and extensive validation are required to guarantee reliability, reproducibility and comparability of research findings. Hypotheses based on flawed experimental conditions can be inconsistent and even misleading. Comparable to the well-established MIQE guidelines for qPCR experiments, this work aims at establishing guidelines for experimental design and pre-analytical sample processing, standardization of library preparation and sequencing reactions, as well as facilitating data analysis. We highlight bottlenecks in small RNA-Seq experiments, point out the importance of stringent quality control and validation, and provide a primer for differential expression analysis and biomarker discovery. Following our recommendations will encourage better sequencing practice, increase experimental transparency and lead to more reproducible small RNA-Seq results. This will ultimately enhance the validity of biomarker signatures, and allow reliable and robust clinical predictions. PMID:27317696

  6. Criterion-Related Validity of Sit-and-Reach Tests for Estimating Hamstring and Lumbar Extensibility: a Meta-Analysis

    PubMed Central

    Mayorga-Vega, Daniel; Merino-Marban, Rafael; Viciana, Jesús

    2014-01-01

    The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose, relevant studies were searched in seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson's zero-order correlation coefficients (r) between sit-and-reach tests and hamstring and/or lumbar extensibility criterion measures. Then, from the included studies, the Hunter-Schmidt psychometric meta-analysis approach was applied to estimate the population criterion-related validity of sit-and-reach tests. First, the corrected mean correlation (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlation values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67), but a low mean validity for estimating lumbar extensibility (rp = 0.16-0.35). Generally, females, adults and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited, such as in a school setting or in large-scale studies, scientists and practitioners could use sit-and-reach tests as a useful alternative for estimating hamstring extensibility, but not lumbar extensibility.
Key Points: Overall, sit-and-reach tests have a moderate mean criterion-related validity for estimating hamstring extensibility, but a low mean validity for estimating lumbar extensibility. Among all the sit-and-reach test protocols, the Classic sit-and-reach test seems to be the best option for estimating hamstring extensibility. End scores (e.g., the Classic sit-and-reach test) are a better indicator of hamstring extensibility than modifications that incorporate the fingers-to-box distance (e.g., the Modified sit-and-reach test). When angular tests such as the straight leg raise or knee extension tests cannot be used, sit-and-reach tests seem to be a useful field-test alternative for estimating hamstring extensibility, but not lumbar extensibility. PMID:24570599
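The Hunter-Schmidt pooling step described above can be sketched as a sample-size-weighted mean correlation followed by a correction for attenuation due to measurement error. This is a bare-bones illustration only (artifact distributions and range-restriction corrections are omitted), and the correlations and reliability values below are made-up assumptions, not figures from the meta-analysis.

```python
import math

def hs_corrected_mean_r(correlations, sample_sizes,
                        rel_predictor=1.0, rel_criterion=1.0):
    """Sample-size-weighted mean correlation, corrected for attenuation in
    predictor and criterion (minimal Hunter-Schmidt-style sketch)."""
    n_total = sum(sample_sizes)
    r_bar = sum(n * r for r, n in zip(correlations, sample_sizes)) / n_total
    # Attenuation correction: divide by the product of the square roots
    # of the two reliabilities.
    return r_bar / (math.sqrt(rel_predictor) * math.sqrt(rel_criterion))

# Illustrative pooling of three hypothetical validity coefficients:
rp = hs_corrected_mean_r([0.45, 0.55, 0.60], [40, 60, 100],
                         rel_predictor=0.90, rel_criterion=0.81)
```

With both reliabilities set to 1.0 the function reduces to the plain weighted mean, which is the "bare-bones" estimate before artifact correction.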

  7. Assessment of the neutron dose field around a biomedical cyclotron: FLUKA simulation and experimental measurements.

    PubMed

    Infantino, Angelo; Cicoria, Gianfranco; Lucconi, Giulia; Pancaldi, Davide; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano; Marengo, Mario

    2016-12-01

    In the planning of a new cyclotron facility, an accurate knowledge of the radiation field around the accelerator is fundamental for the design of shielding and the protection of workers, the general public and the environment. Monte Carlo simulations can be very useful in this process, and their use is constantly increasing. However, few data have been published so far as regards the proper validation of Monte Carlo simulation against experimental measurements, particularly in the energy range of biomedical cyclotrons. In this work a detailed model of an existing installation of a GE PETtrace 16.5 MeV cyclotron was developed using FLUKA. An extensive measurement campaign of the neutron ambient dose equivalent H*(10) in marked positions around the cyclotron was conducted using a neutron rem-counter probe and CR39 neutron detectors. Data from a previous measurement campaign performed by our group using TLDs were also re-evaluated. The FLUKA model was then validated by comparing the results of high-statistics simulations with experimental data. In 10 out of 12 measurement locations, FLUKA simulations were in agreement, within uncertainties, with all three sets of experimental data; in the remaining 2 positions, agreement was reached with two of the three data sets. Our work quantitatively validates our FLUKA simulation setup and confirms that the Monte Carlo technique can produce accurate results in the energy range of biomedical cyclotrons. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
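A generic form of the "agreement within uncertainties" criterion used in such simulation-versus-measurement comparisons can be sketched as follows. The quadrature-summed uncertainty at a coverage factor k is an assumption about the shape of the test, not the paper's exact statistical procedure, and the numbers are purely illustrative.

```python
import math

def consistent_within_uncertainty(sim, u_sim, meas, u_meas, k=1.0):
    """True if a simulated value and a measured value agree within their
    combined (quadrature-summed) uncertainty at coverage factor k.
    Generic consistency-check sketch, not the paper's exact test."""
    return abs(sim - meas) <= k * math.hypot(u_sim, u_meas)

# Illustrative H*(10) comparison at one location (made-up values, uSv/h):
ok = consistent_within_uncertainty(sim=12.5, u_sim=0.8, meas=11.9, u_meas=0.9)
```

Applying such a check per location and per detector type is one way to arrive at tallies like "10 out of 12 locations consistent with all data sets".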

  8. A simplified conjoint recognition paradigm for the measurement of gist and verbatim memory.

    PubMed

    Stahl, Christoph; Klauer, Karl Christoph

    2008-05-01

    The distinction between verbatim and gist memory traces has furthered the understanding of numerous phenomena in various fields, such as false memory research, research on reasoning and decision making, and cognitive development. To measure verbatim and gist memory empirically, an experimental paradigm and multinomial measurement model has been proposed but rarely applied. In the present article, a simplified conjoint recognition paradigm and multinomial model is introduced and validated as a measurement tool for the separate assessment of verbatim and gist memory processes. A Bayesian metacognitive framework is applied to validate guessing processes. Extensions of the model toward incorporating the processes of phantom recollection and erroneous recollection rejection are discussed.

  9. Evaluation of a computational model to predict elbow range of motion

    PubMed Central

    Nishiwaki, Masao; Johnson, James A.; King, Graham J. W.; Athwal, George S.

    2014-01-01

    Computer models capable of predicting elbow flexion and extension range of motion (ROM) limits would be useful for assisting surgeons in improving the outcomes of surgical treatment of patients with elbow contractures. A simple and robust computer-based model was developed that predicts elbow joint ROM using bone geometries calculated from computed tomography image data. The model assumes a hinge-like flexion-extension axis, and that elbow passive ROM limits can be based on terminal bony impingement. The model was validated against experimental results with a cadaveric specimen, and was able to predict the flexion and extension limits of the intact joint to within 0° and 3°, respectively. The model was also able to predict the flexion and extension limits to within 1° and 2°, respectively, when simulated osteophytes were inserted into the joint. Future studies based on this approach will be used for the prediction of elbow flexion-extension ROM in patients with primary osteoarthritis to help identify motion-limiting hypertrophic osteophytes, and will eventually permit real-time computer-assisted navigated excisions. PMID:24841799

  10. Sum Frequency Generation of Interfacial Lipid Monolayers Shows Polarization Dependence on Experimental Geometries.

    PubMed

    Li, Bolin; Li, Xu; Ma, Yong-Hao; Han, Xiaofeng; Wu, Fu-Gen; Guo, Zhirui; Chen, Zhan; Lu, Xiaolin

    2016-07-19

    Sum frequency generation (SFG) vibrational spectroscopy has been widely employed to investigate molecular structures of biological surfaces and interfaces including model cell membranes. A variety of lipid monolayers or bilayers serving as model cell membranes, and their interactions with many different molecules, have been extensively studied using SFG. Here, we conducted an in-depth investigation of polarization-dependent SFG signals collected from interfacial lipid monolayers using different experimental geometries, i.e., the prism geometry (total internal reflection) and the window geometry (external reflection). The different SFG spectral features of interfacial lipid monolayers detected using different experimental geometries are due to the interplay between the varied Fresnel coefficients and the second-order nonlinear susceptibility tensor terms of different vibrational modes (i.e., the symmetric (ss) and antisymmetric (as) stretching modes of methyl groups), which were analyzed in detail in this study. Therefore, understanding the interplay between the interfacial Fresnel coefficients and χ(2) tensors is a prerequisite for correctly understanding the SFG spectral features with respect to different experimental geometries. More importantly, the information derived in this paper should not be limited to methyl groups with C3v symmetry; valid extension to interfacial functional groups with different molecular symmetries, and even to chiral interfaces, can be expected.

  11. A turbulence model for iced airfoils and its validation

    NASA Technical Reports Server (NTRS)

    Shin, Jaiwon; Chen, Hsun H.; Cebeci, Tuncer

    1992-01-01

    A turbulence model for iced airfoils, based on extending the algebraic eddy-viscosity formulation of Cebeci and Smith developed for two-dimensional flows over smooth and rough surfaces, is described and validated for computed ice shapes obtained for a range of total temperatures from 28 to -15 °F. The validation is made with an interactive boundary-layer method, which uses a panel method to compute the inviscid flow and an inverse finite-difference boundary-layer method to compute the viscous flow. The interaction between the inviscid and viscous flows is established through the Hilbert integral. The calculated drag coefficients compare well with recent experimental data taken at the NASA Lewis Icing Research Tunnel (IRT) and show that, in general, the drag increase due to ice accretion can be predicted well and efficiently.
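The baseline Cebeci-Smith model that the record extends is, in its textbook two-layer form, a short computation. The sketch below uses the standard constants (κ = 0.40, A⁺ = 26, α = 0.0168) and is illustrative of that baseline only, not of the iced-airfoil roughness extension described above.

```python
import math

KAPPA = 0.40    # von Karman constant
A_PLUS = 26.0   # Van Driest damping constant
ALPHA = 0.0168  # Clauser outer-layer constant

def nu_t_inner(y, dudy, u_tau, nu):
    """Inner-layer eddy viscosity: (mixing length)^2 * |du/dy|,
    with Van Driest damping of the mixing length near the wall."""
    y_plus = y * u_tau / nu
    l_mix = KAPPA * y * (1.0 - math.exp(-y_plus / A_PLUS))
    return l_mix ** 2 * abs(dudy)

def nu_t_outer(u_e, delta_star):
    """Outer-layer eddy viscosity: alpha * U_e * displacement thickness
    (the Klebanoff intermittency factor is omitted for brevity)."""
    return ALPHA * u_e * delta_star

def nu_t(y, dudy, u_tau, nu, u_e, delta_star):
    """Two-layer model: the inner value applies until it reaches the outer
    value; taking the minimum approximates that crossover rule, since the
    inner expression grows with wall distance."""
    return min(nu_t_inner(y, dudy, u_tau, nu), nu_t_outer(u_e, delta_star))
```

For iced surfaces the record's extension modifies this formulation for roughness; the details are in the original paper.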

  12. Performance analysis and experimental study on rainfall water purification with an extensive green roof matrix layer in Shanghai, China.

    PubMed

    Guo, Jiankang; Zhang, Yanting; Che, Shengquan

    2018-02-01

    Current research has validated, to some extent, the purification of rainwater by the substrate layer of green roofs, though the effects of the substrate layer on rainwater purification have not been adequately quantified. The present study set up nine extensive green roof experimental combinations based on the precipitation characteristics observed in Shanghai, China. Rainfall events carrying different pollutant loads were simulated, and an orthogonal L9 (3³) design was used to measure purification performance. The purification influence of the extensive green roof substrate layer was quantitatively analyzed to optimize the substrate thickness, substrate proportion, and sodium polyacrylate content. The experiments achieved removal of ammonium nitrogen (NH₄⁺-N), lead (Pb), and zinc (Zn) of up to 93.87%, 98.81%, and 94.55% from the artificial rainfall, respectively, and the NH₄⁺-N, Pb, and Zn event mean concentrations (EMC) were depressed to 0.263 mg/L, 0.002 mg/L and 0.018 mg/L, respectively, all well below the pollutant concentrations of the artificial rainfall. With reference to the rainfall chemical characteristics of Shanghai, a combination of 200 mm substrate thickness, a 1:1:2 loam:perlite:cocopeat proportion, and 2 g/L sodium polyacrylate content is suggested for the design of an extensive green roof substrate to purify NH₄⁺-N, Pb and Zn.
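The two performance figures quoted in this record are standard definitions: removal efficiency relative to the influent, and the flow-weighted event mean concentration. A minimal sketch (sample numbers invented):

```python
def removal_efficiency(c_in, c_out):
    """Pollutant removal (%) relative to the influent concentration."""
    return 100.0 * (c_in - c_out) / c_in

def event_mean_concentration(concs, volumes):
    """Flow-weighted EMC = sum(C_i * V_i) / sum(V_i) over runoff samples."""
    return sum(c * v for c, v in zip(concs, volumes)) / sum(volumes)

# Example: 2.0 mg/L influent reduced to 0.5 mg/L in the effluent.
eff = removal_efficiency(2.0, 0.5)   # 75.0 %
```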

  13. R. A. Fisher and his advocacy of randomization.

    PubMed

    Hall, Nancy S

    2007-01-01

    The requirement of randomization in experimental design was first stated by R. A. Fisher, statistician and geneticist, in 1925 in his book Statistical Methods for Research Workers. Earlier designs were systematic and involved the judgment of the experimenter; this led to possible bias and inaccurate interpretation of the data. Fisher's dictum was that randomization eliminates bias and permits a valid test of significance. Randomization in experiments had been used by Charles Sanders Peirce in 1885, but the practice was not continued. Fisher developed his concepts of randomization as he considered the mathematics of small samples, in discussions with "Student," William Sealy Gosset. Fisher published extensively. His principles of experimental design were spread worldwide by the many "voluntary workers" who came from other institutions to Rothamsted Agricultural Station in England to learn Fisher's methods.
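Fisher's dictum, that randomization permits a valid test of significance, has a direct modern rendering: randomly assign units to arms, then judge the observed effect against the distribution of effects over re-randomizations. A sketch with invented data:

```python
import random
from statistics import mean

def randomize(units, seed=0):
    """Complete randomization: shuffle the units, split into two equal arms."""
    rng = random.Random(seed)
    units = list(units)
    rng.shuffle(units)
    half = len(units) // 2
    return units[:half], units[half:]

def permutation_p_value(treated, control, n_perm=5000, seed=1):
    """Fisher-style randomization test: how often does a random relabeling
    produce a mean difference at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = mean(treated) - mean(control)
    pooled = list(treated) + list(control)
    n_t = len(treated)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = mean(pooled[:n_t]) - mean(pooled[n_t:])
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm
```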

  14. Development and Validation of a Statistical Shape Modeling-Based Finite Element Model of the Cervical Spine Under Low-Level Multiple Direction Loading Conditions

    PubMed Central

    Bredbenner, Todd L.; Eliason, Travis D.; Francis, W. Loren; McFarland, John M.; Merkle, Andrew C.; Nicolella, Daniel P.

    2014-01-01

    Cervical spinal injuries are a significant concern in all trauma injuries. Recent military conflicts have demonstrated the substantial risk of spinal injury for the modern warfighter. Finite element models used to investigate injury mechanisms often fail to examine the effects of variation in geometry or material properties on mechanical behavior. The goals of this study were to model geometric variation for a set of cervical spines, to extend this model to a parametric finite element model, and, as a first step, to validate the parametric model against experimental data for low-loading conditions. Individual finite element models were created using cervical spine (C3–T1) computed tomography data for five male cadavers. Statistical shape modeling (SSM) was used to generate a parametric finite element model incorporating variability of spine geometry, and soft-tissue material property variation was also included. The probabilistic loading response of the parametric model was determined under flexion-extension, axial rotation, and lateral bending and validated by comparison to experimental data. Based on qualitative and quantitative comparison of the experimental loading response and model simulations, we suggest that the model performs adequately under relatively low-level loading conditions in multiple loading directions. In conclusion, SSM methods coupled with finite element analyses within a probabilistic framework, along with the ability to statistically validate the overall model performance, provide innovative and important steps toward describing the differences in vertebral morphology, spinal curvature, and variation in material properties. We suggest that these methods, with additional investigation and validation under injurious loading conditions, will lead to understanding and mitigating the risks of injury in the spine and other musculoskeletal structures. PMID:25506051
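The statistical shape modeling step in this record is, generically, PCA on stacked landmark coordinates: subtract the mean shape, then extract dominant modes of variation. The sketch below (not the authors' pipeline; shapes invented) finds the first mode via simple power iteration.

```python
import math

def mean_shape(shapes):
    """Coordinate-wise mean of a set of flattened landmark vectors."""
    n = len(shapes)
    return [sum(s[i] for s in shapes) / n for i in range(len(shapes[0]))]

def covariance(shapes, mu):
    """Sample covariance matrix of the centered shape vectors."""
    d = len(mu)
    centered = [[s[i] - mu[i] for i in range(d)] for s in shapes]
    return [[sum(c[i] * c[j] for c in centered) / (len(shapes) - 1)
             for j in range(d)] for i in range(d)]

def first_mode(cov, iters=200):
    """Dominant eigenvector of the covariance matrix (power iteration);
    this is the first statistical shape mode."""
    d = len(cov)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

New geometries are then generated as mean shape plus weighted sums of modes, which is what makes the finite element model parametric.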

  15. Ship Air Wake Detection Using a Small Fixed Wing Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Phelps, David M.

    A ship's air wake is dynamically detected using an airborne inertial measurement unit (IMU) and global positioning system (GPS) attached to a fixed wing unmanned aerial system (UAS). The UAS was flown in pre-designated flight paths through the air wake created by an underway 108 ft (32.9 m) research vessel. The instrumented aircraft was used to validate computational fluid dynamic (CFD) simulations of naval ship air wakes. Computer models of the research ship and the fixed wing UAS were generated and gridded using NASA's TetrUSS software. Simulations were run using Kestrel, a Department of Defense CFD code, to validate the physical experimental data collection method. Air wake simulations were run at various relative wind angles and speeds. The fixed wing UAS was subjected to extensive wind tunnel testing to generate a table of aerodynamic coefficients as a function of control surface deflections, angle of attack, and sideslip. The wind tunnel data were compared against similarly structured CFD experiments to validate the grid and model of the fixed wing UAS. Finally, a CFD simulation of the fixed wing UAS flying through the generated wake was completed. Forces on the instrumented aircraft were calculated from the data collected by the IMU. Comparison of experimental and simulation data showed that the fixed wing UAS could detect interactions with the ship air wake.

  16. Validation of Cross Sections for Monte Carlo Simulation of the Photoelectric Effect

    NASA Astrophysics Data System (ADS)

    Han, Min Cheol; Kim, Han Sung; Pia, Maria Grazia; Basaglia, Tullio; Batič, Matej; Hoff, Gabriela; Kim, Chan Hyeong; Saracco, Paolo

    2016-04-01

    Several total and partial photoionization cross section calculations, based on both theoretical and empirical approaches, are quantitatively evaluated with statistical analyses using a large collection of experimental data retrieved from the literature to identify the state of the art for modeling the photoelectric effect in Monte Carlo particle transport. Some of the examined cross section models are available in general purpose Monte Carlo systems, while others have been implemented and subjected to validation tests for the first time to estimate whether they could improve the accuracy of particle transport codes. The validation process identifies Scofield's 1973 non-relativistic calculations, tabulated in the Evaluated Photon Data Library (EPDL), as the one best reproducing experimental measurements of total cross sections. Specialized total cross section models, some of which derive from more recent calculations, do not provide significant improvements. Scofield's non-relativistic calculations are not surpassed regarding the compatibility with experiment of K and L shell photoionization cross sections either, although in a few test cases Ebel's parameterization produces more accurate results close to absorption edges. Modifications to Biggs and Lighthill's parameterization implemented in Geant4 significantly reduce the accuracy of total cross sections at low energies with respect to its original formulation. The scarcity of suitable experimental data hinders a similar extensive analysis for the simulation of the photoelectron angular distribution, which is limited to a qualitative appraisal.
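The model-ranking exercise in this record can be caricatured in a few lines: for each cross-section model, compute a reduced chi-square against the experimental points and their uncertainties, and prefer the model with the smallest value. Model names and data below are invented placeholders, not the paper's actual tabulations.

```python
def reduced_chi2(model_vals, exp_vals, exp_sigmas):
    """Reduced chi-square of model predictions against measurements
    with per-point experimental uncertainties."""
    chi2 = sum(((m - x) / s) ** 2
               for m, x, s in zip(model_vals, exp_vals, exp_sigmas))
    return chi2 / len(exp_vals)

def best_model(models, exp_vals, exp_sigmas):
    """Name of the model with the smallest reduced chi-square."""
    return min(models, key=lambda name: reduced_chi2(models[name], exp_vals, exp_sigmas))
```

The paper's actual analysis is more careful (goodness-of-fit tests and categorical comparisons across energy ranges and shells), but this is the shape of the computation.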

  17. Experimental validation of an analytical kinetic model for edge-localized modes in JET-ITER-like wall

    NASA Astrophysics Data System (ADS)

    Guillemaut, C.; Metzger, C.; Moulton, D.; Heinola, K.; O’Mullane, M.; Balboa, I.; Boom, J.; Matthews, G. F.; Silburn, S.; Solano, E. R.; JET contributors

    2018-06-01

    The design and operation of future fusion devices relying on H-mode plasmas requires reliable modelling of edge-localized modes (ELMs) for precise prediction of divertor target conditions. An extensive experimental validation of simple analytical predictions of the time evolution of target plasma loads during ELMs has been carried out here in more than 70 JET-ITER-like wall H-mode experiments with a wide range of conditions. Comparisons of these analytical predictions with diagnostic measurements of target ion flux density, power density, impact energy and electron temperature during ELMs are presented in this paper and show excellent agreement. The analytical predictions tested here are made with the ‘free-streaming’ kinetic model (FSM) which describes ELMs as a quasi-neutral plasma bunch expanding along the magnetic field lines into the Scrape-Off Layer without collisions. Consequences of the FSM on energy reflection and deposition on divertor targets during ELMs are also discussed.
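The free-streaming picture, a collisionless plasma bunch expanding along field lines, can be mimicked with a toy Monte Carlo (this is a conceptual illustration, not the analytic FSM expressions validated in the paper): sample parallel velocities from a Maxwellian (Gaussian) distribution and histogram arrival times at a target a distance L away.

```python
import random

def arrival_times(n, L, v_th, seed=0):
    """Collisionless transit times t = L / v for particles with Gaussian
    parallel velocities; only particles moving toward the target arrive."""
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        v = rng.gauss(0.0, v_th)
        if v > 0.0:
            times.append(L / v)
    return times

def flux_histogram(times, bins, t_max):
    """Counts of arrivals per uniform time bin up to t_max: a crude
    stand-in for the time evolution of the target particle flux."""
    counts = [0] * bins
    for t in times:
        if t < t_max:
            counts[int(bins * t / t_max)] += 1
    return counts
```

Fast particles arrive first and slow ones trail off, which is qualitatively the asymmetric target-load pulse the FSM predicts during an ELM.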

  18. Survey of computer programs for prediction of crash response and of its experimental validation

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1976-01-01

    The author critically assesses the capabilities of the mathematical and hybrid simulators that predict the post-impact response of transportation vehicles. A strictly rigorous numerical analysis of a phenomenon as complex as crash may fall short in the fidelity of its mathematical simulation; hybrid simulations, which exploit experimentally observed features of deformation, appear more promising. MARC, ANSYS, NONSAP, DYCAST, ACTION, WHAM II and KRASH are among the simulators examined for their ability to predict the post-impact response of vehicles. The review reveals that considerably more analysis capability is desirable than is currently available. NASA's crashworthiness testing program, together with similar programs at other agencies, will generate a large database and will be equally useful in validating new mathematical concepts of nonlinear analysis and in extending other crashworthiness techniques.

  19. Experimental validation of phase-only pre-compensation over 494 m free-space propagation.

    PubMed

    Brady, Aoife; Berlich, René; Leonhard, Nina; Kopf, Teresa; Böttner, Paul; Eberhardt, Ramona; Reinlein, Claudia

    2017-07-15

    It is anticipated that ground-to-geostationary orbit (GEO) laser communication will benefit from pre-compensation of atmospheric turbulence for laser beam propagation through the atmosphere. Theoretical simulations and laboratory experiments have determined its feasibility; extensive free-space experimental validation has, however, yet to be fulfilled. Therefore, we designed and implemented an adaptive optical (AO)-box which pre-compensates an outgoing laser beam (uplink) using the measurements of an incoming beam (downlink). The setup was designed to approximate the baseline scenario over a horizontal test range of 0.5 km and consisted of a ground terminal with the AO-box and a simplified approximation of a satellite terminal. Our results confirmed that we could focus the uplink beam on the satellite terminal using AO under a point-ahead angle of 28 μrad. Furthermore, we demonstrated a considerable increase in the intensity received at the satellite. These results are further testimony to AO pre-compensation being a viable technique to enhance Earth-to-GEO optical communication.

  20. On the comparison of stochastic model predictive control strategies applied to a hydrogen-based microgrid

    NASA Astrophysics Data System (ADS)

    Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.

    2017-03-01

    In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely multi-scenario, tree-based, and chance-constrained model predictive control, is presented. To this end, three predictive controllers were designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as main equipment. The experimental results show significant differences in the behavior of the plant components, mainly in terms of energy use, for each implemented technique. The effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to provide valid criteria for selecting an appropriate stochastic predictive controller.

  1. Simulation of router action on a lathe to test the cutting tool performance in edge-trimming of graphite/epoxy composite

    NASA Astrophysics Data System (ADS)

    Ramulu, M.; Rogers, E.

    1994-04-01

    The predominant machining application with graphite/epoxy composite materials in aerospace industry is peripheral trimming. The computer numerically controlled (CNC) high speed routers required to do edge trimming work are generally scheduled for production work in industry and are not available for extensive cutter testing. Therefore, an experimental method of simulating the conditions of periphery trim using a lathe is developed in this paper. The validity of the test technique will be demonstrated by conducting carbide tool wear tests under dry cutting conditions. The experimental results will be analyzed to characterize the wear behavior of carbide cutting tools in machining the composite materials.

  2. Transport phenomena in solidification processing of functionally graded materials

    NASA Astrophysics Data System (ADS)

    Gao, Juwen

    A combined numerical and experimental study of the transport phenomena during solidification processing of metal matrix composite functionally graded materials (FGMs) is conducted in this work. A multiphase transport model for the solidification of metal-matrix composite FGMs has been developed that accounts for macroscopic particle segregation due to liquid-particle flow and particle-solid interactions. An experimental study has also been conducted to gain physical insight as well as to validate the model. A novel method to measure the particle volume fraction in situ using fiber optic probes is developed for transparent analogue solidification systems. The model is first applied to one-dimensional pure matrix FGM solidification under gravity or a centrifugal field and is extensively validated against the experimental results. The mechanisms for the formation of the particle concentration gradient are identified. Two-dimensional solidification of pure matrix FGM with convection is then studied using the model as well as experiments. The interaction among convection flow, the solidification process and particle transport is demonstrated. The results show the importance of convection in the formation of the particle concentration gradient. Then, simulations for alloy FGM solidification are carried out for unidirectional solidification as well as two-dimensional solidification with convection. The interplay among heat and species transport, convection and particle motion is investigated. Finally, future theoretical and experimental work is outlined.

  3. Dark Zones of Solid Propellant Flames: Critical Assessment and Quantitative Modeling of Experimental Datasets With Analysis of Chemical Pathways and Sensitivities

    DTIC Science & Technology

    2011-01-01

    This report provides an extensive, definitive review critically assessing our current understanding of dark zone (DZ) structure and chemistry, with documented analysis of chemical pathways and sensitivities.

  4. Using Genotype Abundance to Improve Phylogenetic Inference

    PubMed Central

    Mesin, Luka; Victora, Gabriel D; Minin, Vladimir N; Matsen, Frederick A

    2018-01-01

    Abstract Modern biological techniques enable very dense genetic sampling of unfolding evolutionary histories, and thus frequently sample some genotypes multiple times. This motivates strategies to incorporate genotype abundance information in phylogenetic inference. In this article, we synthesize a stochastic process model with standard sequence-based phylogenetic optimality, and show that tree estimation is substantially improved by doing so. Our method is validated with extensive simulations and an experimental single-cell lineage tracing study of germinal center B cell receptor affinity maturation. PMID:29474671

  5. Parametric convergence sensitivity and validation of a finite element model of the human lumbar spine.

    PubMed

    Ayturk, Ugur M; Puttlitz, Christian M

    2011-08-01

    The primary objective of this study was to generate a finite element model of the human lumbar spine (L1-L5), verify mesh convergence for each tissue constituent and perform an extensive validation using both kinematic/kinetic and stress/strain data. Mesh refinement was accomplished via convergence of strain energy density (SED) predictions for each spinal tissue. The converged model was validated based on range of motion, intradiscal pressure, facet force transmission, anterolateral cortical bone strain and anterior longitudinal ligament deformation predictions. Changes in mesh resolution had the biggest impact on SED predictions under axial rotation loading. Nonlinearity of the moment-rotation curves was accurately simulated and the model predictions on the aforementioned parameters were in good agreement with experimental data. The validated and converged model will be utilised to study the effects of degeneration on the lumbar spine biomechanics, as well as to investigate the mechanical underpinning of the contemporary treatment strategies.
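The mesh-convergence logic in this record (refine until the strain energy density prediction stops changing) is a generic loop; a minimal sketch, with the 5% tolerance and SED values invented for illustration:

```python
def is_converged(sed_values, rel_tol=0.05):
    """True when the last refinement changed the predicted strain energy
    density (SED) by less than rel_tol relative to the previous mesh."""
    if len(sed_values) < 2:
        return False
    prev, last = sed_values[-2], sed_values[-1]
    return abs(last - prev) / abs(prev) < rel_tol

def refine_until_converged(sed_per_mesh, rel_tol=0.05):
    """Walk through SED predictions from successively refined meshes and
    return the index of the first converged mesh, or None if none converged."""
    history = []
    for i, sed in enumerate(sed_per_mesh):
        history.append(sed)
        if is_converged(history, rel_tol):
            return i
    return None
```

In practice this check would be run per tissue constituent and per loading mode, since the record notes convergence was most sensitive under axial rotation.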

  6. Validation of a C2-C7 cervical spine finite element model using specimen-specific flexibility data.

    PubMed

    Kallemeyn, Nicole; Gandhi, Anup; Kode, Swathi; Shivanna, Kiran; Smucker, Joseph; Grosland, Nicole

    2010-06-01

    This study presents a specimen-specific C2-C7 cervical spine finite element model that was developed using multiblock meshing techniques. The model was validated using in-house experimental flexibility data obtained from the cadaveric specimen used for mesh development. The C2-C7 specimen was subjected to pure continuous moments up to +/-1.0 N m in flexion, extension, lateral bending, and axial rotation, and the motions at each level were obtained. Additionally, the specimen was divided into C2-C3, C4-C5, and C6-C7 functional spinal units (FSUs), which were tested in the intact state as well as after sequential removal of the interspinous, ligamentum flavum, and capsular ligaments. The finite element model was initially assigned baseline material properties from the literature, then calibrated using the experimental motion data obtained in-house, while utilizing the ranges of material property values reported in the literature. The calibrated model provided good agreement with the nonlinear experimental loading curves, and can be used to further study the response of the cervical spine in various biomechanical investigations. Copyright 2010 IPEM. Published by Elsevier Ltd. All rights reserved.

  7. A novel left heart simulator for the multi-modality characterization of native mitral valve geometry and fluid mechanics.

    PubMed

    Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Yoganathan, Ajit P

    2013-02-01

    Numerical models of the mitral valve have been used to elucidate mitral valve function and mechanics. These models have evolved from simple two-dimensional approximations to complex three-dimensional fully coupled fluid structure interaction models. However, to date these models lack direct one-to-one experimental validation. As computational solvers vary considerably, experimental benchmark data are critically important to ensure model accuracy. In this study, a novel left heart simulator was designed specifically for the validation of numerical mitral valve models. Several distinct experimental techniques were collectively performed to resolve mitral valve geometry and hemodynamics. In particular, micro-computed tomography was used to obtain accurate and high-resolution (39 μm voxel) native valvular anatomy, which included the mitral leaflets, chordae tendinae, and papillary muscles. Three-dimensional echocardiography was used to obtain systolic leaflet geometry. Stereoscopic digital particle image velocimetry provided all three components of fluid velocity through the mitral valve, resolved every 25 ms in the cardiac cycle. A strong central filling jet (V ~ 0.6 m/s) was observed during peak systole with minimal out-of-plane velocities. In addition, physiologic hemodynamic boundary conditions were defined and all data were synchronously acquired through a central trigger. Finally, the simulator is a precisely controlled environment, in which flow conditions and geometry can be systematically prescribed and resultant valvular function and hemodynamics assessed. Thus, this work represents the first comprehensive database of high fidelity experimental data, critical for extensive validation of mitral valve fluid structure interaction simulations.

  8. Mathematical Model Formulation And Validation Of Water And Solute Transport In Whole Hamster Pancreatic Islets

    PubMed Central

    Benson, Charles T.; Critser, John K.

    2014-01-01

    Optimization of cryopreservation protocols for cells and tissues requires accurate models of heat and mass transport. Model selection often depends on the configuration of the tissue. Here, a mathematical and conceptual model of water and solute transport for whole hamster pancreatic islets has been developed and experimentally validated, incorporating fundamental biophysical data from previous studies on individual hamster islet cells while retaining whole-islet structural information. It describes coupled transport of water and solutes through the islet by three methods: intracellularly, intercellularly, and in combination. In particular, we use domain decomposition techniques to couple a transmembrane flux model with an interstitial mass transfer model. The only significant undetermined variable is the cellular surface area in contact with the intercellularly transported solutes, A_is. The model was validated and A_is determined using a 3 × 3 × 3 factorial experimental design blocked for experimental day. Whole islet physical experiments were compared with model predictions at three temperatures, three perfusing solutions, and three islet size groups. A mean of 4.4 islets were compared at each of the 27 experimental conditions and found to correlate with a coefficient of determination of 0.87 ± 0.06 (mean ± S.D.). Only the treatment variable of perfusing solution was found to be significant (p < 0.05). We have devised a model that retains much of the intrinsic geometric configuration of the system, so that fewer laboratory experiments are needed to determine model parameters and thus to develop new optimized cryopreservation protocols. Additionally, extensions to ovarian follicles and other concentric tissue structures may be made. PMID:24950195
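The agreement metric quoted in this record (a coefficient of determination of 0.87 ± 0.06) is the standard R² between observed and predicted values; for reference:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot, where SS_res is
    the residual sum of squares against the predictions and SS_tot the
    total sum of squares about the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```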

  9. A Novel Left Heart Simulator for the Multi-modality Characterization of Native Mitral Valve Geometry and Fluid Mechanics

    PubMed Central

    Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Yoganathan, Ajit P.

    2012-01-01

    Numerical models of the mitral valve have been used to elucidate mitral valve function and mechanics. These models have evolved from simple two-dimensional approximations to complex three-dimensional fully coupled fluid structure interaction models. However, to date these models lack direct one-to-one experimental validation. As computational solvers vary considerably, experimental benchmark data are critically important to ensure model accuracy. In this study, a novel left heart simulator was designed specifically for the validation of numerical mitral valve models. Several distinct experimental techniques were collectively performed to resolve mitral valve geometry and hemodynamics. In particular, micro-computed tomography was used to obtain accurate and high-resolution (39 µm voxel) native valvular anatomy, which included the mitral leaflets, chordae tendinae, and papillary muscles. Three-dimensional echocardiography was used to obtain systolic leaflet geometry for direct comparison of resultant leaflet kinematics. Stereoscopic digital particle image velocimetry provided all three components of fluid velocity through the mitral valve, resolved every 25 ms in the cardiac cycle. A strong central filling jet (V ~ 0.6 m/s) was observed during peak systole, with minimal out-of-plane velocities. In addition, physiologic hemodynamic boundary conditions were defined and all data were synchronously acquired through a central trigger. Finally, the simulator is a precisely controlled environment, in which flow conditions and geometry can be systematically prescribed and resultant valvular function and hemodynamics assessed. Thus, these data represent the first comprehensive database of high fidelity experimental data, critical for extensive validation of mitral valve fluid structure interaction simulations. PMID:22965640

  10. Extension of research data repository system to support direct compute access to biomedical datasets: enhancing Dataverse to support large datasets.

    PubMed

    McKinney, Bill; Meyer, Peter A; Crosas, Mercè; Sliz, Piotr

    2017-01-01

    Access to experimental X-ray diffraction image data is important for validation and reproduction of macromolecular models and indispensable for the development of structural biology processing methods. In response to the evolving needs of the structural biology community, we recently established a diffraction data publication system, the Structural Biology Data Grid (SBDG, data.sbgrid.org), to preserve primary experimental datasets supporting scientific publications. All datasets published through the SBDG are freely available to the research community under a public domain dedication license, with metadata compliant with the DataCite Schema (schema.datacite.org). A proof-of-concept study demonstrated community interest and utility. Publication of large datasets is a challenge shared by several fields, and the SBDG has begun collaborating with the Institute for Quantitative Social Science at Harvard University to extend the Dataverse (dataverse.org) open-source data repository system to structural biology datasets. Several extensions are necessary to support the size and metadata requirements for structural biology datasets. In this paper, we describe one such extension-functionality supporting preservation of file system structure within Dataverse-which is essential for both in-place computation and supporting non-HTTP data transfers. © 2016 New York Academy of Sciences.

  11. Experimental Evidence of Weak Excluded Volume Effects for Nanochannel Confined DNA

    NASA Astrophysics Data System (ADS)

    Gupta, Damini; Miller, Jeremy J.; Muralidhar, Abhiram; Mahshid, Sara; Reisner, Walter; Dorfman, Kevin D.

    In the classical de Gennes picture of weak polymer nanochannel confinement, the polymer contour is envisioned as divided into a series of isometric blobs. Strong excluded volume interactions are present both within a blob and between blobs. In contrast, for semiflexible polymers like DNA, excluded volume interactions are of borderline strength within a blob but appreciable between blobs, giving rise to a chain description consisting of a string of anisometric blobs. We present experimental validation of this subtle effect of excluded volume for DNA nanochannel confinement by performing measurements of variance in chain extension of T4 DNA molecules as a function of effective nanochannel size (305-453 nm). Additionally, we show an approach to systematically reduce the effect of molecular weight dispersity of DNA samples, a typical experimental artifact, by combining confinement spectroscopy with simulations.

  12. Biomechanical changes of the lumbar segment after total disc replacement: Charité®, ProDisc® and Maverick®, using a finite element model study.

    PubMed

    Kim, Ki-Tack; Lee, Sang-Hun; Suk, Kyung-Soo; Lee, Jung-Hee; Jeong, Bi-O

    2010-06-01

    The purpose of this study was to analyze the biomechanical effects of three different constrained types of artificial disc on the implanted and adjacent segments in the lumbar spine using a finite element model (FEM). The created intact model was validated by comparing the flexion-extension response without pre-load with the corresponding results from published experimental studies. The validated intact lumbar model was tested after implantation of three artificial discs at L4-5. Each implanted model was subjected to a combination of a 400 N follower load and 5 Nm of flexion/extension moments. ABAQUS version 6.5 (ABAQUS Inc., Providence, RI, USA) and FEMAP version 8.20 (Electronic Data Systems Corp., Plano, TX, USA) were used for meshing and analysis of the geometry of the intact and implanted models. Under the flexion load, the intersegmental rotation angles of all the implanted models were similar to that of the intact model, but under the extension load, the values were greater than that of the intact model. The facet contact loads of the three implanted models were greater than those observed with the intact model. Under the flexion load, the three types of implanted model at the L4-5 level showed an intersegmental rotation angle similar to that measured with the intact model. Under the extension load, all of the artificial disc implanted models demonstrated an increased extension rotation angle at the operated level (L4-5), resulting in an increased facet contact load compared with the adjacent segments. The increased facet load may lead to facet degeneration.

  13. TMATS/ IHAL/ DDML Schema Validation

    DTIC Science & Technology

    2017-02-01

    The task was to create a method for performing IRIG eXtensible Markup Language (XML) schema validation, as opposed to XML instance document validation (TMATS/IHAL/DDML Schema Validation, RCC 126-17, February 2017). Acronyms defined in the report include DDML (Data Display Markup Language), HUD (heads-up display), iNET (…), and XML (eXtensible Markup Language).
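    Schema validation in this sense means checking the schema documents themselves, not validating instances against them. The sketch below is a generic stdlib approximation, not the RCC method: a first pass checks that a purported XSD parses as XML and has an `xs:schema` root. Full validation against the W3C meta-schema would need a dedicated library such as lxml, and the miniature XSD shown is hypothetical, not one of the actual IRIG schemas.

    ```python
    import xml.etree.ElementTree as ET

    XSD_NS = "http://www.w3.org/2001/XMLSchema"

    def looks_like_schema(xsd_text):
        """First-pass schema-document check: the text must be well-formed
        XML whose root element is xs:schema. Real schema validation would
        additionally check the XSD against the XML Schema meta-schema."""
        try:
            root = ET.fromstring(xsd_text)
        except ET.ParseError:
            return False
        return root.tag == "{%s}schema" % XSD_NS

    # hypothetical miniature XSD for illustration only
    sample_xsd = '<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"/>'
    ```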

  14. Physics based modeling of a series parallel battery pack for asymmetry analysis, predictive control and life extension

    NASA Astrophysics Data System (ADS)

    Ganesan, Nandhini; Basu, Suman; Hariharan, Krishnan S.; Kolake, Subramanya Mayya; Song, Taewon; Yeo, Taejung; Sohn, Dong Kee; Doo, Seokgwang

    2016-08-01

    Lithium-Ion batteries used for electric vehicle applications are subject to large currents and various operation conditions, making battery pack design and life extension a challenging problem. With increasing complexity, modeling and simulation can lead to insights that ensure optimal performance and life extension. In this manuscript, an electrochemical-thermal (ECT) coupled model for a 6 series × 5 parallel pack is developed for Li ion cells with NCA/C electrodes and validated against experimental data. Contribution of the cathode to overall degradation at various operating conditions is assessed. Pack asymmetry is analyzed from a design and an operational perspective. Design based asymmetry leads to a new approach of obtaining the individual cell responses of the pack from an average ECT output. Operational asymmetry is demonstrated in terms of effects of thermal gradients on cycle life, and an efficient model predictive control technique is developed. The concept of a reconfigurable battery pack is studied using detailed simulations that can be used for effective monitoring and extension of battery pack life.

  15. ThermoData Engine (TDE): software implementation of the dynamic data evaluation concept. 9. Extensible thermodynamic constraints for pure compounds and new model developments.

    PubMed

    Diky, Vladimir; Chirico, Robert D; Muzny, Chris D; Kazakov, Andrei F; Kroenlein, Kenneth; Magee, Joseph W; Abdulagatov, Ilmutdin; Frenkel, Michael

    2013-12-23

    ThermoData Engine (TDE) is the first full-scale software implementation of the dynamic data evaluation concept, as reported in this journal. The present article describes the background and implementation for new additions in latest release of TDE. Advances are in the areas of program architecture and quality improvement for automatic property evaluations, particularly for pure compounds. It is shown that selection of appropriate program architecture supports improvement of the quality of the on-demand property evaluations through application of a readily extensible collection of constraints. The basis and implementation for other enhancements to TDE are described briefly. Other enhancements include the following: (1) implementation of model-validity enforcement for specific equations that can provide unphysical results if unconstrained, (2) newly refined group-contribution parameters for estimation of enthalpies of formation for pure compounds containing carbon, hydrogen, and oxygen, (3) implementation of an enhanced group-contribution method (NIST-Modified UNIFAC) in TDE for improved estimation of phase-equilibrium properties for binary mixtures, (4) tools for mutual validation of ideal-gas properties derived through statistical calculations and those derived independently through combination of experimental thermodynamic results, (5) improvements in program reliability and function that stem directly from the recent redesign of the TRC-SOURCE Data Archival System for experimental property values, and (6) implementation of the Peng-Robinson equation of state for binary mixtures, which allows for critical evaluation of mixtures involving supercritical components. Planned future developments are summarized.

  16. Comparative analysis of international standards for the fatigue testing of posterior spinal fixation systems: the importance of preload in ISO 12189.

    PubMed

    La Barbera, Luigi; Ottardi, Claudia; Villa, Tomaso

    2015-10-01

    Preclinical evaluation of the mechanical reliability of fixation devices is a mandatory activity before their introduction into the market. There are two standardized protocols for preclinical testing of spinal implants. The American Society for Testing Materials (ASTM) recommends the F1717 standard, which describes a vertebrectomy condition that is relatively simple to implement, whereas the International Organization for Standardization (ISO) suggests the 12189 standard, which describes a more complex physiological anterior support-based setup. Moreover, ASTM F1717 is nowadays well established, whereas ISO 12189 has received little attention: A few studies tried to accurately describe the ISO experimental procedure through numeric models, but these studies totally neglect the recommended precompression step. This study aimed to build up a reliable, validated numeric model capable of describing the stress on the rods of a spinal fixator assembled according to the ISO 12189 standard procedure. Such a model would more adequately represent the in vitro testing condition. This study used finite element (FE) simulations and experimental validation testing. An FE model of the ISO setup was built to calculate the stress on the rods. The simulation was validated by comparison with experimental strain gauge measurements. The same fixator has been previously virtually mounted in an L2-L4 FE model of the lumbar spine, and stresses in the rods were calculated when the spine was subjected to physiological forces and moments. The comparison between the FE predictions and experimental measurements is in good agreement, thus confirming the suitability of the FE method to evaluate the stresses in the device. The initial precompression induces a significant extension of the assembled construct. 
As the applied load increases, the initial extension is gradually compensated, so that at peak load the rods are bent in flexion: The final stress value predicted is thus reduced to about 50%, if compared with the previous model where the precompression was not considered. Neglecting the initial preload due to the assembly of the overall construct according to ISO 12189 standard could lead to an overestimation of the stress on the rods up to 50%. To correctly describe the state of stress on the posterior spinal fixator, tested according to the ISO procedure, it is important to take into account the initial preload due to the assembly of the overall construct. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Shear Strength and Cracking Process of Non-persistent Jointed Rocks: An Extensive Experimental Investigation

    NASA Astrophysics Data System (ADS)

    Asadizadeh, Mostafa; Moosavi, Mahdi; Hossaini, Mohammad Farouq; Masoumi, Hossein

    2018-02-01

    In this paper, a number of artificial rock specimens with two parallel (stepped and coplanar) non-persistent joints were subjected to direct shearing. The effects of bridge length (L), bridge angle (γ), joint roughness coefficient (JRC) and normal stress (σn) on the shear strength and cracking process of non-persistent jointed rock were studied extensively. The experimental program was designed based on the Taguchi method, and the validity of the resulting data was assessed using analysis of variance. The results revealed that σn and γ have the maximum and minimum effects on shear strength, respectively. Also, an increase in L from 10 to 60 mm led to a decrease in shear strength, while high levels of JRC and σn led to the initiation of tensile cracks due to asperity interlocking. Such tensile cracks are known as "interlocking cracks", which normally initiate from the asperity and then propagate toward the specimen boundaries. Finally, the cracking process of the specimens was classified into three categories, namely tensile cracking, shear cracking and combined or mixed mode tensile-shear cracking.

  18. Accurate green water loads calculation using naval hydro pack

    NASA Astrophysics Data System (ADS)

    Jasak, H.; Gatin, I.; Vukčević, V.

    2017-12-01

    An extensive verification and validation of the Finite Volume based CFD software Naval Hydro, built on foam-extend, is presented in this paper for green water loads. A two-phase numerical model with advanced methods for treating the free surface is employed. Pressure loads on the horizontal deck of a Floating Production Storage and Offloading (FPSO) vessel model are compared to experimental results from [1] for three incident regular waves. Pressure peaks and integrals of pressure in time are measured at ten different locations on the deck for each case. Pressure peaks and integrals are evaluated as average values over the measured incident wave periods, where periodic uncertainty is assessed for both numerical and experimental results. A spatial and temporal discretization refinement study is performed, providing numerical discretization uncertainties.

  19. Measurement of Laser Weld Temperatures for 3D Model Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dagel, Daryl; Grossetete, Grant; Maccallum, Danny O.

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  20. Model Predictions and Observed Performance of JWST's Cryogenic Position Metrology System

    NASA Technical Reports Server (NTRS)

    Lunt, Sharon R.; Rhodes, David; DiAntonio, Andrew; Boland, John; Wells, Conrad; Gigliotti, Trevis; Johanning, Gary

    2016-01-01

    The James Webb Space Telescope cryogenic testing requires measurement systems that both achieve a very high degree of accuracy and can function in that environment. Close-range photogrammetry was identified as meeting those criteria. Testing the capability of a close-range photogrammetric system prior to its existence is a challenging problem. Computer simulation was chosen over building a scaled mock-up to allow for increased flexibility in testing various configurations. Extensive validation work was done to ensure that the actual as-built system met accuracy and repeatability requirements. The simulated image data predicted the uncertainty in measurement to be within specification, and this prediction was borne out experimentally. Uncertainty at all levels was verified experimentally to be less than 0.1 millimeters.

  1. Comparison of LIDAR system performance for alternative single-mode receiver architectures: modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Toliver, Paul; Ozdur, Ibrahim; Agarwal, Anjali; Woodward, T. K.

    2013-05-01

    In this paper, we describe a detailed performance comparison of alternative single-pixel, single-mode LIDAR architectures including (i) linear-mode APD-based direct-detection, (ii) optically-preamplified PIN receiver, (iii) PIN-based coherent-detection, and (iv) Geiger-mode single-photon-APD counting. Such a comparison is useful when considering next-generation LIDAR on a chip, which would allow one to leverage extensive waveguide-based structures and processing elements developed for telecom and apply them to small form-factor sensing applications. Models of four LIDAR transmit and receive systems are described in detail, which include not only the dominant sources of receiver noise commonly assumed in each of the four detection limits, but also additional noise terms present in realistic implementations. These receiver models are validated through the analysis of detection statistics collected from an experimental LIDAR testbed. The receiver is reconfigurable into four modes of operation, while transmit waveforms and channel characteristics are held constant. The use of a diffuse hard target highlights the importance of including speckle noise terms in the overall system analysis. All measurements are done at 1550 nm, which offers multiple system advantages including less stringent eye safety requirements and compatibility with available telecom components, optical amplification, and photonic integration. Ultimately, the experimentally-validated detection statistics can be used as part of an end-to-end system model for projecting rate, range, and resolution performance limits and tradeoffs of alternative integrated LIDAR architectures.

  2. Accurate identification of motor unit discharge patterns from high-density surface EMG and validation with a novel signal-based performance metric

    NASA Astrophysics Data System (ADS)

    Holobar, A.; Minetto, M. A.; Farina, D.

    2014-02-01

    Objective. A signal-based metric for assessment of accuracy of motor unit (MU) identification from high-density surface electromyograms (EMG) is introduced. This metric, so-called pulse-to-noise-ratio (PNR), is computationally efficient, does not require any additional experimental costs and can be applied to every MU that is identified by the previously developed convolution kernel compensation technique. Approach. The analytical derivation of the newly introduced metric is provided, along with its extensive experimental validation on both synthetic and experimental surface EMG signals with signal-to-noise ratios ranging from 0 to 20 dB and muscle contraction forces from 5% to 70% of the maximum voluntary contraction. Main results. In all the experimental and simulated signals, the newly introduced metric correlated significantly with both sensitivity and false alarm rate in identification of MU discharges. Practically all the MUs with PNR > 30 dB exhibited sensitivity >90% and false alarm rates <2%. Therefore, a threshold of 30 dB in PNR can be used as a simple method for selecting only reliably decomposed units. Significance. The newly introduced metric is considered a robust and reliable indicator of accuracy of MU identification. The study also shows that high-density surface EMG can be reliably decomposed at contraction forces as high as 70% of the maximum.
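    The PNR metric above is, as commonly defined in the decomposition literature, the ratio (in dB) of the mean squared amplitude of the estimated innervation train at the identified discharge instants to that elsewhere. The sketch below follows that commonly cited definition as an assumption; it is not the authors' exact implementation, and the toy train is fabricated for illustration.

    ```python
    import math

    def pnr_db(train, pulse_indices):
        # Pulse-to-noise ratio in decibels: mean squared amplitude at the
        # identified discharge instants versus everywhere else. Hedged:
        # this follows the commonly cited definition of PNR, not
        # necessarily the exact implementation in the study above.
        pulse_set = set(pulse_indices)
        pulses = [train[i] ** 2 for i in pulse_set]
        noise = [v ** 2 for i, v in enumerate(train) if i not in pulse_set]
        ratio = (sum(pulses) / len(pulses)) / (sum(noise) / len(noise))
        return 10.0 * math.log10(ratio)

    # toy innervation train: unit pulses over a low-amplitude baseline
    train = [0.01] * 100
    for i in (10, 50, 90):
        train[i] = 1.0
    ```

    On this toy train the metric comes out at 40 dB, above the 30 dB threshold the study proposes for selecting reliably decomposed units.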

  3. Turbulence Modeling Validation, Testing, and Development

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.; Huang, P. G.; Coakley, T. J.

    1997-01-01

    The primary objective of this work is to provide accurate numerical solutions for selected flow fields and to compare and evaluate the performance of selected turbulence models with experimental results. Four popular turbulence models have been tested and validated against experimental data of ten turbulent flows. The models are: (1) the two-equation k-omega model of Wilcox, (2) the two-equation k-epsilon model of Launder and Sharma, (3) the two-equation k-omega/k-epsilon SST model of Menter, and (4) the one-equation model of Spalart and Allmaras. The flows investigated are five free shear flows consisting of a mixing layer, a round jet, a plane jet, a plane wake, and a compressible mixing layer; and five boundary layer flows consisting of an incompressible flat plate, a Mach 5 adiabatic flat plate, a separated boundary layer, an axisymmetric shock-wave/boundary layer interaction, and an RAE 2822 transonic airfoil. The experimental data for these flows are well established and have been extensively used in model developments. The results are shown in the following four sections: Part A describes the equations of motion and boundary conditions; Part B describes the model equations, constants, parameters, boundary conditions, and numerical implementation; and Parts C and D describe the experimental data and the performance of the models in the free-shear flows and the boundary layer flows, respectively.

  4. Note: Design of FPGA based system identification module with application to atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Ghosal, Sayan; Pradhan, Sourav; Salapaka, Murti

    2018-05-01

    The science of system identification is widely utilized in modeling input-output relationships of diverse systems. In this article, we report a field programmable gate array (FPGA) based implementation of a real-time system identification algorithm which employs forgetting factors and bias compensation techniques. The FPGA module is employed to estimate the mechanical properties of surfaces of materials at the nano-scale with an atomic force microscope (AFM). The FPGA module is user friendly and can be interfaced with commercially available AFMs. Extensive simulation and experimental results validate the design.
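    A standard member of the forgetting-factor family mentioned in this record is recursive least squares (RLS) with exponential forgetting. The scalar sketch below is a generic illustration of that technique, not the FPGA implementation from the note; bias compensation is omitted, and the plant gain of 2.0 is an arbitrary example.

    ```python
    def rls_identify(samples, lam=0.95):
        """Scalar recursive least squares with forgetting factor lam,
        estimating a in y = a * u from streaming (u, y) samples."""
        a, P = 0.0, 1000.0  # initial estimate and large initial covariance
        for u, y in samples:
            k = P * u / (lam + u * P * u)   # Kalman-style gain
            a += k * (y - a * u)            # correct estimate by residual
            P = (P - k * u * P) / lam       # covariance update with forgetting
        return a

    # noise-free data from a hypothetical plant with true gain 2.0
    data = [(float(u), 2.0 * u) for u in range(1, 11)]
    ```

    The forgetting factor lam < 1 discounts old samples exponentially, which is what lets such estimators track slowly varying surface properties in real time.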

  5. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.
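    The multiple-kernel idea in this record is to learn a weighted combination of base kernels rather than select a single kernel by cross-validation. The sketch below shows only the combination step with fixed illustrative weights; in the paper, the weights (and the regularization parameter) are what the semidefinite program would optimize, and the base kernels chosen here are generic examples.

    ```python
    import math

    def rbf(x, z, gamma=1.0):
        # Gaussian (RBF) base kernel
        return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

    def poly(x, z, degree=2):
        # inhomogeneous polynomial base kernel
        return (1.0 + sum(a * b for a, b in zip(x, z))) ** degree

    def combined_kernel(x, z, weights):
        # convex combination of base kernels; the weights are fixed here
        # but are the quantities the SDP formulation would learn
        return weights[0] * rbf(x, z) + weights[1] * poly(x, z)

    X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    w = (0.5, 0.5)
    K = [[combined_kernel(xi, xj, w) for xj in X] for xi in X]
    ```

    A nonnegative combination of positive semidefinite kernels is itself positive semidefinite, so the combined Gram matrix K remains a valid kernel for the LS-SVM.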

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    English, Shawn Allen; Nelson, Stacy Michelle; Briggs, Timothy

    Presented is a model verification and validation effort using low-velocity impact (LVI) of carbon fiber reinforced polymer laminate experiments. A flat cylindrical indenter impacts the laminate with enough energy to produce delamination, matrix cracks and fiber breaks. Included in the experimental efforts are ultrasonic scans of the damage for qualitative validation of the models. However, the primary quantitative metrics of validation are the force time history measured through the instrumented indenter and the initial and final velocities. The simulations, which are run on Sandia's Sierra finite element codes, consist of all physics and material parameters of importance as determined by a sensitivity analysis conducted on the LVI simulation. A novel orthotropic damage and failure constitutive model that is capable of predicting progressive composite damage and failure is described in detail, and material properties are measured, estimated from micromechanics or optimized through calibration. A thorough verification and calibration to the accompanying experiments are presented. Special emphasis is given to the four-point bend experiment. For all simulations of interest, the mesh and material behavior is verified through extensive convergence studies. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution which is then compared to experimental output. The result is a quantifiable confidence in material characterization and model physics when simulating this phenomenon in structures of interest.

  7. Validation Results for LEWICE 2.0

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Rutkowski, Adam

    1999-01-01

    A research project is underway at NASA Lewis to produce a computer code which can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report will present results from version 2.0 of this code, which is called LEWICE. This version differs from previous releases due to its robustness and its ability to reproduce results accurately for different spacing and time step criteria across computing platforms. It also differs in the extensive amount of effort undertaken to compare the results in a quantified manner against the database of ice shapes which have been generated in the NASA Lewis Icing Research Tunnel (IRT). The results of the shape comparisons are analyzed to determine the range of meteorological conditions under which LEWICE 2.0 is within the experimental repeatability. This comparison shows that the average variation of LEWICE 2.0 from the experimental data is 7.2% while the overall variability of the experimental data is 2.5%.

  8. Mathematical model formulation and validation of water and solute transport in whole hamster pancreatic islets.

    PubMed

    Benson, James D; Benson, Charles T; Critser, John K

    2014-08-01

    Optimization of cryopreservation protocols for cells and tissues requires accurate models of heat and mass transport. Model selection often depends on the configuration of the tissue. Here, a mathematical and conceptual model of water and solute transport for whole hamster pancreatic islets has been developed and experimentally validated incorporating fundamental biophysical data from previous studies on individual hamster islet cells while retaining whole-islet structural information. It describes coupled transport of water and solutes through the islet by three methods: intracellularly, intercellularly, and in combination. In particular we use domain decomposition techniques to couple a transmembrane flux model with an interstitial mass transfer model. The only significant undetermined variable is the cellular surface area which is in contact with the intercellularly transported solutes, A_is. The model was validated and A_is determined using a 3×3 factorial experimental design blocked for experimental day. Whole islet physical experiments were compared with model predictions at three temperatures, three perfusing solutions, and three islet size groups. A mean of 4.4 islets were compared at each of the 27 experimental conditions and found to correlate with a coefficient of determination of 0.87±0.06 (mean ± SD). Only the treatment variable of perfusing solution was found to be significant (p<0.05). We have devised a model that retains much of the intrinsic geometric configuration of the system, and thus fewer laboratory experiments are needed to determine model parameters and thus to develop new optimized cryopreservation protocols. Additionally, extensions to ovarian follicles and other concentric tissue structures may be made. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Navier-Stokes Computations of Longitudinal Forces and Moments for a Blended Wing Body

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul; Biedron, Robert T.; Park, Michael A.; Fremaux, C. Michael; Vicroy, Dan D.

    2005-01-01

    The object of this paper is to investigate the feasibility of applying CFD methods to aerodynamic analyses for aircraft stability and control. The integrated aerodynamic parameters used in stability and control, however, are not necessarily those extensively validated in the state of the art CFD technology. Hence, an exploratory study of such applications and the comparison of the solutions to available experimental data will help to assess the validity of the current computation methods. In addition, this study will also examine issues related to wind tunnel measurements such as measurement uncertainty and support interference effects. Several sets of experimental data from the NASA Langley 14x22-Foot Subsonic Tunnel and the National Transonic Facility are presented. Two Navier-Stokes flow solvers, one using structured meshes and the other unstructured meshes, were used to compute longitudinal static stability derivatives for an advanced Blended Wing Body configuration over a wide range of angles of attack. The computations were performed for two different Reynolds numbers and the resulting forces and moments are compared with the above mentioned wind tunnel data.

  10. Navier-Stokes Computations of Longitudinal Forces and Moments for a Blended Wing Body

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul; Biedron, Robert T.; Park, Michael A.; Fremaux, C. Michael; Vicroy, Dan D.

    2004-01-01

    The object of this paper is to investigate the feasibility of applying CFD methods to aerodynamic analyses for aircraft stability and control. The integrated aerodynamic parameters used in stability and control, however, are not necessarily those extensively validated in the state of the art CFD technology. Hence, an exploratory study of such applications and the comparison of the solutions to available experimental data will help to assess the validity of the current computation methods. In addition, this study will also examine issues related to wind tunnel measurements such as measurement uncertainty and support interference effects. Several sets of experimental data from the NASA Langley 14x22-Foot Subsonic Tunnel and the National Transonic Facility are presented. Two Navier-Stokes flow solvers, one using structured meshes and the other unstructured meshes, were used to compute longitudinal static stability derivatives for an advanced Blended Wing Body configuration over a wide range of angles of attack. The computations were performed for two different Reynolds numbers and the resulting forces and moments are compared with the above mentioned wind tunnel data.

  11. A Validated Open-Source Multisolver Fourth-Generation Composite Femur Model.

    PubMed

    MacLeod, Alisdair R; Rose, Hannah; Gill, Harinderjit S

    2016-12-01

    Synthetic biomechanical test specimens are frequently used for preclinical evaluation of implant performance, often in combination with numerical modeling, such as finite-element (FE) analysis. Commercial and freely available FE packages are widely used, with three FE packages in particular gaining popularity: abaqus (Dassault Systèmes, Johnston, RI), ansys (ANSYS, Inc., Canonsburg, PA), and febio (University of Utah, Salt Lake City, UT). To the best of our knowledge, no study has yet made a comparison of these three commonly used solvers. Additionally, despite the femur being the most extensively studied bone in the body, no freely available validated model exists. The primary aim of the study was to conduct a comparison of mesh convergence and strain prediction between the three solvers (abaqus, ansys, and febio) and to provide validated open-source models of a fourth-generation composite femur for use with all three FE packages. Second, we evaluated the geometric variability around the femoral neck region of the composite femurs. Experimental testing was conducted using fourth-generation Sawbones® composite femurs instrumented with strain gauges at four locations. A generic FE model and four specimen-specific FE models were created from CT scans. The study found that the three solvers produced excellent agreement, with strain predictions being within an average of 3.0% for all the solvers (r2 > 0.99) and 1.4% for the two commercial codes. The average of the root mean squared error against the experimental results was 134.5% (r2 = 0.29) for the generic model and 13.8% (r2 = 0.96) for the specimen-specific models. It was found that composite femurs had variations in cortical thickness around the neck of the femur of up to 48.4%. For the first time, an experimentally validated finite-element model of the femur is presented for use in three solvers. This model is freely available online along with all the supporting validation data.

  12. Direct adaptive control of a PUMA 560 industrial robot

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Lee, Thomas; Delpech, Michel

    1989-01-01

    The implementation and experimental validation of a new direct adaptive control scheme on a PUMA 560 industrial robot is described. The testbed facility consists of a Unimation PUMA 560 six-jointed robot and controller, and a DEC MicroVAX II computer which hosts the Robot Control C Library software. The control algorithm is implemented on the MicroVAX which acts as a digital controller for the PUMA robot, and the Unimation controller is effectively bypassed and used merely as an I/O device to interface the MicroVAX to the joint motors. The control algorithm for each robot joint consists of an auxiliary signal generated by a constant-gain Proportional plus Integral plus Derivative (PID) controller, and an adaptive position-velocity (PD) feedback controller with adjustable gains. The adaptive independent joint controllers compensate for the inter-joint couplings and achieve accurate trajectory tracking without the need for the complex dynamic model and parameter values of the robot. Extensive experimental results on PUMA joint control are presented to confirm the feasibility of the proposed scheme, in spite of strong interactions between joint motions. Experimental results validate the capabilities of the proposed control scheme. The control scheme is extremely simple and computationally very fast for concurrent processing with high sampling rates.
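    The per-joint controller described above combines a constant-gain PID auxiliary signal with an adaptive PD loop. The minimal discrete PID sketch below illustrates only the constant-gain part against a hypothetical single-integrator plant; the paper's adaptive gain-adjustment law is not reproduced, and the gains and time step are arbitrary illustrative choices.

    ```python
    class PID:
        """Discrete constant-gain PID controller (the adaptive PD loop
        with adjustable gains described in the record is omitted)."""

        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def update(self, setpoint, measured):
            err = setpoint - measured
            self.integral += err * self.dt             # accumulate error
            deriv = (err - self.prev_err) / self.dt    # backward difference
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    # drive a hypothetical single-integrator joint (x' = u) to a setpoint
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
    x = 0.0
    for _ in range(3000):
        u = pid.update(1.0, x)
        x += u * pid.dt
    ```

    Even this simple loop shows why per-joint feedback can track a reference without a dynamic model of the plant, which is the property the adaptive scheme exploits on the full robot.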

  13. Monte Carlo-based validation of neutronic methodology for EBR-II analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liaw, J.R.; Finck, P.J.

    1993-01-01

    The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes. A high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory, and it is used in calculations for day-to-day reactor operations, whereas the VARIANT code is based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC²-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.

  14. Role of multiple cusps in tooth fracture.

    PubMed

    Barani, Amir; Bush, Mark B; Lawn, Brian R

    2014-07-01

    The role of multiple cusps in the biomechanics of human molar tooth fracture is analysed. A model with four cusps at the bite surface replaces the single dome structure used in previous simulations. Extended finite element modelling, with provision to embed longitudinal cracks into the enamel walls, enables full analysis of crack propagation from initial extension to final failure. The cracks propagate longitudinally around the enamel side walls from starter cracks placed either at the top surface (radial cracks) or from the tooth base (margin cracks). A feature of the crack evolution is its stability, meaning that extension occurs steadily with increasing applied force. Predictions from the model are validated by comparison with experimental data from earlier publications, in which crack development was followed in situ during occlusal loading of extracted human molars. The results show substantial increase in critical forces to produce longitudinal fractures with number of cuspal contacts, indicating a capacity for an individual tooth to spread the load during mastication. It is argued that explicit critical force equations derived in previous studies remain valid, at the least as a means for comparing the capacity for teeth of different dimensions to sustain high bite forces. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Review and assessment of turbulence models for hypersonic flows

    NASA Astrophysics Data System (ADS)

    Roy, Christopher J.; Blottner, Frederick G.

    2006-10-01

    Accurate aerodynamic prediction is critical for the design and optimization of hypersonic vehicles. Turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating for these systems. The first goal of this article is to update the previous comprehensive review of hypersonic shock/turbulent boundary-layer interaction experiments published in 1991 by Settles and Dodson (Hypersonic shock/boundary-layer interaction database. NASA CR 177577, 1991). In their review, Settles and Dodson developed a methodology for assessing experiments appropriate for turbulence model validation and critically surveyed the existing hypersonic experiments. We limit the scope of our current effort by considering only two-dimensional (2D)/axisymmetric flows in the hypersonic flow regime where calorically perfect gas models are appropriate. We extend the prior database of recommended hypersonic experiments (on four 2D and two 3D shock-interaction geometries) by adding three new geometries. The first two geometries, the flat plate/cylinder and the sharp cone, are canonical, zero-pressure gradient flows which are amenable to theory-based correlations, and these correlations are discussed in detail. The third geometry added is the 2D shock impinging on a turbulent flat plate boundary layer. The current 2D hypersonic database for shock-interaction flows thus consists of nine experiments on five different geometries. The second goal of this study is to review and assess the validation usage of various turbulence models on the existing experimental database. Here we limit the scope to one- and two-equation turbulence models where integration to the wall is used (i.e., we omit studies involving wall functions). A methodology for validating turbulence models is given, followed by an extensive evaluation of the turbulence models on the current hypersonic experimental database. 
A total of 18 one- and two-equation turbulence models are reviewed, and results of turbulence model assessments for the six models that have been extensively applied to the hypersonic validation database are compiled and presented in graphical form. While some of the turbulence models do provide reasonable predictions for the surface pressure, the predictions for surface heat flux are generally poor, and often in error by a factor of four or more. In the vast majority of the turbulence model validation studies we review, the authors fail to adequately address the numerical accuracy of the simulations (i.e., discretization and iterative error) and the sensitivities of the model predictions to freestream turbulence quantities or near-wall y+ mesh spacing. We recommend new hypersonic experiments be conducted which (1) measure not only surface quantities but also mean and fluctuating quantities in the interaction region and (2) provide careful estimates of both random experimental uncertainties and correlated bias errors for the measured quantities and freestream conditions. For the turbulence models, we recommend that a wide range of turbulence models (including newer models) be re-examined on the current hypersonic experimental database, including the more recent experiments. Any future turbulence model validation efforts should carefully assess the numerical accuracy and model sensitivities. In addition, model corrections (e.g., compressibility corrections) should be carefully examined for their effects on a standard, low-speed validation database. Finally, as new experiments or direct numerical simulation data become available with information on mean and fluctuating quantities, they should be used to improve the turbulence models and thus increase their predictive capability.
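One standard way to address the discretization-error concern raised above is a grid-convergence study with Richardson extrapolation, estimating the observed order of accuracy from solutions on three systematically refined grids. The values below are hypothetical, not from the review:

```python
import math

def richardson(f1, f2, f3, r):
    """Estimate the observed order of accuracy p and the extrapolated
    (grid-converged) value from solutions on three grids.
    f1 is the finest-grid solution, f3 the coarsest; r is the constant
    grid refinement ratio between successive grids."""
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)
    return p, f_exact

# Hypothetical surface heat-flux values on three grids with r = 2
p, f_exact = richardson(41.0, 44.0, 56.0, 2.0)
```

Here the solutions converge at second order and the extrapolated value lies below the finest-grid result, giving a concrete discretization-error estimate of about 1 unit on the finest grid.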

  16. Design and development of indirectly heated solid cathode for strip type electron gun.

    PubMed

    Maiti, Namita; Mukherjee, S; Kumar, Bhunesh; Barve, U D; Suryawanshi, V B; Das, A K

    2010-01-01

    Design analysis of a high power indirectly heated solid cathode (for a 200 kW, 45 kV, and 270° bent strip type electron gun) has been presented. The design approach consists of simulation followed by extensive experimentation with different cathode configurations. The preferred cathode is of trapezoidal section (8 × 4 × 2 mm³) with an emitting area of 110 × 4 mm², made of tantalum operating at about 2500 K. The solid cathode at the operating temperature of 2500 K generated a well-defined electron beam. Electromagnetic and thermomechanical simulation is used to optimize the shape of the beam. Thermal modeling has also been used to analyze the temperature and stress distribution on the electrodes. The simulation results are validated by experimental measurement.

  17. Experimental investigation of a supersonic swept ramp injector using laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy J.; Hollo, Steven D.; Mcdaniel, James C.

    1990-01-01

    Planar measurements of injectant mole fraction and temperature have been conducted in a nonreacting supersonic combustor configured with underexpanded injection in the base of a swept ramp. The temperature measurements were conducted with a Mach 2 test section inlet in streamwise planes perpendicular to the test section wall on which the ramp was mounted. Injectant concentration measurements, conducted in crossflow planes with both Mach 2 and Mach 2.9 free-stream conditions, dramatically illustrate the domination of the mixing process by the streamwise vorticity generated by the ramp. These measurements, conducted using a nonintrusive optical technique (laser-induced iodine fluorescence), provide an accurate and extensive experimental database for the validation of computational fluid dynamics codes for the calculation of highly three-dimensional supersonic combustor flow fields.

  18. Genetic programming-based mathematical modeling of influence of weather parameters in BOD5 removal by Lemna minor.

    PubMed

    Chandrasekaran, Sivapragasam; Sankararajan, Vanitha; Neelakandhan, Nampoothiri; Ram Kumar, Mahalakshmi

    2017-11-04

    This study, through extensive experiments and mathematical modeling, reveals that, in addition to retention time and wastewater temperature (T_w), atmospheric parameters also play an important role in the effective functioning of an aquatic macrophyte-based treatment system. The duckweed species Lemna minor is considered in this study. It is observed that the combined effect of atmospheric temperature (T_atm), wind speed (U_w), and relative humidity (RH) can be reflected through one parameter, namely the "apparent temperature" (T_a). A total of eight different models are considered based on combinations of the input parameters, and the best mathematical model is identified and validated through a new experimental set-up outside the modeling period. The validation results are highly encouraging. Genetic programming (GP)-based models are found to reveal a deeper understanding of the wetland process.
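The "apparent temperature" combining T_atm, U_w, and RH can be defined in several ways, and the abstract does not give the paper's formula. The sketch below uses Steadman's widely cited non-radiative approximation purely as an illustrative stand-in:

```python
import math

def apparent_temperature(t_atm, wind_speed, rel_humidity):
    """Steadman-style apparent temperature (non-radiative form).
    t_atm in deg C, wind_speed in m/s, rel_humidity in percent.
    NOTE: this is one common approximation, assumed here for
    illustration; it is not taken from the paper."""
    # Water vapour pressure (hPa) from temperature and relative humidity
    e = (rel_humidity / 100.0) * 6.105 * math.exp(17.27 * t_atm / (237.7 + t_atm))
    return t_atm + 0.33 * e - 0.70 * wind_speed - 4.00
```

At 25 °C and 50% RH, humidity raises the apparent temperature while wind lowers it, which is the qualitative behaviour the study's combined parameter is meant to capture.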

  19. Protocol for the validation of microbiological control of cellular products according to German regulators' recommendations--Boon and Bane for the manufacturer.

    PubMed

    Störmer, M; Radojska, S; Hos, N J; Gathof, B S

    2015-04-01

    In order to generate standardized conditions for the microbiological control of HPCs, the PEI recommended defined steps for validation that will lead to extensive validation as shown in this study, where a possible validation principle for the microbiological control of allogeneic SCPs is presented. Although it could be demonstrated that automated culture improves microbial safety of cellular products, the requirement for extensive validation studies needs to be considered. © 2014 International Society of Blood Transfusion.

  20. 8 CFR 1214.1 - Review of requirements for admission, extension, and maintenance of status.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) of the Act. Upon application for admission, the alien shall present a valid passport and valid visa... present a passport only if requested to do so by the Service. The passport of an alien applying for... conditions of his or her admission. The passport of an alien applying for extension of stay shall be valid at...

  1. 8 CFR 1214.1 - Review of requirements for admission, extension, and maintenance of status.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) of the Act. Upon application for admission, the alien shall present a valid passport and valid visa... present a passport only if requested to do so by the Service. The passport of an alien applying for... conditions of his or her admission. The passport of an alien applying for extension of stay shall be valid at...

  2. 8 CFR 1214.1 - Review of requirements for admission, extension, and maintenance of status.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) of the Act. Upon application for admission, the alien shall present a valid passport and valid visa... present a passport only if requested to do so by the Service. The passport of an alien applying for... conditions of his or her admission. The passport of an alien applying for extension of stay shall be valid at...

  3. 8 CFR 1214.1 - Review of requirements for admission, extension, and maintenance of status.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) of the Act. Upon application for admission, the alien shall present a valid passport and valid visa... present a passport only if requested to do so by the Service. The passport of an alien applying for... conditions of his or her admission. The passport of an alien applying for extension of stay shall be valid at...

  4. 8 CFR 1214.1 - Review of requirements for admission, extension, and maintenance of status.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) of the Act. Upon application for admission, the alien shall present a valid passport and valid visa... present a passport only if requested to do so by the Service. The passport of an alien applying for... conditions of his or her admission. The passport of an alien applying for extension of stay shall be valid at...

  5. Decoding Dynamic Brain Patterns from Evoked Responses: A Tutorial on Multivariate Pattern Analysis Applied to Time Series Neuroimaging Data.

    PubMed

    Grootswagers, Tijl; Wardle, Susan G; Carlson, Thomas A

    2017-04-01

    Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to "decode" different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
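The pipeline described above (per-timepoint classification of dynamic brain patterns with cross-validation) can be sketched with synthetic data and a minimal nearest-centroid classifier; none of the names or numbers below reflect the tutorial's actual code or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "MEG" data: trials x channels x timepoints, two stimulus classes.
n_trials, n_chan, n_time = 40, 10, 5
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_chan, n_time))
X[y == 1, :, 2:] += 1.0          # class difference emerges from timepoint 2

def decode_timecourse(X, y, n_folds=5):
    """Cross-validated decoding accuracy per timepoint using a
    minimal nearest-centroid classifier (a stand-in for the many
    classifier choices the tutorial discusses)."""
    folds = np.arange(len(y)) % n_folds
    acc = np.zeros(X.shape[2])
    for t in range(X.shape[2]):
        correct = 0
        for f in range(n_folds):
            train, test = folds != f, folds == f
            c0 = X[train & (y == 0), :, t].mean(axis=0)
            c1 = X[train & (y == 1), :, t].mean(axis=0)
            d0 = np.linalg.norm(X[test, :, t] - c0, axis=1)
            d1 = np.linalg.norm(X[test, :, t] - c1, axis=1)
            correct += np.sum((d1 < d0) == (y[test] == 1))
        acc[t] = correct / len(y)
    return acc

acc = decode_timecourse(X, y)    # near chance early, above chance later
```

The resulting accuracy time course is at chance before the class difference appears and rises well above chance afterwards, which is the basic "decoding over time" result the tutorial builds on.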

  6. Extension of research data repository system to support direct compute access to biomedical datasets: enhancing Dataverse to support large datasets

    PubMed Central

    McKinney, Bill; Meyer, Peter A.; Crosas, Mercè; Sliz, Piotr

    2016-01-01

    Access to experimental X-ray diffraction image data is important for validation and reproduction of macromolecular models and indispensable for the development of structural biology processing methods. In response to the evolving needs of the structural biology community, we recently established a diffraction data publication system, the Structural Biology Data Grid (SBDG, data.sbgrid.org), to preserve primary experimental datasets supporting scientific publications. All datasets published through the SBDG are freely available to the research community under a public domain dedication license, with metadata compliant with the DataCite Schema (schema.datacite.org). A proof-of-concept study demonstrated community interest and utility. Publication of large datasets is a challenge shared by several fields, and the SBDG has begun collaborating with the Institute for Quantitative Social Science at Harvard University to extend the Dataverse (dataverse.org) open-source data repository system to structural biology datasets. Several extensions are necessary to support the size and metadata requirements for structural biology datasets. In this paper, we describe one such extension—functionality supporting preservation of filesystem structure within Dataverse—which is essential for both in-place computation and supporting non-http data transfers. PMID:27862010

  7. A method for the modelling of porous and solid wind tunnel walls in computational fluid dynamics codes

    NASA Technical Reports Server (NTRS)

    Beutner, Thomas John

    1993-01-01

    Porous wall wind tunnels have been used for several decades and have proven effective in reducing wall interference effects in both low speed and transonic testing. They allow for testing through Mach 1, reduce blockage effects and reduce shock wave reflections in the test section. Their usefulness in developing computational fluid dynamics (CFD) codes has been limited, however, by the difficulties associated with modelling the effect of a porous wall in CFD codes. Previous approaches to modelling porous wall effects have depended either upon a simplified linear boundary condition, which has proven inadequate, or upon detailed measurements of the normal velocity near the wall, which require extensive wind tunnel time. The current work was initiated in an effort to find a simple, accurate method of modelling a porous wall boundary condition in CFD codes. The development of such a method would allow data from porous wall wind tunnels to be used more readily in validating CFD codes. This would be beneficial when transonic validations are desired, or when large models are used to achieve high Reynolds numbers in testing. A computational and experimental study was undertaken to investigate a new method of modelling solid and porous wall boundary conditions in CFD codes. The method utilized experimental measurements at the walls to develop a flow field solution based on the method of singularities. This flow field solution was then imposed as a pressure boundary condition in a CFD simulation of the internal flow field. The effectiveness of this method in describing the effect of porosity changes on the wall was investigated. Also, the effectiveness of this method when only sparse experimental measurements were available has been investigated. The current work demonstrated this approach for low speed flows and compared the results with experimental data obtained from a heavily instrumented variable porosity test section. 
The approach developed was simple, computationally inexpensive, and did not require extensive or intrusive measurements of the boundary conditions during the wind tunnel test. It may be applied to both solid and porous wall wind tunnel tests.
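The idea of fitting a method-of-singularities flow solution to sparse wall measurements can be sketched as a linear least-squares problem for the source strengths. The geometry, kernel, and "measured" values below are hypothetical, not the instrumented test-section data:

```python
import numpy as np

# Hypothetical 2-D sketch: fit strengths of point sources (the "method of
# singularities") to sparse measured wall-normal velocities.
src_x = np.linspace(0.0, 1.0, 6)        # source positions along a line y = 0.1
src_y = np.full_like(src_x, 0.1)

def normal_velocity(x, y, sx, sy):
    """y-component of velocity induced at (x, y) by a unit-strength
    2-D point source located at (sx, sy)."""
    r2 = (x - sx) ** 2 + (y - sy) ** 2
    return (y - sy) / (2.0 * np.pi * r2)

# Sparse "measurements" on the wall y = 0, fabricated for this sketch
meas_x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
meas_v = np.array([0.01, 0.02, 0.015, 0.02, 0.01])

# Influence matrix: A[i, j] = normal velocity at point i per unit strength j
A = normal_velocity(meas_x[:, None], 0.0, src_x[None, :], src_y[None, :])
strengths, *_ = np.linalg.lstsq(A, meas_v, rcond=None)

# The fitted singularity field reproduces the measured boundary data and
# can then be evaluated anywhere to supply a boundary condition
fit_v = A @ strengths
```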

  8. An operational modal analysis method in frequency and spatial domain

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Zhang, Lingmi; Tamura, Yukio

    2005-12-01

    A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectrum density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
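The core of the FSDD approach, singular value decomposition of the output PSD matrix at a frequency line to separate the signal space from the noise space, can be sketched as follows; the mode shape and noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cross-power spectral density (PSD) matrix at one frequency line:
# a single dominant mode (a rank-one contribution) plus low-level noise.
phi = np.array([0.2, 0.5, 1.0, 0.5, 0.2])        # hypothetical mode shape
G = 50.0 * np.outer(phi, phi) + 0.1 * np.diag(rng.uniform(0.5, 1.5, 5))

# SVD separates signal space (first singular vector) from noise space
U, s, Vt = np.linalg.svd(G)
mode_estimate = U[:, 0] * np.sign(U[2, 0])       # fix sign for comparison
mode_true = phi / np.linalg.norm(phi)
```

The large gap between the first and second singular values indicates a single dominant mode at this line, and the first singular vector recovers its shape, which is the mode-indicator behaviour the CMIF/FSDD methods exploit.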

  9. Crack layer morphology and toughness characterization in steels

    NASA Technical Reports Server (NTRS)

    Chudnovsky, A.; Bessendorf, M.

    1983-01-01

    Both macro and micro studies of crack layer propagation are presented. The crack extension resistance parameter R_1, based on the morphological study of microdefects, is introduced. Experimental study of the history-dependent nature of G_c supports the representation of G_c as the product of the specific enthalpy of damage (a material constant) and R_1; the latter accounts for the history dependence. The observation of nonmonotonic crack growth under monotonic changes of J, as well as the statistical features of the critical energy release rate (the variance of G_c), indicates the validity of the proposed damage characterization.

  10. Molecular nonlinear dynamics and protein thermal uncertainty quantification

    PubMed Central

    Xia, Kelin; Wei, Guo-Wei

    2014-01-01

    This work introduces molecular nonlinear dynamics (MND) as a new approach for describing protein folding and aggregation. By using a model system, we show that the MND of disordered proteins is chaotic while that of folded proteins exhibits intrinsically low dimensional manifolds (ILDMs). The stability of ILDMs is found to strongly correlate with protein energies. We propose a novel method for protein thermal uncertainty quantification based on persistently invariant ILDMs. Extensive comparison with experimental data and the state-of-the-art methods in the field validate the proposed new method for protein B-factor prediction. PMID:24697365

  11. Implementation of an experimental program to investigate the performance characteristics of OMEGA navigation

    NASA Technical Reports Server (NTRS)

    Baxa, E. G., Jr.

    1974-01-01

    A theoretical formulation of differential and composite OMEGA error is presented to establish hypotheses about the functional relationships between various parameters and OMEGA navigational errors. Computer software developed to provide for extensive statistical analysis of the phase data is described. Results from the regression analysis used to conduct parameter sensitivity studies on differential OMEGA error tend to validate the theoretically based hypothesis concerning the relationship between uncorrected differential OMEGA error and receiver separation range and azimuth. Limited results of measurement of receiver repeatability error and line of position measurement error are also presented.

  12. Modal analysis of the human neck in vivo as a criterion for crash test dummy evaluation

    NASA Astrophysics Data System (ADS)

    Willinger, R.; Bourdet, N.; Fischer, R.; Le Gall, F.

    2005-10-01

    Low speed rear impact remains an acute automotive safety problem because of a lack of knowledge of the mechanical behaviour of the human neck early after impact. Poorly validated mathematical models of the human neck or crash test dummy necks make it difficult to optimize automotive seats and head rests. In this study we have conducted an experimental and theoretical modal analysis of the human head-neck system in the sagittal plane. The method has allowed us to identify the mechanical properties of the neck and to validate a mathematical model in the frequency domain. The extracted modal characteristics consist of a first natural frequency at 1.3±0.1 Hz associated with head flexion-extension motion and a second mode at 8±0.7 Hz associated with antero-posterior translation of the head, also called retraction motion. Based on these new validation parameters we have been able to compare the human and crash test dummy frequency response functions and to evaluate their biofidelity. Three head-neck systems of current test dummies dedicated for use in rear-end car crash accident investigations have been evaluated in the frequency domain. We did not consider any to be acceptable, either because of the excessive rigidity of their flexion-extension mode or because they poorly reproduce the head translation mode. In addition to dummy evaluation, this study provides new insight into injury mechanisms when a given natural frequency can be linked to a specific neck deformation.
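A two-mode head-neck system of this kind is often idealized as a two-degree-of-freedom lumped model whose undamped natural frequencies follow from a generalized eigenproblem. The masses and stiffnesses below are illustrative placeholders, not the properties identified in the study; they are merely chosen so the two frequencies fall in the same range as the reported modes:

```python
import numpy as np

# Illustrative 2-DOF lumped head-neck model (placeholder parameters,
# NOT the paper's identified values).
M = np.diag([4.5, 4.5])                      # kg: head mass for each motion
K = np.array([[400.0, -100.0],
              [-100.0, 12000.0]])            # N/m: coupled stiffness matrix

# Undamped natural frequencies from the eigenproblem K x = w^2 M x
w2 = np.linalg.eigvals(np.linalg.solve(M, K))
freqs_hz = np.sort(np.sqrt(w2.real)) / (2 * np.pi)
```

With these placeholder values the model yields a low-frequency mode near 1.5 Hz and a stiffer mode near 8 Hz, i.e., the same separation of flexion-extension and retraction modes the study identifies.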

  13. The lateral distribution of extensive air showers produced by cosmic rays above 10^19 eV as measured by water-Čerenkov detectors

    NASA Astrophysics Data System (ADS)

    Coy, R. N.; Cunningham, G.; Pryke, C. L.; Watson, A. A.

    1997-03-01

    Measurements of the lateral distribution function (ldf) of Extensive Air Showers (EAS) as recorded by the array of water-Čerenkov detectors at Haverah Park are described, and accurate experimental parameterizations expressing the mean ldf for 2 × 10^17 eV < E < 4 × 10^18 eV, 50 m < r < 700 m, and θ < 45° are given. An extrapolation of these relations to the regime E ≥ 10^19 eV and r > 700 m is described: extrapolation in this energy domain appears valid, and an approximate correction term is given for the larger core distances. The results of recent Monte Carlo simulations of shower development and detector behavior are compared to the parameterized ldf. The agreement is good, increasing confidence that these simulations may be trusted as design tools for the Auger project, a proposed 'next generation' detector system.
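Water-Čerenkov lateral distributions of the general Haverah Park form ρ(r) = k·r^−(η + r/4000) can be evaluated and extrapolated as sketched below; k and η here are placeholder values, not the paper's fitted parameterization:

```python
import numpy as np

def ldf(r, k=100.0, eta=2.2):
    """Lateral distribution of the general Haverah Park form
    rho(r) = k * r^-(eta + r/4000), with r in metres.
    k and eta are illustrative placeholders, not fitted values."""
    return k * r ** -(eta + r / 4000.0)

# Densities at core distances inside the fitted range and beyond it
r = np.array([50.0, 200.0, 700.0, 1000.0])
rho = ldf(r)
```

Note how the effective slope η + r/4000 steepens with core distance, which is why a simple extrapolation beyond 700 m needs the approximate correction term the paper supplies.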

  14. An Extension of the Chi-Square Procedure for Non-NORMAL Statistics, with Application to Solar Neutrino Data

    NASA Astrophysics Data System (ADS)

    Sturrock, P. A.

    2008-01-01

    Using the chi-square statistic, one may conveniently test whether a series of measurements of a variable are consistent with a constant value. However, that test is predicated on the assumption that the appropriate probability distribution function (pdf) is normal in form. This requirement is usually not satisfied by experimental measurements of the solar neutrino flux. This article presents an extension of the chi-square procedure that is valid for any form of the pdf. This procedure is applied to the GALLEX-GNO dataset, and it is shown that the results are in good agreement with the results of Monte Carlo simulations. Whereas application of the standard chi-square test to symmetrized data yields evidence significant at the 1% level for variability of the solar neutrino flux, application of the extended chi-square test to the unsymmetrized data yields only weak evidence (significant at the 4% level) of variability.
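The essence of the extension, calibrating the chi-square statistic by Monte Carlo simulation rather than assuming a normal pdf, can be sketched as follows; the skewed "measurements" are synthetic, not the GALLEX-GNO data:

```python
import numpy as np

rng = np.random.default_rng(2)

def chi2_stat(x, sigma):
    """Weighted chi-square-like statistic for consistency of a series
    of measurements with a constant value (the weighted mean)."""
    w = 1.0 / sigma**2
    mean = np.sum(w * x) / np.sum(w)
    return np.sum(w * (x - mean) ** 2)

# Measurements with a skewed (non-normal) pdf: the standard chi-square
# distribution no longer applies, so calibrate the statistic by Monte Carlo.
sigma = np.full(10, 1.0)
data = rng.exponential(1.0, 10)              # synthetic skewed measurements
observed = chi2_stat(data, sigma)

null = np.array([chi2_stat(rng.exponential(1.0, 10), sigma)
                 for _ in range(2000)])      # statistic under a constant flux
p_value = np.mean(null >= observed)          # significance without normality
```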

  15. Control research in the NASA high-alpha technology program

    NASA Technical Reports Server (NTRS)

    Gilbert, William P.; Nguyen, Luat T.; Gera, Joseph

    1990-01-01

    NASA is conducting a focused technology program, known as the High-Angle-of-Attack Technology Program, to accelerate the development of flight-validated technology applicable to the design of fighters with superior stall and post-stall characteristics and agility. A carefully integrated effort is underway combining wind tunnel testing, analytical predictions, piloted simulation, and full-scale flight research. A modified F-18 aircraft has been extensively instrumented for use as the NASA High-Angle-of-Attack Research Vehicle used for flight verification of new methods and concepts. This program stresses the importance of providing improved aircraft control capabilities both by powered control (such as thrust-vectoring) and by innovative aerodynamic control concepts. The program is accomplishing extensive coordinated ground and flight testing to assess and improve available experimental and analytical methods and to develop new concepts for enhanced aerodynamics and for effective control, guidance, and cockpit displays essential for effective pilot utilization of the increased agility provided.

  16. Prediction of miRNA targets.

    PubMed

    Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis

    2015-01-01

    Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.

  17. Are implicit self-esteem measures valid for assessing individual and cultural differences?

    PubMed

    Falk, Carl F; Heine, Steven J; Takemura, Kosuke; Zhang, Cathy X J; Hsu, Chih-Wei

    2015-02-01

    Our research utilized two popular theoretical conceptualizations of implicit self-esteem: 1) implicit self-esteem as a global automatic reaction to the self; and 2) implicit self-esteem as a context/domain specific construct. Under this framework, we present an extensive search for implicit self-esteem measure validity among different cultural groups (Study 1) and under several experimental manipulations (Study 2). In Study 1, Euro-Canadians (N = 107), Asian-Canadians (N = 187), and Japanese (N = 112) completed a battery of implicit self-esteem, explicit self-esteem, and criterion measures. Included implicit self-esteem measures were either popular or provided methodological improvements upon older methods. Criterion measures were sampled from previous research on implicit self-esteem and included self-report and independent ratings. In Study 2, Americans (N = 582) completed a shorter battery of these same types of measures under either a control condition, an explicit prime meant to activate the self-concept in a particular context, or a prime meant to activate self-competence related implicit attitudes. Across both studies, explicit self-esteem measures far outperformed implicit self-esteem measures in all cultural groups and under all experimental manipulations. Implicit self-esteem measures are not valid for individual or cross-cultural comparisons. We speculate that individuals may not form implicit associations with the self as an attitudinal object. © 2013 Wiley Periodicals, Inc.

  18. Extending quantum mechanics entails extending special relativity

    NASA Astrophysics Data System (ADS)

    Aravinda, S.; Srikanth, R.

    2016-05-01

    The complementarity between signaling and randomness in any communicated resource that can simulate singlet statistics is generalized by relaxing the assumption of free will in the choice of measurement settings. We show how to construct an ontological extension for quantum mechanics (QM) through the oblivious embedding of a sound simulation protocol in a Newtonian spacetime. Minkowski or other intermediate spacetimes are ruled out as the locus of the embedding by virtue of hidden influence inequalities. The complementarity transferred from a simulation to the extension unifies a number of results about quantum non-locality, and implies that special relativity has a different significance for the ontological model and for the operational theory it reproduces. Only the latter, being experimentally accessible, is required to be Lorentz covariant. There may be certain Lorentz non-covariant elements at the ontological level, but they will be inaccessible at the operational level in a valid extension. Certain arguments against the extendability of QM, due to Conway and Kochen (2009) and Colbeck and Renner (2012), are attributed to their assumption that the spacetime at the ontological level has Minkowski causal structure.

  19. Numerical modelling in friction lap joining of aluminium alloy and carbon-fiber-reinforced-plastic sheets

    NASA Astrophysics Data System (ADS)

    Das, A.; Bang, H. S.; Bang, H. S.

    2018-05-01

    Multi-material combinations of aluminium alloy and carbon-fiber-reinforced plastics (CFRP) have gained attention in the automotive and aerospace industries as a means to enhance the fuel efficiency and strength-to-weight ratio of components. Various limitations of laser beam welding, adhesive bonding and mechanical fasteners make these processes inefficient for joining metal and CFRP sheets. Friction lap joining is an alternative choice for the same. Comprehensive studies of friction lap joining of aluminium to CFRP sheets are essential but scarce in the literature. The present work reports a combined theoretical and experimental study of the joining of AA5052 and CFRP sheets using the friction lap joining process. A three-dimensional finite-element-based heat transfer model is developed to compute the temperature fields and thermal cycles. The computed results are validated extensively against the corresponding experimentally measured results.

  20. New Reactor Physics Benchmark Data in the March 2012 Edition of the IRPhEP Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; J. Blair Briggs; Jim Gulliford

    2012-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications. The numerous experiments that have been performed worldwide represent a large investment of infrastructure, expertise, and cost, and are valuable resources of data for present and future research. These valuable assets provide the basis for recording, development, and validation of methods. If the experimental data are lost, the high cost to repeat many of these measurements may be prohibitive. The purpose of the IRPhEP is to provide an extensively peer-reviewed set of reactor physics-related integral data that can be used by reactor designers and safety analysts to validate the analytical tools used to design next-generation reactors and establish the safety basis for operation of these reactors. Contributors from around the world collaborate in the evaluation and review of selected benchmark experiments for inclusion in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [1]. Several new evaluations have been prepared for inclusion in the March 2012 edition of the IRPhEP Handbook.

  1. A laboratory validation study of the time-lapse oscillatory pumping test for leakage detection in geological repositories

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Lu, Jiemin; Islam, Akand

    2017-05-01

    Geologic repositories are extensively used for disposing of byproducts in the mineral and energy industries. The safety and reliability of these repositories are a primary concern to environmental regulators and the public. The time-lapse oscillatory pumping test (OPT) has been introduced recently as a pressure-based technique for detecting potential leakage in geologic repositories. By routinely conducting OPT at a number of pulsing frequencies, an operator may identify potential repository anomalies in the frequency domain, alleviating the ambiguity caused by reservoir noise and improving the signal-to-noise ratio. Building on previous theoretical and field studies, this work performed a series of laboratory experiments to validate the concept of time-lapse OPT using a custom-made, stainless steel tank under relatively high pressures. The experimental configuration simulates a miniature geologic storage repository consisting of three layers (i.e., injection zone, caprock, and above-zone aquifer). Results show that leakage in the injection zone led to deviations in the power spectrum of the observed pressure data, the amplitude of which increased with decreasing pulsing frequency. The experimental results are further analyzed by developing a 3D flow model, with which the model parameters are estimated through frequency-domain inversion.
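The frequency-domain detection idea can be sketched simply: pulse at a known frequency, then compare the spectral amplitude at that frequency between a baseline survey and a later survey. All signal parameters below are invented for illustration, not taken from the experiment.

```python
import numpy as np

# Conceptual sketch of time-lapse OPT leak detection in the frequency domain.
fs, f_pulse, n = 100.0, 2.0, 4000          # sample rate (Hz), pulsing freq, samples
t = np.arange(n) / fs
rng = np.random.default_rng(0)

def spectrum_amplitude(signal, freq):
    """Amplitude of the FFT bin nearest `freq` (one-sided, |X_k| / n)."""
    spec = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

# Baseline survey vs a later survey where leakage damps the pressure response.
baseline = np.sin(2 * np.pi * f_pulse * t) + 0.1 * rng.standard_normal(n)
leaking = 0.7 * np.sin(2 * np.pi * f_pulse * t) + 0.1 * rng.standard_normal(n)

a0 = spectrum_amplitude(baseline, f_pulse)
a1 = spectrum_amplitude(leaking, f_pulse)
print(a1 < a0)   # leakage appears as a deviation at the pulsing frequency
```

The pulsing frequency is chosen so that an integer number of cycles fits the record, avoiding spectral leakage in the comparison.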

  2. Quantification of Methane and Ammonia Emissions in a Naturally Ventilated Barn by Using Defined Criteria to Calculate Emission Rates.

    PubMed

    Schmithausen, Alexander J; Schiefler, Inga; Trimborn, Manfred; Gerlach, Katrin; Südekum, Karl-Heinz; Pries, Martin; Büscher, Wolfgang

    2018-05-16

    Extensive experimentation on individual animals in respiration chambers has already been carried out to evaluate the potential of dietary changes and opportunities to mitigate CH₄ emissions from ruminants. Although it is difficult to determine the air exchange rate of open barn spaces, measurements at the herd level should provide similarly reliable and robust results. The primary objectives of this study were (1) to define a validity range (data classification criteria, DCC) for the variables of wind velocity and wind direction during long-term measurements at barn level; and (2) to apply this validity range to a feeding trial in a naturally cross-flow ventilated dairy barn. The application of the DCC permitted quantification of CH₄ and NH₃ emissions during a feeding trial consisting of four periods. Differences between the control group (no supplement) and the experimental group fed a ration supplemented with condensed Acacia mearnsii tannins (CT) became apparent. Notably, CT concentrations of 1% and 3% of ration dry matter did not reduce CH₄ emissions. In contrast, NH₃ emissions decreased by 34.5% when 3% CT was supplemented. The data confirm that quantification of trace gases in a naturally ventilated barn at the herd level is possible.
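A data classification criterion of this kind amounts to keeping only the measurement intervals whose wind conditions fall inside a validity range. The sketch below illustrates the filtering step; the thresholds and records are placeholders, not values from the study.

```python
# Minimal sketch of data classification criteria (DCC) filtering.
# Thresholds are placeholders (assumed), not the study's values.
V_MIN, V_MAX = 0.5, 6.0        # valid wind velocity range, m/s
DIR_MIN, DIR_MAX = 200, 280    # valid wind direction sector, degrees

records = [
    {"v": 1.2, "dir": 240, "ch4": 310.0},   # valid cross-flow conditions
    {"v": 0.1, "dir": 245, "ch4": 290.0},   # too calm -> rejected
    {"v": 2.5, "dir": 90,  "ch4": 305.0},   # wrong sector -> rejected
    {"v": 3.0, "dir": 260, "ch4": 322.0},   # valid
]

def passes_dcc(rec):
    """True if the interval's wind conditions lie inside the validity range."""
    return V_MIN <= rec["v"] <= V_MAX and DIR_MIN <= rec["dir"] <= DIR_MAX

valid = [r for r in records if passes_dcc(r)]
mean_ch4 = sum(r["ch4"] for r in valid) / len(valid)
print(len(valid), mean_ch4)   # 2 316.0
```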

  3. An experimental investigation of the structural dynamics of a torsionally soft rotor in vacuum

    NASA Technical Reports Server (NTRS)

    Srinivasan, A. V.; Cutts, D. G.; Shu, H. T.

    1986-01-01

    An extensive data base of structural dynamic characteristics has been generated from an experimental program conducted on a torsionally soft two-bladed model helicopter rotor system. Measurements of vibratory strains for five modes of vibration were made at twenty-one locations on the two blades at speeds varying from 0 to 1000 RPM and for several combinations of precone, droop and flexure stiffness. Tests were conducted in vacuum under carefully controlled conditions using a unique excitation device with a system of piezoelectric crystals bonded to the blade surface near the root. Frequencies, strain mode shapes and damping values are extracted from the time histories and can be used to validate structural dynamics codes. The dynamics of the system are such that there is a clear tendency for the first torsion and second flap modes to couple within the speed range considered. Strain mode shapes vary significantly with speed and configuration. This feature is important in the calculation of aeroelastic instabilities. The tension axis tests confirmed that the modulus-weighted centroid for the nonhomogeneous airfoil is slightly off the mass centroid and validated previous static tests done to determine location of the tension axis.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Jade; Nobrega, R. Paul; Schwantes, Christian

    The dynamics of globular proteins can be described in terms of transitions between a folded native state and less-populated intermediates, or excited states, which can play critical roles in both protein folding and function. Excited states are by definition transient species, and therefore are difficult to characterize using current experimental techniques. We report an atomistic model of the excited state ensemble of a stabilized mutant of the extensively studied flavodoxin fold protein CheY. We employed a hybrid simulation and experimental approach in which an aggregate 42 milliseconds of all-atom molecular dynamics were used as an informative prior for the structure of the excited state ensemble. The resulting prior was then refined against small-angle X-ray scattering (SAXS) data employing an established method (EROS). The most striking feature of the resulting excited state ensemble was an unstructured N-terminus stabilized by non-native contacts in a conformation that is topologically simpler than the native state. Using these results, we then predict incisive single-molecule FRET experiments as a means of model validation. Our study demonstrates the paradigm of uniting simulation and experiment in a statistical model to study the structure of protein excited states and rationally design validating experiments.

  5. Sustained prediction ability of net analyte preprocessing methods using reduced calibration sets. Theoretical and experimental study involving the spectrophotometric analysis of multicomponent mixtures.

    PubMed

    Goicoechea, H C; Olivieri, A C

    2001-07-01

    A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size regarding the multivariate simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.
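Selecting a working spectral range by minimising a predicted error sum of squares over a moving window can be sketched with ordinary leave-one-out cross-validation: slide a window across the channels, compute the PRESS of a linear calibration in each window, and keep the window with the lowest value. The synthetic spectra below are invented for illustration, and plain least squares stands in for the NAP and PLS calibrations of the paper.

```python
import numpy as np

# Sketch of moving-window selection by minimising leave-one-out PRESS.
# Synthetic "spectra": only channels 10-19 carry analyte signal.
rng = np.random.default_rng(1)
n_samples, n_channels = 20, 40
conc = rng.uniform(0.0, 1.0, n_samples)                # analyte concentration
spectra = 0.01 * rng.standard_normal((n_samples, n_channels))
spectra[:, 10:20] += np.outer(conc, np.ones(10))       # informative region

def press(X, y):
    """Leave-one-out predicted residual error sum of squares for least squares."""
    err = 0.0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        coef, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        err += (X[i] @ coef - y[i]) ** 2
    return err

width = 10
scores = [press(spectra[:, s:s + width], conc) for s in range(n_channels - width + 1)]
best = int(np.argmin(scores))
print(best)   # the lowest-PRESS window should overlap the informative region
```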

  6. Segregating photoelastic particles in free-surface granular flows

    NASA Astrophysics Data System (ADS)

    Thomas, Amalia; Vriend, Nathalie; Environmental; Industrial Fluid Dynamics Team

    2017-11-01

    We present results from a novel experimental set-up creating 2D avalanches of photoelastic discs. Two distinct hoppers supply either monodisperse or bidisperse particles at adjustable flow rates into a 2-meter-long, narrow acrylic chute inclined at 20°. For 20-40 seconds the avalanche maintains a steady state that accelerates and thins downstream. The chute basal roughness is variable, allowing for different flow profiles. Using a set of polarizers and a high-speed camera, we visualize and quantify the forces due to dynamic interactions between the discs using photoelastic theory. Velocity and density profiles are derived from particle tracking at different distances from the discharge point and are coarse-grained to obtain continuous fields. With access to both force information and dynamical properties via particle tracking, we can experimentally validate existing mu(I) and non-local rheologies. As an extension, we probe the effect of granular segregation in bimodal mixtures by using the two separate inflow hoppers. We derive the state of segregation along the avalanche channel and measure the segregation velocities of each species. This provides insight into, and a unique validation of, the fundamental physical processes that drive segregation in avalanching geometries.

  7. Radiation dominated acoustophoresis driven by surface acoustic waves.

    PubMed

    Guo, Jinhong; Kang, Yuejun; Ai, Ye

    2015-10-01

    Acoustophoresis-based particle manipulation in microfluidics has gained increasing attention in recent years. Despite the fact that experimental studies have been extensively performed to demonstrate this technique for various microfluidic applications, numerical simulation of acoustophoresis driven by surface acoustic waves (SAWs) has remained largely unexplored. In this work, a numerical model taking into account the acoustic-piezoelectric interaction was developed to simulate the generation of a standing surface acoustic wave (SSAW) field and predict the acoustic pressure field in the liquid. Acoustic radiation dominated particle tracing was performed to simulate acoustophoresis of particles of different sizes undergoing a SSAW field. A microfluidic device composed of two interdigital transducers (IDTs) for SAW generation and a microfluidic channel was fabricated for experimental validation. Numerical simulations could well capture the focusing of particles to the pressure nodes seen in the experimental observations. Further comparison of particle trajectories demonstrated considerable quantitative agreement between numerical simulations and experimental results, with the applied voltage as a fitting parameter. Particle switching was also demonstrated using the fabricated device, which could be further developed as an active particle sorting device. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Experimental studies of Micro- and Nano-grained UO₂: Grain Growth Behavior, Surface Morphology, and Fracture Toughness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, Yinbin; Mo, Kun; Jamison, Laura M.

    This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next-generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize the experimental efforts in FY16, including the following important experiments: (1) in-situ grain growth measurement of nano-grained UO₂; (2) investigation of surface morphology in micro-grained UO₂; (3) nano-indentation experiments on nano- and micro-grained UO₂. The highlight of this year is that we have successfully demonstrated the capability to measure grain size development in situ while maintaining the stoichiometry of nano-grained UO₂ materials; the experiment uses synchrotron X-ray diffraction, for the first time, to measure the grain growth behavior of UO₂ in situ.

  9. Roadmap to clinical use of gold nanoparticles for radiosensitization

    PubMed Central

    Schuemann, J.; Berbeco, R.; Chithrani, B. D.; Cho, S.; Kumar, R.; McMahon, S.; Sridhar, S.; Krishnan, S.

    2015-01-01

    The past decade has seen a dramatic increase in interest in the use of Gold Nanoparticles (GNPs) as radiation sensitizers for radiotherapy. This interest was initially driven by their strong absorption of ionizing radiation and the resulting ability to increase dose deposited within target volumes even at relatively low concentrations. These early observations are supported by extensive experimental validation, showing GNPs’ efficacy at sensitizing tumors in both in vitro and in vivo systems to a range of types of ionizing radiation, including kilovoltage and megavoltage X-rays as well as charged particles. Despite this experimental validation, there has been limited translation of GNP-mediated radiosensitization to a clinical setting. One of the key challenges in this area is the wide range of experimental systems that have been investigated, spanning a range of particle sizes, shapes and preparations. As a result, mechanisms of uptake and radiosensitization have remained difficult to clearly identify. This has proven a significant impediment to the identification of optimal GNP formulations which strike a balance among their radiosensitizing properties, their specificity to the tumors, their biocompatibility, and their imageability in vivo. This white paper reviews the current state of knowledge in each of the areas concerning the use of GNPs as radiosensitizers, and outlines the steps which will be required to advance GNP-enhanced radiation therapy from their current pre-clinical setting to clinical trials and eventual routine usage. PMID:26700713

  10. Stochastic Time Models of Syllable Structure

    PubMed Central

    Shaw, Jason A.; Gafos, Adamantios I.

    2015-01-01

    Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure, proving resilient to multiple sources of variability in the experimental data, including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153

  11. Experimental and analytical studies on the vibration serviceability of long-span prestressed concrete floor

    NASA Astrophysics Data System (ADS)

    Cao, Liang; Liu, Jiepeng; Li, Jiang; Zhang, Ruizhi

    2018-04-01

    An extensive experimental and theoretical research study was undertaken to study the vibration serviceability of a long-span prestressed concrete floor system to be used in the lounge of a major airport. Specifically, jumping impact tests were carried out to obtain the floor's modal parameters, followed by an analysis of the distribution of peak accelerations. Running tests were also performed to capture the acceleration responses. The prestressed concrete floor was found to have a low fundamental natural frequency (≈ 8.86 Hz) and an average modal damping ratio of ≈ 2.17%. A coefficient β_rp is proposed for convenient calculation of the maximum root-mean-square acceleration under running. In the theoretical analysis, the prestressed concrete floor under running excitation is treated as a two-span continuous anisotropic rectangular plate with simply supported edges. The calculated analytical results (natural frequencies and root-mean-square acceleration) agree well with the experimental ones. The analytical approach is thus validated.
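The maximum root-mean-square acceleration used as the response measure here is conventionally computed over a moving time window. The sketch below shows that reduction step on a synthetic, decaying footfall-like record; the signal, window length, and amplitudes are illustrative, not data from the tests.

```python
import numpy as np

# Sketch of the maximum running RMS acceleration from a time history.
fs = 200                                   # sampling rate, Hz (assumed)
t = np.arange(0, 10.0, 1.0 / fs)
# Decaying response around the floor's ~8.86 Hz fundamental frequency
acc = 0.05 * np.exp(-0.3 * t) * np.sin(2 * np.pi * 8.86 * t)

def max_running_rms(a, fs, window_s=1.0):
    """Largest RMS over a moving window of length window_s seconds."""
    w = int(window_s * fs)
    return max(np.sqrt(np.mean(a[i:i + w] ** 2)) for i in range(len(a) - w + 1))

a_rms = max_running_rms(acc, fs)
print(a_rms < 0.05)   # the RMS of a sinusoid is below its peak amplitude
```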

  12. Numerical and Experimental Study on Hydrodynamic Performance of A Novel Semi-Submersible Concept

    NASA Astrophysics Data System (ADS)

    Gao, Song; Tao, Long-bin; Kou, Yu-feng; Lu, Chao; Sun, Jiang-long

    2018-04-01

    The Multiple Column Platform (MCP) semi-submersible is a newly proposed concept which differs from conventional semi-submersibles in featuring a centre column and a middle pontoon. It is paramount to ensure its structural reliability and safe operation at sea, and a rigorous investigation is conducted to examine the hydrodynamic and structural performance of the novel structural concept. In this paper, numerical and experimental studies on the hydrodynamic performance of the MCP are performed. Numerical simulations are conducted in both the frequency and time domains based on 3D potential theory. The numerical models are validated by experimental measurements obtained from extensive sets of model tests under both regular and irregular wave conditions. Moreover, a comparative study of the MCP and two conventional semi-submersibles is carried out using numerical simulation. Specifically, the hydrodynamic characteristics, including hydrodynamic coefficients, natural periods, motion response amplitude operators (RAOs), and mooring line tensions, are fully examined. The present study proves the feasibility of the novel MCP and demonstrates its potential for optimization in future studies.

  13. A two-dimensional analytical model and experimental validation of garter stitch knitted shape memory alloy actuator architecture

    NASA Astrophysics Data System (ADS)

    Abel, Julianna; Luntz, Jonathan; Brei, Diann

    2012-08-01

    Active knits are a unique architectural approach to meeting emerging smart structure needs for distributed high strain actuation with simultaneous force generation. This paper presents an analytical state-based model for predicting the actuation response of a shape memory alloy (SMA) garter knit textile. Garter knits generate significant contraction against moderate to large loads when heated, due to the continuous interlocked network of loops of SMA wire. For this knit architecture, the states of operation are defined on the basis of the thermal and mechanical loading of the textile, the resulting phase change of the SMA, and the load path followed to that state. Transitions between these operational states induce either stick or slip frictional forces depending upon the state and path, which affect the actuation response. A load-extension model of the textile is derived for each operational state using elastica theory and Euler-Bernoulli beam bending for the large deformations within a loop of wire based on the stress-strain behavior of the SMA material. This provides kinematic and kinetic relations which scale to form analytical transcendental expressions for the net actuation motion against an external load. This model was validated experimentally for an SMA garter knit textile over a range of applied forces with good correlation for both the load-extension behavior in each state as well as the net motion produced during the actuation cycle (250% recoverable strain and over 50% actuation). The two-dimensional analytical model of the garter stitch active knit provides the ability to predict the kinetic actuation performance, providing the basis for the design and synthesis of large stroke, large force distributed actuators that employ this novel architecture.

  14. Validation of the k-ω turbulence model for the thermal boundary layer profile of effusive cooled walls

    NASA Astrophysics Data System (ADS)

    Hink, R.

    2015-09-01

    The choice of materials for rocket chamber walls is limited by their thermal resistance. The thermal loads can be reduced substantially by blowing gases out through a porous surface. The k-ω-based turbulence models for computational fluid dynamics simulations are designed for smooth, non-permeable walls and have to be adjusted to account for the influence of injected fluids. Wilcox therefore proposed an extension of the k-ω turbulence model for the correct prediction of turbulent boundary layer velocity profiles. In this study, this extension is validated against experimental thermal boundary layer data from the Thermosciences Division of the Department of Mechanical Engineering at Stanford University. All simulations are performed with a finite-volume-based in-house code of the German Aerospace Center. Several simulations with different blowing settings were conducted and discussed in comparison with the results of the original model and with an additional roughness implementation. This study shows that velocity profile corrections, rather than additional roughness corrections, are necessary to predict the correct thermal boundary layer profile of effusive cooled walls. Finally, this approach is applied to a two-dimensional simulation of an effusive cooled rocket chamber wall.

  15. A contaminant-free assessment of Endogenous Retroviral RNA in human plasma

    PubMed Central

    Karamitros, Timokratis; Paraskevis, Dimitrios; Hatzakis, Angelos; Psichogiou, Mina; Elefsiniotis, Ioannis; Hurst, Tara; Geretti, Anna-Maria; Beloukas, Apostolos; Frater, John; Klenerman, Paul; Katzourakis, Aris; Magiorkinis, Gkikas

    2016-01-01

    Endogenous retroviruses (ERVs) comprise 6–8% of the human genome. HERVs are silenced in most normal tissues and up-regulated in stem cells and in placenta, but also in cancer and HIV-1 infection. Crucially, there are conflicting reports on detecting HERV RNA in non-cellular clinical samples such as plasma that suggest the study of HERV RNA can be daunting. Indeed, we find that the use of real-time PCR in a quality assured clinical laboratory setting can be sensitive to low-level proviral contamination. We developed a mathematical model for low-level contamination that allowed us to design a laboratory protocol and standard operating procedures for robust measurement of HERV RNA. We focus on one family, HERV-K HML-2 (HK2), which has been active most recently even though it invaded our ancestral genomes almost 30 million years ago. We extensively validated our experimental design on a model cell culture system, showing high sensitivity and specificity and totally eliminating the proviral contamination. We then tested 236 plasma samples from patients infected with HIV-1, HCV or HBV and found them to be negative. The study of HERV RNA for human translational studies should be performed with extensively validated protocols and standard operating procedures to control the widespread low-level human DNA contamination. PMID:27640347

  16. Osmotic pressure beyond concentration restrictions.

    PubMed

    Grattoni, Alessandro; Merlo, Manuele; Ferrari, Mauro

    2007-10-11

    Osmosis is a fundamental physical process that involves the transit of solvent molecules across a membrane separating two liquid solutions. Osmosis plays a role in many biological processes such as fluid exchange in animal cells (Cell Biochem. Biophys. 2005, 42, 277-345; J. Periodontol. 2007, 78, 757-763) and water transport in plants. It is also involved in many technological applications such as drug delivery systems (Crit. Rev. Ther. Drug. 2004, 21, 477-520; J. Micro-Electromech. Syst. 2004, 13, 75-82) and water purification. Extensive attention has been dedicated in the past to the modeling of osmosis, starting with the classical theories of van't Hoff and Morse. These are predictive, in the sense that they do not involve adjustable parameters; however, they are directly applicable only to limited regimes of dilute solute concentrations. Extensions beyond the domains of validity of these classical theories have required recourse to fitting parameters, transitioning therefore to semiempirical, or nonpredictive, models. A novel approach was presented by Granik et al., which is not a priori restricted in concentration domains, presents no adjustable parameters, and is mechanistic, in the sense that it is based on a coupled diffusion model. In this work, we examine the validity of predictive theories of osmosis by comparison with our new experimental results and a meta-analysis of literature data.
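For reference, the classical van't Hoff theory mentioned above predicts the osmotic pressure of a dilute solution as Π = i·c·R·T with no adjustable parameters. The short sketch below evaluates it for a generic 0.1 M non-dissociating solute at room temperature; the numbers are a textbook illustration, not data from this work.

```python
# Van't Hoff osmotic pressure, Pi = i * c * R * T (dilute-solution limit).
R = 8.314          # gas constant, J/(mol*K)
T = 298.15         # temperature, K
i = 1              # van't Hoff factor for a non-dissociating solute
c = 100.0          # molar concentration, mol/m^3 (i.e., 0.1 mol/L)

pi_vant_hoff = i * c * R * T          # osmotic pressure, Pa
print(round(pi_vant_hoff / 1000))     # ~248 kPa
```

The Morse variant replaces molar concentration with molality; both forms coincide in the dilute limit where these theories are applicable.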

  17. Performance prediction of a ducted rocket combustor

    NASA Astrophysics Data System (ADS)

    Stowe, Robert

    2001-07-01

    The ducted rocket is a supersonic flight propulsion system that takes the exhaust from a solid fuel gas generator, mixes it with air, and burns it to produce thrust. To develop such systems, the use of numerical models based on Computational Fluid Dynamics (CFD) is increasingly popular, but their application to reacting flow requires specific attention and validation. Through a careful examination of the governing equations and experimental measurements, a CFD-based method was developed to predict the performance of a ducted rocket combustor. It uses an equilibrium-chemistry Probability Density Function (PDF) combustion model, with a gaseous and a separate stream of 75 nm diameter carbon spheres to represent the fuel. After extensive validation with water tunnel and direct-connect combustion experiments over a wide range of geometries and test conditions, this CFD-based method was able to predict, within a good degree of accuracy, the combustion efficiency of a ducted rocket combustor.

  18. Boiling points of halogenated aliphatic compounds: a quantitative structure-property relationship for prediction and validation.

    PubMed

    Oberg, Tomas

    2004-01-01

    Halogenated aliphatic compounds have many technical uses, but substances within this group are also ubiquitous environmental pollutants that can affect the ozone layer and contribute to global warming. The establishment of quantitative structure-property relationships is of interest not only to fill in gaps in the available database but also to validate experimental data already acquired. The three-dimensional structures of 240 compounds were modeled with molecular mechanics prior to the generation of empirical descriptors. Two bilinear projection methods, principal component analysis (PCA) and partial-least-squares regression (PLSR), were used to identify outliers. PLSR was subsequently used to build a multivariate calibration model by extracting the latent variables that describe most of the covariation between the molecular structure and the boiling point. Boiling points were also estimated with an extension of the group contribution method of Stein and Brown.

  19. Broadband Transmission Loss Due to Reverberant Excitation

    NASA Technical Reports Server (NTRS)

    Barisciano, Lawrence P. Jr.

    1999-01-01

    The noise transmission characteristics of candidate curved aircraft sidewall panel constructions are examined analytically using finite element models of the selected panel geometries. The models are validated by experimental modal analyses and transmission loss testing. The structural and acoustic responses of the models are then examined when subjected to random or reverberant excitation, the simulation of which is also discussed. For a candidate curved honeycomb panel, the effect of add-on trim panel treatments is examined. Specifically, two different mounting configurations are discussed and their effect on the transmission loss of the panel is presented. This study finds that the add-on acoustical treatments do improve on the primary structure's transmission loss characteristics; however, much more research is necessary to draw any valid conclusions about the optimal configuration for maximum noise transmission loss. This paper describes several directions for the extension of this work.

  20. Robust validation of approximate 1-matrix functionals with few-electron harmonium atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cioslowski, Jerzy, E-mail: jerzy@wmf.univ.szczecin.pl; Piris, Mario; Matito, Eduard

    2015-12-07

    A simple comparison between the exact and approximate correlation components U of the electron-electron repulsion energy of several states of few-electron harmonium atoms with varying confinement strengths provides a stringent validation tool for 1-matrix functionals. The robustness of this tool is clearly demonstrated in a survey of 14 known functionals, which reveals their substandard performance within different electron correlation regimes. Unlike spot-testing that employs dissociation curves of diatomic molecules or more extensive benchmarking against experimental atomization energies of molecules comprising some standard set, the present approach not only uncovers the flaws and patent failures of the functionals but, even more importantly, also allows for pinpointing their root causes. Since the approximate values of U are computed at exact 1-densities, the testing requires minimal programming and thus is particularly suitable for rapid screening of new functionals.

  1. Stochastic Petri Net extension of a yeast cell cycle model.

    PubMed

    Mura, Ivan; Csikász-Nagy, Attila

    2008-10-21

    This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic one and the experimental results available in the literature. The SPN model captures the behavior of wild-type budding yeast cells and a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be found with the deterministic model. The SPN model fine-tunes the simulation results, enriching the breadth and the quality of its outcome.
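Stochastic Petri Net models of this kind are typically simulated with Gillespie-style stochastic kinetics layered on top of the deterministic reaction scheme. The toy below simulates a simple birth-death process with the Gillespie algorithm; the species and rates are illustrative, not the cell-cycle model itself.

```python
import random

# Toy Gillespie simulation of a birth-death process: synthesis at constant
# rate k_syn, first-order degradation at rate k_deg * x.
def gillespie(k_syn=10.0, k_deg=0.1, x0=0, t_end=100.0, seed=42):
    rng = random.Random(seed)
    t, x = 0.0, x0
    while t < t_end:
        a_syn, a_deg = k_syn, k_deg * x           # reaction propensities
        a_total = a_syn + a_deg
        t += rng.expovariate(a_total)             # exponential waiting time
        if rng.random() < a_syn / a_total:        # choose which transition fires
            x += 1
        else:
            x -= 1
    return x

final = gillespie()
print(abs(final - 100) < 50)   # fluctuates around the ODE steady state k_syn / k_deg = 100
```

Unlike the deterministic ODE, repeated runs of the stochastic simulation scatter around the steady state, which is exactly the kind of variability an SPN model adds to a deterministic description.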

  2. Elasto-dynamic analysis of a gear pump-Part III: Experimental validation procedure and model extension to helical gears

    NASA Astrophysics Data System (ADS)

    Mucchi, E.; Dalpiaz, G.

    2015-01-01

    This work concerns external gear pumps for automotive applications, which operate at high speed and low pressure. In previous works of the authors (Part I and II, [1,2]), a non-linear lumped-parameter kineto-elastodynamic model for the prediction of the dynamic behaviour of external gear pumps was presented. It takes into account the most important phenomena involved in the operation of this kind of machine. The two main sources of noise and vibration are considered: pressure pulsation and gear meshing. The model has been used in order to foresee the influence of working conditions and design modifications on vibration generation. The model's experimental validation is a difficult task. Thus, Part III proposes a novel methodology for the validation carried out by the comparison of simulations and experimental results concerning forces and moments: it deals with the external and inertial components acting on the gears, estimated by the model, and the reactions and inertial components on the pump casing and the test plate, obtained by measurements. The validation is carried out comparing the level of the time synchronous average in the time domain and the waterfall maps in the frequency domain, with particular attention to identify system resonances. The validation results are satisfactory globally, but discrepancies are still present. Moreover, the assessed model has been properly modified for the application to a new virtual pump prototype with helical gears in order to foresee gear accelerations and dynamic forces. Part IV is focused on improvements in the modelling and analysis of the phenomena bound to the pressure evolution around the gears in order to achieve results closer to the measured values. As a matter of fact, the simulation results have shown that a variable meshing stiffness has a notable contribution on the dynamic behaviour of the pump but this is not as important as the pressure phenomena. 
As a consequence, the original model was modified with the aim of improving the calculation of pressure forces and torques. The improved pressure formulation includes several phenomena not considered in the previous one, such as the variable pressure evolution at the input and output ports, as well as an accurate description of the trapped volume and its connections with the high- and low-pressure chambers. The importance of these improvements is highlighted by comparison with experimental results, showing satisfactory matching.

  3. Validation of High-Resolution CFD Method for Slosh Damping Extraction of Baffled Tanks

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; West, Jeff

    2016-01-01

    Determination of slosh damping is a very challenging task as there is no analytical solution. The damping physics involve vorticity dissipation, which requires the full solution of the nonlinear Navier-Stokes equations. As a result, previous investigations and knowledge were mainly derived from extensive experimental studies. A Volume-Of-Fluid (VOF) based CFD program developed at NASA MSFC was applied to extract slosh damping in a baffled tank from first principles. First, experimental data using water with a subscale smooth-wall tank were used as the baseline validation. CFD simulation was demonstrated to be capable of accurately predicting the natural frequency and the very low damping value of the smooth-wall tank at different fill levels. The damping due to a ring baffle at different liquid fill levels, from the barrel section into the upper dome, was then investigated to understand the slosh damping physics in the presence of a ring baffle. Based on this study, the root-mean-square error of our CFD simulation in estimating slosh damping was less than 4.8%, and the maximum error was less than 8.5%. Scalability of the subscale baffled tank test using water was investigated using the validated CFD tool, and it was found that, unlike the smooth-wall case, slosh damping with a baffle is almost independent of the working fluid; it is therefore reasonable to apply water test data to the full-scale LOX tank when the damping from the baffle is dominant. For the smooth wall, on the other hand, the damping value must be scaled according to the Reynolds number. Experimental data and CFD were compared with the classical and modified Miles equations for the upper dome, and the limitations of these semi-empirical equations were identified.
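
    Damping ratios like those reported above are commonly extracted from the decay envelope of a free-oscillation slosh signal. The abstract does not give the extraction procedure; as an illustration only, a minimal logarithmic-decrement sketch (standard textbook formula, with synthetic peak amplitudes rather than any test data) might look like:

    ```python
    import math

    def log_decrement_damping(peaks):
        """Estimate the damping ratio from successive positive peaks of a
        free-decay signal via the logarithmic decrement."""
        if len(peaks) < 2:
            raise ValueError("need at least two peaks")
        # Average the log decrement over all successive peak pairs.
        deltas = [math.log(peaks[i] / peaks[i + 1]) for i in range(len(peaks) - 1)]
        delta = sum(deltas) / len(deltas)
        return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

    # Synthetic decay with a known damping ratio of 1% (hypothetical data):
    zeta_true = 0.01
    factor = math.exp(-2 * math.pi * zeta_true / math.sqrt(1 - zeta_true ** 2))
    peaks = [1.0 * factor ** n for n in range(6)]
    zeta_est = log_decrement_damping(peaks)
    ```

    Averaging over several peak pairs, as above, makes the estimate less sensitive to noise on any single peak.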

  4. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    NASA Astrophysics Data System (ADS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre- and micro-scale features is currently performed by high-cost systems based on technologies with narrow working ranges that accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro parts of large objects. Unfortunately, the behaviour of photogrammetry is not well characterized when applied to micro-features. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) at the micro scale, taking into account that the literature reports an angle of view (AOV) of around 10° as the lower limit for the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. First, a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently, the procedure is validated using a reflex camera with a 60 mm macro lens equipped with extension tubes (20 and 32 mm), achieving a magnification of up to approximately 2×, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The accuracy limitation of laser printing, used to produce the two-dimensional pattern on common paper, has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with those of existing, more expensive commercial techniques.
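
    To make the pinhole model and the narrow-AOV regime concrete, here is a minimal sketch (not the authors' calibration code; the sensor extent, focal length, and point coordinates are illustrative assumptions) of ideal pinhole projection and the angle-of-view formula:

    ```python
    import math

    def project(point3d, f, cx, cy):
        """Project a camera-frame 3D point with an ideal pinhole model
        (focal length f in pixels, principal point (cx, cy))."""
        X, Y, Z = point3d
        return (f * X / Z + cx, f * Y / Z + cy)

    def angle_of_view(sensor_extent_mm, focal_mm):
        """Full angle of view, in degrees, subtended by a sensor extent
        behind a pinhole at the given focal length."""
        return 2 * math.degrees(math.atan(sensor_extent_mm / (2 * focal_mm)))

    # With a long effective focal length (macro lens plus extension tubes),
    # only a small sensor extent maps to the scene, so the AOV is narrow;
    # 3.6 mm at 60 mm is a hypothetical combination giving roughly 3.4 deg.
    aov_narrow = angle_of_view(3.6, 60.0)
    u, v = project((1.0, 2.0, 100.0), 1000.0, 320.0, 240.0)
    ```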

  5. Rating of Perceived Exertion During Circuit Weight Training: A Concurrent Validation Study.

    PubMed

    Aniceto, Rodrigo R; Ritti-Dias, Raphael M; Dos Prazeres, Thaliane M P; Farah, Breno Q; de Lima, Fábio F M; do Prado, Wagner L

    2015-12-01

    The aim of this study was to determine whether the rating of perceived exertion (RPE) is a valid method to control effort during circuit weight training (CWT) in trained men. Ten men (21.3 ± 3.3 years) with previous experience in resistance training (13.1 ± 6.3 months) performed 3 sessions: 1 orientation session and 2 experimental sessions. The subjects were randomly counterbalanced across the 2 experimental sessions: CWT or multiple-set resistance training (control). In both sessions, 8 exercises (bench press, leg press 45°, seated row, leg curl, triceps pulley, leg extension, biceps curl, and adductor chair) were performed with the same work: 60% of 1 repetition maximum, 24 stations (3 circuits) or 24 sets (3 sets/exercise), 10 repetitions, 1 second in the concentric and eccentric phases, and rest intervals between sets and exercises of 60 seconds. Active muscle RPEs were measured after every 3 stations/sets using the OMNI-Resistance Exercise Scale (OMNI-RES); blood lactate was collected at the same time points. Compared with baseline, both blood lactate levels and RPE increased over the whole workout in both sessions; the RPE at the 3rd, 23rd, and 27th minutes and the blood lactate at the 3rd, 7th, 11th, 15th, 27th, and 31st minutes were higher in the multiple-set session than in CWT. A positive correlation between blood lactate and RPE was observed in both experimental sessions. The results indicated that the RPE is a valid method to control effort during CWT in trained men and can be used to manipulate intensity without the need for invasive assessments.
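
    The reported lactate-RPE association is a standard Pearson product-moment correlation. A minimal sketch of the computation, on hypothetical per-time-point values (not the study's data), might look like:

    ```python
    def pearson_r(x, y):
        """Pearson product-moment correlation coefficient of two sequences."""
        n = len(x)
        mx = sum(x) / n
        my = sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # Hypothetical session means at successive measurement points:
    rpe = [2, 3, 4, 5, 6, 7, 8, 8]                    # OMNI-RES scale values
    lactate = [1.5, 2.8, 3.9, 5.1, 6.0, 7.2, 8.1, 8.4]  # mmol/L
    r = pearson_r(rpe, lactate)
    ```

    A value of r near 1 would indicate that perceived exertion tracks the metabolic marker closely, which is the premise behind using RPE in place of invasive sampling.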

  6. Critical evaluation of measured rotational-vibrational transitions of four sulphur isotopologues of S16O2

    NASA Astrophysics Data System (ADS)

    Tóbiás, Roland; Furtenbacher, Tibor; Császár, Attila G.; Naumenko, Olga V.; Tennyson, Jonathan; Flaud, Jean-Marie; Kumar, Praveen; Poirier, Bill

    2018-03-01

    A critical evaluation and validation of the complete set of previously published experimental rotational-vibrational line positions is reported for the four stable sulphur isotopologues of the semirigid SO2 molecule - i.e., 32S16O2, 33S16O2, 34S16O2, and 36S16O2. The experimentally measured, assigned, and labeled transitions are collated from 43 sources. The 32S16O2, 33S16O2, 34S16O2, and 36S16O2 datasets contain 40,269, 15,628, 31,080, and 31 lines, respectively. Of the datasets collated, only the extremely limited 36S16O2 dataset is not subjected to a detailed analysis. As part of a detailed analysis of the experimental spectroscopic networks corresponding to the ground electronic states of the 32S16O2, 33S16O2, and 34S16O2 isotopologues, the MARVEL (Measured Active Rotational-Vibrational Energy Levels) procedure is used to determine the rovibrational energy levels. The rovibrational levels and their vibrational parent and asymmetric-top quantum numbers are compared to ones obtained from accurate variational nuclear-motion computations as well as to results of carefully designed effective Hamiltonian models. The rovibrational energy levels of the three isotopologues having the same labels are also compared against each other to ensure self-consistency. This careful, multifaceted analysis gives rise to 15,130, 5852, and 10,893 validated rovibrational energy levels, with a typical accuracy of a few 0.0001 cm-1, for 32S16O2, 33S16O2, and 34S16O2, respectively. The extensive lists of validated experimental lines and empirical (MARVEL) energy levels of the S16O2 isotopologues studied are deposited in the Supplementary Material of this article, as well as in the distributed information system ReSpecTh (http://respecth.hu).
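
    At its core, the MARVEL procedure solves a linear least-squares problem on the spectroscopic network: each measured transition constrains the difference between two energy levels. A minimal unweighted sketch on a toy three-level network (this is not the MARVEL code, and the wavenumbers are invented) could look like:

    ```python
    def marvel_levels(transitions, n_levels):
        """Least-squares energy levels from measured transitions, in the
        spirit of the MARVEL procedure. Each transition is (upper, lower,
        wavenumber); ground level 0 is fixed at zero energy. Solves the
        normal equations A^T A x = A^T b for E_1..E_{n_levels-1}."""
        n = n_levels - 1
        ata = [[0.0] * n for _ in range(n)]
        atb = [0.0] * n
        for up, lo, wn in transitions:
            # Design-matrix row: +1 at 'up', -1 at 'lo' (level 0 dropped).
            cols = [(up - 1, 1.0)] if lo == 0 else [(up - 1, 1.0), (lo - 1, -1.0)]
            for i, ci in cols:
                atb[i] += ci * wn
                for j, cj in cols:
                    ata[i][j] += ci * cj
        # Gaussian elimination (no pivoting; adequate for this tiny sketch).
        for k in range(n):
            for r in range(k + 1, n):
                m = ata[r][k] / ata[k][k]
                for c in range(k, n):
                    ata[r][c] -= m * ata[k][c]
                atb[r] -= m * atb[k]
        x = [0.0] * n
        for k in range(n - 1, -1, -1):
            s = sum(ata[k][c] * x[c] for c in range(k + 1, n))
            x[k] = (atb[k] - s) / ata[k][k]
        return [0.0] + x

    # Toy network: three levels, three slightly inconsistent measured lines.
    levels = marvel_levels([(1, 0, 10.00), (2, 1, 5.01), (2, 0, 15.02)], 3)
    ```

    The least-squares reconciliation spreads the small inconsistency among the lines, which is how redundant cycles in a real spectroscopic network yield energy levels more accurate than any single measurement.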

  7. Variable viscosity and density biofilm simulations using an immersed boundary method, part II: Experimental validation and the heterogeneous rheology-IBM

    NASA Astrophysics Data System (ADS)

    Stotsky, Jay A.; Hammond, Jason F.; Pavlovsky, Leonid; Stewart, Elizabeth J.; Younger, John G.; Solomon, Michael J.; Bortz, David M.

    2016-07-01

    The goal of this work is to develop a numerical simulation that accurately captures the biomechanical response of bacterial biofilms and their associated extracellular matrix (ECM). In this, the second of a two-part effort, the primary focus is on formally presenting the heterogeneous rheology Immersed Boundary Method (hrIBM) and validating our model by comparison to experimental results. With this extension of the Immersed Boundary Method (IBM), we use the techniques originally developed in Part I ([19]) to treat biofilms as viscoelastic fluids possessing variable rheological properties anchored to a set of moving locations (i.e., the bacteria locations). In particular, we incorporate spatially continuous variable viscosity and density fields into our model. Although in [14,15] variable viscosity is used in an IBM context to model discrete viscosity changes across interfaces, to our knowledge this work and Part I are the first to apply the IBM to model a continuously variable viscosity field. We validate our modeling approach from Part I by comparing dynamic moduli and compliance moduli computed from our model to data from mechanical characterization experiments on Staphylococcus epidermidis biofilms. The experimental setup, in which biofilms are grown and tested in a parallel plate rheometer, is described in [26]. In order to initialize the positions of bacteria in the biofilm, experimentally obtained three-dimensional coordinate data were used. One of the major conclusions of this effort is that treating the spring-like connections between bacteria as Maxwell or Zener elements provides good agreement with the mechanical characterization data. We also found that initializing the simulations with different coordinate data sets only led to small changes in the mechanical characterization results. Matlab code used to produce the results in this paper will be available at https://github.com/MathBioCU/BiofilmSim.

  8. A Comprehensive Validation Approach Using The RAVEN Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J

    2015-06-01

    The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework for performing parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. State-of-the-art validation approaches use neither exploration of the input space through sampling strategies nor a comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code addresses both of these shortcomings. In the following sections, the methodology employed and its application to the newly developed thermal-hydraulic code RELAP-7 are reported. The validation approach has been applied to an integral effect experiment, representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.
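
    As an illustration of the kind of metric such a validation framework might aggregate (this is a generic discrepancy measure, not RAVEN's actual implementation), consider a root-mean-square relative error between a code response and experimental samples at matching points:

    ```python
    def rms_relative_error(simulated, measured):
        """Root-mean-square relative discrepancy between a code response
        and experimental data sampled at the same points."""
        terms = [((s - m) / m) ** 2 for s, m in zip(simulated, measured)]
        return (sum(terms) / len(terms)) ** 0.5

    # Hypothetical code predictions against repeated measurements of 100.0:
    sim = [101.0, 99.5, 102.0]
    exp_data = [100.0, 100.0, 100.0]
    metric = rms_relative_error(sim, exp_data)
    ```

    A sampling-based framework would evaluate many such metrics over perturbed inputs, so that agreement is judged across the explored input space rather than at a single nominal point.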

  9. Analysis of in situ resources for the Soil Moisture Active Passive Validation Experiments in 2015 and 2016

    NASA Astrophysics Data System (ADS)

    Cosh, M. H.; Jackson, T. J.; Colliander, A.; Bindlish, R.; McKee, L.; Goodrich, D. C.; Prueger, J. H.; Hornbuckle, B. K.; Coopersmith, E. J.; Holifield Collins, C.; Smith, J.

    2016-12-01

    With the launch of the Soil Moisture Active Passive (SMAP) mission in 2015, a new era of soil moisture monitoring began. Soil moisture is available on a near-daily basis at a 36 km resolution for the globe. But this dataset is only valuable if its products are accurate and reliable. Therefore, in order to demonstrate the accuracy of the soil moisture product, NASA enacted an extensive calibration and validation program with many in situ soil moisture networks contributing data across a variety of landscape regimes. However, not all questions can be answered by these networks. As a result, two intensive field experiments were executed to provide more detailed reference points for calibration and validation. Multi-week field campaigns were conducted in Arizona and Iowa at the USDA Agricultural Research Service Walnut Gulch and South Fork Experimental Watersheds, respectively. Aircraft observations were made to provide a high-resolution data product. Soil moisture, soil roughness, and vegetation data were collected at high resolution to provide a downscaled dataset to compare against aircraft and satellite estimates.

  10. Experimental validation of ultrasonic NDE simulation software

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Larche, Michael; Diaz, Aaron A.; Crawford, Susan L.; Prowant, Matthew S.; Anderson, Michael T.

    2016-02-01

    Computer modeling and simulation is becoming an essential tool for transducer design and insight into ultrasonic nondestructive evaluation (UT-NDE). As the popularity of simulation tools for UT-NDE increases, it becomes important to assess their reliability in modeling acoustic responses from defects in operating components and providing information that is consistent with in-field inspection data. This includes information about the detectability of different defect types for a given UT probe. Recently, a cooperative program between the Electric Power Research Institute and the U.S. Nuclear Regulatory Commission was established to validate numerical modeling software commonly used for simulating UT-NDE of nuclear power plant components. In the first phase of this cooperative program, extensive experimental UT measurements were conducted on machined notches with varying depth, length, and orientation in stainless steel plates. Then, the notches were modeled in CIVA, a semi-analytical NDE simulation platform developed by the French Commissariat a l'Energie Atomique, and their responses were compared with the experimental measurements. Discrepancies between experimental and simulation results are due either to improper inputs to the simulation model or to incorrect approximations and assumptions in the numerical models. To address the former, a variation study was conducted on the different parameters that are required as inputs for the model, specifically the specimen and transducer properties. Then, the ability of simulations to give accurate predictions regarding the detectability of the different defects was demonstrated. This includes the results in terms of the variations in defect amplitude indications and the ratios between tip-diffracted and specular signal amplitudes.

  11. An atmospheric pressure high-temperature laminar flow reactor for investigation of combustion and related gas phase reaction systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oßwald, Patrick; Köhler, Markus

    A new high-temperature flow reactor experiment utilizing the powerful molecular beam mass spectrometry (MBMS) technique for detailed observation of gas phase kinetics in reacting flows is presented. The reactor design provides a systematic extension of the experimental portfolio of validation experiments for combustion reaction kinetics. Temperatures up to 1800 K are attainable via three individually controlled temperature zones in this atmospheric-pressure flow reactor. Detailed speciation data are obtained using the sensitive MBMS technique, providing in situ access to almost all chemical species involved in the combustion process, including highly reactive species such as radicals. Strategies for quantifying the experimental data are presented alongside a careful analysis of the characterization of the experimental boundary conditions to enable precise numerical reproduction of the experimental results. The general capabilities of this new analytical tool for the investigation of reacting flows are demonstrated for a selected range of conditions, fuels, and applications. A detailed dataset for the well-known gaseous fuels methane and ethylene is provided and used to verify the experimental approach. Furthermore, application to liquid fuels and fuel components important for technical combustors such as gas turbines and engines is demonstrated. Besides the detailed investigation of novel fuels and fuel components, the wide range of operating conditions gives access to extended combustion topics, such as super-rich conditions at high temperature, important for gasification processes, or the peroxy chemistry governing the low-temperature oxidation regime. These demonstrations are accompanied by a first kinetic modeling approach, examining the opportunities for model validation purposes.

  12. Quantum turbulence and correlations in Bose-Einstein condensate collisions

    NASA Astrophysics Data System (ADS)

    Norrie, A. A.; Ballagh, R. J.; Gardiner, C. W.

    2006-04-01

    We investigate numerically simulated collisions between experimentally realistic Bose-Einstein condensate wave packets, within a regime where highly populated scattering haloes are formed. The theoretical basis for this work is the truncated Wigner method, for which we present a detailed derivation, paying particular attention to its validity regime for colliding condensates. This paper is an extension of our previous Letter [A. A. Norrie, R. J. Ballagh, and C. W. Gardiner, Phys. Rev. Lett. 94, 040401 (2005)], and we investigate both single-trajectory solutions, which reveal the presence of quantum turbulence in the scattering halo, and ensembles of trajectories, which we use to calculate quantum-mechanical correlation functions of the field.

  13. Quantum image processing: A review of advances in its security technologies

    NASA Astrophysics Data System (ADS)

    Yan, Fei; Iliyasu, Abdullah M.; Le, Phuc Q.

    In this review, we present an overview of the advances made in quantum image processing (QIP), comprising the image representations, the operations realizable on them, and the likely protocols and algorithms for their applications. In particular, we focus on recent progress in QIP-based security technologies, including quantum watermarking, quantum image encryption, and quantum image steganography. This review is aimed at providing readers with a succinct, yet adequate, compendium of the progress made in the QIP sub-area. Hopefully, this effort will stimulate further interest in the pursuit of more advanced algorithms and experimental validations for available technologies and extensions to other domains.

  14. Robust Measurements of Phase Response Curves Realized via Multicycle Weighted Spike-Triggered Averages

    NASA Astrophysics Data System (ADS)

    Imai, Takashi; Ota, Kaiichiro; Aoyagi, Toshio

    2017-02-01

    Phase reduction has been extensively used to study rhythmic phenomena. As a result of phase reduction, the rhythm dynamics of a given system can be described using the phase response curve. Measuring this characteristic curve is an important step toward understanding a system's behavior. Recently, a basic idea for a new measurement method (called the multicycle weighted spike-triggered average method) was proposed. This paper confirms the validity of this method by providing an analytical proof and demonstrates its effectiveness in actual experimental systems by applying the method to an oscillating electric circuit. Some practical tips to use the method are also presented.
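
    For intuition, a phase response curve can be estimated by kicking an oscillator with small impulses at random phases and bin-averaging the normalized phase shifts. The sketch below uses this direct-perturbation scheme, a simpler relative of the multicycle weighted spike-triggered average analyzed in the paper, on a synthetic oscillator with known PRC Z(theta) = sin(theta); all parameters are illustrative:

    ```python
    import math
    import random

    def measure_prc(z_true, n_trials=2000, n_bins=16, eps=0.01):
        """Direct-perturbation PRC estimate: apply a small impulse of size
        eps at a random phase, record the first-order phase shift
        Z(theta) * eps, normalize by eps, and average within phase bins."""
        random.seed(1)
        sums, counts = [0.0] * n_bins, [0] * n_bins
        for _ in range(n_trials):
            theta = random.uniform(0.0, 2 * math.pi)
            shift = z_true(theta) * eps          # first-order phase shift
            b = min(int(theta / (2 * math.pi) * n_bins), n_bins - 1)
            sums[b] += shift / eps               # normalize by impulse size
            counts[b] += 1
        return [s / c if c else 0.0 for s, c in zip(sums, counts)]

    prc = measure_prc(math.sin)
    ```

    With enough trials, each bin average converges to the mean of the true PRC over that phase interval; the weighted-STA method of the paper achieves the same end from continuous noisy input rather than isolated impulses.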

  15. Stochastic HKMDHE: A multi-objective contrast enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Maity, Srideep; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2018-02-01

    This contribution proposes a novel extension of the existing `Hyper Kurtosis based Modified Duo-Histogram Equalization' (HKMDHE) algorithm for multi-objective contrast enhancement of biomedical images. A novel modified objective function has been formulated by joint optimization of the individual histogram equalization objectives. The adequacy of the proposed methodology has been experimentally validated with respect to image quality metrics such as brightness-preserving ability, peak signal-to-noise ratio (PSNR), Structural Similarity Index (SSIM), and the universal image quality metric. A performance analysis of the proposed Stochastic HKMDHE against existing histogram equalization methodologies, such as Global Histogram Equalization (GHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE), is given for comparative evaluation.
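
    Of the baselines mentioned, Global Histogram Equalization is the simplest: each intensity is mapped through the normalized cumulative histogram. A minimal pure-Python sketch (illustrative only, not the paper's implementation) on a tiny low-contrast image:

    ```python
    def equalize(img, levels=256):
        """Global histogram equalization (GHE) of a grayscale image given
        as a list of rows of integer intensities in [0, levels)."""
        flat = [p for row in img for p in row]
        hist = [0] * levels
        for p in flat:
            hist[p] += 1
        # Cumulative histogram -> monotone lookup table over the full range.
        cdf, total = [], 0
        for h in hist:
            total += h
            cdf.append(total)
        n = len(flat)
        cdf_min = min(c for c in cdf if c > 0)
        lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
               for c in cdf]
        return [[lut[p] for p in row] for row in img]

    # A low-contrast 2x4 image occupying only intensities 100..103 is
    # stretched across the full 0..255 range:
    out = equalize([[100, 100, 101, 102], [101, 102, 103, 103]])
    ```

    Extensions like CLAHE apply the same idea within local tiles and clip the histogram to limit noise amplification, which is the kind of trade-off the multi-objective formulation above tries to balance explicitly.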

  16. Cuminum cyminum and Carum carvi: An update

    PubMed Central

    Johri, R. K.

    2011-01-01

    Cuminum cyminum and Carum carvi are the sources of cumin and caraway seeds respectively, which have been used since antiquity for the treatment of various indications in traditional healing systems in wide geographical areas. Cumin and caraway seeds are rich sources of essential oils and have been actively researched for their chemical composition and biological activities. In recent times (especially during the last 3 years) considerable progress has been made regarding validation of their acclaimed medicinal attributes by extensive experimental studies. In this attempt many novel bioactivities have been revealed. This review highlights the significance of cumin and caraway as potential source of diverse natural products and their medicinal applications. PMID:22096320

  17. TRAX-CHEM: A pre-chemical and chemical stage extension of the particle track structure code TRAX in water targets

    NASA Astrophysics Data System (ADS)

    Boscolo, D.; Krämer, M.; Durante, M.; Fuss, M. C.; Scifoni, E.

    2018-04-01

    The production, diffusion, and interaction of particle-beam-induced water-derived radicals are studied with a pre-chemical and chemical module of the Monte Carlo particle track structure code TRAX, based on a step-by-step approach. After a description of the implemented model, the chemical evolution of the most important products of water radiolysis is studied for electron, proton, helium, and carbon ion radiation at different energies. The validity of the model is verified by comparing the calculated time- and LET-dependent yields with experimental data from the literature and with other simulation approaches.

  18. Planar measurement of flow field parameters in a nonreacting supersonic combustor using laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy J., Jr.; Hollo, Steven D.; Mcdaniel, James C.

    1990-01-01

    A nonintrusive optical technique, laser-induced iodine fluorescence, has been used to obtain planar measurements of flow field parameters in the supersonic mixing flow field of a nonreacting supersonic combustor. The combustor design used in this work was configured with staged transverse sonic injection behind a rearward-facing step into a Mach 2.07 free stream. A set of spatially resolved measurements of temperature and injectant mole fraction has been generated. These measurements provide an extensive and accurate experimental data set required for the validation of computational fluid dynamic codes developed for the calculation of highly three-dimensional combustor flow fields.

  19. Immersed transient eddy current flow metering: a calibration-free velocity measurement technique for liquid metals

    NASA Astrophysics Data System (ADS)

    Krauter, N.; Stefani, F.

    2017-10-01

    Eddy current flow meters are widely used for measuring the flow velocity of electrically conducting fluids. Since the flow induced perturbations of a magnetic field depend both on the geometry and the conductivity of the fluid, extensive calibration is needed to get accurate results. Transient eddy current flow metering has been developed to overcome this problem. It relies on tracking the position of an impressed eddy current system that is moving with the same velocity as the conductive fluid. We present an immersed version of this measurement technique and demonstrate its viability by numerical simulations and a first experimental validation.

  20. Students' views about the nature of experimental physics

    NASA Astrophysics Data System (ADS)

    Wilcox, Bethany R.; Lewandowski, H. J.

    2017-12-01

    The physics community explores and explains the physical world through a blend of theoretical and experimental studies. The future of physics as a discipline depends on training of students in both the theoretical and experimental aspects of the field. However, while student learning within lecture courses has been the subject of extensive research, lab courses remain relatively under-studied. In particular, there are few, if any, data available that address the effectiveness of physics lab courses at encouraging students to recognize the nature and importance of experimental physics within the discipline as a whole. To address this gap, we present the first large-scale, national study (N_institutions = 75 and N_students = 7167) of undergraduate physics lab courses through analysis of students' responses to a research-validated assessment designed to investigate students' beliefs about the nature of experimental physics. We find that students often enter and leave physics lab courses with ideas about experimental physics as practiced in their courses that are inconsistent with the views of practicing experimental physicists, and this trend holds at both the introductory and upper-division levels. Despite this inconsistency, we find that both introductory and upper-division students are able to accurately predict the expertlike response even in cases where their views about experimentation in their lab courses disagree. These findings have implications for the recruitment, retention, and adequate preparation of students in physics.

  1. Predicting RNA pseudoknot folding thermodynamics

    PubMed Central

    Cao, Song; Chen, Shi-Jie

    2006-01-01

    Based on the experimentally determined atomic coordinates for RNA helices and the self-avoiding walks of the P (phosphate) and C4 (carbon) atoms in the diamond lattice for the polynucleotide loop conformations, we derive a set of conformational entropy parameters for RNA pseudoknots. Based on the entropy parameters, we develop a folding thermodynamics model that enables us to compute the sequence-specific RNA pseudoknot folding free energy landscape and thermodynamics. The model is validated through extensive experimental tests both for the native structures and for the folding thermodynamics. The model predicts strong sequence-dependent helix-loop competitions in pseudoknot stability and the resultant conformational switches between different hairpin and pseudoknot structures. For instance, for the pseudoknot domain of human telomerase RNA, native-like and misfolded hairpin intermediates are found to coexist on the (equilibrium) folding pathways, and the interplay between the stabilities of these intermediates causes the conformational switch that may underlie a human telomerase disease. PMID:16709732
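
    The stability competitions described here ultimately come down to comparing folding free energies, dG = dH - T*dS, between alternative structures. A minimal two-state sketch of folded population versus temperature (the dH and dS values are hypothetical, not the model's parameters):

    ```python
    import math

    R = 0.0019872  # gas constant, kcal/(mol*K)

    def folded_fraction(dH, dS, T):
        """Two-state equilibrium folded fraction from enthalpy dH (kcal/mol)
        and entropy dS (kcal/(mol*K)): K = exp(-(dH - T*dS) / (R*T))."""
        dG = dH - T * dS
        K = math.exp(-dG / (R * T))      # folding equilibrium constant
        return K / (1.0 + K)

    # Hypothetical pseudoknot parameters; the melting temperature is Tm = dH/dS.
    dH, dS = -50.0, -0.15
    Tm = dH / dS                          # about 333 K
    f_mid = folded_fraction(dH, dS, Tm)   # half folded at Tm
    f_low = folded_fraction(dH, dS, 300.0)
    ```

    A real landscape model sums such Boltzmann weights over many competing hairpin and pseudoknot conformations, which is how small entropy differences between loops can flip which intermediate dominates.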

  2. Toward meaningful outcomes in teaching conversation and greeting skills with individuals with autism spectrum disorder.

    PubMed

    Hood, Stephanie A; Luczynski, Kevin C; Mitteer, Daniel R

    2017-07-01

    We identified greeting and conversation deficits based on a parent interview and semistructured direct assessment for one child and two adolescents with autism spectrum disorder. We taught the greeting and conversation skills using behavioral skills training and within-session corrective feedback. A multiple baseline across conversation and greeting skills demonstrated experimental control over the effects of the teaching on acquisition and generalization to novel adults. We also conducted embedded reversals to assess maintenance of the acquired skills. Teaching produced robust acquisition, generalization, maintenance, and treatment extension for 15 of the 16 targeted skills across participants. Participant and parent reports indicated high levels of social validity for the intervention and outcomes. The results support individualized assessment and intervention for improving greeting and conversation skills during unscripted interactions, which are requisite for more extended and complex social interactions. © 2017 Society for the Experimental Analysis of Behavior.

  3. Deformation Response and Life of Metallic Composites

    NASA Technical Reports Server (NTRS)

    Lissenden, Cliff J.

    2005-01-01

    The project was initially funded for one year (for $100,764) to investigate the potential of particulate reinforced metals for aeropropulsion applications and to generate fatigue results that quantify the mean stress effect for a titanium alloy matrix material (TIMETAL 21S). The project was continued for a second year (for $85,000) to more closely investigate cyclic deformation, especially ratcheting, of the titanium alloy matrix at elevated temperature. Equipment was purchased (for $19,000) to make the experimental program feasible; this equipment included an extensometer calibrator and a multi-channel signal conditioning amplifier. The project was continued for a third year ($50,000) to conduct cyclic relaxation experiments aimed at validating the elastic-viscoelastic-viscoplastic model that NASA GRC had developed for the titanium alloy. Finally, a one-year no cost extension was granted to enable continued analysis of the experimental results and model comparisons.

  4. Prediction of clothing thermal insulation and moisture vapour resistance of the clothed body walking in wind.

    PubMed

    Qian, Xiaoming; Fan, Jintu

    2006-11-01

    Clothing thermal insulation and moisture vapour resistance are the two most important parameters in thermal environmental engineering, functional clothing design, and the end use of clothing ensembles. In this study, the clothing thermal insulation and moisture vapour resistance of various types of clothing ensembles were measured using the walking-able sweating manikin, Walter, under various environmental conditions and walking speeds. Based on an extensive experimental investigation and an improved understanding of the effects of body activities and environmental conditions, a simple but effective direct regression model has been established for predicting clothing thermal insulation and moisture vapour resistance under wind and walking motion from those measured when the manikin was standing in still air. The model has been validated using experimental data reported in the previous literature, and it has been shown that the new models have advantages and provide very accurate predictions.

  5. Nuclear spin noise in NMR revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrand, Guillaume; Luong, Michel; Huber, Gaspard

    2015-09-07

    The theoretical shapes of nuclear spin-noise spectra in NMR are derived by considering a receiver circuit with finite preamplifier input impedance and a transmission line between the preamplifier and the probe. Using this model, it becomes possible to reproduce all observed experimental features: variation of the NMR resonance linewidth as a function of the transmission line phase, nuclear spin-noise signals appearing as a “bump” or as a “dip” superimposed on the average electronic noise level even for a spin system and probe at the same temperature, and pure in-phase Lorentzian spin-noise signals exhibiting non-vanishing frequency shifts. Extensive comparisons to experimental measurements validate the model predictions and define the conditions for obtaining pure in-phase Lorentzian-shape nuclear spin noise with a vanishing frequency shift, in other words, the conditions for simultaneously obtaining the spin-noise and frequency-shift tuning optima.

  6. Experimental statistical signature of many-body quantum interference

    NASA Astrophysics Data System (ADS)

    Giordani, Taira; Flamini, Fulvio; Pompili, Matteo; Viggianiello, Niko; Spagnolo, Nicolò; Crespi, Andrea; Osellame, Roberto; Wiebe, Nathan; Walschaers, Mattia; Buchleitner, Andreas; Sciarrino, Fabio

    2018-03-01

    Multi-particle interference is an essential ingredient for fundamental quantum mechanics phenomena and for quantum information processing to provide a computational advantage, as recently emphasized by boson sampling experiments. Hence, developing a reliable and efficient technique to witness its presence is pivotal in achieving the practical implementation of quantum technologies. Here, we experimentally identify genuine many-body quantum interference via a recent efficient protocol, which exploits statistical signatures at the output of a multimode quantum device. We successfully apply the test to validate three-photon experiments in an integrated photonic circuit, providing an extensive analysis on the resources required to perform it. Moreover, drawing upon established techniques of machine learning, we show how such tools help to identify the—a priori unknown—optimal features to witness these signatures. Our results provide evidence on the efficacy and feasibility of the method, paving the way for its adoption in large-scale implementations.

  7. Experimental verification of a radiofrequency power model for Wi-Fi technology.

    PubMed

    Fang, Minyu; Malone, David

    2010-04-01

    When assessing the power emitted from a Wi-Fi network, it has been observed that these networks operate at a relatively low duty cycle. In this paper, we extend a recently introduced model of emitted power in Wi-Fi networks to cover conditions where devices do not always have packets to transmit. We present experimental results to validate the original model and its extension by developing approximate, but practical, testbed measurement techniques. The accuracy of the models is confirmed, with small relative errors: less than 5-10%. Moreover, we confirm that the greatest power is emitted when the network is saturated with traffic. Using this, we give a simple technique to quickly estimate power output based on traffic levels and give examples showing how this might be used in practice to predict current or future power output from a Wi-Fi network.
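    The closing idea, quickly estimating emitted power from traffic levels, reduces to scaling the transmit power by the duty cycle. A minimal sketch follows; the 100 mW transmit power and 5% duty cycle are hypothetical illustration values, not figures from the paper.

```python
def average_emitted_power(tx_power_mw, duty_cycle):
    """Time-averaged emitted power (mW) for a transmitter that is only
    active for a fraction of the time given by the duty cycle."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must lie in [0, 1]")
    return tx_power_mw * duty_cycle

# Hypothetical example: a 100 mW (20 dBm) radio active 5% of the time
# emits, on average, a small fraction of its nominal transmit power.
avg_mw = average_emitted_power(100.0, 0.05)
```

    This is why low duty cycles matter for exposure assessment: the time-averaged emission is far below the nominal transmit power.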

  8. Results of Microgravity Fluid Dynamics Captured With the Spheres-Slosh Experiment

    NASA Technical Reports Server (NTRS)

    Lapilli, Gabriel; Kirk, Daniel; Gutierrez, Hector; Schallhorn, Paul; Marsell, Brandon; Roth, Jacob; Moder, Jeffrey

    2015-01-01

    This paper provides an overview of the SPHERES-Slosh Experiment (SSE) aboard the International Space Station (ISS) and presents on-orbit results with data analysis. In order to predict the location of the liquid propellant at all times during a spacecraft mission, engineers and mission analysts utilize Computational Fluid Dynamics (CFD). These state-of-the-art computer programs numerically solve the fluid flow equations to predict the location of the fluid at any point in time during different spacecraft maneuvers. The models and equations used by these programs have been extensively validated on the ground, but long-duration data has never been acquired in a microgravity environment. The SSE aboard the ISS is designed to acquire this type of data, used by engineers on Earth to validate and improve the CFD prediction models, improving the design of the next generation of space vehicles as well as the safety of current missions. The experiment makes use of two Synchronized Position Hold, Engage, Reorient Experimental Satellites (SPHERES) connected by a frame. In the center of the frame there is a plastic, pill-shaped tank that is partially filled with green-colored water. A pair of high-resolution cameras records the movement of the liquid inside the tank as the experiment maneuvers within the Japanese Experimental Module test volume. Inertial measurement units record the accelerations and rotations of the tank, making the combination of stereo imaging and inertial data the inputs for CFD model validation.

  9. Result of Microgravity Fluid Dynamics Captured with the SPHERES-Slosh Experiment

    NASA Technical Reports Server (NTRS)

    Lapilli, Gabriel; Kirk, Daniel; Gutierrez, Hector; Schallhorn, Paul; Marsell, Brandon; Roth, Jacob; Moder, Jeffrey

    2015-01-01

    This paper provides an overview of the SPHERES-Slosh Experiment (SSE) aboard the International Space Station (ISS) and presents on-orbit results with data analysis. In order to predict the location of the liquid propellant at all times during a spacecraft mission, engineers and mission analysts utilize Computational Fluid Dynamics (CFD). These state-of-the-art computer programs numerically solve the fluid flow equations to predict the location of the fluid at any point in time during different spacecraft maneuvers. The models and equations used by these programs have been extensively validated on the ground, but long-duration data has never been acquired in a microgravity environment. The SSE aboard the ISS is designed to acquire this type of data, used by engineers on Earth to validate and improve the CFD prediction models, improving the design of the next generation of space vehicles as well as the safety of current missions. The experiment makes use of two Synchronized Position Hold, Engage, Reorient Experimental Satellites (SPHERES) connected by a frame. In the center of the frame there is a plastic, pill-shaped tank that is partially filled with green-colored water. A pair of high-resolution cameras records the movement of the liquid inside the tank as the experiment maneuvers within the Japanese Experimental Module test volume. Inertial measurement units record the accelerations and rotations of the tank, making the combination of stereo imaging and inertial data the inputs for CFD model validation.

  10. Results of Microgravity Fluid Dynamics Captured with the Spheres-Slosh Experiment

    NASA Technical Reports Server (NTRS)

    Lapilli, Gabriel; Kirk, Daniel Robert; Gutierrez, Hector; Schallhorn, Paul; Marsell, Brandon; Roth, Jacob; Moder, Jeffrey

    2015-01-01

    This paper provides an overview of the SPHERES-Slosh Experiment (SSE) aboard the International Space Station (ISS) and presents on-orbit results with data analysis. In order to predict the location of the liquid propellant at all times during a spacecraft mission, engineers and mission analysts utilize Computational Fluid Dynamics (CFD). These state-of-the-art computer programs numerically solve the fluid flow equations to predict the location of the fluid at any point in time during different spacecraft maneuvers. The models and equations used by these programs have been extensively validated on the ground, but long-duration data has never been acquired in a microgravity environment. The SSE aboard the ISS is designed to acquire this type of data, used by engineers on Earth to validate and improve the CFD prediction models, improving the design of the next generation of space vehicles as well as the safety of current missions. The experiment makes use of two Synchronized Position Hold, Engage, Reorient Experimental Satellites (SPHERES) connected by a frame. In the center of the frame there is a plastic, pill-shaped tank that is partially filled with green-colored water. A pair of high-resolution cameras records the movement of the liquid inside the tank as the experiment maneuvers within the Japanese Experimental Module test volume. Inertial measurement units record the accelerations and rotations of the tank, making the combination of stereo imaging and inertial data the inputs for CFD model validation.

  11. Sooting turbulent jet flame: characterization and quantitative soot measurements

    NASA Astrophysics Data System (ADS)

    Köhler, M.; Geigle, K. P.; Meier, W.; Crosland, B. M.; Thomson, K. A.; Smallwood, G. J.

    2011-08-01

    Computational fluid dynamics (CFD) modelers require high-quality experimental data sets for validation of their numerical tools. Preferred features for numerical simulations of a sooting, turbulent test case flame are simplicity (no pilot flame), well-defined boundary conditions, and sufficient soot production. This paper proposes a non-premixed C2H4/air turbulent jet flame to fill this role and presents an extensive database for soot model validation. The sooting turbulent jet flame has a total visible flame length of approximately 400 mm and a fuel-jet Reynolds number of 10,000. The flame has a measured lift-off height of 26 mm, which acts as a sensitive marker for CFD model validation, and the compiled experimental database of soot properties, temperature and velocity maps is useful for the validation of kinetic soot models and numerical flame simulations. Due to the relatively simple burner design, which produces a flame with sufficient soot concentration while meeting modelers' needs with respect to boundary conditions and flame specifications, and given the present lack of a sooting "standard flame", this flame is suggested as a new reference turbulent sooting flame. The flame characterization presented here involved a variety of optical diagnostics, including quantitative 2D laser-induced incandescence (2D-LII), shifted-vibrational coherent anti-Stokes Raman spectroscopy (SV-CARS), and particle image velocimetry (PIV). Producing an accurate and comprehensive characterization of a transient sooting flame was challenging and required optimization of these diagnostics. In this respect, we present the first simultaneous, instantaneous PIV and LII measurements in a heavily sooting flame environment. Simultaneous soot and flow field measurements can provide new insights into the interaction between a turbulent vortex and flame chemistry, especially since soot structures in turbulent flames are known to be small and often treated in a statistical manner.

  12. Structural aspects of Lorentz-violating quantum field theory

    NASA Astrophysics Data System (ADS)

    Cambiaso, M.; Lehnert, R.; Potting, R.

    2018-01-01

    In the last couple of decades the Standard Model Extension has emerged as a fruitful framework to analyze the empirical and theoretical extent of the validity of cornerstones of modern particle physics, namely, of Special Relativity and of the discrete symmetries C, P and T (or some combinations of these). The Standard Model Extension makes it possible to contrast high-precision experimental tests with posited alterations representing minute Lorentz and/or CPT violations. To date no violation of these symmetry principles has been observed in experiments, mostly prompted by the Standard Model Extension. From the latter, bounds on the extent of departures from Lorentz and CPT symmetries can be obtained with ever increasing accuracy. These analyses have been mostly focused on tree-level processes. In this presentation I would like to comment on structural aspects of perturbative Lorentz-violating quantum field theory. I will show that some insight coming from radiative corrections demands a careful reassessment of perturbation theory. Specifically, I will argue that both the standard renormalization procedure and the Lehmann-Symanzik-Zimmermann reduction formalism need to be adapted, given that the asymptotic single-particle states can receive quantum corrections from Lorentz-violating operators that are not present in the original Lagrangian.

  13. 40 CFR 761.386 - Required experimental conditions for the validation study and subsequent use during decontamination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... experimental conditions for the validation study and subsequent use during decontamination. The following experimental conditions apply for any solvent: (a) Temperature and pressure. Conduct the validation study and...

  14. 40 CFR 761.386 - Required experimental conditions for the validation study and subsequent use during decontamination.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... experimental conditions for the validation study and subsequent use during decontamination. The following experimental conditions apply for any solvent: (a) Temperature and pressure. Conduct the validation study and...

  15. 40 CFR 761.386 - Required experimental conditions for the validation study and subsequent use during decontamination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... experimental conditions for the validation study and subsequent use during decontamination. The following experimental conditions apply for any solvent: (a) Temperature and pressure. Conduct the validation study and...

  16. The Physiological Molecular Shape of Spectrin: A Compact Supercoil Resembling a Chinese Finger Trap.

    PubMed

    Brown, Jeffrey W; Bullitt, Esther; Sriswasdi, Sira; Harper, Sandra; Speicher, David W; McKnight, C James

    2015-06-01

    The primary, secondary, and tertiary structures of spectrin are reasonably well defined, but the structural basis for the known dramatic molecular shape change, whereby the molecular length can increase three-fold, is not understood. In this study, we combine previously reported biochemical and high-resolution crystallographic data with structural mass spectrometry and electron microscopic data to derive a detailed, experimentally supported quaternary structure of the spectrin heterotetramer. In addition to explaining spectrin's physiological resting length of ~55-65 nm, our model provides a mechanism by which spectrin is able to undergo a seamless three-fold extension while remaining a linear filament, an experimentally observed property. According to the proposed model, spectrin's quaternary structure and mechanism of extension is similar to a Chinese Finger Trap: at shorter molecular lengths spectrin is a hollow cylinder that extends by increasing the pitch of each spectrin repeat, which decreases the internal diameter. We validated our model with electron microscopy, which demonstrated that, as predicted, spectrin is hollow at its biological resting length of ~55-65 nm. The model is further supported by zero-length chemical crosslink data indicative of an approximately 90-degree bend between adjacent spectrin repeats. The domain-domain interactions in our model are entirely consistent with those present in the prototypical linear antiparallel heterotetramer as well as recently reported inter-strand chemical crosslinks. The model is consistent with all known physical properties of spectrin, and upon full extension our Chinese Finger Trap Model reduces to the ~180-200 nm molecular model currently in common use.

  17. Temperature measurement reliability and validity with thermocouple extension leads or changing lead temperature.

    PubMed

    Jutte, Lisa S; Long, Blaine C; Knight, Kenneth L

    2010-01-01

    Thermocouple leads are often too short, necessitating the use of an extension lead. Our objective was to determine whether temperature measures were influenced by extension-lead use or by lead temperature changes. Descriptive laboratory study. Laboratory. Experiment 1: 10 IT-21 thermocouples and 5 extension leads. Experiment 2: 5 IT-21 and PT-6 thermocouples. In experiment 1, temperature data were collected on 10 IT-21 thermocouples in a stable water bath with and without extension leads. In experiment 2, temperature data were collected on 5 IT-21 and PT-6 thermocouples in a stable water bath before, during, and after ice-pack application to extension leads. In experiment 1, extension leads did not influence IT-21 validity (P = .45) or reliability (P = .10). In experiment 2, postapplication IT-21 temperatures were greater than preapplication and application measures (P < .05). Extension leads had no influence on temperature measures. Ice application to leads may increase measurement error.

  18. Roadmap to Clinical Use of Gold Nanoparticles for Radiation Sensitization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuemann, Jan, E-mail: jschuemann@mgh.harvard.edu; Berbeco, Ross; Chithrani, Devika B.

    2016-01-01

    The past decade has seen a dramatic increase in interest in the use of gold nanoparticles (GNPs) as radiation sensitizers for radiation therapy. This interest was initially driven by their strong absorption of ionizing radiation and the resulting ability to increase dose deposited within target volumes even at relatively low concentrations. These early observations are supported by extensive experimental validation, showing GNPs' efficacy at sensitizing tumors in both in vitro and in vivo systems to a range of types of ionizing radiation, including kilovoltage and megavoltage X rays as well as charged particles. Despite this experimental validation, there has been limited translation of GNP-mediated radiation sensitization to a clinical setting. One of the key challenges in this area is the wide range of experimental systems that have been investigated, spanning a range of particle sizes, shapes, and preparations. As a result, mechanisms of uptake and radiation sensitization have remained difficult to clearly identify. This has proven a significant impediment to the identification of optimal GNP formulations which strike a balance among their radiation sensitizing properties, their specificity to the tumors, their biocompatibility, and their imageability in vivo. This white paper reviews the current state of knowledge in each of the areas concerning the use of GNPs as radiosensitizers, and outlines the steps which will be required to advance GNP-enhanced radiation therapy from their current pre-clinical setting to clinical trials and eventual routine usage.

  19. Students' views about the nature of experimental physics

    NASA Astrophysics Data System (ADS)

    Wilcox, Bethany

    2017-04-01

    The physics community explores and explains the physical world through a blend of theoretical and experimental studies. The future of physics as a discipline depends on training of students in both the theoretical and experimental aspects of the field. However, while student learning within lecture courses has been the subject of extensive research, lab courses remain relatively under-studied. In particular, there is little, if any, data available that addresses the effectiveness of physics lab courses at encouraging students to recognize the nature and importance of experimental physics within the discipline as a whole. To address this gap, we present the first large-scale, national study (N = 71 institutions and N = 7167 students) of undergraduate physics lab courses through analysis of students' responses to a research-validated assessment designed to investigate students' beliefs about the nature of experimental physics. We find that students often enter and leave physics lab courses with ideas about experimental physics that are inconsistent with the views of practicing experimental physicists, and this trend holds at both the introductory and upper-division levels. Despite this inconsistency, we find that both introductory and upper-division students are able to accurately predict the expert-like response even in cases where their personal views disagree. These findings have implications for the recruitment, retention, and adequate preparation of students in physics. This work was funded by the NSF-IUSE Grant No. DUE-1432204 and NSF Grant No. PHY-1125844.

  20. Large-Scale Mapping and Validation of Escherichia coli Transcriptional Regulation from a Compendium of Expression Profiles

    PubMed Central

    Thaden, Joshua T; Mogno, Ilaria; Wierzbowski, Jamey; Cottarel, Guillaume; Kasif, Simon; Collins, James J; Gardner, Timothy S

    2007-01-01

    Machine learning approaches offer the potential to systematically identify transcriptional regulatory interactions from a compendium of microarray expression profiles. However, experimental validation of the performance of these methods at the genome scale has remained elusive. Here we assess the global performance of four existing classes of inference algorithms using 445 Escherichia coli Affymetrix arrays and 3,216 known E. coli regulatory interactions from RegulonDB. We also developed and applied the context likelihood of relatedness (CLR) algorithm, a novel extension of the relevance networks class of algorithms. CLR demonstrates an average precision gain of 36% relative to the next-best performing algorithm. At a 60% true positive rate, CLR identifies 1,079 regulatory interactions, of which 338 were in the previously known network and 741 were novel predictions. We tested the predicted interactions for three transcription factors with chromatin immunoprecipitation, confirming 21 novel interactions and verifying our RegulonDB-based performance estimates. CLR also identified a regulatory link providing central metabolic control of iron transport, which we confirmed with real-time quantitative PCR. The compendium of expression data compiled in this study, coupled with RegulonDB, provides a valuable model system for further improvement of network inference algorithms using experimental data. PMID:17214507
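    The CLR background correction described above can be sketched as follows: each pairwise similarity score (mutual information in the paper; any symmetric similarity matrix in this sketch) is z-scored against the score distributions of both genes involved, negatives are clipped, and the two z-scores are combined. This is a simplified reading of the published algorithm, not its reference implementation.

```python
import numpy as np

def clr_scores(sim):
    """Context Likelihood of Relatedness (CLR) style background correction:
    z-score each pairwise similarity against the score distribution of each
    of the two genes involved, clip negatives, and combine the z-scores."""
    sim = np.asarray(sim, dtype=float)
    mu = sim.mean(axis=1, keepdims=True)
    sd = sim.std(axis=1, keepdims=True) + 1e-12  # guard against zero spread
    z_row = np.maximum((sim - mu) / sd, 0.0)      # context of gene i
    z_col = np.maximum((sim - mu.T) / sd.T, 0.0)  # context of gene j
    return np.sqrt(z_row**2 + z_col**2)

# Toy symmetric similarity matrix: genes 0 and 1 share one strong edge
# against a flat background, so CLR should rank that pair highest.
sim = np.array([[0.0, 0.9, 0.1, 0.1],
                [0.9, 0.0, 0.1, 0.1],
                [0.1, 0.1, 0.0, 0.1],
                [0.1, 0.1, 0.1, 0.0]])
z = clr_scores(sim)
```

    The background correction is the key design choice: a raw similarity score is only deemed significant relative to the typical scores of the two genes it connects, which suppresses hub-driven false positives.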

  1. Validating and improving a zero-dimensional stack voltage model of the Vanadium Redox Flow Battery

    NASA Astrophysics Data System (ADS)

    König, S.; Suriyah, M. R.; Leibfried, T.

    2018-02-01

    Simple, computationally efficient battery models can contribute significantly to the development of flow batteries. However, validation studies for these models on an industrial-scale stack level are rarely published. We first present in detail a simple stack voltage model for the Vanadium Redox Flow Battery. For modeling the concentration overpotential, we derive mass transfer coefficients from experimental results presented in the 1990s. The calculated mass transfer coefficient of the positive half-cell is 63% larger than that of the negative half-cell, which is not considered in models published to date. Further, we advance the concentration overpotential model by introducing an apparent electrochemically active electrode surface which differs from the geometric electrode area. We use the apparent surface as a fitting parameter for adapting the model to experimental results of a flow battery manufacturer. For adapting the model, we propose a method for determining the agreement between model and reality quantitatively. To protect the manufacturer's intellectual property, we introduce a normalization method for presenting the results. For the studied stack, the apparent electrochemically active surface of the electrode is 41% larger than its geometrical area. Hence, the current density in the diffusion layer is 29% smaller than previously reported for a zero-dimensional model.
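    The concentration overpotential that these mass transfer coefficients feed into is, in standard electrochemical theory, a logarithmic function of how close the current density is to the mass-transfer-limited current density. The sketch below uses that textbook form; the paper's exact formulation, sign conventions, and parameter values may differ.

```python
import math

F_CONST = 96485.0   # Faraday constant, C/mol
R_GAS = 8.314       # universal gas constant, J/(mol*K)

def concentration_overpotential(i, c_bulk, k_m, T=298.15):
    """Textbook-form concentration overpotential (V) for a half-cell:
    eta = (R*T/F) * ln(1 - i/i_lim), with i the current density (A/m^2),
    c_bulk the bulk concentration (mol/m^3), and k_m the mass transfer
    coefficient (m/s). Illustrative only; a real VRFB model differs."""
    i_lim = F_CONST * k_m * c_bulk   # limiting current density, A/m^2
    if i >= i_lim:
        raise ValueError("current density at or above the limiting value")
    return (R_GAS * T / F_CONST) * math.log(1.0 - i / i_lim)

# At zero current there is no concentration loss; the loss grows
# (more negative overpotential) as the current approaches the limit.
eta_zero = concentration_overpotential(0.0, 1000.0, 1e-5)
eta_load = concentration_overpotential(100.0, 1000.0, 1e-5)
```

    This also shows why a larger mass transfer coefficient (as found for the positive half-cell) matters: it raises the limiting current density and thus reduces the concentration loss at a given current.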

  2. Atomistic structural ensemble refinement reveals non-native structure stabilizes a sub-millisecond folding intermediate of CheY

    DOE PAGES

    Shi, Jade; Nobrega, R. Paul; Schwantes, Christian; ...

    2017-03-08

    The dynamics of globular proteins can be described in terms of transitions between a folded native state and less-populated intermediates, or excited states, which can play critical roles in both protein folding and function. Excited states are by definition transient species, and therefore are difficult to characterize using current experimental techniques. We report an atomistic model of the excited state ensemble of a stabilized mutant of an extensively studied flavodoxin fold protein CheY. We employed a hybrid simulation and experimental approach in which an aggregate 42 milliseconds of all-atom molecular dynamics were used as an informative prior for the structure of the excited state ensemble. The resulting prior was then refined against small-angle X-ray scattering (SAXS) data employing an established method (EROS). The most striking feature of the resulting excited state ensemble was an unstructured N-terminus stabilized by non-native contacts in a conformation that is topologically simpler than the native state. Using these results, we then predict incisive single-molecule FRET experiments as a means of model validation. Our study demonstrates the paradigm of uniting simulation and experiment in a statistical model to study the structure of protein excited states and rationally design validating experiments.

  3. Atomistic structural ensemble refinement reveals non-native structure stabilizes a sub-millisecond folding intermediate of CheY

    NASA Astrophysics Data System (ADS)

    Shi, Jade; Nobrega, R. Paul; Schwantes, Christian; Kathuria, Sagar V.; Bilsel, Osman; Matthews, C. Robert; Lane, T. J.; Pande, Vijay S.

    2017-03-01

    The dynamics of globular proteins can be described in terms of transitions between a folded native state and less-populated intermediates, or excited states, which can play critical roles in both protein folding and function. Excited states are by definition transient species, and therefore are difficult to characterize using current experimental techniques. Here, we report an atomistic model of the excited state ensemble of a stabilized mutant of an extensively studied flavodoxin fold protein CheY. We employed a hybrid simulation and experimental approach in which an aggregate 42 milliseconds of all-atom molecular dynamics were used as an informative prior for the structure of the excited state ensemble. This prior was then refined against small-angle X-ray scattering (SAXS) data employing an established method (EROS). The most striking feature of the resulting excited state ensemble was an unstructured N-terminus stabilized by non-native contacts in a conformation that is topologically simpler than the native state. Using these results, we then predict incisive single molecule FRET experiments as a means of model validation. This study demonstrates the paradigm of uniting simulation and experiment in a statistical model to study the structure of protein excited states and rationally design validating experiments.

  4. Curation accuracy of model organism databases

    PubMed Central

    Keseler, Ingrid M.; Skrzypek, Marek; Weerasinghe, Deepika; Chen, Albert Y.; Fulcher, Carol; Li, Gene-Wei; Lemmer, Kimberly C.; Mladinich, Katherine M.; Chow, Edmond D.; Sherlock, Gavin; Karp, Peter D.

    2014-01-01

    Manual extraction of information from the biomedical literature—or biocuration—is the central methodology used to construct many biological databases. For example, the UniProt protein database, the EcoCyc Escherichia coli database and the Candida Genome Database (CGD) are all based on biocuration. Biological databases are used extensively by life science researchers, as online encyclopedias, as aids in the interpretation of new experimental data and as gold standards for the development of new bioinformatics algorithms. Although manual curation has been assumed to be highly accurate, we are aware of only one previous study of biocuration accuracy. We assessed the accuracy of EcoCyc and CGD by manually selecting curated assertions within randomly chosen EcoCyc and CGD gene pages and by then validating that the data found in the referenced publications supported those assertions. A database assertion is considered to be in error if that assertion could not be found in the publication cited for that assertion. We identified 10 errors in the 633 facts that we validated across the two databases, for an overall error rate of 1.58%, and individual error rates of 1.82% for CGD and 1.40% for EcoCyc. These data suggest that manual curation of the experimental literature by Ph.D.-level scientists is highly accurate. Database URL: http://ecocyc.org/, http://www.candidagenome.org// PMID:24923819
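    The headline numbers reduce to a simple proportion. The small sketch below reproduces the 1.58% overall error rate and adds a normal-approximation confidence interval; the interval is our addition for illustration, not a statistic reported in the abstract.

```python
import math

def error_rate_ci(errors, total, z=1.96):
    """Point estimate and normal-approximation 95% confidence interval
    for a proportion of erroneous assertions among those validated."""
    p = errors / total
    half = z * math.sqrt(p * (1.0 - p) / total)
    return p, max(p - half, 0.0), p + half

# The study's combined count: 10 errors among 633 validated facts.
p, lo, hi = error_rate_ci(10, 633)
```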

  5. Theoretical modeling and experimental validation of a torsional piezoelectric vibration energy harvesting system

    NASA Astrophysics Data System (ADS)

    Qian, Feng; Zhou, Wanlu; Kaluvan, Suresh; Zhang, Haifeng; Zuo, Lei

    2018-04-01

    Vibration energy harvesting has been extensively studied in recent years to explore a continuous power source for sensor networks and low-power electronics. Torsional vibration widely exists in mechanical engineering; however, it has not yet been well exploited for energy harvesting. This paper presents a theoretical model and an experimental validation of a torsional vibration energy harvesting system comprised of a shaft and a shear mode piezoelectric transducer. The piezoelectric transducer position on the surface of the shaft is parameterized by two variables that are optimized to obtain the maximum power output. The piezoelectric transducer can work in the d15 mode (pure shear), in the coupled mode of d31 and d33, or in the coupled mode of d33, d31 and d15, depending on the angle at which it is attached. Approximate expressions for voltage and power are derived from the theoretical model, which gives predictions in good agreement with analytical solutions. A physical interpretation of the implicit relationship between the power output and the position parameters of the piezoelectric transducer is given based on the derived approximate expressions. The optimal position and angle of the piezoelectric transducer are determined, in which case the transducer works in the coupled mode of d15, d31 and d33.

  6. Application of Jacobian-free Newton–Krylov method in implicitly solving two-fluid six-equation two-phase flow problems: Implementation, validation and benchmark

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-03-09

    This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with the two-fluid six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme with staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated with existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. Additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn provides the possibility to utilize more sophisticated flow regime maps in the future to further improve simulation accuracy.
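    The core of a Jacobian-free Newton–Krylov solver is that the Krylov method only needs Jacobian-vector products, which can be approximated by a finite difference of the residual function, so the Jacobian matrix is never assembled. A minimal unpreconditioned sketch on a hypothetical two-equation test problem is shown below; a real two-phase flow solver adds physics-based preconditioning and globalization on top of this skeleton.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, u0, tol=1e-10, max_newton=20, eps=1e-7):
    """Jacobian-free Newton-Krylov: Newton's method in which each linear
    solve uses GMRES with Jacobian-vector products approximated by a
    finite difference, so the Jacobian is never formed explicitly."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # Matrix-free matvec: J(u) @ v ~= (F(u + eps*v) - F(u)) / eps
        matvec = lambda v: (F(u + eps * v) - r) / eps
        J = LinearOperator((u.size, u.size), matvec=matvec)
        du, _ = gmres(J, -r, atol=1e-12)
        u = u + du
    return u

# Hypothetical test problem: x^2 + y^2 = 2 and x = y, with root (1, 1).
F = lambda u: np.array([u[0]**2 + u[1]**2 - 2.0, u[0] - u[1]])
root = jfnk(F, np.array([2.0, 0.5]))
```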

  7. Synchrotron characterization of nanograined UO2 grain growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mo, Kun; Miao, Yinbin; Yun, Di

    2015-09-30

    This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize our preliminary synchrotron radiation experiments at APS to determine the grain size of nanograin UO2. The methodology and experimental setup developed in this experiment can directly apply to the proposed in-situ grain growth measurements. The investigation of the grain growth kinetics was conducted based on isothermal annealing and grain growth characterization as functions of duration and temperature. The kinetic parameters such as activation energy for grain growth for UO2 with different stoichiometry are obtained and compared with molecular dynamics (MD) simulations.

  8. Supplying materials needed for grain growth characterizations of nano-grained UO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mo, Kun; Miao, Yinbin; Yun, Di

    2015-09-30

    This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next-generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize our preliminary synchrotron radiation experiments at APS to determine the grain size of nanograined UO2. The methodology and experimental setup developed in this experiment can be directly applied to the proposed in-situ grain growth measurements. The investigation of the grain growth kinetics was conducted based on isothermal annealing and grain growth characterization as functions of duration and temperature. Kinetic parameters such as the activation energy for grain growth for UO2 with different stoichiometries are obtained and compared with molecular dynamics (MD) simulations.

  9. Assessment of leaf carotenoids content with a new carotenoid index: Development and validation on experimental and model data

    NASA Astrophysics Data System (ADS)

    Zhou, Xianfeng; Huang, Wenjiang; Kong, Weiping; Ye, Huichun; Dong, Yingying; Casa, Raffaele

    2017-05-01

    Leaf carotenoid content (LCar) is an important indicator of plant physiological status. Accurate estimation of LCar provides valuable insight into early detection of stress in vegetation. With spectroscopy techniques, a semi-empirical approach based on spectral indices has been extensively used for carotenoid content estimation. However, established spectral indices for carotenoids, which generally rely on limited measured data, may lack predictive accuracy for carotenoid estimation across species and growth stages. In this study, we propose a new carotenoid index (CARI) for LCar assessment based on a large synthetic dataset simulated from the leaf radiative transfer model PROSPECT-5, and evaluate its capability with both simulated data from PROSPECT-5 and 4SAIL and extensive experimental datasets: the ANGERS dataset and experimental data acquired in field experiments in China in 2004. Results show that, compared with published spectral indices, CARI was the index most linearly correlated with carotenoid content at the leaf level on the synthetic dataset (R2 = 0.943, RMSE = 1.196 μg/cm2). Cross-validation with CARI using ANGERS data achieved reasonably accurate estimation (R2 = 0.545, RMSE = 3.413 μg/cm2), though RBRI performed best (R2 = 0.727, RMSE = 2.640 μg/cm2). CARI also showed good accuracy (R2 = 0.639, RMSE = 1.520 μg/cm2) for LCar assessment with leaf-level field survey data, though PRI performed better (R2 = 0.710, RMSE = 1.369 μg/cm2). Whereas RBRI, PRI and the other assessed spectral indices performed well on particular datasets, their estimation accuracy was not consistent across all datasets used in this study; CARI was more robust, giving good results on all datasets. Further assessment of LCar with simulated and measured canopy reflectance data indicated that CARI may not be very sensitive to LCar changes at low leaf area index (LAI) values, and that under these conditions soil moisture influenced LCar retrieval accuracy.

  10. Validity and reliability of a low-cost digital dynamometer for measuring isometric strength of lower limb.

    PubMed

    Romero-Franco, Natalia; Jiménez-Reyes, Pedro; Montaño-Munuera, Juan A

    2017-11-01

    Lower limb isometric strength is a key parameter for monitoring the training process and recognising muscle weakness and injury risk. However, valid and reliable methods to evaluate it often require high-cost tools. The aim of this study was to analyse the concurrent validity and reliability of a low-cost digital dynamometer for measuring isometric strength in the lower limb. Eleven physically active and healthy participants performed maximal isometric strength tests for flexion and extension of the ankle; flexion and extension of the knee; and flexion, extension, adduction, abduction, and internal and external rotation of the hip. Data obtained with the digital dynamometer were compared with data from an isokinetic dynamometer to examine concurrent validity. Data obtained with the digital dynamometer by 2 different evaluators and in 2 different sessions were compared to examine inter-rater and intra-rater reliability. The intra-class correlation (ICC) for validity was excellent for every movement (ICC > 0.9). Intra- and inter-tester reliability was excellent for all movements assessed (ICC > 0.75). The low-cost digital dynamometer demonstrated strong concurrent validity and excellent intra- and inter-tester reliability for assessing isometric strength in the main lower limb movements.
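For reference, an intra-class correlation of the kind reported in such validity and reliability studies can be computed from an ANOVA mean-squares decomposition. This sketch implements the single-measure two-way random-effects form ICC(2,1) (the record does not state which ICC variant the authors used), and the ratings matrix is hypothetical, not the study's data.

```python
import numpy as np

def icc_2_1(ratings):
    """Single-measure, two-way random-effects intra-class correlation ICC(2,1)
    (Shrout & Fleiss).  ratings: n subjects x k raters."""
    n, k = ratings.shape
    gm = ratings.mean()
    ms = ratings.mean(axis=1)   # per-subject means
    mr = ratings.mean(axis=0)   # per-rater means
    msr = k * ((ms - gm) ** 2).sum() / (n - 1)   # between-subjects mean square
    msc = n * ((mr - gm) ** 2).sum() / (k - 1)   # between-raters mean square
    mse = ((ratings - ms[:, None] - mr[None, :] + gm) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ratings: two raters in perfect agreement across three subjects,
# which should yield ICC = 1.
perfect = np.array([[10.0, 10.0], [20.0, 20.0], [30.0, 30.0]])
icc = icc_2_1(perfect)
```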

  11. Vibration control of beams using stand-off layer damping: finite element modeling and experiments

    NASA Astrophysics Data System (ADS)

    Chaudry, A.; Baz, A.

    2006-03-01

    Damping treatments with stand-off layer (SOL) have been widely accepted as an attractive alternative to conventional constrained layer damping (CLD) treatments. Such acceptance stems from the fact that the SOL, which is simply a slotted spacer layer sandwiched between the viscoelastic layer and the base structure, acts as a strain magnifier that considerably amplifies the shear strain and hence the energy dissipation of the viscoelastic layer. Accordingly, more effective vibration suppression can be achieved by using SOL as compared to employing CLD. In this paper, a comprehensive finite element model of the stand-off layer constrained damping treatment is developed. The model accounts for the geometrical and physical parameters of the slotted SOL, the viscoelastic layer, the constraining layer, and the base structure. The predictions of the model are validated against the predictions of a distributed transfer function model and a model built using a commercial finite element code (ANSYS). Furthermore, the theoretical predictions are validated experimentally for passive SOL treatments of different configurations. The obtained results indicate close agreement between theory and experiments, and demonstrate the effectiveness of CLD with SOL in enhancing energy dissipation as compared to conventional CLD. Extending the proposed one-dimensional CLD with SOL to more complex structures is a natural next step for this work.

  12. Analysis and Validation of a Predictive Model for Growth and Death of Aeromonas hydrophila under Modified Atmospheres at Refrigeration Temperatures

    PubMed Central

    Pin, Carmen; Velasco de Diego, Raquel; George, Susan; García de Fernando, Gonzalo D.; Baranyi, József

    2004-01-01

    Specific growth and death rates of Aeromonas hydrophila were measured in laboratory media under various combinations of temperature, pH, and percent CO2 and O2 in the atmosphere. Predictive models were developed from the data and validated by means of observations obtained from (i) seafood experiments set up for this purpose and (ii) the ComBase database (http://www.combase.cc; http://wyndmoor.arserrc.gov/combase/). Two main reasons were identified for the differences between the predicted and observed growth in food: the variability of the growth rates in food and the bias of the model predictions when applied to food environments. A statistical method is presented to quantitatively analyze these differences. The method was also used to extend the interpolation region of the model. In this extension, the concept of generalized Z values (C. Pin, G. García de Fernando, J. A. Ordóñez, and J. Baranyi, Food Microbiol. 18:539-545, 2001) played an important role. The extension depended partly on the density of the model-generating observations and partly on the accuracy of extrapolated predictions close to the boundary of the interpolation region. The boundary of the growth region of the organism was also estimated by means of experimental results for growth and death rates. PMID:15240265

  13. Electrode Coverage Optimization for Piezoelectric Energy Harvesting from Tip Excitation

    PubMed Central

    Chen, Guangzhu; Bai, Nan

    2018-01-01

    Piezoelectric energy harvesting using cantilever-type structures has been extensively investigated due to its potential application in providing power supplies for wireless sensor networks, but low output power has been a bottleneck for its further commercialization. To improve the power conversion capability, a piezoelectric beam with different electrode coverage ratios is studied theoretically and experimentally in this paper. A distributed-parameter theoretical model is established for a bimorph piezoelectric beam with consideration of the electrode coverage area. The impact of the electrode coverage on the capacitance, the output power and the optimal load resistance is analyzed, showing that the piezoelectric beam has the best performance with an electrode coverage of 66.1%. An experimental study was then carried out to validate the theoretical results using a piezoelectric beam fabricated with segmented electrodes. The experimental results fit well with the theoretical model. A 12% improvement in the Root-Mean-Square (RMS) output power was achieved with the optimized electrode coverage ratio (66.1%). This work provides a simple approach to utilizing piezoelectric beams in a more efficient way. PMID:29518934

  14. Reply to Comment on ‘The motion of an arbitrarily rotating spherical projectile and its application to ball games’

    NASA Astrophysics Data System (ADS)

    Robinson, Garry; Robinson, Ian

    2014-06-01

    Jensen (2014 Phys. Scr. 89 067001) presents arguments that the expressions that we have used in our recent paper (Robinson and Robinson 2013 Phys. Scr. 88 018101) for the lift force and possibly the drag force acting on a rotating spherical projectile are dimensionally incorrect and therefore cannot be valid. We acknowledge that the alternative equations suggested by Jensen are dimensionally correct, and may well be borne out by future experimental results. However, we demonstrate that our equations are in fact also dimensionally correct, the key concept being that of having the appropriate dimensions for the multiplying constants, an extensively used practice with experimentally determined laws. After a detailed discussion of the situation, a simple illustrative example of Hooke's law for the restoring force, F, due to a mass attached to a spring displaced by a distance x from its equilibrium position is presented, where the spring constant, k, has such units as to render the equation dimensionally correct. Finally we discuss the implications of some relevant existing experimental results for the lift force.

  15. Role of metabolism and viruses in aflatoxin-induced liver cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groopman, John D.; Kensler, Thomas W.

    The use of biomarkers in molecular epidemiology studies for identifying stages in the progression of development of the health effects of environmental agents has the potential for providing important information for critical regulatory, clinical and public health problems. Investigations of aflatoxins probably represent one of the most extensive data sets in the field and this work may serve as a template for future studies of other environmental agents. The aflatoxins are naturally occurring mycotoxins found on foods such as corn, peanuts, various other nuts and cottonseed, and they have been demonstrated to be carcinogenic in many experimental models. As a result of nearly 30 years of study, experimental data and epidemiological studies in human populations, aflatoxin B1 was classified as carcinogenic to humans by the International Agency for Research on Cancer. The long-term goal of the research described herein is the application of biomarkers to the development of preventative interventions for use in human populations at high risk for cancer. Several of the aflatoxin-specific biomarkers have been validated in epidemiological studies and are now being used as intermediate biomarkers in prevention studies. The development of these aflatoxin biomarkers has been based upon knowledge of the biochemistry and toxicology of aflatoxins gleaned from both experimental and human studies. These biomarkers have subsequently been utilized in experimental models to provide data on the modulation of these markers under different situations of disease risk. This systematic approach provides encouragement for preventive interventions and should serve as a template for the development, validation and application of other chemical-specific biomarkers to cancer or other chronic diseases.

  16. Evaluated cross-section libraries and kerma factors for neutrons up to 100 MeV on ¹²C

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chadwick, M.B.; Blann, M.; Cox, L.

    1995-04-11

    A program is being carried out at Lawrence Livermore National Laboratory to develop high-energy evaluated nuclear data libraries for use in Monte Carlo simulations of cancer radiation therapy. In this report we describe evaluated cross sections and kerma factors for neutrons with incident energies up to 100 MeV on ¹²C. The aim of this effort is to incorporate advanced nuclear physics modeling methods, together with new experimental measurements, to generate the cross-section libraries needed for an accurate simulation of dose deposition in fast neutron therapy. The evaluated libraries are based mainly on nuclear model calculations, benchmarked to experimental measurements where they exist. We use the GNASH code system, which includes Hauser-Feshbach, preequilibrium, and direct reaction mechanisms. The libraries tabulate elastic and nonelastic cross sections, angle-energy correlated production spectra for light ejectiles with A ≤ …, and the kinetic energies given to light ejectiles and heavy recoil fragments. The major steps involved in this effort are: (1) development and validation of nuclear models for incident energies up to 100 MeV; (2) collation of experimental measurements, including new results from Louvain-la-Neuve and Los Alamos; (3) extension of the Livermore ENDL formats for representing high-energy data; (4) calculation and evaluation of nuclear data; and (5) validation of the libraries. We describe the evaluations in detail, with particular emphasis on our new high-energy modeling developments. Our evaluations agree well with experimental measurements of integrated and differential cross sections. We compare our results with the recent ENDF/B-VI evaluation, which extends up to 32 MeV.

  17. Validation of the FEA of a deep drawing process with additional force transmission

    NASA Astrophysics Data System (ADS)

    Behrens, B.-A.; Bouguecha, A.; Bonk, C.; Grbic, N.; Vucetic, M.

    2017-10-01

    In order to meet automotive industry requirements such as reduced CO2 emissions, which translate into reducing vehicle mass in the car body, the chassis and the powertrain, continuous innovation and further development of existing production processes are required. In sheet metal forming processes, the process limits and component characteristics are defined by the process-specific loads. When the load limits are exceeded, material failure occurs; this can be avoided by an additional force transmission activated in the deep drawing process before the process limit is reached. This contribution deals with experimental investigations of a forming process with additional force transmission with regard to extending the process limits. Based on FEA, a tool system is designed and developed by IFUM. For this purpose, the steel HCT600 is analyzed numerically. Within the experimental investigations, deep drawing processes with and without additional force transmission are carried out, and the produced rectangular cups are compared. Subsequently, the identical deep drawing processes are investigated numerically. The values of the punch reaction force and displacement are estimated and compared with experimental results; thus, the validation of the material model is successfully carried out at process scale. For further quantitative verification of the FEA results, the experimentally determined geometry of the rectangular cup is measured optically with the ATOS system of the company GOM mbH and digitally compared using the external software Geomagic® Qualify™. The goal of this paper is to verify the transferability of the FEA model for a conventional deep drawing process to a deep drawing process with additional force transmission with a counter punch.

  18. Annotation of Alternatively Spliced Proteins and Transcripts with Protein-Folding Algorithms and Isoform-Level Functional Networks.

    PubMed

    Li, Hongdong; Zhang, Yang; Guan, Yuanfang; Menon, Rajasree; Omenn, Gilbert S

    2017-01-01

    Tens of thousands of splice isoforms of proteins have been catalogued as predicted sequences from transcripts in humans and other species. Relatively few have been characterized biochemically or structurally. With the extensive development of protein bioinformatics, the characterization and modeling of isoform features, isoform functions, and isoform-level networks have advanced notably. Here we present applications of the I-TASSER family of algorithms for folding and functional predictions and the IsoFunc, MIsoMine, and Hisonet data resources for isoform-level analyses of network and pathway-based functional predictions and protein-protein interactions. Hopefully, predictions and insights from protein bioinformatics will stimulate many experimental validation studies.

  19. Image encryption using a synchronous permutation-diffusion technique

    NASA Astrophysics Data System (ADS)

    Enayatifar, Rasul; Abdullah, Abdul Hanan; Isnin, Ismail Fauzi; Altameem, Ayman; Lee, Malrey

    2017-03-01

    In the past decade, interest in digital image security has increased among scientists. A synchronous permutation and diffusion technique is designed to protect gray-level image content while sending it over the internet. To implement the proposed method, the two-dimensional plain image is converted to one dimension. Afterward, in order to reduce processing time, the permutation and diffusion steps for each pixel are performed at the same time. The permutation step uses a chaotic map and deoxyribonucleic acid (DNA) coding to permute a pixel, while diffusion employs a DNA sequence and DNA operators to encrypt the pixel. Experimental results and extensive security analyses demonstrate the feasibility and validity of the proposed image encryption method.
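The synchronous permutation-diffusion structure can be sketched as follows. Note this toy substitutes a logistic chaotic map and XOR diffusion for the paper's DNA coding and DNA operators, so it illustrates only the overall flow (flatten the image, derive a chaotic permutation, and apply permutation plus keystream diffusion in a single pass), not the actual scheme.

```python
import numpy as np

def logistic_stream(n, x0=0.37, r=3.99):
    """Chaotic keystream from the logistic map (illustrative key generator)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, x0=0.37):
    flat = img.flatten()                  # 2-D plain image -> 1-D
    ks = logistic_stream(flat.size, x0)
    perm = np.argsort(ks)                 # permutation derived from the chaotic sequence
    key = (ks * 255).astype(np.uint8)     # diffusion keystream
    cipher = flat[perm] ^ key             # permute and diffuse in one pass
    return cipher.reshape(img.shape), perm, key

def decrypt(cipher, perm, key):
    flat = cipher.flatten() ^ key         # undo diffusion
    plain = np.empty_like(flat)
    plain[perm] = flat                    # undo permutation
    return plain.reshape(cipher.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy "image"
c, perm, key = encrypt(img)
```

In a real scheme the permutation and keystream would be keyed by secret initial conditions, and decryption would regenerate `perm` and `key` from the shared key rather than transmitting them.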

  20. On-irrigator pasture soil moisture sensor

    NASA Astrophysics Data System (ADS)

    Eng-Choon Tan, Adrian; Richards, Sean; Platt, Ian; Woodhead, Ian

    2017-02-01

    In this paper, we present the development of a proximal soil moisture sensor that measures the soil moisture content of dairy pasture directly from the boom of an irrigator. The proposed sensor is capable of soil moisture measurements with an accuracy of ±5% volumetric moisture content and at meter-scale ground area resolutions. The sensor adopts techniques from ultra-wideband radar to enable measurements of ground reflection at resolutions smaller than the antenna beamwidth of the sensor. An experimental prototype was developed for field measurements. Extensive field measurements using the developed prototype were conducted on grass pasture under different ground conditions to validate the accuracy of the sensor's soil moisture measurements.

  1. Catalysis on Single Supported Atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBusk, Melanie Moses; Narula, Chaitanya Kumar

    2015-01-01

    The highly successful application of supported metals as heterogeneous catalysts in automotive catalysts, fuel cells, and a multitude of other industrial processes has led to extensive efforts to understand catalyst behavior at the nano-scale. The recent discovery of simple wet methods to prepare single supported atoms, the smallest nano-catalyst, has allowed experimental validation of the catalytic activity of a variety of catalysts and offers the potential for large-scale production of such catalysts for industrial processes. In this chapter, we summarize the synthetic and structural aspects of single supported atoms. We also present proposed mechanisms for the activity of single supported catalysts where conventional mechanisms cannot operate due to the lack of M-M bonds in the catalysts.

  2. Workshop report - A validation study of Navier-Stokes codes for transverse injection into a Mach 2 flow

    NASA Technical Reports Server (NTRS)

    Eklund, Dean R.; Northam, G. B.; Mcdaniel, J. C.; Smith, Cliff

    1992-01-01

    A CFD (Computational Fluid Dynamics) competition was held at the Third Scramjet Combustor Modeling Workshop to assess the current state-of-the-art in CFD codes for the analysis of scramjet combustors. Solutions from six three-dimensional Navier-Stokes codes were compared for the case of staged injection of air behind a step into a Mach 2 flow. This case was investigated experimentally at the University of Virginia and extensive in-stream data was obtained. Code-to-code comparisons have been made with regard to both accuracy and efficiency. The turbulence models employed in the solutions are believed to be a major source of discrepancy between the six solutions.

  3. Lattice-Assisted Spectroscopy: A Generalized Scanning Tunneling Microscope for Ultracold Atoms.

    PubMed

    Kantian, A; Schollwöck, U; Giamarchi, T

    2015-10-16

    We propose a scheme to measure the frequency-resolved local particle and hole spectra of any optical lattice-confined system of correlated ultracold atoms that offers single-site addressing and imaging, which is now an experimental reality. Combining perturbation theory and time-dependent density matrix renormalization group simulations, we quantitatively test and validate this approach of lattice-assisted spectroscopy on several one-dimensional example systems, such as the superfluid and Mott insulator, with and without a parabolic trap, and finally on edge states of the bosonic Su-Schrieffer-Heeger model. We highlight extensions of our basic scheme to obtain an even wider variety of interesting and important frequency resolved spectra.

  4. Numerical Simulation with Experimental Validation of the Draping Behavior of Woven Fabrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodgers, William; Pasupuleti, Praveen; Zhao, Selina

    Woven fabric composites are extensively used in molding complex geometrical shapes due to their high conformability compared to other fabrics. Preforming is an important step in the overall process. In this step, the two-dimensional fabric is draped to assume the three-dimensional shape of the part prior to resin injection. During preforming, the orientation of the tows may change significantly from the initial orientations. Accurate prediction of the tow orientations after molding is important for evaluating the structural performance of the final part. This paper investigates the fiber angle changes for carbon fiber woven fabrics during draping over a truncated pyramid tool designed and fabricated at the General Motors Research Labs. This study is a subset of a broader effort conducted under a Department of Energy project awarded to GM to develop state-of-the-art computational tools for integrated manufacturing and structural performance prediction of carbon fiber composites. Fabric bending, picture frame testing, and bias-extension evaluations were carried out to determine the material parameters for these fabrics. The PAM-FORM computer program was used to model the draping behavior of these fabrics. Following deformation, fiber angle changes at different locations on the truncated pyramid were measured experimentally. The predicted angles matched the experimental results well along the centerline and at several different locations on the deformed fabric. Details of the test methods used as well as the numerical results with various simulation parameters are provided.

  5. Experimental and clinical usefulness of crossmodal paradigms in psychiatry: an illustration from emotional processing in alcohol-dependence

    PubMed Central

    Maurage, Pierre; Campanella, Salvatore

    2013-01-01

    Crossmodal processing (i.e., the construction of a unified representation stemming from inputs to distinct sensory modalities) constitutes a crucial ability in humans' everyday life. It has been extensively explored at the cognitive and cerebral levels during the last decade among healthy controls. Paradoxically, however, while difficulties in performing this integrative process have been suggested in a large range of psychopathological states (e.g., schizophrenia and autism), crossmodal paradigms have been very rarely used in the exploration of psychiatric populations. The main aim of the present paper is thus to underline the experimental and clinical usefulness of exploring crossmodal processes in psychiatry. We illustrate this proposal by means of recent data obtained in the crossmodal exploration of emotional alterations in alcohol-dependence. Indeed, emotional decoding impairments might have a role in the development and maintenance of alcohol-dependence, and have been extensively investigated by means of experiments using separate visual or auditory stimulations. Beyond these unimodal explorations, we have recently conducted several studies using audio-visual crossmodal paradigms, which have allowed us to improve the ecological validity of the unimodal experimental designs and to offer new insights into the emotional alterations among alcohol-dependent individuals. We will show how these preliminary results can be extended to develop a coherent and ambitious research program using crossmodal designs in various psychiatric populations and sensory modalities. We end the paper by underlining the various potential clinical applications and the fundamental implications that can be raised by this emerging project. PMID:23898250

  6. Effect of a Diffusion Zone on Fatigue Crack Propagation in Layered FGMs

    NASA Astrophysics Data System (ADS)

    Hauber, Brett; Brockman, Robert; Paulino, Glaucio

    2008-02-01

    Research into functionally graded materials (FGMs) has led to advances in our ability to analyze cracks. However, two prominent aspects remain relatively unexplored: 1) development and validation of modeling methods for fatigue crack propagation in FGMs, and 2) experimental validation of stress intensity models in engineered materials such as two phase monolithic and graded materials. This work addresses some of these problems for a limited set of conditions, material systems (e.g., Ti/TiB), and material gradients. Numerical analyses are conducted for single edge notch bend (SENB) specimens. Stress intensity factors are computed using the specialized finite element code I-Franc (Illinois Fracture Analysis Code), which is tailored for both homogeneous and graded materials, as well as Franc2DL and ABAQUS. Crack extension is considered by means of specified crack increments, together with fatigue evaluations to predict crack propagation life. Results will be used to determine linear material gradient parameters that are significant for prediction of fatigue crack growth behavior.

  7. Numerical Modeling of Turbulence Effects within an Evaporating Droplet in Atomizing Sprays

    NASA Technical Reports Server (NTRS)

    Balasubramanyam, M. S.; Chen, C. P.; Trinh, H. P.

    2006-01-01

    A new approach to account for finite thermal conductivity and turbulence effects within atomizing liquid sprays is presented in this paper. The model is an extension of the T-blob and T-TAB atomization/spray model of Trinh and Chen (2005). This finite conductivity model is based on the two-temperature film theory, where the turbulence characteristics of the droplet are used to estimate the effective thermal diffusivity within the droplet phase. Both one-way and two-way coupled calculations were performed to investigate the performance of this model. The current evaporation model is incorporated into the T-blob atomization model and implemented in an existing CFD Eulerian-Lagrangian two-way coupling numerical scheme. Validation studies were carried out by comparison with available evaporating atomizing spray experimental data in terms of jet penetration, temperature field, and droplet SMD distribution within the spray. The validation results indicate the superiority of the finite-conductivity model for low-speed parallel-flow evaporating sprays.

  8. Identification of appropriate reference genes for human mesenchymal stem cell analysis by quantitative real-time PCR.

    PubMed

    Li, Xiuying; Yang, Qiwei; Bai, Jinping; Xuan, Yali; Wang, Yimin

    2015-01-01

    Normalization to a reference gene is the method of choice for quantitative reverse transcription-PCR (RT-qPCR) analysis. The stability of reference genes is critical for accurate experimental results and conclusions. We have evaluated the expression stability of eight commonly used reference genes in four different human mesenchymal stem cell (MSC) types. Using the geNorm, NormFinder and BestKeeper algorithms, we show that beta-2-microglobulin and peptidyl-prolyl isomerase A were the optimal reference genes for normalizing RT-qPCR data obtained from MSC, whereas the TATA box binding protein was not suitable due to its extensive variability in expression. Our findings emphasize the significance of validating reference genes for qPCR analyses. We offer a short list of reference genes to use for normalization and recommend some commercially available software programs as a rapid approach to validate reference genes. We also demonstrate that the two most frequently used reference genes, β-actin and glyceraldehyde-3-phosphate dehydrogenase, are not always suitable.
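The geNorm stability measure used in this kind of analysis can be sketched as the mean standard deviation of a gene's pairwise log-expression ratios against all other candidates (lower M = more stable). The expression matrix below is fabricated purely to illustrate that a noisy candidate receives a higher (worse) M value.

```python
import numpy as np

def genorm_m(expr):
    """geNorm-style stability M for each candidate gene.
    expr: genes x samples matrix of log2 expression values.
    M[j] = mean over other genes k of std(expr[j] - expr[k])."""
    g = expr.shape[0]
    return np.array([
        np.mean([np.std(expr[j] - expr[k]) for k in range(g) if k != j])
        for j in range(g)
    ])

# Fabricated log2 expression for three candidate reference genes across
# five samples: genes 0 and 1 co-vary perfectly (a stable pair), gene 2 is noisy.
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
expr = np.vstack([base, base + 1.0, base + np.array([0.0, 3.0, -2.0, 1.0, 4.0])])
M = genorm_m(expr)
```

Ranking candidates by ascending M and dropping the worst is the core of the geNorm procedure; NormFinder and BestKeeper use different statistics toward the same end.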

  9. Tuning support vector machines for minimax and Neyman-Pearson classification.

    PubMed

    Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D

    2010-10-01

    This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.
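A minimal sketch of the cost-sensitive SVM idea discussed in this record, using scikit-learn's `class_weight` to sweep asymmetric costs and pick, from cross-validated error estimates, an operating point whose false-positive rate respects a Neyman-Pearson-style bound. The dataset, weight grid, and bound `alpha` are illustrative choices, and the paper's actual contribution (smoothing the cross-validation error estimates) is not attempted here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

# Well-separated toy data; class 0 plays the "null" class whose
# false-positive rate we constrain to be at most alpha.
X, y = make_classification(n_samples=300, n_features=2, n_redundant=0,
                           class_sep=2.0, random_state=0)
alpha = 0.1
best = None
for w in [0.5, 1.0, 2.0, 4.0, 8.0]:
    clf = SVC(kernel="rbf", class_weight={0: w, 1: 1.0})  # cost-sensitive SVM
    yhat = cross_val_predict(clf, X, y, cv=5)             # CV error estimates
    fpr = np.mean(yhat[y == 0] == 1)    # estimated false-positive rate
    miss = np.mean(yhat[y == 1] == 0)   # estimated miss rate
    # Neyman-Pearson selection: minimize miss rate subject to fpr <= alpha
    if fpr <= alpha and (best is None or miss < best[1]):
        best = (w, miss, fpr)
```

As the paper notes, raw cross-validation estimates of `fpr` are noisy, which is exactly why the authors smooth them before selecting parameters.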

  10. Vascular endothelial growth factor receptor-2 (VEGFR-2) inhibitors: development and validation of predictive 3-D QSAR models through extensive ligand- and structure-based approaches

    NASA Astrophysics Data System (ADS)

    Ragno, Rino; Ballante, Flavio; Pirolli, Adele; Wickersham, Richard B.; Patsilinakos, Alexandros; Hesse, Stéphanie; Perspicace, Enrico; Kirsch, Gilbert

    2015-08-01

    Vascular endothelial growth factor receptor-2, (VEGFR-2), is a key element in angiogenesis, the process by which new blood vessels are formed, and is thus an important pharmaceutical target. Here, 3-D quantitative structure-activity relationship (3-D QSAR) were used to build a quantitative screening and pharmacophore model of the VEGFR-2 receptors for design of inhibitors with improved activities. Most of available experimental data information has been used as training set to derive optimized and fully cross-validated eight mono-probe and a multi-probe quantitative models. Notable is the use of 262 molecules, aligned following both structure-based and ligand-based protocols, as external test set confirming the 3-D QSAR models' predictive capability and their usefulness in design new VEGFR-2 inhibitors. From a survey on literature, this is the first generation of a wide-ranging computational medicinal chemistry application on VEGFR2 inhibitors.

  11. Validation of lumbar spine loading from a musculoskeletal model including the lower limbs and lumbar spine.

    PubMed

    Actis, Jason A; Honegger, Jasmin D; Gates, Deanna H; Petrella, Anthony J; Nolasco, Luis A; Silverman, Anne K

    2018-02-08

    Low back mechanics are important to quantify to study injury, pain and disability. As in vivo forces are difficult to measure directly, modeling approaches are commonly used to estimate these forces. Validation of model estimates is critical to gain confidence in modeling results across populations of interest, such as people with lower-limb amputation. Motion capture, ground reaction force and electromyographic data were collected from ten participants without an amputation (five male/five female) and five participants with a unilateral transtibial amputation (four male/one female) during trunk-pelvis range of motion trials in flexion/extension, lateral bending and axial rotation. A musculoskeletal model with a detailed lumbar spine and the legs including 294 muscles was used to predict L4-L5 loading and muscle activations using static optimization. Model estimates of L4-L5 intervertebral joint loading were compared to measured intradiscal pressures from the literature and muscle activations were compared to electromyographic signals. Model loading estimates were only significantly different from experimental measurements during trunk extension for males without an amputation and for people with an amputation, which may suggest a greater portion of L4-L5 axial load transfer through the facet joints, as facet loads are not captured by intradiscal pressure transducers. Pressure estimates between the model and previous work were not significantly different for flexion, lateral bending or axial rotation. Timing of model-estimated muscle activations compared well with electromyographic activity of the lumbar paraspinals and upper erector spinae. Validated estimates of low back loading can increase the applicability of musculoskeletal models to clinical diagnosis and treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.
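
    The static optimization step described above can be sketched on a toy system: minimize the sum of squared muscle activations subject to reproducing a required joint moment. The three-muscle system, moment arms, and maximum forces below are hypothetical illustrations, not values from the paper's 294-muscle model.

```python
import numpy as np
from scipy.optimize import minimize

F_max = np.array([1000.0, 800.0, 600.0])   # max isometric forces (N), assumed
r = np.array([0.05, 0.04, 0.03])           # moment arms (m), assumed
M_req = 60.0                               # required joint moment (N*m)

# Static optimization: cheapest activation pattern that balances the moment.
cost = lambda a: float(np.sum(a ** 2))
moment_balance = {"type": "eq",
                  "fun": lambda a: float(np.dot(a * F_max, r)) - M_req}
res = minimize(cost, x0=np.full(3, 0.5), bounds=[(0.0, 1.0)] * 3,
               constraints=[moment_balance], method="SLSQP")
activations = res.x
```

    Muscle forces follow as `activations * F_max`; the resulting joint reaction force is what would be compared against intradiscal pressure measurements.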

  12. Research applications for an Object and Action Naming Battery to assess naming skills in adult Spanish-English bilingual speakers.

    PubMed

    Edmonds, Lisa A; Donovan, Neila J

    2014-06-01

    Virtually no valid materials are available to evaluate confrontation naming in Spanish-English bilingual adults in the U.S. In a recent study, a large group of young Spanish-English bilingual adults were evaluated on An Object and Action Naming Battery (Edmonds & Donovan in Journal of Speech, Language, and Hearing Research 55:359-381, 2012). Rasch analyses of the responses resulted in evidence for the content and construct validity of the retained items. However, the scope of that study did not allow for extensive examination of individual item characteristics, group analyses of participants, or the provision of testing and scoring materials or raw data, thereby limiting the ability of researchers to administer the test to Spanish-English bilinguals and to score the items with confidence. In this study, we present the in-depth information described above on the basis of further analyses, including (1) online searchable spreadsheets with extensive empirical (e.g., accuracy and name agreeability) and psycholinguistic item statistics; (2) answer sheets and instructions for scoring and interpreting the responses to the Rasch items; (3) tables of alternative correct responses for English and Spanish; (4) ability strata determined for all naming conditions (English and Spanish nouns and verbs); and (5) comparisons of accuracy across proficiency groups (i.e., Spanish dominant, English dominant, and balanced). These data indicate that the Rasch items from An Object and Action Naming Battery are valid and sensitive for the evaluation of naming in young Spanish-English bilingual adults. Additional information based on participant responses for all of the items on the battery can provide researchers with valuable information to aid in stimulus development and response interpretation for experimental studies in this population.

  13. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    PubMed

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.

  14. An Analysis of Construct Validity of Motivation As It Relates to North Carolina County Agricultural Extension Service Agents.

    ERIC Educational Resources Information Center

    Calloway, Pauline Frances

    This study investigated the construct validity of the Herzberg (1964) theory of motivation as it relates to county Extension agents and developed an inventory to measure the job satisfaction of county agents in North Carolina. The inventory was administered to 419 agents in 79 counties. Factor analysis was used to determine the number of job…

  15. A system utilizing radio frequency identification (RFID) technology to monitor individual rodent behavior in complex social settings.

    PubMed

    Howerton, Christopher L; Garner, Joseph P; Mench, Joy A

    2012-07-30

    Pre-clinical investigation of human CNS disorders relies heavily on mouse models. However, these show low predictive validity for translational success to humans, partly due to the extensive use of rapid, high-throughput behavioral assays. Improved assays that monitor rodent behavior over longer time scales in a variety of contexts, while still maintaining the efficiency of data collection associated with high-throughput assays, are needed. We developed an apparatus that uses radio frequency identification (RFID) technology to facilitate long-term automated monitoring of the behavior of mice in socially or structurally complex cage environments. Mice that were individually marked and implanted with transponders were placed in pairs in the apparatus, and their locations continuously tracked for 24 h. Video observation was used to validate the RFID readings. The apparatus and its associated software accurately tracked the locations of all mice, yielding information about each mouse's location over time, its diel activity patterns, and the amount of time it was in the same location as the other mouse in the pair. The information that can be efficiently collected in this apparatus has a variety of applications for pre-clinical research on human CNS disorders, for example major depressive disorder and autism spectrum disorder, in that it can be used to quantify validated endophenotypes or biomarkers of these disorders using rodent models. While the specific configuration of the apparatus described here was designed to answer particular experimental questions, it can be modified in various ways to accommodate different experimental designs. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Frequency Response Function Based Damage Identification for Aerospace Structures

    NASA Astrophysics Data System (ADS)

    Oliver, Joseph Acton

    Structural health monitoring technologies continue to be pursued for aerospace structures in the interests of increased safety and, when combined with health prognosis, efficiency in life-cycle management. The current dissertation develops and validates damage identification technology as a critical component for structural health monitoring of aerospace structures and, in particular, composite unmanned aerial vehicles. The primary innovation is a statistical least-squares damage identification algorithm based in concepts of parameter estimation and model update. The algorithm uses frequency response function based residual force vectors derived from distributed vibration measurements to update a structural finite element model through statistically weighted least-squares minimization producing location and quantification of the damage, estimation uncertainty, and an updated model. Advantages compared to other approaches include robust applicability to systems which are heavily damped, large, and noisy, with a relatively low number of distributed measurement points compared to the number of analytical degrees-of-freedom of an associated analytical structural model (e.g., modal finite element model). Motivation, research objectives, and a dissertation summary are discussed in Chapter 1 followed by a literature review in Chapter 2. Chapter 3 gives background theory and the damage identification algorithm derivation followed by a study of fundamental algorithm behavior on a two degree-of-freedom mass-spring system with generalized damping. Chapter 4 investigates the impact of noise then successfully proves the algorithm against competing methods using an analytical eight degree-of-freedom mass-spring system with non-proportional structural damping. 
Chapter 5 extends use of the algorithm to finite element models, including solutions for numerical issues, approaches for modeling damping approximately in reduced coordinates, and analytical validation using a composite sandwich plate model. Chapter 6 presents the final extension to experimental systems, including methods for initial baseline correlation and data reduction, and validates the algorithm on an experimental composite plate with impact damage. The final chapter deviates from development and validation of the primary algorithm to discuss development of an experimental scaled-wing test bed as part of a collaborative effort for developing structural health monitoring and prognosis technology. The dissertation concludes with an overview of technical conclusions and recommendations for future work.
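
The statistically weighted least-squares update at the core of such damage identification can be sketched generically: given a residual force vector r, a sensitivity matrix S of the residuals with respect to damage (stiffness-reduction) parameters, and a statistical weighting matrix W, the parameter update and its covariance (the estimation uncertainty) follow from the weighted normal equations. This is a generic sketch of the technique, not the dissertation's implementation; all names are illustrative.

```python
import numpy as np

def weighted_ls_update(S, r, W):
    """Weighted least-squares update for residual r ≈ S @ dp.

    S: (n_residuals, n_params) sensitivity matrix.
    r: (n_residuals,) residual force vector.
    W: (n_residuals, n_residuals) statistical weighting matrix.
    Returns the parameter update dp and its covariance (uncertainty).
    """
    A = S.T @ W @ S                     # weighted normal matrix
    dp = np.linalg.solve(A, S.T @ W @ r)
    cov = np.linalg.inv(A)              # estimation uncertainty of dp
    return dp, cov
```

    Iterating this update while recomputing residuals from the current model is the model-update loop; the covariance diagonal gives per-parameter confidence in the located damage.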

  17. Validity and intra-rater reliability of an android phone application to measure cervical range-of-motion.

    PubMed

    Quek, June; Brauer, Sandra G; Treleaven, Julia; Pua, Yong-Hao; Mentiplay, Benjamin; Clark, Ross Allan

    2014-04-17

    Concurrent validity and intra-rater reliability using a customized Android phone application to measure cervical-spine range-of-motion (ROM) has not been previously validated against a gold-standard three-dimensional motion analysis (3DMA) system. Twenty-one healthy individuals (age: 31 ± 9.1 years, male: 11) participated, with 16 re-examined for intra-rater reliability 1-7 days later. An Android phone was fixed on a helmet, which was then securely fastened on the participant's head. Cervical-spine ROM in flexion, extension, lateral flexion and rotation was measured in sitting, with concurrent measurements obtained from both a 3DMA system and the phone. The phone demonstrated moderate to excellent (ICC = 0.53-0.98, Spearman ρ = 0.52-0.98) concurrent validity for ROM measurements in cervical flexion, extension, lateral-flexion and rotation. However, cervical rotation demonstrated both proportional and fixed bias. Excellent intra-rater reliability was demonstrated for cervical flexion, extension and lateral flexion (ICC = 0.82-0.90), but poor for right- and left-rotation (ICC = 0.05-0.33) using the phone. Possible reasons for the outcome are that flexion, extension and lateral-flexion measurements are detected by gravity-dependent accelerometers, while rotation measurements are detected by the magnetometer, which can be adversely affected by surrounding magnetic fields. The results of this study demonstrate that the tested Android phone application is valid and reliable for measuring ROM of the cervical-spine in flexion, extension and lateral-flexion, but not in rotation, likely due to magnetic interference. The clinical implication of this study is that therapists should be mindful of the plane of measurement when using the Android phone to measure ROM of the cervical-spine.
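
    For reference, the intra-rater reliability statistic reported above can be sketched as a two-way, consistency-form ICC(3,1) over an n-subjects-by-k-sessions array. This is the generic textbook formula, not the authors' analysis code.

```python
import numpy as np

def icc_3_1(ratings):
    """Two-way mixed, consistency ICC(3,1).

    ratings: (n_subjects, k_sessions) array of repeated measurements.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    # Between-subjects mean square.
    ms_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)
    # Residual (subject x session interaction) mean square.
    resid = (ratings - ratings.mean(axis=1, keepdims=True)
             - ratings.mean(axis=0, keepdims=True) + grand)
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

    Values near 1 indicate that repeated sessions rank subjects almost identically, which is the sense in which ICC = 0.82-0.90 counts as excellent and ICC = 0.05-0.33 as poor.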

  18. Validity and intra-rater reliability of an Android phone application to measure cervical range-of-motion

    PubMed Central

    2014-01-01

    Background Concurrent validity and intra-rater reliability using a customized Android phone application to measure cervical-spine range-of-motion (ROM) has not been previously validated against a gold-standard three-dimensional motion analysis (3DMA) system. Findings Twenty-one healthy individuals (age:31 ± 9.1 years, male:11) participated, with 16 re-examined for intra-rater reliability 1–7 days later. An Android phone was fixed on a helmet, which was then securely fastened on the participant’s head. Cervical-spine ROM in flexion, extension, lateral flexion and rotation were performed in sitting with concurrent measurements obtained from both a 3DMA system and the phone. The phone demonstrated moderate to excellent (ICC = 0.53-0.98, Spearman ρ = 0.52-0.98) concurrent validity for ROM measurements in cervical flexion, extension, lateral-flexion and rotation. However, cervical rotation demonstrated both proportional and fixed bias. Excellent intra-rater reliability was demonstrated for cervical flexion, extension and lateral flexion (ICC = 0.82-0.90), but poor for right- and left-rotation (ICC = 0.05-0.33) using the phone. Possible reasons for the outcome are that flexion, extension and lateral-flexion measurements are detected by gravity-dependent accelerometers while rotation measurements are detected by the magnetometer which can be adversely affected by surrounding magnetic fields. Conclusion The results of this study demonstrate that the tested Android phone application is valid and reliable to measure ROM of the cervical-spine in flexion, extension and lateral-flexion but not in rotation likely due to magnetic interference. The clinical implication of this study is that therapists should be mindful of the plane of measurement when using the Android phone to measure ROM of the cervical-spine. PMID:24742001

  19. Availability of Neutronics Benchmarks in the ICSBEP and IRPhEP Handbooks for Computational Tools Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana

    2017-02-01

    In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, representing significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets represent the basis for recording, development, and validation of our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost to repeat many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, and are under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation, but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of select benchmark experiments for preservation and dissemination.
The extensively peer-reviewed integral benchmark data can then be utilized by nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next-generation reactor design, safety analysis requirements, and all other front- and back-end activities contributing to the overall nuclear fuel cycle where quality neutronics calculations are paramount.

  20. Matrix Dominated Failure of Fiber-Reinforced Composite Laminates Under Static and Dynamic Loading

    NASA Astrophysics Data System (ADS)

    Schaefer, Joseph Daniel

    Hierarchical material systems provide the unique opportunity to connect material knowledge to solving specific design challenges. Representing the quickest-growing class of hierarchical materials in use, fiber-reinforced polymer composites (FRPCs) offer superior strength and stiffness-to-weight ratios, damage tolerance, and decreasing production costs compared to metals and alloys. However, the implementation of FRPCs has historically been hampered by inadequate knowledge of material failure behavior, due to incomplete verification of recent computational constitutive models and improper (or non-existent) experimental validation, which has severely slowed development. As noted by the recent Materials Genome Initiative and the Worldwide Failure Exercise, current state-of-the-art qualification programs endure a 20-year gap between material conceptualization and implementation due to the lack of effective partnership between computational coding (simulation) and experimental characterization. Qualification processes are primarily experiment driven; the anisotropic nature of composites predisposes matrix-dominant properties to be sensitive to strain rate, which necessitates extensive testing. To decrease the qualification time, a framework that practically combines theoretical prediction of material failure with limited experimental validation is required. In this work, the Northwestern Failure Theory (NU Theory) for composite lamina is presented as the theoretical basis from which the failure of unidirectional and multidirectional composite laminates is investigated. From an initial experimental characterization of basic lamina properties, the NU Theory is employed to predict the matrix-dependent failure of composites under any state of biaxial stress from quasi-static to 1000 s-1 strain rates.
It was found that the number of experiments required to characterize the strain-rate-dependent failure of a new composite material was reduced by an order of magnitude, and the resulting strain-rate-dependence was applicable for a large class of materials. The presented framework provides engineers with the capability to quickly identify fiber and matrix combinations for a given application and determine the failure behavior over the range of practical loadings cases. The failure-mode-based NU Theory may be especially useful when partnered with computational approaches (which often employ micromechanics to determine constituent and constitutive response) to provide accurate validation of the matrix-dominated failure modes experienced by laminates during progressive failure.

  1. Validating the energy transport modeling of the DIII-D and EAST ramp up experiments using TSC

    NASA Astrophysics Data System (ADS)

    Liu, Li; Guo, Yong; Chan, Vincent; Mao, Shifeng; Wang, Yifeng; Pan, Chengkang; Luo, Zhengping; Zhao, Hailin; Ye, Minyou

    2017-06-01

    The confidence in ramp up scenario design of the China fusion engineering test reactor (CFETR) can be significantly enhanced using validated transport models to predict the current profile and temperature profile. In the tokamak simulation code (TSC), two semi-empirical energy transport models (the Coppi-Tang (CT) and BGB models) and three theory-based models (the GLF23, MMM95 and CDBM models) are investigated on CFETR-relevant ramp up discharges, including three DIII-D ITER-like ramp up discharges and one EAST ohmic discharge. For the DIII-D discharges, all the transport models yield the dynamic internal inductance ℓi within ±0.15 deviations except for some time points where the experimental fluctuation is very strong. All the models agree with the experimental poloidal beta βp except that the CT model strongly overestimates βp in the first half of the ramp up phase. When applying the CT, CDBM and GLF23 models to estimate the internal flux, they show maximum deviations of more than 10% because of inaccuracies in the temperature profile predictions, while the BGB model performs best on the internal flux. Although all the models fall short in reproducing the dynamic ℓi evolution for the EAST tokamak, the result of the BGB model is the closest to the experimental ℓi. Based on these comparisons, we conclude that the BGB model is the most consistent among these models for simulating CFETR ohmic ramp-up. The CT model, with improvement for better simulation of the temperature profiles in the first half of the ramp up phase, will also be attractive. For the MMM95, GLF23 and CDBM models, better prediction of the edge temperature will improve the confidence for CFETR L-mode simulation. Conclusive validation of any transport model will require extensive future investigation covering a larger variety of discharges.

  2. Automatic Visual Tracking and Social Behaviour Analysis with Multiple Mice

    PubMed Central

    Giancardo, Luca; Sona, Diego; Huang, Huiping; Sannino, Sara; Managò, Francesca; Scheggia, Diego; Papaleo, Francesco; Murino, Vittorio

    2013-01-01

    Social interactions are made of complex behavioural actions that might be found in all mammals, including humans and rodents. Recently, mouse models are increasingly being used in preclinical research to understand the biological basis of social-related pathologies or abnormalities. However, reliable and flexible automatic systems able to precisely quantify the social behavioural interactions of multiple mice are still missing. Here, we present a system built on two components: a module able to accurately track the position of multiple interacting mice from videos, regardless of their fur colour or light settings, and a module that automatically characterises social and non-social behaviours. The behavioural analysis is obtained by deriving a new set of specialised spatio-temporal features from the tracker output. These features are further employed by a learning-by-example classifier, which predicts for each frame and for each mouse in the cage one of the behaviours learnt from the examples given by the experimenters. The system is validated on an extensive set of experimental trials involving multiple mice in an open arena. In a first evaluation we compare the classifier output with the independent evaluation of two human graders, obtaining comparable results. Then, we show the applicability of our technique to multiple mice settings, using up to four interacting mice. The system is also compared with a solution recently proposed in the literature that, similarly to ours, addresses the problem with a learning-by-examples approach. Finally, we further validated our automatic system to differentiate between C57BL/6J (a commonly used reference inbred strain) and BTBR T+tf/J (a mouse model for autism spectrum disorders). 
Overall, these data demonstrate the validity and effectiveness of this new machine learning system in the detection of social and non-social behaviours in multiple (>2) interacting mice, and its versatility to deal with different experimental settings and scenarios. PMID:24066146

  3. Snap-through instability analysis of dielectric elastomers with consideration of chain entanglements

    NASA Astrophysics Data System (ADS)

    Zhu, Jiakun; Luo, Jun; Xiao, Zhongmin

    2018-06-01

    It is widely recognized that the extension limit of polymer chains has a significant effect on the snap-through instability of dielectric elastomers (DEs). The snap-through instability of DEs has been extensively studied with two limited-stretch models, i.e., the eight-chain model and the Gent model. However, real polymer networks usually have many entanglements, due to the impenetrability of the network chains, as well as a finite extensibility resulting from the full stretching of the polymer chains. The effects of entanglements on the snap-through instability of DEs cannot be captured by these two limited-stretch models. In this paper, the nonaffine model proposed by Davidson and Goulbourne is adopted to characterize the influence of entanglements and the extension limit of the polymer chains. It is demonstrated that the nonaffine model is almost identical to the eight-chain model, and close to the Gent model, if we ignore the effects of chain entanglements and adopt the affine assumption. The suitability of the nonaffine model to characterize the mechanical behavior of elastomers is validated by fitting the experimental results reported in the open literature. After that, the snap-through stability of an ideal DE membrane under equal-biaxial prestretches is studied with the nonaffine model. It is revealed that besides the prestretch and chain extension limit, the chain entanglements can markedly influence the snap-through instability and the path to failure of DEs. These results provide a more comprehensive understanding of the snap-through instability of a DE and may be helpful in guiding the design of DE devices.
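
    The chain extension limit that drives snap-through can be illustrated with the Gent strain energy, one of the two limited-stretch models the paper discusses: the energy diverges as the first invariant I1 approaches its limit Jm + 3. The parameter values below are illustrative assumptions, not the paper's fitted values.

```python
import math

def gent_energy(lam, mu=1.0, Jm=100.0):
    """Gent strain energy density for equal-biaxial stretch lam
    of an incompressible membrane (mu: shear modulus, Jm: extension limit)."""
    I1 = 2.0 * lam ** 2 + lam ** -4     # first invariant under equal biaxiality
    x = (I1 - 3.0) / Jm
    if x >= 1.0:
        raise ValueError("stretch beyond the chain extension limit")
    # W = -(mu * Jm / 2) * ln(1 - (I1 - 3) / Jm); diverges as x -> 1.
    return -0.5 * mu * Jm * math.log1p(-x)
```

    The sharp stiffening near the limit is what terminates snap-through; the nonaffine model adds an entanglement contribution that shifts where this stiffening sets in.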

  4. Parent Reports of Young Spanish-English Bilingual Children's Productive Vocabulary: A Development and Validation Study.

    PubMed

    Mancilla-Martinez, Jeannette; Gámez, Perla B; Vagh, Shaher Banu; Lesaux, Nonie K

    2016-01-01

    This 2-phase study aims to extend research on parent report measures of children's productive vocabulary by investigating the development (n = 38) of the Spanish Vocabulary Extension and validity (n = 194) of the 100-item Spanish and English MacArthur-Bates Communicative Development Inventories Toddler Short Forms and Upward Extension (Fenson et al., 2000, 2007; Jackson-Maldonado, Marchman, & Fernald, 2013) and the Spanish Vocabulary Extension for use with parents from low-income homes and their 24- to 48-month-old Spanish-English bilingual children. Study participants were drawn from Early Head Start and Head Start collaborative programs in the Northeastern United States in which English was the primary language used in the classroom. All families reported Spanish or Spanish-English as their home language(s). The MacArthur Communicative Development Inventories as well as the researcher-designed Spanish Vocabulary Extension were used as measures of children's English and Spanish productive vocabularies. Findings revealed the forms' concurrent and discriminant validity, on the basis of standardized measures of vocabulary, as measures of productive vocabulary for this growing bilingual population. These findings suggest that parent reports, including our researcher-designed form, represent a valid, cost-effective mechanism for vocabulary monitoring purposes in early childhood education settings.

  5. Characterization and experimental validation of a squeeze film damper with MR fluid in a rotor-bearing system

    NASA Astrophysics Data System (ADS)

    Dominguez-Nuñez, L. A.; Silva-Navarro, G.

    2014-04-01

    The general study and applications of Magneto-Rheological (MR) dampers have spread in recent years, but only a few studies have focused on vibration control problems in rotor-bearing systems. Squeeze-Film Dampers (SFD) are now commonly used to passively control the vibration response of rotor-bearing systems because they can provide flexibility and damping and extend the so-called stability thresholds in rotating machinery. More recently, SFD are combined with MR or Electro-Rheological (ER) fluids to introduce a semiactive control mechanism to modify the rotordynamic coefficients and deal with the robust performance of the overall system response at higher operating speeds. There are, however, some theoretical and technological problems that complicate their extensive use, like the relationship between the centering spring flexibility and the rheological behavior of the smart fluid to produce the SFD forces. In this work we consider an SFD with MR fluid and a set of circular-section beams in a squirrel cage arrangement, in combination with latex seals as centering springs. The mathematical model analysis includes the controllable viscoelastic properties associated with the MR fluid. The characterization of the SFD is made by determining coefficients associated with a modified Choi-Lee-Park polynomial model. The analysis considers a rotor-bearing system modeled using finite element methods. The SFD with MR fluid is connected to an experimental platform to validate and experimentally evaluate the overall system. Finally, to improve the open-loop system performance, a methodology for the use of different control schemes is proposed.

  6. A data driven method for estimation of B(avail) and appK(D) using a single injection protocol with [¹¹C]raclopride in the mouse.

    PubMed

    Wimberley, Catriona J; Fischer, Kristina; Reilhac, Anthonin; Pichler, Bernd J; Gregoire, Marie Claude

    2014-10-01

    The partial saturation approach (PSA) is a simple, single-injection experimental protocol that estimates both B(avail) and appK(D) without blood sampling. This makes it ideal for use in longitudinal studies of neurodegenerative diseases in the rodent. The aim of this study was to increase the range and applicability of the PSA by developing a data driven strategy for determining reliable regional estimates of receptor density (B(avail)) and in vivo affinity (1/appK(D)), and to validate the strategy using a simulation model. The data driven method uses a time window guided by the dynamic equilibrium state of the system, as opposed to a static time window. To test the method, simulations of partial saturation experiments were generated and validated against experimental data. The experimental conditions simulated included a range of receptor occupancy levels and three different B(avail) and appK(D) values to mimic disease states. The effect of using a reference region, and of typical PET noise, on the stability and accuracy of the estimates was also investigated. The investigations showed that the parameter estimates in a simulated healthy mouse using the data driven method were within 10-30% of the simulated input for the range of occupancy levels simulated. Throughout all experimental conditions simulated, the accuracy and robustness of the estimates using the data driven method were much improved over the typical method of using a static time window, especially at low receptor occupancy levels. Introducing a reference region caused a bias of approximately 10% over the range of occupancy levels. Based on extensive simulated experimental conditions, it was shown that the data driven method provides accurate and precise estimates of B(avail) and appK(D) for a broader range of conditions compared to the original method. Copyright © 2014 Elsevier Inc. All rights reserved.
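
    As background, the two quantities estimated here are related by the standard saturation-binding (Michaelis-Menten-like) relation; this is textbook receptor theory, not the paper's specific estimator. With B the bound and F the free tracer concentration at equilibrium,

```latex
B = \frac{B_{\mathrm{avail}}\, F}{\mathrm{app}K_D + F},
\qquad
\text{occupancy} \;=\; \frac{B}{B_{\mathrm{avail}}} \;=\; \frac{F}{\mathrm{app}K_D + F},
```

    so a single partial-saturation injection that sweeps a range of F values can, in principle, constrain both B(avail) and appK(D) without blood sampling.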

  7. Static Design and Finite Element Analysis of Innovative CFRP Transverse Leaf Spring

    NASA Astrophysics Data System (ADS)

    Carello, M.; Airale, A. G.; Ferraris, A.; Messana, A.; Sisca, L.

    2017-12-01

    This paper describes the design and the numerical modelization of a novel transverse Carbon Fiber Reinforced Plastic (CFRP) leaf-spring prototype for a multilink suspension. The most significant innovation is in the functional integration where the leaf spring has been designed to work as spring, anti-roll bar, lower and longitudinal arms at the same time. In particular, the adopted work flow maintains a very close correlation between virtual simulations and experimental tests. Firstly, several tests have been conducted on the CFRP specimen to characterize the material property. Secondly, a virtual card fitting has been carried out in order to set up the leaf-spring Finite Element (FE) model using CRASURV formulation as material law and RADIOSS as solver. Finally, extensive tests have been done on the manufactured component for validation. The results obtained show a good agreement between virtual simulation and experimental tests. Moreover, this solution enabled the suspension to reduce about 75% of the total mass without losing performance.

  8. Laser beam complex amplitude measurement by phase diversity.

    PubMed

    Védrenne, Nicolas; Mugnier, Laurent M; Michau, Vincent; Velluet, Marie-Thérèse; Bierent, Rudolph

    2014-02-24

    The control of the optical quality of a laser beam requires a complex amplitude measurement able to deal with strong modulus variations and potentially highly perturbed wavefronts. The method proposed here consists of an extension of phase diversity to complex amplitude measurements that is effective for highly perturbed beams. Named CAMELOT, for Complex Amplitude MEasurement by a Likelihood Optimization Tool, it relies on the acquisition and processing of a few images of the beam section taken along the optical path. The complex amplitude of the beam is retrieved from the images by the minimization of a Maximum a Posteriori error metric between the images and a model of the beam propagation. The analytical formalism of the method and its experimental validation are presented. The modulus of the beam is compared to a measurement of the beam profile, and the phase of the beam is compared to a conventional phase diversity estimate. The precision of the experimental measurements is investigated by numerical simulations.

  9. A Summary of Validation Results for LEWICE 2.0

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    1998-01-01

    A research project is underway at NASA Lewis to produce a computer code which can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report presents results from version 2.0 of this code, which is called LEWICE. This version differs from previous releases in its robustness and its ability to reproduce results accurately for different point spacing and time step criteria across general computing platforms. It also differs in the extensive amount of effort undertaken to compare the results in a quantifiable manner against the database of ice shapes which have been generated in the NASA Lewis Icing Research Tunnel (IRT). The complete set of data used for this comparison is available in a recent contractor report. The result of this comparison shows that the difference between the predicted ice shape from LEWICE 2.0 and the average of the experimental data is 7.2%, while the variability of the experimental data is 2.5%.

  10. On the interplay of gas dynamics and the electromagnetic field in an atmospheric Ar/H2 microwave plasma torch

    NASA Astrophysics Data System (ADS)

    Synek, Petr; Obrusník, Adam; Hübner, Simon; Nijdam, Sander; Zajíčková, Lenka

    2015-04-01

    A complementary simulation and experimental study of an atmospheric pressure microwave torch operating in pure argon or argon/hydrogen mixtures is presented. The modelling part describes a numerical model coupling the gas dynamics and mixing to the electromagnetic field simulations. Since the numerical model is not fully self-consistent and requires the electron density as an input, quite extensive spatially resolved Stark broadening measurements were performed for various gas compositions and input powers. In addition, the experimental part includes Rayleigh scattering measurements, which are used for the validation of the model. The paper comments on the changes in the gas temperature and hydrogen dissociation with the gas composition and input power, showing in particular that the dependence on the gas composition is relatively strong and non-monotonic. In addition, the work provides interesting insight into the plasma sustainment mechanism by showing that the power absorption profile in the plasma has two distinct maxima: one at the nozzle tip and one further upstream.

  11. Particle dispersion in homogeneous turbulence using the one-dimensional turbulence model

    DOE PAGES

    Sun, Guangyuan; Lignell, David O.; Hewson, John C.; ...

    2014-10-09

    Lagrangian particle dispersion is studied using the one-dimensional turbulence (ODT) model in homogeneous decaying turbulence configurations. The ODT model has been widely and successfully applied to a number of reacting and nonreacting flow configurations, but only limited application has been made to multiphase flows. We present a version of the particle implementation and interaction with the stochastic and instantaneous ODT eddy events. The model is characterized by comparison to experimental data of particle dispersion for a range of intrinsic particle time scales and body forces. Particle dispersion, velocity, and integral time scale results are presented. Moreover, the particle implementation introduces a single model parameter β_p, and sensitivity to this parameter and behavior of the model are discussed. Good agreement is found with experimental data and the ODT model is able to capture the particle inertial and trajectory crossing effects. Our results serve as a validation case of the multiphase implementations of ODT for extensions to other flow configurations.

  12. Model-Free control performance improvement using virtual reference feedback tuning and reinforcement Q-learning

    NASA Astrophysics Data System (ADS)

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2017-04-01

    This paper proposes the combination of two model-free controller tuning techniques, namely linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning, referred to as a new mixed VRFT-Q learning approach. VRFT is first used to find a stabilising feedback controller from input-output experimental data from the process in a model reference tracking setting. Reinforcement Q-learning is next applied in the same setting using input-state experimental data collected under perturbed VRFT to ensure good exploration. The Q-learning controller, learned with a batch fitted Q iteration algorithm, uses two neural networks, one for the Q-function estimator and one for the controller. The VRFT-Q learning approach is validated on position control of a two-degrees-of-motion open-loop stable multi-input multi-output (MIMO) aerodynamic system (AS). Extensive simulations for the two independent control channels of the MIMO AS show that the Q-learning controllers clearly improve performance over the VRFT controllers.
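
    The VRFT step can be illustrated with a deliberately minimal sketch. Everything below is an assumption for illustration, not the paper's setup: a scalar first-order plant, a first-order reference model y[t+1] = a·y[t] + (1-a)·r[t], and a static gain controller u = K·e fitted by least squares to logged open-loop data.

```python
# Minimal VRFT sketch (illustrative assumptions, not the paper's setup):
# scalar plant, reference model y[t+1] = a*y[t] + (1-a)*r[t], and a
# static controller u = K*e fitted by least squares to open-loop data.
def vrft_gain(u, y, a):
    """Fit K from logged inputs u[0..T-1] and outputs y[0..T]."""
    errs, acts = [], []
    for t in range(len(y) - 1):
        # Invert the reference model: the "virtual reference" r_virt is
        # the setpoint that would have produced y[t+1] through the model.
        r_virt = (y[t + 1] - a * y[t]) / (1.0 - a)
        errs.append(r_virt - y[t])   # virtual tracking error
        acts.append(u[t])
    # Least-squares gain: K = <e, u> / <e, e>
    return sum(e * v for e, v in zip(errs, acts)) / sum(e * e for e in errs)

# Noise-free data from the plant y[t+1] = y[t] + 0.1*u[t]; the ideal
# model-matching gain is K = (1 - a)/b = (1 - 0.8)/0.1 = 2.0.
u = [1.0, -0.5, 2.0, 0.3, -1.0]
y = [0.0]
for ut in u:
    y.append(y[-1] + 0.1 * ut)
K = vrft_gain(u, y, a=0.8)   # recovers 2.0 on this noise-free data
```

    The point of VRFT, visible even in this toy: no plant model is identified; the controller parameters come directly from one batch of input-output data.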

  13. Computation of incompressible viscous flows through artificial heart devices with moving boundaries

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Rogers, Stuart; Kwak, Dochan; Chang, I-Dee

    1991-01-01

    The extension of computational fluid dynamics techniques to artificial heart flow simulations is illustrated. Unsteady incompressible Navier-Stokes equations written in 3-D generalized curvilinear coordinates are solved iteratively at each physical time step until the incompressibility condition is satisfied. The solution method is based on the pseudocompressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. The efficiency and robustness of the time-accurate formulation of the algorithm are tested by computing the flow through model geometries. A channel flow with a moving indentation is computed and validated against experimental measurements and other numerical solutions. In order to handle the geometric complexity and the moving boundary problems, a zonal method and an overlapping grid embedding scheme are used, respectively. Steady-state solutions for the flow through a tilting-disk heart valve were compared against experimental measurements, and good agreement was obtained. The flow computation during valve opening and closing is carried out to illustrate the moving boundary capability.
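
    The pseudocompressibility idea referenced above, in its generic Chorin form (a schematic, not this paper's exact discretization), augments the continuity equation with a pseudo-time pressure derivative:

```latex
\frac{\partial p}{\partial \tau} + \beta\, \nabla\!\cdot\mathbf{u} = 0,
\qquad
\frac{\partial \mathbf{u}}{\partial \tau} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\nabla p + \nu\, \nabla^{2}\mathbf{u},
```

    where β is the artificial compressibility parameter. Within each physical time step the system is iterated in pseudo-time τ until ∇·u falls below a tolerance, at which point the incompressibility constraint is satisfied.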

  14. Modelling and experimental evaluation of parallel connected lithium ion cells for an electric vehicle battery system

    NASA Astrophysics Data System (ADS)

    Bruen, Thomas; Marco, James

    2016-04-01

    Variations in cell properties are unavoidable and can be caused by manufacturing tolerances and usage conditions. As a result, cells connected in series may have different voltages and states of charge that limit the energy and power capability of the complete battery pack. Methods of removing this energy imbalance have been extensively reported in the literature. However, there has been little discussion of the effect that such variation has when cells are connected electrically in parallel. This work aims to explore the impact of connecting cells with varied properties in parallel, and the issues regarding energy imbalance and battery management that may arise. This has been achieved through analysing experimental data and a validated model. The main results from this study highlight that significant differences in current flow can occur between cells within a parallel stack, which will affect how the cells age and the temperature distribution within the battery assembly.
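
    The current-imbalance mechanism can be seen with a minimal sketch (illustrative only, not the paper's validated model): treat each parallel cell as a Thevenin source, an open-circuit voltage in series with an internal resistance, with all branches sharing one terminal voltage, and solve Kirchhoff's laws for the branch currents.

```python
def branch_currents(ocv, r_int, i_total):
    """Split a demanded pack current among parallel cells.

    Each cell is modelled as a Thevenin source: open-circuit voltage
    OCV_i in series with internal resistance R_i, all branches sharing
    one terminal voltage V. Kirchhoff's laws give
    I_i = (OCV_i - V) / R_i with sum(I_i) = i_total; solve for V first.
    """
    g = [1.0 / r for r in r_int]  # branch conductances
    v = (sum(o * gi for o, gi in zip(ocv, g)) - i_total) / sum(g)
    return [(o - v) * gi for o, gi in zip(ocv, g)]

# Two nominally identical cells, one with 50% higher resistance,
# discharging at 10 A: the low-resistance cell carries ~6 A vs ~4 A.
currents = branch_currents(ocv=[3.7, 3.7], r_int=[0.020, 0.030], i_total=10.0)
```

    Even with identical open-circuit voltages, a resistance mismatch alone skews the current split, which is the ageing and thermal mechanism the abstract highlights.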

  15. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods, as a way of effecting a greater and accelerated acceptance of formal optimization methods by practicing engineering designers, is described. The range of validation strategies is defined, which includes comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.

  16. Validation and optimization of SST k-ω turbulence model for pollutant dispersion within a building array

    NASA Astrophysics Data System (ADS)

    Yu, Hesheng; Thé, Jesse

    2016-11-01

    The prediction of the dispersion of air pollutants in urban areas is of great importance to public health, homeland security, and environmental protection. Computational Fluid Dynamics (CFD) emerges as an effective tool for pollutant dispersion modelling. This paper reports and quantitatively validates the shear stress transport (SST) k-ω turbulence closure model and its transitional variant for pollutant dispersion in a complex urban environment for the first time. Sensitivity analysis is performed to establish recommendations for the proper use of turbulence models in urban settings. The SST k-ω simulation is validated rigorously against extensive experimental data using the hit rate for velocity components, and the "factor of two" of observations (FAC2) and fractional bias (FB) for the concentration field. The simulation results show that the current SST k-ω model predicts the flow field well, with an overall hit rate of 0.870, and the concentration dispersion with FAC2 = 0.721 and FB = 0.045. The flow simulation of the current SST k-ω model is slightly inferior to that of a detached eddy simulation (DES), but better than that of the standard k-ε model. However, the current study is the best among these three model approaches when validated against measurements of pollutant dispersion in the atmosphere. This work aims to provide recommendations for the proper use of CFD to predict pollutant dispersion in urban environments.
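
    The concentration metrics quoted above are standard model-evaluation statistics and are straightforward to compute. A minimal sketch using their common definitions (the velocity hit rate involves additional absolute/relative thresholds and is omitted here):

```python
def fac2(obs, pred):
    """Fraction of pairs with 0.5 <= pred/obs <= 2 (zero observations excluded)."""
    pairs = [(o, p) for o, p in zip(obs, pred) if o > 0]
    return sum(1 for o, p in pairs if 0.5 <= p / o <= 2.0) / len(pairs)

def fractional_bias(obs, pred):
    """FB = (mean_obs - mean_pred) / (0.5 * (mean_obs + mean_pred)).
    Zero means no bias; positive values indicate under-prediction."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return (mo - mp) / (0.5 * (mo + mp))

# Example: half of these predictions fall within a factor of two.
score = fac2([1.0, 1.0, 1.0, 1.0], [0.4, 0.6, 1.9, 2.5])   # 0.5
```

    Against these definitions, the paper's FAC2 = 0.721 means roughly 72% of predicted concentrations lie within a factor of two of the measurements, and FB = 0.045 indicates a near-zero mean bias.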

  17. Simulation of the effect of hydrogen bonds on water activity of glucose and dextran using the Veytsman model.

    PubMed

    De Vito, Francesca; Veytsman, Boris; Painter, Paul; Kokini, Jozef L

    2015-03-06

    Carbohydrates exhibit either van der Waals and ionic interactions or strong hydrogen bonding interactions. The prominence and large number of hydrogen bonds result in major contributions to phase behavior. A thermodynamic framework that accounts for hydrogen bonding interactions is therefore necessary. We have developed an extension of the thermodynamic model based on the Veytsman association theory to predict the contribution of hydrogen bonds to the behavior of glucose-water and dextran-water systems, and we have calculated the free energy of mixing and its derivatives, leading to the chemical potential and water activity. We compared our calculations with experimental water activity data for glucose and dextran and found excellent agreement, far superior to the Flory-Huggins theory. The validation of our calculations against experimental data demonstrated the validity of the Veytsman model in properly accounting for the hydrogen bonding interactions and successfully predicting the water activity of glucose and dextran. Our calculations of the concentration of hydrogen bonds using the Veytsman model were instrumental in explaining the difference between glucose and dextran and the role that hydrogen bonds play in these differences. The miscibility predictions showed that the Veytsman model is also able to correctly describe the phase behavior of glucose and dextran. Copyright © 2014 Elsevier Ltd. All rights reserved.
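
    For orientation, the non-specific part of such a model is the familiar Flory-Huggins free energy of mixing, to which an association (hydrogen-bonding) term is appended. The form below is schematic only; the Veytsman term itself is a combinatorial count of donor-acceptor pairings and is not reproduced here:

```latex
\frac{\Delta G_{mix}}{RT}
  \;=\; \frac{\phi_1}{N_1}\ln\phi_1 \;+\; \frac{\phi_2}{N_2}\ln\phi_2
  \;+\; \chi_{12}\,\phi_1\phi_2 \;+\; \frac{\Delta G_{HB}}{RT},
```

    where φ_i are volume fractions, N_i degrees of polymerization, and χ_12 the interaction parameter. The water activity then follows from the water chemical potential, ln a_w = (∂ΔG_mix/∂n_w)/RT.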

  18. Numerical Simulations and Experimental Measurements of Scale-Model Horizontal Axis Hydrokinetic Turbines (HAHT) Arrays

    NASA Astrophysics Data System (ADS)

    Javaherchi, Teymour; Stelzenmuller, Nick; Seydel, Joseph; Aliseda, Alberto

    2014-11-01

    The performance, turbulent wake evolution and interaction of multiple Horizontal Axis Hydrokinetic Turbines (HAHT) are analyzed in a 45:1 scale model setup. We combine experimental measurements with different RANS-based computational simulations that model the turbines with sliding-mesh, rotating reference frame and blade element theory strategies. The influence of array spacing and Tip Speed Ratio on performance and wake velocity structure is investigated in three different array configurations: two coaxial turbines at different downstream spacing (5d to 14d); three coaxial turbines with 5d and 7d downstream spacing; and three turbines with lateral offset (0.5d) and downstream spacing (5d & 7d). Comparison with experimental measurements provides insights into the dynamics of HAHT arrays, and by extension into closely packed HAWT arrays. The experimental validation process also highlights the influence of the closure model used (k-ω SST and k-ε) and the flow Reynolds number (Re = 40,000 to 100,000) on the computational predictions of the devices' performance and of the flow field inside the above-mentioned arrays, establishing the strengths and limitations of existing numerical models for use in industrially relevant settings (computational cost and time). Supported by DOE through the National Northwest Marine Renewable Energy Center (NNMREC).

  19. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  20. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  1. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  2. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  3. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  4. EVLncRNAs: a manually curated database for long non-coding RNAs validated by low-throughput experiments.

    PubMed

    Zhou, Bailing; Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu; Yang, Yuedong; Zhou, Yaoqi; Wang, Jihua

    2018-01-04

    Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (with a few hundred lncRNAs only) and specific in their focuses (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNIncRBase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species and is 2.9 times larger than the current largest database of experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing experimentally validated databases. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. EVLncRNAs: a manually curated database for long non-coding RNAs validated by low-throughput experiments

    PubMed Central

    Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu

    2018-01-01

    Abstract Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (with a few hundred lncRNAs only) and specific in their focuses (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNIncRBase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species and is 2.9 times larger than the current largest database of experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing experimentally validated databases. The established database allows users to browse, search and download as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. PMID:28985416

  6. Improving human activity recognition and its application in early stroke diagnosis.

    PubMed

    Villar, José R; González, Silvia; Sedano, Javier; Chira, Camelia; Trejo-Gabriel-Galan, Jose M

    2015-06-01

    The development of efficient stroke-detection methods is of significant importance in today's society due to the effects and impact of stroke on health and economy worldwide. This study focuses on Human Activity Recognition (HAR), which is a key component in developing an early stroke-diagnosis tool. An overview of the proposed global approach, able to discriminate normal resting from stroke-related paralysis, is detailed. The main contributions include an extension of the Genetic Fuzzy Finite State Machine (GFFSM) method and a new hybrid feature selection (FS) algorithm involving Principal Component Analysis (PCA) and a voting scheme that combines the cross-validation results. Experimental results show that the proposed approach is a well-performing HAR tool that can be successfully embedded in devices.

  7. Boolean Dynamic Modeling Approaches to Study Plant Gene Regulatory Networks: Integration, Validation, and Prediction.

    PubMed

    Velderraín, José Dávila; Martínez-García, Juan Carlos; Álvarez-Buylla, Elena R

    2017-01-01

    Mathematical models based on dynamical systems theory are well-suited tools for the integration of available molecular experimental data into coherent frameworks in order to propose hypotheses about the cooperative regulatory mechanisms driving developmental processes. Computational analysis of the proposed models using well-established methods enables testing the hypotheses by contrasting predictions with observations. Within such framework, Boolean gene regulatory network dynamical models have been extensively used in modeling plant development. Boolean models are simple and intuitively appealing, ideal tools for collaborative efforts between theorists and experimentalists. In this chapter we present protocols used in our group for the study of diverse plant developmental processes. We focus on conceptual clarity and practical implementation, providing directions to the corresponding technical literature.

  8. Advances and trends in the development of computational models for tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Tanner, J. A.

    1985-01-01

    Status and some recent developments of computational models for tires are summarized. Discussion focuses on a number of aspects of tire modeling and analysis including: tire materials and their characterization; evolution of tire models; characteristics of effective finite element models for analyzing tires; analysis needs for tires; and impact of the advances made in finite element technology, computational algorithms, and new computing systems on tire modeling and analysis. An initial set of benchmark problems has been proposed in concert with the U.S. tire industry. Extensive sets of experimental data will be collected for these problems and used for evaluating and validating different tire models. Also, the new Aircraft Landing Dynamics Facility (ALDF) at NASA Langley Research Center is described.

  9. The 3D structures of VDAC represent a native conformation

    PubMed Central

    Hiller, Sebastian; Abramson, Jeff; Mannella, Carmen; Wagner, Gerhard; Zeth, Kornelius

    2010-01-01

    The most abundant protein of the mitochondrial outer membrane is the voltage-dependent anion channel (VDAC), which facilitates the exchange of ions and molecules between mitochondria and cytosol and is regulated by interactions with other proteins and small molecules. VDAC has been extensively studied for more than three decades, and last year three independent investigations revealed a structure of VDAC-1 exhibiting 19 transmembrane β-strands, constituting a unique structural class of β-barrel membrane proteins. Here, we provide a historical perspective on VDAC research and give an overview of the experimental design used to obtain these structures. Furthermore, we validate the protein refolding approach and summarize biochemical and biophysical evidence that links the 19-stranded structure to the native form of VDAC. PMID:20708406

  10. Calibration of the LHAASO-KM2A electromagnetic particle detectors using charged particles within the extensive air showers

    NASA Astrophysics Data System (ADS)

    Lv, Hongkui; He, Huihai; Sheng, Xiangdong; Liu, Jia; Chen, Songzhan; Liu, Ye; Hou, Chao; Zhao, Jing; Zhang, Zhongquan; Wu, Sha; Wang, Yaping; Lhaaso Collaboration

    2018-07-01

    In the Large High Altitude Air Shower Observatory (LHAASO), a one-square-kilometer array (KM2A), with 5242 electromagnetic particle detectors (EDs) and 1171 muon detectors (MDs), is designed to study ultra-high-energy gamma-ray astronomy and cosmic ray physics. The remoteness of the site and the large number of detectors demand a robust and automatic calibration procedure. In this paper, a self-calibration method which relies on the measurement of charged particles within extensive air showers is proposed. The method is fully validated by Monte Carlo simulation and successfully applied in a KM2A prototype array experiment. Experimental results show that the self-calibration method can determine the detector time-offset constants at the sub-nanosecond level and the number density of particles collected by each ED with an accuracy of a few percent, which is adequate to meet the physical requirements of the LHAASO experiment. This software calibration also offers an ideal way to monitor detector performance in real time for next-generation ground-based EAS experiments covering areas above the square-kilometer scale.
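
    The offset-extraction step can be sketched generically; this is an illustration of the residual-averaging idea only, not the actual KM2A procedure. Assume that for each shower the hit times have already been compared against a fitted shower-front prediction; a detector's time offset is then estimated from its residuals accumulated over many showers:

```python
from statistics import median

def time_offsets(events):
    """Estimate per-detector time-offset constants.

    events: list of dicts mapping detector id -> time residual (ns),
    i.e. measured hit time minus the fitted shower-front prediction
    for that detector in that shower (hypothetical input format).
    """
    residuals = {}
    for ev in events:
        for det, r in ev.items():
            residuals.setdefault(det, []).append(r)
    # The median is robust against the delayed-particle tails of
    # shower-front arrival-time distributions.
    return {det: median(rs) for det, rs in residuals.items()}

# Detector "A" is consistently ~2 ns late, "B" ~1 ns early.
offsets = time_offsets([{"A": 2.1, "B": -0.9}, {"A": 1.9, "B": -1.1}, {"A": 2.0}])
```

    Averaging over many showers is what pushes the statistical uncertainty of each constant below a nanosecond.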

  11. Capability Extension to the Turbine Off-Design Computer Program AXOD With Applications to the Highly Loaded Fan-Drive Turbines

    NASA Technical Reports Server (NTRS)

    Chen, Shu-cheng S.

    2011-01-01

    The axial flow turbine off-design computer program AXOD has been upgraded to include the outlet guide vane (OGV) among its acceptable turbine configurations. The mathematical bases and the techniques used for the code implementation are described and discussed at length in this paper. This extended capability is verified and validated with two cases of highly loaded fan-drive turbines, designed and tested in the V/STOL Program of NASA. The first case is a 4 1/2-stage turbine with an average stage loading factor of 4.66, designed by Pratt & Whitney Aircraft. The second case is a 3 1/2-stage turbine with an average loading factor of 4.0, designed in-house by the NASA Lewis Research Center (now the NASA Glenn Research Center). Both cases were experimentally tested in the turbine facility located at the Glenn Research Center. The processes conducted in these studies are described in detail in this paper, and the results are presented and discussed in comparison with the experimental data. The AXOD results are in excellent agreement with the experimental data.

  12. A decision support model for investment on P2P lending platform.

    PubMed

    Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao

    2017-01-01

    Peer-to-peer (P2P) lending, as a novel economic lending model, has raised new challenges for making effective investment decisions. On a P2P lending platform, one lender can invest in N loans and a loan may be funded by M investors, thus forming a bipartite graph. Based on this bipartite graph model, we built an iterative computation model to evaluate unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two models. The experimental results of the hybrid classification model demonstrate that the Logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the Logistic classification model) is more efficient and stable than either model alone.
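
    An iteration over a lender-loan bipartite graph can be illustrated with a HITS-style mutual-reinforcement sketch (an illustrative stand-in, not the paper's actual update rule): lenders are scored by the average quality of the loans they hold, unknown loans are scored by the lenders who funded them, and known outcomes are clamped. The matrix and labels below are invented.

    ```python
    import numpy as np

    # Hypothetical investment matrix: M[i, j] = 1 if lender i funded loan j.
    M = np.array([
        [1, 1, 0, 0],
        [0, 1, 1, 0],
        [1, 0, 1, 1],
    ], dtype=float)

    # Seed loan quality from known outcomes (1 = repaid, 0 = default, 0.5 = unknown).
    loan_quality = np.array([1.0, 0.5, 0.5, 0.0])
    known = np.array([True, False, False, True])

    # Mutual reinforcement: a lender is good if it picks good loans; an unknown
    # loan is good if good lenders picked it.  Known labels stay clamped.
    for _ in range(50):
        lender_score = M @ loan_quality / M.sum(axis=1)
        est = M.T @ lender_score / M.sum(axis=0)
        loan_quality = np.where(known, loan_quality, est)

    print(np.round(loan_quality, 3))
    ```

    The fixed point ranks loan 1 (linked to a lender holding a repaid loan) above loan 2 (linked to a lender holding a defaulted one), which is the qualitative behaviour such an iteration is meant to produce.
    
    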

  13. A decision support model for investment on P2P lending platform

    PubMed Central

    Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao

    2017-01-01

    Peer-to-peer (P2P) lending, as a novel economic lending model, has raised new challenges for making effective investment decisions. On a P2P lending platform, one lender can invest in N loans and a loan may be funded by M investors, thus forming a bipartite graph. Based on this bipartite graph model, we built an iterative computation model to evaluate unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two models. The experimental results of the hybrid classification model demonstrate that the Logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the Logistic classification model) is more efficient and stable than either model alone. PMID:28877234

  14. A Petri net model of granulomatous inflammation: implications for IL-10 mediated control of Leishmania donovani infection.

    PubMed

    Albergante, Luca; Timmis, Jon; Beattie, Lynette; Kaye, Paul M

    2013-01-01

    Experimental visceral leishmaniasis, caused by infection of mice with the protozoan parasite Leishmania donovani, is characterized by focal accumulation of inflammatory cells in the liver, forming discrete "granulomas" within which the parasite is eventually eliminated. To shed new light on fundamental aspects of granuloma formation and function, we have developed an in silico Petri net model that simulates hepatic granuloma development throughout the course of infection. The model was extensively validated by comparison with data derived from experimental studies in mice, and the model robustness was assessed by a sensitivity analysis. The model recapitulated the progression of disease as seen during experimental infection and also faithfully predicted many of the changes in cellular composition seen within granulomas over time. By conducting in silico experiments, we have identified a previously unappreciated level of inter-granuloma diversity in terms of the development of anti-leishmanial activity. Furthermore, by simulating the impact of IL-10 gene deficiency in a variety of lymphocyte and myeloid cell populations, our data suggest a dominant local regulatory role for IL-10 produced by infected Kupffer cells at the core of the granuloma.

  15. A Petri Net Model of Granulomatous Inflammation: Implications for IL-10 Mediated Control of Leishmania donovani Infection

    PubMed Central

    Albergante, Luca; Timmis, Jon; Beattie, Lynette; Kaye, Paul M.

    2013-01-01

    Experimental visceral leishmaniasis, caused by infection of mice with the protozoan parasite Leishmania donovani, is characterized by focal accumulation of inflammatory cells in the liver, forming discrete “granulomas” within which the parasite is eventually eliminated. To shed new light on fundamental aspects of granuloma formation and function, we have developed an in silico Petri net model that simulates hepatic granuloma development throughout the course of infection. The model was extensively validated by comparison with data derived from experimental studies in mice, and the model robustness was assessed by a sensitivity analysis. The model recapitulated the progression of disease as seen during experimental infection and also faithfully predicted many of the changes in cellular composition seen within granulomas over time. By conducting in silico experiments, we have identified a previously unappreciated level of inter-granuloma diversity in terms of the development of anti-leishmanial activity. Furthermore, by simulating the impact of IL-10 gene deficiency in a variety of lymphocyte and myeloid cell populations, our data suggest a dominant local regulatory role for IL-10 produced by infected Kupffer cells at the core of the granuloma. PMID:24363630

  16. Developing of Watershed Radionuclide Transport Model DHSVM-R as Modification and Extension of Distributed Hydrological and Sediment Dynamics Model DHSVM

    NASA Astrophysics Data System (ADS)

    Zheleznyak, M.; Kivva, S.; Onda, Y.; Nanba, K.; Wakiyama, Y.; Konoplev, A.

    2015-12-01

    Reliable modeling tools for predicting radionuclide wash-off from watersheds are needed both for assessing the consequences of accidental and industrial releases of radionuclides and for soil erosion studies using radioactive tracers. A distributed model of radionuclide transport through a watershed, in exchangeable and non-exchangeable forms, in solution and with sediments, was developed and validated for small Chernobyl watersheds in the 1990s within the EU SPARTACUS project (van der Perk et al., 1996). A newer trend is the coupling of radionuclide transport models with widely validated distributed hydrological models. To develop the radionuclide transport model DHSVM-R, the open-source Distributed Hydrology Soil Vegetation Model (DHSVM, http://www.hydro.washington.edu/Lettenmaier/Models/DHSVM) was modified and extended. The main changes to the hydrological and sediment transport modules of DHSVM are as follows: the Morel-Seytoux infiltration model is added; the four-direction schematization of inter-cell flows (D4) is replaced by the D8 approach; and the finite-difference schemes for solving the kinematic wave equations for overland flow, stream network flow, and sediment transport are replaced by a new, computationally efficient scheme. The new radionuclide transport module, coupled with the hydrological and sediment transport modules, continues the SPARTACUS approach: it describes radionuclide wash-off from the watershed and transport via the stream network in the soluble phase and on suspended sediments. The hydrological module of DHSVM-R was calibrated and validated for watersheds in the Ukrainian Carpathian mountains and for subwatersheds of the Niida River, which carries 137Cs in solution and with suspended sediments to the Pacific Ocean 30 km north of the Fukushima Daiichi NPP. The radionuclide and sediment transport modules were calibrated and validated against experimental data from USLE experimental plots in Fukushima Prefecture and against monitoring data collected in the Niida watershed. The role of sediment transport in radionuclide wash-off from mountain and lowland watersheds is analyzed by comparing modeling results for Chernobyl and Fukushima watersheds.

  17. Genome-Wide Analysis of A-to-I RNA Editing.

    PubMed

    Savva, Yiannis A; Laurent, Georges St; Reenan, Robert A

    2016-01-01

    Adenosine (A)-to-inosine (I) RNA editing is a fundamental posttranscriptional modification that mediates the deamination of A to I in double-stranded (ds) RNA molecules. Intriguingly, the A-to-I RNA editing system is particularly active in the nervous system of higher eukaryotes, altering a plethora of noncoding and coding sequences. Abnormal RNA editing is highly associated with many neurological phenotypes and neurodevelopmental disorders. However, the molecular mechanisms underlying RNA editing-mediated pathogenesis still remain enigmatic and have attracted increasing attention from researchers. Over the last decade, methods available to perform genome-wide transcriptome analysis have evolved rapidly. Within the RNA-editing field, researchers have adopted next-generation sequencing technologies to identify RNA-editing sites within genomes and to elucidate the underlying process. However, technical challenges associated with editing site discovery have hindered efforts to uncover comprehensive editing site datasets, resulting in the general perception that the collections of annotated editing sites represent only a small minority of the total number of sites in a given organism, tissue, or cell type of interest. In addition to doubts about sensitivity, existing RNA-editing site lists often contain high percentages of false positives, leading to uncertainty about their validity and usefulness in downstream studies. An accurate investigation of A-to-I editing requires properly validated datasets of editing sites with demonstrated and transparent levels of sensitivity and specificity. Here, we describe a high signal-to-noise method for RNA-editing site detection using single-molecule sequencing (SMS). With this method, authentic RNA-editing sites may be differentiated from artifacts. Machine learning approaches provide a procedure to improve upon and experimentally validate sequencing outcomes through computationally predicted, iterative feedback loops.
    Subsequent use of extensive Sanger sequencing validations can generate accurate editing site lists. This approach has broad application, and accurate genome-wide editing analysis of tissues from clinical specimens or various experimental organisms is now possible.

  18. Validation and augmentation of Inrix arterial travel time data using independent sources : [research summary].

    DOT National Transportation Integrated Search

    2015-02-01

    Although freeway travel time data have been validated extensively in recent years, the quality of arterial travel time data is not well known. This project presents a comprehensive validation scheme for arterial travel time data based on GPS...

  19. Validity and Reliability of a New Device (WIMU®) for Measuring Hamstring Muscle Extensibility.

    PubMed

    Muyor, José M

    2017-09-01

    The aims of the current study were 1) to evaluate the validity of the WIMU® system for measuring hamstring muscle extensibility in the passive straight leg raise (PSLR) test, using an inclinometer as the criterion, and 2) to determine the test-retest reliability of the WIMU® system for measuring hamstring muscle extensibility during the PSLR test. 55 subjects were evaluated on 2 separate occasions. Data from a Unilever inclinometer and the WIMU® system were collected simultaneously. Intraclass correlation coefficients (ICCs) for validity were very high (0.983-1); a very low systematic bias (-0.21° to -0.42°), random error (0.05° to 0.04°), and standard error of the estimate (0.43° to 0.34°) were observed (left and right leg, respectively) between the 2 devices (inclinometer and WIMU® system). The R2 between the devices was 0.999 (p<0.001) in both the left and right legs. The test-retest reliability of the WIMU® system was excellent, with ICCs ranging from 0.972 to 0.995, low coefficients of variation (0.01%), and a low standard error of the estimate (0.19°-0.31°). The WIMU® system showed strong concurrent validity and excellent test-retest reliability for the evaluation of hamstring muscle extensibility in the PSLR test. © Georg Thieme Verlag KG Stuttgart · New York.
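
    Test-retest ICCs of the kind reported here can be reproduced on toy data. The sketch below computes a two-way mixed, single-measure consistency ICC, i.e. ICC(3,1); the angle values are invented for illustration and are not the study's data.

    ```python
    import numpy as np

    # Hypothetical test-retest PSLR angles (degrees): one row per subject,
    # one column per measurement occasion.
    x = np.array([
        [78.2, 79.0],
        [65.4, 66.1],
        [82.0, 81.5],
        [70.3, 70.9],
        [88.7, 88.1],
        [74.5, 75.2],
    ])

    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # between occasions
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    # ICC(3,1): consistency of single measurements, occasions treated as fixed.
    icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    print(f"ICC(3,1) = {icc:.3f}")
    ```

    With large between-subject spread and small within-subject differences, the ICC approaches 1, matching the pattern the abstract reports.
    
    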

  20. Validity of Miles Equation in Predicting Propellant Slosh Damping in Baffled Tanks at Variable Slosh Amplitude

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; West, Jeff

    2018-01-01

    Determination of slosh damping is a very challenging task as there is no analytical solution. The damping physics involves the vorticity dissipation which requires the full solution of the nonlinear Navier-Stokes equations. As a result, previous investigations were mainly carried out by extensive experiments. A systematic study is needed to understand the damping physics of baffled tanks, to identify the difference between the empirical Miles equation and experimental measurements, and to develop new semi-empirical relations to better represent the real damping physics. The approach of this study is to use Computational Fluid Dynamics (CFD) technology to shed light on the damping mechanisms of a baffled tank. First, a 1-D Navier-Stokes equation representing different length scales and time scales in the baffle damping physics is developed and analyzed. Loci-STREAM-VOF, a well validated CFD solver developed at NASA MSFC, is applied to study the vorticity field around a baffle and around the fluid-gas interface to highlight the dissipation mechanisms at different slosh amplitudes. Previous measurement data are then used to validate the CFD damping results. The study found several critical parameters controlling fluid damping from a baffle: local slosh amplitude to baffle thickness (A/t), surface liquid depth to tank radius (d/R), local slosh amplitude to baffle width (A/W); and non-dimensional slosh frequency. The simulation highlights three significant damping regimes where different mechanisms dominate. The study proves that the previously found discrepancies between Miles equation and experimental measurement are not due to the measurement scatter, but rather due to different damping mechanisms at various slosh amplitudes. The limitations on the use of Miles equation are discussed based on the flow regime.

  1. A piecewise mass-spring-damper model of the human breast.

    PubMed

    Cai, Yiqing; Chen, Lihua; Yu, Winnie; Zhou, Jie; Wan, Frances; Suh, Minyoung; Chow, Daniel Hung-Kay

    2018-01-23

    Previous models to predict breast movement whilst performing physical activities have, erroneously, assumed uniform elasticity within the breast. Consequently, the predicted displacements have not yet been satisfactorily validated. In this study, real-time motion capture of the natural vibrations of a breast after it was raised and allowed to fall freely revealed an obvious difference in the vibration characteristics above and below the static equilibrium position. This implied that the elastic and viscous damping properties of a breast could vary under extension or compression. Therefore, a new piecewise mass-spring-damper model of the breast was developed, with theoretical equations to derive values for its spring constants and damping coefficients from free-fall breast experiments. The effective breast mass was estimated from the breast volume extracted from a 3D body-scanned image. The derived spring constant above the static equilibrium position (k_a = 73.5 N/m) was significantly smaller than that below it (k_b = 658 N/m), whereas the respective damping coefficients were similar (c_a = 1.83 N·s/m, c_b = 2.07 N·s/m). These values were used to predict nipple displacement during bare-breasted running for validation. The predicted and experimental amplitudes agreed to within a root-mean-square error of 2.6%, so the piecewise mass-spring-damper model and equations were considered successfully validated. This provides a theoretical basis for further research into the dynamic, nonlinear viscoelastic properties of different breasts and the prediction of external forces for the necessary breast support during different sports activities. Copyright © 2017 Elsevier Ltd. All rights reserved.
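
    A piecewise mass-spring-damper model of this kind lends itself to a direct numerical sketch: integrate m·x″ + c(x)·x′ + k(x)·x = 0, switching between the reported (k_a, c_a) and (k_b, c_b) at the static equilibrium x = 0. The effective mass below is an assumed value, since the abstract does not report one.

    ```python
    import numpy as np

    m = 0.4                     # kg, assumed effective breast mass (not from the paper)
    k_a, c_a = 73.5, 1.83       # N/m, N.s/m  (above static equilibrium)
    k_b, c_b = 658.0, 2.07      # N/m, N.s/m  (below static equilibrium)

    def simulate(x0=0.02, v0=0.0, dt=1e-4, t_end=2.0):
        """Semi-implicit Euler integration of the piecewise oscillator,
        switching stiffness and damping at the equilibrium x = 0."""
        x, v = x0, v0
        out = []
        for _ in range(int(t_end / dt)):
            k, c = (k_a, c_a) if x > 0 else (k_b, c_b)
            v += (-c * v - k * x) / m * dt
            x += v * dt
            out.append(x)
        return np.array(out)

    traj = simulate()
    print(f"peak above equilibrium: {traj.max()*100:.2f} cm")
    print(f"peak below equilibrium: {traj.min()*100:.2f} cm")
    ```

    Because the stiffness below equilibrium is roughly nine times larger, the downward excursions are much smaller than the upward ones, which is exactly the asymmetry the free-fall experiments revealed.
    
    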

  2. Investigation of Damping Physics and CFD Tool Validation for Simulation of Baffled Tanks at Variable Slosh Amplitude

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; West, Jeff

    2016-01-01

    Determination of slosh damping is a very challenging task as there is no analytical solution. The damping physics involves the vorticity dissipation which requires the full solution of the nonlinear Navier-Stokes equations. As a result, previous investigations were mainly carried out by extensive experiments. A systematic study is needed to understand the damping physics of baffled tanks, to identify the difference between the empirical Miles equation and experimental measurements, and to develop new semi-empirical relations to better represent the real damping physics. The approach of this study is to use Computational Fluid Dynamics (CFD) technology to shed light on the damping mechanisms of a baffled tank. First, a 1-D Navier-Stokes equation representing different length scales and time scales in the baffle damping physics is developed and analyzed. Loci-STREAM-VOF, a well validated CFD solver developed at NASA MSFC, is applied to study the vorticity field around a baffle and around the fluid-gas interface to highlight the dissipation mechanisms at different slosh amplitudes. Previous measurement data are then used to validate the CFD damping results. The study found several critical parameters controlling fluid damping from a baffle: local slosh amplitude to baffle thickness (A/t), surface liquid depth to tank radius (d/R), local slosh amplitude to baffle width (A/W); and non-dimensional slosh frequency. The simulation highlights three significant damping regimes where different mechanisms dominate. The study proves that the previously found discrepancies between Miles equation and experimental measurement are not due to the measurement scatter, but rather due to different damping mechanisms at various slosh amplitudes. The limitations on the use of Miles equation are discussed based on the flow regime.

  3. In Defense of an Instrument-Based Approach to Validity

    ERIC Educational Resources Information Center

    Hood, S. Brian

    2012-01-01

    Paul E. Newton argues in favor of a conception of validity, viz., "the consensus definition of validity," according to which the extension of the predicate "is valid" is a subset of "assessment-based decision-making procedure[s], which [are] underwritten by an argument that the assessment procedure can be used to measure the attribute entailed by…

  4. A Snapshot of Organizational Climate: Perceptions of Extension Faculty

    ERIC Educational Resources Information Center

    Tower, Leslie E.; Bowen, Elaine; Alkadry, Mohamad G.

    2011-01-01

    This article provides a snapshot of the perceptions of workplace climate of Extension faculty at a land-grant, research-high activity university, compared with the perceptions of non-Extension faculty at the same university. An online survey was conducted with a validated instrument. The response rate for university faculty was 44% (968); the…

  5. Development and psychometric testing of the Cancer Knowledge Scale for Elders.

    PubMed

    Su, Ching-Ching; Chen, Yuh-Min; Kuo, Bo-Jein

    2009-03-01

    To develop the Cancer Knowledge Scale for Elders and test its validity and reliability. The number of elders suffering from cancer is increasing. To facilitate cancer prevention behaviours among elders, they should be educated about cancer-related knowledge. Before designing a programme responsive to the special needs of elders, it was necessary to understand cancer-related knowledge within this population. However, an extensive review of the literature revealed a lack of appropriate instruments for measuring cancer-related knowledge. A valid and reliable cancer knowledge scale for elders is therefore necessary. A non-experimental methodological design was used to test the psychometric properties of the Cancer Knowledge Scale for Elders. Item analysis was first performed to screen out items with low corrected item-total correlation coefficients. Construct validity was examined with the principal component method of exploratory factor analysis. Cancer-related health behaviour was used as the criterion variable to evaluate criterion-related validity. Internal consistency reliability was assessed with the KR-20. Stability was determined by two-week test-retest reliability. The factor analysis yielded a four-factor solution accounting for 49.5% of the variance. For criterion-related validity, cancer knowledge was positively correlated with cancer-related health behaviour (r = 0.78, p < 0.001). The KR-20 coefficients of the four factors were 0.85, 0.76, 0.79 and 0.67, and 0.87 for the total scale. Test-retest reliability over a two-week period was 0.83 (p < 0.001). This study provides evidence for the content validity, construct validity, criterion-related validity, internal consistency and stability of the Cancer Knowledge Scale for Elders. The results show that the scale is an easy-to-use instrument for elders with adequate validity and reliability. The scale can be used as an assessment instrument when implementing cancer education programmes for elders.
It can also be used to evaluate the effects of education programmes.
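
    The KR-20 coefficient used above for internal consistency is straightforward to compute. The sketch below applies the standard formula, KR-20 = k/(k-1) · (1 - Σp·q / σ²_total), to a small invented matrix of dichotomous item responses.

    ```python
    import numpy as np

    # Hypothetical dichotomous responses: rows = respondents, columns = items
    # (1 = correct, 0 = incorrect).
    X = np.array([
        [1, 1, 0, 1, 1],
        [1, 0, 0, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0],
        [1, 1, 1, 0, 1],
        [1, 1, 0, 1, 1],
        [0, 1, 0, 0, 0],
        [1, 1, 1, 1, 0],
    ])

    k = X.shape[1]
    p = X.mean(axis=0)                       # proportion correct per item
    q = 1 - p
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the total scores
    kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
    print(f"KR-20 = {kr20:.3f}")
    ```

    Values approaching 1 indicate that items covary strongly with the total score; the per-factor coefficients of 0.67-0.85 reported above sit in the commonly accepted range for scale subcomponents.
    
    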

  6. Thermo-mechanical evaluation of carbon-carbon primary structure for SSTO vehicles

    NASA Astrophysics Data System (ADS)

    Croop, Harold C.; Lowndes, Holland B.; Hahn, Steven E.; Barthel, Chris A.

    1998-01-01

    An advanced development program to demonstrate carbon-carbon composite structure for use as primary load carrying structure has entered the experimental validation phase. The component being evaluated is a wing torque box section for a single-stage-to-orbit (SSTO) vehicle. The validation or demonstration component features an advanced carbon-carbon design incorporating 3D woven graphite preforms, integral spars, oxidation inhibited matrix, chemical vapor deposited (CVD) oxidation protection coating, and ceramic matrix composite fasteners. The validation component represents the culmination of a four phase design and fabrication development effort. Extensive developmental testing was performed to verify material properties and integrity of basic design features before committing to fabrication of the full scale box. The wing box component is now being set up for testing in the Air Force Research Laboratory Structural Test Facility at Wright-Patterson Air Force Base, Ohio. One of the important developmental tests performed in support of the design and planned testing of the full scale box was the fabrication and test of a skin/spar trial subcomponent. The trial subcomponent incorporated critical features of the full scale wing box design. This paper discusses the results of the trial subcomponent test which served as a pathfinder for the upcoming full scale box test.

  7. A transwell assay that excludes exosomes for assessment of tunneling nanotube-mediated intercellular communication.

    PubMed

    Thayanithy, Venugopal; O'Hare, Patrick; Wong, Phillip; Zhao, Xianda; Steer, Clifford J; Subramanian, Subbaya; Lou, Emil

    2017-11-13

    Tunneling nanotubes (TNTs) are naturally-occurring filamentous actin-based membranous extensions that form across a wide spectrum of mammalian cell types to facilitate long-range intercellular communication. Valid assays are needed to accurately assess the downstream effects of TNT-mediated transfer of cellular signals in vitro. We recently reported a modified transwell assay system designed to test the effects of intercellular transfer of a therapeutic oncolytic virus, and viral-activated drugs, between cells via TNTs. The objective of the current study was to demonstrate validation of this in vitro approach as a new method for effectively excluding diffusible forms of long- and close-range intercellular transfer of intracytoplasmic cargo, including exosomes/microvesicles and gap junctions in order to isolate TNT-selective cell communication. We designed several steps to effectively reduce or eliminate diffusion and long-range transfer via these extracellular vesicles, and used Nanoparticle Tracking Analysis to quantify exosomes following implementation of these steps. The experimental approach outlined here effectively reduced exosome trafficking by >95%; further use of heparin to block exosome uptake by putative recipient cells further impeded transfer of these extracellular vesicles. This validated assay incorporates several steps that can be taken to quantifiably control for extracellular vesicles in order to perform studies focused on TNT-selective communication.

  8. Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare.

    PubMed

    Flannelly, Kevin J; Flannelly, Laura T; Jankowski, Katherine R B

    2018-01-01

    The article defines, describes, and discusses the seven threats to the internal validity of experiments discussed by Donald T. Campbell in his classic 1957 article: history, maturation, testing, instrument decay, statistical regression, selection, and mortality. These concepts are said to be threats to the internal validity of experiments because they pose alternate explanations for the apparent causal relationship between the independent variable and dependent variable of an experiment if they are not adequately controlled. A series of simple diagrams illustrate three pre-experimental designs and three true experimental designs discussed by Campbell in 1957 and several quasi-experimental designs described in his book written with Julian C. Stanley in 1966. The current article explains why each design controls for or fails to control for these seven threats to internal validity.

  9. Moment-rotation responses of the human lumbosacral spinal column.

    PubMed

    Guan, Yabo; Yoganandan, Narayan; Moore, Jason; Pintar, Frank A; Zhang, Jiangyue; Maiman, Dennis J; Laud, Purushottam

    2007-01-01

    The objective of this study was to test the hypothesis that the human lumbosacral joint behaves differently from the L1-L5 joints, and to provide primary moment-rotation responses under pure-moment flexion-extension and left and right lateral bending on a level-by-level basis. In addition, range of motion (ROM) and stiffness data were extracted from the moment-rotation responses. Ten T12-S1 column specimens, with ages ranging from 27 to 68 years (mean: 50.6+/-13.2), were tested at a load level of 4.0 N m. Nonlinear flexion-extension and left and right lateral bending moment-rotation responses at each spinal level are reported in the form of a logarithmic function. The mean ROM was the greatest at the L5-S1 level under flexion (7.37+/-3.69 degrees) and extension (4.62+/-2.56 degrees) and at the L3-L4 level under lateral bending (4.04+/-1.11 degrees). The mean ROM was the least at the L1-L2 level under flexion (2.42+/-0.90 degrees), the L2-L3 level under extension (1.58+/-0.63 degrees), and the L1-L2 level under lateral bending (2.50+/-0.75 degrees). The present study confirmed the hypothesis that L5-S1 motions are significantly greater than L1-L5 motions under flexion and extension loading, but the hypothesis did not hold under lateral bending. These experimental data are useful for improved validation of finite element (FE) models, which will increase confidence in stress analyses and other modeling applications.
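
    A logarithmic moment-rotation response of the kind reported above can be fitted to measured data with a simple one-parameter scan. The functional form θ = a·ln(1 + b·M) and the sample points below are illustrative assumptions, not the study's actual data or reported fit.

    ```python
    import numpy as np

    # Hypothetical flexion moment-rotation samples (moment in N.m, rotation in
    # degrees), shaped like the stiffening nonlinear response of a spinal joint.
    moment = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
    rotation = np.array([0.0, 2.1, 3.4, 4.4, 5.2, 5.8, 6.4, 6.9, 7.4])

    def fit_log_response(m, theta, b_grid=np.linspace(0.1, 10, 500)):
        """Fit theta = a * ln(1 + b*m): scan b, solve a in closed form
        (least-squares slope through the origin for each candidate b)."""
        best = None
        for b in b_grid:
            z = np.log1p(b * m)
            a = (z @ theta) / (z @ z)
            sse = np.sum((theta - a * z) ** 2)
            if best is None or sse < best[0]:
                best = (sse, a, b)
        return best[1], best[2]

    a, b = fit_log_response(moment, rotation)
    print(f"fit: theta = {a:.2f} * ln(1 + {b:.2f} * M)")
    ```

    The logarithmic form captures the characteristic spinal behaviour of large rotations at low moments followed by progressive stiffening as the load increases.
    
    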

  10. Pure moment testing for spinal biomechanics applications: Fixed versus sliding ring cable-driven test designs.

    PubMed

    Eguizabal, Johnny; Tufaga, Michael; Scheer, Justin K; Ames, Christopher; Lotz, Jeffrey C; Buckley, Jenni M

    2010-05-07

    In vitro multi-axial bending testing using pure moment loading conditions has become the standard in evaluating the effects of different types of surgical intervention on spinal kinematics. Simple, cable-driven experimental set-ups have been widely adopted because they require little infrastructure. Traditionally, "fixed ring" cable-driven experimental designs have been used; however, there have been concerns with the validity of this set-up in applying pure moment loading. This study involved directly comparing the loading state induced by a traditional "fixed ring" apparatus versus a novel "sliding ring" approach. Flexion-extension bending was performed on an artificial spine model and a single cadaveric test specimen, and the applied loading conditions to the specimen were measured with an in-line multiaxial load cell. The results showed that the fixed ring system applies flexion-extension moments that are 50-60% less than the intended values. This design also imposes non-trivial anterior-posterior shear forces, and non-uniform loading conditions were induced along the length of the specimen. The results of this study indicate that fixed ring systems have the potential to deviate from a pure moment loading state and that our novel sliding ring modification corrects this error in the original test design. This suggests that the proposed sliding ring design should be used for future in vitro spine biomechanics studies involving a cable-driven pure moment apparatus. Copyright 2010 Elsevier Ltd. All rights reserved.

  11. Non-Gaussian Distribution of DNA Barcode Extension In Nanochannels Using High-throughput Imaging

    NASA Astrophysics Data System (ADS)

    Sheats, Julian; Reinhart, Wesley; Reifenberger, Jeff; Gupta, Damini; Muralidhar, Abhiram; Cao, Han; Dorfman, Kevin

    2015-03-01

    We present experimental data for the extension of internal segments of highly confined DNA using a high-throughput experimental setup. Barcode-labeled E. coli genomic DNA molecules were imaged at high areal density in square nanochannels ranging from 40 nm to 51 nm in width. Over 25,000 molecules were used to obtain more than 1,000,000 measurements for genomic distances between 2,500 bp and 100,000 bp. The distribution of extensions has positive excess kurtosis and is skewed left due to weak backfolding in the channel. As a result, the two Odijk theories for the chain extension and variance bracket the experimental data. We also compared the data to predictions of a harmonic approximation for the confinement free energy and show that it produces a substantial error in the variance. These results suggest an inherent error associated with any statistical analysis of barcoded DNA that relies on harmonic models for chain extension. Present address: Department of Chemical and Biological Engineering, Princeton University.
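
    The distributional statistics cited above (positive excess kurtosis, left skew) are simple moment ratios of the raw extension measurements. The sketch below computes them on an invented Gaussian-plus-backfolded-tail mixture that stands in for real data: a narrow Gaussian bulk of fully extended segments with a small population of shorter, weakly backfolded molecules.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Invented stand-in for extension measurements (microns): 95% Gaussian bulk,
    # 5% left tail mimicking weakly backfolded (shorter) molecules.
    ext = np.concatenate([
        rng.normal(10.0, 0.5, 9500),
        10.0 - rng.exponential(1.0, 500),
    ])

    def skew_and_excess_kurtosis(x):
        """Moment-based skewness and excess kurtosis (0 for a Gaussian)."""
        d = x - x.mean()
        m2 = np.mean(d ** 2)
        skew = np.mean(d ** 3) / m2 ** 1.5
        kurt = np.mean(d ** 4) / m2 ** 2 - 3.0
        return skew, kurt

    s, k = skew_and_excess_kurtosis(ext)
    print(f"skewness = {s:.2f}, excess kurtosis = {k:.2f}")
    ```

    The left tail drags the third moment negative (left skew) and fattens the fourth moment (positive excess kurtosis), which is the qualitative signature the abstract attributes to weak backfolding.
    
    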

  12. Improved patch-based learning for image deblurring

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Jiang, Zhiguo; Zhang, Haopeng

    2015-05-01

    Most recent image deblurring methods use only the valid information in the input image as the basis for restoring the blurred region. These methods usually suffer from insufficient prior information and relatively poor adaptiveness. Patch-based methods use not only the valid information of the input image itself, but also the prior information of sample images, to improve adaptiveness. However, the cost function of this approach is quite time-consuming, and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learned patch likelihoods. On one hand, we consider the effect of Gaussian mixture model components with different weights and normalize the weight values, which optimizes the cost function and reduces running time. On the other hand, a post-processing method is proposed to suppress the ringing artifacts produced by the traditional patch-based method. Extensive experiments were performed. Experimental results verify that our method can effectively reduce execution time, suppress ringing artifacts, and preserve the quality of the deblurred image.

  13. Circulation Control Model Experimental Database for CFD Validation

    NASA Technical Reports Server (NTRS)

    Paschal, Keith B.; Neuhart, Danny H.; Beeler, George B.; Allan, Brian G.

    2012-01-01

    A 2D circulation control wing was tested in the Basic Aerodynamic Research Tunnel at the NASA Langley Research Center. A traditional circulation control wing employs tangential blowing along the span over a trailing-edge Coanda surface for the purpose of lift augmentation. This model has been tested extensively at the Georgia Tech Research Institute for the purpose of performance documentation at various blowing rates. The current study seeks to expand on the previous work by documenting additional flow-field data needed for validation of computational fluid dynamics. Two jet momentum coefficients were tested during this entry: 0.047 and 0.114. Boundary-layer transition was investigated and turbulent boundary layers were established on both the upper and lower surfaces of the model. Chordwise and spanwise pressure measurements were made, and tunnel sidewall pressure footprints were documented. Laser Doppler Velocimetry measurements were made on both the upper and lower surface of the model at two chordwise locations (x/c = 0.8 and 0.9) to document the state of the boundary layers near the spanwise blowing slot.

  14. Experimental validation of the Achromatic Telescopic Squeezing (ATS) scheme at the LHC

    NASA Astrophysics Data System (ADS)

    Fartoukh, S.; Bruce, R.; Carlier, F.; Coello De Portugal, J.; Garcia-Tabares, A.; Maclean, E.; Malina, L.; Mereghetti, A.; Mirarchi, D.; Persson, T.; Pojer, M.; Ponce, L.; Redaelli, S.; Salvachua, B.; Skowronski, P.; Solfaroli, M.; Tomas, R.; Valuch, D.; Wegscheider, A.; Wenninger, J.

    2017-07-01

    The Achromatic Telescopic Squeezing (ATS) scheme offers new techniques to deliver unprecedentedly small beam spot sizes at the interaction points of the ATLAS and CMS experiments of the LHC, while tightly controlling the chromatic properties of the corresponding optics (linear and non-linear chromaticities, off-momentum beta-beating, and spurious dispersion induced by the crossing bumps). The first series of beam tests with ATS optics was carried out during LHC Run I (2011/2012) for an initial validation of the basics of the scheme at low intensity. In 2016, a new generation of higher-performing ATS optics was developed and tested more extensively in the machine, still with probe beams for optics measurement and correction at β* = 10 cm, but also with a few nominal bunches to establish first collisions at the nominal β* (40 cm) and beyond (33 cm), and to analyse the robustness of these optics in terms of collimation and machine protection. The paper highlights the most relevant and conclusive results obtained during this second series of ATS tests.

  15. An XML-based interchange format for genotype-phenotype data.

    PubMed

    Whirl-Carrillo, M; Woon, M; Thorn, C F; Klein, T E; Altman, R B

    2008-02-01

    Recent advances in high-throughput genotyping and phenotyping have accelerated the creation of pharmacogenomic data. Consequently, the community requires standard formats to exchange large amounts of diverse information. To facilitate the transfer of pharmacogenomics data between databases and analysis packages, we have created a standard XML (eXtensible Markup Language) schema that describes both genotype and phenotype data as well as associated metadata. The schema accommodates information regarding genes, drugs, diseases, experimental methods, genomic/RNA/protein sequences, subjects, subject groups, and literature. The Pharmacogenetics and Pharmacogenomics Knowledge Base (PharmGKB; www.pharmgkb.org) has used this XML schema for more than 5 years to accept and process submissions containing more than 1,814,139 SNPs on 20,797 subjects using 8,975 assays. Although developed in the context of pharmacogenomics, the schema is of general utility for the exchange of genotype and phenotype data. We have written syntactic and semantic validators to check documents using this format. The schema and code for validation are available to the community at http://www.pharmgkb.org/schema/index.html (last accessed: 8 October 2007). (c) 2007 Wiley-Liss, Inc.
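    As a sketch of what the syntactic layer of such validation involves (this is not the PharmGKB validator, and the element and attribute names are invented for illustration), a well-formedness check plus field extraction can be done with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical genotype record; the real PharmGKB schema is far richer.
doc = """<submission>
  <subject id="S001">
    <genotype gene="CYP2C9" allele1="*1" allele2="*3"/>
  </subject>
</submission>"""

root = ET.fromstring(doc)          # raises ParseError if not well-formed
g = root.find("./subject/genotype")
print(g.get("gene"), g.get("allele1"), g.get("allele2"))  # -> CYP2C9 *1 *3
```

    Semantic validation (e.g., that alleles are legal for the named gene) would require checks against external references, beyond what a schema parse alone can provide.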

  16. Valid Knowledge: The Economy and the Academy

    ERIC Educational Resources Information Center

    Williams, Peter John

    2007-01-01

    The future of Western universities as public institutions is the subject of extensive continuing debate, underpinned by the issue of what constitutes "valid knowledge". Where in the past only propositional knowledge codified by academics was considered valid, in the new economy enabled by information and communications technology, the procedural…

  17. Initial Development and Validation of the Global Citizenship Scale

    ERIC Educational Resources Information Center

    Morais, Duarte B.; Ogden, Anthony C.

    2011-01-01

    The purpose of this article is to report on the initial development of a theoretically grounded and empirically validated scale to measure global citizenship. The methodology employed is multi-faceted, including two expert face validity trials, extensive exploratory and confirmatory factor analyses with multiple datasets, and a series of three…

  18. Initial Teacher Licensure Testing in Tennessee: Test Validation.

    ERIC Educational Resources Information Center

    Bowman, Harry L.; Petry, John R.

    In 1988 a study was conducted to determine the validity of candidate teacher licensure examinations for use in Tennessee under the 1984 Comprehensive Education Reform Act. The Department of Education conducted a study to determine the validity of 11 previously unvalidated or extensively revised tests for certification and to make recommendations…

  19. Empirical Validation and Application of the Computing Attitudes Survey

    ERIC Educational Resources Information Center

    Dorn, Brian; Elliott Tew, Allison

    2015-01-01

    Student attitudes play an important role in shaping learning experiences. However, few validated instruments exist for measuring student attitude development in a discipline-specific way. In this paper, we present the design, development, and validation of the computing attitudes survey (CAS). The CAS is an extension of the Colorado Learning…

  20. A recursive Bayesian approach for fatigue damage prognosis: An experimental validation at the reliability component level

    NASA Astrophysics Data System (ADS)

    Gobbato, Maurizio; Kosmatka, John B.; Conte, Joel P.

    2014-04-01

    Fatigue-induced damage is one of the most uncertain and highly unpredictable failure mechanisms for a large variety of mechanical and structural systems subjected to cyclic and random loads during their service life. A health monitoring system is therefore needed that is capable of (i) monitoring the critical components of these systems through non-destructive evaluation (NDE) techniques, (ii) assessing their structural integrity, (iii) recursively predicting their remaining fatigue life (RFL), and (iv) providing a cost-efficient reliability-based inspection and maintenance plan (RBIM). Toward these objectives, the first part of the paper provides an overview and extension of a comprehensive reliability-based fatigue damage prognosis methodology, previously developed by the authors, for recursively predicting and updating the RFL of critical structural components and/or sub-components in aerospace structures. In the second part of the paper, a set of experimental fatigue test data available in the literature is used to provide a numerical verification and an experimental validation of the proposed framework at the reliability component level (i.e., a single damage mechanism evolving at a single damage location). The results obtained from this study demonstrate (i) the importance and benefits of a nearly continuous NDE monitoring system, (ii) the efficiency of the recursive Bayesian updating scheme, and (iii) the robustness of the proposed framework in recursively updating and improving the RFL estimates. This study also demonstrates that the proposed methodology can lead either to an extension of the RFL (with a consequent economic gain and no compromise of minimum safety requirements) or to an increase in safety by detecting a premature fault and thereby avoiding a very costly catastrophic failure.
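    The recursive Bayesian updating idea can be illustrated with a deliberately simplified sketch (not the authors' framework): a single uncertain crack-growth rate, a linear growth law, and Gaussian NDE measurement noise, with the posterior re-weighted after each inspection.

```python
import numpy as np

# Grid of candidate crack-growth rates m (hypothetical units) and flat prior.
grid = np.linspace(0.01, 0.5, 200)
posterior = np.ones_like(grid) / grid.size

a0, true_m, sigma = 1.0, 0.2, 0.05   # invented ground truth and NDE noise
rng = np.random.default_rng(1)

for k in range(1, 11):               # ten inspection cycles
    y = a0 + true_m * k + rng.normal(0.0, sigma)        # noisy NDE reading
    like = np.exp(-0.5 * ((y - (a0 + grid * k)) / sigma) ** 2)
    posterior = posterior * like     # Bayes update with the new reading
    posterior /= posterior.sum()     # renormalize

m_map = grid[np.argmax(posterior)]   # MAP growth-rate estimate
print(round(m_map, 2))
```

    Each inspection narrows the posterior on the damage-evolution parameter, which is what drives the recursively improving RFL estimates described above.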

  1. De novo inference of protein function from coarse-grained dynamics.

    PubMed

    Bhadra, Pratiti; Pal, Debnath

    2014-10-01

    Inference of the molecular function of proteins is a fundamental task in the quest to understand cellular processes, and it is becoming increasingly difficult with thousands of new proteins discovered each day. The difficulty arises primarily from the lack of high-throughput experimental techniques for assessing protein molecular function, a gap that computational approaches are trying hard to fill. The latter face a major bottleneck of their own in the absence of clear evidence based on evolutionary information. Here we propose a de novo approach to annotate protein molecular function through a structural-dynamics match for a pair of segments from two dissimilar proteins, which may share even <10% sequence identity. To screen these matches, corresponding 1 µs coarse-grained (CG) molecular dynamics trajectories were used to compute normalized root-mean-square-fluctuation graphs and select mobile segments, which were thereafter matched for all pairs using unweighted three-dimensional autocorrelation vectors. Our in-house custom-built forcefield (FF), extensively validated against dynamics information obtained from experimental nuclear magnetic resonance data, was used to generate the CG dynamics trajectories. The test for correspondence between the dynamics signature of protein segments and function revealed an 87% true positive rate and a 93.5% true negative rate on a dataset of 60 experimentally validated proteins, including moonlighting proteins and those with novel functional motifs. A random test against 315 unique fold/function proteins gave >99% true negative recall. A blind prediction on a novel protein appears consistent with additional evidence retrieved therein. This is the first proof-of-principle of the generalized use of structural dynamics for inferring protein molecular function, leveraging our custom-made CG FF, which is useful to the community at large. © 2014 Wiley Periodicals, Inc.
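    The normalized RMSF screening step can be sketched as follows. The trajectory here is a random stand-in with one segment made artificially mobile, not CG dynamics from the authors' forcefield; shapes and thresholds are assumptions.

```python
import numpy as np

# Hypothetical stand-in for a CG trajectory: (frames, residues, xyz).
rng = np.random.default_rng(2)
n_frames, n_res = 500, 60
traj = rng.normal(0.0, 0.1, (n_frames, n_res, 3))
traj[:, 20:30, :] *= 5.0                 # one artificially mobile segment

mean_pos = traj.mean(axis=0)             # average structure per residue
# RMSF: root-mean-square fluctuation about the mean position.
rmsf = np.sqrt(((traj - mean_pos) ** 2).sum(axis=2).mean(axis=0))
# Normalize to [0, 1] so a single cutoff selects mobile segments.
norm_rmsf = (rmsf - rmsf.min()) / (rmsf.max() - rmsf.min())

mobile = np.flatnonzero(norm_rmsf > 0.5)  # simple mobility cutoff
print(mobile.min(), mobile.max())
```

    In the paper's pipeline, the segments selected this way are then compared across proteins via 3D autocorrelation vectors; that matching step is not shown here.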

  2. Experimental validation of coil phase parametrisation on ASDEX Upgrade, and extension to ITER

    NASA Astrophysics Data System (ADS)

    Ryan, D. A.; Liu, Y. Q.; Kirk, A.; Suttrop, W.; Dudson, B.; Dunne, M.; Willensdorfer, M.; the ASDEX Upgrade team; the EUROfusion MST1 team

    2018-06-01

    It has been previously demonstrated in Li et al (2016 Nucl. Fusion 56 126007) that the optimum upper/lower coil phase shift ΔΦopt for alignment of RMP coils for ELM mitigation depends sensitively on q95 and other equilibrium plasma parameters. Therefore, ΔΦopt is expected to vary widely during the current ramp of ITER plasmas, with negative implications for ELM mitigation during this period. A previously derived and numerically benchmarked parametrisation of the coil phase for optimal ELM mitigation on ASDEX Upgrade (Ryan et al 2017 Plasma Phys. Control. Fusion 59 024005) is validated against experimental measurements of ΔΦopt, made by observing the changes to the ELM frequency as the coil phase is scanned. It is shown that the parametrisation predicts the optimal coil phase to within 32° of the experimental measurement for n = 2 applied perturbations. This agreement is sufficient to ensure that ELM mitigation is not compromised by poor coil alignment. It is also found that the phase which maximises ELM mitigation is shifted from the phase which maximises density pump-out, in contrast to theoretical expectations that ELM mitigation and density pump-out have the same ΔΦul dependence. A time lag between the ELM frequency response and the density response to the RMP is suggested as the cause. The method for numerically deriving the parametrisation is repeated for the ITER coil set, using the baseline scenario as a reference equilibrium, and the parametrisation coefficients are given for future use in a feedback coil-alignment system. The relative merits of square or sinusoidal toroidal current waveforms for ELM mitigation are briefly discussed.

  3. Quality of motion considerations in numerical analysis of motion restoring implants of the spine.

    PubMed

    Bowden, Anton E; Guerin, Heather L; Villarraga, Marta L; Patwardhan, Avinash G; Ochoa, Jorge A

    2008-06-01

    Motion restoring implants function in a dynamic environment that encompasses the full range of spinal kinematics. Accurate assessment of the in situ performance of these devices using numerical techniques requires model verification and validation against the well-established nonlinear quality of motion of the spine, as opposed to the previous norm of matching kinematic endpoint metrics such as range of motion and intervertebral disc pressure measurements at a single kinematic reference point. Experimental data was obtained during cadaveric testing of nine three-functional spinal unit (L3-S1) lumbar spine segments. Each specimen was tested from 8 Nm of applied flexion moment to 6 Nm of applied extension moment with an applied 400 N compressive follower preload. A nonlinear kinematic curve representing the spinal quality of motion (applied moment versus angular rotation) for the index finite element model was constructed and compared to the kinematic responses of the experimental specimens. The effect of spinal soft tissue structure mechanical behaviors on the fidelity of the model's quality of motion to experimental data was assessed by iteratively modifying the material representations of annulus fibrosus, nucleus pulposus, and ligaments. The present work demonstrated that for this model, the annulus fibrosus played a small role in the nonlinear quality of motion of the model, whereas changes in ligament representations had a large effect, as validated against the full kinematic range of motion. An anisotropic continuum representation of the annulus fibrosus was used, along with nonlinear fabric representations of the ligaments and a hyperelastic representation of the nucleus pulposus. Our results suggest that improvements in current methodologies broadly used in numerical simulations of the lumbar spine are needed to fully describe the highly nonlinear motion of the spine.

  4. A Comprehensive Validation Methodology for Sparse Experimental Data

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
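    The two metrics can be paraphrased in a few lines (a sketch of the idea, not the paper's exact definitions): an aggregate relative error over the whole database, and a median relative uncertainty that resists outliers in sparse data. All numbers below are invented.

```python
import numpy as np

# Hypothetical model predictions vs. experimental cross sections (mb).
model = np.array([102.0, 95.0, 130.0, 80.0, 60.0])
exper = np.array([100.0, 100.0, 120.0, 85.0, 50.0])

rel_diff = np.abs(model - exper) / exper

# Cumulative metric: aggregate error over the whole database.
cumulative = np.abs(model - exper).sum() / exper.sum()
# Median metric: robust "typical" uncertainty, insensitive to outliers,
# useful when comparing subsets of the model parameter space.
median = np.median(rel_diff)

print(round(cumulative, 3), round(median, 3))  # -> 0.07 0.059
```

    The cumulative number answers "how accurate is the model overall?", while the median answers "how wrong is a typical prediction?", which is the more useful question during model development.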

  5. Experimental Characterization and Validation of Simultaneous Gust Alleviation and Energy Harvesting for Multifunctional Wing Spars

    DTIC Science & Technology

    2012-08-01

    [Garbled extract from the report's briefing slides.] Recoverable content: cloud-wind and clear-sky gust simulation using the Dryden PSD (U0 = 15 m/s, Lv = 350 m); an energy control law based on limited energy constraints; experimentally validated simultaneous energy harvesting and vibration control. Sponsor: AFOSR.

  6. Criterion validity study of the cervical range of motion (CROM) device for rotational range of motion on healthy adults.

    PubMed

    Tousignant, Michel; Smeesters, Cécil; Breton, Anne-Marie; Breton, Emilie; Corriveau, Hélène

    2006-04-01

    This study compared range of motion (ROM) measurements using a cervical range of motion device (CROM) and an optoelectronic system (OPTOTRAK). To examine the criterion validity of the CROM for the measurement of cervical ROM on healthy adults. Whereas measurements of cervical ROM are recognized as part of the assessment of patients with neck pain, few devices are available in clinical settings. Two papers published previously showed excellent criterion validity for measurements of cervical flexion/extension and lateral flexion using the CROM. Subjects performed neck rotation, flexion/extension, and lateral flexion while sitting on a wooden chair. The ROM values were measured by the CROM as well as the OPTOTRAK. The cervical rotational ROM values using the CROM demonstrated a good to excellent linear relationship with those using the OPTOTRAK: right rotation, r = 0.89 (95% confidence interval, 0.81-0.94), and left rotation, r = 0.94 (95% confidence interval, 0.90-0.97). Similar results were also obtained for flexion/extension and lateral flexion ROM values. The CROM showed excellent criterion validity for measurements of cervical rotation. We propose using ROM values measured by the CROM as outcome measures for patients with neck pain.
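    The reported statistic (Pearson r with a 95% confidence interval) is conventionally obtained via the Fisher z-transform. The sketch below uses the record's right-rotation r = 0.89 with an assumed sample size of n = 30; the published interval (0.81-0.94) reflects the study's actual n, so the numbers here differ slightly.

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """95% CI for a Pearson correlation via the Fisher z-transform."""
    z = math.atanh(r)                 # transform r to an ~normal scale
    se = 1.0 / math.sqrt(n - 3)       # standard error of z
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = fisher_ci(0.89, 30)          # n = 30 is an assumed sample size
print(round(lo, 2), round(hi, 2))     # -> 0.78 0.95
```

    Note the interval is asymmetric about r, a consequence of the transform: correlations near 1 have less room above than below.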

  7. Age-related reduction of trunk muscle torque and prevalence of trunk sarcopenia in community-dwelling elderly: Validity of a portable trunk muscle torque measurement instrument and its application to a large sample cohort study

    PubMed Central

    Sasaki, Shizuka; Chiba, Daisuke; Yamamoto, Yuji; Nawata, Atsushi; Tsuda, Eiichi; Nakaji, Shigeyuki; Ishibashi, Yasuyuki

    2018-01-01

    Trunk muscle weakness and imbalance are risk factors for postural instability, low back pain, and poor postoperative outcomes. The association between trunk muscle strength and aging is poorly understood, and establishing normal reference values is difficult. We aimed to establish the validity of a novel portable trunk muscle torque measurement instrument (PTMI). We then estimated reference data for healthy young adults and elucidated age-related weakness in trunk muscle strength. Twenty-four university students were enrolled to validate values for PTMI, and 816 volunteers from the general population recruited to the Iwaki Health Promotion Project were included to estimate reference data for trunk muscle strength. Trunk flexion and extension torque were measured with PTMI and KinCom, and intraclass correlation coefficients (ICC) were estimated to evaluate the reliability of PTMI values. Furthermore, from the young adult reference, the age-related reduction in trunk muscle torque and the prevalence of sarcopenia among age-sex groups were estimated. The ICCs for flexion and extension torque were 0.807 (p<0.001) and 0.789 (p<0.001), respectively. The prevalence of sarcopenia increased with age, and the prevalence due to flexion torque was double that due to extension torque. Flexion torque decreased significantly after 60 years of age, and extension torque decreased after 70 years of age. In males over age 80, trunk muscle torque decreased to 49.1% in flexion and 63.5% in extension. In females over age 80, trunk muscle torque decreased to 60.7% in flexion and 68.4% in extension. The validity of PTMI was confirmed by its correlation with KinCom. PTMI produced reference data for healthy young adults and demonstrated an age-related reduction in trunk muscle torque. Trunk sarcopenia progressed with aging, and the loss of flexion torque began earlier than that of extension torque. At age 80, trunk muscle torque had decreased to roughly 60% of that of healthy young adults. PMID:29471310

  8. Age-related reduction of trunk muscle torque and prevalence of trunk sarcopenia in community-dwelling elderly: Validity of a portable trunk muscle torque measurement instrument and its application to a large sample cohort study.

    PubMed

    Sasaki, Eiji; Sasaki, Shizuka; Chiba, Daisuke; Yamamoto, Yuji; Nawata, Atsushi; Tsuda, Eiichi; Nakaji, Shigeyuki; Ishibashi, Yasuyuki

    2018-01-01

    Trunk muscle weakness and imbalance are risk factors for postural instability, low back pain, and poor postoperative outcomes. The association between trunk muscle strength and aging is poorly understood, and establishing normal reference values is difficult. We aimed to establish the validity of a novel portable trunk muscle torque measurement instrument (PTMI). We then estimated reference data for healthy young adults and elucidated age-related weakness in trunk muscle strength. Twenty-four university students were enrolled to validate values for PTMI, and 816 volunteers from the general population recruited to the Iwaki Health Promotion Project were included to estimate reference data for trunk muscle strength. Trunk flexion and extension torque were measured with PTMI and KinCom, and intraclass correlation coefficients (ICC) were estimated to evaluate the reliability of PTMI values. Furthermore, from the young adult reference, the age-related reduction in trunk muscle torque and the prevalence of sarcopenia among age-sex groups were estimated. The ICCs for flexion and extension torque were 0.807 (p<0.001) and 0.789 (p<0.001), respectively. The prevalence of sarcopenia increased with age, and the prevalence due to flexion torque was double that due to extension torque. Flexion torque decreased significantly after 60 years of age, and extension torque decreased after 70 years of age. In males over age 80, trunk muscle torque decreased to 49.1% in flexion and 63.5% in extension. In females over age 80, trunk muscle torque decreased to 60.7% in flexion and 68.4% in extension. The validity of PTMI was confirmed by its correlation with KinCom. PTMI produced reference data for healthy young adults and demonstrated an age-related reduction in trunk muscle torque. Trunk sarcopenia progressed with aging, and the loss of flexion torque began earlier than that of extension torque. At age 80, trunk muscle torque had decreased to roughly 60% of that of healthy young adults.
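    Reliability statistics like those above are typically intraclass correlation coefficients from a two-way model. The sketch below implements ICC(2,1), single measures, directly from the ANOVA mean squares (the exact ICC form used in the study is an assumption, and the torque values are invented).

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    Rows = subjects, columns = devices (e.g. PTMI vs. KinCom)."""
    n, k = x.shape
    grand = x.mean()
    row_m = x.mean(axis=1)
    col_m = x.mean(axis=0)
    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)       # between subjects
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)       # between devices
    sse = ((x - row_m[:, None] - col_m[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired torque readings (Nm) for five subjects.
data = np.array([[62.0, 60.0], [81.0, 84.0], [47.0, 45.0],
                 [95.0, 98.0], [70.0, 69.0]])
print(round(icc_2_1(data), 3))  # -> 0.993
```

    ICC(2,1) penalizes systematic offsets between devices as well as random disagreement, which is why it is the usual choice for device-validity studies.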

  9. Modeling the concentration-dependent permeation modes of the KcsA potassium ion channel.

    PubMed

    Nelson, Peter Hugo

    2003-12-01

    The potassium channel from Streptomyces lividans (KcsA) is an integral membrane protein with sequence similarity to all known potassium channels, particularly in the selectivity filter region. A recently proposed model for ion channels containing either n or (n-1) single-file ions in their selectivity filters [P. H. Nelson, J. Chem. Phys. 177, 11396 (2002)] is applied to published KcsA channel K+ permeation data that exhibit a high-affinity process at low concentrations and a low-affinity process at high concentrations [M. LeMasurier et al., J. Gen. Physiol. 118, 303 (2001)]. The kinetic model is shown to provide a reasonable first-order explanation for both the high- and low-concentration permeation modes observed experimentally. The low-concentration mode ([K+]<200 mM) has a 200-mV dissociation constant of 56 mM and a conductance of 88 pS. The high-concentration mode ([K+]>200 mM) has a 200-mV dissociation constant of 1100 mM and a conductance of 500 pS. Based on the permeation model, and x-ray analysis [J. H. Morais-Cabral et al., Nature (London) 414, 37 (2001)], it is suggested that the experimentally observed K+ permeation modes correspond to an n=3 mechanism at high concentrations and an n=2 mechanism at low concentrations. The ratio of the electrical dissociation distances for the high- and low-concentration modes is 3:2, also consistent with the proposed n=3 and n=2 modes. Model predictions for K+ channels that exhibit asymmetric current-voltage (I-V) curves are presented, and further validation of the kinetic model via molecular simulation and experiment is discussed. The qualitatively distinct I-V characteristics exhibited experimentally by Tl+, NH4+, and Rb+ ions at 100 mM concentration can also be explained using the model, but more extensive experimental tests are required for quantitative validation of the model predictions.

  10. Modeling the concentration-dependent permeation modes of the KcsA potassium ion channel

    NASA Astrophysics Data System (ADS)

    Nelson, Peter Hugo

    2003-12-01

    The potassium channel from Streptomyces lividans (KcsA) is an integral membrane protein with sequence similarity to all known potassium channels, particularly in the selectivity filter region. A recently proposed model for ion channels containing either n or (n-1) single-file ions in their selectivity filters [P. H. Nelson, J. Chem. Phys. 177, 11396 (2002)] is applied to published KcsA channel K+ permeation data that exhibit a high-affinity process at low concentrations and a low-affinity process at high concentrations [M. LeMasurier et al., J. Gen. Physiol. 118, 303 (2001)]. The kinetic model is shown to provide a reasonable first-order explanation for both the high- and low-concentration permeation modes observed experimentally. The low-concentration mode ([K+]<200 mM) has a 200-mV dissociation constant of 56 mM and a conductance of 88 pS. The high-concentration mode ([K+]>200 mM) has a 200-mV dissociation constant of 1100 mM and a conductance of 500 pS. Based on the permeation model, and x-ray analysis [J. H. Morais-Cabral et al., Nature (London) 414, 37 (2001)], it is suggested that the experimentally observed K+ permeation modes correspond to an n=3 mechanism at high concentrations and an n=2 mechanism at low concentrations. The ratio of the electrical dissociation distances for the high- and low-concentration modes is 3:2, also consistent with the proposed n=3 and n=2 modes. Model predictions for K+ channels that exhibit asymmetric current-voltage (I-V) curves are presented, and further validation of the kinetic model via molecular simulation and experiment is discussed. The qualitatively distinct I-V characteristics exhibited experimentally by Tl+, NH4+, and Rb+ ions at 100 mM concentration can also be explained using the model, but more extensive experimental tests are required for quantitative validation of the model predictions.
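    For a rough feel of the two permeation modes, each can be approximated by a single-site saturating (Michaelis-Menten-like) conductance curve with the quoted 200-mV parameters. This is a simplification of the n/(n-1) kinetic model, offered only to make the dissociation-constant/conductance pairs concrete.

```python
def conductance_pS(c_mM, g_max_pS, kd_mM):
    """Saturating conductance g(c) = g_max * c / (K_d + c)."""
    return g_max_pS * c_mM / (kd_mM + c_mM)

# Reported 200-mV parameters for the two modes.
low  = conductance_pS(100.0, 88.0, 56.0)     # low-concentration mode at 100 mM
high = conductance_pS(400.0, 500.0, 1100.0)  # high-concentration mode at 400 mM
print(round(low, 1), round(high, 1))         # -> 56.4 133.3
```

    The low-affinity mode's large K_d (1100 mM) means it is far from saturation even at high physiological concentrations, consistent with the crossover behavior the record describes.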

  11. Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions

    DTIC Science & Technology

    2017-09-01

    [Garbled extract from the report documentation page.] Recoverable content: "Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions," by Matthew D. Bouwense; distribution unlimited.

  12. Atomistic model of the spider silk nanostructure

    NASA Astrophysics Data System (ADS)

    Keten, Sinan; Buehler, Markus J.

    2010-04-01

    Spider silk is an ultrastrong and extensible self-assembling biopolymer that outperforms the mechanical characteristics of many synthetic materials, including steel. Here we report atomic-level structures that represent aggregates of MaSp1 proteins from the N. clavipes silk sequence, based on a bottom-up computational approach using replica exchange molecular dynamics. We discover that poly-alanine regions predominantly form distinct and orderly beta-sheet crystal domains, while disorderly structures resembling 3₁-helices are formed by poly-glycine repeats. These could be the molecular source of the large semicrystalline fraction observed in silks, and also form the basis of the so-called "prestretched" molecular configuration. Our structures are validated against experimental data based on dihedral angle pair calculations presented in Ramachandran plots, alpha-carbon atomic distances, and secondary structure content.

  13. Aligning, Bonding, and Testing Mirrors for Lightweight X-ray Telescopes

    NASA Technical Reports Server (NTRS)

    Chan, Kai-Wing; Zhang, William W.; Saha, Timo T.; McClelland, Ryan S.; Biskach, Michael P.; Niemeyer, Jason; Schofield, Mark J.; Mazzarella, James R.; Kolos, Linette D.; Hong, Melinda M.

    2015-01-01

    High-resolution, high-throughput optics for x-ray astronomy entails fabrication of well-formed mirror segments and their integration with arc-second precision. In this paper, we address issues of aligning and bonding thin glass mirrors with negligible additional distortion. The stability of the bonded mirrors and the curing of the epoxy used in bonding them were tested extensively. We present results from tests of bonding mirrors onto experimental modules and on the stability of the bonded mirrors under x-ray testing. These results demonstrate the fundamental validity of the methods used in integrating mirrors into a telescope module and reveal areas for further investigation. The alignment and integration methods are applicable to astronomical mission concepts such as STAR-X, the Survey and Time-domain Astronomical Research Explorer.

  14. Test of hadronic interaction models with the KASCADE-Grande muon data

    NASA Astrophysics Data System (ADS)

    Arteaga-Velázquez, J. C.; Apel, W. D.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Finger, M.; Fuchs, B.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Mayer, H. J.; Melissas, M.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.

    2013-06-01

    KASCADE-Grande is an air-shower observatory devoted to the detection of cosmic rays with energies in the interval 10^14-10^18 eV, where the Grande array covers the higher energy range. The experiment comprises different detection systems which allow precise measurements of the charged-particle, electron, and muon numbers of extensive air showers (EAS). These data are employed not only to reconstruct the properties of the primary cosmic-ray particle but also to test hadronic interaction models at high energies. In this contribution, predictions of the muon content of EAS from QGSJET II-2, SIBYLL 2.1, and EPOS 1.99 are confronted with the experimental measurements performed with the KASCADE-Grande experiment in order to test the validity of these hadronic models commonly used in EAS simulations.

  15. VQSEC Home Page

    Science.gov Websites

    [Website navigation residue; the description is truncated in the source.] Recoverable content: the Validation and Qualification Sciences Experimental Complex (VQSEC) at Sandia; navigation items include Complex Water Impact and Visitor Information.

  16. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    NASA Astrophysics Data System (ADS)

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; Price, Stephen; Hoffman, Matthew; Lipscomb, William H.; Fyke, Jeremy; Vargo, Lauren; Boghozian, Adrianna; Norman, Matthew; Worley, Patrick H.

    2017-06-01

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Ultimately, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
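    The bit-for-bit evaluation mentioned above can be sketched as a tolerance-zero array comparison. This is an illustrative reconstruction only, not LIVVkit's actual API; the function name and signature are assumptions:

```python
import numpy as np

def bit_for_bit(test, reference, rtol=0.0, atol=0.0):
    """Compare a test field against a reference field.

    With rtol = atol = 0 this is a strict bit-for-bit check; nonzero
    tolerances give the looser comparison used when compiler or
    platform differences are expected.
    """
    test, reference = np.asarray(test), np.asarray(reference)
    if test.shape != reference.shape:
        return False, None
    passed = np.allclose(test, reference, rtol=rtol, atol=atol)
    return passed, float(np.abs(test - reference).max())

# Identical fields pass the strict check; a perturbed field fails it.
ref = np.linspace(0.0, 1.0, 5)
ok, _ = bit_for_bit(ref, ref.copy())
bad, max_diff = bit_for_bit(ref, ref + 1e-9)
```

    The same comparison with nonzero tolerances would report where, and by how much, two model runs diverge, which is the information the toolkit plots for its variable-difference maps.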

  17. Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2

    NASA Technical Reports Server (NTRS)

    Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)

    1998-01-01

    The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.

  18. Specification and Preliminary Validation of IAT (Integrated Analysis Techniques) Methods: Executive Summary.

    DTIC Science & Technology

    1985-03-01

    conceptual framework, and preliminary validation of IAT concepts. Planned work for FY85, including more extensive validation, is also described. The approach: (1) identify needs and requirements for IAT; (2) develop the IAT conceptual framework; (3) validate IAT methods; (4) develop applications materials.

  19. Experimental Design and Some Threats to Experimental Validity: A Primer

    ERIC Educational Resources Information Center

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  20. 77 FR 46750 - Agency Information Collection Extension

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-06

    ... Questionnaire Testing, Evaluation, and Research.'' The proposed collection will utilize qualitative and quantitative methodologies to pretest questionnaires and validate EIA survey forms data quality, including..., Evaluation, and Research; (3) Type of Request: Extension, Without Change, of a Previously Approved Collection...

  1. Validity in Mixed Methods Research in Education: The Application of Habermas' Critical Theory

    ERIC Educational Resources Information Center

    Long, Haiying

    2017-01-01

    Mixed methods approach has developed into the third methodological movement in educational research. Validity in mixed methods research as an important issue, however, has not been examined as extensively as that of quantitative and qualitative research. Additionally, the previous discussions of validity in mixed methods research focus on research…

  2. Developing and Validating a Survey of Korean Early Childhood English Teachers' Knowledge

    ERIC Educational Resources Information Center

    Kim, Jung In

    2015-01-01

    The main purpose of this study is to develop and validate a valid measure of the early childhood (EC) English teacher knowledge. Through extensive literature review on second/foreign language (L2/FL) teacher knowledge, early childhood teacher knowledge and early childhood language teacher knowledge, and semi-structured interviews from current…

  3. A Novel Small-Specimen Planar Biaxial Testing System With Full In-Plane Deformation Control.

    PubMed

    Potter, Samuel; Graves, Jordan; Drach, Borys; Leahy, Thomas; Hammel, Chris; Feng, Yuan; Baker, Aaron; Sacks, Michael S

    2018-05-01

    Simulations of soft tissues require accurate and robust constitutive models, whose form is derived from carefully designed experimental studies. For such investigations of membranes or thin specimens, planar biaxial systems have been used extensively. Yet, all such systems remain limited in their ability to: (1) fully prescribe in-plane deformation gradient tensor F2D, (2) ensure homogeneity of the applied deformation, and (3) be able to accommodate sufficiently small specimens to ensure a reasonable degree of material homogeneity. To address these issues, we have developed a novel planar biaxial testing device that overcomes these difficulties and is capable of full control of the in-plane deformation gradient tensor F2D and of testing specimens as small as ∼4 mm × ∼4 mm. Individual actuation of the specimen attachment points, combined with a robust real-time feedback control, enabled the device to enforce any arbitrary F2D with a high degree of accuracy and homogeneity. Results from extensive device validation trials and example tissues illustrated the ability of the device to perform as designed and gather data needed for developing and validating constitutive models. Examples included the murine aortic tissues, allowing for investigators to take advantage of the genetic manipulation of murine disease models. These capabilities highlight the potential of the device to serve as a platform for informing and verifying the results of inverse models and for conducting robust, controlled investigation into the biomechanics of very local behaviors of soft tissues and membrane biomaterials.
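    Full control of the in-plane deformation gradient described above rests on recovering F2D from marker positions. A minimal least-squares sketch, assuming a homogeneous deformation; the function name and marker layout are illustrative, not the device's control software:

```python
import numpy as np

def deformation_gradient_2d(X, x):
    """Least-squares estimate of the in-plane deformation gradient F2D
    from reference marker positions X (N x 2) and deformed positions
    x (N x 2), assuming a homogeneous deformation about the centroid.
    """
    Xc = X - X.mean(axis=0)
    xc = x - x.mean(axis=0)
    # Row form x_i = X_i B  =>  F = B^T, solved in the least-squares sense.
    B, *_ = np.linalg.lstsq(Xc, xc, rcond=None)
    return B.T

# Equibiaxial stretch of 1.1 recovered from four corner markers.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
F_true = np.diag([1.1, 1.1])
x = X @ F_true.T
F_est = deformation_gradient_2d(X, x)
```

    In a feedback loop, the deviation of such an estimate from the commanded F2D would drive the individual attachment-point actuators.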

  4. Kinematic evaluation of the finger's interphalangeal joints coupling mechanism--variability, flexion-extension differences, triggers, locking swanneck deformities, anthropometric correlations.

    PubMed

    Leijnse, J N A L; Quesada, P M; Spoor, C W

    2010-08-26

    The human finger contains tendon/ligament mechanisms essential for proper control. One mechanism couples the movements of the interphalangeal joints when the (unloaded) finger is flexed with active deep flexor. This study's aim was to accurately determine in a large finger sample the kinematics and variability of the coupled interphalangeal joint motions, for potential clinical and finger model validation applications. The data could also be applied to humanoid robotic hands. Sixty-eight fingers were measured in seventeen hands in nine subjects. Fingers exhibited great joint mobility variability, with passive proximal interphalangeal hyperextension ranging from zero to almost fifty degrees. Increased measurement accuracy was obtained by using marker frames to amplify finger segment motions. Gravitational forces on the marker frames were not found to invalidate measurements. The recorded interphalangeal joint trajectories were highly consistent, demonstrating the underlying coupling mechanism. The increased accuracy and large sample size allowed for evaluation of detailed trajectory variability, systematic differences between flexion and extension trajectories, and three trigger types, distinct from flexor tendon triggers, involving initial flexion deficits in either proximal or distal interphalangeal joint. The experimental methods, data and analysis should advance insight into normal and pathological finger biomechanics (e.g., swanneck deformities), and could help improve clinical differential diagnostics of trigger finger causes. The marker frame measuring method may be useful to quantify interphalangeal joints trajectories in surgical/rehabilitative outcome studies. The data as a whole provide the most comprehensive collection of interphalangeal joint trajectories for clinical reference and model validation known to us to date. 2010 Elsevier Ltd. All rights reserved.

  5. Global reaction mechanism for the auto-ignition of full boiling range gasoline and kerosene fuels

    NASA Astrophysics Data System (ADS)

    Vandersickel, A.; Wright, Y. M.; Boulouchos, K.

    2013-12-01

    Compact reaction schemes capable of predicting auto-ignition are a prerequisite for the development of strategies to control and optimise homogeneous charge compression ignition (HCCI) engines. In particular, for full boiling range fuels exhibiting two-stage ignition, a tremendous demand exists in the engine development community. The present paper therefore meticulously assesses a previous 7-step reaction scheme developed to predict auto-ignition for four hydrocarbon blends and proposes an important extension of the model constant optimisation procedure, allowing the model to capture not only ignition delays, but also the evolutions of representative intermediates and heat release rates for a variety of full boiling range fuels. Additionally, an extensive validation of the latter evolutions by means of various detailed n-heptane reaction mechanisms from literature has been presented, both for perfectly homogeneous, as well as non-premixed/stratified HCCI conditions. Finally, the model's potential to simulate the auto-ignition of various full boiling range fuels is demonstrated by means of experimental shock tube data for six strongly differing fuels, containing e.g. up to 46.7% cyclo-alkanes, 20% naphthalenes or complex branched aromatics such as methyl- or ethyl-naphthalene. The good predictive capability observed for each of the validation cases as well as the successful parameterisation for each of the six fuels indicate that the model could, in principle, be applied to any hydrocarbon fuel, provided suitable adjustments to the model parameters are carried out. Combined with the optimisation strategy presented, the model therefore constitutes a major step towards the inclusion of real fuel kinetics into full scale HCCI engine simulations.

  6. Towards interoperable and reproducible QSAR analyses: Exchange of datasets.

    PubMed

    Spjuth, Ola; Willighagen, Egon L; Guha, Rajarshi; Eklund, Martin; Wikberg, Jarl Es

    2010-06-30

    QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constrain collaborations and re-use of data. We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join, extend, and combine datasets and hence work collectively, and also allows for analyzing the effect descriptors have on the statistical model's performance. The presented Bioclipse plugins equip scientists with graphical tools that make QSAR-ML easily accessible for the community.
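    The idea of a versioned, machine-readable dataset descriptor can be sketched as below; the element and attribute names here are hypothetical and do not reproduce the actual QSAR-ML schema:

```python
import xml.etree.ElementTree as ET

# Illustrative dataset descriptor; the element and attribute names are
# hypothetical and do NOT reproduce the actual QSAR-ML schema.
doc = """<qsarDataset>
  <structure id="mol1" inchi="InChI=1S/CH4/h1H4"/>
  <descriptor id="XLogP" implementation="cdk" version="1.3.5"/>
</qsarDataset>"""

root = ET.fromstring(doc)
# Recording the implementation and its version per descriptor is what
# makes the dataset setup reproducible.
descriptors = [(d.get("id"), d.get("implementation"), d.get("version"))
               for d in root.findall("descriptor")]
```

    Any tool reading such a file can recompute the exact same descriptor matrix, which is the reproducibility property the paper argues plain CSV exports lack.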

  7. Towards interoperable and reproducible QSAR analyses: Exchange of datasets

    PubMed Central

    2010-01-01

    Background QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constrain collaborations and re-use of data. Results We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Conclusions Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join, extend, and combine datasets and hence work collectively, and also allows for analyzing the effect descriptors have on the statistical model's performance. The presented Bioclipse plugins equip scientists with graphical tools that make QSAR-ML easily accessible for the community. PMID:20591161

  8. A stochastic Iwan-type model for joint behavior variability modeling

    NASA Astrophysics Data System (ADS)

    Mignolet, Marc P.; Song, Pengchao; Wang, X. Q.

    2015-08-01

    This paper focuses on the development and validation of a stochastic model to describe the dissipation and stiffness properties of a bolted joint for which experimental data are available and exhibit a large scatter. An extension of the deterministic parallel-series Iwan model for the characterization of the force-displacement behavior of joints is first carried out. This new model involves dynamic and static coefficients of friction differing from each other and a broadly defined distribution of Jenkins elements. Its applicability is next investigated using the experimental data, i.e. stiffness and dissipation measurements obtained in harmonic testing of 9 nominally identical bolted joints. The model is found to provide a very good fit of the experimental data for each bolted joint notwithstanding the significant variability of their behavior. This finding suggests that this variability can be simulated through the randomization of only the parameters of the proposed Iwan-type model. The distribution of these parameters is next selected based on maximum entropy concepts and their corresponding parameters, i.e. the hyperparameters of the model, are identified using a maximum likelihood strategy. Proceeding with a Monte Carlo simulation of this stochastic Iwan model demonstrates that the experimental data fit well within the uncertainty band corresponding to the 5th and 95th percentiles of the model predictions, which strongly supports the adequacy of the modeling effort.
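    The deterministic parallel-series Iwan idea underlying this work can be sketched with Jenkins (spring plus frictional slider) elements in parallel. This simplification assumes identical element stiffness and omits the paper's distinct static/dynamic friction coefficients and broadly defined element distribution:

```python
import numpy as np

def iwan_force(u_history, k, f_slip):
    """Quasi-static force response of a parallel-series Iwan model:
    Jenkins (spring + frictional slider) elements in parallel, here
    with identical stiffness k and slip thresholds f_slip.
    """
    slip = np.zeros_like(f_slip)      # accumulated slider displacement
    forces = []
    for u in u_history:
        f_el = k * (u - slip)         # trial elastic force per element
        over = np.abs(f_el) > f_slip  # elements driven past their threshold
        slip[over] = u - np.sign(f_el[over]) * f_slip[over] / k
        forces.append(np.sum(k * (u - slip)))
    return np.array(forces)

# One load-reverse cycle: joint-like softening on loading and a
# hysteretic (energy-dissipating) return path.
f_slip = np.linspace(0.1, 1.0, 10)
u_path = np.concatenate([np.linspace(0.0, 1.0, 50), np.linspace(1.0, 0.0, 50)])
f = iwan_force(u_path, k=5.0, f_slip=f_slip)
```

    Randomizing k and the slip-threshold distribution, as the paper does via maximum entropy, turns this deterministic response into the stochastic joint model whose percentile bands envelop the measurements.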

  9. Recursive formulae and performance comparisons for first mode dynamics of periodic structures

    NASA Astrophysics Data System (ADS)

    Hobeck, Jared D.; Inman, Daniel J.

    2017-05-01

    Periodic structures are growing in popularity, especially in the energy harvesting and metastructures communities. Common types of these unique structures are referred to in the literature as zigzag, orthogonal spiral, fan-folded, and longitudinal zigzag structures. Many of these studies on periodic structures have two competing goals in common: (a) minimizing natural frequency, and (b) minimizing mass or volume. These goals suggest that no single design is best for all applications; therefore, there is a need for design optimization and comparison tools, which first require efficient, easy-to-implement models. All available structural dynamics models for these types of structures do provide exact analytical solutions; however, they are complex, require tedious implementation, and provide more information than necessary for practical applications, making them computationally inefficient. This paper presents experimentally validated recursive models that are able to very accurately and efficiently predict the dynamics of the four most common types of periodic structures. The proposed modeling technique employs a combination of static deflection formulae and Rayleigh’s Quotient to estimate the first mode shape and natural frequency of periodic structures having any number of beams. Also included in this paper are the results of an extensive experimental validation study which show excellent agreement between model prediction and measurement. Lastly, the proposed models are used to evaluate the performance of each type of structure. Results of this performance evaluation reveal key advantages and disadvantages associated with each type of structure.
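    The combination of a static deflection shape with Rayleigh's quotient can be illustrated on a single uniform cantilever; the material values are arbitrary placeholders, and the paper applies the same idea recursively across the beams of a periodic structure:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Illustrative uniform cantilever (values are arbitrary placeholders).
E, I = 70e9, 1e-10        # Young's modulus [Pa], area moment [m^4]
rho_A = 0.27              # mass per unit length [kg/m]
L = 0.1                   # length [m]
x = np.linspace(0.0, L, 2001)

phi = x**2 * (3*L - x)    # static tip-load deflection shape (trial mode)
d2phi = 6*L - 6*x         # its second derivative

# Rayleigh's quotient: omega^2 = EI*int(phi'')^2 / (rho_A*int(phi^2)).
omega = np.sqrt(E * I * trapezoid(d2phi**2, x)
                / (rho_A * trapezoid(phi**2, x)))

# Exact first cantilever mode for comparison: 1.8751^2 sqrt(EI/(rho_A L^4)).
omega_exact = 1.8751**2 * np.sqrt(E * I / (rho_A * L**4))
ratio = omega / omega_exact   # Rayleigh's quotient is an upper bound
```

    Because the static deflection shape is close to the true first mode, the frequency estimate lands within about 1.5% of the exact value while requiring only two quadratures, which is the efficiency argument the recursive formulae exploit.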

  10. Unsteady boundary layer development on a wind turbine blade: an experimental study of a surrogate problem

    NASA Astrophysics Data System (ADS)

    Cadel, Daniel R.; Zhang, Di; Lowe, K. Todd; Paterson, Eric G.

    2018-04-01

    Wind turbines with thick blade profiles experience turbulent, periodic approach flow, leading to unsteady blade loading and large torque fluctuations on the turbine drive shaft. Presented here is an experimental study of a surrogate problem representing some key aspects of the wind turbine unsteady fluid mechanics. This experiment has been designed through joint consideration by experiment and computation, with the ultimate goal of numerical model development for aerodynamics in unsteady and turbulent flows. A cylinder at diameter Reynolds number of 65,000 and Strouhal number of 0.184 is placed 10.67 diameters upstream of a NACA 63215b airfoil with chord Reynolds number of 170,000 and chord-based reduced frequency of k = 2πf(c/2)/V = 1.5. Extensive flow field measurements using particle image velocimetry provide a number of insights about this flow, as well as data for model validation and development. Velocity contours on the airfoil suction side in the presence of the upstream cylinder indicate a redistribution of turbulent normal stresses from transverse to streamwise, consistent with rapid distortion theory predictions. A study of the boundary layer over the suction side of the airfoil reveals very low Reynolds number turbulent mean streamwise velocity profiles. The dominance of the high amplitude large eddy passages results in a phase lag in streamwise velocity as a function of distance from the wall. The results and accompanying description provide a new test case incorporating moderate reduced-frequency inflow for computational model validation and development.

  11. Numerical Investigation of Plasma Detachment in Magnetic Nozzle Experiments

    NASA Technical Reports Server (NTRS)

    Sankaran, Kamesh; Polzin, Kurt A.

    2008-01-01

    At present there exists no generally accepted theoretical model that provides a consistent physical explanation of plasma detachment from an externally-imposed magnetic nozzle. To make progress towards that end, simulation of plasma flow in the magnetic nozzle of an arcjet experiment is performed using a multidimensional numerical simulation tool that includes theoretical models of the various dispersive and dissipative processes present in the plasma. This is an extension of the simulation tool employed in previous work by Sankaran et al. The aim is to compare the computational results with various proposed magnetic nozzle detachment theories to develop an understanding of the physical mechanisms that cause detachment. An applied magnetic field topology is obtained using a magnetostatic field solver (see Fig. I), and this field is superimposed on the time-dependent magnetic field induced in the plasma to provide a self-consistent field description. The applied magnetic field and model geometry match those found in experiments by Kuriki and Okada. This geometry is modeled because there is a substantial amount of experimental data that can be compared to the computational results, allowing for validation of the model. In addition, comparison of the simulation results with the experimentally obtained plasma parameters will provide insight into the mechanisms that lead to plasma detachment, revealing how they scale with different input parameters. Further studies will focus on modeling literature experiments both for the purpose of additional code validation and to extract physical insight regarding the mechanisms driving detachment.

  12. 40 CFR 761.386 - Required experimental conditions for the validation study and subsequent use during decontamination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Comparison Study for... experimental conditions for the validation study and subsequent use during decontamination. The following...

  13. 40 CFR 761.386 - Required experimental conditions for the validation study and subsequent use during decontamination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the validation study and subsequent use during decontamination. 761.386 Section 761.386 Protection of... (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Comparison Study for... experimental conditions for the validation study and subsequent use during decontamination. The following...

  14. Experimental Validation Techniques for the Heleeos Off-Axis Laser Propagation Model

    DTIC Science & Technology

    2010-03-01

    Experimental Validation Techniques for the HELEEOS Off-Axis Laser Propagation Model. Thesis by John Haiducek, 1st Lt, USAF (BS, Physics), March 2010, AFIT/GAP/ENP/10-M07. Approved for public release; distribution unlimited. Abstract: The High Energy Laser End-to-End

  15. System equivalent model mixing

    NASA Astrophysics Data System (ADS)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

    This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM) frequency based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques; namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.

  16. Experimental determination of pore shapes using phase retrieval from q-space NMR diffraction

    NASA Astrophysics Data System (ADS)

    Demberg, Kerstin; Laun, Frederik Bernd; Bertleff, Marco; Bachert, Peter; Kuder, Tristan Anselm

    2018-05-01

    This paper presents an approach to solving the phase problem in nuclear magnetic resonance (NMR) diffusion pore imaging, a method that allows imaging the shape of arbitrary closed pores filled with an NMR-detectable medium for investigation of the microstructure of biological tissue and porous materials. Classical q-space imaging composed of two short diffusion-encoding gradient pulses yields, analogously to diffraction experiments, the modulus squared of the Fourier transform of the pore image which entails an inversion problem: An unambiguous reconstruction of the pore image requires both magnitude and phase. Here the phase information is recovered from the Fourier modulus by applying a phase retrieval algorithm. This allows omitting experimentally challenging phase measurements using specialized temporal gradient profiles. A combination of the hybrid input-output algorithm and the error reduction algorithm was used with dynamically adapting support (shrinkwrap extension). No a priori knowledge on the pore shape was fed to the algorithm except for a finite pore extent. The phase retrieval approach proved successful for simulated data with and without noise and was validated in phantom experiments with well-defined pores using hyperpolarized xenon gas.
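    The error-reduction half of the algorithm combination described above can be sketched in one dimension. The paper alternates it with hybrid input-output and adapts the support dynamically (shrinkwrap); in this simplified sketch the support is fixed and assumed known:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth "pore image": a nonnegative, compactly supported signal.
n = 64
truth = np.zeros(n)
truth[10:20] = rng.random(10) + 0.5
modulus = np.abs(np.fft.fft(truth))     # measured |F|; the phase is lost

support = np.zeros(n, dtype=bool)
support[10:20] = True                   # assumed finite pore extent

def fourier_error(g):
    return (np.linalg.norm(np.abs(np.fft.fft(g)) - modulus)
            / np.linalg.norm(modulus))

g = rng.random(n) * support             # random nonnegative start
err0 = fourier_error(g)
for _ in range(500):
    G = np.fft.fft(g)
    G = modulus * np.exp(1j * np.angle(G))  # enforce the measured modulus
    g = np.fft.ifft(G).real
    g[~support] = 0.0                       # enforce support ...
    g[g < 0.0] = 0.0                        # ... and nonnegativity
err = fourier_error(g)                  # ER error is non-increasing
```

    Error reduction alone tends to stagnate in local minima, which is why the paper combines it with hybrid input-output iterations before the final refinement.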

  17. Ligand-Induced Modulation of the Free-Energy Landscape of G Protein-Coupled Receptors Explored by Adaptive Biasing Techniques

    PubMed Central

    Provasi, Davide; Artacho, Marta Camacho; Negri, Ana; Mobarec, Juan Carlos; Filizola, Marta

    2011-01-01

    Extensive experimental information supports the formation of ligand-specific conformations of G protein-coupled receptors (GPCRs) as a possible molecular basis for their functional selectivity for signaling pathways. Taking advantage of the recently published inactive and active crystal structures of GPCRs, we have implemented an all-atom computational strategy that combines different adaptive biasing techniques to identify ligand-specific conformations along pre-determined activation pathways. Using the prototypic GPCR β2-adrenergic receptor as a suitable test case for validation, we show that ligands with different efficacies (either inverse agonists, neutral antagonists, or agonists) modulate the free-energy landscape of the receptor by shifting the conformational equilibrium towards active or inactive conformations depending on their elicited physiological response. Notably, we provide for the first time a quantitative description of the thermodynamics of the receptor in an explicit atomistic environment, which accounts for the receptor basal activity and the stabilization of different active-like states by differently potent agonists. Structural inspection of these metastable states reveals unique conformations of the receptor that may have been difficult to retrieve experimentally. PMID:22022248

  18. Geant4 hadronic physics for space radiation environment.

    PubMed

    Ivantchenko, Anton V; Ivanchenko, Vladimir N; Molina, Jose-Manuel Quesada; Incerti, Sebastien L

    2012-01-01

    To test and to develop Geant4 (Geometry And Tracking version 4) Monte Carlo hadronic models with focus on applications in a space radiation environment. The Monte Carlo simulations have been performed using the Geant4 toolkit. Binary (BIC), its extension for incident light ions (BIC-ion), and Bertini (BERT) cascades were used as main Monte Carlo generators. For comparison purposes, some other models were tested too. The hadronic testing suite has been used as a primary tool for model development and validation against experimental data. The Geant4 pre-compound (PRECO) and de-excitation (DEE) models were revised and improved. Proton, neutron, pion, and ion nuclear interactions were simulated with the recent version of Geant4 9.4 and were compared with experimental data from thin and thick target experiments. The Geant4 toolkit offers a large set of models allowing effective simulation of interactions of particles with matter. We have tested different Monte Carlo generators with our hadronic testing suite and accordingly we can propose an optimal configuration of Geant4 models for the simulation of the space radiation environment.

  19. Experimental determination of pore shapes using phase retrieval from q-space NMR diffraction.

    PubMed

    Demberg, Kerstin; Laun, Frederik Bernd; Bertleff, Marco; Bachert, Peter; Kuder, Tristan Anselm

    2018-05-01

    This paper presents an approach to solving the phase problem in nuclear magnetic resonance (NMR) diffusion pore imaging, a method that allows imaging the shape of arbitrary closed pores filled with an NMR-detectable medium for investigation of the microstructure of biological tissue and porous materials. Classical q-space imaging composed of two short diffusion-encoding gradient pulses yields, analogously to diffraction experiments, the modulus squared of the Fourier transform of the pore image which entails an inversion problem: An unambiguous reconstruction of the pore image requires both magnitude and phase. Here the phase information is recovered from the Fourier modulus by applying a phase retrieval algorithm. This allows omitting experimentally challenging phase measurements using specialized temporal gradient profiles. A combination of the hybrid input-output algorithm and the error reduction algorithm was used with dynamically adapting support (shrinkwrap extension). No a priori knowledge on the pore shape was fed to the algorithm except for a finite pore extent. The phase retrieval approach proved successful for simulated data with and without noise and was validated in phantom experiments with well-defined pores using hyperpolarized xenon gas.

  20. Turbulent mixing noise from supersonic jets

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Chen, Ping

    1994-01-01

    There is now a substantial body of theoretical and experimental evidence that the dominant part of the turbulent noise of supersonic jets is generated directly by the large turbulence structures/instability waves of the jet flow. Earlier, Tam and Burton provided a description of the physical mechanism by which supersonically traveling instability waves can generate sound efficiently. They used the method of matched asymptotic expansions to construct an instability wave solution which is valid in the far field. The present work is an extension of the theory of Tam and Burton. It is argued that the instability wave spectrum of the jet may be regarded as generated by stochastic white-noise excitation at the nozzle lip region. The reason the excitation has white-noise characteristics is that near the nozzle lip region the flow in the jet mixing layer has no intrinsic length and time scales. The present stochastic wave model theory of supersonic jet noise contains a single unknown multiplicative constant. Comparisons between the calculated noise directivities at selected Strouhal numbers and experimental measurements of a Mach 2 jet at different jet temperatures have been carried out. Favorable agreement is found.

  1. Model-based design of an intermittent simulated moving bed process for recovering lactic acid from ternary mixture.

    PubMed

    Song, Mingkai; Cui, Linlin; Kuang, Han; Zhou, Jingwei; Yang, Pengpeng; Zhuang, Wei; Chen, Yong; Liu, Dong; Zhu, Chenjie; Chen, Xiaochun; Ying, Hanjie; Wu, Jinglan

    2018-08-10

    An intermittent simulated moving bed (3F-ISMB) operation scheme, an extension of the 3W-ISMB to the non-linear adsorption region, has been introduced for the separation of a glucose, lactic acid and acetic acid ternary mixture. This work focuses on exploring the feasibility of the proposed process theoretically and experimentally. Firstly, the real 3F-ISMB model, coupled with the transport dispersive model (TDM) and the Modified-Langmuir isotherm, was established to build up the separation parameter plane. Subsequently, three operating conditions were selected from the plane to run the 3F-ISMB unit. The experimental results were used to verify the model. Afterwards, the influences of the various flow rates on the separation performance were investigated systematically by means of the validated 3F-ISMB model. The intermittently retained component, lactic acid, was finally obtained with a purity of 98.5%, a recovery of 95.5% and an average concentration of 38 g/L. The proposed 3F-ISMB process can efficiently separate a mixture with low selectivity into three fractions. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Rapid modification of urban land surface temperature during rainfall

    NASA Astrophysics Data System (ADS)

    Omidvar, H.; Bou-Zeid, E.; Song, J.; Yang, J.; Arwatz, G.; Wang, Z.; Hultmark, M.; Kaloush, K.

    2017-12-01

    We study the runoff dynamics and heat transfer over urban pavements during rainfall. A kinematic wave approach is combined with heat storage and transfer schemes to develop a model for impervious (with runoff) and pervious (without runoff) pavements. The resulting framework is a numerical prognostic model that can simulate the temperature fields in the subsurface and runoff layers to capture the rapid cooling of the surface, as well as the thermal pollution advected in the runoff. Extensive field measurements were then conducted over experimental pavements in Arizona to probe the physics, better represent the relevant processes in the model, and validate the model. The experimental data and the model results were in very good agreement, and their joint analysis elucidated the physics of the rapid heat transfer from the subsurface to the runoff layer. Finally, we apply the developed model to investigate how the various hydrological and thermal properties of the pavements, as well as ambient environmental conditions, modulate the surface and runoff thermal dynamics; the relative importance of each factor; and how the model can be applied to mitigate the adverse impacts of urbanization.
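The kinematic wave component of such a framework can be sketched very simply: pavement runoff depth h obeys dh/dt = rain - dq/dx with a Manning-type flux q = alpha*h**m. The explicit upwind scheme, parameter values, and function name below are illustrative assumptions, not the authors' coupled thermal model.

```python
import numpy as np

def kinematic_wave(h, rain, dt, dx, alpha=1.0, m=5.0 / 3.0, steps=1):
    """Explicit upwind update of the 1-D kinematic wave equation
    dh/dt = rain - dq/dx with the flux law q = alpha * h**m.
    Zero inflow at the upstream boundary; illustrative sketch only."""
    h = h.astype(float).copy()
    for _ in range(steps):
        q = alpha * h ** m
        # upwind flux divergence; the first cell receives no upstream inflow
        dqdx = np.diff(np.concatenate(([0.0], q))) / dx
        h = np.maximum(h + dt * (rain - dqdx), 0.0)
    return h
```

Under uniform rain the depth profile grows toward the downstream end, which is where the model would also concentrate the thermal load carried by the runoff.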

  3. Harvesting liquid from unsaturated vapor - nanoflows induced by capillary condensation

    NASA Astrophysics Data System (ADS)

    Vincent, Olivier; Marguet, Bastien; Stroock, Abraham

    2016-11-01

    A vapor, even subsaturated, can spontaneously form liquid in nanoscale spaces. This process, known as capillary condensation, plays a fundamental role in various contexts, such as the formation of clouds or the dynamics of hydrocarbons in the geological subsurface. However, large uncertainties remain about the thermodynamics and fluid mechanics of the phenomenon, due to experimental challenges as well as outstanding questions about the validity of macroscale physics at the nanometer scale. We studied experimentally the spatio-temporal dynamics of water condensation in a model nanoporous medium (pore radius 2 nm), taking advantage of the color change of the material upon hydration. We found that at low relative humidities (< 60% RH), capillary condensation progressed in a diffusive fashion, while at > 60% RH it occurred through a well-defined capillary-viscous imbibition front, driven by a balance between the pore capillary pressure and the condensation stress given by the Kelvin equation. Further analyzing the imbibition dynamics as a function of saturation allowed us to extract detailed information about the physics of nano-confined fluids. Our results suggest an excellent extension of macroscale fluid dynamics and thermodynamics even to pores just 10 molecules in diameter.
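The condensation stress invoked above is set by the Kelvin equation; in standard notation (p/p_sat the relative humidity, V_m the liquid molar volume), the liquid pressure deficit is

```latex
\Delta P \;=\; \frac{RT}{V_m}\,\ln\!\left(\frac{p}{p_{\mathrm{sat}}}\right),
```

which is negative (a tensile stress) for subsaturated vapor, p < p_sat, and balances the pore capillary pressure at the imbibition front.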

  4. Camera-tracking gaming control device for evaluation of active wrist flexion and extension.

    PubMed

    Shefer Eini, Dalit; Ratzon, Navah Z; Rizzo, Albert A; Yeh, Shih-Ching; Lange, Belinda; Yaffe, Batia; Daich, Alexander; Weiss, Patrice L; Kizony, Rachel

    Cross-sectional. Measuring wrist range of motion (ROM) is an essential procedure in hand therapy clinics. To test the reliability and validity of a dynamic ROM assessment, the Camera Wrist Tracker (CWT). Wrist flexion and extension ROM of 15 patients with distal radius fractures and 15 matched controls were assessed with the CWT and with a universal goniometer. One-way model intraclass correlation coefficient analysis indicated high test-retest reliability for extension (ICC = 0.92) and moderate reliability for flexion (ICC = 0.49). The standard error for extension was 2.45° and for flexion was 4.07°. Repeated-measures analysis revealed a significant main effect for group; ROM was greater in the control group (F[1, 28] = 47.35; P < .001). The concurrent validity of the CWT was partially supported. The results indicate that the CWT may provide highly reliable scores for dynamic wrist extension ROM, and moderately reliable scores for flexion, in people recovering from a distal radius fracture. N/A. Copyright © 2016 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
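The one-way intraclass correlation used in this record can be computed from a simple ANOVA decomposition. A minimal sketch (the ICC(1,1) form; the function and variable names are ours, not from the paper):

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n subjects x k sessions) array:
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    subject_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    # between-subject and within-subject mean squares
    msb = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect test-retest agreement gives ICC = 1; session-to-session noise that is large relative to between-subject spread (as for the flexion measurements here) pulls the ICC down.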

  5. An experimentally validated network of nine haematopoietic transcription factors reveals mechanisms of cell state stability

    PubMed Central

    Schütte, Judith; Wang, Huange; Antoniou, Stella; Jarratt, Andrew; Wilson, Nicola K; Riepsaame, Joey; Calero-Nieto, Fernando J; Moignard, Victoria; Basilico, Silvia; Kinston, Sarah J; Hannah, Rebecca L; Chan, Mun Chiang; Nürnberg, Sylvia T; Ouwehand, Willem H; Bonzanni, Nicola; de Bruijn, Marella FTR; Göttgens, Berthold

    2016-01-01

    Transcription factor (TF) networks determine cell-type identity by establishing and maintaining lineage-specific expression profiles, yet reconstruction of mammalian regulatory network models has been hampered by a lack of comprehensive functional validation of regulatory interactions. Here, we report comprehensive ChIP-Seq, transgenic and reporter gene experimental data that have allowed us to construct an experimentally validated regulatory network model for haematopoietic stem/progenitor cells (HSPCs). Model simulation coupled with subsequent experimental validation using single cell expression profiling revealed potential mechanisms for cell state stabilisation, and also how a leukaemogenic TF fusion protein perturbs key HSPC regulators. The approach presented here should help to improve our understanding of both normal physiological and disease processes. DOI: http://dx.doi.org/10.7554/eLife.11469.001 PMID:26901438

  6. Experimental demonstration of OSPF-TE extensions in multi-domain OBS networks connected by GMPLS network

    NASA Astrophysics Data System (ADS)

    Tian, Chunlei; Yin, Yawei; Wu, Jian; Lin, Jintong

    2008-11-01

    The interworking of Generalized Multi-Protocol Label Switching (GMPLS) and Optical Burst Switching (OBS) networks is an attractive architecture for the future IP/DWDM network. In this paper, OSPF-TE extensions for multi-domain Optical Burst Switching networks connected by a GMPLS-controlled WDM network are proposed, and the corresponding experimental results, such as the advertising latency, are presented using an OBS network testbed. The experimental results show that the extensions work effectively on OBS/GMPLS networks.

  7. On the Validity of Student Evaluation of Teaching: The State of the Art

    ERIC Educational Resources Information Center

    Spooren, Pieter; Brockx, Bert; Mortelmans, Dimitri

    2013-01-01

    This article provides an extensive overview of the recent literature on student evaluation of teaching (SET) in higher education. The review is based on the SET meta-validation model, drawing upon research reports published in peer-reviewed journals since 2000. Through the lens of validity, we consider both the more traditional research themes in…

  8. In Silico Mining for Antimalarial Structure-Activity Knowledge and Discovery of Novel Antimalarial Curcuminoids.

    PubMed

    Viira, Birgit; Gendron, Thibault; Lanfranchi, Don Antoine; Cojean, Sandrine; Horvath, Dragos; Marcou, Gilles; Varnek, Alexandre; Maes, Louis; Maran, Uko; Loiseau, Philippe M; Davioud-Charvet, Elisabeth

    2016-06-29

    Malaria is a parasitic tropical disease that kills around 600,000 patients every year. The emergence of Plasmodium falciparum parasites resistant to artemisinin-based combination therapies (ACTs) represents a significant public health threat, indicating the urgent need for new effective compounds to reverse ACT resistance and cure the disease. For this, extensive curation and homogenization of experimental anti-Plasmodium screening data from both in-house and ChEMBL sources were conducted. As a result, a coherent strategy was established for compiling training sets that associate compound structures with the respective antimalarial activity measurements. Seventeen of these training sets led to the successful generation of classification models discriminating whether a compound has a significant probability of being active under the specific conditions of the antimalarial test associated with each set. These models were used in consensus prediction of the most likely actives from a series of curcuminoids available in-house. Positive predictions, together with a few compounds predicted as inactive, were then submitted to experimental in vitro antimalarial testing. A large majority of the compounds predicted as active showed antimalarial activity, whereas those predicted as inactive did not, thus experimentally validating the in silico screening approach. The herein proposed consensus machine learning approach showed its potential to reduce the cost and duration of antimalarial drug discovery.
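The consensus step can be pictured as a majority vote over the per-model activity calls. This is our illustrative reading of "consensus prediction", not the authors' exact aggregation rule:

```python
import numpy as np

def consensus_predict(votes, threshold=0.5):
    """votes: (n_models x n_compounds) array of 0/1 activity calls.
    A compound is flagged active when the fraction of models voting
    'active' exceeds the threshold (strict majority by default)."""
    votes = np.asarray(votes, dtype=float)
    return votes.mean(axis=0) > threshold
```

With 17 per-assay models, compounds flagged by most models would be prioritized for in vitro testing, as in the screening workflow described above.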

  9. Improving diffuse optical tomography with structural a priori from fluorescence diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Ma, Wenjuan; Gao, Feng; Duan, Linjing; Zhu, Qingzhen; Wang, Xin; Zhang, Wei; Wu, Linhui; Yi, Xi; Zhao, Huijuan

    2012-03-01

    We obtain absorption and scattering reconstructed images by incorporating a priori information on target location, obtained from fluorescence diffuse optical tomography (FDOT), into diffuse optical tomography (DOT). The main disadvantage of DOT lies in the low spatial resolution resulting from the highly scattering nature of tissue in the near-infrared (NIR), but it can be used to monitor hemoglobin concentration and oxygen saturation simultaneously, as well as several other chromophores such as water, lipids, and cytochrome-c-oxidase. To date, extensive effort has been made to integrate DOT with other imaging modalities, such as MRI and CT, to obtain accurate optical property maps of the tissue. However, the experimental apparatus is intricate. In this study, a DOT image reconstruction algorithm that incorporates a priori structural information provided by FDOT is investigated in an attempt to optimize recovery of a simulated optical property distribution. By use of a specifically designed multi-channel time-correlated single photon counting system, the proposed scheme in a transmission mode is experimentally validated to achieve simultaneous reconstruction of the fluorescent yield, lifetime, absorption and scattering coefficient. The experimental results demonstrate that the quantitative recovery of the tumor optical properties is doubled and the spatial resolution improves as well when applying the new improved method.

  10. Model validations for low-global warming potential refrigerants in mini-split air-conditioning units

    DOE PAGES

    Shen, Bo; Shrestha, Som; Abdelaziz, Omar

    2016-09-02

    To identify low-GWP (global warming potential) refrigerants to replace R-22 and R-410A, extensive experimental evaluations were conducted for multiple refrigerant candidates at the standard test conditions and at high-ambient conditions with outdoor temperature varying from 27.8 °C to 55.0 °C. In the study, R-22 was compared to propane (R-290), DR-3, ARM-20B, N-20B and R-444B in a mini-split air conditioning unit originally designed for R-22; R-410A was compared to R-32, DR-55, ARM-71A and L41-2 (R-447A) in a mini-split unit designed for R-410A. To reveal the physics behind the measured performance results, thermodynamic properties of the alternative refrigerants were analysed. In addition, the experimental data were used to calibrate a physics-based equipment model, the ORNL Heat Pump Design Model (HPDM). The calibrated model translated the experimental results into key calculated parameters, i.e. compressor efficiencies and refrigerant-side two-phase heat transfer coefficients, corresponding to each refrigerant. As a result, these calculated values provide scientific insights into the performance of the alternative refrigerants and are useful for other applications beyond mini-split air conditioning units.

  11. Nonlinear Poisson Equation for Heterogeneous Media

    PubMed Central

    Hu, Langhua; Wei, Guo-Wei

    2012-01-01

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarization in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intense charges and possibly nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by the solvation analysis of a set of 17 small molecules for which experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937
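Schematically, the classical model solves the linear equation with a constant permittivity, while the nonlinear generalization lets the permittivity respond to the local field. One generic, hedged form (our illustration, not necessarily the authors' exact functional) is

```latex
-\nabla \cdot \big( \varepsilon(\mathbf{r}, |\nabla\phi|)\, \nabla\phi \big) \;=\; \rho(\mathbf{r}),
```

which reduces to the standard Poisson equation when the permittivity is independent of the field strength.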

  12. Model validations for low-global warming potential refrigerants in mini-split air-conditioning units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Bo; Shrestha, Som; Abdelaziz, Omar

    To identify low GWP (global warming potential) refrigerants to replace R-22 and R-410A, extensive experimental evaluations were conducted for multiple candidates of refrigerant at the standard test conditions and at high-ambient conditions with outdoor temperature varying from 27.8 C to 55.0 C.. In the study, R-22 was compared to propane (R-290), DR-3, ARM-20B, N-20B and R-444B in a mini-split air conditioning unit originally designed for R-22; R-410A was compared to R-32, DR-55, ARM-71A, L41-2 (R-447A) in a mini-split unit designed for R-410A. To reveal physics behind the measured performance results, thermodynamic properties of the alternative refrigerants were analysed. In addition,more » the experimental data was used to calibrate a physics-based equipment model, i.e. ORNL Heat Pump Design Model (HPDM). The calibrated model translated the experimental results to key calculated parameters, i.e. compressor efficiencies, refrigerant side two-phase heat transfer coefficients, corresponding to each refrigerant. As a result, these calculated values provide scientific insights on the performance of the alternative refrigerants and are useful for other applications beyond mini-split air conditioning units.« less

  13. Assessing the stretch-blow moulding FE simulation of PET over a large process window

    NASA Astrophysics Data System (ADS)

    Nixon, J.; Menary, G. H.; Yan, S.

    2017-10-01

    Injection stretch blow moulding has been extensively researched for many years and is a well-established method of forming thin-walled containers. This paper is concerned with validating the finite element analysis of the stretch-blow-moulding (SBM) process in an effort to progress the development of injection stretch blow moulding of poly(ethylene terephthalate). Extensive data was obtained experimentally over a wide process window accounting for material temperature, air flow rate and stretch-rod speed while capturing cavity pressure, stretch-rod reaction force, in-mould contact timing and material thickness distribution. This data was then used to assess the accuracy of the corresponding FE simulation constructed using the ABAQUS/Explicit solver and an appropriate user-defined viscoelastic material subroutine. Results reveal that the simulation was able to pick up the general trends of how the pressure, reaction force and in-mould contact timings vary with the variation in preform temperature and air flow rate. Trends in material thickness were also accurately predicted over the length of the bottle relative to the process conditions. The knowledge gained from these analyses provides insight into the mechanisms of bottle formation, subsequently improving the blow moulding simulation and potentially providing a reduction in production costs.

  14. Pinning and gas oversaturation imply stable single surface nanobubbles.

    PubMed

    Lohse, Detlef; Zhang, Xuehua

    2015-03-01

    Surface nanobubbles are experimentally known to survive for days at hydrophobic surfaces immersed in gas-oversaturated water. This is different from bulk nanobubbles, which are pressed out by the Laplace pressure against any gas oversaturation and dissolve in submilliseconds, as derived by Epstein and Plesset [J. Chem. Phys. 18, 1505 (1950)]. Pinning of the contact line has been speculated to be the reason for the stability of the surface nanobubbles. Building on an exact result by Popov [Phys. Rev. E 71, 036313 (2005)] on coffee stain evaporation, here we confirm this speculation by an exact calculation for single surface nanobubbles. It is based only on (i) the diffusion equation, (ii) Laplace pressure, and (iii) Henry's equation, i.e., fluid dynamical equations which are all known to be valid down to the nanometer scale. The crucial parameter is the gas oversaturation ζ of the liquid. At the stable equilibrium, the gas overpressures due to this oversaturation and the Laplace pressure balance. The theory predicts how the contact angle of the pinned bubble depends on ζ and the surface nanobubble's footprint lateral extension L. It also predicts an upper lateral extension threshold for stable surface nanobubbles to exist.
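Writing σ for the surface tension and P0 for the ambient pressure, the stability result sketched above takes the form (our transcription; symbols follow common usage in this literature)

```latex
\sin\theta_e \;=\; \zeta\,\frac{L}{L_c}, \qquad L_c \equiv \frac{4\sigma}{P_0},
```

so a stable pinned bubble requires ζL ≤ L_c, which is the upper threshold on the lateral extension L mentioned in the abstract.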

  15. On-road and wind-tunnel measurement of motorcycle helmet noise.

    PubMed

    Kennedy, J; Carley, M; Walker, I; Holt, N

    2013-09-01

    The noise source mechanisms involved in motorcycling include various aerodynamic sources and engine noise. The problem of noise source identification requires extensive data acquisition of a type and level that have not previously been applied. Data acquisition on track and on road is problematic due to rider safety constraints and the portability of appropriate instrumentation. One way to address this problem is the use of data from wind tunnel tests. The validity of these measurements for noise source identification must first be demonstrated. To achieve this, extensive wind tunnel tests were conducted and compared with the results from on-track measurements. Sound pressure levels as a function of speed were compared between on-track and wind tunnel tests and were found to be comparable. Spectral conditioning techniques were applied to separate engine and wind tunnel noise from aerodynamic noise and showed that the aerodynamic components were equivalent in both cases. The spectral conditioning of on-track data showed that the contribution of engine noise to the overall noise is a function of speed and is more significant than had previously been thought. These procedures form a basis for accurate experimental measurements of motorcycle noise.

  16. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    PubMed Central

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  17. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    PubMed

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  18. Aberration hubs in protein interaction networks highlight actionable targets in cancer.

    PubMed

    Karimzadeh, Mehran; Jandaghi, Pouria; Papadakis, Andreas I; Trainor, Sebastian; Rung, Johan; Gonzàlez-Porta, Mar; Scelo, Ghislaine; Vasudev, Naveen S; Brazma, Alvis; Huang, Sidong; Banks, Rosamonde E; Lathrop, Mark; Najafabadi, Hamed S; Riazalhosseini, Yasser

    2018-05-18

    Despite efforts for extensive molecular characterization of cancer patients, such as those of the International Cancer Genome Consortium (ICGC) and The Cancer Genome Atlas (TCGA), the heterogeneous nature of cancer and our limited knowledge of the contextual function of proteins have complicated the identification of targetable genes. Here, we present Aberration Hub Analysis for Cancer (AbHAC) as a novel integrative approach to pinpoint aberration hubs, i.e. individual proteins that interact extensively with genes that show aberrant mutation or expression. Our analysis of the breast cancer data of the TCGA and the renal cancer data from the ICGC shows that aberration hubs are involved in relevant cancer pathways, including factors promoting cell cycle and DNA replication in basal-like breast tumors, and Src kinase and VEGF signaling in renal carcinoma. Moreover, our analysis uncovers novel functionally relevant and actionable targets, among which we have experimentally validated abnormal splicing of spleen tyrosine kinase as a key factor for cell proliferation in renal cancer. Thus, AbHAC provides an effective strategy to uncover novel disease factors that are only identifiable by examining mutational and expression data in the context of biological networks.

  19. Modeling functional Magnetic Resonance Imaging (fMRI) experimental variables in the Ontology of Experimental Variables and Values (OoEVV)

    PubMed Central

    Burns, Gully A.P.C.; Turner, Jessica A.

    2015-01-01

    Neuroimaging data is raw material for cognitive neuroscience experiments, leading to scientific knowledge about human neurological and psychological disease, language, perception, attention and ultimately, cognition. The structure of the variables used in the experimental design defines the structure of the data gathered in the experiments; this in turn structures the interpretative assertions that may be presented as experimental conclusions. Representing these assertions and the experimental data which support them in a computable way means that they could be used in logical reasoning environments, i.e. for automated meta-analyses, or linking hypotheses and results across different levels of neuroscientific experiments. Therefore, a crucial first step in being able to represent neuroimaging results in a clear, computable way is to develop representations for the scientific variables involved in neuroimaging experiments. These representations should be expressive, computable, valid, extensible, and easy-to-use. They should also leverage existing semantic standards to interoperate easily with other systems. We present an ontology design pattern called the Ontology of Experimental Variables and Values (OoEVV). This is designed to provide a lightweight framework to capture mathematical properties of data, with appropriate ‘hooks’ to permit linkage to other ontology-driven projects (such as the Ontology of Biomedical Investigations, OBI). We instantiate the OoEVV system with a small number of functional Magnetic Resonance Imaging datasets, to demonstrate the system’s ability to describe the variables of a neuroimaging experiment. OoEVV is designed to be compatible with the XCEDE neuroimaging data standard for data collection terminology, and with the Cognitive Paradigm Ontology (CogPO) for specific reasoning elements of neuroimaging experimental designs. PMID:23684873

  20. [CRITERION-RELATED VALIDITY OF SIT-AND-REACH TEST AS A MEASURE OF HAMSTRING EXTENSIBILITY IN OLDER WOMEN].

    PubMed

    López-Miñarro, Pedro Ángel; Vaquero-Cristóbal, Raquel; Muyor, José María; Espejo-Antúnez, Luis

    2015-07-01

    Lumbo-sacral posture and the sit-and-reach score have been proposed as measures of hamstring extensibility; however, their validity is influenced by sample characteristics. The aim was to determine the validity of the lumbo-horizontal angle and the score in the sit-and-reach test as measures of hamstring extensibility in older women. A hundred and twenty older women performed the straight leg raise test with both legs, and the sit-and-reach test (SR), in random order. For the sit-and-reach test, the score and the lumbo-sacral posture in bending (lumbo-horizontal angle, L-Hfx) were measured. The mean values of the straight leg raise for the left and right legs were 81.70 ± 13.83º and 82.10 ± 14.36º, respectively; the mean value over both legs was 81.90 ± 12.70º. The mean values of the SR score and L-Hfx were -1.54 ± 8.09 cm and 91.08º ± 9.32º, respectively. The correlations of the mean straight leg raise with the lumbo-sacral posture and with the SR score were moderate (L-Hfx: r = -0.72, p < 0.01; SR: r = 0.70, p < 0.01). Each variable independently explained about 50% of the variance (L-Hfx: R2 = 0.52, p < 0.001; SR: R2 = 0.49, p < 0.001). The validity of the lumbo-sacral posture in bending as a measure of hamstring extensibility in older women is moderate, with values similar to those of the SR score. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  1. Validation of a Radiosensitivity Molecular Signature in Breast Cancer

    PubMed Central

    Eschrich, Steven A.; Fulp, William J.; Pawitan, Yudi; Foekens, John A.; Smid, Marcel; Martens, John W. M.; Echevarria, Michelle; Kamath, Vidya; Lee, Ji-Hyun; Harris, Eleanor E.; Bergh, Jonas; Torres-Roca, Javier F.

    2014-01-01

    Purpose Previously, we developed a radiosensitivity molecular signature (RSI) that was clinically validated in three independent datasets (rectal, esophageal, head and neck) in 118 patients. Here, we test RSI in radiotherapy (RT) treated breast cancer patients. Experimental Design RSI was tested in two previously published breast cancer datasets. Patients were treated at the Karolinska University Hospital (n=159) and Erasmus Medical Center (n=344). RSI was applied as previously described. Results We tested RSI in RT-treated patients (Karolinska). Patients predicted to be radiosensitive (RS) had an improved 5-year relapse-free survival when compared with radioresistant (RR) patients (95% vs. 75%, p=0.0212), but there was no difference between RS/RR patients treated without RT (71% vs. 77%, p=0.6744), consistent with RSI being RT-specific (interaction term RSIxRT, p=0.05). Similarly, in the Erasmus dataset RT-treated RS patients had an improved 5-year distant-metastasis-free survival over RR patients (77% vs. 64%, p=0.0409), but no difference was observed in patients treated without RT (RS vs. RR, 80% vs. 81%, p=0.9425). Multivariable analysis showed RSI is the strongest variable in RT-treated patients (Karolinska: HR=5.53, p=0.0987; Erasmus: HR=1.64, p=0.0758), and in backward selection (removal alpha of 0.10) RSI was the only variable remaining in the final model. Finally, RSI is an independent predictor of outcome in RT-treated ER+ patients (Erasmus, multivariable analysis, HR=2.64, p=0.0085). Conclusions RSI is validated in two independent breast cancer datasets totaling 503 patients. Including prior data, RSI is validated in five independent cohorts (621 patients) and represents, to our knowledge, the most extensively validated molecular signature in radiation oncology. PMID:22832933

  2. Reliability, Validity, and Usability of Data Extraction Programs for Single-Case Research Designs.

    PubMed

    Moeyaert, Mariola; Maggin, Daniel; Verkuilen, Jay

    2016-11-01

    Single-case experimental designs (SCEDs) have been increasingly used in recent years to inform the development and validation of effective interventions in the behavioral sciences. An important aspect of this work has been the extension of meta-analytic and other statistical innovations to SCED data. Standard practice within SCED methods is to display data graphically, which requires subsequent users to extract the data, either manually or using data extraction programs. Previous research has examined the reliability and validity of data extraction programs, but typically at an aggregate level. Little is known, however, about the coding of individual data points. We focused on four different software programs that can be used for this purpose (i.e., Ungraph, DataThief, WebPlotDigitizer, and XYit), and examined the reliability of numeric coding, the validity compared with real data, and overall program usability. This study indicates that the reliability and validity of the retrieved data are independent of the specific software program, but are dependent on the individual single-case study graphs. Differences were found in program usability in terms of user friendliness, data retrieval time, and license costs. Ungraph and WebPlotDigitizer received the highest usability scores. DataThief was perceived as unacceptable and the time needed to retrieve the data was double that of the other three programs. WebPlotDigitizer was the only program free to use. As a consequence, WebPlotDigitizer turned out to be the best option in terms of usability, time to retrieve the data, and costs, although the usability scores of Ungraph were also strong. © The Author(s) 2016.
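
    A validity check of the kind described here amounts to comparing graph-extracted values against the real data points. A minimal sketch of one such comparison (a plain Pearson correlation; the numbers are invented, and published studies typically use richer agreement statistics):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between extracted and reference data points."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical example: values read off a published graph vs. the real data
real      = [2.0, 3.5, 5.0, 4.0, 6.5]
extracted = [2.1, 3.4, 5.1, 3.9, 6.4]
r = pearson_r(real, extracted)   # close to 1.0 for faithful extraction
```

A high correlation indicates faithful extraction at the aggregate level; per-point agreement (the gap this study targets) would additionally require looking at individual residuals.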

  3. MEMHDX: an interactive tool to expedite the statistical validation and visualization of large HDX-MS datasets.

    PubMed

    Hourdel, Véronique; Volant, Stevenn; O'Brien, Darragh P; Chenal, Alexandre; Chamot-Rooke, Julia; Dillies, Marie-Agnès; Brier, Sébastien

    2016-11-15

    With the continued improvement of requisite mass spectrometers and UHPLC systems, Hydrogen/Deuterium eXchange Mass Spectrometry (HDX-MS) workflows are rapidly evolving towards the investigation of more challenging biological systems, including large protein complexes and membrane proteins. The analysis of such extensive systems results in very large HDX-MS datasets for which specific analysis tools are required to speed up data validation and interpretation. We introduce a web application and a new R package named 'MEMHDX' to help users analyze, validate and visualize large HDX-MS datasets. MEMHDX is composed of two elements. A statistical tool aids in the validation of the results by applying a mixed-effects model for each peptide, in each experimental condition, and at each time point, taking into account the time dependency of the HDX reaction and the number of independent replicates. Two adjusted P-values are generated per peptide, one for the 'Change in dynamics' and one for the 'Magnitude of ΔD', and are used to classify the data by means of a 'Logit' representation. A user-friendly interface developed with Shiny by RStudio facilitates the use of the package. This interactive tool allows the user to easily and rapidly validate, visualize and compare the relative deuterium incorporation on the amino acid sequence and 3D structure, providing both spatial and temporal information. MEMHDX is freely available as a web tool at the project home page http://memhdx.c3bi.pasteur.fr. Contact: marie-agnes.dillies@pasteur.fr or sebastien.brier@pasteur.fr. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
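
    MEMHDX itself fits its per-peptide mixed-effects models in R. Purely as a sketch of the multiple-testing step that yields adjusted P-values for many peptides at once, here is a Benjamini-Hochberg (FDR) adjustment in Python; BH is a common choice for this kind of screen, assumed here for illustration rather than confirmed as MEMHDX's exact procedure:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values, one per input p-value."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    prev = 1.0
    # step-up procedure: walk from the largest rank down, enforcing monotonicity
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * n / rank)
        adjusted[i] = prev
    return adjusted

# Hypothetical raw per-peptide p-values for 'Change in dynamics'
p_dynamics = [0.001, 0.20, 0.03, 0.004]
q = benjamini_hochberg(p_dynamics)
significant = [qi < 0.05 for qi in q]
```

In MEMHDX's workflow the two adjusted P-values per peptide (dynamics and magnitude) are then combined to place each peptide in the 'Logit' classification plot.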

  4. CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave/Boundary-Layer Interaction

    NASA Technical Reports Server (NTRS)

    Davis, David O.

    2015-01-01

    Experimental investigations of specific flow phenomena, e.g., Shock-Wave/Boundary-Layer Interactions (SWBLI), provide great insight into the flow behavior but often lack the necessary details to be useful as CFD validation experiments. Reasons include: (1) undefined boundary conditions and inconsistent results; (2) undocumented 3D effects (centerline-only measurements); and (3) lack of uncertainty analysis. While there are a number of good subsonic experimental investigations that are sufficiently documented to be considered test cases for CFD and turbulence model validation, the number of supersonic and hypersonic cases is much smaller. This was highlighted by Settles and Dodson's [1] comprehensive review of available supersonic and hypersonic experimental studies. In all, several hundred studies were considered for their database. Of these, over a hundred were subjected to rigorous acceptance criteria. Based on their criteria, only 19 (12 supersonic, 7 hypersonic) were considered of sufficient quality to be used for validation purposes. Aeschliman and Oberkampf [2] recognized the need to develop a specific methodology for experimental studies intended specifically for validation purposes.

  5. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
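
    One of the checks named above is bit-for-bit evaluation of model output against reference data. A minimal stdlib sketch of the idea (generic code operating on in-memory output fields, not LIVVkit's actual API):

```python
def bit_for_bit(a, b):
    """Strict regression check: the two output fields must be exactly equal."""
    return len(a) == len(b) and all(x == y for x, y in zip(a, b))

def within_tolerance(a, b, tol=1e-12):
    """Relaxed check: maximum absolute difference must stay below tol."""
    return len(a) == len(b) and max(abs(x - y) for x, y in zip(a, b)) <= tol

# Hypothetical model output vs. reference run (e.g., temperatures in K)
reference = [273.15, 260.0]
candidate = [273.15, 260.0 + 1e-13]   # differs in the last bits only
```

A bit-for-bit failure combined with a pass under tolerance is the typical signature of a benign change (compiler flags, platform), which is why V&V suites usually report both.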

  6. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    DOE PAGES

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; ...

    2017-03-23

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.

  7. Pest management in traditional maize stores in West Africa: a farmer's perspective.

    PubMed

    Meikle, W G; Markham, R H; Nansen, C; Holst, N; Degbey, P; Azoma, K; Korie, S

    2002-10-01

    Farmers in the Republic of Benin have few resources to invest in protection of stored maize, and prophylactic pesticide application is often recommended by extension and development agencies. Neither the efficacy nor the profitability of such an application in traditional maize storage facilities has been addressed quantitatively. In this study, existing management options for stored maize were evaluated monthly over 6 mo in central and southern Benin with respect to their effects on grain injury and on densities of Prostephanus truncatus (Horn) and Sitophilus zeamais Motschulsky. P. truncatus infested 54% of the experimental stores in the study even though Teretrius nigrescens (Lewis), a natural enemy introduced against P. truncatus, was well established in the region. S. zeamais was the most common pest, found in 85% of the experimental storage facilities. Prophylactically treated maize was, on average, worth more than untreated maize for months 1 through 5 in southern Benin, after taking into account market price, pesticide costs, percentage grain damage and weight loss, but maize storage was not profitable overall. No difference was observed between treatments in central Benin. After 6 mo, treated storage facilities were not significantly different from untreated storage facilities in terms of either percentage damage or profit in either region. A rapid scouting plan intended to provide farmers with a means for identifying storage facilities at greatest risk of severe P. truncatus infestation was field-validated. Given that unsafe pesticide use is prevalent in Benin, research and extension services should clearly state the limitations of prophylactic treatment and increase the effort to educate farmers on appropriate pesticide use, store monitoring and marketing.

  8. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    NASA Astrophysics Data System (ADS)

    Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.

    2018-01-01

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead-time-correct higher order count rates (i.e., quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations, which have themselves only recently been formulated. The current paper presents an experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at Los Alamos National Laboratory (LANL), and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates encountered in practical applications.
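
    The DCF formalism itself is far too involved to reproduce here, but the basic idea of dead time correction can be illustrated with the classic non-paralyzable single-rate model, a textbook formula (not the DCF algorithm): the true rate n relates to the measured rate m and dead time tau via n = m / (1 - m·tau).

```python
def true_rate_nonparalyzable(measured_rate, dead_time):
    """Classic non-paralyzable dead time correction for a singles count rate.

    n = m / (1 - m * tau), where m is the measured rate (counts/s)
    and tau is the detector dead time (s).
    """
    loss = measured_rate * dead_time
    if loss >= 1.0:
        raise ValueError("measured rate saturates this detector model")
    return measured_rate / (1.0 - loss)

# e.g. 90 kHz measured with a 1 microsecond dead time -> ~98.9 kHz true rate
n = true_rate_nonparalyzable(90_000.0, 1e-6)
```

Multiplicity counting needs far more than this (correlated-event corrections up to fifth order, which is exactly what DCF supplies), but the snippet shows why measured rates understate true rates at high count rates.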

  9. 75 FR 53371 - Liquefied Natural Gas Facilities: Obtaining Approval of Alternative Vapor-Gas Dispersion Models

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-31

    ... factors as the approved models, are validated by experimental test data, and receive the Administrator's... stage of the MEP involves applying the model against a database of experimental test cases including..., particularly the requirement for validation by experimental test data. That guidance is based on the MEP's...

  10. Effects of Hot Streak and Phantom Cooling on Heat Transfer in a Cooled Turbine Stage Including Particulate Deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bons, Jeffrey; Ameri, Ali

    2016-01-08

    The objective of this research effort was to develop a validated computational modeling capability for the characterization of the effects of hot streaks and particulate deposition on the heat load of modern gas turbines. This was accomplished with a multi-faceted approach including analytical, experimental, and computational components. A 1-year no-cost extension request was approved for this effort, so the total duration was 4 years. The research effort succeeded in its ultimate objective by leveraging extensive experimental deposition studies complemented by computational modeling. Experiments were conducted with hot streaks, vane cooling, and combinations of hot streaks with vane cooling. These studies contributed to a significant body of corporate knowledge of deposition, in combination with particle rebound and deposition studies funded by other agencies, to provide suitable conditions for the development of a new model. The model includes the following physical phenomena: elastic deformation, plastic deformation, adhesion, and shear removal. It also incorporates material property sensitivity to temperature and tangential-normal velocity rebound cross-dependencies observed in experiments. The model is well-suited for incorporation in CFD simulations of complex gas turbine flows due to its algebraic (explicit) formulation. This report contains model predictions compared to coefficient of restitution data available in the open literature as well as deposition results from two different high temperature turbine deposition facilities. While the model comparisons with experiments are in many cases promising, several key aspects of particle deposition remain elusive. The simple phenomenological nature of the model allows for parametric dependencies to be evaluated in a straightforward manner. This effort also included the first-ever full turbine stage deposition model published in the open literature.
The simulations included hot streaks and simulated vane cooling. The new deposition model was implemented into the CFD model as a wall boundary condition, with various particle sizes investigated in the simulation. Simulations utilizing a steady mixing plane formulation and an unsteady sliding mesh were conducted and the flow solution of each was validated against experimental data. Results from each of these simulations, including impact and capture distributions and efficiencies, were compared and potential reasons for differences discussed in detail. The inclusion of a large range of particle sizes allowed investigation of trends with particle size, such as increased radial migration and reduced sticking efficiency at the larger particle sizes. The unsteady simulation predicted lower sticking efficiencies on the rotor blades than the mixing plane simulation for the majority of particle sizes. This is postulated to be due to the preservation of the hot streak and cool vane wake through the vane-rotor interface (which are smeared out circumferentially in the mixing-plane simulation). The results reported here represent the successful implementation of a novel deposition model into validated vane-rotor flow solutions that include a non-uniform inlet temperature profile and simulated vane cooling.

  11. DDML Schema Validation

    DTIC Science & Technology

    2016-02-08

    Front-matter excerpt (DDML Schema Validation, RCC 126-16, February 2016). Acronyms: DDML Data Display Markup Language; HUD heads-up display; IRIG Inter-Range Instrumentation Group; RCC Range Commanders Council; SVG Scalable Vector Graphics; T&E test and evaluation; TMATS Telemetry Attributes Transfer Standard; XML eXtensible Markup Language.

  12. The Resilience Scale for Adults: Construct Validity and Measurement in a Belgian Sample

    ERIC Educational Resources Information Center

    Hjemdal, Odin; Friborg, Oddgeir; Braun, Stephanie; Kempenaers, Chantal; Linkowski, Paul; Fossion, Pierre

    2011-01-01

    The Resilience Scale for Adults (RSA) was developed and has been extensively validated in Norwegian samples. The purpose of this study was to explore the construct validity of the Resilience Scale for Adults in a French-speaking Belgian sample and test measurement invariance between the Belgian and a Norwegian sample. A Belgian student sample (N =…

  13. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  14. Hohlraum-driven mid-Z (SiO2) double-shell implosions on the OMEGA laser facility and their scaling to NIF.

    PubMed

    Robey, H F; Amendt, P A; Milovich, J L; Park, H-S; Hamza, A V; Bono, M J

    2009-10-02

    High-convergence, hohlraum-driven implosions of double-shell capsules using mid-Z (SiO2) inner shells have been performed on the OMEGA laser facility [T. R. Boehly, Opt. Commun. 133, 495 (1997)]. These experiments provide an essential extension of the results of previous low-Z (CH) double-shell implosions [P. A. Amendt, Phys. Rev. Lett. 94, 065004 (2005)] to materials of higher density and atomic number. Analytic modeling, supported by highly resolved 2D numerical simulations, is used to account for the yield degradation due to interfacial atomic mixing. This extended experimental database from OMEGA enables a validation of the mix model, and provides a means for quantitatively assessing the prospects for high-Z double-shell implosions on the National Ignition Facility [Paisner, Laser Focus World 30, 75 (1994)].

  15. Bayesian decision and mixture models for AE monitoring of steel-concrete composite shear walls

    NASA Astrophysics Data System (ADS)

    Farhidzadeh, Alireza; Epackachi, Siamak; Salamone, Salvatore; Whittaker, Andrew S.

    2015-11-01

    This paper presents an approach based on an acoustic emission technique for the health monitoring of steel-concrete (SC) composite shear walls. SC composite walls consist of plain (unreinforced) concrete sandwiched between steel faceplates. Although SC construction has been studied extensively for nearly 20 years, little to no attention has been devoted to the development of structural health monitoring techniques for inspecting damage to the concrete behind the steel plates. In this work an unsupervised pattern recognition algorithm based on probability theory is proposed to assess the soundness of the concrete infill, and eventually provide a diagnosis of the SC wall’s health. The approach is validated through an experimental study on a large-scale SC shear wall subjected to displacement-controlled reversed cyclic loading.
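
    The unsupervised, probability-based pattern recognition described here clusters acoustic emission features probabilistically. As a generic illustration of that class of method (not the authors' algorithm or data), a two-component one-dimensional Gaussian mixture can be fit by expectation-maximization:

```python
import math

def em_gmm_1d(x, iters=200):
    """EM for a two-component 1-D Gaussian mixture.

    Returns (weights, means, variances). Crude initialisation: split
    the sorted data in half and use each half's mean.
    """
    x = sorted(x)
    half = len(x) // 2
    mu = [sum(x[:half]) / half, sum(x[half:]) / (len(x) - half)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for xi in x:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = max(sum(r[k] * (xi - mu[k]) ** 2
                             for r, xi in zip(resp, x)) / nk, 1e-6)
    return w, mu, var

# Hypothetical 1-D AE feature values from two damage states
features = [-0.2, -0.1, 0.0, 0.1, 0.2, 4.8, 4.9, 5.0, 5.1, 5.2]
weights, means, variances = em_gmm_1d(features)
```

In an AE monitoring context, each fitted component would correspond to a candidate damage mechanism, and the responsibilities give the Bayesian decision rule for assigning new hits.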

  16. Space vehicle acoustics prediction improvement for payloads. [space shuttle

    NASA Technical Reports Server (NTRS)

    Dandridge, R. E.

    1979-01-01

    The modal analysis method was extensively modified for the prediction of space vehicle noise reduction in the shuttle payload enclosure, and this program was adapted to the IBM 360 computer. The predicted noise reduction levels for two test cases were compared with experimental results to determine the validity of the analytical model for predicting space vehicle payload noise environments in the 10 Hz one-third octave band regime. The prediction approach for the two test cases generally gave reasonable magnitudes and trends when compared with the measured noise reduction spectra. The discrepancies in the predictions could be corrected primarily by improved modeling of the vehicle structural walls and of the enclosed acoustic space to obtain a more accurate assessment of normal modes. Techniques for improving and expanding the noise prediction for a payload environment are also suggested.

  17. Inner-shell Ionization With Relativistic Corrections By Electron Impact

    NASA Astrophysics Data System (ADS)

    Saha, Bidhan; Patoary, M. A. R.; Alfaz Uddin, M.; Haque, A. K. F.; Basak, Arun K.

    2007-06-01

    A simple method is proposed and tested by evaluating the electron impact inner-shell ionization cross sections of various targets up to the ultrahigh-energy region, where few calculations exist due to the lack of reliable methods. In this work we extend the validity of the siBED model [1] in terms of targets and incident energies. The extension of our earlier RQIBED model [2] is also reported here, and we examine its findings for the description of the experimental EIICS data of various targets up to E=1000 MeV. Details will be presented at the meeting. [1] W. M. Huo, Phys. Rev. A 64, 042719 (2001). [2] M. A. Uddin, A. K. F. Haque, M. S. Mahbub, K. R. Karim, A. K. Basak and B. C. Saha, Phys. Rev. A 71, 032715 (2005).

  18. Four experimental demonstrations of active vibration control for flexible structures

    NASA Technical Reports Server (NTRS)

    Phillips, Doug; Collins, Emmanuel G., Jr.

    1990-01-01

    Laboratory experiments designed to test prototype active-vibration-control systems under development for future flexible space structures are described, summarizing previously reported results. The control-synthesis technique employed for all four experiments was the maximum-entropy optimal-projection (MEOP) method (Bernstein and Hyland, 1988). Consideration is given to: (1) a pendulum experiment on large-amplitude LF dynamics; (2) a plate experiment on broadband vibration suppression in a two-dimensional structure; (3) a multiple-hexagon experiment combining the factors studied in (1) and (2) to simulate the complexity of a large space structure; and (4) the NASA Marshall ACES experiment on a lightweight deployable 45-foot beam. Extensive diagrams, drawings, graphs, and photographs are included. The results are shown to validate the MEOP design approach, demonstrating that good performance is achievable using relatively simple low-order decentralized controllers.

  19. Architecture Design and Experimental Platform Demonstration of Optical Network based on OpenFlow Protocol

    NASA Astrophysics Data System (ADS)

    Xing, Fangyuan; Wang, Honghuan; Yin, Hongxi; Li, Ming; Luo, Shenzi; Wu, Chenguang

    2016-02-01

    With the extensive application of cloud computing and data centres, as well as constantly emerging services, bursty big data traffic has brought huge challenges to optical networks. Consequently, the software defined optical network (SDON), which combines optical networks with software defined networking (SDN), has attracted much attention. In this paper, an OpenFlow-enabled optical node for use in optical cross-connects (OXC) and reconfigurable optical add/drop multiplexers (ROADM) is proposed. An open-source OpenFlow controller is extended with routing strategies. In addition, an experimental platform based on the OpenFlow protocol for software defined optical networks is designed. The feasibility and availability of the OpenFlow-enabled optical nodes and the extended OpenFlow controller are validated by connectivity tests, protection switching and load balancing experiments on this test platform.

  20. Experimental investigation of an RNA sequence space

    NASA Technical Reports Server (NTRS)

    Lee, Youn-Hyung; Dsouza, Lisa; Fox, George E.

    1993-01-01

    Modern rRNAs are the historic consequence of an ongoing evolutionary exploration of a sequence space. These extant sequences belong to a special subset of the sequence space that is comprised only of those primary sequences that can validly perform the biological function(s) required of the particular RNA. If it were possible to readily identify all such valid sequences, stochastic predictions could be made about the relative likelihood of various evolutionary pathways available to an RNA. Herein an experimental system which can assess whether a particular sequence is likely to have validity as a eubacterial 5S rRNA is described. A total of ten naturally occurring, and hence known to be valid, sequences and two point mutants of unknown validity were used to test the usefulness of the approach. Nine of the ten valid sequences tested positive whereas both mutants tested as clearly defective. The tenth valid sequence gave results that would be interpreted as reflecting a borderline status were the answer not known. These results demonstrate that it is possible to experimentally determine which sequences in local regions of the sequence space are potentially valid 5S rRNAs.

  1. Extension, validation and application of the NASCAP code

    NASA Technical Reports Server (NTRS)

    Katz, I.; Cassidy, J. J., III; Mandell, M. J.; Schnuelle, G. W.; Steen, P. G.; Parks, D. E.; Rotenberg, M.; Alexander, J. H.

    1979-01-01

    Numerous extensions were made to the NASCAP code. They fall into three categories: a greater range of definable objects, a more sophisticated computational model, and simplified code structure and usage. An important validation of NASCAP was performed using a new two-dimensional computer code (TWOD). An interactive code (MATCHG) was written to compare material parameter inputs with charging results. The first major application of NASCAP was performed on the SCATHA satellite. Shadowing and charging calculations were completed. NASCAP was installed at the Air Force Geophysics Laboratory, where researchers plan to use it to interpret SCATHA data.

  2. Validating Farmers' Indigenous Social Networks for Local Seed Supply in Central Rift Valley of Ethiopia.

    ERIC Educational Resources Information Center

    Seboka, B.; Deressa, A.

    2000-01-01

    Indigenous social networks of Ethiopian farmers participate in seed exchange based on mutual interdependence and trust. A government-imposed extension program must validate the role of local seed systems in developing a national seed industry. (SK)

  3. Experimental Validation and Combustion Modeling of a JP-8 Surrogate in a Single Cylinder Diesel Engine

    DTIC Science & Technology

    2014-04-15

    Report documentation excerpt: Amit Shrestha, Umashankar Joshi, Ziliang Zheng, Tamer Badawy, Naeim A. Henein (Wayne State University, Detroit, MI, USA); report dated 13-03-2014. Title: Experimental Validation and Combustion Modeling of a JP-8 Surrogate in a Single Cylinder Diesel Engine. Stated objective: validate a two-component JP-8 surrogate in a single cylinder diesel engine; validation parameters include ignition delay.

  4. Validation of High-Fidelity Reactor Physics Models for Support of the KJRR Experimental Campaign in the Advanced Test Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigg, David W.; Nielsen, Joseph W.; Norman, Daren R.

    The Korea Atomic Energy Research Institute is currently in the process of qualifying a Low-Enriched Uranium fuel element design for the new Ki-Jang Research Reactor (KJRR). As part of this effort, a prototype KJRR fuel element was irradiated for several operating cycles in the Northeast Flux Trap of the Advanced Test Reactor (ATR) at the Idaho National Laboratory. The KJRR fuel element contained a very large quantity of fissile material (618g 235U) in comparison with historical ATR experiment standards (<1g 235U), and its presence in the ATR flux trap was expected to create a neutronic configuration that would be well outside of the approved validation envelope for the reactor physics analysis methods used to support ATR operations. Accordingly it was necessary, prior to high-power irradiation of the KJRR fuel element in the ATR, to conduct an extensive set of new low-power physics measurements with the KJRR fuel element installed in the ATR Critical Facility (ATRC), a companion facility to the ATR that is located in an immediately adjacent building, sharing the same fuel handling and storage canal. The new measurements had the objective of expanding the validation envelope for the computational reactor physics tools used to support ATR operations and safety analysis to include the planned KJRR irradiation in the ATR and similar experiments that are anticipated in the future. The computational and experimental results demonstrated that the neutronic behavior of the KJRR fuel element in the ATRC is well-understood, both in terms of its general effects on core excess reactivity and fission power distributions, its effects on the calibration of the core lobe power measurement system, as well as in terms of its own internal fission rate distribution and total fission power per unit ATRC core power.
Taken as a whole, these results have significantly extended the ATR physics validation envelope, thereby enabling an entirely new class of irradiation experiments.

  5. Structures and mechanism of dipeptidyl peptidases 8 and 9, important players in cellular homeostasis and cancer.

    PubMed

    Ross, Breyan; Krapp, Stephan; Augustin, Martin; Kierfersauer, Reiner; Arciniega, Marcelino; Geiss-Friedlander, Ruth; Huber, Robert

    2018-02-13

    Dipeptidyl peptidases 8 and 9 are intracellular N-terminal dipeptidyl peptidases (preferentially postproline) associated with pathophysiological roles in immune response and cancer biology. While the DPP family member DPP4 is extensively characterized in molecular terms as a validated therapeutic target of type II diabetes, experimental 3D structures and ligand-/substrate-binding modes of DPP8 and DPP9 have not been reported. In this study we describe crystal and molecular structures of human DPP8 (2.5 Å) and DPP9 (3.0 Å) unliganded and complexed with a noncanonical substrate and a small molecule inhibitor, respectively. Similar to DPP4, DPP8 and DPP9 molecules consist of one β-propeller and α/β hydrolase domain, forming a functional homodimer. However, they differ extensively in the ligand binding site structure. In intriguing contrast to DPP4, where liganded and unliganded forms are closely similar, ligand binding to DPP8/9 induces an extensive rearrangement at the active site through a disorder-order transition of a 26-residue loop segment, which partially folds into an α-helix (R-helix), including R160/133, a key residue for substrate binding. As vestiges of this helix are also seen in one of the copies of the unliganded form, conformational selection may contribute to ligand binding. Molecular dynamics simulations support increased flexibility of the R-helix in the unliganded state. Consistently, enzyme kinetics assays reveal a cooperative allosteric mechanism. DPP8 and DPP9 are closely similar and display few opportunities for targeted ligand design. However, extensive differences from DPP4 provide multiple cues for specific inhibitor design and development of the DPP family members as therapeutic targets or antitargets.

  6. Investigation of anisotropic thermal transport in cross-linked polymers

    NASA Astrophysics Data System (ADS)

    Simavilla, David Nieto

    Thermal transport in lightly cross-linked polyisoprene and polybutadiene subjected to uniaxial elongation is investigated experimentally. We employ two experimental techniques to assess the effect that deformation has on this class of materials. The first technique, which is based on Forced Rayleigh Scattering (FRS), allows us to measure the two independent components of the thermal diffusivity tensor as a function of deformation. These measurements, along with independent measurements of the tensile stress and birefringence, are used to evaluate the stress-thermal and stress-optic rules. The stress-thermal rule is found to be valid for the entire range of elongations applied. In contrast, the stress-optic rule fails for moderate to large stretch ratios. This suggests that the degree of anisotropy in thermal conductivity depends on both orientation and tension in polymer chain segments. The second technique, which is based on infrared thermography (IRT), allows us to measure anisotropy in thermal conductivity and strain-induced changes in heat capacity. We validate this method's measurements of anisotropic thermal conductivity by comparing them with those obtained using FRS. We find excellent agreement between the two techniques. Uncertainty in the infrared thermography measurements is estimated to be about 2-5%. The accuracy of the method and its potential application to non-transparent materials make it a good alternative for extending current research on anisotropic thermal transport in polymeric materials. A second IRT application allows us to investigate the dependence of heat capacity on deformation. We find that heat capacity increases with stretch ratio in polyisoprene specimens under uniaxial extension. The deviation from the equilibrium value of heat capacity is consistent with an independent set of experiments comparing anisotropy in thermal diffusivity and conductivity employing FRS and IRT techniques.
    We identify finite extensibility and strain-induced crystallization as possible causes explaining our observations and evaluate their contribution making use of classical rubber elasticity results. Finally, we study the role of evaporation-induced thermal effects in the well-known phenomenon of tears of wine. We develop a transport model and support its predictions by experimentally measuring the temperature gradient present in wine and cognac films using IRT. Our results demonstrate that the Marangoni flow responsible for wine tears results from both composition and temperature gradients, whose relative contributions strongly depend on the thermodynamic properties of ethanol-water mixtures. The methods developed here can be used to obtain a deeper understanding of Marangoni flows, which are ubiquitous in nature and modern technology.

  7. An extension of the Lighthill theory of jet noise to encompass refraction and shielding

    NASA Technical Reports Server (NTRS)

    Ribner, Herbert S.

    1995-01-01

    A formalism for jet noise prediction is derived that includes the refractive 'cone of silence' and other effects; outside the cone it approximates the simple Lighthill format. A key step is deferral of the simplifying assumption of uniform density in the dominant 'source' term. The result is conversion to a convected wave equation retaining the basic Lighthill source term. The main effect is to amend the Lighthill solution to allow for refraction by mean flow gradients, achieved via a frequency-dependent directional factor. A general formula for the power spectral density emitted from unit volume is developed as the Lighthill-based value multiplied by a squared 'normalized' Green's function (the directional factor), referred to a stationary point source. The convective motion of the sources, with its powerful amplifying effect, also directional, is already accounted for in the Lighthill format: wave convection and source convection are decoupled. The normalized Green's function appears to be near unity outside the refraction-dominated 'cone of silence'; this validates our long-term practice of using Lighthill-based approaches outside the cone, with extension inside via the Green's function. The function is obtained either experimentally (injected 'point' source) or numerically (computational aeroacoustics). Approximation by unity seems adequate except near the cone and except when there are shrouding jets: in that case the difference from unity quantifies the shielding effect. Further extension yields dipole and monopole source terms (cf. Morfey, Mani, and others) when the mean flow possesses density gradients (e.g., hot jets).

  8. Validated Predictions of Metabolic Energy Consumption for Submaximal Effort Movement

    PubMed Central

    Tsianos, George A.; MacFadden, Lisa N.

    2016-01-01

    Physical performance emerges from complex interactions among many physiological systems that are largely driven by the metabolic energy demanded. Quantifying metabolic demand is an essential step for revealing the many mechanisms of physical performance decrement, but accurate predictive models do not exist. The goal of this study was to investigate if a recently developed model of muscle energetics and force could be extended to reproduce the kinematics, kinetics, and metabolic demand of submaximal effort movement. Upright dynamic knee extension against various levels of ergometer load was simulated. Task energetics were estimated by combining the model of muscle contraction with validated models of lower limb musculotendon paths and segment dynamics. A genetic algorithm was used to compute the muscle excitations that reproduced the movement with the lowest energetic cost, which was determined to be an appropriate criterion for this task. Model predictions of oxygen uptake rate (VO2) were well within experimental variability for the range over which the model parameters were confidently known. The model's accurate estimates of metabolic demand make it useful for assessing the likelihood and severity of physical performance decrement for a given task as well as investigating underlying physiologic mechanisms. PMID:27248429
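    The optimization step described above, searching for muscle excitations that reproduce the movement at the lowest energetic cost, can be illustrated with a generic real-valued genetic algorithm. The sketch below is a toy stand-in under stated assumptions: population size, operators, bounds, and the quadratic test cost are all invented, not the study's actual setup.

```python
import random

def genetic_minimize(cost, n_params, pop=30, gens=100, seed=0):
    """Toy real-valued GA: elitism, blend crossover, Gaussian mutation.
    Illustrative only; the paper's excitation optimization is far richer."""
    rng = random.Random(seed)
    population = [[rng.uniform(0, 1) for _ in range(n_params)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=cost)
        elite = scored[: pop // 5]                     # keep the best 20%
        children = list(elite)
        while len(children) < pop:
            a, b = rng.sample(elite, 2)                # parents from the elite
            child = [(x + y) / 2 + rng.gauss(0, 0.05) for x, y in zip(a, b)]
            children.append([min(1.0, max(0.0, v)) for v in child])  # clamp to [0, 1]
        population = children
    return min(population, key=cost)
```

With excitations bounded to [0, 1], a call like `genetic_minimize(energy_model, n_muscles)` would return the lowest-cost excitation vector found; `energy_model` here is a hypothetical placeholder for a metabolic-cost evaluation.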

  9. Discriminative Prediction of A-To-I RNA Editing Events from DNA Sequence

    PubMed Central

    Sun, Jiangming; Singh, Pratibha; Bagge, Annika; Valtat, Bérengère; Vikman, Petter; Spégel, Peter; Mulder, Hindrik

    2016-01-01

    RNA editing is a post-transcriptional alteration of RNA sequences that, via insertions, deletions or base substitutions, can affect protein structure as well as RNA and protein expression. Recently, it has been suggested that RNA editing may be more frequent than previously thought. A great impediment, however, to a deeper understanding of this process is the paramount sequencing effort that needs to be undertaken to identify RNA editing events. Here, we describe an in silico approach, based on machine learning, that ameliorates this problem. Using 41-nucleotide-long DNA sequences, we show that novel A-to-I RNA editing events can be predicted from known A-to-I RNA editing events intra- and interspecies. The validity of the proposed method was verified in an independent experimental dataset. Using our approach, 203,202 putative A-to-I RNA editing events were predicted in the whole human genome. Of these, 9% were previously reported. The remaining sites require further validation, e.g., by targeted deep sequencing. In conclusion, the approach described here is a useful tool to identify potential A-to-I RNA editing events without the requirement of extensive RNA sequencing. PMID:27764195
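    The 41-nucleotide window is the only concrete input the abstract specifies. A common encoding for such a window, assumed here rather than stated by the authors, is one-hot over the four bases, centered on the candidate adenosine:

```python
def one_hot_window(seq):
    """One-hot encode a 41-nt DNA window centered on the candidate adenosine.
    Returns a flat 41*4 feature vector suitable for a generic classifier.
    The study's actual feature set and model are not reproduced here."""
    s = seq.upper()
    assert len(s) == 41 and s[20] == "A", "window must be 41 nt, centered on an A"
    table = {"A": (1, 0, 0, 0), "C": (0, 1, 0, 0),
             "G": (0, 0, 1, 0), "T": (0, 0, 0, 1)}
    return [bit for base in s for bit in table[base]]
```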

  10. Applications of a damage tolerance analysis methodology in aircraft design and production

    NASA Technical Reports Server (NTRS)

    Woodward, M. R.; Owens, S. D.; Law, G. E.; Mignery, L. A.

    1992-01-01

    Objectives of customer-mandated aircraft structural integrity initiatives in design are to guide material selection, to incorporate fracture-resistant concepts in the design, and to utilize damage tolerance based allowables and planned inspection procedures necessary to enhance the safety and reliability of manned flight vehicles. However, validated fracture analysis tools for composite structures are needed to accomplish these objectives in a timely and economical manner. This paper briefly describes the development, validation, and application of a damage tolerance methodology for composite airframe structures. A closed-form analysis code, entitled SUBLAM, was developed to predict the critical biaxial strain state necessary to cause sublaminate buckling-induced delamination extension in an impact-damaged composite laminate. An embedded elliptical delamination separating a thin sublaminate from a thick parent laminate is modelled. Predicted failure strains were correlated against a variety of experimental data that included results from compression-after-impact coupon and element tests. An integrated analysis package was developed to predict damage tolerance based margins of safety (MS) using NASTRAN-generated loads and element information. Damage tolerance aspects of new concepts are quickly and cost-effectively determined without the need for excessive testing.
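    The margin-of-safety bookkeeping mentioned above follows the classical structural definition, MS = allowable/applied - 1. The sketch below shows only that calculation, not SUBLAM's buckling analysis, and the strain values in the usage note are invented.

```python
def margin_of_safety(allowable_strain, applied_strain):
    """Classical margin of safety: MS = allowable / applied - 1.
    MS >= 0 means the element passes; a negative MS flags a deficiency."""
    return allowable_strain / applied_strain - 1.0
```

For example, an allowable strain of 4500 microstrain against an applied 3000 microstrain gives MS = 0.5, i.e. a 50% margin.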

  11. Electro-thermal analysis of Lithium Iron Phosphate battery for electric vehicles

    NASA Astrophysics Data System (ADS)

    Saw, L. H.; Somasundaram, K.; Ye, Y.; Tay, A. A. O.

    2014-03-01

    Lithium ion batteries offer an attractive solution for powering electric vehicles due to their relatively high specific energy and specific power, however, the temperature of the batteries greatly affects their performance as well as cycle life. In this work, an empirical equation characterizing the battery's electrical behavior is coupled with a lumped thermal model to analyze the electrical and thermal behavior of the 18650 Lithium Iron Phosphate cell. Under constant current discharging mode, the cell temperature increases with increasing charge/discharge rates. The dynamic behavior of the battery is also analyzed under a Simplified Federal Urban Driving Schedule and it is found that heat generated from the battery during this cycle is negligible. Simulation results are validated with experimental data. The validated single cell model is then extended to study the dynamic behavior of an electric vehicle battery pack. The modeling results predict that more heat is generated on an aggressive US06 driving cycle as compared to UDDS and HWFET cycle. An extensive thermal management system is needed for the electric vehicle battery pack especially during aggressive driving conditions to ensure that the cells are maintained within the desirable operating limits and temperature uniformity is achieved between the cells.
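    A lumped thermal model like the one coupled to the electrical model above treats the cell as a single thermal mass exchanging heat with the ambient. The sketch below is a generic forward-Euler version with invented parameter values, not the paper's calibrated 18650 model.

```python
def simulate_cell_temp(q_gen_w, t_amb=25.0, h_a=0.05, m_cp=40.0, dt=1.0):
    """Lumped thermal model of a single cell, forward-Euler integrated:
        m*cp * dT/dt = Q_gen - h*A * (T - T_amb)
    q_gen_w: heat generation per time step [W]; h_a: h*A [W/K];
    m_cp: thermal mass [J/K]; dt: step [s]. All values illustrative."""
    temps, t = [], t_amb
    for q in q_gen_w:
        t += dt * (q - h_a * (t - t_amb)) / m_cp
        temps.append(t)
    return temps
```

With a constant heat input, the simulated temperature rises toward the steady state T_amb + Q/(h*A), which is the qualitative behavior the paper analyzes under constant-current discharge.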

  12. A Philosophical Perspective on Construct Validation: Application of Inductive Logic to the Analysis of Experimental Episode Construct Validity.

    ERIC Educational Resources Information Center

    Rossi, Robert Joseph

    Methods drawn from four logical theories associated with studies of inductive processes are applied to the assessment and evaluation of experimental episode construct validity. It is shown that this application provides for estimates of episode informativeness with respect to the person examined in terms of the construct and to the construct…

  13. The Structured Assessment of Violence Risk in Adults with Intellectual Disability: A Systematic Review.

    PubMed

    Hounsome, J; Whittington, R; Brown, A; Greenhill, B; McGuire, J

    2018-01-01

    While structured professional judgement approaches to assessing and managing the risk of violence have been extensively examined in mental health/forensic settings, the application of the findings to people with an intellectual disability is less extensively researched and reviewed. This review aimed to assess whether risk assessment tools have adequate predictive validity for violence in adults with an intellectual disability. Standard systematic review methodology was used to identify and synthesize appropriate studies. A total of 14 studies were identified as meeting the inclusion criteria. These studies assessed the predictive validity of 18 different risk assessment tools, mainly in forensic settings. All studies concluded that the tools assessed were successful in predicting violence. Studies were generally of a high quality. There is good-quality evidence that risk assessment tools are valid for people with intellectual disability who offend, but further research is required to validate individual tools for use with this population. © 2016 John Wiley & Sons Ltd.

  14. Expanding the Nomological Net of the Pathological Narcissism Inventory: German Validation and Extension in a Clinical Inpatient Sample.

    PubMed

    Morf, Carolyn C; Schürch, Eva; Küfner, Albrecht; Siegrist, Philip; Vater, Aline; Back, Mitja; Mestel, Robert; Schröder-Abé, Michela

    2017-06-01

    The Pathological Narcissism Inventory (PNI) is a multidimensional measure for assessing grandiose and vulnerable features in narcissistic pathology. The aim of the present research was to construct and validate a German translation of the PNI and to provide further information on the PNI's nomological net. Findings from a first study confirm the psychometric soundness of the PNI and replicate its seven-factor first-order structure. A second-order structure was also supported, but with several equivalent models. A second study investigating associations with a broad range of measures (DSM Axis I and II constructs, emotions, personality traits, interpersonal and dysfunctional behaviors, and well-being) supported the concurrent validity of the PNI. Discriminant validity with the Narcissistic Personality Inventory was also shown. Finally, in a third study, an extension in a clinical inpatient sample provided further evidence that the PNI is a useful tool to assess the more pathological end of narcissism.

  15. A Decision Tree for Nonmetric Sex Assessment from the Skull.

    PubMed

    Langley, Natalie R; Dudzik, Beatrix; Cloutier, Alesia

    2018-01-01

    This study uses five well-documented cranial nonmetric traits (glabella, mastoid process, mental eminence, supraorbital margin, and nuchal crest) and one additional trait (zygomatic extension) to develop a validated decision tree for sex assessment. The decision tree was built and cross-validated on a sample of 293 U.S. White individuals from the William M. Bass Donated Skeletal Collection. Ordinal scores from the six traits were analyzed using the partition modeling option in JMP Pro 12. A holdout sample of 50 skulls was used to test the model. The most accurate decision tree includes three variables: glabella, zygomatic extension, and mastoid process. This decision tree yielded 93.5% accuracy on the training sample, 94% on the cross-validated sample, and 96% on a holdout validation sample. Linear weighted kappa statistics indicate acceptable agreement among observers for these variables. Mental eminence should be avoided, and definitions and figures should be referenced carefully to score nonmetric traits. © 2017 American Academy of Forensic Sciences.
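    As a rough illustration of how a three-variable decision tree like this is applied in practice, the cascade below scores the three retained traits on an ordinal 1-5 scale. The cut points are invented for illustration; the study's actual tree was fitted with JMP Pro's partition modeling and is not reproduced here.

```python
def assess_sex(glabella, zygomatic_extension, mastoid_process):
    """Hypothetical three-trait cascade over ordinal scores (1-5, higher =
    more robust / male-typical). Thresholds are illustrative only."""
    if glabella >= 4:                 # strongly projecting glabella
        return "male"
    if glabella <= 2:                 # gracile glabella
        return "female"
    # Intermediate glabella: defer to the other two traits.
    if zygomatic_extension >= 4:
        return "male"
    return "male" if mastoid_process >= 4 else "female"
```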

  16. Ensuring Data Quality in Extension Research and Evaluation Studies

    ERIC Educational Resources Information Center

    Radhakrishna, Rama; Tobin, Daniel; Brennan, Mark; Thomson, Joan

    2012-01-01

    This article presents a checklist as a guide for Extension professionals to use in research and evaluation studies they carry out. A total of 40 statements grouped under eight data quality components--relevance, objectivity, validity, reliability, integrity, generalizability, completeness, and utility--are identified to ensure that research…

  17. Leadership Development Seminar: Developing Human Capital through Extension Leadership Programs. Proceedings (Manhattan, Kansas, August 6, 1989).

    ERIC Educational Resources Information Center

    Bolton, Elizabeth B.; White, Lynn

    Nineteen papers are included in this document: "Potential and Impact: Assessment and Validation in Leadership Development" (Boatman); "Using an Organizational Diagnostic Instrument to Analyze Perceptions of the Virginia Extension Homemakers Council" (Newhouse, Chandler, Tuckwiller); "Image: Who Needs It?" (Hendricks,…

  18. If Anything Can Go Wrong, Maybe It Will.

    ERIC Educational Resources Information Center

    Wager, Jane C.; Rayner, Gail T.

    Thirty personnel involved in various stages of the Training Extension Course (TEC) design, development, and distribution process were interviewed by telephone to determine the major problems perceived within each stage of the program, which provides validated extension training wherever U.S. soldiers are stationed. Those interviewed included TEC…

  19. Effects of a Stretching Development and Maintenance Program on Hamstring Extensibility in Schoolchildren: A Cluster-Randomized Controlled Trial

    PubMed Central

    Mayorga-Vega, Daniel; Merino-Marban, Rafael; Manzano-Lagunas, Jorge; Blanco, Humberto; Viciana, Jesús

    2016-01-01

    The main purpose of the present study was to examine the effects of a physical education-based stretching development and maintenance program on hamstring extensibility in schoolchildren. A sample of 150 schoolchildren aged 7-10 years old from a primary school participated in the present study (140 participants were finally included). The six classes, balanced by grade, were cluster randomly assigned to the experimental group 1 (n = 51), experimental group 2 (n = 51) or control group (n = 49) (i.e., a cluster-randomized controlled trial design was used). During the physical education classes, the students from the experimental groups 1 and 2 performed a four-minute stretching program twice a week for nine weeks (first semester). Then, after a five-week period of detraining coinciding with the Christmas holidays, the students from the experimental groups 1 and 2 completed another stretching program twice a week for eleven weeks (second semester). The students from the experimental group 1 continued performing the stretching program for four minutes, while those from the experimental group 2 completed a flexibility maintenance program for only one minute. The results of the two-way analysis of variance showed that the physical education-based stretching development program significantly improved the students’ hamstring extensibility (p < 0.001) and that these gains were maintained after the stretching maintenance program (p < 0.001). Additionally, statistically significant differences between the two experimental groups were not found (p > 0.05). After a short-term stretching development program, a physical education-based stretching maintenance program of only one-minute sessions twice a week is effective in maintaining hamstring extensibility among schoolchildren. This knowledge could help and guide teachers to design programs that allow a feasible and effective development and maintenance of students’ flexibility in the physical education setting.
    Key points: A physical education-based stretching maintenance program of only one-minute sessions twice a week is effective in maintaining hamstring extensibility among schoolchildren. A four-minute maintenance program shows similar effects to the one-minute maintenance program on hamstring extensibility among schoolchildren. Physical education teachers and other practitioners could carry out one-minute programs for a feasible and effective maintenance of students’ flexibility. PMID:26957928

  20. Additional Evidence for the Reliability and Validity of the Student Risk Screening Scale at the High School Level: A Replication and Extension

    ERIC Educational Resources Information Center

    Lane, Kathleen Lynne; Oakes, Wendy P.; Ennis, Robin Parks; Cox, Meredith Lucille; Schatschneider, Christopher; Lambert, Warren

    2013-01-01

    This study reports findings from a validation study of the Student Risk Screening Scale for use with 9th- through 12th-grade students (N = 1854) attending a rural fringe school. Results indicated high internal consistency, test-retest stability, and inter-rater reliability. Predictive validity was established across two academic years, with Spring…

  1. Validity of Factors of the Psychopathy Checklist-Revised in Female Prisoners: Discriminant Relations with Antisocial Behavior, Substance Abuse, and Personality

    ERIC Educational Resources Information Center

    Kennealy, Patrick J.; Hicks, Brian M.; Patrick, Christopher J.

    2007-01-01

    The validity of the Psychopathy Checklist-Revised (PCL-R) has been examined extensively in men, but its validity for women remains understudied. Specifically, the correlates of the general construct of psychopathy and its components as assessed by PCL-R total, factor, and facet scores have yet to be examined in depth. Based on previous research…

  2. An Extension Convergent Validity Study of the "Systematic Screening for Behavior Disorders" and the Achenbach "Teacher's Report Form" with Middle and High School Students with Emotional Disturbances

    ERIC Educational Resources Information Center

    Benner, Gregory J.; Uhing, Brad M.; Pierce, Corey D.; Beaudoin, Kathleen M.; Ralston, Nicole C.; Mooney, Paul

    2009-01-01

    We sought to extend instrument validation research for the Systematic Screening for Behavior Disorders (SSBD) (Walker & Severson, 1990) using convergent validation techniques. Associations between Critical Events, Adaptive Behavior, and Maladaptive Behavior indices of the SSBD were examined in relation to syndrome, broadband, and total scores…

  3. Experimental Modal Analysis and Dynamic Strain Fiber Bragg Gratings for Structural Health Monitoring of Composite Aerospace Structures

    NASA Astrophysics Data System (ADS)

    Panopoulou, A.; Fransen, S.; Gomez Molinero, V.; Kostopoulos, V.

    2012-07-01

    The objective of this work is to develop a new structural health monitoring system for composite aerospace structures based on dynamic response strain measurements and experimental modal analysis techniques. Fibre Bragg Grating (FBG) optical sensors were used for monitoring the dynamic response of the composite structure. The structural dynamic behaviour was numerically simulated and experimentally verified by means of vibration testing. The hypothesis of all vibration tests was that actual damage in composites reduces their stiffness and produces the same effect as a mass increase. Thus, damage was simulated by slightly varying the mass of the structure locally at different zones. Experimental modal analysis based on the strain responses was conducted, and the extracted strain mode shapes were the input to the damage detection expert system. A feed-forward back-propagation neural network was the core of the damage detection system; its input features consisted of the strain mode shapes extracted from the experimental modal analysis. Dedicated training and validation activities were carried out based on the experimental results. The system showed high reliability, confirmed by the ability of the neural network to recognize the size and the position of damage on the structure. The experiments were performed on a real structure, i.e., a lightweight antenna sub-reflector, manufactured and tested at EADS CASA ESPACIO. An integrated FBG sensor network, exploiting the advantage of multiplexing, was mounted on the structure with an optimum topology. Numerical simulation of both structures was used as a support tool at all steps of the work. Potential applications of the proposed system are during extensive ground qualification tests of space structures and, during the mission, as an on-board modal analysis tool able to identify a potential failure via the FBG responses.
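    To make the damage-detection core concrete, the toy script below trains a one-hidden-layer feed-forward network by plain backpropagation on synthetic data. Layer sizes, the data, and the labels are invented stand-ins for the paper's strain mode shapes and damage classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: each row is a "strain mode shape" feature vector and
# the binary label encodes a (hypothetical) damage condition.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=(8,))
y = (X @ w_true > 0).astype(float).reshape(-1, 1)

# One-hidden-layer feed-forward net trained with backpropagation.
W1 = rng.normal(scale=0.5, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    p = sigmoid(h @ W2 + b2)                 # output probability
    dz2 = (p - y) / len(X)                   # cross-entropy gradient at output
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T * (1 - h ** 2)           # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2)
accuracy = float(((p > 0.5) == (y > 0.5)).mean())
```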

  4. Reliability and validity of the Performance Recorder 1 for measuring isometric knee flexor and extensor strength.

    PubMed

    Neil, Sarah E; Myring, Alec; Peeters, Mon Jef; Pirie, Ian; Jacobs, Rachel; Hunt, Michael A; Garland, S Jayne; Campbell, Kristin L

    2013-11-01

    Muscular strength is a key parameter of rehabilitation programs and a strong predictor of functional capacity. Traditional methods to measure strength, such as manual muscle testing (MMT) and hand-held dynamometry (HHD), are limited by the strength and experience of the tester. The Performance Recorder 1 (PR1) is a strength assessment tool attached to resistance training equipment and may be a time- and cost-effective tool to measure strength in clinical practice that overcomes some limitations of MMT and HHD. However, the reliability and validity of the PR1 have not been reported. Test-retest and inter-rater reliability were assessed using the PR1 in healthy adults (n = 15) during isometric knee flexion and extension. Criterion-related validity was assessed through comparison of values obtained from the PR1 and a Biodex® isokinetic dynamometer. Test-retest reliability was excellent for peak knee flexion (intra-class correlation coefficient [ICC] of 0.96, 95% CI: 0.85, 0.99) and knee extension (ICC = 0.96, 95% CI: 0.87, 0.99). Inter-rater reliability was also excellent for peak knee flexion (ICC = 0.95, 95% CI: 0.85, 0.99) and peak knee extension (ICC = 0.97, 95% CI: 0.91, 0.99). Validity was moderate for peak knee flexion (ICC = 0.75, 95% CI: 0.38, 0.92) but poor for peak knee extension (ICC = 0.37, 95% CI: 0, 0.73). The PR1 provides a reliable measure of isometric knee flexor and extensor strength in healthy adults that could be used in the clinical setting, but absolute values may not be comparable to strength assessment by gold-standard measures.
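    The reliability figures above are intraclass correlation coefficients. For readers reproducing this kind of analysis, an ICC(2,1) (two-way random effects, absolute agreement, single measurement) can be computed directly from ANOVA mean squares; whether the authors used this exact ICC form is an assumption.

```python
import numpy as np

def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    data: (n_subjects, k_sessions_or_raters) array of scores."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    subj = data.mean(axis=1)     # per-subject means
    sess = data.mean(axis=0)     # per-session (or per-rater) means
    msr = k * np.sum((subj - grand) ** 2) / (n - 1)    # between-subjects MS
    msc = n * np.sum((sess - grand) ** 2) / (k - 1)    # between-sessions MS
    sse = np.sum((data - subj[:, None] - sess[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                    # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfectly repeated measurements give an ICC of 1; measurement noise and systematic session differences both pull the value down.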

  5. Validation of a photography-based goniometry method for measuring joint range of motion.

    PubMed

    Blonna, Davide; Zarkadas, Peter C; Fitzsimmons, James S; O'Driscoll, Shawn W

    2012-01-01

    A critical component of evaluating the outcomes after surgery to restore lost elbow motion is the range of motion (ROM) of the elbow. This study examined if digital photography-based goniometry is as accurate and reliable as clinical goniometry for measuring elbow ROM. Instrument validity and reliability for photography-based goniometry were evaluated for a consecutive series of 50 elbow contractures by 4 observers with different levels of elbow experience. Goniometric ROM measurements were taken with the elbows in full extension and full flexion directly in the clinic (once) and from digital photographs (twice in a blinded random manner). Instrument validity for photography-based goniometry was extremely high (intraclass correlation coefficient: extension = 0.98, flexion = 0.96). For extension and flexion measurements by the expert surgeon, systematic error was negligible (0° and 1°, respectively). Limits of agreement were 7° (95% confidence interval [CI], 5° to 9°) and -7° (95% CI, -5° to -9°) for extension and 8° (95% CI, 6° to 10°) and -7° (95% CI, -5° to -9°) for flexion. Interobserver reliability for photography-based goniometry was better than that for clinical goniometry. The least experienced observer's photographic goniometry measurements were closer to the reference measurements than the clinical goniometry measurements. Photography-based goniometry is accurate and reliable for measuring elbow ROM. The photography-based method relied less on observer expertise than clinical goniometry. This validates an objective measure of patient outcome without requiring doctor-patient contact at a tertiary care center, where most contracture surgeries are done. Copyright © 2012 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
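    The "limits of agreement" reported above are Bland-Altman limits: the mean paired difference (bias) plus or minus 1.96 standard deviations of the differences. A minimal version of that computation follows; that the authors used exactly this estimator is an assumption.

```python
import numpy as np

def limits_of_agreement(method_a, method_b):
    """Bland-Altman 95% limits of agreement between paired measurements:
    bias +/- 1.96 * sd of the paired differences. Returns (lower, upper)."""
    d = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = d.mean()
    half = 1.96 * d.std(ddof=1)
    return bias - half, bias + half
```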

  6. Validation of the Leap Motion Controller using markered motion capture technology.

    PubMed

    Smeragliuolo, Anna H; Hill, N Jeremy; Disla, Luis; Putrino, David

    2016-06-14

    The Leap Motion Controller (LMC) is a low-cost, markerless motion capture device that tracks hand, wrist and forearm position. Integration of this technology into healthcare applications has begun to occur rapidly, making validation of the LMC's data output an important research goal. Here, we perform a detailed evaluation of the kinematic data output from the LMC, and validate this output against gold-standard, markered motion capture technology. We instructed subjects to perform three clinically relevant wrist (flexion/extension, radial/ulnar deviation) and forearm (pronation/supination) movements. The movements were simultaneously tracked using both the LMC and a marker-based motion capture system from Motion Analysis Corporation (MAC). Adjusting for known inconsistencies in the LMC sampling frequency, we compared simultaneously acquired LMC and MAC data by computing Pearson's correlation (r) and root mean square error (RMSE). Wrist flexion/extension and radial/ulnar deviation showed good overall agreement (r=0.95; RMSE=11.6°, and r=0.92; RMSE=12.4°, respectively) with the MAC system. However, when tracking forearm pronation/supination, there were serious inconsistencies in reported joint angles (r=0.79; RMSE=38.4°). Hand posture significantly influenced the quality of wrist deviation (P<0.005) and forearm supination/pronation (P<0.001), but not wrist flexion/extension (P=0.29). We conclude that the LMC is capable of providing data that are clinically meaningful for wrist flexion/extension, and perhaps wrist deviation. It cannot yet return clinically meaningful data for measuring forearm pronation/supination. Future studies should continue to validate the LMC as updated versions of its software are developed. Copyright © 2016 Elsevier Ltd. All rights reserved.
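    The agreement summary used above (Pearson's r plus RMSE on simultaneously sampled joint angles) is straightforward once the two traces are time-aligned. The sketch below assumes already-resampled, aligned angle arrays; any alignment and resampling step is omitted.

```python
import numpy as np

def agreement(reference_deg, device_deg):
    """Pearson's r and RMSE between a gold-standard trace and a device trace.
    Inputs: 1-D arrays of joint angles (degrees) sampled at the same instants."""
    ref = np.asarray(reference_deg, dtype=float)
    dev = np.asarray(device_deg, dtype=float)
    r = np.corrcoef(ref, dev)[0, 1]                    # linear association
    rmse = float(np.sqrt(np.mean((ref - dev) ** 2)))   # average error magnitude
    return r, rmse
```

Note the two statistics answer different questions: a high r with a large RMSE (as for pronation/supination here) indicates the traces share shape but disagree in magnitude, e.g. a systematic offset or scaling error.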

  7. Manipulating glucocorticoids in wild animals: basic and applied perspectives

    PubMed Central

    Sopinka, Natalie M.; Patterson, Lucy D.; Redfern, Julia C.; Pleizier, Naomi K.; Belanger, Cassia B.; Midwood, Jon D.; Crossin, Glenn T.; Cooke, Steven J.

    2015-01-01

    One of the most comprehensively studied responses to stressors in vertebrates is the endogenous production and regulation of glucocorticoids (GCs). Extensive laboratory research using experimental elevation of GCs in model species is instrumental in learning about stressor-induced physiological and behavioural mechanisms; however, such studies fail to inform our understanding of ecological and evolutionary processes in the wild. We reviewed emerging research that has used GC manipulations in wild vertebrates to assess GC-mediated effects on survival, physiology, behaviour, reproduction and offspring quality. Within and across taxa, exogenous manipulation of GCs increased, decreased or had no effect on traits examined in the reviewed studies. The notable diversity in responses to GC manipulation could be associated with variation in experimental methods, inherent differences among species, morphs, sexes and age classes, and the ecological conditions in which responses were measured. In their current form, results from experimental studies may be applied to animal conservation on a case-by-case basis in contexts such as threshold-based management. We discuss ways to integrate mechanistic explanations for changes in animal abundance in altered environments with functional applications that inform conservation practitioners of which species and traits may be most responsive to environmental change or human disturbance. Experimental GC manipulation holds promise for determining mechanisms underlying fitness impairment and population declines. Future work in this area should examine multiple life-history traits, with consideration of individual variation and, most importantly, validation of GC manipulations within naturally occurring and physiologically relevant ranges. PMID:27293716

  8. Genome-wide identification of suitable zebrafish Danio rerio reference genes for normalization of gene expression data by RT-qPCR.

    PubMed

    Xu, H; Li, C; Zeng, Q; Agrawal, I; Zhu, X; Gong, Z

    2016-06-01

In this study, to systematically identify the most stably expressed genes for internal reference in zebrafish Danio rerio investigations, 37 D. rerio transcriptomic datasets (both RNA sequencing and microarray data) were collected from the Gene Expression Omnibus (GEO) database and unpublished data, and gene expression variations were analysed under three experimental conditions: tissue types, developmental stages and chemical treatments. Forty-four putative candidate genes with a coefficient of variation (c.v.) < 0.2 were identified from all datasets. Following clustering into different functional groups, 21 genes, in addition to four conventional housekeeping genes (eef1a1l1, b2m, hprt1l and actb1), were selected from different functional groups for further quantitative real-time PCR (RT-qPCR) validation using 25 RNA samples from different adult tissues, developmental stages and chemical treatments. The RT-qPCR data were then analysed using the statistical algorithm RefFinder for gene expression stability. Several new candidate genes showed better expression stability than the conventional housekeeping genes in all three categories. It was found that sep15 and metap1 were the top two stable genes for tissue types, ube2a and tmem50a the top two for different developmental stages, and rpl13a and rplp0 the top two for chemical treatments. Thus, based on the extensive transcriptomic analyses and RT-qPCR validation, these new reference genes are recommended for normalization of D. rerio RT-qPCR data for the three different experimental conditions, respectively. © 2016 The Fisheries Society of the British Isles.
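The first screening step described in this record, retaining genes whose expression varies little across samples, amounts to a coefficient-of-variation filter. A minimal sketch follows; the gene names and expression values are hypothetical, and this illustrates only the c.v. < 0.2 cutoff, not the authors' full pipeline or the RefFinder algorithm.

```python
import numpy as np

def coefficient_of_variation(expr):
    """c.v. = sample std / mean for each gene (rows) across samples (columns)."""
    expr = np.asarray(expr, dtype=float)
    return expr.std(axis=1, ddof=1) / expr.mean(axis=1)

def stable_genes(expr, names, cv_cutoff=0.2):
    """Return (name, c.v.) pairs below the cutoff, most stable first."""
    cv = coefficient_of_variation(expr)
    order = np.argsort(cv)
    return [(names[i], float(cv[i])) for i in order if cv[i] < cv_cutoff]

# toy expression matrix: 3 hypothetical genes x 4 samples
names = ["geneA", "geneB", "geneC"]
expr = [[10.0, 10.2, 9.9, 10.1],      # nearly constant -> low c.v.
        [5.0, 9.0, 2.0, 7.0],         # highly variable -> filtered out
        [100.0, 112.0, 95.0, 105.0]]  # moderately stable
print(stable_genes(expr, names))
```

Genes surviving the filter would then be passed to a stability-ranking tool such as RefFinder, which combines several ranking algorithms rather than using the c.v. alone.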

  9. Quantification of the transferability of a designed protein specificity switch reveals extensive epistasis in molecular recognition

    DOE PAGES

    Melero, Cristina; Ollikainen, Noah; Harwood, Ian; ...

    2014-10-13

Re-engineering protein–protein recognition is an important route to dissecting and controlling complex interaction networks. Experimental approaches have used the strategy of “second-site suppressors,” where a functional interaction is inferred between two proteins if a mutation in one protein can be compensated by a mutation in the second. Mimicking this strategy, computational design has been applied successfully to change protein recognition specificity by predicting such sets of compensatory mutations in protein–protein interfaces. To extend this approach, it would be advantageous to be able to “transplant” existing engineered and experimentally validated specificity changes to other homologous protein–protein complexes. Here, we test this strategy by designing a pair of mutations that modulates peptide recognition specificity in the Syntrophin PDZ domain, confirming the designed interaction biochemically and structurally, and then transplanting the mutations into the context of five related PDZ domain–peptide complexes. We find a wide range of energetic effects of identical mutations in structurally similar positions, revealing a dramatic context dependence (epistasis) of designed mutations in homologous protein–protein interactions. To better understand the structural basis of this context dependence, we apply a structure-based computational model that recapitulates these energetic effects and we use this model to make and validate forward predictions. While the context dependence of these mutations is captured by computational predictions, our results both highlight the considerable difficulties in designing protein–protein interactions and provide challenging benchmark cases for the development of improved protein modeling and design methods that accurately account for the context.

  10. Can the PHS model (ISO7933) predict reasonable thermophysiological responses while wearing protective clothing in hot environments?

    PubMed

    Wang, Faming; Kuklane, Kalev; Gao, Chuansi; Holmér, Ingvar

    2011-02-01

In this paper, the prediction accuracy of the PHS (predicted heat strain) model for human physiological responses while wearing protective clothing ensembles was examined. Six human subjects (aged 29 ± 3 years) underwent three experimental trials in three different protective garments (clothing thermal insulation I(cl) ranging from 0.63 to 2.01 clo) in two hot environments (40 °C, relative humidities: 30% and 45%). The observed and predicted mean skin temperature, core body temperature and sweat rate were presented and statistically compared. A significant difference was found in the metabolic rate between FIRE (firefighting clothing) and HV (high-visibility clothing) or MIL (military clothing) (p < 0.001). Also, the development of heart rate demonstrated the significant effects of the exposure time and clothing ensembles. In addition, the predicted evaporation rate for HV, MIL and FIRE was much lower than the experimental values. The results showed that the PHS model generated unreliable predictions of body core temperature when human subjects wore thick protective clothing such as firefighting clothing (I(cl) > 1.0 clo), and the predicted mean skin temperatures in the three clothing ensembles HV, MIL and FIRE were also outside the expected limits. Hence, the current PHS model is not applicable to protective clothing with intrinsic thermal insulation above 1.0 clo, and there is a need to extend the clothing insulation validation range of the model. It is recommended that the PHS model be amended and validated with individual algorithms, physical or physiological parameters, and further subject studies.

  11. Diffusive deposition of aerosols in Phebus containment during FPT-2 test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontautas, A.; Urbonavicius, E.

    2012-07-01

At present, lumped-parameter codes are the main tools to investigate the complex response of the containment of a Nuclear Power Plant in case of an accident. Continuous development and validation of the codes is required to perform realistic investigation of the processes that determine the possible source term of radioactive products to the environment. Validation of the codes is based on the comparison of the calculated results with the measurements performed in experimental facilities. The most extensive experimental program to investigate fission product release from the molten fuel, transport through the cooling circuit and deposition in the containment is performed in the PHEBUS test facility. Test FPT-2 performed in this facility is considered for analysis of processes taking place in the containment. Earlier investigations using the COCOSYS code showed that the code could be successfully used for analysis of thermal-hydraulic processes and deposition of aerosols, but it was also noticed that diffusive deposition on the vertical walls does not fit well with the measured results. The CPA module of the ASTEC code implements a different model for diffusive deposition; therefore, the PHEBUS containment model was transferred from COCOSYS to ASTEC-CPA to investigate the influence of the diffusive deposition modelling. Analysis was performed using a PHEBUS containment model of 16 nodes. The calculated thermal-hydraulic parameters are in good agreement with the measured results, which gives a basis for realistic simulation of aerosol transport and deposition processes. The performed investigations showed that the diffusive deposition model has an influence on the aerosol deposition distribution on different surfaces in the test facility. (authors)

  12. Characterization of the olfactory impact around a wastewater treatment plant: optimization and validation of a hydrogen sulfide determination procedure based on passive diffusion sampling.

    PubMed

    Colomer, Fernando Llavador; Espinós-Morató, Héctor; Iglesias, Enrique Mantilla; Pérez, Tatiana Gómez; Campos-Candel, Andreu; Lozano, Caterina Coll

    2012-08-01

A monitoring program based on an indirect method was conducted to assess the approximation of the olfactory impact in several wastewater treatment plants (in the present work, only one is shown). The method uses H2S passive sampling using Palmes-type diffusion tubes impregnated with silver nitrate and fluorometric analysis employing fluorescein mercuric acetate. The analytical procedure was validated in the exposure chamber. Exposure periods of at least 4 days are recommended. The quantification limit of the procedure is 0.61 ppb for a 5-day sampling, which allows the H2S immission (ground concentration) level to be measured within its low odor threshold, from 0.5 to 300 ppb. Experimental results suggest an exposure time greater than 4 days, while recovery efficiency of the procedure, 93.0 ± 1.8%, seems not to depend on the amount of H2S collected by the samplers within their application range. The repeatability, expressed as relative standard deviation, is lower than 7%, which is within the limits normally accepted for this type of sampler. Statistical comparison showed that this procedure and the reference method provide analogous accuracy. The proposed procedure was applied in two experimental campaigns, one intensive and the other extensive, and concentrations within the H2S low odor threshold were quantified at each sampling point. From these results, it can be concluded that the procedure shows good potential for monitoring the olfactory impact around facilities where H2S emissions are dominant.

  13. Characterization of the olfactory impact around a wastewater treatment plant: Optimization and validation of a hydrogen sulfide determination procedure based on passive diffusion sampling.

    PubMed

    Colomer, Fernando Llavador; Espinós-Morató, Héctor; Iglesias, Enrique Mantilla; Pérez, Tatiana Gómez; Campos-Candel, Andreu; Coll Lozano, Caterina

    2012-08-01

A monitoring program based on an indirect method was conducted to assess the approximation of the olfactory impact in several wastewater treatment plants (in the present work, only one is shown). The method uses H2S passive sampling using Palmes-type diffusion tubes impregnated with silver nitrate and fluorometric analysis employing fluorescein mercuric acetate. The analytical procedure was validated in the exposure chamber. Exposure periods of at least 4 days are recommended. The quantification limit of the procedure is 0.61 ppb for a 5-day sampling, which allows the H2S immission (ground concentration) level to be measured within its low odor threshold, from 0.5 to 300 ppb. Experimental results suggest an exposure time greater than 4 days, while recovery efficiency of the procedure, 93.0 ± 1.8%, seems not to depend on the amount of H2S collected by the samplers within their application range. The repeatability, expressed as relative standard deviation, is lower than 7%, which is within the limits normally accepted for this type of sampler. Statistical comparison showed that this procedure and the reference method provide analogous accuracy. The proposed procedure was applied in two experimental campaigns, one intensive and the other extensive, and concentrations within the H2S low odor threshold were quantified at each sampling point. From these results, it can be concluded that the procedure shows good potential for monitoring the olfactory impact around facilities where H2S emissions are dominant.

  14. Quantification of the transferability of a designed protein specificity switch reveals extensive epistasis in molecular recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melero, Cristina; Ollikainen, Noah; Harwood, Ian

Re-engineering protein–protein recognition is an important route to dissecting and controlling complex interaction networks. Experimental approaches have used the strategy of “second-site suppressors,” where a functional interaction is inferred between two proteins if a mutation in one protein can be compensated by a mutation in the second. Mimicking this strategy, computational design has been applied successfully to change protein recognition specificity by predicting such sets of compensatory mutations in protein–protein interfaces. To extend this approach, it would be advantageous to be able to “transplant” existing engineered and experimentally validated specificity changes to other homologous protein–protein complexes. Here, we test this strategy by designing a pair of mutations that modulates peptide recognition specificity in the Syntrophin PDZ domain, confirming the designed interaction biochemically and structurally, and then transplanting the mutations into the context of five related PDZ domain–peptide complexes. We find a wide range of energetic effects of identical mutations in structurally similar positions, revealing a dramatic context dependence (epistasis) of designed mutations in homologous protein–protein interactions. To better understand the structural basis of this context dependence, we apply a structure-based computational model that recapitulates these energetic effects and we use this model to make and validate forward predictions. While the context dependence of these mutations is captured by computational predictions, our results both highlight the considerable difficulties in designing protein–protein interactions and provide challenging benchmark cases for the development of improved protein modeling and design methods that accurately account for the context.

  15. Development and validation of a LC-MS/MS method for quantitation of fosfomycin - Application to in vitro antimicrobial resistance study using hollow-fiber infection model.

    PubMed

    Gandhi, Adarsh; Matta, Murali; Garimella, Narayana; Zere, Tesfalem; Weaver, James

    2018-06-01

Extensive use and misuse of antibiotics over the past 50 years has contributed to the emergence and spread of antibiotic-resistant bacterial strains, rendering them a global health concern. To address this issue, a dynamic in vitro hollow-fiber system, which mimics the in vivo environment more closely than the static model, was used to study the emergence of bacterial resistance of Escherichia coli against fosfomycin (FOS). To aid in this endeavor we developed and validated a liquid chromatography-tandem mass spectrometry (LC-MS/MS) assay for quantitative analysis of FOS in lysogeny broth. FOS was resolved on a Kinetex HILIC (2.1 × 50 mm, 2.6 μm) column with 2 mM ammonium acetate (pH 4.76) and acetonitrile as mobile phase within 3 min. Multiple reaction monitoring was used to acquire data on a triple quadrupole mass spectrometer. The assay was linear from 1 to 1000 μg/mL. Inter- and intra-assay precision was <15% and accuracy was within 85-115%. No significant matrix effect was observed when corrected with the internal standard. FOS was stable for up to 24 h at room temperature, up to three freeze-thaw cycles and up to 24 h when stored at 4°C in the autosampler. In vitro experimental data were similar to the simulated plasma pharmacokinetic data, further confirming the appropriateness of the experimental design to quantitate antibiotics and study occurrence of antimicrobial resistance in real time. The validated LC-MS/MS assay for quantitative determination of FOS in lysogeny broth will support antimicrobial drug resistance studies. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
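The acceptance criteria quoted in this record (precision <15% RSD, accuracy within 85-115% of nominal) are standard bioanalytical checks that can be sketched in a few lines. The replicate values below are hypothetical, chosen only to illustrate the arithmetic, not data from the study.

```python
import statistics

def precision_rsd(measured):
    """Intra-assay precision: relative standard deviation (%) of replicates."""
    return 100.0 * statistics.stdev(measured) / statistics.mean(measured)

def accuracy_pct(measured, nominal):
    """Accuracy: mean measured value as a percentage of the nominal level."""
    return 100.0 * statistics.mean(measured) / nominal

def passes_criteria(measured, nominal, rsd_limit=15.0, acc_range=(85.0, 115.0)):
    """Apply the acceptance criteria quoted in the abstract."""
    return (precision_rsd(measured) < rsd_limit
            and acc_range[0] <= accuracy_pct(measured, nominal) <= acc_range[1])

# hypothetical QC replicates at a nominal 100 ug/mL level
reps = [96.1, 102.3, 99.8, 104.0, 98.2]
print(passes_criteria(reps, 100.0))  # → True
```

In practice such checks are run at several QC concentration levels across the calibration range, on multiple days, before an assay is declared validated.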

  16. Can jurors recognize missing control groups, confounds, and experimenter bias in psychological science?

    PubMed

    McAuliff, Bradley D; Kovera, Margaret Bull; Nunez, Gabriel

    2009-06-01

    This study examined the ability of jury-eligible community members (N = 248) to detect internal validity threats in psychological science presented during a trial. Participants read a case summary in which an expert testified about a study that varied in internal validity (valid, missing control group, confound, and experimenter bias) and ecological validity (high, low). Ratings of expert evidence quality and expert credibility were higher for the valid versus missing control group versions only. Internal validity did not influence verdict or ratings of plaintiff credibility and no differences emerged as a function of ecological validity. Expert evidence quality, expert credibility, and plaintiff credibility were positively correlated with verdict. Implications for the scientific reasoning literature and for trials containing psychological science are discussed.

  17. Validation of WIND for a Series of Inlet Flows

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Abbott, John M.; Cavicchi, Richard H.

    2002-01-01

    Validation assessments compare WIND CFD simulations to experimental data for a series of inlet flows ranging in Mach number from low subsonic to hypersonic. The validation procedures follow the guidelines of the AIAA. The WIND code performs well in matching the available experimental data. The assessments demonstrate the use of WIND and provide confidence in its use for the analysis of aircraft inlets.

  18. Impact of model development, calibration and validation decisions on hydrological simulations in West Lake Erie Basin

    USDA-ARS?s Scientific Manuscript database

    Watershed simulation models are used extensively to investigate hydrologic processes, landuse and climate change impacts, pollutant load assessments and best management practices (BMPs). Developing, calibrating and validating these models require a number of critical decisions that will influence t...

  19. 77 FR 25469 - Applications for New Awards; Investing in Innovation Fund, Validation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-30

    ... DEPARTMENT OF EDUCATION Applications for New Awards; Investing in Innovation Fund, Validation... Innovation and Improvement, Department of Education. ACTION: Notice; extension of deadline date and correction. SUMMARY: On March 27, 2012, the Office of Innovation and Improvement in the U.S. Department of...

  20. Experimental Validation of a Thermoelastic Model for SMA Hybrid Composites

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    2001-01-01

    This study presents results from experimental validation of a recently developed model for predicting the thermomechanical behavior of shape memory alloy hybrid composite (SMAHC) structures, composite structures with an embedded SMA constituent. The model captures the material nonlinearity of the material system with temperature and is capable of modeling constrained, restrained, or free recovery behavior from experimental measurement of fundamental engineering properties. A brief description of the model and analysis procedures is given, followed by an overview of a parallel effort to fabricate and characterize the material system of SMAHC specimens. Static and dynamic experimental configurations for the SMAHC specimens are described and experimental results for thermal post-buckling and random response are presented. Excellent agreement is achieved between the measured and predicted results, fully validating the theoretical model for constrained recovery behavior of SMAHC structures.

  1. Non-destructive inspection in industrial equipment using robotic mobile manipulation

    NASA Astrophysics Data System (ADS)

    Maurtua, Iñaki; Susperregi, Loreto; Ansuategui, Ander; Fernández, Ane; Ibarguren, Aitor; Molina, Jorge; Tubio, Carlos; Villasante, Cristobal; Felsch, Torsten; Pérez, Carmen; Rodriguez, Jorge R.; Ghrissi, Meftah

    2016-05-01

The MAINBOT project has developed service-robot-based applications to autonomously execute inspection tasks in extensive industrial plants, on equipment that is arranged horizontally (using ground robots) or vertically (using climbing robots). The industrial objective has been to provide a means to help measure several physical parameters at multiple points by autonomous robots able to navigate and climb structures while handling non-destructive testing sensors. MAINBOT has validated the solutions in two solar thermal plants (cylindrical-parabolic collectors and central tower), which are very demanding from a mobile manipulation point of view, mainly due to their extension (e.g. a 50 MW thermal solar plant with 400 hectares, 400,000 mirrors, 180 km of absorber tubes and a 140 m tower), the variability of conditions (outdoor, day-night), safety requirements, etc. Once the technology was validated in simulation, the system was deployed in real setups and different validation tests were carried out. In this paper, two of the achievements related to the ground mobile inspection system are presented: (1) autonomous navigation, localization and planning algorithms to manage navigation over huge extensions and (2) non-destructive inspection operations: thermography-based detection algorithms to provide automatic inspection abilities to the robots.

  2. 42 CFR 137.426 - May an Indian Tribe get an extension of time to file a notice of appeal?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-GOVERNANCE Appeals Pre-Award Disputes § 137.426 May an Indian Tribe get an extension of time to file a notice... time period. If the Indian Tribe has a valid reason for not filing its notice of appeal on time, it may...

  3. Development and Validation of the Career Competencies Indicator (CCI)

    ERIC Educational Resources Information Center

    Francis-Smythe, Jan; Haase, Sandra; Thomas, Erica; Steele, Catherine

    2013-01-01

    This article describes the development and validation of the Career Competencies Indicator (CCI); a 43-item measure to assess career competencies (CCs). Following an extensive literature review, a comprehensive item generation process involving consultation with subject matter experts, a pilot study and a factor analytic study on a large sample…

  4. Using Evaluation to Guide and Validate Improvements to the Utah Master Naturalist Program

    ERIC Educational Resources Information Center

    Larese-Casanova, Mark

    2015-01-01

    Integrating evaluation into an Extension program offers multiple opportunities to understand program success through achieving program goals and objectives, delivering programming using the most effective techniques, and refining program audiences. It is less common that evaluation is used to guide and validate the effectiveness of program…

  5. The Open Curriculum and Selection of Qualified Staff: Instrument Validation.

    ERIC Educational Resources Information Center

    Greene, John F.; And Others

    The impact of open education on today's curriculum has been extensive. Of the many requests for research in this area, none is more important than instrument validation. This study examines the internal structure of Barth's Assumptions about Learning and Knowledge scale and explores its relationship to established "progressivism" and…

  6. Confined cattle feeding trial to validate fecal DNA metabarcoding to inform rangeland free-roaming diet applications

    USDA-ARS?s Scientific Manuscript database

    Diet composition of free-roaming livestock and wildlife in extensive rangelands is difficult to quantify. Recent technological advances now allow us to reconstruct plant species-specific dietary protein composition using fecal samples. However, it has been suggested that validation of the method i...

  7. AOAC Official Method℠ Matrix Extension Validation Study of Assurance GDS™ for the Detection of Salmonella in Selected Spices.

    PubMed

    Feldsine, Philip; Kaur, Mandeep; Shah, Khyati; Immerman, Amy; Jucker, Markus; Lienau, Andrew

    2015-01-01

    Assurance GDS™ for Salmonella Tq has been validated according to the AOAC INTERNATIONAL Methods Committee Guidelines for Validation of Microbiological Methods for Food and Environmental Surfaces for the detection of Salmonella in selected foods and environmental surfaces (Official Method of Analysis℠ 2009.03, Performance Tested Method℠ No. 050602). The method also completed AFNOR validation (following the ISO 16140 standard) compared to the reference method EN ISO 6579. For AFNOR, GDS was given a scope covering all human food, animal feed stuff, and environmental surfaces (Certificate No. TRA02/12-01/09). Results showed that Assurance GDS for Salmonella (GDS) has high sensitivity and is equivalent to the reference culture methods for the detection of motile and non-motile Salmonella. As part of the aforementioned validations, inclusivity and exclusivity studies, stability, and ruggedness studies were also conducted. Assurance GDS has 100% inclusivity and exclusivity among the 100 Salmonella serovars and 35 non-Salmonella organisms analyzed. To add to the scope of the Assurance GDS for Salmonella method, a matrix extension study was conducted, following the AOAC guidelines, to validate the application of the method for selected spices, specifically curry powder, cumin powder, and chili powder, for the detection of Salmonella.

  8. Predicting the performance of a power amplifier using large-signal circuit simulations of an AlGaN/GaN HFET model

    NASA Astrophysics Data System (ADS)

    Bilbro, Griff L.; Hou, Danqiong; Yin, Hong; Trew, Robert J.

    2009-02-01

We have quantitatively modeled the conduction current and charge storage of an HFET in terms of its physical dimensions and material properties. For DC or small-signal RF operation, no adjustable parameters are necessary to predict the terminal characteristics of the device. Linear performance measures such as small-signal gain and input admittance can be predicted directly from the geometric structure and material properties assumed for the device design. We have validated our model at low frequency against experimental I-V measurements and against two-dimensional device simulations. We discuss our recent extension of the model to include a larger class of electron velocity-field curves. We also discuss the recent reformulation of the model to facilitate its implementation in commercial large-signal high-frequency circuit simulators. Large-signal RF operation is more complex. First, the highest CW microwave power is fundamentally bounded by a brief, reversible channel breakdown in each RF cycle. Second, the highest experimental measurements of efficiency, power, or linearity always require harmonic load pull and possibly also harmonic source pull. Presently, our model accounts for these facts with an adjustable breakdown voltage and with adjustable load and source impedances for the fundamental frequency and its harmonics. This has allowed us to validate our model under large-signal RF conditions by simultaneously fitting experimental measurements of output power, gain, and power-added efficiency of real devices. We show that the resulting model can be used to compare alternative device designs in terms of their large-signal performance, such as their output power at 1 dB gain compression or their third-order intercept points. In addition, the model provides insight into new device physics features enabled by the unprecedented current and voltage levels of AlGaN/GaN HFETs, including non-ohmic resistance in the source access regions and partial depletion of the 2DEG in the drain access region.

  9. Development, validity and reliability of a new pressure air biofeedback device (PAB) for measuring isometric extension strength of the lumbar spine.

    PubMed

    Pienaar, Andries W; Barnard, Justhinus G

    2017-04-01

This study describes the development of a new portable muscle testing device that uses air pressure as a biofeedback and strength testing tool. For this purpose, a pressure air biofeedback device (PAB®) was developed to measure and record the isometric extension strength of the lumbar multifidus muscle in asymptomatic and low back pain (LBP) persons. A total of 42 subjects (mean age 47.58 ± 18.58 years) participated in this study. The validity of PAB® was assessed by comparing a selected measure, air pressure force in millibar (mb), to a standard criterion, calibrated weights in kilograms (kg), during day-to-day tests. Furthermore, clinical trial-to-trial and day-to-day tests of maximum voluntary isometric contraction (MVIC) of the L5 lumbar multifidus were done to compare air pressure force (mb) to electromyography (EMG) in microvolts (μV) and to measure the reliability of PAB®. A highly significant relationship was found between air pressure output (mb) and calibrated weights (kg). In addition, Pearson correlation calculations showed a significant relationship between PAB® force (mb) and EMG activity (μV) for all subjects (n = 42) examined, as well as for the asymptomatic group (n = 24). No relationship was detected for the LBP group (n = 18). In terms of lumbar extension strength, we found that asymptomatic subjects were significantly stronger than LBP subjects. The results of the PAB® test differentiated between LBP and asymptomatic subjects' lumbar isometric extension strength without any risk to the subjects and also indicate that the lumbar isometric extension test with the new PAB® device is reliable and valid.
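The criterion-validity comparisons in this record rest on the Pearson correlation coefficient between paired measurements. A minimal sketch of that computation follows; the paired pressure/EMG readings are hypothetical illustrations, not data from the study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# hypothetical paired readings: device pressure (mb) vs EMG amplitude (uV)
pressure = [110, 135, 150, 170, 190, 220]
emg      = [42, 55, 61, 70, 80, 95]
print(pearson_r(pressure, emg))  # close to 1: strong linear agreement
```

In a validity study the coefficient would be reported together with its p-value and sample size, as the abstract does for the full group and the two subgroups.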

  10. Equations of state for hydrogen and deuterium.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerley, Gerald Irwin

    2003-12-01

This report describes the complete revision of a deuterium equation of state (EOS) model published in 1972. It uses the same general approach as the 1972 EOS, i.e., the so-called 'chemical model,' but incorporates a number of theoretical advances that have taken place during the past thirty years. Three phases are included: a molecular solid, an atomic solid, and a fluid phase consisting of both molecular and atomic species. Ionization and the insulator-metal transition are also included. The most important improvements are in the liquid perturbation theory, the treatment of molecular vibrations and rotations, and the ionization equilibrium and mixture models. In addition, new experimental data and theoretical calculations are used to calibrate certain model parameters, notably the zero-Kelvin isotherms for the molecular and atomic solids, and the quantum corrections to the liquid phase. The report gives a general overview of the model, followed by detailed discussions of the most important theoretical issues and extensive comparisons with the many experimental data that have been obtained during the last thirty years. Questions about the validity of the chemical model are also considered. Implications for modeling the 'giant planets' are also discussed.

  11. Early contributions to theoretical chemistry: Inga Fischer-Hjalmars, a founder of the Swedish school

    NASA Astrophysics Data System (ADS)

    Johansson, Adam Johannes

    2017-09-01

    Inga Fischer-Hjalmars was one of the pioneers in the creation of the Swedish school of theoretical chemistry. She started her scientific endeavours in pharmacy and biochemistry, but soon sought a deeper understanding of molecules and chemistry. With a genuine experimental background and quantum chemical skills learned from Charles Coulson in the late 1940s, Inga was well prepared to continue her research and to contribute to the establishment of theoretical chemistry as it was later defined by Coulson; the use of quantum mechanics to explain experimental phenomena in all branches of chemistry. During the 1950s and 1960s Inga made important contributions to our understanding of chemical bonding and reactivity. For example, she made key insights into the dissociation of molecular hydrogen, the influence of heteroatoms on dipole moments in organic compounds, the electronic configuration of ozone and on the validity of different approximations in molecular theory. Inga Fischer-Hjalmars and her students developed extensions of the Pariser-Parr-Pople method and during the latter part of her career, she returned to the biomolecules that once had brought her into science, now applying quantum chemical methods to understand bonding and spectral properties of these molecules at greater depth.

  12. A New Unsteady Model for Dense Cloud Cavitation in Cryogenic Fluids

    NASA Technical Reports Server (NTRS)

    Hosangadi, A.; Ahuja, V.

    2005-01-01

    A new unsteady cavitation model is presented wherein the phase change process (bubble growth/collapse) is coupled to the acoustic field in a cryogenic fluid. It predicts the number density and radius of bubbles in vapor clouds by tracking both the aggregate surface area and volume fraction of the cloud. Hence, formulations for the dynamics of individual bubbles (e.g., the Rayleigh-Plesset equation) may be integrated within the macroscopic context of a dense vapor cloud, i.e., a cloud that occupies a significant fraction of the available volume and contains numerous bubbles. This formulation has been implemented within the CRUNCH CFD code, which has a compressible real-fluid formulation and a multi-element, unstructured grid framework, and has been validated extensively for liquid rocket turbopump inducers. Detailed unsteady simulations of a cavitating ogive in liquid nitrogen are presented where time-averaged mean cavity pressure and temperature depressions due to cavitation are compared with experimental data. The model also provides the spatial and temporal history of the bubble size distribution in the vapor clouds that are shed, an important physical parameter that is difficult to measure experimentally and is a significant advancement in the modeling of dense cloud cavitation.
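
    The central bookkeeping idea above, recovering the bubble number density and radius from the two tracked cloud quantities, can be sketched for a locally monodisperse cloud (an assumption of this illustration, not necessarily the paper's exact closure):

```python
import math

def bubble_stats(alpha, sigma):
    """Recover the mean bubble radius R and number density n from the
    cloud's vapor volume fraction alpha and interfacial area per unit
    volume sigma, assuming locally identical spherical bubbles:
        alpha = n * (4/3) * pi * R**3
        sigma = n * 4 * pi * R**2
    Dividing the two relations gives R = 3*alpha/sigma, then n follows.
    """
    R = 3.0 * alpha / sigma
    n = sigma / (4.0 * math.pi * R**2)
    return R, n
```

Tracking alpha and sigma as transported quantities is what lets a bubble-scale relation like Rayleigh-Plesset be evaluated inside a macroscopic cloud.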

  13. Possible safety hazards associated with the operation of the 0.3-m transonic cryogenic tunnel at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Webster, T. J.

    1982-01-01

    The 0.3 m Transonic Cryogenic Tunnel (TCT) at the NASA Langley Research Center was built in 1973 as a facility intended to be used for no more than 60 hours in order to verify the validity of the cryogenic wind tunnel concept at transonic speeds. The role of the 0.3 m TCT has gradually changed; now, after more than 3000 hours of operation, it is classified as a major NASA research facility and, under the administration of the Experimental Techniques Branch, it is used extensively for the testing of airfoils at high Reynolds numbers and for the development of various technologies related to the efficient operation and use of cryogenic wind tunnels. The purpose of this report is to document the results of a recent safety analysis of the 0.3 m TCT facility. This analysis was made as part of an ongoing program with the Experimental Techniques Branch designed to ensure that the existing equipment and current operating procedures of the 0.3 m TCT facility are acceptable in terms of today's standards of safety for cryogenic systems.

  14. Theory and experiments in model-based space system anomaly management

    NASA Astrophysics Data System (ADS)

    Kitts, Christopher Adam

    This research program consists of an experimental study of model-based reasoning methods for detecting, diagnosing and resolving anomalies that occur when operating a comprehensive space system. Using a first principles approach, several extensions were made to the existing field of model-based fault detection and diagnosis in order to develop a general theory of model-based anomaly management. Based on this theory, a suite of algorithms was developed and computationally implemented in order to detect, diagnose and identify resolutions for anomalous conditions occurring within an engineering system. The theory and software suite were experimentally verified and validated in the context of a simple but comprehensive, student-developed, end-to-end space system built specifically to support such demonstrations. This space system consisted of the Sapphire microsatellite, which was launched in 2001, several geographically distributed and Internet-enabled communication ground stations, and a centralized mission control complex located in the Space Technology Center in the NASA Ames Research Park. Results of both ground-based and on-board experiments demonstrate the speed, accuracy, and value of the algorithms compared to human operators, and they highlight future improvements required to mature this technology.

  15. A megawatt-level surface wave oscillator in Y-band with large oversized structure driven by annular relativistic electron beam.

    PubMed

    Wang, Jianguo; Wang, Guangqiang; Wang, Dongyang; Li, Shuang; Zeng, Peng

    2018-05-03

    High-power vacuum electronic devices in the millimeter-wave to terahertz regime are attracting extensive interest due to their potential applications in science and technology. In this paper, the design and experimental results of a powerful compact oversized surface wave oscillator (SWO) in Y-band are presented. A cylindrical slow wave structure (SWS) with rectangular corrugations and a large diameter, about 6.8 times the radiation wavelength, is proposed to support the surface wave interacting with an annular relativistic electron beam. By choosing appropriate beam parameters, the beam-wave interaction takes place near the π-point of the TM01 mode dispersion curve, giving a higher coupling impedance and temporal growth rate than the higher-order TM0n modes. The fundamental-mode operation of the device is verified by particle-in-cell (PIC) simulation results, which also indicate its capability of tens of megawatts of power output in the Y-band. Finally, a compact experimental setup was built to validate the design. Measurement results show that a terahertz pulse with frequency in the range of 0.319-0.349 THz, duration of about 2 ns and radiation power of about 2.1 MW has been generated.

  16. The Paucity Problem: Where Have All the Space Reactor Experiments Gone?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Marshall, Margaret A.

    2016-10-01

    The Handbooks of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) together contain a plethora of documented and evaluated experiments essential in the validation of nuclear data, neutronics codes, and modeling of various nuclear systems. Unfortunately, only a minute selection of handbook data (twelve evaluations) are of actual experimental facilities and mockups designed specifically for space nuclear research. There is a paucity problem, such that the multitude of space nuclear experimental activities performed in the past several decades have yet to be recovered and made available in such detail that the international community could benefit from these valuable historical research efforts. Those experiments represent extensive investments in infrastructure, expertise, and cost, as well as constitute significantly valuable resources of data supporting past, present, and future research activities. The ICSBEP and IRPhEP were established to identify and verify comprehensive sets of benchmark data; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data. See full abstract in attached document.

  17. Extension of Miles Equation for Ring Baffle Damping Predictions to Small Slosh Amplitudes and Large Baffle Widths

    NASA Technical Reports Server (NTRS)

    West, Jeff; Yang, H. Q.; Brodnick, Jacob; Sansone, Marco; Westra, Douglas

    2016-01-01

    The Miles equation has long been used to predict slosh damping in liquid propellant tanks due to ring baffles. The original work by Miles identifies definite limits to its range of application. Recent evaluations of the Space Launch System found that the Core Stage baffle designs violate the limits of applicability of the Miles equation. This paper describes the work conducted by NASA/MSFC to develop methods to predict slosh damping from ring baffles for conditions in which the Miles equation is not applicable. For asymptotically small slosh amplitudes, or conversely large baffle widths, an asymptotic expression for slosh damping was developed and calibrated using historical experimental sub-scale slosh damping data. For the parameter space that lies between the regions of applicability of the asymptotic expression and the Miles equation, Computational Fluid Dynamics simulations of slosh damping were used to develop an expression for slosh damping. The combined multi-regime slosh prediction methodology is shown to be smooth at regime boundaries and consistent with both sub-scale experimental slosh damping data and the results of validated Computational Fluid Dynamics predictions of slosh damping due to ring baffles.
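
    A multi-regime methodology of this kind must transition smoothly between its limiting expressions. One generic way to achieve that, shown here purely as an illustration (the paper's actual CFD-calibrated bridging expression is not reproduced), is a logistic blend between the two damping estimates in the regime parameter:

```python
import math

def blended_damping(gamma_asym, gamma_miles, x, x0, width):
    """Smoothly blend two damping estimates across a regime boundary.

    x is a regime parameter (e.g. slosh amplitude over baffle width); the
    logistic weight moves from the asymptotic estimate well below x0 to
    the Miles estimate well above it. This is a generic smooth blend, not
    the specific expression developed in the paper.
    """
    w = 1.0 / (1.0 + math.exp(-(x - x0) / width))
    return (1.0 - w) * gamma_asym + w * gamma_miles
```

The blend is infinitely differentiable, so the combined prediction has no kink at the regime boundary.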

  18. On the exactness of effective Floquet Hamiltonians employed in solid-state NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Garg, Rajat; Ramachandran, Ramesh

    2017-05-01

    Development of theoretical models based on analytic theory has remained an active pursuit in molecular spectroscopy for its utility both in the design of experiments as well as in the interpretation of spectroscopic data. In particular, the role of "Effective Hamiltonians" in the evolution of theoretical frameworks is well known across all forms of spectroscopy. Nevertheless, a constant revalidation of the approximations employed in the theoretical frameworks is necessitated by the constant improvements on the experimental front in addition to the complexity posed by the systems under study. In this article, we confine our discussion to the derivation of effective Floquet Hamiltonians based on the contact transformation procedure. While the importance of the effective Floquet Hamiltonians in the qualitative description of NMR experiments has been realized in simpler cases, its extension in quantifying spectral data deserves a cautious approach. With this objective, the validity of the approximations employed in the derivation of the effective Floquet Hamiltonians is re-examined through a comparison with exact numerical methods under differing experimental conditions. The limitations arising from the existing analytic methods are outlined along with remedial measures for improving the accuracy of the derived effective Floquet Hamiltonians.
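
    The contact (van Vleck) transformation underlying such effective Hamiltonians has the standard perturbative form, with the generator S_1 chosen so that the first-order term becomes block-diagonal (secular); the second-order terms then define the effective Floquet Hamiltonian whose accuracy the article re-examines:

```latex
\begin{aligned}
H_{\mathrm{eff}} &= e^{i\lambda S_1}\, H \, e^{-i\lambda S_1} \\
&= H_0 + \lambda\bigl(H_1 + i[S_1, H_0]\bigr)
   + \lambda^2\Bigl(i[S_1, H_1] - \tfrac{1}{2}\bigl[S_1,[S_1,H_0]\bigr]\Bigr) + \cdots
\end{aligned}
```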

  19. Modeling Molecular and Cellular Aspects of Human Disease using the Nematode Caenorhabditis elegans

    PubMed Central

    Silverman, Gary A.; Luke, Cliff J.; Bhatia, Sangeeta R.; Long, Olivia S.; Vetica, Anne C.; Perlmutter, David H.; Pak, Stephen C.

    2009-01-01

    As an experimental system, Caenorhabditis elegans offers a unique opportunity to interrogate in vivo the genetic and molecular functions of human disease-related genes. For example, C. elegans has provided crucial insights into fundamental biological processes such as cell death and cell fate determinations, as well as pathological processes such as neurodegeneration and microbial susceptibility. The C. elegans model has several distinct advantages including a completely sequenced genome that shares extensive homology with that of mammals, ease of cultivation and storage, a relatively short lifespan and techniques for generating null and transgenic animals. However, the ability to conduct unbiased forward and reverse genetic screens in C. elegans remains one of the most powerful experimental paradigms for discovering the biochemical pathways underlying human disease phenotypes. The identification of these pathways leads to a better understanding of the molecular interactions that perturb cellular physiology, and forms the foundation for designing mechanism-based therapies. To this end, the ability to process large numbers of isogenic animals through automated work stations suggests that C. elegans, manifesting different aspects of human disease phenotypes, will become the platform of choice for in vivo drug discovery and target validation using high-throughput/content screening technologies. PMID:18852689

  20. A computational model of in vitro angiogenesis based on extracellular matrix fibre orientation.

    PubMed

    Edgar, Lowell T; Sibole, Scott C; Underwood, Clayton J; Guilkey, James E; Weiss, Jeffrey A

    2013-01-01

    Recent interest in the process of vascularisation within the biomedical community has motivated numerous new research efforts focusing on the process of angiogenesis. Although the role of chemical factors during angiogenesis has been well documented, the role of mechanical factors, such as the interaction between angiogenic vessels and the extracellular matrix, remains poorly understood. In vitro methods for studying angiogenesis exist; however, measurements available using such techniques often suffer from limited spatial and temporal resolutions. For this reason, computational models have been extensively employed to investigate various aspects of angiogenesis. This paper outlines the formulation and validation of a simple and robust computational model developed to accurately simulate angiogenesis based on length, branching and orientation morphometrics collected from vascularised tissue constructs. Microvessels were represented as a series of connected line segments. The morphology of the vessels was determined by a linear combination of the collagen fibre orientation, the vessel density gradient and a random walk component. Excellent agreement was observed between computational and experimental morphometric data over time. Computational predictions of microvessel orientation within an anisotropic matrix correlated well with experimental data. The accuracy of this modelling approach makes it a valuable platform for investigating the role of mechanical interactions during angiogenesis.
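
    The vessel-tip growth rule described above, a linear combination of the collagen fibre orientation, the vessel density gradient, and a random-walk component, can be sketched as follows. The weights and the sign convention (growth away from dense regions) are hypothetical placeholders for illustration, not the calibrated values from the paper:

```python
import math
import random

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def next_direction(fibre_dir, density_grad,
                   w_fibre=0.5, w_grad=0.3, w_rand=0.2):
    """New unit growth direction for a vessel-tip segment: a weighted sum
    of the local fibre orientation (unit vector), the negative vessel
    density gradient (growth toward sparse regions), and a random-walk
    term. Weights are illustrative placeholders."""
    rand = normalize([random.gauss(0.0, 1.0) for _ in range(3)])
    d = [w_fibre * f - w_grad * g + w_rand * r
         for f, g, r in zip(fibre_dir, density_grad, rand)]
    return normalize(d)
```

Appending a fixed-length segment along each new direction reproduces the line-segment vessel representation used in the model.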

  1. Overview of HATP Experimental Aerodynamics Data for the Baseline F/A-18 Configuration

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Murri, Daniel G.; Erickson, Gary E.; Fisher, David F.; Banks, Daniel W.; Lanser, Wendy R.

    1996-01-01

    Determining the baseline aerodynamics of the F/A-18 was one of the major objectives of the High-Angle-of-Attack Technology Program (HATP). This paper will review the key data bases that have contributed to our knowledge of the baseline aerodynamics and the improvements in test techniques that have resulted from the experimental program. Photographs are given highlighting the forebody and leading-edge-extension (LEX) vortices. Other data representing the impact of Mach and Reynolds numbers on the forebody and LEX vortices will also be detailed. The level of agreement between different tunnels and between tunnels and flight will be illustrated using pressures, forces, and moments measured on a 0.06-scale model tested in the Langley 7- by 10-Foot High Speed Tunnel, a 0.16-scale model in the Langley 30- by 60-Foot Tunnel, a full-scale vehicle in the Ames 80- by 120-Foot Wind Tunnel, and the flight F/A-18 High Alpha Research Vehicle (HARV). Next, creative use of wind tunnel resources that accelerated the validation of the computational fluid dynamics (CFD) codes will be described. Lastly, lessons learned, deliverables, and program conclusions are presented.

  2. Capturing RNA Folding Free Energy with Coarse-Grained Molecular Dynamics Simulations

    PubMed Central

    Bell, David R.; Cheng, Sara Y.; Salazar, Heber; Ren, Pengyu

    2017-01-01

    We introduce a coarse-grained RNA model for molecular dynamics simulations, RACER (RnA CoarsE-gRained). RACER achieves accurate native structure prediction for a number of RNAs (average RMSD of 2.93 Å) and the sequence-specific variation of free energy is in excellent agreement with experimentally measured stabilities (R2 = 0.93). Using RACER, we identified hydrogen-bonding (or base pairing), base stacking, and electrostatic interactions as essential driving forces for RNA folding. Also, we found that separating pairing vs. stacking interactions allowed RACER to distinguish folded vs. unfolded states. In RACER, base pairing and stacking interactions each provide an approximate stability of 3–4 kcal/mol for an A-form helix. RACER was developed based on PDB structural statistics and experimental thermodynamic data. In contrast with previous work, RACER implements a novel effective vdW potential energy function, which led us to re-parameterize hydrogen bond and electrostatic potential energy functions. Further, RACER is validated and optimized using a simulated annealing protocol to generate potential energy vs. RMSD landscapes. Finally, RACER is tested using extensive equilibrium pulling simulations (0.86 ms total) on eleven RNA sequences (hairpins and duplexes). PMID:28393861

  3. Final Report: "Recreating Planet Cores in the Laboratory"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeanloz, Raymond

    2017-06-02

    The grant supported a combination of experimental and theoretical research characterizing materials at high pressures (above 0.1-1 TPa = 1-10 million atmospheres) and modest temperatures (below 20,000-100,000 K). This is the “warm dense” (sub-nuclear) regime relevant to understanding the properties of planets, and also to characterizing the chemical bonding forces between atoms. As such, the experiments provide important validation and extensions of theoretical simulations based on quantum mechanics, and offer new insights into the nature and evolution of planets, including the thousands of recently discovered extra-solar planets. In particular, our experiments have documented that: 1) helium can separate from hydrogen at conditions existing inside Jupiter and Saturn, providing much of these planets’ internal energy hence observed luminosities; 2) water ice is likely present in a superionic state with mobile protons inside Uranus and Neptune; 3) rock (oxides) can become metallic at conditions inside “super-Earths” and other large planets, thereby contributing to their magnetic fields; and 4) the “statistical atom” regime that provides the theoretical foundation for characterizing materials at planetary and astrophysical conditions is now accessible to experimental testing.

  4. Compressive strength of delaminated aerospace composites.

    PubMed

    Butler, Richard; Rhead, Andrew T; Liu, Wenli; Kontis, Nikolaos

    2012-04-28

    An efficient analytical model is described which predicts the value of compressive strain below which buckle-driven propagation of delaminations in aerospace composites will not occur. An extension of this efficient strip model which accounts for propagation transverse to the direction of applied compression is derived. In order to provide validation for the strip model a number of laminates were artificially delaminated producing a range of thin anisotropic sub-laminates made up of 0°, ±45° and 90° plies that displayed varied buckling and delamination propagation phenomena. These laminates were subsequently subject to experimental compression testing and nonlinear finite element analysis (FEA) using cohesive elements. Comparison of strip model results with those from experiments indicates that the model can conservatively predict the strain at which propagation occurs to within 10 per cent of experimental values provided (i) the thin-film assumption made in the modelling methodology holds and (ii) full elastic coupling effects do not play a significant role in the post-buckling of the sub-laminate. With such provision, the model was more accurate and produced fewer non-conservative results than FEA. The accuracy and efficiency of the model make it well suited to application in optimum ply-stacking algorithms to maximize laminate strength.

  5. WSEAT Shock Testing Margin Assessment Using Energy Spectra Final Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisemore, Carl; Babuska, Vit; Booher, Jason

    Several programs at Sandia National Laboratories have adopted energy spectra as a metric to relate the severity of mechanical insults to structural capacity, the purpose being to gain insight into the system's capability and reliability, and to quantify the ultimate margin between the normal operating envelope and the likely system failure point -- a system margin assessment. The fundamental concern with the use of energy metrics was that the applicability domain and implementation details were not completely defined for many problems of interest. The goal of this WSEAT project was to examine that domain of applicability, work out the necessary implementation details, and provide experimental validation for the energy-spectra-based methods in the context of margin assessment as they relate to shock environments. The extensive test results showed that failure predictions using energy methods did not agree with failure predictions using S-N data. As a result, a modification to the energy methods was developed following the form of Basquin's equation to incorporate the power-law exponent for fatigue damage. This update to the energy-based framework brings the energy-based metrics into agreement with experimental data and historical S-N data.
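
    Basquin's equation, whose power-law form the modified energy metric follows, relates stress amplitude to cycles to failure; combined with Palmgren-Miner summation it gives the classical S-N damage estimate against which the energy methods were reconciled. A minimal sketch with generic parameters (not Sandia's values, and not the energy-spectrum formulation itself):

```python
def basquin_life(stress_amp, sigma_f, b):
    """Cycles to failure from Basquin's power-law S-N relation,
        stress_amp = sigma_f * N**b   (b < 0),
    solved for N."""
    return (stress_amp / sigma_f) ** (1.0 / b)

def miner_damage(blocks, sigma_f, b):
    """Palmgren-Miner linear damage sum over (cycles, stress_amp) blocks;
    failure is predicted when the sum reaches 1."""
    return sum(n / basquin_life(s, sigma_f, b) for n, s in blocks)
```

Halving the stress with b = -0.1 multiplies the predicted life by 2**10, which is why the exponent matters so much when mapping energy severity to fatigue damage.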

  6. Optimized molecular dynamics force fields applied to the helix-coil transition of polypeptides.

    PubMed

    Best, Robert B; Hummer, Gerhard

    2009-07-02

    Obtaining the correct balance of secondary structure propensities is a central priority in protein force-field development. Given that current force fields differ significantly in their alpha-helical propensities, a correction to match experimental results would be highly desirable. We have determined simple backbone energy corrections for two force fields to reproduce the fraction of helix measured in short peptides at 300 K. As validation, we show that the optimized force fields produce results in excellent agreement with nuclear magnetic resonance experiments for folded proteins and short peptides not used in the optimization. However, despite the agreement at ambient conditions, the dependence of the helix content on temperature is too weak, a problem shared with other force fields. A fit of the Lifson-Roig helix-coil theory shows that both the enthalpy and entropy of helix formation are too small: the helix extension parameter w agrees well with experiment, but its entropic and enthalpic components are both only about half the respective experimental estimates. Our structural and thermodynamic analyses point toward the physical origins of these shortcomings in current force fields, and suggest ways to address them in future force-field development.
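
    In the Lifson-Roig scheme referenced above, a helical residue flanked by two helical neighbours carries weight w, any other helical residue carries weight v, and a coil residue carries 1. A brute-force sketch of the resulting mean helix fraction for a short chain (illustrative only; practical fits use the equivalent transfer-matrix form, and the parameter values here are arbitrary):

```python
from itertools import product

def helix_fraction(n, w, v):
    """Mean fraction of helical residues for an n-residue chain under the
    Lifson-Roig weighting: w for a helical residue with helical
    neighbours on both sides, v for any other helical residue, 1 for
    coil. Enumerates all 2**n conformations, so keep n small."""
    Z = 0.0   # partition function
    H = 0.0   # weight-averaged helical residue count
    for conf in product((0, 1), repeat=n):
        weight = 1.0
        for i, h in enumerate(conf):
            if h:
                left = conf[i - 1] if i > 0 else 0
                right = conf[i + 1] if i < n - 1 else 0
                weight *= w if (left and right) else v
        Z += weight
        H += weight * sum(conf)
    return H / (n * Z)
```

Because v weights helix ends and w weights helix interiors, the split between the two controls the enthalpy/entropy balance of helix formation that the abstract finds too weak in current force fields.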

  7. Using Virtual Testing for Characterization of Composite Materials

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph

    Composite materials are finally finding uses hitherto reserved for metals in structural systems applications -- airframes and engine containment systems, wraps for repair and rehabilitation, and ballistic/blast mitigation systems. They have high strength-to-weight ratios, are durable and resistant to environmental effects, have high impact strength, and can be manufactured in a variety of shapes. Generalized constitutive models are being developed to accurately model composite systems so they can be used in implicit and explicit finite element analysis. These models require extensive characterization of the composite material as input. The particular constitutive model of interest for this research is a three-dimensional orthotropic elasto-plastic composite material model that requires a total of 12 experimental stress-strain curves, yield stresses, and Young's modulus and Poisson's ratio in the material directions as input. Sometimes it is not possible to carry out the reliable experimental tests needed to characterize the composite material. One solution is to use virtual testing to fill the gaps in available experimental data. A Virtual Testing Software System (VTSS) has been developed to address the need for a less restrictive method to characterize a three-dimensional orthotropic composite material. The system takes in the material properties of the constituents and completes all 12 of the necessary characterization tests using finite element (FE) models. Verification and validation test cases demonstrate the capabilities of the VTSS.

  8. Air-water partition coefficients for a suite of polycyclic aromatic and other C10 through C20 unsaturated hydrocarbons.

    PubMed

    Rayne, Sierra; Forest, Kaya

    2016-09-18

    The air-water partition coefficients (Kaw) for 86 large polycyclic aromatic hydrocarbons and their unsaturated relatives were estimated using high-level G4(MP2) gas and aqueous phase calculations with the SMD, IEFPCM-UFF, and CPCM solvation models. An extensive method validation effort was undertaken which involved confirming that, via comparisons to experimental enthalpies of formation, gas-phase energies at the G4(MP2) level for the compounds of interest were at or near thermochemical accuracy. Investigations of the three solvation models using a range of neutral and ionic compounds suggested that while no clear preferential solvation model could be chosen in advance for accurate Kaw estimates of the target compounds, the employment of increasingly higher levels of theory would result in lower Kaw errors. Subsequent calculations on the polycyclic aromatic and unsaturated hydrocarbons at the G4(MP2) level revealed excellent agreement for the IEFPCM-UFF and CPCM models against limited available experimental data. The IEFPCM-UFF-G4(MP2) and CPCM-G4(MP2) solvation energy calculation approaches are anticipated to give Kaw estimates within typical experimental ranges, each having general Kaw errors of less than 0.5 log10 units. When applied to other large organic compounds, the method should allow development of a broad and reliable Kaw database for multimedia environmental modeling efforts on various contaminants.
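
    The link between a computed hydration (gas-to-aqueous) free energy and Kaw is a one-line conversion: with molar-concentration standard states, Kaw = C_air/C_water = exp(ΔG_solv/RT), so a more negative hydration free energy gives a lower Kaw. A sketch of that conversion (the sign convention and standard states are assumptions of this illustration, not taken from the paper):

```python
import math

R_KCAL = 1.987204e-3  # gas constant, kcal/(mol K)

def log10_kaw(dG_solv_kcal, T=298.15):
    """log10 of the air-water partition coefficient from the standard
    hydration (gas -> aqueous) free energy in kcal/mol, using
    molar-concentration standard states: Kaw = exp(dG_solv / RT)."""
    return dG_solv_kcal / (R_KCAL * T * math.log(10.0))
```

At 298.15 K, RT ln 10 is about 1.364 kcal/mol, so each 1.364 kcal/mol of hydration free energy shifts log10 Kaw by one unit; the paper's quoted 0.5 log10-unit accuracy thus corresponds to roughly 0.7 kcal/mol in the solvation energies.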

  9. Nonlinear Poisson equation for heterogeneous media.

    PubMed

    Hu, Langhua; Wei, Guo-Wei

    2012-08-22

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into account hyperpolarization effects due to intense charges and possibly nonlinear, anisotropic, and heterogeneous media. The variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation for the solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for a comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical prediction, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects.

  10. Extension of local front reconstruction method with controlled coalescence model

    NASA Astrophysics Data System (ADS)

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulate such flows with standard fixed grid methods due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. With the experimental film drainage times, LFRM predicts the droplet collisions better than other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method), especially at high velocity. When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that the LFRM coupled with film drainage models is much better at predicting the collision dynamics than the traditional methods.

  11. Approximate Evaluation of Acoustical Focal Beams by Phased Array Probes for Austenitic Weld Inspections

    NASA Astrophysics Data System (ADS)

    Kono, Naoyuki; Miki, Masahiro; Nakamura, Motoyuki; Ehara, Kazuya

    2007-03-01

    Phased array techniques are capable of the sensitive detection and precise sizing of flaws or cracks in components of nuclear power plants by using arbitrary focal beams with various depths, positions and angles. A quantitative investigation of these focal beams is essential for the optimization of array probes, especially for austenitic weld inspection, in order to improve the detectability, sizing accuracy, and signal-to-noise ratio obtained with these beams. In the present work, focal beams generated by phased array probes are calculated based on the Fresnel-Kirchhoff diffraction integral (FKDI) method, and an approximation formula relating the actual focal depth to the optical focal depth is proposed as an extension of the theory for conventional spherically focusing probes. The validity of the approximation formula for the array probes is confirmed by comparison with simulation data using the FKDI method and with experimental data.
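
    The basic focal-law computation behind such phased-array focal beams, delaying each element so that all wavefronts arrive at the focal point simultaneously, can be sketched for a linear array (a generic textbook calculation, not the paper's FKDI-based field model):

```python
import math

def focal_delays(element_x, focus_x, focus_z, c):
    """Per-element transmit delays (seconds) that focus a linear array at
    (focus_x, focus_z), for sound speed c: the element farthest from the
    focal point fires first, so all wavefronts arrive together."""
    d = [math.hypot(x - focus_x, focus_z) for x in element_x]
    dmax = max(d)
    return [(dmax - di) / c for di in d]
```

For a symmetric array focused on its axis, the outermost elements fire first and the centre element last, producing the converging wavefront; steering simply moves (focus_x, focus_z) off-axis.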

  12. A hybrid organic-inorganic perovskite dataset

    NASA Astrophysics Data System (ADS)

    Kim, Chiho; Huan, Tran Doan; Krishnan, Sridevi; Ramprasad, Rampi

    2017-05-01

    Hybrid organic-inorganic perovskites (HOIPs) have been attracting a great deal of attention due to their versatility of electronic properties and fabrication methods. We prepare a dataset of 1,346 HOIPs, which features 16 organic cations, 3 group-IV cations and 4 halide anions. Using a combination of an atomic structure search method and density functional theory calculations, the optimized structures, the bandgap, the dielectric constant, and the relative energies of the HOIPs are uniformly prepared and validated by comparing with relevant experimental and/or theoretical data. We make the dataset available at Dryad Digital Repository, NoMaD Repository, and Khazana Repository (http://khazana.uconn.edu/), hoping that it could be useful for future data-mining efforts that can explore possible structure-property relationships and phenomenological models. Progressive extension of the dataset is expected as new organic cations become appropriate within the HOIP framework, and as additional properties are calculated for the new compounds found.

  13. MODEL CORRELATION STUDY OF A RETRACTABLE BOOM FOR A SOLAR SAIL SPACECRAFT

    NASA Technical Reports Server (NTRS)

    Adetona, O.; Keel, L. H.; Oakley, J. D.; Kappus, K.; Whorton, M. S.; Kim, Y. K.; Rakpczy, J. M.

    2005-01-01

    To realize design concepts, predict dynamic behavior, and develop appropriate control strategies for high-performance operation of a solar-sail spacecraft, we developed a simple analytical model that represents the dynamic behavior of spacecraft of various sizes. Since the motion of the vehicle is dominated by the retractable booms that support the structure, our study concentrates on developing and validating a dynamic model of a long retractable boom. Extensive tests with various configurations were conducted on the 30-meter, lightweight, retractable lattice boom at NASA MSFC, which is structurally and dynamically similar to the booms of a solar-sail spacecraft currently under construction. Experimental data were then compared with the corresponding response of the analytical model. Though mixed results were obtained, the analytical model emulates several key characteristics of the boom. The paper concludes with a detailed discussion of issues observed during the study.

  14. A physical optics/equivalent currents model for the RCS of trihedral corner reflectors

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Polycarpou, Anastasis C.

    1993-01-01

    The scattering in the interior regions of both square and triangular trihedral corner reflectors is examined. The theoretical model presented combines geometrical and physical optics (GO and PO), used to account for reflection terms, with equivalent edge currents (EEC), used to account for first-order diffractions from the edges. First-order, second-order, and third-order reflection terms are included. Calculating the first-order reflection terms involves integrating over the entire surface of the illuminated plate. Calculating the second- and third-order reflection terms, however, is much more difficult because the illuminated area is an arbitrary polygon whose shape is dependent upon the incident angles. The method for determining the area of integration is detailed. Extensive comparisons between the high-frequency model, Finite-Difference Time-Domain (FDTD) and experimental data are used for validation of the radar cross section (RCS) of both square and triangular trihedral reflectors.

  15. Detecting cheaters without thinking: testing the automaticity of the cheater detection module.

    PubMed

    Van Lier, Jens; Revlin, Russell; De Neys, Wim

    2013-01-01

    Evolutionary psychologists have suggested that our brain is composed of evolved mechanisms. One extensively studied mechanism is the cheater detection module. This module would make people very good at detecting cheaters in a social exchange. A vast amount of research has illustrated performance facilitation on social contract selection tasks. This facilitation is attributed to the alleged automatic and isolated operation of the module (i.e., independent of general cognitive capacity). This study, using the selection task, tested the critical automaticity assumption in three experiments. Experiments 1 and 2 established that performance on social contract versions did not depend on cognitive capacity or age. Experiment 3 showed that experimentally burdening cognitive resources with a secondary task had no impact on performance on the social contract version. However, in all experiments, performance on a non-social contract version did depend on available cognitive capacity. Overall, findings validate the automatic and effortless nature of social exchange reasoning.

  16. Direct observation of Young’s double-slit interferences in vibrationally resolved photoionization of diatomic molecules

    PubMed Central

    Canton, Sophie E.; Plésiat, Etienne; Bozek, John D.; Rude, Bruce S.; Decleva, Piero; Martín, Fernando

    2011-01-01

    Vibrationally resolved valence-shell photoionization spectra of H2, N2 and CO have been measured in the photon energy range 20–300 eV using third-generation synchrotron radiation. Young’s double-slit interferences lead to oscillations in the corresponding vibrational ratios, showing that the molecules behave as two-center electron-wave emitters and that the associated interferences leave their trace in the angle-integrated photoionization cross section. In contrast to previous work, the oscillations are directly observable in the experiment, thereby removing any possible ambiguity related to the introduction of external parameters or fitting functions. A straightforward extension of an original idea proposed by Cohen and Fano [Cohen HD, Fano U (1966) Phys Rev 150:30] confirms this interpretation and shows that it is also valid for diatomic heteronuclear molecules. Results of accurate theoretical calculations are in excellent agreement with the experimental findings.

  17. Toolkit for visualization of the cellular structure and organelles in Aspergillus niger.

    PubMed

    Buren, Emiel B J Ten; Karrenbelt, Michiel A P; Lingemann, Marit; Chordia, Shreyans; Deng, Ying; Hu, JingJing; Verest, Johanna M; Wu, Vincen; Gonzalez, Teresita J Bello; Heck, Ruben G A van; Odoni, Dorett I; Schonewille, Tom; Straat, Laura van der; Graaff, Leo H de; Passel, Mark W J van

    2014-12-19

    Aspergillus niger is a filamentous fungus that is extensively used in industrial fermentations for protein expression and the production of organic acids. Inherent biosynthetic capabilities, such as the capacity to secrete these biomolecules in high amounts, make A. niger an attractive production host. Although A. niger is renowned for this ability, the knowledge of the molecular components that underlie its production capacity, intercellular trafficking processes and secretion mechanisms is far from complete. Here, we introduce a standardized set of tools, consisting of an N-terminal GFP-actin fusion and a codon-optimized eforRed chromoprotein. Expression of the GFP-actin construct facilitates visualization of the actin filaments of the cytoskeleton, whereas expression of the chromoprotein construct results in a clearly distinguishable red phenotype. These experimentally validated constructs constitute the first set of standardized A. niger biomarkers, which can be used to study morphology, intercellular trafficking, and secretion phenomena.

  18. High-throughput real-time quantitative reverse transcription PCR.

    PubMed

    Bookout, Angie L; Cummins, Carolyn L; Mangelsdorf, David J; Pesola, Jean M; Kramer, Martha F

    2006-02-01

    Extensive detail on the application of the real-time quantitative polymerase chain reaction (QPCR) for the analysis of gene expression is provided in this unit. The protocols are designed for high-throughput, 384-well-format instruments, such as the Applied Biosystems 7900HT, but may be modified to suit any real-time PCR instrument. QPCR primer and probe design and validation are discussed, and three relative quantitation methods are described: the standard curve method, the efficiency-corrected DeltaCt method, and the comparative cycle time, or DeltaDeltaCt method. In addition, a method is provided for absolute quantification of RNA in unknown samples. RNA standards are subjected to RT-PCR in the same manner as the experimental samples, thus accounting for the reaction efficiencies of both procedures. This protocol describes the production and quantitation of synthetic RNA molecules for real-time and non-real-time RT-PCR applications.
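    The comparative cycle time (DeltaDeltaCt) method named above reduces to simple arithmetic; a minimal sketch, assuming near-100% amplification efficiency and hypothetical Ct values:

```python
def ddct_fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the comparative Ct (2^-DeltaDeltaCt) method.
    Assumes near-100% PCR efficiency for both target and reference assays."""
    delta_ct_test = ct_target_test - ct_ref_test   # normalize test sample
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize calibrator sample
    ddct = delta_ct_test - delta_ct_ctrl
    return 2.0 ** (-ddct)

# Hypothetical Ct values: the target crosses threshold 2 cycles earlier
# (relative to the reference gene) in the test sample than in the calibrator
fold = ddct_fold_change(22.0, 18.0, 24.0, 18.0)  # -> 4.0, a 4-fold increase
```

    When assay efficiencies deviate from 100%, the efficiency-corrected DeltaCt variant mentioned in the abstract replaces the base 2 with the measured per-assay efficiencies.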

  19. Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis.

    PubMed

    Gao, Yurui; Burns, Scott S; Lauzon, Carolyn B; Fong, Andrew E; James, Terry A; Lubar, Joel F; Thatcher, Robert W; Twillie, David A; Wirt, Michael D; Zola, Marc A; Logan, Bret W; Anderson, Adam W; Landman, Bennett A

    2013-03-29

    Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.

  20. Integration of XNAT/PACS, DICOM, and research software for automated multi-modal image analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.

    2013-03-01

    Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.

  1. Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis

    PubMed Central

    Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.

    2013-01-01

    Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software. PMID:24386548

  2. NASA Common Research Model Test Envelope Extension With Active Sting Damping at NTF

    NASA Technical Reports Server (NTRS)

    Rivers, Melissa B.; Balakrishna, S.

    2014-01-01

    The NASA Common Research Model (CRM) high Reynolds number transonic wind tunnel testing program was established to generate an experimental database for applied Computational Fluid Dynamics (CFD) validation studies. During transonic wind tunnel tests, the CRM encounters large sting vibrations when the angle of attack approaches the second pitching moment break, which can sometimes become divergent. CRM transonic test data analysis suggests that sting divergent oscillations are related to negative net sting damping episodes associated with flow separation instability. The National Transonic Facility (NTF) has been addressing remedies to extend polar testing up to and beyond the second pitching moment break point of the test articles using an active piezoceramic damper system for both ambient and cryogenic temperatures. This paper reviews CRM test results to gain understanding of sting dynamics with a simple model describing the mechanics of a sting-model system and presents the performance of the damper under cryogenic conditions.

  3. Prediction of enzymatic pathways by integrative pathway mapping

    PubMed Central

    Wichelecki, Daniel J; San Francisco, Brian; Zhao, Suwen; Rodionov, Dmitry A; Vetting, Matthew W; Al-Obaidi, Nawar F; Lin, Henry; O'Meara, Matthew J; Scott, David A; Morris, John H; Russel, Daniel; Almo, Steven C; Osterman, Andrei L

    2018-01-01

    The functions of most proteins are yet to be determined. The function of an enzyme is often defined by its interacting partners, including its substrate and product, and its role in larger metabolic networks. Here, we describe a computational method that predicts the functions of orphan enzymes by organizing them into a linear metabolic pathway. Given candidate enzyme and metabolite pathway members, this aim is achieved by finding those pathways that satisfy structural and network restraints implied by varied input information, including that from virtual screening, chemoinformatics, genomic context analysis, and ligand-binding experiments. We demonstrate this integrative pathway mapping method by predicting the L-gulonate catabolic pathway in Haemophilus influenzae Rd KW20. The prediction was subsequently validated experimentally by enzymology, crystallography, and metabolomics. Integrative pathway mapping by satisfaction of structural and network restraints is extensible to molecular networks in general and thus formally bridges the gap between structural biology and systems biology. PMID:29377793

  4. Two-Level Weld-Material Homogenization for Efficient Computational Analysis of Welded Structure Blast-Survivability

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Arakere, G.; Hariharan, A.; Pandurangan, B.

    2012-06-01

    The introduction of newer joining technologies such as friction-stir welding (FSW) into automotive engineering requires knowledge of the joint-material microstructure and properties. Since the development of vehicles (including military vehicles capable of surviving blast and ballistic impacts) nowadays involves extensive use of computational engineering analyses (CEA), robust high-fidelity material models are needed for FSW joints. A two-level material-homogenization procedure is proposed and utilized in this study to help manage the computational cost and computer storage requirements of such CEAs. The method utilizes experimental (microstructure, microhardness, tensile testing, and x-ray diffraction) data to construct: (a) the material model for each weld zone and (b) the material model for the entire weld. The procedure is validated by comparing its predictions with those of more detailed but more costly computational analyses.
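    The homogenization idea can be illustrated with the simplest volume-fraction average (a generic Voigt rule-of-mixtures sketch, not the paper's two-level procedure; the weld-zone fractions and moduli below are hypothetical):

```python
def rule_of_mixtures(fractions, moduli):
    """Voigt (iso-strain) estimate of an effective modulus: the
    volume-fraction-weighted average of the constituent moduli."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(f * e for f, e in zip(fractions, moduli))

# Hypothetical weld zones: nugget, heat-affected zone, base metal (moduli in GPa)
e_eff = rule_of_mixtures([0.2, 0.3, 0.5], [60.0, 65.0, 70.0])  # ~66.5 GPa
```

    A two-level scheme of the kind the paper describes would first homogenize within each weld zone and then apply a second average, with zone-level models calibrated against the microhardness and tensile data.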

  5. Tailored Welding Technique for High Strength Al-Cu Alloy for Higher Mechanical Properties

    NASA Astrophysics Data System (ADS)

    Biradar, N. S.; Raman, R.

    AA2014 aluminum alloy, with 4.5% Cu as its major alloying element, offers the highest strength and hardness values in the T6 temper and finds extensive use in aircraft primary structures. However, this alloy is difficult to weld by fusion welding because the dendritic structure formed can seriously degrade weld properties. Among the welding processes, the AC-TIG technique is most widely used. As-welded yield strength was in the range of 190-195 MPa using the conventional TIG technique. The welding metallurgy of AA2014 was critically reviewed and the factors responsible for the lower properties were identified. Square-wave AC TIG with transverse mechanical arc oscillation (TMAO) was postulated to improve the weld strength. Systematic experimentation using 4 mm thick plates achieved yield strengths in the range of 230-240 MPa. Thorough characterization, including optical and SEM/EDX microscopy, was conducted to validate the metallurgical phenomena responsible for the improvement in weld properties.

  6. Cross-stream diffusion under pressure-driven flow in microchannels with arbitrary aspect ratios: a phase diagram study using a three-dimensional analytical model

    PubMed Central

    Song, Hongjun; Wang, Yi; Pant, Kapil

    2011-01-01

    This article presents a three-dimensional analytical model to investigate cross-stream diffusion transport in rectangular microchannels with arbitrary aspect ratios under pressure-driven flow. The Fourier series solution to the three-dimensional convection–diffusion equation is obtained using a double integral transformation method and associated eigensystem calculation. A phase diagram derived from the dimensional analysis is presented to thoroughly interrogate the characteristics in various transport regimes and examine the validity of the model. The analytical model is verified against both experimental and numerical models in terms of the concentration profile, diffusion scaling law, and mixing efficiency with excellent agreement (with <0.5% relative error). Quantitative comparison against other prior analytical models in extensive parameter space is also performed, which demonstrates that the present model accommodates much broader transport regimes with significantly enhanced applicability. PMID:22247719

  7. Cross-stream diffusion under pressure-driven flow in microchannels with arbitrary aspect ratios: a phase diagram study using a three-dimensional analytical model.

    PubMed

    Song, Hongjun; Wang, Yi; Pant, Kapil

    2012-01-01

    This article presents a three-dimensional analytical model to investigate cross-stream diffusion transport in rectangular microchannels with arbitrary aspect ratios under pressure-driven flow. The Fourier series solution to the three-dimensional convection-diffusion equation is obtained using a double integral transformation method and associated eigensystem calculation. A phase diagram derived from the dimensional analysis is presented to thoroughly interrogate the characteristics in various transport regimes and examine the validity of the model. The analytical model is verified against both experimental and numerical models in terms of the concentration profile, diffusion scaling law, and mixing efficiency with excellent agreement (with <0.5% relative error). Quantitative comparison against other prior analytical models in extensive parameter space is also performed, which demonstrates that the present model accommodates much broader transport regimes with significantly enhanced applicability.
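    The cross-stream transport such models capture follows classical diffusion scaling; a minimal sketch of the textbook two-stream, depth-averaged plug-flow limit (not the authors' full 3D Fourier-series solution; the diffusivity, velocity, and geometry below are assumed values):

```python
import math

def interface_concentration(y, x, U, D):
    """Normalized concentration across the interface of two co-flowing
    streams (one carrying c=1, the other c=0) in the 1D plug-flow limit:
    residence time t = x/U, diffusion width ~ 2*sqrt(D*t)."""
    t = x / U
    return 0.5 * math.erfc(y / (2.0 * math.sqrt(D * t)))

# Water-like diffusivity, 1 mm downstream at 1 mm/s mean velocity
D, U, x = 1e-9, 1e-3, 1e-3                            # m^2/s, m/s, m
c_center = interface_concentration(0.0, x, U, D)      # exactly at the interface
c_side = interface_concentration(100e-6, x, U, D)     # 100 um into the clean stream
```

    The full 3D model in the paper corrects this picture near the top and bottom walls, where the slower flow of the pressure-driven (parabolic) profile lengthens the residence time and broadens the diffusion zone.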

  8. Irreproducibility in Preclinical Biomedical Research: Perceptions, Uncertainties, and Knowledge Gaps.

    PubMed

    Jarvis, Michael F; Williams, Michael

    2016-04-01

    Concerns regarding the reliability of biomedical research outcomes were precipitated by two independent reports from the pharmaceutical industry that documented a lack of reproducibility in preclinical research in the areas of oncology, endocrinology, and hematology. Given their potential impact on public health, these concerns have been extensively covered in the media. Assessing the magnitude and scope of irreproducibility is limited by the anecdotal nature of the initial reports and a lack of quantitative data on specific failures to reproduce published research. Nevertheless, remediation activities have focused on needed enhancements in transparency and consistency in the reporting of experimental methodologies and results. While such initiatives can effectively bridge knowledge gaps and facilitate best practices across established and emerging research disciplines and therapeutic areas, concerns remain on how these improve on the historical process of independent replication in validating research findings and their potential to inhibit scientific innovation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. H-Ransac a Hybrid Point Cloud Segmentation Combining 2d and 3d Data

    NASA Astrophysics Data System (ADS)

    Adam, A.; Chatzilari, E.; Nikolopoulos, S.; Kompatsiaris, I.

    2018-05-01

    In this paper, we present a novel 3D segmentation approach operating on point clouds generated from overlapping images. The aim of the proposed hybrid approach is to effectively segment co-planar objects by leveraging the structural information originating from the 3D point cloud and the visual information from the 2D images, without resorting to learning-based procedures. More specifically, the proposed hybrid approach, H-RANSAC, is an extension of the well-known RANSAC plane-fitting algorithm, incorporating an additional consistency criterion based on the results of 2D segmentation. Our expectation that the integration of 2D data into 3D segmentation will achieve more accurate results is validated experimentally in the domain of 3D city models. Results show that H-RANSAC can successfully delineate building components like main facades and windows, and provide more accurate segmentation results compared to the typical RANSAC plane-fitting algorithm.
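    The plane-fitting core that H-RANSAC extends can be sketched generically (a minimal illustration of standard RANSAC plane fitting on 3D points; it omits the authors' 2D-segmentation consistency criterion, and the data are synthetic):

```python
import random

def ransac_plane(points, n_iters=200, threshold=0.05, seed=0):
    """Fit a plane (a, b, c, d) with a*x + b*y + c*z + d = 0 to noisy 3D
    points, returning the model with the largest inlier set."""
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(n_iters):
        p1, p2, p3 = rng.sample(points, 3)
        # Plane normal = (p2 - p1) x (p3 - p1)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
        norm = sum(comp * comp for comp in n) ** 0.5
        if norm < 1e-12:          # degenerate (collinear) sample, skip
            continue
        a, b, c = [comp / norm for comp in n]
        d = -(a*p1[0] + b*p1[1] + c*p1[2])
        # Inliers: points within `threshold` of the candidate plane
        inliers = [p for p in points
                   if abs(a*p[0] + b*p[1] + c*p[2] + d) < threshold]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = (a, b, c, d), inliers
    return best_plane, best_inliers

# Synthetic data: a 10x10 grid on the z=0 plane plus two outliers
pts = [(i * 0.1, j * 0.1, 0.0) for i in range(10) for j in range(10)]
pts += [(0.5, 0.5, 3.0), (0.2, 0.8, -2.0)]
plane, inliers = ransac_plane(pts)
```

    H-RANSAC, as described in the abstract, would additionally reject candidate planes whose inliers straddle different 2D-segmented regions in the source images.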

  10. Readiness of the ATLAS Tile Calorimeter for LHC collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    The Tile hadronic calorimeter of the ATLAS detector has undergone extensive testing in the experimental hall since its installation in late 2005. The readout, control and calibration systems have been fully operational since 2007, and the detector has successfully collected data from the LHC single beams in 2008 and first collisions in 2009. This paper gives an overview of the Tile Calorimeter performance as measured using random triggers, calibration data, data from cosmic ray muons and single beam data. The detector operation status, noise characteristics and performance of the calibration systems are presented, as well as the validation of the timing and energy calibration carried out with minimum-ionising cosmic ray muon data. The calibration systems' precision is well below the design value of 1%. The determination of the global energy scale was performed with an uncertainty of 4%. © 2010 CERN for the benefit of the ATLAS collaboration.

  11. Readiness of the ATLAS Tile Calorimeter for LHC collisions

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2010-12-08

    The Tile hadronic calorimeter of the ATLAS detector has undergone extensive testing in the experimental hall since its installation in late 2005. The readout, control and calibration systems have been fully operational since 2007, and the detector has successfully collected data from the LHC single beams in 2008 and first collisions in 2009. This paper gives an overview of the Tile Calorimeter performance as measured using random triggers, calibration data, data from cosmic ray muons and single beam data. The detector operation status, noise characteristics and performance of the calibration systems are presented, as well as the validation of the timing and energy calibration carried out with minimum-ionising cosmic ray muon data. The calibration systems' precision is well below the design value of 1%. The determination of the global energy scale was performed with an uncertainty of 4%. © 2010 CERN for the benefit of the ATLAS collaboration.

  12. Validation of the thermal challenge problem using Bayesian Belief Networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McFarland, John; Swiler, Laura Painton

    The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, in physical quantities, and in the model itself. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes' factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for "solving" the BBN to develop the posterior distribution of model output through Markov chain Monte Carlo sampling is discussed in detail. The use of the BBN to compute a Bayes' factor is demonstrated.
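    A Bayes' factor of the kind used as a validation metric can be illustrated in a few lines (a toy Monte Carlo sketch comparing two hypothetical models of a measured quantity, not the report's BBN machinery; all numbers below are invented):

```python
import math
import random

def marginal_likelihood(data, prior_samples, sigma):
    """Monte Carlo estimate of p(data | model): the Gaussian likelihood of
    the data, averaged over parameter samples drawn from the model's prior."""
    def likelihood(mu):
        return math.prod(
            math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
            for x in data)
    return sum(likelihood(mu) for mu in prior_samples) / len(prior_samples)

rng = random.Random(1)
data = [1.9, 2.1, 2.0, 1.8, 2.2]                       # hypothetical measurements
m1_prior = [rng.gauss(2.0, 0.5) for _ in range(5000)]  # model 1: mean near 2
m2_prior = [rng.gauss(0.0, 0.5) for _ in range(5000)]  # model 2: mean near 0
bayes_factor = (marginal_likelihood(data, m1_prior, 0.2)
                / marginal_likelihood(data, m2_prior, 0.2))
# bayes_factor >> 1: the data strongly favour model 1
```

    The report's methodology differs in that the posterior over model output comes from a BBN solved by Markov chain Monte Carlo sampling, but the validation decision still rests on a marginal-likelihood ratio of this form.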

  13. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    DOE PAGES

    Lockhart, M.; Henzlova, D.; Croft, S.; ...

    2017-09-20

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead-time correct higher order count rates (i.e., quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations, which have also only recently been formulated. Here, we discuss and present the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at Los Alamos National Laboratory (LANL), and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates encountered in practical applications.

  14. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lockhart, M.; Henzlova, D.; Croft, S.

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead-time correct higher order count rates (i.e., quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations, which have also only recently been formulated. Here, we discuss and present the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at Los Alamos National Laboratory (LANL), and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates encountered in practical applications.

  15. Engineering optical properties using plasmonic nanostructures

    NASA Astrophysics Data System (ADS)

    Tamma, Venkata Ananth

    Plasmonic nanostructures can be engineered to take on unusual optical properties not found in natural materials. The optical responses of plasmonic materials are functions of the structural parameters and symmetry of the nanostructures, the material parameters of the nanostructure and its surroundings, and the incidence angle, frequency and polarization state of light. The scattering, and hence the visibility, of an object can be reduced by coating it with a plasmonic material. This thesis presents an optical-frequency scattering cancelation device composed of a silicon nanorod coated by a plasmonic gold nanostructure. The principle of operation was theoretically analyzed using Mie theory, and the device design was verified by extensive numerical simulations. The device was fabricated using a combination of nanofabrication techniques such as electron beam lithography and focused ion beam milling. The optical responses of the scattering cancelation device and a control sample of bare silicon rod were directly visualized using near-field microscopy coupled with heterodyne interferometric detection. The experimental results were analyzed and found to match very well with theoretical predictions from numerical simulations, thereby validating the design principles and our implementation. Plasmonic nanostructures can also be engineered to exhibit unique optical properties such as Fano resonance, characterized by a narrow asymmetrical lineshape. We present dynamic tuning and symmetry lowering of Fano resonances in plasmonic nanostructures fabricated on flexible substrates. The tuning of the Fano resonance was achieved by application of uniaxial mechanical stress. The design of the nanostructures was facilitated by extensive numerical simulations, and the symmetry lowering was analyzed using group theoretical methods. The nanostructures were fabricated using electron beam lithography and optically characterized under various levels of mechanical stress. The experimental results were in good agreement with the numerical simulations. The mechanically tunable plasmonic nanostructure could serve as a platform for dynamically tunable nanophotonic devices such as sensors and tunable filters.

  16. Refolding dynamics of stretched biopolymers upon force quench

    PubMed Central

    Hyeon, Changbong; Morrison, Greg; Pincus, David L.; Thirumalai, D.

    2009-01-01

    Single-molecule force spectroscopy methods can be used to generate folding trajectories of biopolymers from arbitrary regions of the folding landscape. We illustrate the complexity of the folding kinetics and generic aspects of the collapse of RNA and proteins upon force quench by using simulations of an RNA hairpin and theory based on the de Gennes model for homopolymer collapse. The folding time, τ_F, depends asymmetrically on δf_S = f_S − f_m and δf_Q = f_m − f_Q, where f_S (f_Q) is the stretch (quench) force and f_m is the transition midforce of the RNA hairpin. In accord with experiments, the relaxation kinetics of the molecular extension, R(t), occurs in three stages: a rapid initial decrease in the extension is followed by a plateau and, finally, an abrupt reduction in R(t) as the native state is approached. The duration of the plateau increases as λ = τ_Q/τ_F decreases (where τ_Q is the time in which the force is reduced from f_S to f_Q). Variations in the mechanisms of force-quench relaxation as λ is altered are reflected in the experimentally measurable time-dependent entropy, which is computed directly from the folding trajectories. An analytical solution of the de Gennes model under tension reproduces the multistage kinetics in R(t). The prediction that the initial stages of collapse should also be a generic feature of polymers is validated by simulation of the kinetics of toroid (globule) formation in semiflexible (flexible) homopolymers in poor solvents upon quenching the force from a fully stretched state. Our findings give a unified explanation for multiple disparate experimental observations of protein folding. PMID:19915145

  17. Twitter Chats: Connect, Foster, and Engage Internal Extension Networks

    ERIC Educational Resources Information Center

    Seger, Jamie; Hill, Paul; Stafne, Eric; Swadley, Emy

    2017-01-01

    The eXtension Educational Technology Learning Network (EdTechLN) has found Twitter to be an effective form of informal communication for routinely engaging network members. Twitter chats provide Extension professionals an opportunity to reach and engage one another. As the EdTechLN's experimentation with Twitter chats has demonstrated, the use of…

  18. Creep of plain weave polymer matrix composites

    NASA Astrophysics Data System (ADS)

    Gupta, Abhishek

    Polymer matrix composites are increasingly used in various industrial sectors to reduce structural weight and improve performance. Woven (also known as textile) composites are one class of polymer matrix composites with increasing market share, mostly due to their light weight, their flexibility to form into desired shapes, their mechanical properties, and their toughness. Due to the viscoelasticity of the polymer matrix, time-dependent degradation in modulus (creep) and strength (creep rupture) are two of the major mechanical properties that engineers require to design a structure reliably when using these materials. Unfortunately, creep and creep rupture of woven composites have received little attention from the research community, and thus there is a dire need to generate additional knowledge and prediction models, given the increasing market share of woven composites in load-bearing structural applications. Currently available creep models are limited in scope and have not been validated for arbitrary loading orientations or for time periods beyond the experimental time window. In this thesis, an analytical creep model, namely the Modified Equivalent Laminate Model (MELM), was developed to predict tensile creep of plain weave composites for any orientation of the load with respect to the orientation of the fill and warp fibers, using creep of unidirectional composites. The ability of the model to predict creep for any orientation of the load is a "first" in this area. The model was validated using an extensive experimental program involving the tensile creep of plain weave composites under varying loading orientations and service conditions. A plain weave epoxy (F263)/carbon fiber (T300) composite, currently used in aerospace applications, was procured as fabric from Hexcel Corporation. Creep tests were conducted under two loading conditions: on-axis loading (0°) and off-axis loading (45°).
Constant-load creep, in the temperature range of 80-240°C and stress range of 1-70% of the UTS of the composites, was experimentally evaluated for time periods ranging from 1 to 120 hours under both loading conditions. The composite showed increased creep with increasing temperature and stress. Creep also increased with the angle of loading, from 1% under on-axis loading to 31% under off-axis loading, within the tested time window. The experimental creep data for the plain weave composites were superposed using the Time-Temperature Superposition Principle (TTSP) to obtain a master curve extending to several years, which was compared with model predictions to validate the model. The experimental and model results were found to be in good agreement, within an error range of +/-1-3%, under both loading conditions. A parametric study was also conducted to understand the effect of the microstructure of plain weave composites on their on-axis and off-axis creep. The knowledge generated in this area is also a "first". Additionally, this thesis generated knowledge on time-dependent damage in woven composites and its effect on creep and tensile properties and their prediction.
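The time-temperature superposition step described in this record is a standard technique: isothermal creep curves are shifted along the log-time axis to a reference temperature to build a master curve spanning far longer times than any single test. A minimal, hedged Python sketch with a hypothetical Arrhenius shift factor and synthetic compliance data (not the thesis's measurements or material constants):

```python
import math

def shift_factor(T, T_ref, activation_energy=90e3, R=8.314):
    """Arrhenius horizontal shift factor log10(a_T) for temperature T in kelvin.
    The activation energy here is an illustrative placeholder value."""
    return (activation_energy / (2.303 * R)) * (1.0 / T - 1.0 / T_ref)

def build_master_curve(curves, T_ref):
    """Shift each isothermal creep curve along log-time to the reference
    temperature and merge the points into a single master curve.

    curves: {T_kelvin: [(time_hours, compliance), ...]}
    Returns a list of (reduced_time_hours, compliance) sorted by reduced time.
    """
    master = []
    for T, points in curves.items():
        log_aT = shift_factor(T, T_ref)          # negative for T > T_ref
        for t, J in points:
            # reduced time t / a_T maps hot, fast-creeping data to long times
            master.append((t * 10.0 ** (-log_aT), J))
    return sorted(master)

# Synthetic isothermal creep-compliance data at three temperatures (illustrative only)
data = {
    353.0: [(1.0, 1.0), (10.0, 1.2), (100.0, 1.5)],
    393.0: [(1.0, 1.3), (10.0, 1.7), (100.0, 2.2)],
    433.0: [(1.0, 1.9), (10.0, 2.6), (100.0, 3.5)],
}
master = build_master_curve(data, T_ref=353.0)
print(f"master curve spans {master[0][0]:.2g} to {master[-1][0]:.2g} reduced hours")
```

With these placeholder numbers, 100-hour tests at the hottest temperature map to tens of thousands of reduced hours, which is how short tests can be compared against multi-year model predictions.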

  19. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification

    PubMed Central

    Wen, Tingxi; Zhang, Zhongnan

    2017-01-01

    Abstract In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789

  20. Vibration control of beams using constrained layer damping with functionally graded viscoelastic cores: theory and experiments

    NASA Astrophysics Data System (ADS)

    El-Sabbagh, A.; Baz, A.

    2006-03-01

    Conventionally, the viscoelastic cores of Constrained Layer Damping (CLD) treatments are made of materials that have a uniform shear modulus. Under such conditions, it is well recognized that these treatments are only effective near their edges, where the shear strains attain their highest values. In order to enhance the damping characteristics of CLD treatments, we propose to manufacture the cores from Functionally Graded ViscoElastic Materials (FGVEM) that have an optimally selected gradient of the shear modulus over the length of the treatment. With such an optimized distribution of the shear modulus, the shear strain can be enhanced and the energy dissipation maximized. The theory governing the vibration of beams treated with CLD that has functionally graded viscoelastic cores is presented using the finite element method (FEM). The predictions of the FEM are validated experimentally for plain beams, beams treated with conventional CLD, and beams with CLD/FGVEM of different configurations. The obtained results indicate close agreement between theory and experiments. Furthermore, they demonstrate the effectiveness of the new class of CLD with functionally graded cores in enhancing energy dissipation over conventional CLD across a broad frequency band. Extension of the proposed one-dimensional beam/CLD/FGVEM system to more complex structures is a natural extension of the present study.

  1. Simulation of orientational coherent effects via Geant4

    NASA Astrophysics Data System (ADS)

    Bagli, E.; Asai, M.; Brandt, D.; Dotti, A.; Guidi, V.; Verderi, M.; Wright, D.

    2017-10-01

    Beam manipulation of high- and very-high-energy particle beams is a hot topic in accelerator physics. Coherent effects of ultra-relativistic particles in bent crystals allow the steering of particle trajectories thanks to the strong electrical field generated between atomic planes. Recently, a collimation experiment with bent crystals was carried out at the CERN-LHC, paving the way to the usage of such technology in current and future accelerators. Geant4 is a widely used object-oriented toolkit for the Monte Carlo simulation of the interaction of particles with matter in high-energy physics. Moreover, its areas of application also include nuclear and accelerator physics, as well as studies in medical and space science. We present the first Geant4 extension for the simulation of orientational effects in straight and bent crystals for high-energy charged particles. The model allows the manipulation of particle trajectories by means of straight and bent crystals and the scaling of the cross sections of hadronic and electromagnetic processes for channeled particles. Based on such a model, an extension of the Geant4 toolkit has been developed. The code and the model have been validated by comparison with published experimental data regarding the deflection efficiency via channeling and the variation of the rate of inelastic nuclear interactions.

  2. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification.

    PubMed

    Wen, Tingxi; Zhang, Zhongnan

    2017-05-01

    In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.
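The cross-validation protocol this record relies on is standard and independent of the GAFDS method itself. As a generic, self-contained illustration, the following sketch runs k-fold cross-validation over a toy 1-nearest-neighbor classifier on synthetic two-class feature vectors; it is not the authors' EEG pipeline, and all data and parameters here are invented for illustration:

```python
import random

def knn_predict(train, x, k=1):
    """Predict a label for feature vector x by majority vote among the
    k nearest training points (squared Euclidean distance)."""
    neighbors = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    labels = [lab for _, lab in neighbors]
    return max(set(labels), key=labels.count)

def cross_validate(data, folds=5, k=1):
    """Return the mean accuracy of k-NN over `folds`-fold cross-validation."""
    data = data[:]
    random.Random(0).shuffle(data)          # fixed seed for reproducibility
    fold_size = len(data) // folds
    accuracies = []
    for i in range(folds):
        test = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        correct = sum(knn_predict(train, x, k) == y for x, y in test)
        accuracies.append(correct / len(test))
    return sum(accuracies) / folds

# Synthetic, well-separated two-class feature vectors (illustrative only)
rng = random.Random(1)
data = [((rng.gauss(0, 0.3), rng.gauss(0, 0.3)), 0) for _ in range(50)] + \
       [((rng.gauss(2, 0.3), rng.gauss(2, 0.3)), 1) for _ in range(50)]
print(f"5-fold CV accuracy: {cross_validate(data):.2f}")
```

In practice one would substitute the extracted frequency-domain and nonlinear features for the synthetic vectors and swap in the classifiers listed above; the fold structure is unchanged.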

  3. A model for growth of a single fungal hypha based on well-mixed tanks in series: simulation of nutrient and vesicle transport in aerial reproductive hyphae.

    PubMed

    Balmant, Wellington; Sugai-Guérios, Maura Harumi; Coradin, Juliana Hey; Krieger, Nadia; Furigo Junior, Agenor; Mitchell, David Alexander

    2015-01-01

    Current models that describe the extension of fungal hyphae and development of a mycelium either do not describe the role of vesicles in hyphal extension or do not correctly describe the experimentally observed profile for distribution of vesicles along the hypha. The present work uses the n-tanks-in-series approach to develop a model for hyphal extension that describes the intracellular transport of nutrient to a sub-apical zone where vesicles are formed and then transported to the tip, where tip extension occurs. The model was calibrated using experimental data from the literature for the extension of reproductive aerial hyphae of three different fungi, and was able to describe different profiles involving acceleration and deceleration of the extension rate. A sensitivity analysis showed that the supply of nutrient to the sub-apical vesicle-producing zone is a key factor influencing the rate of extension of the hypha. Although this model was used to describe the extension of a single reproductive aerial hypha, the use of the n-tanks-in-series approach to representing the hypha means that the model has the flexibility to be extended to describe the growth of other types of hyphae and the branching of hyphae to form a complete mycelium.
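The n-tanks-in-series idea underlying this model is a generic chemical-engineering construction: the hypha is discretized into a chain of well-mixed compartments, with first-order transfer between neighbors. A minimal forward-Euler sketch of nutrient transport along such a chain (hypothetical rate constant and feed, not the authors' calibrated parameters):

```python
def tanks_in_series(n_tanks, feed, k, dt, steps):
    """Integrate dC_i/dt = k * (C_{i-1} - C_i) for a chain of well-mixed tanks,
    with tank 0 fed at constant concentration `feed` (forward Euler)."""
    C = [0.0] * n_tanks
    for _ in range(steps):
        new = C[:]
        new[0] += dt * k * (feed - C[0])              # supply at the base
        for i in range(1, n_tanks):
            new[i] += dt * k * (C[i - 1] - C[i])      # transfer toward the tip
        C = new
    return C

# Nutrient front propagating from the base (tank 0) toward the tip (last tank)
profile = tanks_in_series(n_tanks=10, feed=1.0, k=1.0, dt=0.01, steps=2000)
print(["%.3f" % c for c in profile])
```

During the transient the concentration profile decreases monotonically from base to tip, which is the qualitative behavior that lets the sub-apical, vesicle-producing zone see a delayed nutrient supply; the published model adds vesicle formation and tip extension on top of this transport backbone.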

  4. Validation of the Soil Moisture Active Passive mission using USDA-ARS experimental watersheds

    USDA-ARS?s Scientific Manuscript database

    The calibration and validation program of the Soil Moisture Active Passive mission (SMAP) relies upon an international cooperative of in situ networks to provide ground truth references across a variety of landscapes. The USDA Agricultural Research Service operates several experimental watersheds wh...

  5. Scanned carbon beam irradiation of moving films: comparison of measured and calculated response

    PubMed Central

    2012-01-01

    Background Treatment of moving target volumes with scanned particle beams benefits from treatment planning that includes the time domain (4D). Part of 4D treatment planning is calculation of the expected result. These calculation codes should be verified against suitable measurements. We performed simulations and measurements to validate calculation of the film response in the presence of target motion. Methods All calculations were performed with GSI's treatment planning system TRiP. Interplay patterns between scanned particle beams and moving film detectors are very sensitive to slight deviations of the assumed motion parameters and therefore ideally suited to validate 4D calculations. In total, 14 film motion parameter combinations with lateral motion amplitudes of 8, 15, and 20 mm and 4 combinations for lateral motion including range changes were used. Experimental and calculated film responses were compared by relative difference, mean deviation in two regions-of-interest, as well as line profiles. Results Irradiations of stationary films resulted in a mean relative difference of -1.52% ± 2.06% of measured and calculated responses. In comparison to this reference result, measurements with translational film motion resulted in a mean difference of -0.92% ± 1.30%. In case of irradiations incorporating range changes with a stack of 5 films as detector the deviations increased to -6.4 ± 2.6% (-10.3 ± 9.0% if film in distal fall-off is included) in comparison to -3.6% ± 2.5% (-13.5% ± 19.9% including the distal film) for the stationary irradiation. Furthermore, the comparison of line profiles of 4D calculations and experimental data showed only slight deviations at the borders of the irradiated area. The comparisons of pure lateral motion were used to determine the number of motion states that are required for 4D calculations depending on the motion amplitude. 
6 motion states per 10 mm motion amplitude are sufficient to calculate the film response in the presence of motion. Conclusions By comparison to experimental data, the 4D extension of GSI's treatment planning system TRiP has been successfully validated for film response calculations in the presence of target motion within the accuracy limitation given by film-based dosimetry. PMID:22462523

  6. MRI-based modeling for radiocarpal joint mechanics: validation criteria and results for four specimen-specific models.

    PubMed

    Fischer, Kenneth J; Johnson, Joshua E; Waller, Alexander J; McIff, Terence E; Toby, E Bruce; Bilgen, Mehmet

    2011-10-01

    The objective of this study was to validate the MRI-based joint contact modeling methodology in the radiocarpal joints by comparison of model results with invasive specimen-specific radiocarpal contact measurements from four cadaver experiments. We used a single validation criterion for multiple outcome measures to characterize the utility and overall validity of the modeling approach. For each experiment, a Pressurex film and a Tekscan sensor were sequentially placed into the radiocarpal joints during simulated grasp. Computer models were constructed based on MRI visualization of the cadaver specimens without load. Images were also acquired during the loaded configuration used with the direct experimental measurements. Geometric surface models of the radius, scaphoid and lunate (including cartilage) were constructed from the images acquired without the load. The carpal bone motions from the unloaded state to the loaded state were determined using a series of 3D image registrations. Cartilage thickness was assumed uniform at 1.0 mm with an effective compressive modulus of 4 MPa. Validation was based on experimental versus model contact area, contact force, average contact pressure and peak contact pressure for the radioscaphoid and radiolunate articulations. Contact area was also measured directly from images acquired under load and compared to the experimental and model data. Qualitatively, there was good correspondence between the MRI-based model data and experimental data, with consistent relative size, shape and location of radioscaphoid and radiolunate contact regions. Quantitative data from the model generally compared well with the experimental data for all specimens. Contact area from the MRI-based model was very similar to the contact area measured directly from the images. 
For all outcome measures except average and peak pressures, at least two specimen models met the validation criteria with respect to experimental measurements for both articulations. Only the model for one specimen met the validation criteria for average and peak pressure of both articulations; however, the experimental measures of peak pressure also exhibited high variability. MRI-based modeling can reliably be used for evaluating contact area and contact force with confidence similar to that of currently available experimental techniques. Average and peak contact pressures were more variable across all measurement techniques, and these measures from MRI-based modeling should be used with some caution.

  7. Experimental Validation of ARFI Surveillance of Subcutaneous Hemorrhage (ASSH) Using Calibrated Infusions in a Tissue-Mimicking Model and Dogs.

    PubMed

    Geist, Rebecca E; DuBois, Chase H; Nichols, Timothy C; Caughey, Melissa C; Merricks, Elizabeth P; Raymer, Robin; Gallippi, Caterina M

    2016-09-01

    Acoustic radiation force impulse (ARFI) Surveillance of Subcutaneous Hemorrhage (ASSH) has been previously demonstrated to differentiate bleeding phenotype and responses to therapy in dogs and humans, but to date, the method has lacked experimental validation. This work explores experimental validation of ASSH in a poroelastic tissue-mimic and in vivo in dogs. The experimental design exploits calibrated flow rates and infusion durations of evaporated milk in tofu or heparinized autologous blood in dogs. The validation approach enables controlled comparisons of ASSH-derived bleeding rate (BR) and time to hemostasis (TTH) metrics. In tissue-mimicking experiments, halving the calibrated flow rate yielded ASSH-derived BRs that decreased by 44% to 48%. Furthermore, for calibrated flow durations of 5.0 minutes and 7.0 minutes, average ASSH-derived TTH was 5.2 minutes and 7.0 minutes, respectively, with ASSH predicting the correct TTH in 78% of trials. In dogs undergoing calibrated autologous blood infusion, ASSH measured a 3-minute increase in TTH, corresponding to the same increase in the calibrated flow duration. For a measured 5% decrease in autologous infusion flow rate, ASSH detected a 7% decrease in BR. These tissue-mimicking and in vivo preclinical experimental validation studies suggest the ASSH BR and TTH measures reflect bleeding dynamics. © The Author(s) 2015.

  8. COMBINING LIDAR ESTIMATES OF BIOMASS AND LANDSAT ESTIMATES OF STAND AGE FOR SPATIALLY EXTENSIVE VALIDATION OF MODELED FOREST PRODUCTIVITY. (R828309)

    EPA Science Inventory

    Extensive estimates of forest productivity are required to understand the relationships between shifting land use, changing climate, and carbon storage and fluxes. Aboveground net primary production of wood (NPPAw) is a major component of total NPP and...

  9. An Exploration of Participative Motivations in a Community-Based Online English Extensive Reading Contest with Respect to Gender Difference

    ERIC Educational Resources Information Center

    Liu, I-Fan; Young, Shelley S. -C.

    2017-01-01

    The purpose of this study is to describe an online community-based English extensive reading contest to investigate whether the participants' intrinsic, extrinsic, and interpersonal motivations and learning results show significant gender differences. A total of 501 valid questionnaires (285 females and 216 males) from Taiwanese high school…

  10. Evaluating the Complementary Roles of an SJT and Academic Assessment for Entry into Clinical Practice

    ERIC Educational Resources Information Center

    Cousans, Fran; Patterson, Fiona; Edwards, Helena; Walker, Kim; McLachlan, John C.; Good, David

    2017-01-01

    Although there is extensive evidence confirming the predictive validity of situational judgement tests (SJTs) in medical education, there remains a shortage of evidence for their predictive validity for performance of postgraduate trainees in their first role in clinical practice. Moreover, to date few researchers have empirically examined the…

  11. Two-Method Planned Missing Designs for Longitudinal Research

    ERIC Educational Resources Information Center

    Garnier-Villarreal, Mauricio; Rhemtulla, Mijke; Little, Todd D.

    2014-01-01

    We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a "gold standard" that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based)…

  12. Perception of Competence in Middle School Physical Education: Instrument Development and Validation

    ERIC Educational Resources Information Center

    Scrabis-Fletcher, Kristin; Silverman, Stephen

    2010-01-01

    Perception of Competence (POC) has been studied extensively in physical activity (PA) research with similar instruments adapted for physical education (PE) research. Such instruments do not account for the unique PE learning environment. Therefore, an instrument was developed and the scores validated to measure POC in middle school PE. A…

  13. Sorbent, Sublimation, and Icing Modeling Methods: Experimental Validation and Application to an Integrated MTSA Subassembly Thermal Model

    NASA Technical Reports Server (NTRS)

    Bower, Chad; Padilla, Sebastian; Iacomini, Christie; Paul, Heather L.

    2010-01-01

    This paper details the validation of modeling methods for the three core components of a Metabolic heat regenerated Temperature Swing Adsorption (MTSA) subassembly, developed for use in a Portable Life Support System (PLSS). The first core component in the subassembly is a sorbent bed, used to capture and reject metabolically produced carbon dioxide (CO2). The sorbent bed performance can be augmented with a temperature swing driven by a liquid CO2 (LCO2) sublimation heat exchanger (SHX) for cooling the sorbent bed, and a condensing, icing heat exchanger (CIHX) for warming the sorbent bed. As part of the overall MTSA effort, scaled design validation test articles for each of these three components have been independently tested in laboratory conditions. Previously described modeling methodologies developed for implementation in Thermal Desktop and SINDA/FLUINT are reviewed and updated, their application in test article models outlined, and the results of those model correlations relayed. Assessment of the applicability of each modeling methodology to the challenge of simulating the response of the test articles and their extensibility to a full scale integrated subassembly model is given. The independent verified and validated modeling methods are applied to the development of a MTSA subassembly prototype model and predictions of the subassembly performance are given. These models and modeling methodologies capture simulation of several challenging and novel physical phenomena in the Thermal Desktop and SINDA/FLUINT software suite. Novel methodologies include CO2 adsorption front tracking and associated thermal response in the sorbent bed, heat transfer associated with sublimation of entrained solid CO2 in the SHX, and water mass transfer in the form of ice as low as 210 K in the CIHX.

  14. Crystal Plasticity Model Validation Using Combined High-Energy Diffraction Microscopy Data for a Ti-7Al Specimen

    DOE PAGES

    Turner, Todd J.; Shade, Paul A.; Bernier, Joel V.; ...

    2016-11-18

    High-Energy Diffraction Microscopy (HEDM) is a 3-d x-ray characterization method that is uniquely suited to measuring the evolving micromechanical state and microstructure of polycrystalline materials during in situ processing. The near-field and far-field configurations provide complementary information; orientation maps computed from the near-field measurements provide grain morphologies, while the high angular resolution of the far-field measurements provides intergranular strain tensors. The ability to measure these data during deformation in situ makes HEDM an ideal tool for validating micro-mechanical deformation models that make their predictions at the scale of individual grains. Crystal Plasticity Finite Element Models (CPFEM) are one such class of micro-mechanical models. While there have been extensive studies validating homogenized CPFEM response at a macroscopic level, a lack of detailed data measured at the level of the microstructure has hindered more stringent model validation efforts. We utilize an HEDM dataset from an alpha-titanium alloy (Ti-7Al), collected at the Advanced Photon Source, Argonne National Laboratory, under in situ tensile deformation. The initial microstructure of the central slab of the gage section, measured via near-field HEDM, is used to inform a CPFEM model. The predicted intergranular stresses for 39 internal grains are then directly compared to data from 4 far-field measurements taken between ~4% and ~80% of the macroscopic yield strength. In conclusion, the intergranular stresses from the CPFEM model and far-field HEDM measurements up to incipient yield are shown to be in good agreement, and the implications of applying such an integrated computational/experimental approach to phenomena such as fatigue and crack propagation are discussed.

  15. Patterns of shading tolerance determined from experimental light reduction studies of seagrasses

    EPA Science Inventory

    An extensive review of the experimental literature on seagrass shading evaluated the relationship between experimental light reductions, duration of experiment and seagrass response metrics to determine whether there were consistent statistical patterns. There were highly signif...

  16. Further Validation of a CFD Code for Calculating the Performance of Two-Stage Light Gas Guns

    NASA Technical Reports Server (NTRS)

    Bogdanoff, David W.

    2017-01-01

    Earlier validations of a higher-order Godunov code for modeling the performance of two-stage light gas guns are reviewed. These validation comparisons were made between code predictions and experimental data from the NASA Ames 1.5" and 0.28" guns and covered muzzle velocities of 6.5 to 7.2 km/s. In the present report, five more series of code validation comparisons, involving experimental data from the Ames 0.22" (1.28" pump tube diameter), 0.28", 0.50", 1.00" and 1.50" guns, are presented. The total muzzle velocity range of the validation data presented herein is 3 to 11.3 km/s. The agreement between the experimental data and CFD results is judged to be very good. Muzzle velocities were predicted within 0.35 km/s for 74% of the cases studied; the maximum difference was 0.5 km/s, except for 4 of the 50 cases, where it was 0.5-0.7 km/s.

  17. Knowledge Based Systems (KBS) Verification, Validation, Evaluation, and Testing (VVE&T) Bibliography: Topical Categorization

    DTIC Science & Technology

    2003-03-01

    Different?," Jour. of Experimental & Theoretical Artificial Intelligence, Special Issue on AI for Systems Validation and Verification, 12(4), 2000, pp... Hamilton, D., "Experiences in Improving the State of Practice in Verification and Validation of Knowledge-Based Systems," Workshop Notes of the AAAI... Unsuspected Power of the Standard Turing Test," Jour. of Experimental & Theoretical Artificial Intelligence, 12, 2000, pp. 331-340. [30] Gaschnig

  18. Vivaldi: visualization and validation of biomacromolecular NMR structures from the PDB.

    PubMed

    Hendrickx, Pieter M S; Gutmanas, Aleksandras; Kleywegt, Gerard J

    2013-04-01

    We describe Vivaldi (VIsualization and VALidation DIsplay; http://pdbe.org/vivaldi), a web-based service for the analysis, visualization, and validation of NMR structures in the Protein Data Bank (PDB). Vivaldi provides access to model coordinates and several types of experimental NMR data using interactive visualization tools, augmented with structural annotations and model-validation information. The service presents information about the modeled NMR ensemble, validation of experimental chemical shifts, residual dipolar couplings, distance and dihedral angle constraints, as well as validation scores based on empirical knowledge and databases. Vivaldi was designed for both expert NMR spectroscopists and casual non-expert users who wish to obtain a better grasp of the information content and quality of NMR structures in the public archive. Copyright © 2013 Wiley Periodicals, Inc.

  19. Asymmetry in scientific method and limits to cross-disciplinary dialogue: toward a shared language and science policy in pharmacogenomics and human disease genetics.

    PubMed

    Ozdemir, Vural; Williams-Jones, Bryn; Graham, Janice E; Preskorn, Sheldon H; Gripeos, Dimitrios; Glatt, Stephen J; Friis, Robert H; Reist, Christopher; Szabo, Sandor; Lohr, James B; Someya, Toshiyuki

    2007-04-01

    Pharmacogenomics is a hybrid field of experimental science at the intersection of human disease genetics and clinical pharmacology sharing applications of the new genomic technologies. But this hybrid field is not yet stable or fully integrated, nor is science policy in pharmacogenomics fully equipped to resolve the challenges of this emerging hybrid field. The disciplines of human disease genetics and clinical pharmacology contain significant differences in their scientific practices. Whereas clinical pharmacology originates as an experimental science, human disease genetics is primarily observational in nature. The result is a significant asymmetry in scientific method that can differentially impact the degree to which gene-environment interactions are discerned and, by extension, the study sample size required in each discipline. Because the number of subjects enrolled in observational genetic studies of diseases is characteristically viewed as an important criterion of scientific validity and reliability, failure to recognize discipline-specific requirements for sample size may lead to inappropriate dismissal or silencing of meritorious, although smaller-scale, craft-based pharmacogenomic investigations using an experimental study design. Importantly, the recognition that pharmacogenomics is an experimental science creates an avenue for systematic policy response to the ethical imperative to prospectively pursue genetically customized therapies before regulatory approval of pharmaceuticals. To this end, we discuss the critical role of interdisciplinary engagement between medical sciences, policy, and social science. We emphasize the need for development of shared standards across scientific, methodologic, and socioethical epistemologic divides in the hybrid field of pharmacogenomics to best serve the interests of public health.

  20. NEDE: an open-source scripting suite for developing experiments in 3D virtual environments.

    PubMed

    Jangraw, David C; Johri, Ansh; Gribetz, Meron; Sajda, Paul

    2014-09-30

    As neuroscientists endeavor to understand the brain's response to ecologically valid scenarios, many are leaving behind hyper-controlled paradigms in favor of more realistic ones. This movement has made the use of 3D rendering software an increasingly compelling option. However, mastering such software and scripting rigorous experiments requires a daunting amount of time and effort. To reduce these startup costs and make virtual environment studies more accessible to researchers, we demonstrate a naturalistic experimental design environment (NEDE) that allows experimenters to present realistic virtual stimuli while still providing tight control over the subject's experience. NEDE is a suite of open-source scripts built on the widely used Unity3D game development software, giving experimenters access to powerful rendering tools while interfacing with eye tracking and EEG, randomizing stimuli, and providing custom task prompts. Researchers using NEDE can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with a research-grade eye tracker in real-time to maintain precise timing records and sync with EEG or other recording modalities. Python offers an alternative for experienced programmers who feel comfortable mastering and integrating the various toolboxes available. NEDE combines many of these capabilities with an easy-to-use interface and, through Unity's extensive user base, a much more substantial body of assets and tutorials. Our flexible, open-source experimental design system lowers the barrier to entry for neuroscientists interested in developing experiments in realistic virtual environments. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Dietary Intake Following Experimentally Restricted Sleep in Adolescents

    PubMed Central

    Beebe, Dean W.; Simon, Stacey; Summer, Suzanne; Hemmer, Stephanie; Strotman, Daniel; Dolan, Lawrence M.

    2013-01-01

    Study Objective: To examine the relationship between sleep and dietary intake in adolescents using an experimental sleep restriction protocol. Design: Randomized crossover sleep restriction-extension paradigm. Setting: Sleep obtained and monitored at home, diet measured during an office visit. Participants: Forty-one typically developing adolescents age 14-16 years. Interventions: The 3-week protocol consisted of a baseline week designed to stabilize the circadian rhythm, followed in random order by 5 consecutive nights of sleep restriction (6.5 hours in bed Monday-Friday) versus healthy sleep duration (10 hours in bed), a 2-night washout period, and a 5-night crossover period. Measurements: Sleep was monitored via actigraphy, and teens completed validated 24-hour diet recall interviews following each experimental condition. Results: Paired-sample t-tests examined differences between conditions for consumption of key macronutrients and choices from dietary categories. Compared with the healthy sleep condition, sleep-restricted adolescents' diets were characterized by higher glycemic index and glycemic load and a trend toward more calories and carbohydrates, with no differences in fat or protein consumption. Exploratory analyses revealed the consumption of significantly more desserts and sweets during sleep restriction than during healthy sleep. Conclusions: Chronic sleep restriction during adolescence appears to cause increased consumption of foods with a high glycemic index, particularly desserts/sweets. The chronic sleep restriction common in adolescence may cause changes in dietary behaviors that increase the risk of obesity and associated morbidity. Citation: Beebe DW; Simon S; Summer S; Hemmer S; Strotman D; Dolan LM. Dietary intake following experimentally restricted sleep in adolescents. SLEEP 2013;36(6):827-834. PMID:23729925
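The paired-sample t-tests described above compare each adolescent's intake under the two sleep conditions. A minimal sketch of that statistic, using made-up glycemic-index values rather than the study's data:

```python
import math

def paired_t(x, y):
    """Paired-sample t statistic for two within-subject conditions.

    x, y: per-subject measurements under each condition (equal length).
    Returns (t, degrees of freedom).
    """
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((di - mean_d) ** 2 for di in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n), n - 1

# Hypothetical dietary glycemic-index values for 5 subjects (not the
# study's data): one value per subject under each sleep condition.
restricted = [62.0, 58.5, 64.2, 60.1, 61.7]  # sleep-restricted week
healthy = [57.3, 56.0, 59.8, 58.2, 58.9]     # healthy-sleep week
t, df = paired_t(restricted, healthy)  # large positive t => higher GI when restricted
```

A two-sided p-value would then be read from the t distribution with df degrees of freedom; in practice `scipy.stats.ttest_rel` performs both steps at once.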

  2. Experimental validation of predicted cancer genes using FRET

    NASA Astrophysics Data System (ADS)

    Guala, Dimitri; Bernhem, Kristoffer; Ait Blal, Hammou; Jans, Daniel; Lundberg, Emma; Brismar, Hjalmar; Sonnhammer, Erik L. L.

    2018-07-01

    Huge amounts of data are generated in genome-wide experiments designed to investigate diseases with complex genetic causes. Following up all potential leads produced by such experiments is currently cost-prohibitive and time-consuming. Gene prioritization tools alleviate these constraints by directing further experimental efforts towards the most promising candidate targets. Recently a gene prioritization tool called MaxLink was shown to outperform other widely used state-of-the-art prioritization tools in a large-scale in silico benchmark. An experimental validation of predictions made by MaxLink has, however, been lacking. In this study we used Fluorescence Resonance Energy Transfer, an established experimental technique for detection of protein-protein interactions, to validate potential cancer genes predicted by MaxLink. Our results provide confidence in the use of MaxLink for selection of new targets in the battle against polygenic diseases.

  3. Solar-Diesel Hybrid Power System Optimization and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Jacobus, Headley Stewart

    As of 2008, 1.46 billion people, or 22 percent of the world's population, were without electricity. Many of these people live in remote areas where decentralized generation is the only method of electrification. Most mini-grids are powered by diesel generators, but new hybrid power systems are becoming a reliable way to incorporate renewable energy while also reducing total system cost. This thesis quantifies the measurable operational costs for an experimental hybrid power system in Sierra Leone. Two software programs, Hybrid2 and HOMER, are used during the system design and subsequent analysis. Experimental data from the installed system are used to validate the two programs and to quantify the savings created by each component within the hybrid system. This thesis bridges the gap between design optimization studies, which frequently lack subsequent validation, and experimental hybrid system performance studies.

  4. Use of the FDA nozzle model to illustrate validation techniques in computational fluid dynamics (CFD) simulations

    PubMed Central

    Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.

    2017-01-01

    A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides a well-defined acceptance criterion, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing model validity. The validation criterion developed under the threshold approach is not only a function of the Comparison Error, E (the difference between experiments and simulations), but also takes into account the risk to patient safety posed by E. The method is applicable to scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood-contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate whether the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test. However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that, relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. For Re = 6500, however, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in the experiments, the simulations, and the threshold, or by increasing the sample size of the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility when evaluating medical devices. PMID:28594889
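The threshold-based criterion above weighs the comparison error |S-D| against the margin between the simulation and the safety threshold, |Threshold-S|. A simplified pointwise sketch of that idea (the paper itself applies a Student's t-test with quantified uncertainties; the shear-stress numbers below are hypothetical, not values from the study):

```python
def error_within_margin(sim, exp, threshold):
    """For each location, check that the comparison error |S - D| is
    smaller than the margin to the safety threshold |threshold - S|."""
    error = [abs(s - d) for s, d in zip(sim, exp)]
    margin = [abs(threshold - s) for s in sim]
    return [e < m for e, m in zip(error, margin)]

# Hypothetical viscous shear-stress samples (Pa) at three locations,
# with a nominal hemolysis threshold.
sim_stress = [120.0, 260.0, 340.0]
exp_stress = [135.0, 240.0, 390.0]
ok = error_within_margin(sim_stress, exp_stress, threshold=600.0)
```

When a location's stress approaches the threshold, the margin shrinks and the same comparison error can fail the check, mirroring the Re = 6500 outcome described above.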


  6. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    NASA Astrophysics Data System (ADS)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code, developed by Sandia and NREL and funded by the US DOE, for modeling the performance of wave energy converters in operational waves. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and was completed in Fall 2015. Phase 2 is focused on WEC performance and is scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields and motions in 6 DOF, along with multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time series, state-space radiation, and WEC-Sim compatibility with BEMIO (an open source AQWA/WAMIT/NEMOH coefficient parser).

  7. Searching for Feedbacks between Land-use/Land-cover Changes and the Water Budget in Complex Terrain at the Dry Creek Experimental Watershed in Idaho, USA

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Engdahl, N.

    2017-12-01

    Proactive management to improve water resource sustainability is often limited by a lack of understanding about the hydrological consequences of human activities and climate induced land use and land cover (LULC) change. Changes in LULC can alter runoff, soil moisture, and evapotranspiration, but these effects are complex and traditional modeling techniques have had limited successes in realistically simulating the relevant feedbacks. Recent studies have investigated the coupled interactions but typically do so at coarse resolutions with simple topographic settings, so it is unclear if the previous conclusions remain valid in the steep, complex terrains that dominate the western USA. This knowledge gap was explored with a series of integrated hydrologic simulations based on the Dry Creek Experimental Watershed (DCEW) in southwestern Idaho, USA, using the ParFlow.CLM model. The DCEW has extensive monitoring data that allowed for a direct calibration and validation of the base-case simulation, which is not commonly done with integrated models. The effects of LULC change on the hydrologic and water budgets were then assessed at two grid resolutions (20m and 40m) under four LULC scenarios: 1) current LULC; 2) LULC change from a small but gradual decrease in potential recharge (PR); 3) LULC change from a large but rapid decrease in PR; and 4) LULC change from a large but gradual decrease in PR. The results show that the methods used for terrain processing and the grid resolution can both heavily impact the simulation results and that LULC change can significantly alter the relative amounts of groundwater storage and runoff.

  8. Validation Results for LEWICE 3.0

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    2005-01-01

    A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report presents results from version 3.0 of this software, which is called LEWICE. This version differs from previous releases in that it incorporates additional thermal analysis capabilities, a pneumatic boot model, interfaces to computational fluid dynamics (CFD) flow solvers, and an empirical model for the supercooled large droplet (SLD) regime. An extensive quantitative comparison of the results against the database of ice shapes and collection efficiencies generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. The complete set of data used for this comparison will eventually be available in a contractor report. This paper shows the differences in collection efficiency between LEWICE 3.0 and experimental data. Due to the large amount of validation data available, a separate report is planned for ice shape comparison. This report first describes the LEWICE 3.0 model for water collection. A semi-empirical approach was used to incorporate first-order physical effects of large droplet phenomena into the icing software. Comparisons are then made to every single-element two-dimensional case in the water collection database. Each condition was run using the following five assumptions: 1) potential flow, no splashing; 2) potential flow, no splashing, with 21-bin drop size distributions and a lift correction (angle of attack adjustment); 3) potential flow, with splashing; 4) Navier-Stokes, no splashing; and 5) Navier-Stokes, with splashing. Quantitative comparisons are shown for impingement limit, maximum water catch, and total collection efficiency. The results show that the predictions are within the accuracy limits of the experimental data for the majority of cases.

  9. Computational-experimental approach to drug-target interaction mapping: A case study on kinase inhibitors

    PubMed Central

    Ravikumar, Balaguru; Parri, Elina; Timonen, Sanna; Airola, Antti; Wennerberg, Krister

    2017-01-01

    Due to the relatively high costs and labor required for experimental profiling of the full target space of chemical compounds, various machine learning models have been proposed as cost-effective means to advance this process by predicting the most potent compound-target interactions for subsequent verification. However, most model predictions lack direct experimental validation in the laboratory, making their practical benefits for drug discovery or repurposing applications largely unknown. Here, we therefore introduce and carefully test a systematic computational-experimental framework for the prediction and pre-clinical verification of drug-target interactions, using a well-established kernel-based regression algorithm as the prediction model. To evaluate its performance, we first predicted unmeasured binding affinities in a large-scale kinase inhibitor profiling study, and then experimentally tested 100 compound-kinase pairs. The relatively high correlation of 0.77 (p < 0.0001) between the predicted and measured bioactivities supports the potential of the model for filling the experimental gaps in existing compound-target interaction maps. Further, we subjected the model to the more challenging task of predicting target interactions for a new candidate drug compound that lacks prior binding profile information. As a specific case study, we used tivozanib, an investigational VEGF receptor inhibitor with a currently unknown off-target profile. Among 7 kinases with high predicted affinity, we experimentally validated 4 new off-targets of tivozanib, namely the Src-family kinases FRK and FYN A, the non-receptor tyrosine kinase ABL1, and the serine/threonine kinase SLK. Our subsequent experimental validation protocol effectively avoids any possible information leakage between the training and validation data, and therefore enables rigorous model validation for practical applications. These results demonstrate that the kernel-based modeling approach offers practical benefits for probing novel insights into the mode of action of investigational compounds, and for the identification of new target selectivities for drug repurposing applications. PMID:28787438
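The reported agreement between predicted and measured bioactivities is a correlation coefficient. A self-contained sketch of the Pearson correlation on made-up binding-affinity values (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical predicted vs. measured affinities (pKd-like units).
predicted = [6.1, 7.4, 5.0, 8.2, 6.8]
measured = [6.4, 7.0, 5.3, 7.9, 6.5]
r = pearson_r(predicted, measured)
```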

  10. A method for simulating the entire leaking process and calculating the liquid leakage volume of a damaged pressurized pipeline.

    PubMed

    He, Guoxi; Liang, Yongtu; Li, Yansong; Wu, Mengyu; Sun, Liying; Xie, Cheng; Li, Feng

    2017-06-15

    The accidental leakage of long-distance pressurized oil pipelines is a major source of risk, capable of causing extensive damage to human health and the environment. However, the complexity of the leaking process, with its complex boundary conditions, makes the leakage volume difficult to calculate. In this study, the leaking process is divided into four stages based on the strength of the transient pressure, and three models are established to calculate the leaking flowrate and volume. First, a negative pressure wave propagation-attenuation model is applied to calculate the sizes of orifices. Second, a transient oil leaking model, consisting of continuity, momentum conservation, energy conservation and orifice flow equations, is built to calculate the leakage volume. Third, a steady-state oil leaking model is employed to calculate the leakage after valves and pumps shut down. Moreover, sensitive factors that affect the leak coefficient of orifices and the leakage volume are analyzed to determine the most influential one. To validate the numerical simulation, two types of leakage test with different sizes of leakage holes were conducted on Sinopec product pipelines. Further validation was carried out using commercial software to supplement the experimental results. Thus, the leaking process under different leaking conditions is described and analyzed. Copyright © 2017 Elsevier B.V. All rights reserved.
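The orifice flow equation used in the transient model is, in its textbook steady-state form, Q = Cd · A · sqrt(2·ΔP/ρ). A sketch with hypothetical numbers (the paper's full model couples this with continuity, momentum and energy conservation equations, which are not reproduced here):

```python
import math

def orifice_leak_rate(cd, area_m2, dp_pa, rho_kg_m3):
    """Volumetric outflow through an orifice (textbook steady-state form):
    Q = Cd * A * sqrt(2 * dP / rho)."""
    return cd * area_m2 * math.sqrt(2.0 * dp_pa / rho_kg_m3)

# Hypothetical leak: 10 mm hole, discharge coefficient 0.62,
# 0.5 MPa overpressure, oil density 840 kg/m^3.
area = math.pi * (0.010 / 2.0) ** 2
q = orifice_leak_rate(0.62, area, 0.5e6, 840.0)  # volumetric rate in m^3/s
```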

  11. Driving-forces model on individual behavior in scenarios considering moving threat agents

    NASA Astrophysics Data System (ADS)

    Li, Shuying; Zhuang, Jun; Shen, Shifei; Wang, Jia

    2017-09-01

    The individual behavior model is a contributory factor in improving the accuracy of agent-based simulation in different scenarios. However, few studies have considered moving threat agents, which often occur in terrorist attacks carried out by attackers with close-range weapons (e.g., sword, stick). At the same time, many existing behavior models lack validation from cases or experiments. This paper builds a new individual behavior model based on seven behavioral hypotheses. The driving-forces model is an extension of the classical social force model to scenarios that include moving threat agents. An experiment was conducted to validate the key components of the model. The model was then compared with an advanced Elliptical Specification II social force model by calculating the fitting errors between the simulated and experimental trajectories, and was applied to simulate a specific circumstance. Our results show that the driving-forces model reduced the fitting error by an average of 33.9% and its standard deviation by an average of 44.5%, which indicates the accuracy and stability of the model in the studied situation. The new driving-forces model could be used to simulate individual behavior when analyzing the risk of specific scenarios using agent-based simulation methods, such as risk analysis of close-range terrorist attacks in public places.
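The 33.9% figure above comes from fitting errors between simulated and experimental trajectories. A minimal sketch of such a comparison, using RMSE over made-up 2D trajectory points (not the experiment's data):

```python
import math

def trajectory_rmse(sim, exp):
    """RMSE between simulated and experimental 2D trajectories,
    given as lists of (x, y) points sampled at the same instants."""
    sq = [(sx - ex) ** 2 + (sy - ey) ** 2
          for (sx, sy), (ex, ey) in zip(sim, exp)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical trajectories (metres).
observed = [(0.0, 0.0), (1.0, 0.4), (2.0, 0.9)]
model_a = [(0.1, 0.0), (1.1, 0.5), (2.1, 1.0)]   # closer fit
model_b = [(0.3, 0.1), (1.3, 0.6), (2.3, 1.2)]   # looser fit
err_a = trajectory_rmse(model_a, observed)
err_b = trajectory_rmse(model_b, observed)
reduction = 100.0 * (err_b - err_a) / err_b  # % error reduction of A vs. B
```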

  12. Dense 3D Face Alignment from 2D Video for Real-Time Use

    PubMed Central

    Jeni, László A.; Cohn, Jeffrey F.; Kanade, Takeo

    2018-01-01

    To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person’s face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction, extension to multi-view reconstruction, temporal integration for videos and 3D head-pose estimation. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org. PMID:29731533

  13. Advanced numerical models and material characterisation techniques for composite materials subject to impact and shock wave loading

    NASA Astrophysics Data System (ADS)

    Clegg, R. A.; White, D. M.; Hayhurst, C.; Ridel, W.; Harwick, W.; Hiermaier, S.

    2003-09-01

    The development and validation of an advanced material model for orthotropic materials, such as fibre reinforced composites, is described. The model is specifically designed to facilitate the numerical simulation of impact and shock wave propagation through orthotropic materials and the prediction of subsequent material damage. Initial development of the model concentrated on correctly representing shock wave propagation in composite materials under high and hypervelocity impact conditions [1]. This work has now been extended to concentrate on the development of improved numerical models and material characterisation techniques for the prediction of damage, including residual strength, in fibre reinforced composite materials. The work is focussed on Kevlar-epoxy; however, materials such as CFRP are also being considered. The paper describes our most recent activities in relation to the implementation of advanced material modelling options in this area. These enable refined non-linear directional characteristics of composite materials to be modelled, in addition to the correct thermodynamic response under shock wave loading. The numerical work is backed by an extensive experimental programme covering a wide range of static and dynamic tests to facilitate derivation of model input data and to validate the predicted material response. Finally, the capability of the developing composite material model is discussed in relation to a hypervelocity impact problem.

  14. Estimation of ground reaction forces and joint moments on the basis of plantar pressure insoles and wearable sensors for joint angle measurement.

    PubMed

    Ostaszewski, Michal; Pauk, Jolanta

    2018-05-16

    Gait analysis is a useful tool that medical staff use to support clinical decision making, but there is still an urgent need for low-cost and unobtrusive mobile health monitoring systems. The goal of this study was twofold. Firstly, a wearable sensor system composed of plantar pressure insoles and wearable sensors for joint angle measurement was developed. Secondly, the accuracy of the system in the measurement of ground reaction forces and joint moments was examined. The measurements included joint angles and plantar pressure distribution. To validate the wearable sensor system and examine the effectiveness of the proposed method for gait analysis, an experimental study on ten volunteer subjects was conducted. The accuracy of the measured ground reaction forces and joint moments was validated against results obtained from a reference motion capture system. Ground reaction forces and joint moments measured by the wearable sensor system showed a root mean square error of 1% for minimum GRF and 27.3% for knee extension moment, and the correlation coefficient exceeded 0.9, in comparison with the stationary motion capture system. The study suggests that the wearable sensor system could be recommended both for research and for clinical applications outside a typical gait laboratory.
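Agreement with a gold-standard system of the kind reported above is often summarized as an RMSE expressed as a percentage of the reference signal. A sketch with hypothetical vertical ground-reaction-force samples (not the study's data):

```python
import math

def nrmse_percent(measured, reference):
    """RMSE between a test signal and a reference signal, normalized
    by the reference peak and expressed in percent."""
    n = len(measured)
    rmse = math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference)) / n)
    return 100.0 * rmse / max(abs(r) for r in reference)

# Hypothetical vertical GRF samples (N): insole estimate vs. a
# motion-capture-derived reference.
insole = [702.0, 815.0, 760.0, 698.0]
mocap = [710.0, 820.0, 755.0, 705.0]
err_pct = nrmse_percent(insole, mocap)
```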

  15. [Finite Element Modelling of the Eye for the Investigation of Accommodation].

    PubMed

    Martin, H; Stachs, O; Guthoff, R; Grabow, N

    2016-12-01

    Background: Accommodation research increasingly uses engineering methods. This article presents the use of the finite element method in accommodation research. Material and Methods: Geometry, material data and boundary conditions are prerequisites for the application of the finite element method. Published data on geometry and materials are reviewed. It is shown how boundary conditions are important and how they influence the results. Results: Two-dimensional and three-dimensional models of the anterior chamber of the eye are presented. With simple two-dimensional models, it is shown that realistic results for the accommodation amplitude can always be achieved. More complex three-dimensional models of the accommodation mechanism - including the ciliary muscle - require further investigations of the material data and of the morphology of the ciliary muscle if they are to achieve realistic results for accommodation. Discussion and Conclusion: The efficiency and the limitations of the finite element method are especially clear for accommodation. Application of the method requires extensive preparation, including acquisition of geometric and material data and experimental validation. However, a validated model can be used as a basis for parametric studies, by systematically varying material data and geometric dimensions. This allows systematic investigation of how essential input parameters influence the results. Georg Thieme Verlag KG Stuttgart · New York.

  16. A Reference Method for Measuring Emissions of SVOCs in ...

    EPA Pesticide Factsheets

    Semivolatile organic compounds (SVOCs) are indoor air pollutants that may have significant adverse effects on human health, and the emission of SVOCs from building materials and consumer products is of growing concern. Few chamber studies have been conducted, due to the challenges associated with SVOC analysis and the lack of validation procedures, so there is an urgent need for a reliable and accurate chamber test method to verify the performance of these measurements. A reference method employing a specially-designed chamber and experimental protocol has been developed and is undergoing extensive evaluation. A pilot interlaboratory study (ILS) has been conducted with five laboratories performing chamber tests under identical conditions. Results showed inter-laboratory variations of 25% for SVOC emission rates, with greater agreement observed between intra-laboratory measurements for most of the participating laboratories. The measured concentration profiles also compared reasonably well to the mechanistic model, demonstrating the feasibility of the proposed reference method to independently assess laboratory performance and validate SVOC emission tests. There is an urgent need for improved understanding of the measurement uncertainties associated with SVOC emission testing. The creation of specially-designed chambers and well-characterized materials serves as a critical prerequisite for improving the procedure used to measure SVOCs emitted from indoor
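The 25% inter-laboratory variation cited above is a coefficient of variation across laboratories. A minimal sketch on hypothetical emission rates (not the ILS data):

```python
import math

def cv_percent(values):
    """Coefficient of variation (sample standard deviation / mean), in %."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100.0 * sd / mean

# Hypothetical SVOC emission rates (ug/m^2/h) reported by five laboratories.
rates = [9.2, 11.5, 10.1, 12.8, 8.9]
cv = cv_percent(rates)
```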

  17. Elucidation of the binding mechanism of renin using a wide array of computational techniques and biological assays.

    PubMed

    Tzoupis, Haralambos; Leonis, Georgios; Avramopoulos, Aggelos; Reis, Heribert; Czyżnikowska, Żaneta; Zerva, Sofia; Vergadou, Niki; Peristeras, Loukas D; Papavasileiou, Konstantinos D; Alexis, Michael N; Mavromoustakos, Thomas; Papadopoulos, Manthos G

    2015-11-01

    We investigate the binding mechanism in renin complexes, involving three drugs (remikiren, zankiren and enalkiren) and one lead compound, which was selected after screening the ZINC database. For this purpose, we used ab initio methods (the effective fragment potential, the variational perturbation theory, the energy decomposition analysis, the atoms-in-molecules), docking, molecular dynamics, and the MM-PBSA method. A biological assay for the lead compound has been performed to validate the theoretical findings. Importantly, binding free energy calculations for the three drug complexes are within 3 kcal/mol of the experimental values, thus further justifying our computational protocol, which has been validated through previous studies on 11 drug-protein systems. The main elements of the discovered mechanism are: (i) minor changes are induced to renin upon drug binding, (ii) the three drugs form an extensive network of hydrogen bonds with renin, whilst the lead compound presented diminished interactions, (iii) ligand binding in all complexes is driven by favorable van der Waals interactions and the nonpolar contribution to solvation, while the lead compound is associated with diminished van der Waals interactions compared to the drug-bound forms of renin, and (iv) the environment (H2O/Na(+)) has a small effect on the renin-remikiren interaction. Copyright © 2015 Elsevier Inc. All rights reserved.
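
    The MM-PBSA bookkeeping underlying the binding free energy calculations above can be sketched compactly: the binding free energy is the free energy of the complex minus those of the free receptor and ligand, each decomposed into molecular-mechanics (van der Waals and electrostatic) terms plus polar (PB) and nonpolar (SA) solvation contributions. All numbers below are hypothetical, and the entropy term is omitted for brevity.

    ```python
    # Hedged sketch of the MM-PBSA decomposition (hypothetical values).
    def mmpbsa_energy(e_vdw, e_elec, g_pb, g_sa):
        """Total free energy of one species (entropy term omitted)."""
        return e_vdw + e_elec + g_pb + g_sa

    # Hypothetical per-species terms in kcal/mol
    g_complex = mmpbsa_energy(-540.0, -310.0, 420.0, -60.0)
    g_receptor = mmpbsa_energy(-480.0, -295.0, 400.0, -52.0)
    g_ligand = mmpbsa_energy(-15.0, -10.0, 35.0, -4.0)

    dg_bind = g_complex - g_receptor - g_ligand
    print(f"binding free energy: {dg_bind:.1f} kcal/mol")
    ```

    In practice these terms are averaged over molecular dynamics snapshots rather than taken from single structures, which is what allows the reported agreement within 3 kcal/mol of experiment to be assessed.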

  18. Facial expression: An under-utilised tool for the assessment of welfare in mammals.

    PubMed

    Descovich, Kris A; Wathan, Jennifer; Leach, Matthew C; Buchanan-Smith, Hannah M; Flecknell, Paul; Farningham, David; Vick, Sarah-Jane

    2017-01-01

    Animal welfare is a key issue for industries that use or impact upon animals. The accurate identification of welfare states is particularly relevant to the field of bioscience, where the 3Rs framework encourages refinement of experimental procedures involving animal models. The assessment and improvement of welfare states in animals depends on reliable and valid measurement tools. Behavioral measures (activity, attention, posture and vocalization) are frequently used because they are immediate and non-invasive, however no single indicator can yield a complete picture of the internal state of an animal. Facial expressions are extensively studied in humans as a measure of psychological and emotional experiences but are infrequently used in animal studies, with the exception of emerging research on pain behavior. In this review, we discuss current evidence for facial representations of underlying affective states, and how communicative or functional expressions can be useful within welfare assessments. Validated tools for measuring facial movement are outlined, and the potential of expressions as honest signals is discussed, alongside other challenges and limitations to facial expression measurement within the context of animal welfare. We conclude that facial expression determination in animals is a useful but underutilized measure that complements existing tools in the assessment of welfare.

  19. Blunt-Body Aerothermodynamic Database from High-Enthalpy CO2 Testing in an Expansion Tunnel

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Prabhu, Dinesh K.; Maclean, Matthew; Dufrene, Aaron

    2016-01-01

    An extensive database of heating, pressure, and flow field measurements on a 70-deg sphere-cone blunt body geometry in high-enthalpy, CO2 flow has been generated through testing in an expansion tunnel. This database is intended to support development and validation of computational tools and methods to be employed in the design of future Mars missions. The test was conducted in an expansion tunnel in order to avoid uncertainties in the definition of free stream conditions noted in previous studies performed in reflected shock tunnels. Data were obtained across a wide range of test velocity/density conditions that produced various physical phenomena of interest, including laminar and transitional/turbulent boundary layers, non-reacting to completely dissociated post-shock gas compositions, and shock-layer radiation. Flow field computations were performed at the test conditions and comparisons were made with the experimental data. Based on these comparisons, it is concluded that computational uncertainties on surface heating and pressure for laminar, reacting-gas environments can be reduced to +/-10% and +/-5%, respectively. However, for flows with turbulence and shock-layer radiation, there were not sufficient validation-quality data obtained in this study to make any conclusions with respect to uncertainties, which highlights the need for further research in these areas.

  20. Instrumented urethral catheter and its ex vivo validation in a sheep urethra

    NASA Astrophysics Data System (ADS)

    Ahmadi, Mahdi; Rajamani, Rajesh; Timm, Gerald; Sezen, Serdar

    2017-03-01

    This paper presents the design and fabrication of an instrumented catheter for instantaneous measurement of distributed urethral pressure profiles. Since the catheter enables a new type of urological measurement, a process for accurate ex vivo validation of the catheter is developed. A flexible sensor strip is first fabricated with nine pressure sensors and integrated electronic pads for an associated sensor IC chip. The flexible sensor strip and associated IC chip are assembled on a 7 Fr Foley catheter. A sheep bladder and urethra are extracted and used in an ex vivo setup for verification of the developed instrumented catheter. The bladder and urethra are suspended in a test rig, and pressure cuffs are placed to apply known static and dynamic pressures around the urethra. A significant challenge in the performance of the sensor system is the presence of parasitics that introduce large bias and drift errors in the capacitive sensor signals. An algorithm based on the use of reference parasitic transducers is used to compensate for the parasitics. Extensive experimental results verify that the developed compensation method works effectively. Results on pressure variation profiles circumferentially around the urethra and longitudinally along the urethra are presented. The developed instrumented catheter will be useful in improved urodynamics to more accurately diagnose the source of urinary incontinence in patients.
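
    The reference-transducer idea described above can be sketched as follows: a reference sensor is exposed to the same parasitic bias and drift as the active sensors but not to the applied pressure, so subtracting its (optionally scaled) signal isolates the pressure component. The signals and gain below are hypothetical placeholders, not the paper's data or algorithm details.

    ```python
    # Sketch of reference-transducer compensation (hypothetical signals).
    def compensate(active, reference, gain=1.0):
        """Subtract the scaled reference (parasitic) channel sample-by-sample."""
        return [a - gain * r for a, r in zip(active, reference)]

    # Hypothetical raw capacitance-derived readings with a common drift
    active = [5.0, 5.4, 5.9, 6.5, 7.2]     # pressure + parasitic drift
    reference = [1.0, 1.2, 1.5, 1.9, 2.4]  # parasitic drift only
    corrected = [round(v, 2) for v in compensate(active, reference)]
    print(corrected)
    ```

    In a real system the gain would be calibrated per channel, since the parasitic coupling generally differs between the reference and active transducers.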
